recorder = monkeypatch.function(".......")
-------------------------------------------------------------
tags: nice feature
Like monkeypatch.replace but sets a mock-like call recorder:
recorder = monkeypatch.function("os.path.abspath")
recorder.set_return("/hello")
os.path.abspath("hello")
call, = recorder.calls
assert call.args.path == "hello"
assert call.returned == "/hello"
...
Unlike mock, ``args.path`` acts on the parsed, auto-spec'ed signature of
``os.path.abspath``, so it is independent of whether the calling side used
``os.path.abspath(path=...)`` or ``os.path.abspath('positional')``.
refine parametrize API
-------------------------------------------------------------
tags: critical feature
extend metafunc.parametrize to directly support indirection, example:
def setupdb(request, val):
    # set up the "db" resource based on the test request and the
    # value passed in to parametrize; setupfunc is called once for
    # each such value.
    # you may use request.addfinalizer() or request.cached_setup ...
    return dynamic_setup_database(val)
@pytest.mark.parametrize("db", ["pg", "mysql"], setupfunc=setupdb)
def test_heavy_functional_test(db):
...
There would be no need to write or explain funcarg factories and
their special __ syntax.
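For contrast, a sketch of what the indirection requires today: a generate
hook plus a funcarg factory with the special ``__`` naming
(``dynamic_setup_database`` is the hypothetical helper from the example
above)::

    def pytest_generate_tests(metafunc):
        if "db" in metafunc.funcargnames:
            for backend in ["pg", "mysql"]:
                metafunc.addcall(id=backend, param=backend)

    def pytest_funcarg__db(request):
        # cached_setup provides scoping and teardown for the resource
        return request.cached_setup(
            setup=lambda: dynamic_setup_database(request.param),
            scope="session")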
The examples and improvements should also show how to apply the parametrize
decorator to a class, to a module or even to a directory. For the directory
case a conftest.py content like this::
pytestmark = [
    pytest.mark.parametrize_setup("db", ...),
]
probably makes sense in order to keep the declarative nature. This mirrors
the marker mechanism as it works on a test module, but applies it at
directory scale.
When doing larger-scoped parametrization it probably becomes necessary
to allow a parametrization to be ignored if the corresponding parameter is
not used (currently any parametrized argument that is not present in a
function's signature will cause a ValueError). Example:
@pytest.mark.parametrize("db", ..., mustmatch=False)
means to not raise an error but simply ignore the parametrization
if the signature of a decorated function does not match. XXX is it
not sufficient to always allow non-matches?
allow parametrized attributes on classes
--------------------------------------------------
tags: wish 2.4
example:
@pytest.mark.parametrize_attr("db", setupfunc, [1,2,3], scope="class")
@pytest.mark.parametrize_attr("tmp", setupfunc, scope="...")
class TestMe:
def test_hello(self):
    ...  # access self.db
this would run the test_hello() function three times with three
different values for self.db. This could also work with unittest/nose-style
tests, i.e. it leverages existing test suites without needing
to rewrite them. Together with the previously mentioned setup_test(),
maybe the setupfunc could even be omitted?
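A sketch of how the idea can be approximated with class-scoped parametrized
fixtures (assuming a pytest version with the fixture/autouse machinery;
``parametrize_attr`` itself stays hypothetical)::

    import pytest

    @pytest.fixture(scope="class", params=[1, 2, 3], autouse=True)
    def db(request):
        # request.cls is the test class, so the value shows up as self.db
        request.cls.db = request.param

    class TestMe:
        def test_hello(self):
            assert self.db in (1, 2, 3)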
optimizations
---------------------------------------------------------------
tags: 2.4 core
- look at ihook optimization such that all lookups for
hooks relating to the same fspath are cached.
fix start/finish partial finalization problem
---------------------------------------------------------------
tags: bug core
if a configure/runtest_setup/sessionstart/... hook invocation partially
fails, sessionfinish is not called. Each hook implementation
had better be responsible for registering a cleanup/finalizer
appropriately to avoid this issue. Moreover/alternatively, we could
record which implementations of a hook succeeded and only call their
teardown.
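A sketch of the register-early pattern for hook implementations;
``acquire_resource()`` is hypothetical, ``config.add_cleanup`` is the
existing registration API (assuming a reasonably recent pytest)::

    def pytest_configure(config):
        resource = acquire_resource()  # hypothetical expensive setup
        # register the teardown immediately, so it runs even if a later
        # pytest_configure implementation or the session itself fails
        config.add_cleanup(resource.close)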
relax requirement to have tests/testing contain an __init__
----------------------------------------------------------------
tags: feature
bb: http://bitbucket.org/hpk42/py-trunk/issue/64
A local test run of a "tests" directory may work
but a remote one may fail because the tests directory
does not contain an "__init__.py". Either give
an error or make it work without the __init__.py,
i.e. port nose's logic for unloading a test module.
customize test function collection
-------------------------------------------------------
tags: feature
- introduce pytest.mark.nocollect for not considering a function for
  test collection at all. maybe also introduce a pytest.mark.test to
  explicitly mark a function as a test. Look up JUnit's ways
  of tagging tests.
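Note that an opt-out spelling already exists: pytest honours a
``__test__ = False`` attribute on classes (and, in recent versions, on
plain functions), so the marker would mainly add a nicer spelling::

    class TestHelper:
        __test__ = False  # not collected despite the Test prefix

    def test_template():
        pass
    test_template.__test__ = False  # honoured by recent pytest versions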
introduce pytest.mark.importorskip
-------------------------------------------------------
tags: feature
in addition to the imperative pytest.importorskip also introduce
a pytest.mark.importorskip so that the test count is more correct.
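For comparison, the existing imperative form next to a possible spelling of
the proposed marker (the marker itself is hypothetical)::

    import pytest

    # imperative (existing API): skips during import of the test module
    lxml = pytest.importorskip("lxml")

    # declarative (proposed): the collector could count and report the
    # test as skipped instead of never seeing it
    @pytest.mark.importorskip("lxml")
    def test_parse_html():
        ...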
introduce pytest.mark.platform
-------------------------------------------------------
tags: feature
Introduce nice-to-spell platform-skipping, examples:
@pytest.mark.platform("python3")
@pytest.mark.platform("not python3")
@pytest.mark.platform("win32 and not python3")
@pytest.mark.platform("darwin")
@pytest.mark.platform("not (jython and win32)")
@pytest.mark.platform("not (jython and win32)", xfail=True)
etc. The idea is to allow Python expressions which can operate
on common spellings for operating systems and Python
interpreter versions.
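A sketch of how a plugin could implement the marker by evaluating the
expression against a namespace of platform spellings (the hook and skip
calls are existing API; the namespace and marker name are the proposal's,
and the ``xfail=True`` variant is left out)::

    import sys
    import pytest

    def pytest_runtest_setup(item):
        marker = item.get_closest_marker("platform")
        if marker is None:
            return
        namespace = {
            "win32": sys.platform == "win32",
            "darwin": sys.platform == "darwin",
            "linux": sys.platform.startswith("linux"),
            "jython": sys.platform.startswith("java"),
            "python3": sys.version_info[0] == 3,
        }
        expr = marker.args[0]  # e.g. "win32 and not python3"
        if not eval(expr, {}, namespace):
            pytest.skip("platform mismatch: %r" % expr)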
pytest.mark.xfail signature change
-------------------------------------------------------
tags: feature
change to pytest.mark.xfail(reason, condition=None)
to better match the meaning of the word. It also signals
better that there is always some kind of implementation-related
reason that can be formulated.
Compatibility? how to introduce a new name/keep compat?
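To illustrate (the current form is existing API, the new one a hypothetical
spelling)::

    import sys
    import pytest

    # today: condition first, reason as a keyword
    @pytest.mark.xfail(sys.platform == "win32", reason="fd inheritance")
    def test_spawn():
        ...

    # proposed: reason always comes first, condition becomes optional
    @pytest.mark.xfail("fd inheritance", sys.platform == "win32")
    def test_spawn2():
        ...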
allow non-intrusively applying skips/xfail/marks
---------------------------------------------------
tags: feature
use case: mark a module or directory structure
to be skipped on certain platforms (i.e. no import
attempt will be made).
Consider introducing a hook/mechanism that allows applying marks
from conftests or plugins. (See extended parametrization.)
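A sketch of the conftest/plugin mechanism built on existing hooks;
``pytest_collection_modifyitems`` and ``item.add_marker`` are real, the
selection rule is made up::

    import sys
    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            # example rule: tests under a win32/ directory only run
            # on windows, without touching the test files themselves
            if "win32/" in item.nodeid and sys.platform != "win32":
                item.add_marker(pytest.mark.skip(reason="win32 only"))

Note this still imports the modules during collection; avoiding the import
attempt entirely would need a collect-level hook such as
pytest_ignore_collect.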
explicit referencing of conftest.py files
-----------------------------------------
tags: feature
allow naming conftest.py files (in sub directories) that should
be imported early, so that their command line options are included.
improve central pytest ini file
-------------------------------
tags: feature
introduce more declarative configuration options:
- (to-be-collected test directories)
- required plugins
- test func/class/file matching patterns
- skip/xfail (non-intrusive)
- allow the same configuration via pytest.ini, tox.ini and setup.cfg
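Several of these points later grew real ini spellings; a sketch of such a
central file (option names as in recent pytest versions)::

    [pytest]
    # to-be-collected test directories
    testpaths = tests
    # required plugins (collection errors out if they are missing)
    required_plugins = pytest-xdist
    # test file/class/function matching patterns
    python_files = test_*.py
    python_classes = Test
    python_functions = test_*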
new documentation
----------------------------------
tags: feature
- a pytest logo
- examples for unittest or functional testing
- resource management for functional testing
- patterns: page object
have imported module mismatch honour relative paths
--------------------------------------------------------
tags: bug
With 1.1.1, pytest fails at least on Windows if an import
is relative and is compared against an absolute conftest.py
path. Normalize paths before comparing.
consider globals: pytest.ensuretemp and config
--------------------------------------------------------------
tags: experimental-wish
consider deprecating pytest.ensuretemp and pytest.config
to further reduce pytest globality. Also consider
having pytest.config and ensuretemp coming from
a plugin rather than being there from the start.
consider pytest_addsyspath hook
-----------------------------------------
tags: wish
pytest could call a new pytest_addsyspath() in order to systematically
allow manipulation of sys.path and to inhibit it via --no-addsyspath
in order to more easily run against installed packages.
Alternatively it could also be done via the config object
and pytest_configure.
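The config-object alternative is already expressible in a conftest.py today
(a sketch; the src/ layout is an assumption)::

    import os
    import sys

    def pytest_configure(config):
        # prepend the project's src/ directory so tests import the
        # in-tree package rather than an installed one
        sys.path.insert(0, os.path.join(str(config.rootdir), "src"))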
deprecate global pytest.config usage
----------------------------------------------------------------
tags: feature
pytest.ensuretemp and pytest.config are probably the last
objects containing global state. Often using them is not
necessary. This is about trying to get rid of them, i.e.
deprecating them and checking with PyPy's usages as well
as others.
remove deprecated bits in collect.py
-------------------------------------------------------------------
tags: feature
In an effort to further simplify code, review and remove deprecated bits
in collect.py. Probably good:
- inline consider_file/dir methods, no need to have them
subclass-overridable because of hooks
implement fslayout decorator
---------------------------------
tags: feature
Improve the way tests can work with pre-made examples,
keeping the layout close to the test function:
@pytest.mark.fslayout("""
conftest.py:
# empty
tests/
test_%(NAME)s: # becomes test_run1.py
def test_function(self):
pass
""")
def test_run(pytester, fslayout):
p = fslayout.findone("test_*.py")
result = pytester.runpytest(p)
assert result.ret == 0
assert result.passed == 1
Another idea is to allow defining a full scenario, including the run,
in one content string::
runscenario("""
test_{TESTNAME}.py:
import pytest
@pytest.mark.xfail
def test_that_fails():
assert 0
@pytest.mark.skipif("True")
def test_hello():
pass
conftest.py:
import pytest
def pytest_runtest_setup(item):
pytest.skip("abc")
runpytest -rsxX
*SKIP*{TESTNAME}*
*1 skipped*
""")
This could be run with at least three different ways of invoking pytest:
through the shell, through "python -m pytest", and inlined. As the inlined
variant would be the fastest, it could be run first (or in a "--fast" mode).
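Parts of this already exist via the pytester/testdir fixture (enabled with
``pytest_plugins = "pytester"`` in a conftest.py); a sketch of today's
closest spelling::

    def test_run(testdir):
        testdir.makepyfile(test_run1="""
            def test_function():
                pass
        """)
        result = testdir.runpytest()
        result.assert_outcomes(passed=1)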
Create isolate plugin
---------------------
tags: feature
The idea is that you can e.g. import modules in a test and afterwards
sys.modules, sys.meta_path etc. would be reverted. It can go further
than just importing, however, e.g. the current working directory, file
descriptors, ...
This would probably be done by marking::
@pytest.mark.isolate(importing=True, cwd=True, fds=False)
def test_foo():
...
With the possibility of doing this globally in an ini-file.
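A minimal sketch of the sys.modules part, written as a fixture (the mark
and ini integration would come on top; all names hypothetical)::

    import sys
    import pytest

    @pytest.fixture
    def isolated_modules():
        saved = sys.modules.copy()
        yield
        # forget any module first imported during the test and restore
        # entries the test replaced
        sys.modules.clear()
        sys.modules.update(saved)

    def test_imports_something(isolated_modules):
        import csv  # a module first imported here is gone afterwards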
fnmatch for test names
----------------------
tags: feature-wish
various testsuites use suffixes instead of prefixes for test classes;
pattern matching also lends itself to bdd-style test names::
    class UserBehaviour:
        def anonymous_should_not_have_inbox(user):
            ...
        def registered_should_have_inbox(user):
            ...
using the following in pytest.ini::
[pytest]
python_classes = Test *Behaviour *Test
python_functions = test *_should_*
mechanism for running named parts of tests with different reporting behaviour
------------------------------------------------------------------------------
tags: feature-wish-incomplete
a few use-cases come to mind:
* fail assertions and record that without stopping a complete test
* this is in particular helpful if a small bit of a test is known to fail/xfail::
def test_fun():
    with pytest.section('fdcheck', marks=pytest.mark.xfail_if(...)):
        breaks_on_windows()
* divide functional/acceptance tests into sections
* provide a different mechanism for generators, maybe something like::
def pytest_runtest_call(item):
    if not generator:
        ...
    prepare_check = GeneratorCheckprepare()
    gen = item.obj(**fixtures)
    for check in gen:
        id, call = prepare_check(check)
        # bubble should only prevent exception propagation after a failure;
        # the whole test should still fail
        # there might be need for a lower-level api and for taking custom
        # markers into account
        with pytest.section(id, bubble=False):
            call()
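A rough sketch of the core of such a ``section`` helper, ignoring reporting
integration (all names hypothetical)::

    import contextlib

    class Sections:
        def __init__(self):
            self.failures = []

        @contextlib.contextmanager
        def section(self, name, bubble=True):
            # record the failure; with bubble=False the test body keeps
            # running, but the recorded failure still fails the test
            try:
                yield
            except Exception as exc:
                self.failures.append((name, exc))
                if bubble:
                    raise

    def test_fun():
        s = Sections()
        with s.section("fdcheck", bubble=False):
            assert 0, "breaks on windows"
        assert not s.failures, s.failures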