| |
| .. _paramexamples: |
| |
| Parametrizing tests |
| ================================================= |
| |
| .. currentmodule:: _pytest.python |
| |
``pytest`` makes it easy to parametrize test functions.
| For basic docs, see :ref:`parametrize-basics`. |
| |
The following sections provide some examples using
the builtin mechanisms.
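
For instance, the builtin :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>`
marker covers the most common case (a minimal sketch; ``test_example.py`` is
illustrative only and is not used in the sections below)::

    # content of test_example.py  (illustrative only)
    import pytest

    @pytest.mark.parametrize("n,expected", [(1, 2), (3, 4)])
    def test_increment(n, expected):
        assert n + 1 == expected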
| |
Generating parameter combinations, depending on command line
| ---------------------------------------------------------------------------- |
| |
| .. regendoc:wipe |
| |
Let's say we want to execute a test with different computation
parameters, where the parameter range is determined by a command
line argument. Let's first write a simple (do-nothing) computation test::
| |
| # content of test_compute.py |
| |
| def test_compute(param1): |
| assert param1 < 4 |
| |
| Now we add a test configuration like this:: |
| |
| # content of conftest.py |
| |
| def pytest_addoption(parser): |
| parser.addoption("--all", action="store_true", |
| help="run all combinations") |
| |
| def pytest_generate_tests(metafunc): |
| if 'param1' in metafunc.fixturenames: |
| if metafunc.config.option.all: |
| end = 5 |
| else: |
| end = 2 |
| metafunc.parametrize("param1", range(end)) |
| |
| This means that we only run 2 tests if we do not pass ``--all``:: |
| |
| $ py.test -q test_compute.py |
| .. |
| 2 passed in 0.12 seconds |
| |
We run only two computations, so we see two dots.
Let's run the full monty::
| |
| $ py.test -q --all |
| ....F |
| ======= FAILURES ======== |
| _______ test_compute[4] ________ |
| |
| param1 = 4 |
| |
| def test_compute(param1): |
| > assert param1 < 4 |
| E assert 4 < 4 |
| |
| test_compute.py:3: AssertionError |
| 1 failed, 4 passed in 0.12 seconds |
| |
As expected, when running the full range of ``param1`` values,
we get a failure on the last one.
| |
| |
| Different options for test IDs |
| ------------------------------------ |
| |
| pytest will build a string that is the test ID for each set of values in a |
| parametrized test. These IDs can be used with ``-k`` to select specific cases |
| to run, and they will also identify the specific case when one is failing. |
| Running pytest with ``--collect-only`` will show the generated IDs. |
| |
| Numbers, strings, booleans and None will have their usual string representation |
| used in the test ID. For other objects, pytest will make a string based on |
| the argument name:: |
| |
| # content of test_time.py |
| |
| import pytest |
| |
| from datetime import datetime, timedelta |
| |
| testdata = [ |
| (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)), |
| (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)), |
| ] |
| |
| |
| @pytest.mark.parametrize("a,b,expected", testdata) |
| def test_timedistance_v0(a, b, expected): |
| diff = a - b |
| assert diff == expected |
| |
| |
| @pytest.mark.parametrize("a,b,expected", testdata, ids=["forward", "backward"]) |
| def test_timedistance_v1(a, b, expected): |
| diff = a - b |
| assert diff == expected |
| |
| |
| def idfn(val): |
        if isinstance(val, datetime):
| # note this wouldn't show any hours/minutes/seconds |
| return val.strftime('%Y%m%d') |
| |
| |
| @pytest.mark.parametrize("a,b,expected", testdata, ids=idfn) |
| def test_timedistance_v2(a, b, expected): |
| diff = a - b |
| assert diff == expected |
| |
| |
| In ``test_timedistance_v0``, we let pytest generate the test IDs. |
| |
| In ``test_timedistance_v1``, we specified ``ids`` as a list of strings which were |
| used as the test IDs. These are succinct, but can be a pain to maintain. |
| |
In ``test_timedistance_v2``, we specified ``ids`` as a function that generates a
string representation to become part of the test ID. So our ``datetime`` values use the
label generated by ``idfn``, but because we didn't generate a label for ``timedelta``
objects, they still use the default pytest representation::
| |
| |
| $ py.test test_time.py --collect-only |
| ======= test session starts ======== |
| platform linux -- Python 3.4.0, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 |
| rootdir: $REGENDOC_TMPDIR, inifile: |
| collected 6 items |
| <Module 'test_time.py'> |
| <Function 'test_timedistance_v0[a0-b0-expected0]'> |
| <Function 'test_timedistance_v0[a1-b1-expected1]'> |
| <Function 'test_timedistance_v1[forward]'> |
| <Function 'test_timedistance_v1[backward]'> |
| <Function 'test_timedistance_v2[20011212-20011211-expected0]'> |
| <Function 'test_timedistance_v2[20011211-20011212-expected1]'> |
| |
| ======= no tests ran in 0.12 seconds ======== |
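
As mentioned above, these IDs can be used with ``-k`` to select specific
cases; for example, the following invocation would run only the ``forward``
case of ``test_timedistance_v1`` (output omitted)::

    $ py.test test_time.py -k forward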
| |
| A quick port of "testscenarios" |
| ------------------------------------ |
| |
| .. _`test scenarios`: http://pypi.python.org/pypi/testscenarios/ |
| |
| Here is a quick port to run tests configured with `test scenarios`_, |
| an add-on from Robert Collins for the standard unittest framework. We |
| only have to work a bit to construct the correct arguments for pytest's |
| :py:func:`Metafunc.parametrize`:: |
| |
| # content of test_scenarios.py |
| |
| def pytest_generate_tests(metafunc): |
| idlist = [] |
| argvalues = [] |
| for scenario in metafunc.cls.scenarios: |
| idlist.append(scenario[0]) |
| items = scenario[1].items() |
| argnames = [x[0] for x in items] |
| argvalues.append(([x[1] for x in items])) |
| metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class") |
| |
| scenario1 = ('basic', {'attribute': 'value'}) |
| scenario2 = ('advanced', {'attribute': 'value2'}) |
| |
| class TestSampleWithScenarios: |
| scenarios = [scenario1, scenario2] |
| |
| def test_demo1(self, attribute): |
| assert isinstance(attribute, str) |
| |
| def test_demo2(self, attribute): |
| assert isinstance(attribute, str) |
| |
This is a fully self-contained example which you can run with::
| |
| $ py.test test_scenarios.py |
| ======= test session starts ======== |
| platform linux -- Python 3.4.0, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 |
| rootdir: $REGENDOC_TMPDIR, inifile: |
| collected 4 items |
| |
| test_scenarios.py .... |
| |
| ======= 4 passed in 0.12 seconds ======== |
| |
If you just collect tests, you'll also nicely see 'advanced' and 'basic' as variants of the test function::
| |
| |
| $ py.test --collect-only test_scenarios.py |
| ======= test session starts ======== |
| platform linux -- Python 3.4.0, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 |
| rootdir: $REGENDOC_TMPDIR, inifile: |
| collected 4 items |
| <Module 'test_scenarios.py'> |
| <Class 'TestSampleWithScenarios'> |
| <Instance '()'> |
| <Function 'test_demo1[basic]'> |
| <Function 'test_demo2[basic]'> |
| <Function 'test_demo1[advanced]'> |
| <Function 'test_demo2[advanced]'> |
| |
| ======= no tests ran in 0.12 seconds ======== |
| |
Note that we told ``metafunc.parametrize()`` that our scenario values
should be considered class-scoped. With pytest-2.3 this leads to a
resource-based ordering: as the collect output above shows, all tests using
the 'basic' scenario run before those using 'advanced'.
| |
| Deferring the setup of parametrized resources |
| --------------------------------------------------- |
| |
| .. regendoc:wipe |
| |
The parametrization of test functions happens at collection
time. It is a good idea to set up expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example of how you can achieve that. First,
the actual test requiring a ``db`` object::
| |
| # content of test_backends.py |
| |
| import pytest |
| def test_db_initialized(db): |
| # a dummy test |
| if db.__class__.__name__ == "DB2": |
| pytest.fail("deliberately failing for demo purposes") |
| |
| We can now add a test configuration that generates two invocations of |
| the ``test_db_initialized`` function and also implements a factory that |
| creates a database object for the actual test invocations:: |
| |
| # content of conftest.py |
| import pytest |
| |
| def pytest_generate_tests(metafunc): |
| if 'db' in metafunc.fixturenames: |
| metafunc.parametrize("db", ['d1', 'd2'], indirect=True) |
| |
| class DB1: |
| "one database object" |
| class DB2: |
| "alternative database object" |
| |
| @pytest.fixture |
| def db(request): |
| if request.param == "d1": |
| return DB1() |
| elif request.param == "d2": |
| return DB2() |
| else: |
| raise ValueError("invalid internal test config") |
| |
Let's first see what it looks like at collection time::
| |
| $ py.test test_backends.py --collect-only |
| ======= test session starts ======== |
| platform linux -- Python 3.4.0, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 |
| rootdir: $REGENDOC_TMPDIR, inifile: |
| collected 2 items |
| <Module 'test_backends.py'> |
| <Function 'test_db_initialized[d1]'> |
| <Function 'test_db_initialized[d2]'> |
| |
| ======= no tests ran in 0.12 seconds ======== |
| |
| And then when we run the test:: |
| |
| $ py.test -q test_backends.py |
| .F |
| ======= FAILURES ======== |
| _______ test_db_initialized[d2] ________ |
| |
| db = <conftest.DB2 object at 0xdeadbeef> |
| |
| def test_db_initialized(db): |
| # a dummy test |
| if db.__class__.__name__ == "DB2": |
| > pytest.fail("deliberately failing for demo purposes") |
| E Failed: deliberately failing for demo purposes |
| |
| test_backends.py:6: Failed |
| 1 failed, 1 passed in 0.12 seconds |
| |
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function instantiated each of the DB classes during the setup phase, while ``pytest_generate_tests`` generated two corresponding calls to ``test_db_initialized`` during the collection phase.
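
If you do not need the collection-time logic of ``pytest_generate_tests``,
the same deferred instantiation can also be written with a parametrized
fixture alone. Here is an alternative sketch (not part of the example
above) using the fixture ``params`` mechanism::

    # alternative conftest.py sketch using a parametrized fixture
    import pytest

    class DB1:
        "one database object"

    class DB2:
        "alternative database object"

    @pytest.fixture(params=[DB1, DB2], ids=["d1", "d2"])
    def db(request):
        # the database class is only instantiated at test setup time,
        # not during collection
        return request.param()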
| |
| .. regendoc:wipe |
| |
| Apply indirect on particular arguments |
| --------------------------------------------------- |
| |
Very often parametrization uses more than one argument name. The ``indirect``
parameter can also be applied to particular arguments only, by passing a list or
tuple of argument names to it. In the example below there is a function ``test_indirect``
which uses two fixtures: ``x`` and ``y``. Here we pass ``indirect`` a list containing the
name of the fixture ``x``. Indirect parametrization is applied to this argument only,
and the value ``a`` will be passed to the respective fixture function::
| |
| # content of test_indirect_list.py |
| |
| import pytest |
| @pytest.fixture(scope='function') |
| def x(request): |
| return request.param * 3 |
| |
| @pytest.fixture(scope='function') |
| def y(request): |
| return request.param * 2 |
| |
| @pytest.mark.parametrize('x, y', [('a', 'b')], indirect=['x']) |
| def test_indirect(x,y): |
| assert x == 'aaa' |
| assert y == 'b' |
| |
Collecting this test shows the single generated test ID (running it passes)::
| |
| $ py.test test_indirect_list.py --collect-only |
| ======= test session starts ======== |
| platform linux -- Python 3.4.0, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 |
| rootdir: $REGENDOC_TMPDIR, inifile: |
| collected 1 items |
| <Module 'test_indirect_list.py'> |
| <Function 'test_indirect[a-b]'> |
| |
| ======= no tests ran in 0.12 seconds ======== |
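
For comparison, passing ``indirect=True`` would route both arguments through
their fixture functions; with the fixtures above, ``y`` would then receive
``'bb'`` instead of ``'b'`` (a sketch; ``test_indirect_all`` is a hypothetical
name)::

    @pytest.mark.parametrize('x, y', [('a', 'b')], indirect=True)
    def test_indirect_all(x, y):
        # both values now pass through their fixture functions
        assert x == 'aaa'
        assert y == 'bb'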
| |
| .. regendoc:wipe |
| |
| Parametrizing test methods through per-class configuration |
| -------------------------------------------------------------- |
| |
| .. _`unittest parametrizer`: https://github.com/testing-cabal/unittest-ext/blob/master/params.py |
| |
| |
Here is an example ``pytest_generate_tests`` function implementing a
parametrization scheme similar to Michael Foord's `unittest
parametrizer`_ but in a lot less code::
| |
| # content of ./test_parametrize.py |
| import pytest |
| |
| def pytest_generate_tests(metafunc): |
| # called once per each test function |
| funcarglist = metafunc.cls.params[metafunc.function.__name__] |
| argnames = list(funcarglist[0]) |
| metafunc.parametrize(argnames, [[funcargs[name] for name in argnames] |
| for funcargs in funcarglist]) |
| |
| class TestClass: |
| # a map specifying multiple argument sets for a test method |
| params = { |
| 'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ], |
| 'test_zerodivision': [dict(a=1, b=0), ], |
| } |
| |
| def test_equals(self, a, b): |
| assert a == b |
| |
        def test_zerodivision(self, a, b):
            with pytest.raises(ZeroDivisionError):
                a / b
| |
| Our test generator looks up a class-level definition which specifies which |
| argument sets to use for each test function. Let's run it:: |
| |
| $ py.test -q |
| F.. |
| ======= FAILURES ======== |
| _______ TestClass.test_equals[1-2] ________ |
| |
| self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2 |
| |
| def test_equals(self, a, b): |
| > assert a == b |
| E assert 1 == 2 |
| |
| test_parametrize.py:18: AssertionError |
| 1 failed, 2 passed in 0.12 seconds |
| |
| Indirect parametrization with multiple fixtures |
| -------------------------------------------------------------- |
| |
| Here is a stripped down real-life example of using parametrized |
| testing for testing serialization of objects between different python |
| interpreters. We define a ``test_basic_objects`` function which |
| is to be run with different sets of arguments for its three arguments: |
| |
| * ``python1``: first python interpreter, run to pickle-dump an object to a file |
| * ``python2``: second interpreter, run to pickle-load an object from a file |
| * ``obj``: object to be dumped/loaded |
| |
| .. literalinclude:: multipython.py |
| |
Running it results in some skips if we don't have all the python interpreters installed, and otherwise runs all combinations (3 interpreters times 3 interpreters times 3 objects to serialize/deserialize)::
| |
    $ py.test -rs -q multipython.py
| ssssssssssss...ssssssssssss |
| ======= short test summary info ======== |
| SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python3.3' not found |
| SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python2.6' not found |
| 3 passed, 24 skipped in 0.12 seconds |
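
For reference, the core wiring of such a setup can be sketched as follows.
This is a simplified sketch, not the contents of ``multipython.py``; the
interpreter list and the helper logic are assumptions::

    # simplified sketch of indirect parametrization with multiple fixtures
    import pytest

    pythonlist = ['python2.6', 'python2.7', 'python3.3']  # assumed list

    @pytest.fixture(params=pythonlist)
    def python1(request):
        # the real example constructs a helper object that runs
        # the given interpreter to pickle-dump an object
        return request.param

    @pytest.fixture(params=pythonlist)
    def python2(request):
        # ... and one that runs the interpreter to pickle-load it back
        return request.param

    @pytest.fixture(params=[42, {}, {1: 3}])
    def obj(request):
        return request.param

    def test_basic_objects(python1, python2, obj):
        # the real test dumps obj with python1 and loads it with python2
        pass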
| |
| Indirect parametrization of optional implementations/imports |
| -------------------------------------------------------------------- |
| |
If you want to compare the outcomes of several implementations of a given
API, you can write test functions that receive the already imported implementations
and get skipped in case the implementation is not importable/available. Let's
say we have a "base" implementation and the other implementations (possibly
optimized) need to provide similar results::
| |
| # content of conftest.py |
| |
| import pytest |
| |
| @pytest.fixture(scope="session") |
| def basemod(request): |
| return pytest.importorskip("base") |
| |
| @pytest.fixture(scope="session", params=["opt1", "opt2"]) |
| def optmod(request): |
| return pytest.importorskip(request.param) |
| |
| And then a base implementation of a simple function:: |
| |
| # content of base.py |
| def func1(): |
| return 1 |
| |
| And an optimized version:: |
| |
| # content of opt1.py |
| def func1(): |
| return 1.0001 |
| |
| And finally a little test module:: |
| |
| # content of test_module.py |
| |
| def test_func1(basemod, optmod): |
| assert round(basemod.func1(), 3) == round(optmod.func1(), 3) |
| |
| |
| If you run this with reporting for skips enabled:: |
| |
| $ py.test -rs test_module.py |
| ======= test session starts ======== |
| platform linux -- Python 3.4.0, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 |
| rootdir: $REGENDOC_TMPDIR, inifile: |
| collected 2 items |
| |
| test_module.py .s |
| ======= short test summary info ======== |
| SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 'opt2' |
| |
| ======= 1 passed, 1 skipped in 0.12 seconds ======== |
| |
You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
| |
- the fixture functions in the ``conftest.py`` file are "session-scoped" because we
  don't need to import the modules more than once
| |
| - if you have multiple test functions and a skipped import, you will see |
| the ``[1]`` count increasing in the report |
| |
- you can put :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>` style
  parametrization on the test functions to parametrize input/output
  values as well, as sketched below.
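
A minimal sketch combining both styles, assuming the ``basemod``/``optmod``
fixtures from above (the ``ndigits`` values and the test name are made up)::

    # content of test_module.py, extended with value parametrization
    import pytest

    @pytest.mark.parametrize("ndigits", [0, 1, 3])
    def test_func1_rounding(basemod, optmod, ndigits):
        # runs once per ndigits value for each available implementation
        assert round(basemod.func1(), ndigits) == round(optmod.func1(), ndigits)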
| |
| |
| |