
Installation and Getting Started

Pythons: Python 2.6, 2.7, 3.3, 3.4, 3.5, Jython, PyPy-2.3

Platforms: Unix/Posix and Windows

PyPI package name: pytest

Dependencies: py, colorama (Windows), argparse (py26).


Installation

To install pytest, run:

pip install -U pytest

To check that you have installed the correct version:

$ pytest --version
This is pytest version 3.0.7, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py

Our first test run

Let’s create a first test file with a simple test function:

# content of test_sample.py
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5

That’s it. You can execute the test function now:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items

test_sample.py F

======= FAILURES ========
_______ test_answer ________

    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========

We got a failure report because our little func(3) call did not return 5.

Note

You can simply use the assert statement for asserting test expectations. pytest’s Advanced assertion introspection will intelligently report intermediate values of the assert expression freeing you from the need to learn the many names of JUnit legacy methods.

Running multiple tests

pytest will run all files in the current directory and its subdirectories of the form test_*.py or *_test.py. More generally, it follows standard test discovery rules.
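
For instance, given a hypothetical layout like the following, running pytest with no arguments collects both test files but ignores the helper module:

tests/test_one.py    # collected: matches test_*.py
tests/two_test.py    # collected: matches *_test.py
tests/helpers.py     # not collected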

Asserting that a certain exception is raised

If you want to assert that some code raises an exception you can use the raises helper:

# content of test_sysexit.py
import pytest
def f():
    raise SystemExit(1)

def test_mytest():
    with pytest.raises(SystemExit):
        f()

Running it, this time in “quiet” reporting mode:

$ pytest -q test_sysexit.py
.
1 passed in 0.12 seconds

Grouping multiple tests in a class

Once you start to have more than a few tests it often makes sense to group tests logically, in classes and modules. Let’s write a class containing two tests:

# content of test_class.py
class TestClass:
    def test_one(self):
        x = "this"
        assert 'h' in x

    def test_two(self):
        x = "hello"
        assert hasattr(x, 'check')

The two tests are found because of the standard Conventions for Python test discovery. There is no need to subclass anything. We can simply run the module by passing its filename:

$ pytest -q test_class.py
.F
======= FAILURES ========
_______ TestClass.test_two ________

self = <test_class.TestClass object at 0xdeadbeef>

    def test_two(self):
        x = "hello"
>       assert hasattr(x, 'check')
E       AssertionError: assert False
E        +  where False = hasattr('hello', 'check')

test_class.py:8: AssertionError
1 failed, 1 passed in 0.12 seconds

The first test passed, the second failed. Again we can easily see the intermediate values used in the assertion, helping us to understand the reason for the failure.

Going functional: requesting a unique temporary directory

For functional tests one often needs to create some files and pass them to application objects. pytest provides Builtin fixtures/function arguments which allow you to request arbitrary resources, for example a unique temporary directory:

# content of test_tmpdir.py
def test_needsfiles(tmpdir):
    print (tmpdir)
    assert 0

We list the name tmpdir in the test function signature and pytest will look up and call a fixture factory to create the resource before performing the test function call. Let’s just run it:

$ pytest -q test_tmpdir.py
F
======= FAILURES ========
_______ test_needsfiles ________

tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')

    def test_needsfiles(tmpdir):
        print (tmpdir)
>       assert 0
E       assert 0

test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
1 failed in 0.12 seconds

Before the test ran, a temporary directory unique to this test invocation was created. More info at Temporary directories and files.

You can find out what kind of builtin pytest fixtures exist by typing:

pytest --fixtures   # shows builtin and custom fixtures

Where to go next

Here are a few suggestions for where to go next:

Usage and Invocations

Calling pytest through python -m pytest

New in version 2.0.

You can invoke testing through the Python interpreter from the command line:

python -m pytest [...]

This is almost equivalent to invoking the command line script pytest [...] directly, except that python will also add the current directory to sys.path.

Getting help on version, option names, environment variables

pytest --version   # shows where pytest was imported from
pytest --fixtures  # show available builtin function arguments
pytest -h | --help # show help on command line and config file options

Stopping after the first (or N) failures

To stop the testing process after the first (N) failures:

pytest -x           # stop after first failure
pytest --maxfail=2  # stop after two failures

Specifying tests / selecting tests

Several test run options:

pytest test_mod.py   # run tests in module
pytest somepath      # run all tests below somepath
pytest -k stringexpr  # only run tests with names that match the
                      # "string expression", e.g. "MyClass and not method"
                      # will select TestMyClass.test_something
                      # but not TestMyClass.test_method_simple
pytest test_mod.py::test_func  # only run tests that match the "node ID",
                               # e.g. "test_mod.py::test_func" will select
                               # only test_func in test_mod.py
pytest test_mod.py::TestClass::test_method  # run a single method in
                                            # a single class

Import ‘pkg’ and use its filesystem location to find and run tests:

pytest --pyargs pkg # run all tests found below directory of pkg

Modifying Python traceback printing

Examples for modifying traceback printing:

pytest --showlocals # show local variables in tracebacks
pytest -l           # show local variables (shortcut)

pytest --tb=auto    # (default) 'long' tracebacks for the first and last
                    # entry, but 'short' style for the other entries
pytest --tb=long    # exhaustive, informative traceback formatting
pytest --tb=short   # shorter traceback format
pytest --tb=line    # only one line per failure
pytest --tb=native  # Python standard library formatting
pytest --tb=no      # no traceback at all

The --full-trace option causes very long traces to be printed on error (longer than --tb=long). It also ensures that a stack trace is printed on KeyboardInterrupt (Ctrl+C). This is very useful if the tests are taking too long and you interrupt them with Ctrl+C to find out where they are hanging. By default no output will be shown (because KeyboardInterrupt is caught by pytest). Using this option makes sure a trace is shown.
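
For example:

pytest --full-trace  # full tracebacks, plus a trace on KeyboardInterrupt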

Dropping to PDB (Python Debugger) on failures

Python comes with a builtin Python debugger called PDB. pytest allows one to drop into the PDB prompt via a command line option:

pytest --pdb

This will invoke the Python debugger on every failure. Often you might only want to do this for the first failing test to understand a certain failure situation:

pytest -x --pdb   # drop to PDB on first failure, then end test session
pytest --pdb --maxfail=3  # drop to PDB for first three failures

Note that on any failure the exception information is stored on sys.last_value, sys.last_type and sys.last_traceback. In interactive use, this allows one to drop into postmortem debugging with any debug tool. One can also manually access the exception information, for example:

>>> import sys
>>> sys.last_traceback.tb_lineno
42
>>> sys.last_value
AssertionError('assert result == "ok"',)

Setting a breakpoint / aka set_trace()

If you want to set a breakpoint and enter the pdb.set_trace() you can use a helper:

import pytest
def test_function():
    ...
    pytest.set_trace()    # invoke PDB debugger and tracing

Prior to pytest version 2.0.0 you could only enter PDB tracing if you disabled capturing on the command line via pytest -s. In later versions, pytest automatically disables its output capture when you enter PDB tracing:

  • Output capture in other tests is not affected.
  • Any prior test output that has already been captured will be processed as such.
  • Any later output produced within the same test will not be captured and will instead get sent directly to sys.stdout. Note that this holds true even for test output occurring after you exit the interactive PDB tracing session and continue with the regular test run.

Since pytest version 2.4.0 you can also use the native Python import pdb;pdb.set_trace() call to enter PDB tracing without having to use the pytest.set_trace() wrapper or explicitly disable pytest’s output capturing via pytest -s.

Profiling test execution duration

To get a list of the slowest 10 test durations:

pytest --durations=10

Creating JUnitXML format files

To create result files which can be read by Jenkins or other continuous integration servers, use this invocation:

pytest --junitxml=path

to create an XML file at path.

record_xml_property

New in version 2.8.

If you want to log additional information for a test, you can use the record_xml_property fixture:

def test_function(record_xml_property):
    record_xml_property("example_key", 1)
    assert 0

This will add an extra property example_key="1" to the generated testcase tag:

<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
  <properties>
    <property name="example_key" value="1" />
  </properties>
</testcase>

Warning

This is an experimental feature, and its interface might be replaced by something more powerful and general in future versions. The functionality per se will be kept, however.

Currently it does not work when used with the pytest-xdist plugin.

Also please note that using this feature will break any schema verification. This might be a problem when used with some CI servers.

LogXML: add_global_property

New in version 3.0.

If you want to add a properties node at the testsuite level, which may contain properties that are relevant to all test cases, you can use LogXML.add_global_property:

import pytest

@pytest.fixture(scope="session")
def log_global_env_facts(request):
    # only add the global properties when the junitxml plugin is active
    if request.config.pluginmanager.hasplugin('junitxml'):
        my_junit = getattr(request.config, '_xml', None)
        if my_junit is not None:
            my_junit.add_global_property('ARCH', 'PPC')
            my_junit.add_global_property('STORAGE_TYPE', 'CEPH')

@pytest.mark.usefixtures("log_global_env_facts")
class TestMe:
    def test_foo(self):
        assert True

This will add a property node below the testsuite node to the generated xml:

<testsuite errors="0" failures="0" name="pytest" skips="0" tests="1" time="0.006">
  <properties>
    <property name="ARCH" value="PPC"/>
    <property name="STORAGE_TYPE" value="CEPH"/>
  </properties>
  <testcase classname="test_me.TestMe" file="test_me.py" line="16" name="test_foo" time="0.000243663787842"/>
</testsuite>

Warning

This is an experimental feature, and its interface might be replaced by something more powerful and general in future versions. The functionality per se will be kept.

Creating resultlog format files

Deprecated since version 3.0: This option is rarely used and is scheduled for removal in 4.0.

To create plain-text machine-readable result files you can issue:

pytest --resultlog=path

and look at the content at the path location. Such files are used e.g. by the PyPy-test web page to show test results over several revisions.

Sending test report to online pastebin service

Creating a URL for each test failure:

pytest --pastebin=failed

This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example -x if you only want to send one particular failure.

Creating a URL for a whole test session log:

pytest --pastebin=all

Currently only pasting to the http://bpaste.net service is implemented.

Disabling plugins

To disable loading specific plugins at invocation time, use the -p option together with the prefix no:.

Example: to disable loading the plugin doctest, which is responsible for executing doctest tests from text files, invoke pytest like this:

pytest -p no:doctest
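
Another common use, as a sketch, is disabling the builtin cacheprovider plugin if you do not want a .cache directory to be created:

pytest -p no:cacheprovider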

Calling pytest from Python code

New in version 2.0.

You can invoke pytest from Python code directly:

pytest.main()

This acts as if you were calling “pytest” from the command line. It will not raise SystemExit but return the exit code instead. You can pass in options and arguments:

pytest.main(['-x', 'mytestdir'])
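
Because pytest.main returns the exit code instead of raising SystemExit, a small wrapper script can propagate it to the shell. A minimal sketch (runtests.py is a hypothetical name, mytestdir as above):

# content of runtests.py
import sys
import pytest

# run the tests and exit with pytest's exit code
sys.exit(pytest.main(['-x', 'mytestdir']))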

You can specify additional plugins to pytest.main:

# content of myinvoke.py
import pytest
class MyPlugin:
    def pytest_sessionfinish(self):
        print("*** test run reporting finishing")

pytest.main(["-qq"], plugins=[MyPlugin()])

Running it will show that MyPlugin was added and its hook was invoked:

$ python myinvoke.py
*** test run reporting finishing

The writing and reporting of assertions in tests

Asserting with the assert statement

pytest allows you to use the standard python assert for verifying expectations and values in Python tests. For example, you can write the following:

# content of test_assert1.py
def f():
    return 3

def test_function():
    assert f() == 4

to assert that your function returns a certain value. If this assertion fails you will see the return value of the function call:

$ pytest test_assert1.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items

test_assert1.py F

======= FAILURES ========
_______ test_function ________

    def test_function():
>       assert f() == 4
E       assert 3 == 4
E        +  where 3 = f()

test_assert1.py:5: AssertionError
======= 1 failed in 0.12 seconds ========

pytest has support for showing the values of the most common subexpressions including calls, attributes, comparisons, and binary and unary operators. (See Demo of Python failure reports with pytest). This allows you to use the idiomatic python constructs without boilerplate code while not losing introspection information.

However, if you specify a message with the assertion like this:

assert a % 2 == 0, "value was odd, should be even"

then no assertion introspection takes place at all and the message will simply be shown in the traceback.

See Advanced assertion introspection for more information on assertion introspection.

Assertions about expected exceptions

In order to write assertions about raised exceptions, you can use pytest.raises as a context manager like this:

import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0

and if you need to have access to the actual exception info you may use:

def test_recursion_depth():
    with pytest.raises(RuntimeError) as excinfo:
        def f():
            f()
        f()
    assert 'maximum recursion' in str(excinfo.value)

excinfo is a ExceptionInfo instance, which is a wrapper around the actual exception raised. The main attributes of interest are .type, .value and .traceback.

Changed in version 3.0.

In the context manager form you may use the keyword argument message to specify a custom failure message:

>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
...    pass
Failed: Expecting ZeroDivisionError

If you want to write test code that works on Python 2.4 as well, you may also use two other ways to test for an expected exception:

pytest.raises(ExpectedException, func, *args, **kwargs)
pytest.raises(ExpectedException, "func(*args, **kwargs)")

both of which execute the specified function with args and kwargs and assert that the given ExpectedException is raised. The reporter will provide you with helpful output in case of failures such as no exception or wrong exception.

Note that it is also possible to specify a “raises” argument to pytest.mark.xfail, which checks that the test is failing in a more specific way than just having any exception raised:

@pytest.mark.xfail(raises=IndexError)
def test_f():
    f()

Using pytest.raises is likely to be better for cases where you are testing exceptions your own code is deliberately raising, whereas using @pytest.mark.xfail with a check function is probably better for something like documenting unfixed bugs (where the test describes what “should” happen) or bugs in dependencies.

If you want to test that a regular expression matches on the string representation of an exception (like the TestCase.assertRaisesRegexp method from unittest) you can use the ExceptionInfo.match method:

import pytest

def myfunc():
    raise ValueError("Exception 123 raised")

def test_match():
    with pytest.raises(ValueError) as excinfo:
        myfunc()
    excinfo.match(r'.* 123 .*')

The regexp parameter of the match method is matched with the re.search function. So in the above example excinfo.match('123') would have worked as well.

Assertions about expected warnings

New in version 2.8.

You can check that code raises a particular warning using pytest.warns.
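
A minimal sketch of such a test:

import warnings
import pytest

def test_example_warning():
    # the test passes only if the block emits a UserWarning
    with pytest.warns(UserWarning):
        warnings.warn("my warning", UserWarning)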

Making use of context-sensitive comparisons

New in version 2.0.

pytest has rich support for providing context-sensitive information when it encounters comparisons. For example:

# content of test_assert2.py

def test_set_comparison():
    set1 = set("1308")
    set2 = set("8035")
    assert set1 == set2

if you run this module:

$ pytest test_assert2.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items

test_assert2.py F

======= FAILURES ========
_______ test_set_comparison ________

    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
>       assert set1 == set2
E       AssertionError: assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
E         Extra items in the left set:
E         '1'
E         Extra items in the right set:
E         '5'
E         Use -v to get the full diff

test_assert2.py:5: AssertionError
======= 1 failed in 0.12 seconds ========

Special comparisons are done for a number of cases:

  • comparing long strings: a context diff is shown
  • comparing long sequences: first failing indices
  • comparing dicts: different entries

See the reporting demo for many more examples.
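
As a small illustration of the dict case, a hypothetical test like:

# content of test_dict_compare.py
def test_dict_comparison():
    assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}

will report only the differing 'b' entries rather than dumping both dicts in full.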

Defining your own assertion comparison

It is possible to add your own detailed explanations by implementing the pytest_assertrepr_compare hook.

pytest_assertrepr_compare(config, op, left, right)[source]

return explanation for comparisons in failing assert expressions.

Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines but any newlines in a string will be escaped. Note that all but the first line will be indented slightly, the intention is for the first line to be a summary.

As an example consider adding the following hook in a conftest.py which provides an alternative explanation for Foo objects:

# content of conftest.py
from test_foocompare import Foo
def pytest_assertrepr_compare(op, left, right):
    if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
        return ['Comparing Foo instances:',
                '   vals: %s != %s' % (left.val, right.val)]

now, given this test module:

# content of test_foocompare.py
class Foo:
    def __init__(self, val):
        self.val = val

    def __eq__(self, other):
        return self.val == other.val

def test_compare():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2

you can run the test module and get the custom output defined in the conftest file:

$ pytest -q test_foocompare.py
F
======= FAILURES ========
_______ test_compare ________

    def test_compare():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2
E       assert Comparing Foo instances:
E            vals: 1 != 2

test_foocompare.py:11: AssertionError
1 failed in 0.12 seconds

Advanced assertion introspection

New in version 2.1.

Reporting details about a failing assertion is achieved by rewriting assert statements before they are run. Rewritten assert statements put introspection information into the assertion failure message. pytest only rewrites test modules directly discovered by its test collection process, so asserts in supporting modules which are not themselves test modules will not be rewritten.
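
If you need asserts in such a supporting module to be rewritten as well, pytest 3.0 provides pytest.register_assert_rewrite, which you can call before the module is imported. A minimal sketch, where mypkg.helpers is a hypothetical helper module imported by your tests:

# content of conftest.py
import pytest

# ask pytest to rewrite asserts in a non-test helper module
pytest.register_assert_rewrite("mypkg.helpers")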

Note

pytest rewrites test modules on import. It does this by using an import hook to write new pyc files. Most of the time this works transparently. However, if you are messing with import yourself, the import hook may interfere. If this is the case, use --assert=plain. Additionally, rewriting will fail silently if it cannot write new pycs, i.e. in a read-only filesystem or a zipfile.

For further information, Benjamin Peterson wrote up Behind the scenes of pytest’s new assertion rewriting.

New in version 2.1: Add assert rewriting as an alternate introspection technique.

Changed in version 2.1: Introduce the --assert option. Deprecate --no-assert and --nomagic.

Changed in version 3.0: Removes the --no-assert and --nomagic options. Removes the --assert=reinterp option.

Pytest API and builtin fixtures

This is a list of pytest.* API functions and fixtures.

For information on plugin hooks and objects, see Writing plugins.

For information on the pytest.mark mechanism, see Marking test functions with attributes.

For the below objects, you can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:

import pytest
help(pytest)

Invoking pytest interactively

main(args=None, plugins=None)[source]

return exit code, after performing an in-process test run.

Parameters:
  • args – list of command line arguments.
  • plugins – list of plugin objects to be auto-registered during initialization.

More examples at Calling pytest from Python code

Helpers for assertions about Exceptions/Warnings

raises(expected_exception, *args, **kwargs)[source]

Assert that a code block/function call raises expected_exception and raise a failure exception otherwise.

This helper produces an ExceptionInfo() object (see below).

If using Python 2.5 or above, you may use this function as a context manager:

>>> with raises(ZeroDivisionError):
...    1/0

Changed in version 2.10.

In the context manager form you may use the keyword argument message to specify a custom failure message:

>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
...    pass
Traceback (most recent call last):
  ...
Failed: Expecting ZeroDivisionError

Note

When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:

>>> value = 15
>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert str(exc_info.value) == "value must be <= 10"  # this will not execute

Instead, the following approach must be taken (note the difference in scope):

>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert str(exc_info.value) == "value must be <= 10"

Or you can specify a callable by passing a to-be-called lambda:

>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>

or you can specify an arbitrary callable with arguments:

>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>

A third possibility is to use a string to be executed:

>>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...>

class ExceptionInfo(tup=None, exprinfo=None)[source]

wraps sys.exc_info() objects and offers help for navigating the traceback.

type = None

the exception class

value = None

the exception instance

tb = None

the exception raw traceback

typename = None

the exception type name

traceback = None

the exception traceback (_pytest._code.Traceback instance)

exconly(tryshort=False)[source]

return the exception as a string

when ‘tryshort’ resolves to True, and the exception is a _pytest._code._AssertionError, only the actual exception part of the exception representation is returned (so ‘AssertionError: ‘ is removed from the beginning)

errisinstance(exc)[source]

return True if the exception is an instance of exc

getrepr(showlocals=False, style='long', abspath=False, tbfilter=True, funcargs=False)[source]

return str()able representation of this exception info.

showlocals: show locals per traceback entry
style: long|short|no|native traceback style
tbfilter: hide entries (where __tracebackhide__ is true)

in case of style==native, tbfilter and showlocals are ignored.

match(regexp)[source]

Match the regular expression ‘regexp’ on the string representation of the exception. If it matches then True is returned (so that it is possible to write ‘assert excinfo.match()’). If it doesn’t match, an AssertionError is raised.

Note

Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.

Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. See the official Python try statement documentation for more detailed information.

Examples at Assertions about expected exceptions.

deprecated_call(func=None, *args, **kwargs)[source]

assert that calling func(*args, **kwargs) triggers a DeprecationWarning or PendingDeprecationWarning.

This function can be used as a context manager:

>>> import warnings
>>> def api_call_v2():
...     warnings.warn('use v3 of this api', DeprecationWarning)
...     return 200

>>> with deprecated_call():
...    assert api_call_v2() == 200

Note: we cannot use WarningsRecorder here because it is still subject to the mechanism that prevents warnings of the same type from being triggered twice for the same module. See #1190.

Comparing floating point numbers

class approx(expected, rel=None, abs=None)[source]

Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance.

Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not always so:

>>> 0.1 + 0.2 == 0.3
False

This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:

>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True

However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there’s no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.

The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:

>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True

The same syntax also works on sequences of numbers:

>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True

By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal. Infinite numbers are another special case. They are only considered equal to themselves, regardless of the relative tolerance. Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:

>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True

If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:

>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True

If you’re thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:

  • math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t. both a and b, this test is symmetric (i.e. neither a nor b is a “reference value”). You have to specify an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default. Only available in python>=3.5. More information...
  • numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support for comparing sequences is provided by numpy.allclose. More information...
  • unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tolerance of 1e-7. No relative tolerance is considered and the absolute tolerance cannot be changed, so this function is not appropriate for very large or very small numbers. Also, it’s only available in subclasses of unittest.TestCase and it’s ugly because it doesn’t follow PEP8. More information...
  • a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t. b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. In the special case that you explicitly specify an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.

Raising a specific test outcome

You can use the following functions in your test, fixture or setup functions to force a certain test outcome. Note that most often you can use declarative marks instead, see Skip and xfail: dealing with tests that cannot succeed.

fail(msg='', pytrace=True)[source]

explicitly fail a currently-executing test with the given message.

Parameters: pytrace – if false the msg represents the full failure information and no python traceback will be reported.

skip(msg='')[source]

skip an executing test with the given message. Note: it’s usually better to use the pytest.mark.skipif marker to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. See the pytest_skipping plugin for details.

importorskip(modname, minversion=None)[source]

return imported module if it has at least “minversion” as its __version__ attribute. If no minversion is specified, a skip is only triggered if the module can not be imported.
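
A typical use, as a sketch, is at module import time so that every test in the module is skipped when the dependency is missing or too old:

import pytest

# skip all tests in this module unless docutils >= 0.3 is importable
docutils = pytest.importorskip("docutils", minversion="0.3")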

xfail(reason='')[source]

xfail an executing test or setup functions with the given reason.

exit(msg)[source]

exit testing process as if KeyboardInterrupt was triggered.

Fixtures and requests

To mark a fixture function:

fixture(scope='function', params=None, autouse=False, ids=None, name=None)[source]

(return a) decorator to mark a fixture factory function.

This decorator can be used (with or without parameters) to define a fixture function. The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker. Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.

Parameters:
  • scope – the scope for which this fixture is shared, one of “function” (default), “class”, “module” or “session”.
  • params – an optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it.
  • autouse – if True, the fixture func is activated for all tests that can see it. If False (the default) then an explicit reference is needed to activate the fixture.
  • ids – list of string ids each corresponding to the params so that they are part of the test id. If no ids are provided they will be generated automatically from the params.
  • name – the name of the fixture. This defaults to the name of the decorated function. If a fixture is used in the same module in which it is defined, the function name of the fixture will be shadowed by the function arg that requests the fixture; one way to resolve this is to name the decorated function fixture_<fixturename> and then use @pytest.fixture(name='<fixturename>').

Fixtures can optionally provide their values to test functions using a yield statement, instead of return. In this case, the code block after the yield statement is executed as teardown code regardless of the test outcome. A fixture function must yield exactly once.

Tutorial at pytest fixtures: explicit, modular, scalable.

The request object that can be used from fixture functions.

class FixtureRequest[source]

A request for a fixture from a test or fixture function.

A request object gives access to the requesting test context and has an optional param attribute in case the fixture is parametrized indirectly.

fixturename = None

fixture for which this request is being performed

scope = None

Scope string, one of “function”, “class”, “module”, “session”

node

underlying collection node (depends on current request scope)

config

the pytest config object associated with this request.

function

test function object if the request has a per-function scope.

cls

class (can be None) where the test function was collected.

instance

instance (can be None) on which test function was collected.

module

python module object where the test function was collected.

fspath

the file system path of the test module which collected this test.

keywords

keywords/markers dictionary for the underlying node.

session

pytest session object.

addfinalizer(finalizer)[source]

add finalizer/teardown function to be called after the last test within the requesting test context finished execution.

applymarker(marker)[source]

Apply a marker to a single test function invocation. This method is useful if you don’t want to have a keyword/marker on all function invocations.

Parameters: marker – a _pytest.mark.MarkDecorator object created by a call to pytest.mark.NAME(...).

raiseerror(msg)[source]

raise a FixtureLookupError with the given message.

cached_setup(setup, teardown=None, scope='module', extrakey=None)[source]

(deprecated) Return a testing resource managed by setup & teardown calls. scope and extrakey determine when the teardown function will be called so that subsequent calls to setup would recreate the resource. With pytest-2.3 you often do not need cached_setup() as you can directly declare a scope on a fixture function and register a finalizer through request.addfinalizer().

Parameters:
  • teardown – function receiving a previously setup resource.
  • setup – a no-argument function creating a resource.
  • scope – a string value out of function, class, module or session indicating the caching lifecycle of the resource.
  • extrakey – added to internal caching key of (funcargname, scope).

getfixturevalue(argname)[source]

Dynamically run a named fixture function.

Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.
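
A minimal sketch, assuming two hypothetical backend fixtures selected by an environment variable:

import os
import pytest

@pytest.fixture
def backend_a():
    return "A"

@pytest.fixture
def backend_b():
    return "B"

@pytest.fixture
def backend(request):
    # decide at setup time which fixture to retrieve by name
    name = "backend_a" if os.environ.get("USE_BACKEND_A") else "backend_b"
    return request.getfixturevalue(name)

def test_backend(backend):
    assert backend in ("A", "B")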

getfuncargvalue(argname)[source]

Deprecated, use getfixturevalue.

Builtin fixtures/function arguments

You can ask for available builtin or project-custom fixtures by typing:

$ pytest -q --fixtures
cache
    Return a cache object that can persist state between testing sessions.

    cache.get(key, default)
    cache.set(key, value)

    Keys must be a ``/`` separated value, where the first part is usually the
    name of your plugin or application to avoid clashes with other cache users.

    Values can be any object handled by the json stdlib module.
capsys
    Enable capturing of writes to sys.stdout/sys.stderr and make
    captured output available via ``capsys.readouterr()`` method calls
    which return a ``(out, err)`` tuple.
capfd
    Enable capturing of writes to file descriptors 1 and 2 and make
    captured output available via ``capfd.readouterr()`` method calls
    which return a ``(out, err)`` tuple.
doctest_namespace
    Inject names into the doctest namespace.
pytestconfig
    the pytest config object with access to command line opts.
record_xml_property
    Add extra xml properties to the tag for the calling test.
    The fixture is callable with ``(name, value)``, with value being automatically
    xml-encoded.
monkeypatch
    The returned ``monkeypatch`` fixture provides these
    helper methods to modify objects, dictionaries or os.environ::

    monkeypatch.setattr(obj, name, value, raising=True)
    monkeypatch.delattr(obj, name, raising=True)
    monkeypatch.setitem(mapping, name, value)
    monkeypatch.delitem(obj, name, raising=True)
    monkeypatch.setenv(name, value, prepend=False)
    monkeypatch.delenv(name, value, raising=True)
    monkeypatch.syspath_prepend(path)
    monkeypatch.chdir(path)

    All modifications will be undone after the requesting
    test function or fixture has finished. The ``raising``
    parameter determines if a KeyError or AttributeError
    will be raised if the set/deletion operation has no target.
recwarn
    Return a WarningsRecorder instance that provides these methods:

    * ``pop(category=None)``: return last warning matching the category.
    * ``clear()``: clear list of warnings

    See http://docs.python.org/library/warnings.html for information
    on warning categories.
tmpdir_factory
    Return a TempdirFactory instance for the test session.
tmpdir
    Return a temporary directory path object
    which is unique to each test function invocation,
    created as a sub directory of the base temporary
    directory.  The returned object is a `py.path.local`_
    path object.

no tests ran in 0.12 seconds

pytest fixtures: explicit, modular, scalable

New in version 2.0/2.3/2.4.

The purpose of test fixtures is to provide a fixed baseline upon which tests can reliably and repeatedly execute. pytest fixtures offer dramatic improvements over the classic xUnit style of setup/teardown functions:

  • fixtures have explicit names and are activated by declaring their use from test functions, modules, classes or whole projects.
  • fixtures are implemented in a modular manner, as each fixture name triggers a fixture function which can itself use other fixtures.
  • fixture management scales from simple unit to complex functional testing, allowing you to parametrize fixtures and tests according to configuration and component options, or to re-use fixtures across class, module or whole test session scopes.

In addition, pytest continues to support classic xunit-style setup. You can mix both styles, moving incrementally from classic to new style, as you prefer. You can also start out from existing unittest.TestCase style or nose based projects.

Fixtures as Function arguments

Test functions can receive fixture objects by naming them as an input argument. For each argument name, a fixture function with that name provides the fixture object. Fixture functions are registered by marking them with @pytest.fixture. Let’s look at a simple self-contained test module containing a fixture and a test function using it:

# content of ./test_smtpsimple.py
import pytest

@pytest.fixture
def smtp():
    import smtplib
    return smtplib.SMTP("smtp.gmail.com")

def test_ehlo(smtp):
    response, msg = smtp.ehlo()
    assert response == 250
    assert 0 # for demo purposes

Here, the test_ehlo needs the smtp fixture value. pytest will discover and call the @pytest.fixture marked smtp fixture function. Running the test looks like this:

$ pytest test_smtpsimple.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items

test_smtpsimple.py F

======= FAILURES ========
_______ test_ehlo ________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
>       assert 0 # for demo purposes
E       assert 0

test_smtpsimple.py:11: AssertionError
======= 1 failed in 0.12 seconds ========

In the failure traceback we see that the test function was called with a smtp argument, the smtplib.SMTP() instance created by the fixture function. The test function fails on our deliberate assert 0. Here is the exact protocol used by pytest to call the test function this way:

  1. pytest finds the test_ehlo because of the test_ prefix. The test function needs a function argument named smtp. A matching fixture function is discovered by looking for a fixture-marked function named smtp.
  2. smtp() is called to create an instance.
  3. test_ehlo(<SMTP instance>) is called and fails in the last line of the test function.

Note that if you misspell a function argument or want to use one that isn’t available, you’ll see an error with a list of available function arguments.

Note

You can always issue:

pytest --fixtures test_simplefactory.py

to see available fixtures.

In versions prior to 2.3 there was no @pytest.fixture marker and you had to use a magic pytest_funcarg__NAME prefix for the fixture factory. This remains and will remain supported but is no longer advertised as the primary means of declaring fixture functions.

“Funcargs”: a prime example of dependency injection

When injecting fixtures to test functions, pytest-2.0 introduced the term “funcargs” or “funcarg mechanism”, which continues to appear in the docs today. It now refers to the specific case of injecting fixture values as arguments to test functions. With pytest-2.3 there are more possibilities to use fixtures, but “funcargs” remain the main way, as they allow test functions to directly state their dependencies.

As the following examples show in more detail, funcargs allow test functions to easily receive and work against specific pre-initialized application objects without having to care about import/setup/cleanup details. It’s a prime example of dependency injection where fixture functions take the role of the injector and test functions are the consumers of fixture objects.

Sharing a fixture across tests in a module (or class/session)

Fixtures requiring network access depend on connectivity and are usually time-expensive to create. Extending the previous example, we can add a scope='module' parameter to the @pytest.fixture invocation to cause the decorated smtp fixture function to only be invoked once per test module. Multiple test functions in a test module will thus each receive the same smtp fixture instance. The next example puts the fixture function into a separate conftest.py file so that tests from multiple test modules in the directory can access the fixture function:

# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp():
    return smtplib.SMTP("smtp.gmail.com")

The name of the fixture again is smtp and you can access its result by listing the name smtp as an input parameter in any test or fixture function (in or below the directory where conftest.py is located):

# content of test_module.py

def test_ehlo(smtp):
    response, msg = smtp.ehlo()
    assert response == 250
    assert b"smtp.gmail.com" in msg
    assert 0  # for demo purposes

def test_noop(smtp):
    response, msg = smtp.noop()
    assert response == 250
    assert 0  # for demo purposes

We deliberately insert failing assert 0 statements in order to inspect what is going on, and can now run the tests:

$ pytest test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py FF

======= FAILURES ========
_______ test_ehlo ________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert b"smtp.gmail.com" in msg
>       assert 0  # for demo purposes
E       assert 0

test_module.py:6: AssertionError
_______ test_noop ________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_noop(smtp):
        response, msg = smtp.noop()
        assert response == 250
>       assert 0  # for demo purposes
E       assert 0

test_module.py:11: AssertionError
======= 2 failed in 0.12 seconds ========

You see the two assert 0 failures and, more importantly, you can also see that the same (module-scoped) smtp object was passed into the two test functions because pytest shows the incoming argument values in the traceback. As a result, the two test functions using smtp run as quickly as a single one because they reuse the same instance.

If you decide that you rather want to have a session-scoped smtp instance, you can simply declare it:

@pytest.fixture(scope="session")
def smtp(...):
    # the returned fixture value will be shared for
    # all tests needing it

Fixture finalization / executing teardown code

pytest supports execution of fixture specific finalization code when the fixture goes out of scope. By using a yield statement instead of return, all the code after the yield statement serves as the teardown code:

# content of conftest.py

import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    smtp = smtplib.SMTP("smtp.gmail.com")
    yield smtp  # provide the fixture value
    print("teardown smtp")
    smtp.close()

The print and smtp.close() statements will execute when the last test in the module has finished execution, regardless of the exception status of the tests.

Let’s execute it:

$ pytest -s -q --tb=no
FFteardown smtp

2 failed in 0.12 seconds

We see that the smtp instance is finalized after the two tests finished execution. Note that if we decorated our fixture function with scope='function' then fixture setup and cleanup would occur around each single test. In either case the test module itself does not need to change or know about these details of fixture setup.

Note that we can also seamlessly use the yield syntax with with statements:

# content of test_yield2.py

import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    with smtplib.SMTP("smtp.gmail.com") as smtp:
        yield smtp  # provide the fixture value

The smtp connection will be closed after the test finished execution because the smtp object automatically closes when the with statement ends.

Note

Prior to version 2.10, in order to use a yield statement to execute teardown code one had to mark a fixture using the yield_fixture marker. From 2.10 onward, normal fixtures can use yield directly so the yield_fixture decorator is no longer needed and considered deprecated.

Note

As a historical note, another way to write teardown code is to accept a request object in your fixture function and call its request.addfinalizer method one or more times:

# content of conftest.py

import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    smtp = smtplib.SMTP("smtp.gmail.com")
    def fin():
        print ("teardown smtp")
        smtp.close()
    request.addfinalizer(fin)
    return smtp  # provide the fixture value

The fin function will execute when the last test in the module has finished execution.

This method is still fully supported, but yield is recommended from 2.10 onward because it is considered simpler and better describes the natural code flow.

Fixtures can introspect the requesting test context

Fixture functions can accept the request object to introspect the “requesting” test function, class or module context. Further extending the previous smtp fixture example, let’s read an optional server URL from the test module which uses our fixture:

# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp(request):
    server = getattr(request.module, "smtpserver", "smtp.gmail.com")
    smtp = smtplib.SMTP(server)
    yield smtp
    print ("finalizing %s (%s)" % (smtp, server))
    smtp.close()

We use the request.module attribute to optionally obtain an smtpserver attribute from the test module. If we just execute again, nothing much has changed:

$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)

2 failed in 0.12 seconds

Let’s quickly create another test module that actually sets the server URL in its module namespace:

# content of test_anothersmtp.py

smtpserver = "mail.python.org"  # will be read by smtp fixture

def test_showhelo(smtp):
    assert 0, smtp.helo()

Running it:

$ pytest -qq --tb=short test_anothersmtp.py
F
======= FAILURES ========
_______ test_showhelo ________
test_anothersmtp.py:5: in test_showhelo
    assert 0, smtp.helo()
E   AssertionError: (250, b'mail.python.org')
E   assert 0
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef> (mail.python.org)

Voila! The smtp fixture function picked up our mail server name from the module namespace.

Parametrizing fixtures

Fixture functions can be parametrized, in which case they will be called multiple times, each time executing the set of dependent tests, i.e. the tests that depend on this fixture. Test functions usually do not need to be aware of their re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can be configured in multiple ways.

Extending the previous example, we can flag the fixture to create two smtp fixture instances which will cause all tests using the fixture to run twice. The fixture function gets access to each parameter through the special request object:

# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module",
                params=["smtp.gmail.com", "mail.python.org"])
def smtp(request):
    smtp = smtplib.SMTP(request.param)
    yield smtp
    print ("finalizing %s" % smtp)
    smtp.close()

The main change is the declaration of params with @pytest.fixture, a list of values for each of which the fixture function will execute and can access a value via request.param. No test function code needs to change. So let’s just do another run:

$ pytest -q test_module.py
FFFF
======= FAILURES ========
_______ test_ehlo[smtp.gmail.com] ________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert b"smtp.gmail.com" in msg
>       assert 0  # for demo purposes
E       assert 0

test_module.py:6: AssertionError
_______ test_noop[smtp.gmail.com] ________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_noop(smtp):
        response, msg = smtp.noop()
        assert response == 250
>       assert 0  # for demo purposes
E       assert 0

test_module.py:11: AssertionError
_______ test_ehlo[mail.python.org] ________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
>       assert b"smtp.gmail.com" in msg
E       AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nSIZE 51200000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'

test_module.py:5: AssertionError
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
_______ test_noop[mail.python.org] ________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_noop(smtp):
        response, msg = smtp.noop()
        assert response == 250
>       assert 0  # for demo purposes
E       assert 0

test_module.py:11: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
4 failed in 0.12 seconds

We see that our two test functions each ran twice, against the different smtp instances. Note also, that with the mail.python.org connection the second test fails in test_ehlo because a different server string is expected than what arrived.

pytest will build a string that is the test ID for each fixture value in a parametrized fixture, e.g. test_ehlo[smtp.gmail.com] and test_ehlo[mail.python.org] in the above examples. These IDs can be used with -k to select specific cases to run, and they will also identify the specific case when one is failing. Running pytest with --collect-only will show the generated IDs.
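
For example, assuming the parametrized smtp fixture from above:

pytest --collect-only -q test_module.py  # list the generated test IDs
pytest -k gmail test_module.py           # run only the cases whose ID mentions "gmail"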

Numbers, strings, booleans and None will have their usual string representation used in the test ID. For other objects, pytest will make a string based on the argument name. It is possible to customise the string used in a test ID for a certain fixture value by using the ids keyword argument:

# content of test_ids.py
import pytest

@pytest.fixture(params=[0, 1], ids=["spam", "ham"])
def a(request):
    return request.param

def test_a(a):
    pass

def idfn(fixture_value):
    if fixture_value == 0:
        return "eggs"
    else:
        return None

@pytest.fixture(params=[0, 1], ids=idfn)
def b(request):
    return request.param

def test_b(b):
    pass

The above shows how ids can be either a list of strings to use or a function which will be called with the fixture value and then has to return a string to use. In the latter case, if the function returns None, pytest’s auto-generated ID will be used.

Running the above tests results in the following test IDs being used:

$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 10 items
<Module 'test_anothersmtp.py'>
  <Function 'test_showhelo[smtp.gmail.com]'>
  <Function 'test_showhelo[mail.python.org]'>
<Module 'test_ids.py'>
  <Function 'test_a[spam]'>
  <Function 'test_a[ham]'>
  <Function 'test_b[eggs]'>
  <Function 'test_b[1]'>
<Module 'test_module.py'>
  <Function 'test_ehlo[smtp.gmail.com]'>
  <Function 'test_noop[smtp.gmail.com]'>
  <Function 'test_ehlo[mail.python.org]'>
  <Function 'test_noop[mail.python.org]'>

======= no tests ran in 0.12 seconds ========

Modularity: using fixtures from a fixture function

You can not only use fixtures in test functions but fixture functions can use other fixtures themselves. This contributes to a modular design of your fixtures and allows re-use of framework-specific fixtures across many projects. As a simple example, we can extend the previous example and instantiate an object app where we stick the already defined smtp resource into it:

# content of test_appsetup.py

import pytest

class App:
    def __init__(self, smtp):
        self.smtp = smtp

@pytest.fixture(scope="module")
def app(smtp):
    return App(smtp)

def test_smtp_exists(app):
    assert app.smtp

Here we declare an app fixture which receives the previously defined smtp fixture and instantiates an App object with it. Let’s run it:

$ pytest -v test_appsetup.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items

test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED

======= 2 passed in 0.12 seconds ========

Due to the parametrization of smtp the test will run twice with two different App instances and respective smtp servers. There is no need for the app fixture to be aware of the smtp parametrization as pytest will fully analyse the fixture dependency graph.

Note that the app fixture has a module scope and uses the module-scoped smtp fixture. The example would still work if smtp was cached on a session scope: it is fine for fixtures to use “broader” scoped fixtures but not the other way round: a session-scoped fixture could not use a module-scoped one in a meaningful way.
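
To illustrate the allowed direction, here is a minimal sketch (with hypothetical names, not part of the example above) of a module-scoped fixture depending on a session-scoped one:

import pytest

@pytest.fixture(scope="session")
def config():
    # created once for the whole test session
    return {"host": "example.com"}

@pytest.fixture(scope="module")
def client(config):
    # created once per module; may depend on the broader-scoped config
    return (config["host"], "client")

The reverse dependency, a session-scoped fixture requesting a module-scoped one, is rejected by pytest with a ScopeMismatch error.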

Automatic grouping of tests by fixture instances

pytest minimizes the number of active fixtures during test runs. If you have a parametrized fixture, then all the tests using it will first execute with one instance and then finalizers are called before the next fixture instance is created. Among other things, this eases testing of applications which create and use global state.

The following example uses two parametrized fixtures, one of which is scoped on a per-module basis, and all the functions perform print calls to show the setup/teardown flow:

# content of test_module.py
import pytest

@pytest.fixture(scope="module", params=["mod1", "mod2"])
def modarg(request):
    param = request.param
    print ("  SETUP modarg %s" % param)
    yield param
    print ("  TEARDOWN modarg %s" % param)

@pytest.fixture(scope="function", params=[1,2])
def otherarg(request):
    param = request.param
    print ("  SETUP otherarg %s" % param)
    yield param
    print ("  TEARDOWN otherarg %s" % param)

def test_0(otherarg):
    print ("  RUN test0 with otherarg %s" % otherarg)
def test_1(modarg):
    print ("  RUN test1 with modarg %s" % modarg)
def test_2(otherarg, modarg):
    print ("  RUN test2 with otherarg %s and modarg %s" % (otherarg, modarg))

Let’s run the tests in verbose mode while looking at the print output:

$ pytest -v -s test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items

test_module.py::test_0[1]   SETUP otherarg 1
  RUN test0 with otherarg 1
PASSED  TEARDOWN otherarg 1

test_module.py::test_0[2]   SETUP otherarg 2
  RUN test0 with otherarg 2
PASSED  TEARDOWN otherarg 2

test_module.py::test_1[mod1]   SETUP modarg mod1
  RUN test1 with modarg mod1
PASSED
test_module.py::test_2[1-mod1]   SETUP otherarg 1
  RUN test2 with otherarg 1 and modarg mod1
PASSED  TEARDOWN otherarg 1

test_module.py::test_2[2-mod1]   SETUP otherarg 2
  RUN test2 with otherarg 2 and modarg mod1
PASSED  TEARDOWN otherarg 2

test_module.py::test_1[mod2]   TEARDOWN modarg mod1
  SETUP modarg mod2
  RUN test1 with modarg mod2
PASSED
test_module.py::test_2[1-mod2]   SETUP otherarg 1
  RUN test2 with otherarg 1 and modarg mod2
PASSED  TEARDOWN otherarg 1

test_module.py::test_2[2-mod2]   SETUP otherarg 2
  RUN test2 with otherarg 2 and modarg mod2
PASSED  TEARDOWN otherarg 2
  TEARDOWN modarg mod2


======= 8 passed in 0.12 seconds ========

You can see that the parametrized module-scoped modarg resource caused an ordering of test execution that led to the fewest possible “active” resources. The finalizer for the mod1 parametrized resource was executed before the mod2 resource was set up.

In particular notice that test_0 is completely independent and finishes first. Then test_1 is executed with mod1, then test_2 with mod1, then test_1 with mod2 and finally test_2 with mod2.

The otherarg parametrized resource (having function scope) was set up before and torn down after every test that used it.

Using fixtures from classes, modules or projects

Sometimes test functions do not directly need access to a fixture object. For example, tests may need to operate with an empty directory as the current working directory but otherwise do not care about the concrete directory. Here is how you can use the standard tempfile module and pytest fixtures to achieve it. We separate the creation of the fixture into a conftest.py file:

# content of conftest.py

import pytest
import tempfile
import os

@pytest.fixture()
def cleandir():
    newpath = tempfile.mkdtemp()
    os.chdir(newpath)

and declare its use in a test module via a usefixtures marker:

# content of test_setenv.py
import os
import pytest

@pytest.mark.usefixtures("cleandir")
class TestDirectoryInit:
    def test_cwd_starts_empty(self):
        assert os.listdir(os.getcwd()) == []
        with open("myfile", "w") as f:
            f.write("hello")

    def test_cwd_again_starts_empty(self):
        assert os.listdir(os.getcwd()) == []

Due to the usefixtures marker, the cleandir fixture will be required for the execution of each test method, just as if you specified a “cleandir” function argument to each of them. Let’s run it to verify our fixture is activated and the tests pass:

$ pytest -q
..
2 passed in 0.12 seconds

You can specify multiple fixtures like this:

@pytest.mark.usefixtures("cleandir", "anotherfixture")

and you may specify fixture usage at the test module level, using a generic feature of the mark mechanism:

pytestmark = pytest.mark.usefixtures("cleandir")

Note that the assigned variable must be called pytestmark, assigning e.g. foomark will not activate the fixtures.

Lastly you can put fixtures required by all tests in your project into an ini-file:

# content of pytest.ini
[pytest]
usefixtures = cleandir

Autouse fixtures (xUnit setup on steroids)

Occasionally, you may want to have fixtures get invoked automatically without a usefixtures or funcargs reference. As a practical example, suppose we have a database fixture which has a begin/rollback/commit architecture and we want to automatically surround each test method by a transaction and a rollback. Here is a dummy self-contained implementation of this idea:

# content of test_db_transact.py

import pytest

class DB:
    def __init__(self):
        self.intransaction = []
    def begin(self, name):
        self.intransaction.append(name)
    def rollback(self):
        self.intransaction.pop()

@pytest.fixture(scope="module")
def db():
    return DB()

class TestClass:
    @pytest.fixture(autouse=True)
    def transact(self, request, db):
        db.begin(request.function.__name__)
        yield
        db.rollback()

    def test_method1(self, db):
        assert db.intransaction == ["test_method1"]

    def test_method2(self, db):
        assert db.intransaction == ["test_method2"]

The class-level transact fixture is marked with autouse=True which implies that all test methods in the class will use this fixture without a need to state it in the test function signature or with a class-level usefixtures decorator.

If we run it, we get two passing tests:

$ pytest -q
..
2 passed in 0.12 seconds

Here is how autouse fixtures work in other scopes:

  • autouse fixtures obey the scope= keyword-argument: if an autouse fixture has scope='session' it will only be run once, no matter where it is defined. scope='class' means it will be run once per class, etc. (See the sketch after this list for a session-scoped example.)
  • if an autouse fixture is defined in a test module, all its test functions automatically use it.
  • if an autouse fixture is defined in a conftest.py file then all tests in all test modules below its directory will invoke the fixture.
  • lastly, and please use this with care: if you define an autouse fixture in a plugin, it will be invoked for all tests in all projects where the plugin is installed. This can be useful if a fixture only works in the presence of certain settings, e.g. in the ini-file. Such a global fixture should always quickly determine if it should do any work and avoid otherwise expensive imports or computation.
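
As a minimal sketch of the first point (with hypothetical names), a session-scoped autouse fixture defined in a conftest.py runs its setup once before the first test and its teardown once after the last:

# content of conftest.py
import pytest

@pytest.fixture(autouse=True, scope="session")
def session_banner():
    print("session setup")      # runs once per test session
    yield
    print("session teardown")   # runs once after the last test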

Note that the above transact fixture may very well be a fixture that you want to make available in your project without having it generally active. The canonical way to do that is to put the transact definition into a conftest.py file without using autouse:

# content of conftest.py
import pytest

@pytest.fixture
def transact(request, db):
    db.begin(request.function.__name__)
    yield
    db.rollback()

and then e.g. have a TestClass using it by declaring the need:

@pytest.mark.usefixtures("transact")
class TestClass:
    def test_method1(self):
        ...

All test methods in this TestClass will use the transaction fixture while other test classes or functions in the module will not use it unless they also add a transact reference.

Shifting (visibility of) fixture functions

If during implementing your tests you realize that you want to use a fixture function from multiple test files you can move it to a conftest.py file or even to separately installable plugins without changing test code. The discovery of fixture functions starts at test classes, then test modules, then conftest.py files and finally builtin and third party plugins.

Overriding fixtures on various levels

In a relatively large test suite, you most likely need to override a global or root fixture with a locally defined one, keeping the test code readable and maintainable.

Override a fixture on a folder (conftest) level

Given the tests file structure is:

tests/
    __init__.py

    conftest.py
        # content of tests/conftest.py
        import pytest

        @pytest.fixture
        def username():
            return 'username'

    test_something.py
        # content of tests/test_something.py
        def test_username(username):
            assert username == 'username'

    subfolder/
        __init__.py

        conftest.py
            # content of tests/subfolder/conftest.py
            import pytest

            @pytest.fixture
            def username(username):
                return 'overridden-' + username

        test_something.py
            # content of tests/subfolder/test_something.py
            def test_username(username):
                assert username == 'overridden-username'

As you can see, a fixture with the same name can be overridden for a certain test folder level. Note that the base or super fixture can be accessed from the overriding fixture easily, as done in the example above.

Override a fixture on a test module level

Given the tests file structure is:

tests/
    __init__.py

    conftest.py
        # content of tests/conftest.py
        @pytest.fixture
        def username():
            return 'username'

    test_something.py
        # content of tests/test_something.py
        import pytest

        @pytest.fixture
        def username(username):
            return 'overridden-' + username

        def test_username(username):
            assert username == 'overridden-username'

    test_something_else.py
        # content of tests/test_something_else.py
        import pytest

        @pytest.fixture
        def username(username):
            return 'overridden-else-' + username

        def test_username(username):
            assert username == 'overridden-else-username'

In the example above, a fixture with the same name is overridden for a certain test module.

Override a fixture with direct test parametrization

Given the tests file structure is:

tests/
    __init__.py

    conftest.py
        # content of tests/conftest.py
        import pytest

        @pytest.fixture
        def username():
            return 'username'

        @pytest.fixture
        def other_username(username):
            return 'other-' + username

    test_something.py
        # content of tests/test_something.py
        import pytest

        @pytest.mark.parametrize('username', ['directly-overridden-username'])
        def test_username(username):
            assert username == 'directly-overridden-username'

        @pytest.mark.parametrize('username', ['directly-overridden-username-other'])
        def test_username_other(other_username):
            assert other_username == 'other-directly-overridden-username-other'

In the example above, a fixture value is overridden by the test parameter value. Note that the value of the fixture can be overridden this way even if the test doesn’t use it directly (doesn’t mention it in its signature).

Override a parametrized fixture with non-parametrized one and vice versa

Given the tests file structure is:

tests/
    __init__.py

    conftest.py
        # content of tests/conftest.py
        import pytest

        @pytest.fixture(params=['one', 'two', 'three'])
        def parametrized_username(request):
            return request.param

        @pytest.fixture
        def non_parametrized_username(request):
            return 'username'

    test_something.py
        # content of tests/test_something.py
        import pytest

        @pytest.fixture
        def parametrized_username():
            return 'overridden-username'

        @pytest.fixture(params=['one', 'two', 'three'])
        def non_parametrized_username(request):
            return request.param

        def test_username(parametrized_username):
            assert parametrized_username == 'overridden-username'

        def test_parametrized_username(non_parametrized_username):
            assert non_parametrized_username in ['one', 'two', 'three']

    test_something_else.py
        # content of tests/test_something_else.py
        def test_username(parametrized_username):
            assert parametrized_username in ['one', 'two', 'three']

        def test_username_other(non_parametrized_username):
            assert non_parametrized_username == 'username'

In the example above, a parametrized fixture is overridden with a non-parametrized version, and a non-parametrized fixture is overridden with a parametrized version for a certain test module. The same applies at the test folder level.

Monkeypatching/mocking modules and environments

Sometimes tests need to invoke functionality which depends on global settings or which invokes code which cannot be easily tested such as network access. The monkeypatch fixture helps you to safely set/delete an attribute, dictionary item or environment variable or to modify sys.path for importing. See the monkeypatch blog post for some introduction material and a discussion of its motivation.

Simple example: monkeypatching functions

If you want to pretend that os.path.expanduser returns a certain directory, you can use the monkeypatch.setattr() method to patch this function before calling into a function which uses it:

# content of test_module.py
import os.path
def getssh(): # pseudo application code
    return os.path.join(os.path.expanduser("~admin"), '.ssh')

def test_mytest(monkeypatch):
    def mockreturn(path):
        return '/abc'
    monkeypatch.setattr(os.path, 'expanduser', mockreturn)
    x = getssh()
    assert x == '/abc/.ssh'

Here our test function monkeypatches os.path.expanduser and then calls into a function that calls it. After the test function finishes the os.path.expanduser modification will be undone.

example: preventing “requests” from remote operations

If you want to prevent the “requests” library from performing http requests in all your tests, you can do:

# content of conftest.py
import pytest
@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
    monkeypatch.delattr("requests.sessions.Session.request")

This autouse fixture will be executed for each test function and it will delete the method requests.sessions.Session.request so that any attempts within tests to create http requests will fail.
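
For illustration, a hypothetical test demonstrating the effect (assuming the requests library is installed):

import pytest
import requests

def test_network_is_blocked():
    # Session.request was deleted by the autouse fixture above,
    # so any high-level call that goes through it fails
    with pytest.raises(AttributeError):
        requests.get("http://example.com")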

Note

Be advised that it is not recommended to patch builtin functions such as open, compile, etc., because it might break pytest’s internals. If that’s unavoidable, passing --tb=native, --assert=plain and --capture=no might help although there’s no guarantee.

Method reference of the monkeypatch fixture

class MonkeyPatch[source]

Object returned by the monkeypatch fixture keeping a record of setattr/item/env/syspath changes.

setattr(target, name, value=<notset>, raising=True)[source]

Set attribute value on target, memorizing the old value. By default raise AttributeError if the attribute did not exist.

For convenience you can specify a string as target which will be interpreted as a dotted import path, with the last part being the attribute name. Example: monkeypatch.setattr("os.getcwd", lambda: "/") would set the getcwd function of the os module.

The raising value determines if the setattr should fail if the attribute is not already present (defaults to True which means it will raise).

delattr(target, name=<notset>, raising=True)[source]

Delete attribute name from target, by default raising AttributeError if the attribute did not previously exist.

If no name is specified and target is a string it will be interpreted as a dotted import path with the last part being the attribute name.

If raising is set to False, no exception will be raised if the attribute is missing.

setitem(dic, name, value)[source]

Set dictionary entry name to value.

delitem(dic, name, raising=True)[source]

Delete name from dict. Raise KeyError if it doesn’t exist.

If raising is set to False, no exception will be raised if the key is missing.

setenv(name, value, prepend=None)[source]

Set environment variable name to value. If prepend is a character, read the current environment variable value and prepend the new value joined with the prepend character.

delenv(name, raising=True)[source]

Delete name from the environment. Raise KeyError if it does not exist.

If raising is set to False, no exception will be raised if the environment variable is missing.

syspath_prepend(path)[source]

Prepend path to sys.path list of import locations.

chdir(path)[source]

Change the current working directory to the specified path. Path can be a string or a py.path.local object.

undo()[source]

Undo previous changes. This call consumes the undo stack. Calling it a second time has no effect unless you do more monkeypatching after the undo call.

There is generally no need to call undo(), since it is called automatically during tear-down.

Note that the same monkeypatch fixture is used across a single test function invocation. If monkeypatch is used both by the test function itself and one of the test fixtures, calling undo() will undo all of the changes made in both functions.

monkeypatch.setattr/delattr/delitem/delenv() all by default raise an Exception if the target does not exist. Pass raising=False if you want to skip this check.
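
To see a few of these methods together, here is a minimal sketch (with hypothetical names and values):

import os

def test_patched_environment(monkeypatch):
    monkeypatch.setenv("APP_MODE", "testing")       # set an environment variable
    assert os.environ["APP_MODE"] == "testing"

    settings = {"debug": False}
    monkeypatch.setitem(settings, "debug", True)    # patch a dictionary entry
    assert settings["debug"] is True

    monkeypatch.syspath_prepend("/opt/mylib")       # make a path importable first

All changes are undone automatically when the test tears down.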

Temporary directories and files

The ‘tmpdir’ fixture

You can use the tmpdir fixture which will provide a temporary directory unique to the test invocation, created in the base temporary directory.

tmpdir is a py.path.local object which offers os.path methods and more. Here is an example test usage:

# content of test_tmpdir.py
import os
def test_create_file(tmpdir):
    p = tmpdir.mkdir("sub").join("hello.txt")
    p.write("content")
    assert p.read() == "content"
    assert len(tmpdir.listdir()) == 1
    assert 0

Running this would result in a passed test except for the last assert 0 line which we use to look at values:

$ pytest test_tmpdir.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items

test_tmpdir.py F

======= FAILURES ========
_______ test_create_file ________

tmpdir = local('PYTEST_TMPDIR/test_create_file0')

    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
>       assert 0
E       assert 0

test_tmpdir.py:7: AssertionError
======= 1 failed in 0.12 seconds ========

The ‘tmpdir_factory’ fixture

New in version 2.8.

The tmpdir_factory is a session-scoped fixture which can be used to create arbitrary temporary directories from any other fixture or test.

For example, suppose your test suite needs a large image on disk, which is generated procedurally. Instead of computing the same image into each test’s own tmpdir, you can generate it once per session to save time:

# contents of conftest.py
import pytest

@pytest.fixture(scope='session')
def image_file(tmpdir_factory):
    img = compute_expensive_image()
    fn = tmpdir_factory.mktemp('data').join('img.png')
    img.save(str(fn))
    return fn

# contents of test_image.py
def test_histogram(image_file):
    img = load_image(image_file)
    # compute and test histogram

tmpdir_factory instances have the following methods:

TempdirFactory.mktemp(basename, numbered=True)[source]

Create a subdirectory of the base temporary directory and return it. If numbered, ensure the directory is unique by adding a number prefix greater than any existing one.

TempdirFactory.getbasetemp()[source]

return base temporary directory.

The default base temporary directory

Temporary directories are by default created as sub-directories of the system temporary directory. The base name will be pytest-NUM where NUM is incremented with each test run. Moreover, only the 3 most recent base temporary directories are kept; older entries are removed.

You can override the default temporary directory setting like this:

pytest --basetemp=mydir

When distributing tests on the local machine, pytest takes care to configure a basetemp directory for the sub processes such that all temporary data lands below a single per-test run basetemp directory.

Capturing of the stdout/stderr output

Default stdout/stderr/stdin capturing behaviour

During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, the corresponding captured output will usually be shown along with the failure traceback.

In addition, stdin is set to a “null” object which will fail on attempts to read from it because it is rarely desired to wait for interactive input when running automated tests.

By default capturing is done by intercepting writes to low level file descriptors. This makes it possible to capture output from simple print statements as well as output from a subprocess started by a test.

Setting capturing methods or disabling capturing

There are two ways in which pytest can perform capturing:

  • file descriptor (FD) level capturing (default): All writes going to the operating system file descriptors 1 and 2 will be captured.
  • sys level capturing: Only writes to Python files sys.stdout and sys.stderr will be captured. No capturing of writes to filedescriptors is performed.

You can influence output capturing mechanisms from the command line:

pytest -s            # disable all capturing
pytest --capture=sys # replace sys.stdout/stderr with in-mem files
pytest --capture=fd  # also point filedescriptors 1 and 2 to temp file

Using print statements for debugging

One primary benefit of the default capturing of stdout/stderr output is that you can use print statements for debugging:

# content of test_module.py

def setup_function(function):
    print ("setting up %s" % function)

def test_func1():
    assert True

def test_func2():
    assert False

and running this module will show you precisely the output of the failing function and hide the other one:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .F

======= FAILURES ========
_______ test_func2 ________

    def test_func2():
>       assert False
E       assert False

test_module.py:9: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
======= 1 failed, 1 passed in 0.12 seconds ========

Accessing captured output from a test function

The capsys and capfd fixtures allow access to the stdout/stderr output created during test execution. Here is an example test function that performs some output related checks:

import sys

def test_myoutput(capsys): # or use "capfd" for fd-level
    print("hello")
    sys.stderr.write("world\n")
    out, err = capsys.readouterr()
    assert out == "hello\n"
    assert err == "world\n"
    print("next")
    out, err = capsys.readouterr()
    assert out == "next\n"

The readouterr() call snapshots the output so far, and capturing continues. After the test function finishes, the original streams will be restored. Using capsys this way frees your test from having to care about setting/resetting output streams and also interacts well with pytest’s own per-test capturing.

If you want to capture at the file descriptor level you can use the capfd fixture which offers the exact same interface but also captures output from libraries or subprocesses that write directly to operating system level output streams (FD1 and FD2).
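
For example, a minimal sketch (assuming a POSIX echo binary is available) where a subprocess writes directly to file descriptor 1:

import subprocess

def test_subprocess_output(capfd):
    subprocess.call(["echo", "hello"])  # writes to FD 1, bypassing sys.stdout
    out, err = capfd.readouterr()
    assert out == "hello\n"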

New in version 3.0.

To temporarily disable capture within a test, both capsys and capfd have a disabled() method that can be used as a context manager, disabling capture inside the with block:

def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')

Asserting Warnings

Asserting warnings with the warns function

New in version 2.8.

You can check that code raises a particular warning using pytest.warns, which works in a similar manner to raises:

import warnings
import pytest

def test_warning():
    with pytest.warns(UserWarning):
        warnings.warn("my warning", UserWarning)

The test will fail if the warning in question is not raised.

You can also call pytest.warns on a function or code string:

pytest.warns(expected_warning, func, *args, **kwargs)
pytest.warns(expected_warning, "func(*args, **kwargs)")

The function also returns a list of all raised warnings (as warnings.WarningMessage objects), which you can query for additional information:

with pytest.warns(RuntimeWarning) as record:
    warnings.warn("another warning", RuntimeWarning)

# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert record[0].message.args[0] == "another warning"

Alternatively, you can examine raised warnings in detail using the recwarn fixture (see below).

Note

DeprecationWarning and PendingDeprecationWarning are treated differently; see Ensuring a function triggers a deprecation warning.

Recording warnings

You can record raised warnings either using pytest.warns or with the recwarn fixture.

To record with pytest.warns without asserting anything about the warnings, pass None as the expected warning type:

with pytest.warns(None) as record:
    warnings.warn("user", UserWarning)
    warnings.warn("runtime", RuntimeWarning)

assert len(record) == 2
assert str(record[0].message) == "user"
assert str(record[1].message) == "runtime"

The recwarn fixture will record warnings for the whole function:

import warnings

def test_hello(recwarn):
    warnings.warn("hello", UserWarning)
    assert len(recwarn) == 1
    w = recwarn.pop(UserWarning)
    assert issubclass(w.category, UserWarning)
    assert str(w.message) == "hello"
    assert w.filename
    assert w.lineno

Both recwarn and pytest.warns return the same interface for recorded warnings: a WarningsRecorder instance. To view the recorded warnings, you can iterate over this instance, call len on it to get the number of recorded warnings, or index into it to get a particular recorded warning. It also provides these methods:

class WarningsRecorder[source]

A context manager to record raised warnings.

Adapted from warnings.catch_warnings.

list

The list of recorded warnings.

pop(cls=<type 'exceptions.Warning'>)[source]

Pop the first recorded warning, raising an exception if no matching warning exists.

clear()[source]

Clear the list of recorded warnings.

Each recorded warning has the attributes message, category, filename, lineno, file, and line. The category is the class of the warning. The message is the warning itself; calling str(message) will return the actual message of the warning.

Note

DeprecationWarning and PendingDeprecationWarning are treated differently; see Ensuring a function triggers a deprecation warning.

Ensuring a function triggers a deprecation warning

You can also call a global helper for checking that a certain function call triggers a DeprecationWarning or PendingDeprecationWarning:

import pytest

def test_global():
    pytest.deprecated_call(myfunction, 17)

By default, DeprecationWarning and PendingDeprecationWarning will not be caught when using pytest.warns or recwarn because default Python warnings filters hide them. If you wish to record them in your own code, call warnings.simplefilter('always'):

import warnings
import pytest

def test_deprecation(recwarn):
    warnings.simplefilter('always')
    warnings.warn("deprecated", DeprecationWarning)
    assert len(recwarn) == 1
    assert recwarn.pop(DeprecationWarning)

You can also use it as a contextmanager:

def test_global():
    with pytest.deprecated_call():
        myobject.deprecated_method()

Doctest integration for modules and test files

By default all files matching the test*.txt pattern will be run through the python standard doctest module. You can change the pattern by issuing:

pytest --doctest-glob='*.rst'

on the command line. Since version 2.9, --doctest-glob can be given multiple times on the command line.
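
For example, to collect doctests from both reST and plain text files (hypothetical patterns):

pytest --doctest-glob='*.rst' --doctest-glob='*.txt'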

You can also trigger running of doctests from docstrings in all python modules (including regular python test modules):

pytest --doctest-modules

You can make these changes permanent in your project by putting them into a pytest.ini file like this:

# content of pytest.ini
[pytest]
addopts = --doctest-modules

If you then have a text file like this:

# content of example.rst

hello this is a doctest
>>> x = 3
>>> x
3

and another like this:

# content of mymodule.py
def something():
    """ a doctest in a docstring
    >>> something()
    42
    """
    return 42

then you can just invoke pytest without command line options:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 items

mymodule.py .

======= 1 passed in 0.12 seconds ========

It is possible to use fixtures using the getfixture helper:

# content of example.rst
>>> tmp = getfixture('tmpdir')
>>> ...
>>>

Also, the approaches from Using fixtures from classes, modules or projects and Autouse fixtures (xUnit setup on steroids) are supported when executing text doctest files.

The standard doctest module provides some setting flags to configure the strictness of doctest tests. In pytest you can enable those flags using the configuration file. To make pytest ignore trailing whitespace and lengthy exception stack traces you can just write:

[pytest]
doctest_optionflags= NORMALIZE_WHITESPACE IGNORE_EXCEPTION_DETAIL

pytest also introduces new options to allow doctests to run in Python 2 and Python 3 unchanged:

  • ALLOW_UNICODE: when enabled, the u prefix is stripped from unicode strings in expected doctest output.
  • ALLOW_BYTES: when enabled, the b prefix is stripped from byte strings in expected doctest output.

As with any other option flag, these flags can be enabled in pytest.ini using the doctest_optionflags ini option:

[pytest]
doctest_optionflags = ALLOW_UNICODE ALLOW_BYTES

Alternatively, options can be enabled by an inline comment in the doctest itself:

# content of example.rst
>>> get_unicode_greeting()  # doctest: +ALLOW_UNICODE
'Hello'

The ‘doctest_namespace’ fixture

New in version 3.0.

The doctest_namespace fixture can be used to inject items into the namespace in which your doctests run. It is intended to be used within your own fixtures to provide the tests that use them with context.

doctest_namespace is a standard dict object into which you place the objects you want to appear in the doctest namespace:

# content of conftest.py
import pytest
import numpy

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace['np'] = numpy

which can then be used in your doctests directly:

# content of numpy.py
def arange():
    """
    >>> a = np.arange(10)
    >>> len(a)
    10
    """
    pass

Output format

New in version 3.0.

You can change the diff output format on failure for your doctests by using one of the standard doctest module formats with the --doctest-report option (see doctest.REPORT_UDIFF, doctest.REPORT_CDIFF, doctest.REPORT_NDIFF, doctest.REPORT_ONLY_FIRST_FAILURE):

pytest --doctest-modules --doctest-report none
pytest --doctest-modules --doctest-report udiff
pytest --doctest-modules --doctest-report cdiff
pytest --doctest-modules --doctest-report ndiff
pytest --doctest-modules --doctest-report only_first_failure

Marking test functions with attributes

By using the pytest.mark helper you can easily set metadata on your test functions. There are some builtin markers, for example:

  • skipif - skip a test function if a certain condition is met
  • xfail - produce an “expected failure” outcome if a certain condition is met
  • parametrize - perform multiple calls to the same test function.

It’s easy to create custom markers or to apply markers to whole test classes or modules. See Working with custom markers for examples which also serve as documentation.

Note

Marks can only be applied to tests, having no effect on fixtures.

Skip and xfail: dealing with tests that cannot succeed

If you have test functions that cannot be run on certain platforms or that you expect to fail you can mark them accordingly or you may call helper functions during execution of setup or test functions.

A skip means that you expect your test to pass unless the environment (e.g. wrong Python interpreter, missing dependency) prevents it from running. And xfail means that your test can run but you expect it to fail because there is an implementation problem.

pytest counts and lists skip and xfail tests separately. Detailed information about skipped/xfailed tests is not shown by default to avoid cluttering the output. You can use the -r option to see details corresponding to the “short” letters shown in the test progress:

pytest -rxs  # show extra info on skips and xfails

(See How to change command line options defaults)

Marking a test function to be skipped

New in version 2.9.

The simplest way to skip a test function is to mark it with the skip decorator which may be passed an optional reason:

@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...

skipif

New in version 2.0, 2.4.

If you wish to skip something conditionally then you can use skipif instead. Here is an example of marking a test function to be skipped when run on a Python3.3 interpreter:

import sys
@pytest.mark.skipif(sys.version_info < (3,3),
                    reason="requires python3.3")
def test_function():
    ...

During test function setup the condition (“sys.version_info < (3,3)”) is checked. If it evaluates to True, the test function will be skipped with the specified reason. Note that pytest enforces specifying a reason in order to report meaningful “skip reasons” (e.g. when using -rs). If the condition is a string, it will be evaluated as a python expression.

You can share skipif markers between modules. Consider this test module:

# content of test_mymodule.py

import pytest
import mymodule

minversion = pytest.mark.skipif(mymodule.__versioninfo__ < (1,1),
                                reason="at least mymodule-1.1 required")
@minversion
def test_function():
    ...

You can import it from another test module:

# test_myothermodule.py
from test_mymodule import minversion

@minversion
def test_anotherfunction():
    ...

For larger test suites it’s usually a good idea to have one file where you define the markers which you then consistently apply throughout your test suite.

Alternatively, the pre pytest-2.4 way of specifying condition strings instead of booleans will remain fully supported in future versions of pytest. It couldn’t be easily used for importing markers between test modules so it’s no longer advertised as the primary method.

Skip all test functions of a class or module

You can use the skipif decorator (and any other marker) on classes:

@pytest.mark.skipif(sys.platform == 'win32',
                    reason="does not run on windows")
class TestPosixCalls:

    def test_function(self):
        "will not be setup or run under 'win32' platform"

If the condition is true, this marker will produce a skip result for each of the test methods.

If you want to skip all test functions of a module, you must use the pytestmark name on the global level:

# test_module.py
pytestmark = pytest.mark.skipif(...)

If multiple “skipif” decorators are applied to a test function, it will be skipped if any of the skip conditions is true.
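
For example, a minimal sketch (with hypothetical conditions) where the test is skipped if either condition holds:

import sys
import pytest

@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
@pytest.mark.skipif(sys.version_info < (3,3), reason="requires python3.3")
def test_posix_py33_feature():
    ...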

Mark a test function as expected to fail

You can use the xfail marker to indicate that you expect a test to fail:

@pytest.mark.xfail
def test_function():
    ...

This test will be run but no traceback will be reported when it fails. Instead terminal reporting will list it in the “expected to fail” (XFAIL) or “unexpectedly passing” (XPASS) sections.

strict parameter

New in version 2.9.

Both XFAIL and XPASS don’t fail the test suite, unless the strict keyword-only parameter is passed as True:

@pytest.mark.xfail(strict=True)
def test_function():
    ...

This will make XPASS (“unexpectedly passing”) results from this test fail the test suite.

You can change the default value of the strict parameter using the xfail_strict ini option:

[pytest]
xfail_strict=true

reason parameter

As with skipif you can also mark your expectation of a failure on a particular platform:

@pytest.mark.xfail(sys.version_info >= (3,3),
                   reason="python3.3 api changes")
def test_function():
    ...

raises parameter

If you want to be more specific as to why the test is failing, you can specify a single exception, or a list of exceptions, in the raises argument.

@pytest.mark.xfail(raises=RuntimeError)
def test_function():
    ...

Then the test will be reported as a regular failure if it fails with an exception not mentioned in raises.

run parameter

If a test should be marked as xfail and reported as such but should not even be executed, set the run parameter to False:

@pytest.mark.xfail(run=False)
def test_function():
    ...

This is especially useful for marking crashing tests for later inspection.

Ignoring xfail marks

By specifying on the commandline:

pytest --runxfail

you can force the running and reporting of an xfail marked test as if it weren’t marked at all.

Examples

Here is a simple test file with several usages:

import pytest
xfail = pytest.mark.xfail

@xfail
def test_hello():
    assert 0

@xfail(run=False)
def test_hello2():
    assert 0

@xfail("hasattr(os, 'sep')")
def test_hello3():
    assert 0

@xfail(reason="bug 110")
def test_hello4():
    assert 0

@xfail('pytest.__version__[0] != "17"')
def test_hello5():
    assert 0

def test_hello6():
    pytest.xfail("reason")

@xfail(raises=IndexError)
def test_hello7():
    x = []
    x[1] = 1

Running it with the report-on-xfail option gives this output:

example $ pytest -rx xfail_demo.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items

xfail_demo.py xxxxxxx
======= short test summary info ========
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
  reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
  condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
  bug 110
XFAIL xfail_demo.py::test_hello5
  condition: pytest.__version__[0] != "17"
XFAIL xfail_demo.py::test_hello6
  reason: reason
XFAIL xfail_demo.py::test_hello7

======= 7 xfailed in 0.12 seconds ========

xfail signature summary

Here’s the signature of the xfail marker, using Python 3 keyword-only arguments syntax:

def xfail(condition=None, *, reason=None, raises=None, run=True, strict=False):

Skip/xfail with parametrize

It is possible to apply markers like skip and xfail to individual test instances when using parametrize:

import pytest

@pytest.mark.parametrize(("n", "expected"), [
    (1, 2),
    pytest.mark.xfail((1, 0)),
    pytest.mark.xfail(reason="some bug")((1, 3)),
    (2, 3),
    (3, 4),
    (4, 5),
    pytest.mark.skipif("sys.version_info >= (3,0)")((10, 11)),
])
def test_increment(n, expected):
    assert n + 1 == expected

Imperative xfail from within a test or setup function

If you cannot declare xfail or skipif conditions at import time you can also produce the corresponding outcome imperatively, in test or setup code:

def test_function():
    if not valid_config():
        pytest.xfail("failing configuration (but should work)")
        # or
        pytest.skip("unsupported configuration")

Note that calling pytest.skip at the module level is not allowed since pytest 3.0. If you are upgrading and pytest.skip was being used at the module level, you can set a pytestmark variable:

# before pytest 3.0
pytest.skip('skipping all tests because of reasons')
# after pytest 3.0
pytestmark = pytest.mark.skip('skipping all tests because of reasons')

pytestmark applies a mark or a list of marks to all tests in a module.
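
For example, a minimal sketch applying both a skip condition and a fixture requirement to a whole module (a hypothetical combination):

import sys
import pytest

pytestmark = [
    pytest.mark.skipif(sys.platform == 'win32', reason="posix only"),
    pytest.mark.usefixtures("cleandir"),
]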

Skipping on a missing import dependency

You can use the following import helper at module level or within a test or test setup function:

docutils = pytest.importorskip("docutils")

If docutils cannot be imported here, this will lead to a skip outcome of the test. You can also skip based on the version number of a library:

docutils = pytest.importorskip("docutils", minversion="0.3")

The version will be read from the specified module’s __version__ attribute.

specifying conditions as strings versus booleans

Prior to pytest-2.4 the only way to specify skipif/xfail conditions was to use strings:

import sys
@pytest.mark.skipif("sys.version_info >= (3,3)")
def test_function():
    ...

During test function setup the skipif condition is evaluated by calling eval('sys.version_info >= (3,3)', namespace). The namespace contains all the module globals, and os and sys as a minimum.

Since pytest-2.4 condition booleans are considered preferable because markers can then be freely imported between test modules. With strings you need to import not only the marker but all the variables used by the marker, which violates encapsulation.

The reason for specifying the condition as a string was that pytest can report a summary of skip conditions based purely on the condition string. With conditions as booleans you are required to specify a reason string.

Note that string conditions will remain fully supported and you are free to use them if you have no need for cross-importing markers.

The evaluation of a condition string in pytest.mark.skipif(conditionstring) or pytest.mark.xfail(conditionstring) takes place in a namespace dictionary which is constructed as follows:

  • the namespace is initialized by putting the sys and os modules and the pytest config object into it.
  • updated with the module globals of the test function for which the expression is applied.

The pytest config object allows you to skip based on a test configuration value which you might have added:

@pytest.mark.skipif("not config.getvalue('db')")
def test_function(...):
    ...

The equivalent with “boolean conditions” is:

@pytest.mark.skipif(not pytest.config.getvalue("db"),
                    reason="--db was not specified")
def test_function(...):
    pass

Note

You cannot use pytest.config.getvalue() in code imported before pytest’s argument parsing takes place. For example, conftest.py files are imported before command line parsing and thus config.getvalue() will not execute correctly.

Summary

Here’s a quick guide on how to skip tests in a module in different situations:

  1. Skip all tests in a module unconditionally:
pytestmark = pytest.mark.skip('all tests still WIP')
  2. Skip all tests in a module based on some condition:
pytestmark = pytest.mark.skipif(sys.platform == 'win32', reason='tests for linux only')
  3. Skip all tests in a module if some import is missing:
pexpect = pytest.importorskip('pexpect')

Parametrizing fixtures and test functions

pytest supports test parametrization in several well-integrated ways:

  • @pytest.mark.parametrize allows one to define parametrization at the function or class level, providing multiple argument/fixture sets for a particular test function or class.
  • pytest_generate_tests enables implementing your own custom dynamic parametrization scheme or extensions.

@pytest.mark.parametrize: parametrizing test functions

New in version 2.2.

Changed in version 2.4: Several improvements.

The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a typical example of a test function that implements checking that a certain input leads to an expected output:

# content of test_expectation.py
import pytest
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Here, the @parametrize decorator defines three different (test_input,expected) tuples so that the test_eval function will run three times using them in turn:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_expectation.py ..F

======= FAILURES ========
_______ test_eval[6*9-42] ________

test_input = '6*9', expected = 42

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')

test_expectation.py:8: AssertionError
======= 1 failed, 2 passed in 0.12 seconds ========

As designed in this example, only one pair of input/output values fails the simple test function. And as usual with test function arguments, you can see the input and output values in the traceback.

Note that you could also use the parametrize marker on a class or a module (see Marking test functions with attributes) which would invoke several functions with the argument sets.

It is also possible to mark individual test instances within parametrize, for example with the builtin mark.xfail:

# content of test_expectation.py
import pytest
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    pytest.mark.xfail(("6*9", 42)),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Let’s run this:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_expectation.py ..x

======= 2 passed, 1 xfailed in 0.12 seconds ========

The one parameter set which caused a failure previously now shows up as an “xfailed (expected to fail)” test.

To get all combinations of multiple parametrized arguments you can stack parametrize decorators:

import pytest
@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_foo(x, y):
    pass

This will run the test with the arguments set to x=0/y=2, x=0/y=3, x=1/y=2 and x=1/y=3.

Note

In versions prior to 2.4 one needed to specify the argument names as a tuple. This remains valid but the simpler "name1,name2,..." comma-separated-string syntax is now advertised first because it’s easier to write and produces less line noise.

Basic pytest_generate_tests example

Sometimes you may want to implement your own parametrization scheme or implement some dynamism for determining the parameters or scope of a fixture. For this, you can use the pytest_generate_tests hook which is called when collecting a test function. Through the passed in metafunc object you can inspect the requesting test context and, most importantly, you can call metafunc.parametrize() to cause parametrization.

For example, let’s say we want to run a test taking string inputs which we want to set via a new pytest command line option. Let’s first write a simple test accepting a stringinput fixture function argument:

# content of test_strings.py

def test_valid_string(stringinput):
    assert stringinput.isalpha()

Now we add a conftest.py file containing the addition of a command line option and the parametrization of our test function:

# content of conftest.py

def pytest_addoption(parser):
    parser.addoption("--stringinput", action="append", default=[],
        help="list of stringinputs to pass to test functions")

def pytest_generate_tests(metafunc):
    if 'stringinput' in metafunc.fixturenames:
        metafunc.parametrize("stringinput",
                             metafunc.config.option.stringinput)

If we now pass two stringinput values, our test will run twice:

$ pytest -q --stringinput="hello" --stringinput="world" test_strings.py
..
2 passed in 0.12 seconds

Let’s also run with a stringinput that will lead to a failing test:

$ pytest -q --stringinput="!" test_strings.py
F
======= FAILURES ========
_______ test_valid_string[!] ________

stringinput = '!'

    def test_valid_string(stringinput):
>       assert stringinput.isalpha()
E       AssertionError: assert False
E        +  where False = <built-in method isalpha of str object at 0xdeadbeef>()
E        +    where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha

test_strings.py:3: AssertionError
1 failed in 0.12 seconds

As expected our test function fails.

If you don’t specify a stringinput it will be skipped because metafunc.parametrize() will be called with an empty parameter list:

$ pytest -q -rs test_strings.py
s
======= short test summary info ========
SKIP [1] test_strings.py:1: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
1 skipped in 0.12 seconds

For further examples, you might want to look at more parametrization examples.

The metafunc object

class Metafunc(function, fixtureinfo, config, cls=None, module=None)[source]

Metafunc objects are passed to the pytest_generate_tests hook. They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined.

config = None

access to the _pytest.config.Config object for the test session

module = None

the module object where the test function is defined in.

function = None

underlying python test function

fixturenames = None

set of fixture names required by the test function

cls = None

class object where the test function is defined in or None.

parametrize(argnames, argvalues, indirect=False, ids=None, scope=None)[source]

Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, see about setting indirect to do it at test setup time instead.

Parameters:
  • argnames – a comma-separated string denoting one or more argument names, or a list/tuple of argument strings.
  • argvalues – The list of argvalues determines how often a test is invoked with different argument values. If only one argname was specified argvalues is a list of values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname.
  • indirect – a list of argnames, or a boolean. A list of argument names (a subset of argnames). If True the list contains all names from argnames. Each argvalue corresponding to an argname in this list will be passed as request.param to its respective fixture function so that it can perform more expensive setups during the setup phase of a test rather than at collection time (see the sketch after this parameter list).
  • ids – list of string ids, or a callable. If strings, each corresponds to an argvalue so that it becomes part of the test id. If a callable, it should take one argument (a single argvalue) and return a string or None; when it returns None, the automatically generated id for that argument is used. If no ids are provided they will be generated automatically from the argvalues.
  • scope – if specified it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function defined scope, allowing to set a dynamic scope using test context or configuration.
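
As an illustration of indirect (a minimal sketch with hypothetical names), each argvalue is delivered to the fixture as request.param so the expensive work happens at setup time rather than collection time:

import pytest

@pytest.fixture
def db_conn(request):
    # imagine an expensive connection setup here, run per test
    return "connection-to-%s" % request.param

@pytest.mark.parametrize("db_conn", ["sqlite", "postgres"], indirect=True)
def test_query(db_conn):
    assert db_conn.startswith("connection-to-")
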
addcall(funcargs=None, id=<object object>, param=<object object>)[source]

(deprecated, use parametrize) Add a new call to the underlying test function during the collection phase of a test run. Note that request.addcall() is called during the test collection phase, prior to and independently of actual test execution. You should only use addcall() if you need to specify multiple arguments of a test function.

Parameters:
  • funcargs – argument keyword dictionary used when invoking the test function.
  • id – used for reporting and identification purposes. If you don’t supply an id an automatic unique id will be generated.
  • param – a parameter which will be exposed to a later fixture function invocation through the request.param attribute.

Cache: working with cross-testrun state

New in version 2.8.

Warning

The functionality of this core plugin was previously distributed as a third party plugin named pytest-cache. The core plugin is compatible regarding command line options and API usage except that you can only store/receive data between test runs that is json-serializable.

Usage

The plugin provides two command line options to rerun failures from the last pytest invocation:

  • --lf, --last-failed - to only re-run the failures.
  • --ff, --failed-first - to run the failures first and then the rest of the tests.

For cleanup (usually not needed), a --cache-clear option allows removing all cross-session cache contents ahead of a test run.

Other plugins may access the config.cache object to set/get json encodable values between pytest invocations.

Note

This plugin is enabled by default, but can be disabled if needed: see Deactivating / unregistering a plugin by name (the internal name for this plugin is cacheprovider).

Rerunning only failures or failures first

First, let’s create 50 test invocations of which only 2 fail:

# content of test_50.py
import pytest

@pytest.mark.parametrize("i", range(50))
def test_num(i):
    if i in (17, 25):
       pytest.fail("bad luck")

If you run this for the first time you will see two failures:

$ pytest -q
.................F.......F........................
======= FAILURES ========
_______ test_num[17] ________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>          pytest.fail("bad luck")
E          Failed: bad luck

test_50.py:6: Failed
_______ test_num[25] ________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>          pytest.fail("bad luck")
E          Failed: bad luck

test_50.py:6: Failed
2 failed, 48 passed in 0.12 seconds

If you then run it with --lf:

$ pytest --lf
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
run-last-failure: rerun last 2 failures
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items

test_50.py FF

======= FAILURES ========
_______ test_num[17] ________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>          pytest.fail("bad luck")
E          Failed: bad luck

test_50.py:6: Failed
_______ test_num[25] ________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>          pytest.fail("bad luck")
E          Failed: bad luck

test_50.py:6: Failed
======= 48 tests deselected ========
======= 2 failed, 48 deselected in 0.12 seconds ========

You have run only the two failing tests from the last run, while 48 tests have not been run (“deselected”).

Now, if you run with the --ff option, all tests will be run but the previous failures will be executed first (as can be seen from the series of FF and dots):

$ pytest --ff
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
run-last-failure: rerun last 2 failures first
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items

test_50.py FF................................................

======= FAILURES ========
_______ test_num[17] ________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>          pytest.fail("bad luck")
E          Failed: bad luck

test_50.py:6: Failed
_______ test_num[25] ________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>          pytest.fail("bad luck")
E          Failed: bad luck

test_50.py:6: Failed
======= 2 failed, 48 passed in 0.12 seconds ========

The new config.cache object

Plugins or conftest.py support code can get a cached value using the pytest config object. Here is a basic example plugin which implements a fixture that re-uses previously created state across pytest invocations:

# content of test_caching.py
import pytest
import time

@pytest.fixture
def mydata(request):
    val = request.config.cache.get("example/value", None)
    if val is None:
        time.sleep(9*0.6) # expensive computation :)
        val = 42
        request.config.cache.set("example/value", val)
    return val

def test_function(mydata):
    assert mydata == 23

If you run this command once, it will take a while because of the sleep:

$ pytest -q
F
======= FAILURES ========
_______ test_function ________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

test_caching.py:14: AssertionError
1 failed in 0.12 seconds

If you run it a second time the value will be retrieved from the cache and this will be quick:

$ pytest -q
F
======= FAILURES ========
_______ test_function ________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

test_caching.py:14: AssertionError
1 failed in 0.12 seconds

See the cache-api for more details.

Inspecting Cache content

You can always peek at the content of the cache using the --cache-show command line option:

$ pytest --cache-show
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
cachedir: $REGENDOC_TMPDIR/.cache
------------------------------- cache values -------------------------------
cache/lastfailed contains:
  {'test_caching.py::test_function': True}
example/value contains:
  42

======= no tests ran in 0.12 seconds ========

Clearing Cache content

You can instruct pytest to clear all cache files and values by adding the --cache-clear option like this:

pytest --cache-clear

This is recommended for invocations from Continuous Integration servers where isolation and correctness are more important than speed.

config.cache API

The config.cache object allows other plugins, including conftest.py files, to safely and flexibly store and retrieve values across test runs because the config object is available in many places.

Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.

Cache.get(key, default)[source]

return cached value for the given key. If no value was yet cached or the value cannot be read, the specified default is returned.

Parameters:
  • key – must be a / separated value. Usually the first name is the name of your plugin or your application.
  • default – must be provided in case of a cache-miss or invalid cache values.
Cache.set(key, value)[source]

save value for the given key.

Parameters:
  • key – must be a / separated value. Usually the first name is the name of your plugin or your application.
  • value – must be of any combination of basic python types, including nested types, e.g. lists of dictionaries.
Cache.makedir(name)[source]

return a directory path object with the given name. If the directory does not yet exist, it will be created. You can use it to manage files, e.g. to store/retrieve database dumps across test sessions.

Parameters:name – must be a string not containing a / separator. Make sure the name contains your plugin or application identifiers to prevent clashes with other cache users.
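
For example, a session-scoped fixture could use makedir to keep an expensive artifact around between test runs. A minimal sketch; the "myapp" cache name and the dump contents are made up:

# content of conftest.py
import pytest

@pytest.fixture(scope="session")
def db_dump(request):
    dumpdir = request.config.cache.makedir("myapp")   # created on first use
    dump = dumpdir.join("db.dump")
    if not dump.check():                    # py.path.local: does the file exist?
        dump.write("...expensive dump...")  # stands in for a real database dump
    return dump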

Support for unittest.TestCase / Integration of fixtures

pytest has support for running Python unittest.py style tests. It’s meant for leveraging existing unittest-style projects to use pytest features. Concretely, pytest will automatically collect unittest.TestCase subclasses and their test methods in test files. It will invoke typical setup/teardown methods and generally try to make test suites written to run on unittest also run using pytest. We assume here that you are familiar with writing unittest.TestCase style tests and rather focus on integration aspects.

Note that this is meant as a provisional way of running your test code until you fully convert to pytest-style tests. To fully take advantage of fixtures, parametrization and hooks you should convert (tools like unittest2pytest are helpful). Also, not all 3rd party plugins are expected to work best with unittest.TestCase style tests.

Usage

After Installation type:

pytest

and you should be able to run your unittest-style tests if they are contained in test_* modules. If that works for you then you can make use of most pytest features, for example --pdb debugging in failures, using plain assert-statements, more informative tracebacks, stdout-capturing or distributing tests to multiple CPUs via the -nNUM option if you installed the pytest-xdist plugin. Please refer to the general pytest documentation for many more examples.

Note

Running tests from unittest.TestCase subclasses with --pdb will disable tearDown and cleanup methods for the case that an Exception occurs. This allows proper post mortem debugging for all applications which have significant logic in their tearDown machinery. However, supporting this feature has the following side effect: If people overwrite unittest.TestCase __call__ or run, they need to overwrite debug in the same way (this is also true for standard unittest).

Mixing pytest fixtures into unittest.TestCase style tests

Running your unittest with pytest allows you to use its fixture mechanism with unittest.TestCase style tests. Assuming you have at least skimmed the pytest fixture features, let’s jump-start into an example that integrates a pytest db_class fixture, setting up a class-cached database object, and then reference it from a unittest-style test:

# content of conftest.py

# we define a fixture function below and it will be "used" by
# referencing its name from tests

import pytest

@pytest.fixture(scope="class")
def db_class(request):
    class DummyDB:
        pass
    # set a class attribute on the invoking test context
    request.cls.db = DummyDB()

This defines a fixture function db_class which - if used - is called once for each test class and which sets the class-level db attribute to a DummyDB instance. The fixture function achieves this by receiving a special request object which gives access to the requesting test context such as the cls attribute, denoting the class from which the fixture is used. This architecture de-couples fixture writing from actual test code and allows re-use of the fixture by a minimal reference, the fixture name. So let’s write an actual unittest.TestCase class using our fixture definition:

# content of test_unittest_db.py

import unittest
import pytest

@pytest.mark.usefixtures("db_class")
class MyTest(unittest.TestCase):
    def test_method1(self):
        assert hasattr(self, "db")
        assert 0, self.db   # fail for demo purposes

    def test_method2(self):
        assert 0, self.db   # fail for demo purposes

The @pytest.mark.usefixtures("db_class") class-decorator makes sure that the pytest fixture function db_class is called once per class. Due to the deliberately failing assert statements, we can take a look at the self.db values in the traceback:

$ pytest test_unittest_db.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_unittest_db.py FF

======= FAILURES ========
_______ MyTest.test_method1 ________

self = <test_unittest_db.MyTest testMethod=test_method1>

    def test_method1(self):
        assert hasattr(self, "db")
>       assert 0, self.db   # fail for demo purposes
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E       assert 0

test_unittest_db.py:9: AssertionError
_______ MyTest.test_method2 ________

self = <test_unittest_db.MyTest testMethod=test_method2>

    def test_method2(self):
>       assert 0, self.db   # fail for demo purposes
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E       assert 0

test_unittest_db.py:12: AssertionError
======= 2 failed in 0.12 seconds ========

This default pytest traceback shows that the two test methods share the same self.db instance which was our intention when writing the class-scoped fixture function above.

autouse fixtures and accessing other fixtures

Although it’s usually better to explicitly declare use of fixtures you need for a given test, you may sometimes want to have fixtures that are automatically used in a given context. After all, the traditional style of unittest-setup mandates the use of this implicit fixture writing and chances are, you are used to it or like it.

You can flag fixture functions with @pytest.fixture(autouse=True) and define the fixture function in the context where you want it used. Let’s look at an initdir fixture which makes all test methods of a TestCase class execute in a temporary directory with a pre-initialized samplefile.ini. Our initdir fixture itself uses the pytest builtin tmpdir fixture to delegate the creation of a per-test temporary directory:

# content of test_unittest_cleandir.py
import pytest
import unittest

class MyTest(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def initdir(self, tmpdir):
        tmpdir.chdir() # change to pytest-provided temporary directory
        tmpdir.join("samplefile.ini").write("# testdata")

    def test_method(self):
        s = open("samplefile.ini").read()
        assert "testdata" in s

Due to the autouse flag the initdir fixture function will be used for all methods of the class where it is defined. This is a shortcut for using a @pytest.mark.usefixtures("initdir") marker on the class like in the previous example.

Running this test module ...:

$ pytest -q test_unittest_cleandir.py
.
1 passed in 0.12 seconds

... gives us one passed test because the initdir fixture function was executed ahead of the test_method.

Note

While pytest supports receiving fixtures via test function arguments for non-unittest test methods, unittest.TestCase methods cannot directly receive fixture function arguments, as implementing that would likely interfere with the ability to run general unittest.TestCase test suites. Maybe optional support would be possible, though; if unittest finally grows a plugin system, that should help as well. In the meantime, the above usefixtures and autouse examples should help to mix pytest fixtures into unittest suites. And of course you can also selectively drop the unittest.TestCase subclassing, use plain asserts and get the full pytest feature set.

Running tests written for nose

pytest has basic support for running tests written for nose.

Usage

After Installation type:

python setup.py develop  # make sure tests can import our package
pytest  # instead of 'nosetests'

and you should be able to run your nose style tests and make use of pytest’s capabilities.

Supported nose Idioms

  • setup and teardown at module/class/method level
  • SkipTest exceptions and markers
  • setup/teardown decorators
  • yield-based tests and their setup
  • __test__ attribute on modules/classes/functions
  • general usage of nose utilities
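
As an illustration, pytest will collect and run a module like the following, which exercises the nose-style module-level setup/teardown idiom from the list above (a minimal sketch; all names are made up):

# content of test_nosestyle.py
def setup():
    global data           # nose-style module-level setup
    data = [1, 2, 3]

def teardown():
    global data           # nose-style module-level teardown
    data = None

def test_data():
    assert data == [1, 2, 3]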

Unsupported idioms / known issues

  • unittest-style setUp, tearDown, setUpClass, tearDownClass are recognized only on unittest.TestCase classes but not on plain classes. nose supports these methods also on plain classes but pytest deliberately does not. As nose and pytest already both support setup_class, teardown_class, setup_method, teardown_method it doesn’t seem useful to duplicate the unittest-API like nose does. If, however, you think pytest should support the unittest-spelling on plain classes please post to this issue.
  • nose imports test modules with the same import path (e.g. tests.test_mod) but different file system paths (e.g. tests/test_mod.py and other/tests/test_mod.py) by extending sys.path/import semantics. pytest does not do that but there is discussion in issue268 for adding some support. Note that nose2 chose to avoid this sys.path/import hackery.
  • nose-style doctests are not collected and executed correctly, also doctest fixtures don’t work.
  • no nose-configuration is recognized.
  • yield-based methods don’t support setup properly because the setup method is always called in the same class instance. There are no plans to fix this currently because yield-tests are deprecated in pytest 3.0, with pytest.mark.parametrize being the recommended alternative.

classic xunit-style setup

This section describes a classic and popular way to implement fixtures (setup and teardown test state) on a per-module/class/function basis.

Note

While these setup/teardown methods are simple and familiar to those coming from a unittest or nose background, you may also consider using pytest’s more powerful fixture mechanism which leverages the concept of dependency injection, allowing for a more modular and more scalable approach for managing test state, especially for larger projects and for functional testing. You can mix both fixture mechanisms in the same file but test methods of unittest.TestCase subclasses cannot receive fixture arguments.

Module level setup/teardown

If you have multiple test functions and test classes in a single module you can optionally implement the following fixture methods which will usually be called once for all the functions:

def setup_module(module):
    """ setup any state specific to the execution of the given module."""

def teardown_module(module):
    """ teardown any state that was previously setup with a setup_module
    method.
    """

As of pytest-3.0, the module parameter is optional.

Class level setup/teardown

Similarly, the following methods are called at class level before and after all test methods of the class are called:

@classmethod
def setup_class(cls):
    """ setup any state specific to the execution of the given class (which
    usually contains tests).
    """

@classmethod
def teardown_class(cls):
    """ teardown any state that was previously setup with a call to
    setup_class.
    """

Method and function level setup/teardown

Similarly, the following methods are called around each method invocation:

def setup_method(self, method):
    """ setup any state tied to the execution of the given method in a
    class.  setup_method is invoked for every test method of a class.
    """

def teardown_method(self, method):
    """ teardown any state that was previously setup with a setup_method
    call.
    """

As of pytest-3.0, the method parameter is optional.

If you would rather define test functions directly at module level you can also use the following functions to implement fixtures:

def setup_function(function):
    """ setup any state tied to the execution of the given function.
    Invoked for every test function in the module.
    """

def teardown_function(function):
    """ teardown any state that was previously setup with a setup_function
    call.
    """

As of pytest-3.0, the function parameter is optional.

Remarks:

  • It is possible for setup/teardown pairs to be invoked multiple times per testing process.
  • teardown functions are not called if the corresponding setup function existed and failed/was skipped.

Installing and Using plugins

This section talks about installing and using third party plugins. For writing your own plugins, please refer to Writing plugins.

Installing a third party plugin can be easily done with pip:

pip install pytest-NAME
pip uninstall pytest-NAME

If a plugin is installed, pytest automatically finds and integrates it; there is no need to activate it.

Here is a little annotated list of some popular plugins:

  • pytest-django: write tests for django apps, using pytest integration.
  • pytest-twisted: write tests for twisted apps, starting a reactor and processing deferreds from test functions.
  • pytest-catchlog: to capture and assert about messages from the logging module
  • pytest-cov: coverage reporting, compatible with distributed testing
  • pytest-xdist: to distribute tests to CPUs and remote hosts, to run in boxed mode which allows surviving segmentation faults, to run in looponfailing mode, automatically re-running failing tests on file changes.
  • pytest-instafail: to report failures while the test run is happening.
  • pytest-bdd and pytest-konira to write tests using behaviour-driven testing.
  • pytest-timeout: to timeout tests based on function marks or global definitions.
  • pytest-pep8: a --pep8 option to enable PEP8 compliance checking.
  • pytest-flakes: check source code with pyflakes.
  • oejskit: a plugin to run javascript unittests in live browsers.

To see a complete list of all plugins with their latest testing status against different pytest and Python versions, please visit plugincompat.

You may also discover more plugins through a pytest- pypi.python.org search.

Requiring/Loading plugins in a test module or conftest file

You can require plugins in a test module or a conftest file like this:

pytest_plugins = "myapp.testsupport.myplugin",

When the test module or conftest plugin is loaded the specified plugins will be loaded as well.

pytest_plugins = “myapp.testsupport.myplugin”

which will import the specified module as a pytest plugin.

Finding out which plugins are active

If you want to find out which plugins are active in your environment you can type:

pytest --trace-config

and you will get an extended test header which shows activated plugins and their names. It will also print local plugins aka conftest.py files when they are loaded.

Deactivating / unregistering a plugin by name

You can prevent plugins from loading or unregister them:

pytest -p no:NAME

This means that any subsequent attempt to activate/load the named plugin will fail.

If you want to unconditionally disable a plugin for a project, you can add this option to your pytest.ini file:

[pytest]
addopts = -p no:NAME

Alternatively, to disable it only in certain environments (for example on a CI server), you can set the PYTEST_ADDOPTS environment variable to -p no:NAME.
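
For example, in a Unix shell:

export PYTEST_ADDOPTS="-p no:NAME"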

See Finding out which plugins are active for how to obtain the name of a plugin.

Pytest default plugin reference

You can find the source code for the following plugins in the pytest repository.

  • _pytest.assertion: support for presenting detailed information in failing assertions.
  • _pytest.cacheprovider: merged implementation of the cache provider.
  • _pytest.capture: per-test stdout/stderr capturing mechanism.
  • _pytest.config: command line options, ini-file and conftest.py processing.
  • _pytest.doctest: discover and run doctests in modules and test files.
  • _pytest.helpconfig: version info, help messages, tracing configuration.
  • _pytest.junitxml: report test results in JUnit-XML format.
  • _pytest.mark: generic mechanism for marking and selecting python functions.
  • _pytest.monkeypatch: monkeypatching and mocking functionality.
  • _pytest.nose: run test suites written for nose.
  • _pytest.pastebin: submit failure or test session information to a pastebin service.
  • _pytest.debugging: interactive debugging with PDB, the Python Debugger.
  • _pytest.pytester: (disabled by default) support for testing pytest and pytest plugins.
  • _pytest.python: Python test discovery, setup and run of test functions.
  • _pytest.recwarn: recording warnings during test function execution.
  • _pytest.resultlog: log machine-parseable test session result information in a plain text file.
  • _pytest.runner: basic collect and runtest protocol implementations.
  • _pytest.main: core implementation of testing process: init, session, runtest loop.
  • _pytest.skipping: support for skip/xfail functions and markers.
  • _pytest.terminal: terminal reporting of the full testing process.
  • _pytest.tmpdir: support for providing temporary directories to test functions.
  • _pytest.unittest: discovery and running of std-library “unittest” style tests.

Writing plugins

It is easy to implement local conftest plugins for your own project or pip-installable plugins that can be used throughout many projects, including third party projects. Please refer to Installing and Using plugins if you only want to use but not write plugins.

A plugin contains one or multiple hook functions. Writing hooks explains the basics and details of how you can write a hook function yourself. pytest implements all aspects of configuration, collection, running and reporting by calling well specified hooks of the following plugins:

  • builtin plugins: loaded from pytest’s internal _pytest directory.
  • external plugins: modules discovered through setuptools entry points.
  • conftest.py plugins: modules auto-discovered in test directories.

In principle, each hook call is a 1:N Python function call where N is the number of registered implementation functions for a given specification. All specifications and implementations follow the pytest_ prefix naming convention, making them easy to distinguish and find.

Plugin discovery order at tool startup

pytest loads plugin modules at tool startup in the following way:

  • by loading all builtin plugins

  • by loading all plugins registered through setuptools entry points.

  • by pre-scanning the command line for the -p name option and loading the specified plugin before actual command line parsing.

  • by loading all conftest.py files as inferred by the command line invocation:

    • if no test paths are specified use current dir as a test path
    • if it exists, load conftest.py and test*/conftest.py relative to the directory part of the first test path.

    Note that pytest does not find conftest.py files in deeper nested sub directories at tool startup. It is usually a good idea to keep your conftest.py file in the top level test or project root directory.

  • by recursively loading all plugins specified by the pytest_plugins variable in conftest.py files

conftest.py: local per-directory plugins

Local conftest.py plugins contain directory-specific hook implementations. Session and test running activities will invoke all hooks defined in conftest.py files closer to the root of the filesystem. Example of implementing the pytest_runtest_setup hook so that it is called for tests in the a directory but not for other directories:

a/conftest.py:
    def pytest_runtest_setup(item):
        # called for running each test in 'a' directory
        print ("setting up", item)

a/test_sub.py:
    def test_sub():
        pass

test_flat.py:
    def test_flat():
        pass

Here is how you might run it:

pytest test_flat.py   # will not show "setting up"
pytest a/test_sub.py  # will show "setting up"

Note

If you have conftest.py files which do not reside in a python package directory (i.e. one containing an __init__.py) then “import conftest” can be ambiguous because there might be other conftest.py files as well on your PYTHONPATH or sys.path. It is thus good practice for projects to either put conftest.py under a package scope or to never import anything from a conftest.py file.

Writing your own plugin

If you want to write a plugin, there are many real-life examples you can copy from: pytest’s own builtin plugins as well as the many external plugins providing additional features.

All of these plugins implement the documented well specified hooks to extend and add functionality.

Note

Make sure to check out the excellent cookiecutter-pytest-plugin project, which is a cookiecutter template for authoring plugins.

The template provides an excellent starting point with a working plugin, tests running with tox, a comprehensive README and entry points already pre-configured.

Also consider contributing your plugin to pytest-dev once it has some happy users other than yourself.

Making your plugin installable by others

If you want to make your plugin externally available, you may define a so-called entry point for your distribution so that pytest finds your plugin module. Entry points are a feature that is provided by setuptools. pytest looks up the pytest11 entrypoint to discover its plugins and you can thus make your plugin available by defining it in your setuptools-invocation:

# sample ./setup.py file
from setuptools import setup

setup(
    name="myproject",
    packages = ['myproject'],

    # the following makes a plugin available to pytest
    entry_points = {
        'pytest11': [
            'name_of_plugin = myproject.pluginmodule',
        ]
    },

    # custom PyPI classifier for pytest plugins
    classifiers=[
        "Framework :: Pytest",
    ],
)

If a package is installed this way, pytest will load myproject.pluginmodule as a plugin which can define well specified hooks.

Note

Make sure to include Framework :: Pytest in your list of PyPI classifiers to make it easy for users to find your plugin.

Assertion Rewriting

One of the main features of pytest is the use of plain assert statements and the detailed introspection of expressions upon assertion failures. This is provided by “assertion rewriting” which modifies the parsed AST before it gets compiled to bytecode. This is done via a PEP 302 import hook which gets installed early on when pytest starts up and will perform this re-writing when modules get imported. However, since we do not want to test different bytecode from the one you will run in production, this hook only re-writes test modules themselves as well as any modules which are part of plugins. Any other imported module will not be re-written and normal assertion behaviour will happen.

If you have assertion helpers in other modules where you would need assertion rewriting to be enabled you need to ask pytest explicitly to re-write this module before it gets imported.

register_assert_rewrite(*names)[source]

Register one or more module names to be rewritten on import.

This function will make sure that this module or all modules inside the package will get their assert statements rewritten. Thus you should make sure to call this before the module is actually imported, usually in your __init__.py if your plugin is a package.

Raises:TypeError – if the given module names are not strings.

This is especially important when you write a pytest plugin which is created using a package. The import hook only treats conftest.py files and any modules which are listed in the pytest11 entrypoint as plugins. As an example consider the following package:

pytest_foo/__init__.py
pytest_foo/plugin.py
pytest_foo/helper.py

With the following typical setup.py extract:

setup(
   ...
   entry_points={'pytest11': ['foo = pytest_foo.plugin']},
   ...
)

In this case only pytest_foo/plugin.py will be re-written. If the helper module also contains assert statements which need to be re-written, it needs to be marked as such before it gets imported. The easiest way is to mark it for re-writing inside the __init__.py module, which will always be imported first when a module inside a package is imported. This way plugin.py can still import helper.py normally. The contents of pytest_foo/__init__.py will then need to look like this:

import pytest

pytest.register_assert_rewrite('pytest_foo.helper')

Requiring/Loading plugins in a test module or conftest file

You can require plugins in a test module or a conftest.py file like this:

pytest_plugins = ["name1", "name2"]

When the test module or conftest plugin is loaded the specified plugins will be loaded as well. Any module can be blessed as a plugin, including internal application modules:

pytest_plugins = "myapp.testsupport.myplugin"

pytest_plugins variables are processed recursively, so note that in the example above if myapp.testsupport.myplugin also declares pytest_plugins, the contents of the variable will also be loaded as plugins, and so on.

This mechanism makes it easy to share fixtures within applications or even external applications without the need to create external plugins using the setuptools entry point technique.
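
For example, an application could expose commonly needed fixtures from such a module. A minimal sketch; the module path follows the example above and the fixture itself is made up:

# content of myapp/testsupport/myplugin.py
import pytest

@pytest.fixture
def app_config():
    # available in every test module that lists this module in pytest_plugins
    return {"debug": True}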

Plugins imported by pytest_plugins will also automatically be marked for assertion rewriting (see pytest.register_assert_rewrite()). However for this to have any effect the module must not be imported already; if it was already imported at the time the pytest_plugins statement is processed, a warning will result and assertions inside the plugin will not be re-written. To fix this you can either call pytest.register_assert_rewrite() yourself before the module is imported, or you can arrange the code to delay the importing until after the plugin is registered.

Accessing another plugin by name

If a plugin wants to collaborate with code from another plugin it can obtain a reference through the plugin manager like this:

plugin = config.pluginmanager.getplugin("name_of_plugin")

If you want to look at the names of existing plugins, use the --trace-config option.

Testing plugins

pytest comes with some facilities that you can enable for testing your plugin. Given that you have an installed plugin you can enable the testdir fixture via specifying a command line option to include the pytester plugin (-p pytester) or by putting pytest_plugins = "pytester" into your test or conftest.py file. You then will have a testdir fixture which you can use like this:

# content of test_myplugin.py

pytest_plugins = "pytester"  # to get testdir fixture

def test_myplugin(testdir):
    testdir.makepyfile("""
        def test_example():
            pass
    """)
    result = testdir.runpytest("--verbose")
    result.stdout.fnmatch_lines("""
        test_example*
    """)

Note that by default testdir.runpytest() will run pytest in-process. You can pass the command line option --runpytest=subprocess to have it happen in a subprocess.

Also see the RunResult for more methods of the result object that you get from a call to runpytest.

Writing hook functions

hook function validation and execution

pytest calls hook functions from registered plugins for any given hook specification. Let’s look at a typical hook function for the pytest_collection_modifyitems(session, config, items) hook which pytest calls after collection of all test items is completed.

When we implement a pytest_collection_modifyitems function in our plugin, pytest will verify during registration that the argument names match the specification and bail out if not.

Let’s look at a possible implementation:

def pytest_collection_modifyitems(config, items):
    # called after collection is completed
    # you can modify the ``items`` list
    pass

Here, pytest will pass in config (the pytest config object) and items (the list of collected test items) but will not pass in the session argument because we didn’t list it in the function signature. This dynamic “pruning” of arguments allows pytest to be “future-compatible”: we can introduce new hook named parameters without breaking the signatures of existing hook implementations. It is one of the reasons for the general long-lived compatibility of pytest plugins.
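
As a concrete illustration, here is a sketch of a conftest.py that uses this hook to run tests in reverse collection order (not a builtin pytest feature):

# content of conftest.py
def pytest_collection_modifyitems(items):
    items.reverse()   # in-place changes to the items list are honored by pytest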

Note that hook functions other than pytest_runtest_* are not allowed to raise exceptions. Doing so will break the pytest run.

firstresult: stop at first non-None result

Most calls to pytest hooks result in a list of results which contains all non-None results of the called hook functions.

Some hook specifications use the firstresult=True option so that the hook call only executes until the first of N registered functions returns a non-None result which is then taken as result of the overall hook call. The remaining hook functions will not be called in this case.
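
A hook specification opts into this behaviour when it is declared. A minimal sketch using the pytest.hookspec marker; the hook name is made up:

# content of hookspecs.py
import pytest

@pytest.hookspec(firstresult=True)
def pytest_myproject_backend(config):
    """ return a backend name; the first non-None result wins. """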

hookwrapper: executing around other hooks

New in version 2.7.

pytest plugins can implement hook wrappers which wrap the execution of other hook implementations. A hook wrapper is a generator function which yields exactly once. When pytest invokes hooks it first executes hook wrappers and passes the same arguments as to the regular hooks.

At the yield point of the hook wrapper pytest will execute the next hook implementations and return their result to the yield point in the form of a CallOutcome instance which encapsulates a result or exception info. The yield point itself will thus typically not raise exceptions (unless there are bugs).

Here is an example definition of a hook wrapper:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem):
    # do whatever you want before the next hook executes

    outcome = yield
    # outcome.excinfo may be None or a (cls, val, tb) tuple

    res = outcome.get_result()  # will raise if outcome was exception
    # postprocess result

Note that hook wrappers don’t return results themselves, they merely perform tracing or other side effects around the actual hook implementations. If the result of the underlying hook is a mutable object, they may modify that result but it’s probably better to avoid it.

Hook function ordering / call example

For any given hook specification there may be more than one implementation and we thus generally view hook execution as a 1:N function call where N is the number of registered functions. There are ways to influence if a hook implementation comes before or after others, i.e. the position in the N-sized list of functions:

import pytest

# Plugin 1
@pytest.hookimpl(tryfirst=True)
def pytest_collection_modifyitems(items):
    # will execute as early as possible
    pass

# Plugin 2
@pytest.hookimpl(trylast=True)
def pytest_collection_modifyitems(items):
    # will execute as late as possible
    pass

# Plugin 3
@pytest.hookimpl(hookwrapper=True)
def pytest_collection_modifyitems(items):
    # will execute even before the tryfirst one above!
    outcome = yield
    # will execute after all non-hookwrappers executed

Here is the order of execution:

  1. Plugin3’s pytest_collection_modifyitems called until the yield point because it is a hook wrapper.
  2. Plugin1’s pytest_collection_modifyitems is called because it is marked with tryfirst=True.
  3. Plugin2’s pytest_collection_modifyitems is called because it is marked with trylast=True (but even without this mark it would come after Plugin1).
  4. Plugin3’s pytest_collection_modifyitems then executes the code after the yield point. The yield receives a CallOutcome instance which encapsulates the result from calling the non-wrappers. Wrappers shall not modify the result.

tryfirst and trylast can also be used in conjunction with hookwrapper=True, in which case they influence the ordering of hookwrappers among each other.

Declaring new hooks

Plugins and conftest.py files may declare new hooks that can then be implemented by other plugins in order to alter behaviour or interact with the new plugin:

pytest_addhooks(pluginmanager)[source]

called at plugin registration time to allow adding new hooks via a call to pluginmanager.add_hookspecs(module_or_class, prefix).

Hooks are usually declared as do-nothing functions that contain only documentation describing when the hook will be called and what return values are expected.

For an example, see newhooks.py from xdist.
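
A minimal sketch of declaring and registering such a hook; the hook name and module are made up:

# content of newhooks.py -- hypothetical hook specification module
def pytest_myproject_item_done(item, outcome):
    """ called by myproject after an outcome was computed for an item. """

# content of conftest.py
import newhooks

def pytest_addhooks(pluginmanager):
    pluginmanager.add_hookspecs(newhooks)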

Optionally using hooks from 3rd party plugins

Using new hooks from plugins as explained above might be a little tricky because of the standard validation mechanism: if you depend on a plugin that is not installed, validation will fail and the error message will not make much sense to your users.

One approach is to defer the hook implementation to a new plugin instead of declaring the hook functions directly in your plugin module, for example:

# contents of myplugin.py

class DeferPlugin(object):
    """Simple plugin to defer pytest-xdist hook functions."""

    def pytest_testnodedown(self, node, error):
        """standard xdist hook function.
        """

def pytest_configure(config):
    if config.pluginmanager.hasplugin('xdist'):
        config.pluginmanager.register(DeferPlugin())

This has the added benefit of allowing you to conditionally install hooks depending on which plugins are installed.

pytest hook reference

Initialization, command line and configuration hooks

pytest_load_initial_conftests(early_config, parser, args)[source]

implements the loading of initial conftest files ahead of command line option parsing.

pytest_cmdline_preparse(config, args)[source]

(deprecated) modify command line arguments before option parsing.

pytest_cmdline_parse(pluginmanager, args)[source]

return initialized config object, parsing the specified args.

pytest_namespace()[source]

return dict of name->object to be made globally available in the pytest namespace. This hook is called at plugin registration time.

pytest_addoption(parser)[source]

register argparse-style options and ini-style config values, called once at the beginning of a test run.

Note

This function should be implemented only in plugins or conftest.py files situated at the tests root directory due to how pytest discovers plugins during startup.

Parameters:parser – To add command line options, call parser.addoption(...). To add ini-file values call parser.addini(...).

Options can later be accessed through the config object, respectively:

  • config.getoption(name) to retrieve the value of a command line option.
  • config.getini(name) to retrieve a value read from an ini-style file.

The config object is passed around on many internal objects via the .config attribute or can be retrieved as the pytestconfig fixture or accessed via (deprecated) pytest.config.
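
A minimal sketch tying the pieces together; the --runslow flag and fixture name are made up:

# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true", default=False,
                     help="also run tests marked as slow")

@pytest.fixture
def runslow(pytestconfig):
    # the parsed value is also reachable as config.getoption("runslow")
    return pytestconfig.getoption("runslow")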

pytest_cmdline_main(config)[source]

called for performing the main command line action. The default implementation will invoke the configure hooks and runtest_mainloop.

pytest_configure(config)[source]

called after command line options have been parsed and all plugins and initial conftest files been loaded. This hook is called for every plugin.

pytest_unconfigure(config)[source]

called before test process is exited.

Generic “runtest” hooks

All runtest related hooks receive a pytest.Item object.

pytest_runtest_protocol(item, nextitem)[source]

implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.

Parameters:
  • item – test item for which the runtest protocol is performed.
  • nextitem – the scheduled-to-be-next test item (or None if this is the end my friend). This argument is passed on to pytest_runtest_teardown().
Return boolean:

True if no further hook implementations should be invoked.

pytest_runtest_setup(item)[source]

called before pytest_runtest_call(item).

pytest_runtest_call(item)[source]

called to execute the test item.

pytest_runtest_teardown(item, nextitem)[source]

called after pytest_runtest_call.

Parameters:nextitem – the scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.
pytest_runtest_makereport(item, call)[source]

return a _pytest.runner.TestReport object for the given pytest.Item and _pytest.runner.CallInfo.

For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.

The _pytest.terminal reporter specifically uses the reporting hook to print information about a test run.

Collection hooks

pytest calls the following hooks for collecting files and directories:

pytest_ignore_collect(path, config)[source]

return True to prevent considering this path for collection. This hook is consulted for all files and directories prior to calling more specific hooks.

pytest_collect_directory(path, parent)[source]

called before traversing a directory for collecting files.

pytest_collect_file(path, parent)[source]

return collection Node or None for the given path. Any new node needs to have the specified parent as a parent.

For influencing the collection of objects in Python modules you can use the following hook:

pytest_pycollect_makeitem(collector, name, obj)[source]

return custom item/collector for a python object in a module, or None.

pytest_generate_tests(metafunc)[source]

generate (multiple) parametrized calls to a test function.

pytest_make_parametrize_id(config, val)[source]

Return a user-friendly string representation of the given val that will be used by @pytest.mark.parametrize calls. Return None if the hook doesn’t know about val.

After collection is complete, you can modify the order of items, delete or otherwise amend the test items:

pytest_collection_modifyitems(session, config, items)[source]

called after collection has been performed, may filter or re-order the items in-place.

Reporting hooks

Session related reporting hooks:

pytest_collectstart(collector)[source]

collector starts collecting.

pytest_itemcollected(item)[source]

we just collected a test item.

pytest_collectreport(report)[source]

collector finished collecting.

pytest_deselected(items)[source]

called for test items deselected by keyword.

pytest_report_header(config, startdir)[source]

return a string to be displayed as header info for terminal reporting.

Note

This function should be implemented only in plugins or conftest.py files situated at the tests root directory due to how pytest discovers plugins during startup.
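
For example, a conftest.py can contribute an extra header line (a sketch; the reported text is made up):

# content of conftest.py
def pytest_report_header(config, startdir):
    return "project deps: mylib-1.1"   # shown below the platform line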

pytest_report_teststatus(report)[source]

return result-category, shortletter and verbose word for reporting.

pytest_terminal_summary(terminalreporter, exitstatus)[source]

add additional section in terminal summary reporting.

pytest_fixture_setup(fixturedef, request)[source]

performs fixture setup execution.

pytest_fixture_post_finalizer(fixturedef)[source]

called after fixture teardown, but before the cache is cleared so the fixture result cache fixturedef.cached_result can still be accessed.

And here is the central hook for reporting about test execution:

pytest_runtest_logreport(report)[source]

process a test setup/call/teardown report relating to the respective phase of executing a test.

You can also use this hook to customize assertion representation for some types:

pytest_assertrepr_compare(config, op, left, right)[source]

return explanation for comparisons in failing assert expressions.

Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines but any newlines in a string will be escaped. Note that all but the first line will be indented slightly, the intention is for the first line to be a summary.
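
A sketch of providing a custom explanation for a hypothetical application class Foo:

# content of conftest.py
from myapp import Foo   # hypothetical application class with a .val attribute

def pytest_assertrepr_compare(config, op, left, right):
    if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
        return ["Comparing Foo instances:",
                "   vals: %s != %s" % (left.val, right.val)]
    # falling through returns None, keeping the default explanation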

Debugging/Interaction hooks

There are few hooks which can be used for special reporting or interaction with exceptions:

pytest_internalerror(excrepr, excinfo)[source]

called for internal errors.

pytest_keyboard_interrupt(excinfo)[source]

called for keyboard interrupt.

pytest_exception_interact(node, call, report)[source]

called when an exception was raised which can potentially be interactively handled.

This hook is only called if an exception was raised that is not an internal exception like skip.Exception.

pytest_enter_pdb(config)[source]

called upon pdb.set_trace(), can be used by plugins to take special action just before the python debugger enters interactive mode.

Parameters:config (_pytest.config.Config) – pytest config object

Reference of objects involved in hooks

class Config[source]

access to configuration values, pluginmanager and plugin hooks.

option = None

access to command line options as attributes (deprecated; use getoption() instead)

pluginmanager = None

a pluginmanager instance

add_cleanup(func)[source]

Add a function to be called when the config object gets out of use (usually coinciding with pytest_unconfigure).

warn(code, message, fslocation=None)[source]

generate a warning for this test session.

classmethod fromdictargs(option_dict, args)[source]

constructor usable for subprocesses.

addinivalue_line(name, line)[source]

add a line to an ini-file option. The option must have been declared but might not yet be set in which case the line becomes the first line in its value.

getini(name)[source]

return configuration value from an ini file. If the specified name hasn’t been registered through a prior parser.addini call (usually from a plugin), a ValueError is raised.

getoption(name, default=<NOTSET>, skip=False)[source]

return command line option value.

Parameters:
  • name – name of the option. You may also specify the literal --OPT option instead of the “dest” option name.
  • default – default value if no option of that name exists.
  • skip – if True raise pytest.skip if the option does not exist or has a None value.
getvalue(name, path=None)[source]

(deprecated, use getoption())

getvalueorskip(name, path=None)[source]

(deprecated, use getoption(skip=True))

class Parser[source]

Parser for command line arguments and ini-file values.

Variables:extra_info – dict of generic param -> value to display in case there’s an error processing the command line arguments.
getgroup(name, description='', after=None)[source]

get (or create) a named option Group.

Name:name of the option group.
Description:long description for --help output.
After:name of other group, used for ordering --help output.

The returned group object has an addoption method with the same signature as parser.addoption but will be shown in the respective group in the output of pytest --help.

addoption(*opts, **attrs)[source]

register a command line option.

Opts:option names, can be short or long options.
Attrs:same attributes which the add_argument() function of the argparse library accepts.

After command line parsing options are available on the pytest config object via config.option.NAME where NAME is usually set by passing a dest attribute, for example addoption("--long", dest="NAME", ...).

parse_known_args(args, namespace=None)[source]

parses and returns a namespace object with known arguments at this point.

parse_known_and_unknown_args(args, namespace=None)[source]

parses and returns a namespace object with known arguments, and the remaining arguments unknown at this point.

addini(name, help, type=None, default=None)[source]

register an ini-file option.

Name:name of the ini-variable
Type:type of the variable, can be pathlist, args, linelist or bool.
Default:default value if no ini-file option exists but is queried.

The value of ini-variables can be retrieved via a call to config.getini(name).
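
A sketch combining addini and getini; the ini name is made up:

# content of conftest.py
def pytest_addoption(parser):
    parser.addini("project_name", "name shown in the report header",
                  default="myproject")

def pytest_report_header(config, startdir):
    return "project: %s" % config.getini("project_name")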

class Node[source]

base class for Collector and Item, the components of the test collection tree. Collector subclasses have children, Items are terminal nodes.

name = None

a unique name within the scope of the parent node

parent = None

the parent collector node.

config = None

the pytest config object

session = None

the session this node is part of

fspath = None

filesystem path where this node was collected from (can be None)

keywords = None

keywords/markers collected from all scopes

extra_keyword_matches = None

allow adding of extra keywords to use for matching

ihook

fspath sensitive hook proxy used to call pytest hooks

warn(code, message)[source]

generate a warning with the given code and message for this item.

nodeid

a ::-separated string denoting its collection tree address.

listchain()[source]

return list of all parent collectors up to self, starting from root of collection tree.

add_marker(marker)[source]

dynamically add a marker object to the node.

marker can be a string or pytest.mark.* instance.

get_marker(name)[source]

get a marker object from this node or None if the node doesn’t have a marker with that name.
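
For example, a conftest.py could mark items dynamically during collection (a sketch; the slow marker name is made up):

# content of conftest.py
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        if "integration" in item.nodeid:
            item.add_marker(pytest.mark.slow)   # a plain string "slow" also works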

listextrakeywords()[source]

Return a set of all extra keywords in self and any parents.

addfinalizer(fin)[source]

register a function to be called when this node is finalized.

This method can only be called when this node is active in a setup chain, for example during self.setup().

getparent(cls)[source]

get the next parent node (including ourself) which is an instance of the given class.

class Collector[source]

Bases: _pytest.main.Node

Collector instances create children through collect() and thus iteratively build a tree.

exception CollectError[source]

Bases: exceptions.Exception

an error during collection, contains a custom message.

Collector.collect()[source]

returns a list of children (items and collectors) for this collection node.

Collector.repr_failure(excinfo)[source]

represent a collection failure.

class Item[source]

Bases: _pytest.main.Node

a basic test invocation item. Note that for a single function there might be multiple test invocation items.

class Module[source]

Bases: _pytest.main.File, _pytest.python.PyCollector

Collector for test classes and functions.

class Class[source]

Bases: _pytest.python.PyCollector

Collector for test methods.

class Function[source]

Bases: _pytest.python.FunctionMixin, _pytest.main.Item, _pytest.fixtures.FuncargnamesCompatAttr

a Function Item is responsible for setting up and executing a Python test function.

originalname = None

original function name, without any decorations (for example parametrization adds a "[...]" suffix to function names).

New in version 3.0.

function

underlying python ‘function’ object

runtest()[source]

execute the underlying test function.

class FixtureDef[source]

A container for a factory definition.

class CallInfo[source]

Result/Exception info of a function invocation.

when = None

context of invocation: one of “setup”, “call”, “teardown”, “memocollect”

excinfo = None

None or ExceptionInfo object.

class TestReport[source]

Basic test report object (also used for setup and teardown calls if they fail).

nodeid = None

normalized collection node id

location = None

a (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module.

keywords = None

a name -> value dictionary containing all keywords and markers associated with a test invocation.

outcome = None

test outcome, always one of “passed”, “failed”, “skipped”.

longrepr = None

None or a failure representation.

when = None

one of ‘setup’, ‘call’, ‘teardown’ to indicate runtest phase.

sections = None

list of pairs (str, str) of extra information which needs to be marshallable. Used by pytest to add captured text from stdout and stderr, but may be used by other plugins to add arbitrary information to reports.

duration = None

time it took to run just the test

capstderr

Return captured text from stderr, if capturing is enabled

New in version 3.0.

capstdout

Return captured text from stdout, if capturing is enabled

New in version 3.0.

longreprtext

Read-only property that returns the full string representation of longrepr.

New in version 3.0.

class _CallOutcome[source]

Outcome of a function call, either an exception or a proper result. Calling the get_result method will return the result or reraise the exception raised when the function was called.

get_plugin_manager()[source]

Obtain a new instance of the _pytest.config.PytestPluginManager, with default plugins already loaded.

This function can be used for integration with other tools, for example to hook into pytest and run tests from an IDE.

class PytestPluginManager[source]

Bases: _pytest.vendored_packages.pluggy.PluginManager

Overwrites pluggy.PluginManager to add pytest-specific functionality:

  • loading plugins from the command line, the PYTEST_PLUGINS env variable and pytest_plugins global variables found in plugins being loaded;
  • conftest.py loading during start-up;
addhooks(module_or_class)[source]

Deprecated since version 2.8.

Use pluggy.PluginManager.add_hookspecs() instead.

parse_hookimpl_opts(plugin, name)[source]
parse_hookspec_opts(module_or_class, name)[source]
register(plugin, name=None)[source]
getplugin(name)[source]
hasplugin(name)[source]

Return True if the plugin with the given name is registered.

pytest_configure(config)[source]
consider_preparse(args)[source]
consider_pluginarg(arg)[source]
consider_conftest(conftestmodule)[source]
consider_env()[source]
consider_module(mod)[source]
import_plugin(modname)[source]
class PluginManager[source]

Core Pluginmanager class which manages registration of plugin objects and 1:N hook calling.

You can register new hooks by calling add_hookspecs(module_or_class). You can register plugin objects (which contain hooks) by calling register(plugin). The Pluginmanager is initialized with a prefix that is searched for in the names of the dict of registered plugin objects. An optional excludefunc allows blacklisting names which are not considered as hooks despite a matching prefix.

For debugging purposes you can call enable_tracing() which will subsequently send debug information to the trace helper.

register(plugin, name=None)[source]

Register a plugin and return its canonical name or None if the name is blocked from registering. Raise a ValueError if the plugin is already registered.

unregister(plugin=None, name=None)[source]

unregister a plugin object and all its contained hook implementations from internal data structures.

set_blocked(name)[source]

block registrations of the given name, unregister if already registered.

is_blocked(name)[source]

return True if the given name is blocked from registering plugins of that name.

add_hookspecs(module_or_class)[source]

add new hook specifications defined in the given module_or_class. Functions are recognized if they have been decorated accordingly.

get_plugins()[source]

return the set of registered plugins.

is_registered(plugin)[source]

Return True if the plugin is already registered.

get_canonical_name(plugin)[source]

Return canonical name for a plugin object. Note that a plugin may be registered under a different name which was specified by the caller of register(plugin, name). To obtain the name of a registered plugin use get_name(plugin) instead.

get_plugin(name)[source]

Return a plugin or None for the given name.

has_plugin(name)[source]

Return True if a plugin with the given name is registered.

get_name(plugin)[source]

Return name for registered plugin or None if not registered.

check_pending()[source]

Verify that all hooks which have not been verified against a hook specification are optional, otherwise raise PluginValidationError

load_setuptools_entrypoints(entrypoint_name)[source]

Load modules from querying the specified setuptools entrypoint name. Return the number of loaded plugins.

list_plugin_distinfo()[source]

return list of distinfo/plugin tuples for all setuptools registered plugins.

list_name_plugin()[source]

return list of name/plugin pairs.

get_hookcallers(plugin)[source]

get all hook callers for the specified plugin.

add_hookcall_monitoring(before, after)[source]

add before/after tracing functions for all hooks and return an undo function which, when called, will remove the added tracers.

before(hook_name, hook_impls, kwargs) will be called ahead of all hook calls and receive a hookcaller instance, a list of HookImpl instances and the keyword arguments for the hook call.

after(outcome, hook_name, hook_impls, kwargs) receives the same arguments as before but also a _CallOutcome object which represents the result of the overall hook call.

enable_tracing()[source]

enable tracing of hook calls and return an undo function.

subset_hook_caller(name, remove_plugins)[source]

Return a new _HookCaller instance for the named method which manages calls to all registered plugins except the ones from remove_plugins.

class Testdir[source]

Temporary test directory with tools to test/run pytest itself.

This is based on the tmpdir fixture but provides a number of methods which aid with testing pytest itself. Unless chdir() is used all methods will use tmpdir as current working directory.

Attributes:

Tmpdir:The py.path.local instance of the temporary directory.
Plugins:A list of plugins to use with parseconfig() and runpytest(). Initially this is an empty list but plugins can be added to the list. The type of items to add to the list depend on the method which uses them so refer to them for details.
makeconftest(source)[source]

Write a conftest.py file with ‘source’ as contents.

makepyfile(*args, **kwargs)[source]

Shortcut for .makefile() with a .py extension.

runpytest_inprocess(*args, **kwargs)[source]

Return result of running pytest in-process, providing a similar interface to what self.runpytest() provides.

runpytest(*args, **kwargs)[source]

Run pytest inline or in a subprocess, depending on the command line option “–runpytest” and return a RunResult.

runpytest_subprocess(*args, **kwargs)[source]

Run pytest as a subprocess with given arguments.

Any plugins added to the plugins list will be added using the -p command line option. Additionally --basetemp is used to put any temporary files and directories in a numbered directory prefixed with “runpytest-” so they do not conflict with the normal numbered pytest location for temporary files and directories.

Returns a RunResult.
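
The testdir fixture which provides a Testdir instance becomes available once the pytester plugin is enabled. A minimal sketch, assuming a plugin’s own test suite (the file and test names are illustrative):

# content of conftest.py
pytest_plugins = "pytester"  # enables the testdir fixture

# content of test_selfcheck.py  (sketch)
def test_green_run(testdir):
    testdir.makepyfile("""
        def test_ok():
            assert True
    """)
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)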

class RunResult[source]

The result of running a command.

Attributes:

ret: The return value.
outlines: List of lines captured from stdout.
errlines: List of lines captured from stderr.
stdout: LineMatcher of stdout; use stdout.str() to reconstruct stdout or the commonly used stdout.fnmatch_lines() method.
stderr: LineMatcher of stderr.
duration: Duration in seconds.
parseoutcomes()[source]

Return a dictionary of outcomestring->num from parsing the terminal output that the test process produced.

assert_outcomes(passed=0, skipped=0, failed=0)[source]

assert that the specified outcomes appear with the respective numbers (0 means it didn’t occur) in the text output from a test run.

class LineMatcher[source]

Flexible matching of text.

This is a convenience class to test large texts like the output of commands.

The constructor takes a list of lines without their trailing newlines, i.e. text.splitlines().

str()[source]

Return the entire original text.

fnmatch_lines_random(lines2)[source]

Check lines exist in the output.

The argument is a list of lines which have to occur in the output, in any order. Each line can contain glob wildcards.

get_lines_after(fnline)[source]

Return all lines following the given line in the text.

The given line can contain glob wildcards.

fnmatch_lines(lines2)[source]

Search the text for matching lines.

The argument is a list of lines which have to match and can use glob wildcards. If they do not match, pytest.fail() is called. The matches and non-matches are also printed on stdout.
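
In practice you rarely construct a LineMatcher yourself: the stdout and stderr attributes of a RunResult already are LineMatcher instances. A hedged sketch, again relying on the testdir fixture from above:

def test_output_matching(testdir):  # sketch
    testdir.makepyfile("def test_ok(): pass")
    result = testdir.runpytest("-v")
    result.stdout.fnmatch_lines([
        "*test_ok PASSED*",  # glob wildcards are allowed
        "*1 passed*",
    ])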

Usages and Examples

Here is a (growing) list of examples. Contact us if you need more examples or have questions. Also take a look at the comprehensive documentation which contains many example snippets as well. Also, pytest on stackoverflow.com often comes with example answers.

For basic examples, see the introductory chapters earlier in this document.

The following examples aim at various use cases you might encounter.

Demo of Python failure reports with pytest

Here is a run of several dozen failures and how pytest presents things (unfortunately not showing the nice colors here in the HTML that you get on the terminal - we are working on that):

assertion $ pytest failure_demo.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
collected 42 items

failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

======= FAILURES ========
_______ test_generative[0] ________

param1 = 3, param2 = 6

    def test_generative(param1, param2):
>       assert param1 * 2 < param2
E       assert (3 * 2) < 6

failure_demo.py:16: AssertionError
_______ TestFailing.test_simple ________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_simple(self):
        def f():
            return 42
        def g():
            return 43

>       assert f() == g()
E       assert 42 == 43
E        +  where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
E        +  and   43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()

failure_demo.py:29: AssertionError
_______ TestFailing.test_simple_multiline ________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_simple_multiline(self):
        otherfunc_multi(
                  42,
>                 6*9)

failure_demo.py:34:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 42, b = 54

    def otherfunc_multi(a,b):
>       assert (a ==
                b)
E       assert 42 == 54

failure_demo.py:12: AssertionError
_______ TestFailing.test_not ________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_not(self):
        def f():
            return 42
>       assert not f()
E       assert not 42
E        +  where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()

failure_demo.py:39: AssertionError
_______ TestSpecialisedExplanations.test_eq_text ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_text(self):
>       assert 'spam' == 'eggs'
E       AssertionError: assert 'spam' == 'eggs'
E         - spam
E         + eggs

failure_demo.py:43: AssertionError
_______ TestSpecialisedExplanations.test_eq_similar_text ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_similar_text(self):
>       assert 'foo 1 bar' == 'foo 2 bar'
E       AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
E         - foo 1 bar
E         ?     ^
E         + foo 2 bar
E         ?     ^

failure_demo.py:46: AssertionError
_______ TestSpecialisedExplanations.test_eq_multiline_text ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_multiline_text(self):
>       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E       AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E           foo
E         - spam
E         + eggs
E           bar

failure_demo.py:49: AssertionError
_______ TestSpecialisedExplanations.test_eq_long_text ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_long_text(self):
        a = '1'*100 + 'a' + '2'*100
        b = '1'*100 + 'b' + '2'*100
>       assert a == b
E       AssertionError: assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E         Skipping 90 identical leading characters in diff, use -v to show
E         Skipping 91 identical trailing characters in diff, use -v to show
E         - 1111111111a222222222
E         ?           ^
E         + 1111111111b222222222
E         ?           ^

failure_demo.py:54: AssertionError
_______ TestSpecialisedExplanations.test_eq_long_text_multiline ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_long_text_multiline(self):
        a = '1\n'*100 + 'a' + '2\n'*100
        b = '1\n'*100 + 'b' + '2\n'*100
>       assert a == b
E       AssertionError: assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E         Skipping 190 identical leading characters in diff, use -v to show
E         Skipping 191 identical trailing characters in diff, use -v to show
E           1
E           1
E           1
E           1
E           1
E         - a2
E         + b2
E           2
E           2
E           2
E           2

failure_demo.py:59: AssertionError
_______ TestSpecialisedExplanations.test_eq_list ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_list(self):
>       assert [0, 1, 2] == [0, 1, 3]
E       assert [0, 1, 2] == [0, 1, 3]
E         At index 2 diff: 2 != 3
E         Use -v to get the full diff

failure_demo.py:62: AssertionError
_______ TestSpecialisedExplanations.test_eq_list_long ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_list_long(self):
        a = [0]*100 + [1] + [3]*100
        b = [0]*100 + [2] + [3]*100
>       assert a == b
E       assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E         At index 100 diff: 1 != 2
E         Use -v to get the full diff

failure_demo.py:67: AssertionError
_______ TestSpecialisedExplanations.test_eq_dict ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_dict(self):
>       assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E       AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E         Omitting 1 identical items, use -v to show
E         Differing items:
E         {'b': 1} != {'b': 2}
E         Left contains more items:
E         {'c': 0}
E         Right contains more items:
E         {'d': 0}
E         Use -v to get the full diff

failure_demo.py:70: AssertionError
_______ TestSpecialisedExplanations.test_eq_set ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_set(self):
>       assert set([0, 10, 11, 12]) == set([0, 20, 21])
E       assert {0, 10, 11, 12} == {0, 20, 21}
E         Extra items in the left set:
E         10
E         11
E         12
E         Extra items in the right set:
E         20
E         21
E         Use -v to get the full diff

failure_demo.py:73: AssertionError
_______ TestSpecialisedExplanations.test_eq_longer_list ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_longer_list(self):
>       assert [1,2] == [1,2,3]
E       assert [1, 2] == [1, 2, 3]
E         Right contains more items, first extra item: 3
E         Use -v to get the full diff

failure_demo.py:76: AssertionError
_______ TestSpecialisedExplanations.test_in_list ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_in_list(self):
>       assert 1 in [0, 2, 3, 4, 5]
E       assert 1 in [0, 2, 3, 4, 5]

failure_demo.py:79: AssertionError
_______ TestSpecialisedExplanations.test_not_in_text_multiline ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_multiline(self):
        text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
>       assert 'foo' not in text
E       AssertionError: assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail'
E         'foo' is contained here:
E           some multiline
E           text
E           which
E           includes foo
E         ?          +++
E           and a
E           tail

failure_demo.py:83: AssertionError
_______ TestSpecialisedExplanations.test_not_in_text_single ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single(self):
        text = 'single foo line'
>       assert 'foo' not in text
E       AssertionError: assert 'foo' not in 'single foo line'
E         'foo' is contained here:
E           single foo line
E         ?        +++

failure_demo.py:87: AssertionError
_______ TestSpecialisedExplanations.test_not_in_text_single_long ________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single_long(self):
        text = 'head ' * 50 + 'foo ' + 'tail ' * 20
>       assert 'foo' not in text
E       AssertionError: assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E         'foo' is contained here:
E           head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E         ?           +++

failure_demo.py:91: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single_long_term(self):
        text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
>       assert 'f'*70 not in text
E       AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E         'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E           head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E         ?           ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

failure_demo.py:95: AssertionError
_______ test_attribute ________

    def test_attribute():
        class Foo(object):
            b = 1
        i = Foo()
>       assert i.b == 2
E       assert 1 == 2
E        +  where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b

failure_demo.py:102: AssertionError
_______ test_attribute_instance ________

    def test_attribute_instance():
        class Foo(object):
            b = 1
>       assert Foo().b == 2
E       AssertionError: assert 1 == 2
E        +  where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
E        +    where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()

failure_demo.py:108: AssertionError
_______ test_attribute_failure ________

    def test_attribute_failure():
        class Foo(object):
            def _get_b(self):
                raise Exception('Failed to get attrib')
            b = property(_get_b)
        i = Foo()
>       assert i.b == 2

failure_demo.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>

    def _get_b(self):
>       raise Exception('Failed to get attrib')
E       Exception: Failed to get attrib

failure_demo.py:114: Exception
_______ test_attribute_multiple ________

    def test_attribute_multiple():
        class Foo(object):
            b = 1
        class Bar(object):
            b = 2
>       assert Foo().b == Bar().b
E       AssertionError: assert 1 == 2
E        +  where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef>.b
E        +    where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
E        +  and   2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
E        +    where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()

failure_demo.py:125: AssertionError
_______ TestRaises.test_raises ________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raises(self):
        s = 'qwe'
>       raises(TypeError, "int(s)")

failure_demo.py:134:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   int(s)
E   ValueError: invalid literal for int() with base 10: 'qwe'

<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1207>:1: ValueError
_______ TestRaises.test_raises_doesnt ________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raises_doesnt(self):
>       raises(IOError, "int('3')")
E       Failed: DID NOT RAISE <class 'OSError'>

failure_demo.py:137: Failed
_______ TestRaises.test_raise ________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raise(self):
>       raise ValueError("demo error")
E       ValueError: demo error

failure_demo.py:140: ValueError
_______ TestRaises.test_tupleerror ________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_tupleerror(self):
>       a,b = [1]
E       ValueError: not enough values to unpack (expected 2, got 1)

failure_demo.py:143: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
        l = [1,2,3]
        print ("l is %r" % l)
>       a,b = l.pop()
E       TypeError: 'int' object is not iterable

failure_demo.py:148: TypeError
--------------------------- Captured stdout call ---------------------------
l is [1, 2, 3]
_______ TestRaises.test_some_error ________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_some_error(self):
>       if namenotexi:
E       NameError: name 'namenotexi' is not defined

failure_demo.py:151: NameError
_______ test_dynamic_compile_shows_nicely ________

    def test_dynamic_compile_shows_nicely():
        src = 'def foo():\n assert 1 == 0\n'
        name = 'abc-123'
        module = py.std.imp.new_module(name)
        code = _pytest._code.compile(src, name, 'exec')
        py.builtin.exec_(code, module.__dict__)
        py.std.sys.modules[name] = module
>       module.foo()

failure_demo.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def foo():
>    assert 1 == 0
E    AssertionError

<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:163>:2: AssertionError
_______ TestMoreErrors.test_complex_error ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_complex_error(self):
        def f():
            return 44
        def g():
            return 43
>       somefunc(f(), g())

failure_demo.py:176:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:9: in somefunc
    otherfunc(x,y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 44, b = 43

    def otherfunc(a,b):
>       assert a==b
E       assert 44 == 43

failure_demo.py:6: AssertionError
_______ TestMoreErrors.test_z1_unpack_error ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_z1_unpack_error(self):
        l = []
>       a,b  = l
E       ValueError: not enough values to unpack (expected 2, got 0)

failure_demo.py:180: ValueError
_______ TestMoreErrors.test_z2_type_error ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_z2_type_error(self):
        l = 3
>       a,b  = l
E       TypeError: 'int' object is not iterable

failure_demo.py:184: TypeError
_______ TestMoreErrors.test_startswith ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_startswith(self):
        s = "123"
        g = "456"
>       assert s.startswith(g)
E       AssertionError: assert False
E        +  where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E        +    where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith

failure_demo.py:189: AssertionError
_______ TestMoreErrors.test_startswith_nested ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_startswith_nested(self):
        def f():
            return "123"
        def g():
            return "456"
>       assert f().startswith(g())
E       AssertionError: assert False
E        +  where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E        +    where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
E        +      where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E        +    and   '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()

failure_demo.py:196: AssertionError
_______ TestMoreErrors.test_global_func ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_global_func(self):
>       assert isinstance(globf(42), float)
E       assert False
E        +  where False = isinstance(43, float)
E        +    where 43 = globf(42)

failure_demo.py:199: AssertionError
_______ TestMoreErrors.test_instance ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_instance(self):
        self.x = 6*7
>       assert self.x != 42
E       assert 42 != 42
E        +  where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x

failure_demo.py:203: AssertionError
_______ TestMoreErrors.test_compare ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_compare(self):
>       assert globf(10) < 5
E       assert 11 < 5
E        +  where 11 = globf(10)

failure_demo.py:206: AssertionError
_______ TestMoreErrors.test_try_finally ________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_try_finally(self):
        x = 1
        try:
>           assert x == 0
E           assert 1 == 0

failure_demo.py:211: AssertionError
_______ TestCustomAssertMsg.test_single_line ________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_single_line(self):
        class A:
            a = 1
        b = 2
>       assert A.a == b, "A.a appears not to be b"
E       AssertionError: A.a appears not to be b
E       assert 1 == 2
E        +  where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a

failure_demo.py:222: AssertionError
_______ TestCustomAssertMsg.test_multiline ________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_multiline(self):
        class A:
            a = 1
        b = 2
>       assert A.a == b, "A.a appears not to be b\n" \
            "or does not appear to be b\none of those"
E       AssertionError: A.a appears not to be b
E         or does not appear to be b
E         one of those
E       assert 1 == 2
E        +  where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a

failure_demo.py:228: AssertionError
_______ TestCustomAssertMsg.test_custom_repr ________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_custom_repr(self):
        class JSON:
            a = 1
            def __repr__(self):
                return "This is JSON\n{\n  'foo': 'bar'\n}"
        a = JSON()
        b = 2
>       assert a.a == b, a
E       AssertionError: This is JSON
E         {
E           'foo': 'bar'
E         }
E       assert 1 == 2
E        +  where 1 = This is JSON\n{\n  'foo': 'bar'\n}.a

failure_demo.py:238: AssertionError
======= 42 failed in 0.12 seconds ========

Basic patterns and examples

Pass different values to a test function, depending on command line options

Suppose we want to write a test that depends on a command line option. Here is a basic pattern to achieve this:

# content of test_sample.py
def test_answer(cmdopt):
    if cmdopt == "type1":
        print ("first")
    elif cmdopt == "type2":
        print ("second")
    assert 0 # to see what was printed

For this to work we need to add a command line option and provide the cmdopt through a fixture function:

# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
        help="my option: type1 or type2")

@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")

Let’s run this without supplying our new option:

$ pytest -q test_sample.py
F
======= FAILURES ========
_______ test_answer ________

cmdopt = 'type1'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
        elif cmdopt == "type2":
            print ("second")
>       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
1 failed in 0.12 seconds

And now with supplying a command line option:

$ pytest -q --cmdopt=type2
F
======= FAILURES ========
_______ test_answer ________

cmdopt = 'type2'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
        elif cmdopt == "type2":
            print ("second")
>       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
1 failed in 0.12 seconds

You can see that the command line option arrived in our test. This completes the basic pattern. However, one often wants to process command line options outside of the test and pass in different or more complex objects instead.
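
As a hedged sketch of that idea, the fixture could hand the test a richer object instead of the raw string; the Config class below is made up for illustration:

# content of conftest.py  (sketch)
import pytest

class Config:  # illustrative helper, not a pytest API
    def __init__(self, opt):
        self.opt = opt
        self.verbose_mode = (opt == "type2")

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
        help="my option: type1 or type2")

@pytest.fixture
def cmdopt(request):
    return Config(request.config.getoption("--cmdopt"))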

Dynamically adding command line options

Through addopts you can statically add command line options for your project. You can also dynamically modify the command line arguments before they get processed:

# content of conftest.py
import sys
def pytest_cmdline_preparse(args):
    if 'xdist' in sys.modules: # pytest-xdist plugin
        import multiprocessing
        num = max(multiprocessing.cpu_count() // 2, 1)  # integer division: "-n" expects an int
        args[:] = ["-n", str(num)] + args

If you have the xdist plugin installed you will now always perform test runs using a number of subprocesses close to the number of your CPU cores. Running in an empty directory with the above conftest.py:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======= no tests ran in 0.12 seconds ========

Control skipping of tests according to command line option

Here is a conftest.py file adding a --runslow command line option to control skipping of slow marked tests:

# content of conftest.py

import pytest
def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true",
        help="run slow tests")

We can now write a test module like this:

# content of test_module.py
import pytest


slow = pytest.mark.skipif(
    not pytest.config.getoption("--runslow"),
    reason="need --runslow option to run"
)


def test_func_fast():
    pass


@slow
def test_func_slow():
    pass

and when running it will see a skipped “slow” test:

$ pytest -rs    # "-rs" means report details on the little 's'
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .s
======= short test summary info ========
SKIP [1] test_module.py:13: need --runslow option to run

======= 1 passed, 1 skipped in 0.12 seconds ========

Or run it including the slow marked test:

$ pytest --runslow
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py ..

======= 2 passed in 0.12 seconds ========

Writing well integrated assertion helpers

If you have a test helper function called from a test you can call pytest.fail() to fail the test with a certain message. The test support function will not show up in the traceback if you set the __tracebackhide__ option somewhere in the helper function. Example:

# content of test_checkconfig.py
import pytest
def checkconfig(x):
    __tracebackhide__ = True
    if not hasattr(x, "config"):
        pytest.fail("not configured: %s" %(x,))

def test_something():
    checkconfig(42)

The __tracebackhide__ setting influences how pytest displays tracebacks: the checkconfig function will not be shown unless the --full-trace command line option is specified. Let’s run our little function:

$ pytest -q test_checkconfig.py
F
======= FAILURES ========
_______ test_something ________

    def test_something():
>       checkconfig(42)
E       Failed: not configured: 42

test_checkconfig.py:8: Failed
1 failed in 0.12 seconds

If you only want to hide certain exceptions, you can set __tracebackhide__ to a callable which gets the ExceptionInfo object. You can for example use this to make sure unexpected exception types aren’t hidden:

import operator
import pytest

class ConfigException(Exception):
    pass

def checkconfig(x):
    __tracebackhide__ = operator.methodcaller('errisinstance', ConfigException)
    if not hasattr(x, "config"):
        raise ConfigException("not configured: %s" %(x,))

def test_something():
    checkconfig(42)

This will avoid hiding the exception traceback on unrelated exceptions (i.e. bugs in assertion helpers).

Detect if running from within a pytest run

Usually it is a bad idea to make application code behave differently if called from a test. But if you absolutely must find out if your application code is running from a test you can do something like this:

# content of conftest.py
import sys

def pytest_configure(config):
    sys._called_from_test = True

def pytest_unconfigure(config):
    del sys._called_from_test

and then check for the sys._called_from_test flag:

import sys

if hasattr(sys, '_called_from_test'):
    # called from within a test run
    pass
else:
    # called "normally"
    pass

accordingly in your application. It’s also a good idea to use your own application module rather than sys for handling the flag.
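
A hedged sketch of that variant; the myapp._config module is a hypothetical part of your application, not something pytest provides:

# content of conftest.py  (sketch)
import myapp._config  # hypothetical application module

def pytest_configure(config):
    myapp._config.called_from_test = True

def pytest_unconfigure(config):
    myapp._config.called_from_test = False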

Adding info to test report header

It’s easy to present extra information in a pytest run:

# content of conftest.py

def pytest_report_header(config):
    return "project deps: mylib-1.1"

which will add the string to the test header accordingly:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======= no tests ran in 0.12 seconds ========

It is also possible to return a list of strings which will be considered as several lines of information. You may consider config.getoption('verbose') in order to display more information if applicable:

# content of conftest.py

def pytest_report_header(config):
    if config.getoption('verbose') > 0:
        return ["info1: did you know that ...", "did you?"]

which will add info only when run with "-v":

$ pytest -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
info1: did you know that ...
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 0 items

======= no tests ran in 0.12 seconds ========

and nothing when run plainly:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======= no tests ran in 0.12 seconds ========

profiling test duration

If you have a large, slow-running test suite you might want to find out which tests are the slowest. Let’s make an artificial test suite:

# content of test_some_are_slow.py
import time

def test_funcfast():
    pass

def test_funcslow1():
    time.sleep(0.1)

def test_funcslow2():
    time.sleep(0.2)

Now we can profile which test functions execute the slowest:

$ pytest --durations=3
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_some_are_slow.py ...

======= slowest 3 test durations ========
0.20s call     test_some_are_slow.py::test_funcslow2
0.10s call     test_some_are_slow.py::test_funcslow1
0.00s setup    test_some_are_slow.py::test_funcfast
======= 3 passed in 0.12 seconds ========

incremental testing - test steps

Sometimes you may have a testing situation which consists of a series of test steps. If one step fails it makes no sense to execute further steps as they are all expected to fail anyway and their tracebacks add no insight. Here is a simple conftest.py file which introduces an incremental marker which is to be used on classes:

# content of conftest.py

import pytest

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" %previousfailed.name)

These two hook implementations work together to abort incremental-marked tests in a class. Here is a test module example:

# content of test_step.py

import pytest

@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass
    def test_modification(self):
        assert 0
    def test_deletion(self):
        pass

def test_normal():
    pass

If we run this:

$ pytest -rx
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_step.py .Fx.
======= short test summary info ========
XFAIL test_step.py::TestUserHandling::()::test_deletion
  reason: previous test failed (test_modification)

======= FAILURES ========
_______ TestUserHandling.test_modification ________

self = <test_step.TestUserHandling object at 0xdeadbeef>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:9: AssertionError
======= 1 failed, 2 passed, 1 xfailed in 0.12 seconds ========

We’ll see that test_deletion was not executed because test_modification failed. It is reported as an “expected failure”.

Package/Directory-level fixtures (setups)

If you have nested test directories, you can have per-directory fixture scopes by placing fixture functions in a conftest.py file in that directory. You can use all types of fixtures including autouse fixtures which are the equivalent of xUnit’s setup/teardown concept. It’s however recommended to have explicit fixture references in your tests or test classes rather than relying on implicitly executing setup/teardown functions, especially if they are far away from the actual tests.

Here is an example for making a db fixture available in a directory:

# content of a/conftest.py
import pytest

class DB:
    pass

@pytest.fixture(scope="session")
def db():
    return DB()

and then a test module in that directory:

# content of a/test_db.py
def test_a1(db):
    assert 0, db  # to show value

another test module:

# content of a/test_db2.py
def test_a2(db):
    assert 0, db  # to show value

and then a module in a sister directory which will not see the db fixture:

# content of b/test_error.py
def test_root(db):  # no db here, will error out
    pass

We can run this:

$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items

test_step.py .Fx.
a/test_db.py F
a/test_db2.py F
b/test_error.py E

======= ERRORS ========
_______ ERROR at setup of test_root ________
file $REGENDOC_TMPDIR/b/test_error.py, line 1
  def test_root(db):  # no db here, will error out
E       fixture 'db' not found
>       available fixtures: cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
>       use 'pytest --fixtures [testpath]' for help on them.

$REGENDOC_TMPDIR/b/test_error.py:1
======= FAILURES ========
_______ TestUserHandling.test_modification ________

self = <test_step.TestUserHandling object at 0xdeadbeef>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:9: AssertionError
_______ test_a1 ________

db = <conftest.DB object at 0xdeadbeef>

    def test_a1(db):
>       assert 0, db  # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef>
E       assert 0

a/test_db.py:2: AssertionError
_______ test_a2 ________

db = <conftest.DB object at 0xdeadbeef>

    def test_a2(db):
>       assert 0, db  # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef>
E       assert 0

a/test_db2.py:2: AssertionError
======= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ========

The two test modules in the a directory see the same db fixture instance while the one test in the sister directory b doesn’t see it. We could of course also define a db fixture in that sister directory’s conftest.py file. Note that each fixture is only instantiated if there is a test actually needing it (unless you use an “autouse” fixture, which is always executed ahead of the first test that runs).
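
For completeness, here is a hedged sketch of such an autouse fixture; the fixture name and the file it writes are made up for illustration:

# content of a/conftest.py  (sketch)
import pytest

@pytest.fixture(autouse=True)
def prepare_environment(tmpdir):
    # runs automatically before each test in this directory and below
    tmpdir.join("marker.txt").write("prepared")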

post-process test reports / failures

If you want to postprocess test reports and need access to the executing environment, you can implement a hook that gets called when the test “report” object is about to be created. Here we write out all failing test calls and also access a fixture (if it was used by the test) in case you want to query/look at it during your post processing. In our case we just write some information out to a failures file:

# content of conftest.py

import pytest
import os.path

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()

    # we only look at actual failing test calls, not setup/teardown
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode) as f:
            # let's also access a fixture for the fun of it
            if "tmpdir" in item.fixturenames:
                extra = " (%s)" % item.funcargs["tmpdir"]
            else:
                extra = ""

            f.write(rep.nodeid + extra + "\n")

if you then have failing tests:

# content of test_module.py
def test_fail1(tmpdir):
    assert 0
def test_fail2():
    assert 0

and run them:

$ pytest test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py FF

======= FAILURES ========
_______ test_fail1 ________

tmpdir = local('PYTEST_TMPDIR/test_fail10')

    def test_fail1(tmpdir):
>       assert 0
E       assert 0

test_module.py:2: AssertionError
_______ test_fail2 ________

    def test_fail2():
>       assert 0
E       assert 0

test_module.py:4: AssertionError
======= 2 failed in 0.12 seconds ========

you will have a “failures” file which contains the failing test ids:

$ cat failures
test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10)
test_module.py::test_fail2

Making test result information available in fixtures

If you want to make test result reports available in fixture finalizers here is a little example implemented via a local plugin:

# content of conftest.py

import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()

    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"

    setattr(item, "rep_" + rep.when, rep)


@pytest.fixture
def something(request):
    yield
    # request.node is an "item" because we use the default
    # "function" scope
    if request.node.rep_setup.failed:
        print ("setting up a test failed!", request.node.nodeid)
    elif request.node.rep_setup.passed:
        if request.node.rep_call.failed:
            print ("executing test failed", request.node.nodeid)

if you then have failing tests:

# content of test_module.py

import pytest

@pytest.fixture
def other():
    assert 0

def test_setup_fails(something, other):
    pass

def test_call_fails(something):
    assert 0

def test_fail2():
    assert 0

and run it:

$ pytest -s test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F

======= ERRORS ========
_______ ERROR at setup of test_setup_fails ________

    @pytest.fixture
    def other():
>       assert 0
E       assert 0

test_module.py:6: AssertionError
======= FAILURES ========
_______ test_call_fails ________

something = None

    def test_call_fails(something):
>       assert 0
E       assert 0

test_module.py:12: AssertionError
_______ test_fail2 ________

    def test_fail2():
>       assert 0
E       assert 0

test_module.py:15: AssertionError
======= 2 failed, 1 error in 0.12 seconds ========

You’ll see that the fixture finalizers could use the precise reporting information.

Freezing pytest

If you freeze your application using a tool like PyInstaller in order to distribute it to your end-users, it is a good idea to also package your test runner and run your tests using the frozen application. This way packaging errors such as dependencies not being included into the executable can be detected early, while also allowing you to send test files to users so they can run them on their machines, which can be useful to obtain more information about a hard-to-reproduce bug.

Fortunately recent PyInstaller releases already have a custom hook for pytest, but if you are using another tool to freeze executables such as cx_freeze or py2exe, you can use pytest.freeze_includes() to obtain the full list of internal pytest modules. How to configure the tools to find the internal modules varies from tool to tool, however.
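
As a hedged sketch for cx_Freeze, you might pass the result of pytest.freeze_includes() to its includes option; the exact option names can vary between cx_Freeze versions:

# content of setup.py  (sketch)
from cx_Freeze import setup, Executable
import pytest

setup(
    name="app_main",
    executables=[Executable("app_main.py")],
    options={"build_exe": {"includes": pytest.freeze_includes()}},
)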

Instead of freezing the pytest runner as a separate executable, you can make your frozen program work as the pytest runner by some clever argument handling during program startup. This allows you to have a single executable, which is usually more convenient.

# contents of app_main.py
import sys

if len(sys.argv) > 1 and sys.argv[1] == '--pytest':
    import pytest
    sys.exit(pytest.main(sys.argv[2:]))
else:
    # normal application execution: at this point argv can be parsed
    # by your argument-parsing library of choice as usual
    ...

This allows you to execute tests using the frozen application with standard pytest command-line options:

./app_main --pytest --verbose --tb=long --junitxml=results.xml test-suite/

Parametrizing tests

pytest allows you to easily parametrize test functions. For basic docs, see Parametrizing fixtures and test functions.

In the following we provide some examples using the builtin mechanisms.

Generating parameter combinations, depending on command line

Let’s say we want to execute a test with different computation parameters and the parameter range shall be determined by a command line argument. Let’s first write a simple (do-nothing) computation test:

# content of test_compute.py

def test_compute(param1):
    assert param1 < 4

Now we add a test configuration like this:

# content of conftest.py

def pytest_addoption(parser):
    parser.addoption("--all", action="store_true",
        help="run all combinations")

def pytest_generate_tests(metafunc):
    if 'param1' in metafunc.fixturenames:
        if metafunc.config.option.all:
            end = 5
        else:
            end = 2
        metafunc.parametrize("param1", range(end))

This means that we only run 2 tests if we do not pass --all:

$ pytest -q test_compute.py
..
2 passed in 0.12 seconds

We run only two computations, so we see two dots. Let’s run the full monty:

$ pytest -q --all
....F
======= FAILURES ========
_______ test_compute[4] ________

param1 = 4

    def test_compute(param1):
>       assert param1 < 4
E       assert 4 < 4

test_compute.py:3: AssertionError
1 failed, 4 passed in 0.12 seconds

As expected when running the full range of param1 values we’ll get a failure on the last one.

Different options for test IDs

pytest will build a string that is the test ID for each set of values in a parametrized test. These IDs can be used with -k to select specific cases to run, and they will also identify the specific case when one is failing. Running pytest with --collect-only will show the generated IDs.

Numbers, strings, booleans and None will have their usual string representation used in the test ID. For other objects, pytest will make a string based on the argument name:

# content of test_time.py

import pytest

from datetime import datetime, timedelta

testdata = [
    (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
    (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]


@pytest.mark.parametrize("a,b,expected", testdata)
def test_timedistance_v0(a, b, expected):
    diff = a - b
    assert diff == expected


@pytest.mark.parametrize("a,b,expected", testdata, ids=["forward", "backward"])
def test_timedistance_v1(a, b, expected):
    diff = a - b
    assert diff == expected


def idfn(val):
    if isinstance(val, (datetime,)):
        # note this wouldn't show any hours/minutes/seconds
        return val.strftime('%Y%m%d')


@pytest.mark.parametrize("a,b,expected", testdata, ids=idfn)
def test_timedistance_v2(a, b, expected):
    diff = a - b
    assert diff == expected

In test_timedistance_v0, we let pytest generate the test IDs.

In test_timedistance_v1, we specified ids as a list of strings which were used as the test IDs. These are succinct, but can be a pain to maintain.

In test_timedistance_v2, we specified ids as a function that can generate a string representation to make part of the test ID. So our datetime values use the label generated by idfn, but because we didn’t generate a label for timedelta objects, they are still using the default pytest representation:

$ pytest test_time.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 6 items
<Module 'test_time.py'>
  <Function 'test_timedistance_v0[a0-b0-expected0]'>
  <Function 'test_timedistance_v0[a1-b1-expected1]'>
  <Function 'test_timedistance_v1[forward]'>
  <Function 'test_timedistance_v1[backward]'>
  <Function 'test_timedistance_v2[20011212-20011211-expected0]'>
  <Function 'test_timedistance_v2[20011211-20011212-expected1]'>

======= no tests ran in 0.12 seconds ========

A quick port of “testscenarios”

Here is a quick port to run tests configured with test scenarios, an add-on from Robert Collins for the standard unittest framework. We only have to work a bit to construct the correct arguments for pytest’s Metafunc.parametrize():

# content of test_scenarios.py

def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    for scenario in metafunc.cls.scenarios:
        idlist.append(scenario[0])
        items = scenario[1].items()
        argnames = [x[0] for x in items]
        argvalues.append(([x[1] for x in items]))
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)

This is a fully self-contained example which you can run with:

$ pytest test_scenarios.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_scenarios.py ....

======= 4 passed in 0.12 seconds ========

If you just collect tests you’ll also nicely see ‘advanced’ and ‘basic’ as variants for the test function:

$ pytest --collect-only test_scenarios.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
<Module 'test_scenarios.py'>
  <Class 'TestSampleWithScenarios'>
    <Instance '()'>
      <Function 'test_demo1[basic]'>
      <Function 'test_demo2[basic]'>
      <Function 'test_demo1[advanced]'>
      <Function 'test_demo2[advanced]'>

======= no tests ran in 0.12 seconds ========

Note that we told metafunc.parametrize() that the scenario values should be considered class-scoped. With pytest-2.3 this leads to a resource-based ordering.

Deferring the setup of parametrized resources

The parametrization of test functions happens at collection time. It is a good idea to set up expensive resources like DB connections or subprocesses only when the actual test is run. Here is a simple example of how you can achieve that. First, the actual test requiring a db object:

# content of test_backends.py

import pytest
def test_db_initialized(db):
    # a dummy test
    if db.__class__.__name__ == "DB2":
        pytest.fail("deliberately failing for demo purposes")

We can now add a test configuration that generates two invocations of the test_db_initialized function and also implements a factory that creates a database object for the actual test invocations:

# content of conftest.py
import pytest

def pytest_generate_tests(metafunc):
    if 'db' in metafunc.fixturenames:
        metafunc.parametrize("db", ['d1', 'd2'], indirect=True)

class DB1:
    "one database object"
class DB2:
    "alternative database object"

@pytest.fixture
def db(request):
    if request.param == "d1":
        return DB1()
    elif request.param == "d2":
        return DB2()
    else:
        raise ValueError("invalid internal test config")

Let’s first see how it looks at collection time:

$ pytest test_backends.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
<Module 'test_backends.py'>
  <Function 'test_db_initialized[d1]'>
  <Function 'test_db_initialized[d2]'>

======= no tests ran in 0.12 seconds ========

And then when we run the test:

$ pytest -q test_backends.py
.F
======= FAILURES ========
_______ test_db_initialized[d2] ________

db = <conftest.DB2 object at 0xdeadbeef>

    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
>           pytest.fail("deliberately failing for demo purposes")
E           Failed: deliberately failing for demo purposes

test_backends.py:6: Failed
1 failed, 1 passed in 0.12 seconds

The first invocation with db == "DB1" passed while the second with db == "DB2" failed. Our db fixture function has instantiated each of the DB values during the setup phase while pytest_generate_tests generated two corresponding calls to test_db_initialized during the collection phase.

Apply indirect on particular arguments

Very often parametrization uses more than one argument name. You can apply the indirect parameter to particular arguments by passing a list or tuple of argument names to indirect. In the example below there is a function test_indirect which uses two fixtures: x and y. Here we give indirect a list which contains the name of the fixture x. The indirect parameter will be applied to this argument only, and the value a will be passed to the respective fixture function:

# content of test_indirect_list.py

import pytest
@pytest.fixture(scope='function')
def x(request):
    return request.param * 3

@pytest.fixture(scope='function')
def y(request):
    return request.param * 2

@pytest.mark.parametrize('x, y', [('a', 'b')], indirect=['x'])
def test_indirect(x,y):
    assert x == 'aaa'
    assert y == 'b'

The result of this test will be successful:

$ pytest test_indirect_list.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
<Module 'test_indirect_list.py'>
  <Function 'test_indirect[a-b]'>

======= no tests ran in 0.12 seconds ========

Parametrizing test methods through per-class configuration

Here is an example pytest_generate_tests function implementing a parametrization scheme similar to Michael Foord’s unittest parametrizer but in a lot less code:

# content of ./test_parametrize.py
import pytest

def pytest_generate_tests(metafunc):
    # called once per each test function
    funcarglist = metafunc.cls.params[metafunc.function.__name__]
    argnames = sorted(funcarglist[0])
    metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
            for funcargs in funcarglist])

class TestClass:
    # a map specifying multiple argument sets for a test method
    params = {
        'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
        'test_zerodivision': [dict(a=1, b=0), ],
    }

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        pytest.raises(ZeroDivisionError, "a/b")

Our test generator looks up a class-level definition which specifies which argument sets to use for each test function. Let’s run it:

$ pytest -q
F..
======= FAILURES ========
_______ TestClass.test_equals[1-2] ________

self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2

    def test_equals(self, a, b):
>       assert a == b
E       assert 1 == 2

test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.12 seconds

Indirect parametrization with multiple fixtures

Here is a stripped down real-life example of using parametrized testing for testing serialization of objects between different python interpreters. We define a test_basic_objects function which is to be run with different sets of arguments for its three arguments:

  • python1: first python interpreter, run to pickle-dump an object to a file
  • python2: second interpreter, run to pickle-load an object from a file
  • obj: object to be dumped/loaded
"""
module containing parametrized tests for cross-python
serialization via the pickle module.
"""
import py
import pytest
import _pytest._code

pythonlist = ['python2.6', 'python2.7', 'python3.4', 'python3.5']
@pytest.fixture(params=pythonlist)
def python1(request, tmpdir):
    picklefile = tmpdir.join("data.pickle")
    return Python(request.param, picklefile)

@pytest.fixture(params=pythonlist)
def python2(request, python1):
    return Python(request.param, python1.picklefile)

class Python:
    def __init__(self, version, picklefile):
        self.pythonpath = py.path.local.sysfind(version)
        if not self.pythonpath:
            pytest.skip("%r not found" %(version,))
        self.picklefile = picklefile
    def dumps(self, obj):
        dumpfile = self.picklefile.dirpath("dump.py")
        dumpfile.write(_pytest._code.Source("""
            import pickle
            f = open(%r, 'wb')
            s = pickle.dump(%r, f, protocol=2)
            f.close()
        """ % (str(self.picklefile), obj)))
        py.process.cmdexec("%s %s" %(self.pythonpath, dumpfile))

    def load_and_is_true(self, expression):
        loadfile = self.picklefile.dirpath("load.py")
        loadfile.write(_pytest._code.Source("""
            import pickle
            f = open(%r, 'rb')
            obj = pickle.load(f)
            f.close()
            res = eval(%r)
            if not res:
                raise SystemExit(1)
        """ % (str(self.picklefile), expression)))
        print (loadfile)
        py.process.cmdexec("%s %s" %(self.pythonpath, loadfile))

@pytest.mark.parametrize("obj", [42, {}, {1:3},])
def test_basic_objects(python1, python2, obj):
    python1.dumps(obj)
    python2.load_and_is_true("obj == %s" % obj)

Running it results in some skips if we don’t have all the python interpreters installed and otherwise runs all combinations (4 interpreters times 4 interpreters times 3 objects to serialize/deserialize, i.e. 48 test runs):

. $ pytest -rs -q multipython.py
sssssssssssssss.........sss.........sss.........
======= short test summary info ========
SKIP [21] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python2.6' not found
27 passed, 21 skipped in 0.12 seconds

Indirect parametrization of optional implementations/imports

If you want to compare the outcomes of several implementations of a given API, you can write test functions that receive the already imported implementations and get skipped in case the implementation is not importable/available. Let’s say we have a “base” implementation and the others (possibly optimized) need to provide similar results:

# content of conftest.py

import pytest

@pytest.fixture(scope="session")
def basemod(request):
    return pytest.importorskip("base")

@pytest.fixture(scope="session", params=["opt1", "opt2"])
def optmod(request):
    return pytest.importorskip(request.param)

And then a base implementation of a simple function:

# content of base.py
def func1():
    return 1

And an optimized version:

# content of opt1.py
def func1():
    return 1.0001

And finally a little test module:

# content of test_module.py

def test_func1(basemod, optmod):
    assert round(basemod.func1(), 3) == round(optmod.func1(), 3)

If you run this with reporting for skips enabled:

$ pytest -rs test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .s
======= short test summary info ========
SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 'opt2'

======= 1 passed, 1 skipped in 0.12 seconds ========

You’ll see that we don’t have an opt2 module and thus the second test run of our test_func1 was skipped. A few notes:

  • the fixture functions in the conftest.py file are “session-scoped” because we don’t need to import more than once
  • if you have multiple test functions and a skipped import, you will see the [1] count increasing in the report
  • you can put @pytest.mark.parametrize style parametrization on the test functions to parametrize input/output values as well (see the sketch after this list).
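
A hedged sketch of that last point, combining the session-scoped implementation fixtures with value parametrization (the precision values and the test name are arbitrary):

# content of test_module.py  (sketch)
import pytest

@pytest.mark.parametrize("precision", [2, 3])
def test_func1_rounded(basemod, optmod, precision):
    assert round(basemod.func1(), precision) == round(optmod.func1(), precision)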

Working with custom markers

Here are some examples using the Marking test functions with attributes mechanism.

Marking test functions and selecting them for a run

You can “mark” a test function with custom metadata like this:

# content of test_server.py

import pytest
@pytest.mark.webtest
def test_send_http():
    pass # perform some webtest test for your app
def test_something_quick():
    pass
def test_another():
    pass
class TestClass:
    def test_method(self):
        pass

New in version 2.2.

You can then restrict a test run to only run tests marked with webtest:

$ pytest -v -m webtest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items

test_server.py::test_send_http PASSED

======= 3 tests deselected ========
======= 1 passed, 3 deselected in 0.12 seconds ========

Or the inverse, running all tests except the webtest ones:

$ pytest -v -m "not webtest"
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items

test_server.py::test_something_quick PASSED
test_server.py::test_another PASSED
test_server.py::TestClass::test_method PASSED

======= 1 tests deselected ========
======= 3 passed, 1 deselected in 0.12 seconds ========

Selecting tests based on their node ID

You can provide one or more node IDs as positional arguments to select only specified tests. This makes it easy to select tests based on their module, class, method, or function name:

$ pytest -v test_server.py::TestClass::test_method
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 5 items

test_server.py::TestClass::test_method PASSED

======= 1 passed in 0.12 seconds ========

You can also select on the class:

$ pytest -v test_server.py::TestClass
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items

test_server.py::TestClass::test_method PASSED

======= 1 passed in 0.12 seconds ========

Or select multiple nodes:

$ pytest -v test_server.py::TestClass test_server.py::test_send_http
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items

test_server.py::TestClass::test_method PASSED
test_server.py::test_send_http PASSED

======= 2 passed in 0.12 seconds ========

Note

Node IDs are of the form module.py::class::method or module.py::function. Node IDs control which tests are collected, so module.py::class will select all test methods on the class. Nodes are also created for each parameter of a parametrized fixture or test, so selecting a parametrized test must include the parameter value, e.g. module.py::function[param].

Node IDs for failing tests are displayed in the test summary info when running pytest with the -rf option. You can also construct Node IDs from the output of pytest --collect-only.
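
As a minimal sketch of selecting one parameter of a parametrized test (module name and values are made up):

# content of test_param_select.py
import pytest

@pytest.mark.parametrize("letter", ["a", "b"])
def test_letter(letter):
    assert letter in "ab"

Running pytest -v "test_param_select.py::test_letter[a]" then runs only the “a” variant; quoting the node ID can be necessary because some shells treat square brackets specially.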

Using -k expr to select tests based on their name

You can use the -k command line option to specify an expression which implements a substring match on the test names instead of the exact match on markers that -m provides. This makes it easy to select tests based on their names:

$ pytest -v -k http  # running with the above defined example module
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items

test_server.py::test_send_http PASSED

======= 3 tests deselected ========
======= 1 passed, 3 deselected in 0.12 seconds ========

And you can also run all tests except the ones that match the keyword:

$ pytest -k "not send_http" -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items

test_server.py::test_something_quick PASSED
test_server.py::test_another PASSED
test_server.py::TestClass::test_method PASSED

======= 1 tests deselected ========
======= 3 passed, 1 deselected in 0.12 seconds ========

Or to select “http” and “quick” tests:

$ pytest -k "http or quick" -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items

test_server.py::test_send_http PASSED
test_server.py::test_something_quick PASSED

======= 2 tests deselected ========
======= 2 passed, 2 deselected in 0.12 seconds ========

Note

If you are using expressions such as “X and Y” then both X and Y need to be simple non-keyword names. For example, “pass” or “from” will result in SyntaxErrors because “-k” evaluates the expression.

However, if the “-k” argument is a simple string, no such restrictions apply. Also “-k ‘not STRING’” has no restrictions. You can also specify numbers like “-k 1.3” to match tests which are parametrized with the float “1.3”.
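
As a small sketch of the number matching (hypothetical module and values):

# content of test_float_params.py
import pytest

@pytest.mark.parametrize("value", [1.3, 2.5])
def test_value(value):
    assert value > 1

Running pytest -k 1.3 -v should select test_value[1.3] and deselect test_value[2.5].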

Registering markers

New in version 2.2.

Registering markers for your test suite is simple:

# content of pytest.ini
[pytest]
markers =
    webtest: mark a test as a webtest.

You can ask which markers exist for your test suite - the list includes our just defined webtest marker:

$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value.  Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. See http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.

For an example on how to add and work with markers from a plugin, see Custom marker and command line option to control test runs.

Note

It is recommended to explicitly register markers so that:

  • there is one place in your test suite defining your markers
  • asking for existing markers via pytest --markers gives good output
  • typos in function markers are treated as an error if you use the --strict option (see the sketch below). Future versions of pytest are probably going to start treating non-registered markers as errors at some point.
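
For example, a sketch of an ini-file combining marker registration with --strict (the slowtest marker is made up):

# content of pytest.ini
[pytest]
addopts = --strict
markers =
    webtest: mark a test as a webtest.
    slowtest: mark a test as slow.

With this in place, a typo such as @pytest.mark.webtset on a test function makes the run error out instead of silently creating a new marker.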

Marking whole classes or modules

You may use pytest.mark decorators with classes to apply markers to all of its test methods:

# content of test_mark_classlevel.py
import pytest
@pytest.mark.webtest
class TestClass:
    def test_startup(self):
        pass
    def test_startup_and_more(self):
        pass

This is equivalent to directly applying the decorator to the two test functions.

To remain backward-compatible with Python 2.4 you can also set a pytestmark attribute on a TestClass like this:

import pytest

class TestClass:
    pytestmark = pytest.mark.webtest

or if you need to use multiple markers you can use a list:

import pytest

class TestClass:
    pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]

You can also set a module level marker:

import pytest
pytestmark = pytest.mark.webtest

in which case it will be applied to all functions and methods defined in the module.

Marking individual tests when using parametrize

When using parametrize, applying a mark will make it apply to each individual test. However it is also possible to apply a marker to an individual test instance:

import pytest

@pytest.mark.foo
@pytest.mark.parametrize(("n", "expected"), [
    (1, 2),
    pytest.mark.bar((1, 3)),
    (2, 3),
])
def test_increment(n, expected):
     assert n + 1 == expected

In this example the mark “foo” will apply to each of the three tests, whereas the “bar” mark is only applied to the second test. Skip and xfail marks can also be applied in this way, see Skip/xfail with parametrize.

Note

If the data you are parametrizing happen to be single callables, you need to be careful when marking these items. pytest.mark.xfail(my_func) won’t work because it’s also the signature of a function being decorated. To resolve this ambiguity, you need to pass a reason argument: pytest.mark.xfail(func_bar, reason="Issue#7"), as sketched below.
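
A minimal sketch of the workaround (func_ok and func_bar are made-up callables):

import pytest

def func_ok():
    return 1

def func_bar():
    raise NotImplementedError("Issue#7")

@pytest.mark.parametrize("func", [
    func_ok,
    # passing reason= disambiguates: this marks the parameter as xfail
    # rather than decorating func_bar itself
    pytest.mark.xfail(func_bar, reason="Issue#7"),
])
def test_callable_returns_one(func):
    assert func() == 1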

Custom marker and command line option to control test runs

Plugins can provide custom markers and implement specific behaviour based on them. This is a self-contained example which adds a command line option and a parametrized test function marker to run tests specified via named environments:

# content of conftest.py

import pytest
def pytest_addoption(parser):
    parser.addoption("-E", action="store", metavar="NAME",
        help="only run tests matching the environment NAME.")

def pytest_configure(config):
    # register an additional marker
    config.addinivalue_line("markers",
        "env(name): mark test to run only on named environment")

def pytest_runtest_setup(item):
    envmarker = item.get_marker("env")
    if envmarker is not None:
        envname = envmarker.args[0]
        if envname != item.config.getoption("-E"):
            pytest.skip("test requires env %r" % envname)

A test file using this local plugin:

# content of test_someenv.py

import pytest
@pytest.mark.env("stage1")
def test_basic_db_operation():
    pass

and an example invocation specifying a different environment than what the test needs:

$ pytest -E stage2
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items

test_someenv.py s

======= 1 skipped in 0.12 seconds ========

and here is one that specifies exactly the environment needed:

$ pytest -E stage1
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items

test_someenv.py .

======= 1 passed in 0.12 seconds ========

The --markers option always gives you a list of available markers:

$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value.  Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. See http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.

Reading markers which were set from multiple places

If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times to a test function. From plugin code you can read over all such settings. Example:

# content of test_mark_three_times.py
import pytest
pytestmark = pytest.mark.glob("module", x=1)

@pytest.mark.glob("class", x=2)
class TestClass:
    @pytest.mark.glob("function", x=3)
    def test_something(self):
        pass

Here we have the marker “glob” applied three times to the same test function. From a conftest file we can read it like this:

# content of conftest.py
import sys

def pytest_runtest_setup(item):
    g = item.get_marker("glob")
    if g is not None:
        for info in g:
            print ("glob args=%s kwargs=%s" %(info.args, info.kwargs))
            sys.stdout.flush()

Let’s run this without capturing output and see what we get:

$ pytest -q -s
glob args=('function',) kwargs={'x': 3}
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
.
1 passed in 0.12 seconds

marking platform specific tests with pytest

Consider you have a test suite which marks tests for particular platforms, namely pytest.mark.darwin, pytest.mark.win32 etc. and you also have tests that run on all platforms and have no specific marker. If you now want to have a way to only run the tests for your particular platform, you could use the following plugin:

# content of conftest.py
#
import sys
import pytest

ALL = set("darwin linux win32".split())

def pytest_runtest_setup(item):
    if isinstance(item, pytest.Function):
        plat = sys.platform
        if not item.get_marker(plat):
            if ALL.intersection(item.keywords):
                pytest.skip("cannot run on platform %s" %(plat))

then tests will be skipped if they were specified for a different platform. Let’s write a little test file to show what this looks like:

# content of test_plat.py

import pytest

@pytest.mark.darwin
def test_if_apple_is_evil():
    pass

@pytest.mark.linux
def test_if_linux_works():
    pass

@pytest.mark.win32
def test_if_win32_crashes():
    pass

def test_runs_everywhere():
    pass

then you will see two tests skipped and two executed tests as expected:

$ pytest -rs # this option reports skip reasons
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_plat.py s.s.
======= short test summary info ========
SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux

======= 2 passed, 2 skipped in 0.12 seconds ========

Note that if you specify a platform via the -m marker command line option like this:

$ pytest -m linux
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_plat.py .

======= 3 tests deselected ========
======= 1 passed, 3 deselected in 0.12 seconds ========

then the unmarked tests will not be run. It is thus a way to restrict the run to specific tests.

Automatically adding markers based on test names

If you have a test suite where test function names indicate a certain type of test, you can implement a hook that automatically defines markers so that you can use the -m option with it. Let’s look at this test module:

# content of test_module.py

def test_interface_simple():
    assert 0

def test_interface_complex():
    assert 0

def test_event_simple():
    assert 0

def test_something_else():
    assert 0

We want to dynamically define two markers and can do it in a conftest.py plugin:

# content of conftest.py

import pytest
def pytest_collection_modifyitems(items):
    for item in items:
        if "interface" in item.nodeid:
            item.add_marker(pytest.mark.interface)
        elif "event" in item.nodeid:
            item.add_marker(pytest.mark.event)

We can now use the -m option to select one set:

$ pytest -m interface --tb=short
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_module.py FF

======= FAILURES ========
_______ test_interface_simple ________
test_module.py:3: in test_interface_simple
    assert 0
E   assert 0
_______ test_interface_complex ________
test_module.py:6: in test_interface_complex
    assert 0
E   assert 0
======= 2 tests deselected ========
======= 2 failed, 2 deselected in 0.12 seconds ========

or to select both “event” and “interface” tests:

$ pytest -m "interface or event" --tb=short
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_module.py FFF

======= FAILURES ========
_______ test_interface_simple ________
test_module.py:3: in test_interface_simple
    assert 0
E   assert 0
_______ test_interface_complex ________
test_module.py:6: in test_interface_complex
    assert 0
E   assert 0
_______ test_event_simple ________
test_module.py:9: in test_event_simple
    assert 0
E   assert 0
======= 1 tests deselected ========
======= 3 failed, 1 deselected in 0.12 seconds ========

A session-fixture which can look at all collected tests

A session-scoped fixture effectively has access to all collected test items. Here is an example of a fixture function which walks all collected tests, checks whether their test class defines a callme method and, if so, calls it:

# content of conftest.py

import pytest

@pytest.fixture(scope="session", autouse=True)
def callattr_ahead_of_alltests(request):
    print ("callattr_ahead_of_alltests called")
    seen = set([None])
    session = request.node
    for item in session.items:
        cls = item.getparent(pytest.Class)
        if cls not in seen:
            if hasattr(cls.obj, "callme"):
                cls.obj.callme()
            seen.add(cls)

test classes may now define a callme method which will be called ahead of running any tests:

# content of test_module.py

class TestHello:
    @classmethod
    def callme(cls):
        print ("callme called!")

    def test_method1(self):
        print ("test_method1 called")

    def test_method2(self):
        print ("test_method1 called")

class TestOther:
    @classmethod
    def callme(cls):
        print ("callme other called")
    def test_other(self):
        print ("test other")

# works with unittest as well ...
import unittest

class SomeTest(unittest.TestCase):
    @classmethod
    def callme(self):
        print ("SomeTest callme called")

    def test_unit1(self):
        print ("test_unit1 method called")

If you run this without output capturing:

$ pytest -q -s test_module.py
callattr_ahead_of_alltests called
callme called!
callme other called
SomeTest callme called
test_method1 called
.test_method2 called
.test other
.test_unit1 method called
.
4 passed in 0.12 seconds

Changing standard (Python) test discovery

Ignore paths during test collection

You can easily ignore certain test directories and modules during collection by passing the --ignore=path option on the cli. pytest allows multiple --ignore options. Example:

tests/
|-- example
|   |-- test_example_01.py
|   |-- test_example_02.py
|   '-- test_example_03.py
|-- foobar
|   |-- test_foobar_01.py
|   |-- test_foobar_02.py
|   '-- test_foobar_03.py
'-- hello
    '-- world
        |-- test_world_01.py
        |-- test_world_02.py
        '-- test_world_03.py

Now if you invoke pytest with --ignore=tests/foobar/test_foobar_03.py --ignore=tests/hello/, you will see that pytest only collects the test modules which do not match the specified patterns:

$ pytest --ignore=tests/foobar/test_foobar_03.py --ignore=tests/hello/
========= test session starts ==========
platform darwin -- Python 2.7.10, pytest-2.8.2, py-1.4.30, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 5 items

tests/example/test_example_01.py .
tests/example/test_example_02.py .
tests/example/test_example_03.py .
tests/foobar/test_foobar_01.py .
tests/foobar/test_foobar_02.py .

======= 5 passed in 0.02 seconds =======

Keeping duplicate paths specified from command line

Default behavior of pytest is to ignore duplicate paths specified from the command line. Example:

py.test path_a path_a

...
collected 1 item
...

pytest just collects the tests once.

To collect duplicate tests, use the --keep-duplicates option on the cli. Example:

py.test --keep-duplicates path_a path_a

...
collected 2 items
...

As the collector just works on directories, if you specify a single test file twice, pytest will still collect it twice, even if --keep-duplicates is not specified. Example:

py.test test_a.py test_a.py

...
collected 2 items
...

Changing directory recursion

You can set the norecursedirs option in an ini-file, for example your pytest.ini in the project root directory:

# content of pytest.ini
[pytest]
norecursedirs = .svn _build tmp*

This would tell pytest to not recurse into typical subversion or sphinx-build directories or into any tmp prefixed directory.

Changing naming conventions

You can configure different naming conventions by setting the python_files, python_classes and python_functions configuration options. Example:

# content of pytest.ini
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files=check_*.py
python_classes=Check
python_functions=*_check

This would make pytest look for tests in files that match the check_*.py glob-pattern, classes prefixed with Check, and functions and methods that match *_check. For example, if we have:

# content of check_myapp.py
class CheckMyApp:
    def simple_check(self):
        pass
    def complex_check(self):
        pass

then the test collection looks like this:

$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 2 items
<Module 'check_myapp.py'>
  <Class 'CheckMyApp'>
    <Instance '()'>
      <Function 'simple_check'>
      <Function 'complex_check'>

======= no tests ran in 0.12 seconds ========

Note

the python_functions and python_classes options have no effect for unittest.TestCase test discovery because pytest delegates detection of test case methods to unittest code.

Interpreting cmdline arguments as Python packages

You can use the --pyargs option to make pytest try interpreting arguments as python package names, deriving their file system path and then running the test. For example if you have unittest2 installed you can type:

pytest --pyargs unittest2.test.test_skipping -q

which would run the respective test module. As with other options, you can make this change more permanent through an ini-file using the addopts option:

# content of pytest.ini
[pytest]
addopts = --pyargs

Now a simple invocation of pytest NAME will check if NAME exists as an importable package/module and otherwise treat it as a filesystem path.

Finding out what is collected

You can always peek at the collection tree without running tests like this:

$ pytest --collect-only pythoncollection.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 3 items
<Module 'CWD/pythoncollection.py'>
  <Function 'test_function'>
  <Class 'TestClass'>
    <Instance '()'>
      <Function 'test_method'>
      <Function 'test_anothermethod'>

======= no tests ran in 0.12 seconds ========

customizing test collection to find all .py files

You can easily instruct pytest to discover tests from every python file:

# content of pytest.ini
[pytest]
python_files = *.py

However, many projects will have a setup.py which they don’t want to be imported. Moreover, there may be files that are only importable by a specific python version. For such cases you can dynamically define files to be ignored by listing them in a conftest.py file:

# content of conftest.py
import sys

collect_ignore = ["setup.py"]
if sys.version_info[0] > 2:
    collect_ignore.append("pkg/module_py2.py")

And then if you have a module file like this:

# content of pkg/module_py2.py
def test_only_on_python2():
    try:
        assert 0
    except Exception, e:
        pass

and a setup.py dummy file like this:

# content of setup.py
0/0  # will raise exception if imported

then a pytest run on Python2 will find the one test and will leave out the setup.py file:

$ pytest --collect-only
====== test session starts ======
platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 items
<Module 'pkg/module_py2.py'>
  <Function 'test_only_on_python2'>

====== no tests ran in 0.04 seconds ======

If you run with a Python3 interpreter, both the one test and the setup.py file will be left out:

$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items

======= no tests ran in 0.12 seconds ========

Working with non-python tests

A basic example for specifying tests in Yaml files

Here is an example conftest.py (extracted from Ali Afshar’s special purpose pytest-yamlwsgi plugin). This conftest.py will collect test*.yml files and will execute the yaml-formatted content as custom tests:

# content of conftest.py

import pytest

def pytest_collect_file(parent, path):
    if path.ext == ".yml" and path.basename.startswith("test"):
        return YamlFile(path, parent)

class YamlFile(pytest.File):
    def collect(self):
        import yaml # we need a yaml parser, e.g. PyYAML
        raw = yaml.safe_load(self.fspath.open())
        for name, spec in sorted(raw.items()):
            yield YamlItem(name, self, spec)

class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super(YamlItem, self).__init__(name, parent)
        self.spec = spec

    def runtest(self):
        for name, value in sorted(self.spec.items()):
            # some custom test execution (dumb example follows)
            if name != value:
                raise YamlException(self, name, value)

    def repr_failure(self, excinfo):
        """ called when self.runtest() raises an exception. """
        if isinstance(excinfo.value, YamlException):
            return "\n".join([
                "usecase execution failed",
                "   spec failed: %r: %r" % excinfo.value.args[1:3],
                "   no further details known at this point."
            ])

    def reportinfo(self):
        return self.fspath, 0, "usecase: %s" % self.name

class YamlException(Exception):
    """ custom exception for error reporting. """

You can create a simple example file:

# test_simple.yml
ok:
    sub1: sub1

hello:
    world: world
    some: other

and if you have installed PyYAML or a compatible YAML parser you can now execute the test specification:

nonpython $ pytest test_simple.yml
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items

test_simple.yml F.

======= FAILURES ========
_______ usecase: hello ________
usecase execution failed
   spec failed: 'some': 'other'
   no further details known at this point.
======= 1 failed, 1 passed in 0.12 seconds ========

You get one dot for the passing sub1: sub1 check and one failure. Obviously in the above conftest.py you’ll want to implement a more interesting interpretation of the yaml-values. You can easily write your own domain specific testing language this way.

Note

repr_failure(excinfo) is called for representing test failures. If you create custom collection nodes you can return an error representation string of your choice. It will be reported as a (red) string.

reportinfo() is used for representing the test location and is also consulted when reporting in verbose mode:

nonpython $ pytest -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items

test_simple.yml::hello FAILED
test_simple.yml::ok PASSED

======= FAILURES ========
_______ usecase: hello ________
usecase execution failed
   spec failed: 'some': 'other'
   no further details known at this point.
======= 1 failed, 1 passed in 0.12 seconds ========

While developing your custom test collection and execution it’s also interesting to just look at the collection tree:

nonpython $ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items
<YamlFile 'test_simple.yml'>
  <YamlItem 'hello'>
  <YamlItem 'ok'>

======= no tests ran in 0.12 seconds ========

Good Integration Practices

Conventions for Python test discovery

pytest implements the following standard test discovery:

  • If no arguments are specified then collection starts from testpaths (if configured) or the current directory. Alternatively, command line arguments can be used in any combination of directories, file names or node ids.
  • Recurse into directories, unless they match norecursedirs.
  • In those directories, search for test_*.py or *_test.py files, imported by their test package name.
  • From those files, collect test items:
    • test_ prefixed test functions or methods outside of class
    • test_ prefixed test functions or methods inside Test prefixed test classes (without an __init__ method); see the sketch below
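
As a small sketch of these rules (file and names are made up):

# content of test_example.py
def test_collected():          # collected: test_ prefixed function
    pass

def helper():                  # not collected: no test_ prefix
    pass

class TestThings:              # collected: Test prefixed class without __init__
    def test_method(self):     # collected
        pass

class Calculator:              # not collected: class lacks the Test prefix
    def test_compute(self):
        pass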

For examples of how to customize your test discovery, see Changing standard (Python) test discovery.

Within Python modules, pytest also discovers tests using the standard unittest.TestCase subclassing technique.

Choosing a test layout / import rules

pytest supports two common test layouts:

  • putting tests into an extra directory outside your actual application code, useful if you have many functional tests or for other reasons want to keep tests separate from actual application code (often a good idea):

    setup.py   # your setuptools Python package metadata
    mypkg/
        __init__.py
        appmodule.py
    tests/
        test_app.py
        ...
    
  • inlining test directories into your application package, useful if you have direct relation between (unit-)test and application modules and want to distribute your tests along with your application:

    setup.py   # your setuptools Python package metadata
    mypkg/
        __init__.py
        appmodule.py
        ...
        test/
            test_app.py
            ...
    

Important notes relating to both schemes:

  • make sure that “mypkg” is importable, for example by typing once:

    pip install -e .   # install package using setup.py in editable mode
    
  • avoid “__init__.py” files in your test directories. This way your tests can easily run against an installed version of mypkg, independently of whether the installed package contains the tests or not.

  • With inlined tests you might put __init__.py into test directories and make them installable as part of your application. Using the pytest --pyargs mypkg invocation pytest will discover where mypkg is installed and collect tests from there. With the “external” layout you can still distribute tests, but they will not be installed or become importable.

Typically you can run tests by pointing to test directories or modules:

pytest tests/test_app.py       # for external test dirs
pytest mypkg/test/test_app.py  # for inlined test dirs
pytest mypkg                   # run tests in all below test directories
pytest                         # run all tests below current dir
...

Because of the above editable install mode you can change your source code (both tests and the app) and rerun tests at will. Once you are done with your work, you can use tox to make sure that the package is really correct and tests pass in all required configurations.

Note

You can use Python3 namespace packages (PEP420) for your application but pytest will still perform test package name discovery based on the presence of __init__.py files. If you use one of the two recommended file system layouts above but leave out the __init__.py files from your test directories, it should just work on Python 3.3 and above. With inlined tests, however, you will need to use absolute imports to get at your application code.

Note

If pytest finds an “a/b/test_module.py” test file while recursing into the filesystem, it determines the import name as follows:

  • determine basedir: this is the first “upward” (towards the root) directory not containing an __init__.py. If e.g. both a and b contain an __init__.py file then the parent directory of a will become the basedir.
  • perform sys.path.insert(0, basedir) to make the test module importable under the fully qualified import name.
  • import a.b.test_module where the path is determined by converting path separators / into "." characters. This means you must follow the convention of having directory and file names map directly to the import names (see the sketch below).
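
For instance, a sketch of that layout (directory names are hypothetical):

repo/                      # basedir: first upward directory without __init__.py
    a/
        __init__.py
        b/
            __init__.py
            test_module.py # imported as a.b.test_module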

The reason for this somewhat involved importing technique is that in larger projects multiple test modules might import from each other and thus deriving a canonical import name helps to avoid surprises such as a test module getting imported twice.

Tox

For development, we recommend using virtualenv environments and pip for installing your application and any dependencies as well as the pytest package itself. This ensures your code and dependencies are isolated from the system Python installation.

If you frequently release code and want to make sure that your actual package passes all tests you may want to look into tox, the virtualenv test automation tool and its pytest support. Tox helps you set up virtualenv environments with pre-defined dependencies and then execute a pre-configured test command with options. It will run tests against the installed package and not against your source code checkout, helping to detect packaging glitches.
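
A minimal tox.ini for such a setup might look like this (the envlist is an assumption; adjust it to the interpreters you support):

# content of tox.ini
[tox]
envlist = py27,py35

[testenv]
deps = pytest
commands = pytest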

Continuous integration services such as Jenkins can make use of the --junitxml=PATH option to create a JUnitXML file and generate reports (e.g. by publishing the results in a nice format with the Jenkins xUnit Plugin).

Integrating with setuptools / python setup.py test / pytest-runner

You can integrate test runs into your setuptools based project with the pytest-runner plugin.

Add this to setup.py file:

from setuptools import setup

setup(
    #...,
    setup_requires=['pytest-runner', ...],
    tests_require=['pytest', ...],
    #...,
)

And create an alias into setup.cfg file:

[aliases]
test=pytest

If you now type:

python setup.py test

this will execute your tests using pytest-runner. As this is a standalone version of pytest, no prior installation whatsoever is required for calling the test command. You can also pass additional arguments to pytest such as your test directory or other options using --addopts.

You can also specify other pytest-ini options in your setup.cfg file by putting them into a [tool:pytest] section:

[tool:pytest]
addopts = --verbose
python_files = testing/*/*.py

Note

Prior to 3.0, the supported section name was [pytest]. Due to how this may collide with some distutils commands, the recommended section name for setup.cfg files is now [tool:pytest].

Note that for pytest.ini and tox.ini files the section name is [pytest].

Manual Integration

If for some reason you don’t want/can’t use pytest-runner, you can write your own setuptools Test command for invoking pytest.

import sys

from setuptools import setup
from setuptools.command.test import test as TestCommand


class PyTest(TestCommand):
    user_options = [('pytest-args=', 'a', "Arguments to pass to pytest")]

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_args = ''

    def run_tests(self):
        import shlex
        #import here, cause outside the eggs aren't loaded
        import pytest
        errno = pytest.main(shlex.split(self.pytest_args))
        sys.exit(errno)


setup(
    #...,
    tests_require=['pytest'],
    cmdclass = {'test': PyTest},
    )

Now if you run:

python setup.py test

this will download pytest if needed and then run your tests as you would expect it to. You can pass a single string of arguments using the --pytest-args or -a command-line option. For example:

python setup.py test -a "--durations=5"

is equivalent to running pytest --durations=5.

Basic test configuration

Command line options and configuration file settings

You can get help on command line options and values in INI-style configuration files by using the general help option:

pytest -h   # prints options _and_ config file settings

This will display command line and configuration file settings which were registered by installed plugins.

initialization: determining rootdir and inifile

New in version 2.7.

pytest determines a “rootdir” for each test run which depends on the command line arguments (specified test files, paths) and on the existence of inifiles. The determined rootdir and ini-file are printed as part of the pytest header. The rootdir is used for constructing “nodeids” during collection and may also be used by plugins to store project/testrun-specific information.

Here is the algorithm which finds the rootdir from args:

  • determine the common ancestor directory for the specified args that are recognised as paths that exist in the file system. If no such paths are found, the common ancestor directory is set to the current working directory.
  • look for pytest.ini, tox.ini and setup.cfg files in the ancestor directory and upwards. If one is matched, it becomes the ini-file and its directory becomes the rootdir.
  • if no ini-file was found, look for setup.py upwards from the common ancestor directory to determine the rootdir.
  • if no setup.py was found, look for pytest.ini, tox.ini and setup.cfg in each of the specified args and upwards. If one is matched, it becomes the ini-file and its directory becomes the rootdir.
  • if no ini-file was found, use the already determined common ancestor as root directory. This allows working with pytest in structures that are not part of a package and don’t have any particular ini-file configuration.

If no args are given, pytest collects tests below the current working directory and also starts determining the rootdir from there.

Warning

Custom pytest plugin command line arguments may include a path, as in pytest --log-output ../../test.log args. Then args is mandatory; otherwise pytest uses the folder of test.log for rootdir determination (see also issue 1435). A dot . for referencing the current working directory is also possible.

Note that an existing pytest.ini file will always be considered a match, whereas tox.ini and setup.cfg will only match if they contain a [pytest] or [tool:pytest] section, respectively. Options from multiple ini-file candidates are never merged - the first one wins (pytest.ini always wins, even if it does not contain a [pytest] section).

The config object will subsequently carry these attributes:

  • config.rootdir: the determined root directory, guaranteed to exist.
  • config.inifile: the determined ini-file, may be None (see the sketch below).
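
As a throwaway sketch for inspecting these values (the print is illustrative; run with -s if output capturing swallows it):

# content of conftest.py
def pytest_configure(config):
    # inifile may be None if no ini-file was found
    print("rootdir: %s, inifile: %s" % (config.rootdir, config.inifile))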

The rootdir is used as a reference directory for constructing test addresses (“nodeids”) and can also be used by plugins for storing per-testrun information.

Example:

pytest path/to/testdir path/other/

will determine the common ancestor as path and then check for ini-files as follows:

# first look for pytest.ini files
path/pytest.ini
path/setup.cfg  # must also contain [tool:pytest] section to match
path/tox.ini    # must also contain [pytest] section to match
pytest.ini
... # all the way down to the root

# now look for setup.py
path/setup.py
setup.py
... # all the way down to the root

How to change command line options defaults

It can be tedious to type the same series of command line options every time you use pytest. For example, if you always want to see detailed info on skipped and xfailed tests, as well as have terser “dot” progress output, you can write it into a configuration file:

# content of pytest.ini
# (or tox.ini or setup.cfg)
[pytest]
addopts = -rsxX -q

Alternatively, you can set a PYTEST_ADDOPTS environment variable to add command line options while the environment is in use:

export PYTEST_ADDOPTS="-rsxX -q"

From now on, running pytest will add the specified options.

Builtin configuration file options

minversion

Specifies a minimal pytest version required for running tests.

minversion = 2.1  # will fail if we run with pytest-2.0

addopts

Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:

[pytest]
addopts = --maxfail=2 -rf  # exit after 2 failures, report fail info

issuing pytest test_hello.py actually means:

pytest --maxfail=2 -rf test_hello.py

Default is to add no options.

norecursedirs

Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:

*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any char not in seq

Default patterns are '.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg'. Setting norecursedirs replaces the default. Here is an example of how to avoid certain directories:

# content of pytest.ini
[pytest]
norecursedirs = .svn _build tmp*

This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed directory.

testpaths

New in version 2.8.

Sets a list of directories that will be searched for tests when no specific directories, files or test ids are given on the command line and pytest is executed from the rootdir directory. Useful when all project tests are in a known location to speed up test collection and to avoid picking up undesired tests by accident.

# content of pytest.ini
[pytest]
testpaths = testing doc

This tells pytest to only look for tests in testing and doc directories when executing from the root directory.

python_files

One or more Glob-style file patterns determining which python files are considered as test modules.

python_classes

One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Here is an example of how to collect tests from classes that end in Suite:

# content of pytest.ini
[pytest]
python_classes = *Suite

Note that unittest.TestCase derived classes are always collected regardless of this option, as unittest's own collection framework is used to collect those tests.
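
For instance, with the setting above a class like this would be collected (module and names are made up):

# content of test_suites.py
class IntegrationSuite:        # matches the *Suite pattern
    def test_connect(self):    # the default python_functions pattern still applies
        pass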

python_functions

One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Here is an example of how to collect test functions and methods that end in _test:

# content of pytest.ini
[pytest]
python_functions = *_test

Note that this has no effect on methods that live on a unittest.TestCase derived class, as unittest's own collection framework is used to collect those tests.

See Changing naming conventions for more detailed examples.

doctest_optionflags

One or more doctest flag names from the standard doctest module. See how pytest handles doctests.
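
For example, a sketch enabling two standard doctest flags:

# content of pytest.ini
[pytest]
doctest_optionflags = NORMALIZE_WHITESPACE ELLIPSIS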

confcutdir

Sets a directory at which the upward search for conftest.py files stops. By default, pytest will stop searching for conftest.py files upwards from pytest.ini/tox.ini/setup.cfg of the project if any, or up to the file-system root.

Setting up bash completion

When using bash as your shell, pytest can use argcomplete (https://argcomplete.readthedocs.io/) for auto-completion. For this argcomplete needs to be installed and enabled.

Install argcomplete using:

sudo pip install 'argcomplete>=0.5.7'

For global activation of all argcomplete enabled python applications run:

sudo activate-global-python-argcomplete

For permanent (but not global) pytest activation, use:

register-python-argcomplete pytest >> ~/.bashrc

For one-time activation of argcomplete for pytest only, use:

eval "$(register-python-argcomplete pytest)"

Backwards Compatibility Policy

Keeping backwards compatibility has a very high priority in the pytest project. Although we have deprecated functionality over the years, most of it is still supported. All deprecations in pytest were done because simpler or more efficient ways of accomplishing the same tasks have emerged, making the old way of doing things unnecessary.

With the pytest 3.0 release we introduced a clear communication scheme for when we will actually remove the old busted joint and politely ask you to use the new hotness instead, while giving you enough time to adjust your tests or raise concerns if there are valid reasons to keep deprecated functionality around.

To communicate changes we are already issuing deprecation warnings, but they are not displayed by default. In pytest 3.0 we changed the default setting so that pytest deprecation warnings are displayed if not explicitly silenced (with --disable-pytest-warnings).

We will only remove deprecated functionality in major releases (e.g. if we deprecate something in 3.0 we will remove it in 4.0), and keep it around for at least two minor releases (e.g. if we deprecate something in 3.9 and 4.0 is the next release, we will not remove it in 4.0 but in 5.0).

License

Distributed under the terms of the MIT license, pytest is free and open source software.

The MIT License (MIT)

Copyright (c) 2004-2016 Holger Krekel and others

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Contribution getting started

Contributions are highly welcomed and appreciated. Every little help counts, so do not hesitate!

Feature requests and feedback

Do you like pytest? Share some love on Twitter or in your blog posts!

We’d also like to hear about your propositions and suggestions. Feel free to submit them as issues and:

  • Explain in detail how they should work.
  • Keep the scope as narrow as possible. This will make it easier to implement.

Report bugs

Report bugs for pytest in the issue tracker.

If you are reporting a bug, please include:

  • Your operating system name and version.
  • Any details about your local setup that might be helpful in troubleshooting, specifically Python interpreter version, installed libraries and pytest version.
  • Detailed steps to reproduce the bug.

If you can write a demonstration test that currently fails but should pass (xfail), that is a very useful commit to make as well, even if you can’t find how to fix the bug yet.

Fix bugs

Look through the GitHub issues for bugs. Here is a filter you can use: https://github.com/pytest-dev/pytest/labels/bug

Talk to developers to find out how you can fix specific bugs.

Don’t forget to check the issue trackers of your favourite plugins, too!

Implement features

Look through the GitHub issues for enhancements. Here is a filter you can use: https://github.com/pytest-dev/pytest/labels/enhancement

Talk to developers to find out how you can implement specific features.

Write documentation

Pytest could always use more documentation. What exactly is needed?

  • More complementary documentation. Have you perhaps found something unclear?
  • Documentation translations. We currently have only English.
  • Docstrings. There can never be too many of them.
  • Blog posts, articles and such – they’re all very appreciated.

You can also edit documentation files directly in the GitHub web interface, without using a local copy. This can be convenient for small fixes.

Note

Build the documentation locally with the following command:

$ tox -e docs

The built documentation should be available in doc/en/_build/, where ‘en’ refers to the documentation language.

Submitting Plugins to pytest-dev

Pytest development of the core, some plugins and support code happens in repositories living under the pytest-dev organisation.

All pytest-dev Contributors team members have write access to all contained repositories. Pytest core and plugins are generally developed using pull requests to respective repositories.

The objectives of the pytest-dev organisation are:

  • Having a central location for popular pytest plugins
  • Sharing some of the maintenance responsibility (in case a maintainer no longer wishes to maintain a plugin)

You can submit your plugin by subscribing to the pytest-dev mail list and writing a mail pointing to your existing pytest plugin repository which must have the following:

  • PyPI presence with a setup.py that contains a license, pytest- prefixed name, version number, authors, short and long description.
  • a tox.ini for running tests using tox.
  • a README.txt describing how to use the plugin and on which platforms it runs.
  • a LICENSE.txt file or equivalent containing the licensing information, with matching info in setup.py.
  • an issue tracker for bug reports and enhancement requests.
  • a changelog

If no contributor strongly objects and two agree, the repository can then be transferred to the pytest-dev organisation.

Here’s a rundown of how a repository transfer usually proceeds (using a repository named joedoe/pytest-xyz as example):

  • joedoe transfers repository ownership to pytest-dev administrator calvin.
  • calvin creates pytest-xyz-admin and pytest-xyz-developers teams, inviting joedoe to both as maintainer.
  • calvin transfers repository to pytest-dev and configures team access:
    • pytest-xyz-admin admin access;
    • pytest-xyz-developers write access;

The pytest-dev/Contributors team has write access to all projects, and every project administrator is in it. We recommend that each plugin has at least three people who have the right to release to PyPI.

Repository owners can rest assured that no pytest-dev administrator will ever make releases of your repository or take ownership in any way, except in rare cases where someone becomes unresponsive after months of contact attempts. As stated, the objective is to share maintenance and avoid “plugin-abandon”.

Preparing Pull Requests on GitHub

Note

What is a “pull request”? It informs the project’s core developers about the changes you want to review and merge. Pull requests are stored on GitHub servers. Once you send a pull request, we can discuss its potential modifications and even add more commits to it later on.

There’s an excellent tutorial on how Pull Requests work in the GitHub Help Center, but here is a simple overview:

  1. Fork the pytest GitHub repository. It’s fine to use pytest as your fork repository name because it will live under your user.

  2. Clone your fork locally using git and create a branch:

    $ git clone git@github.com:YOUR_GITHUB_USERNAME/pytest.git
    $ cd pytest
    # now, to fix a bug create your own branch off "master":
    
        $ git checkout -b your-bugfix-branch-name master
    
    # or to instead add a feature create your own branch off "features":
    
        $ git checkout -b your-feature-branch-name features
    

    Given we have “major.minor.micro” version numbers, bugfixes will usually be released in micro releases whereas features will be released in minor releases and incompatible changes in major releases.

    If you need some help with Git, follow this quick start guide: https://git.wiki.kernel.org/index.php/QuickStart

  3. Install tox

    Tox is used to run all the tests and will automatically set up virtualenvs to run the tests in (it implicitly uses http://www.virtualenv.org/en/latest/):

    $ pip install tox
    
  4. Run all the tests

    You need to have Python 2.7 and 3.5 available in your system. Now running tests is as simple as issuing this command:

    $ tox -e linting,py27,py35
    

    This command will run tests via the “tox” tool against Python 2.7 and 3.5 and also perform “lint” coding-style checks.

  5. You can now edit your local working copy.

    You can now make the changes you want and run the tests again as necessary.

    To run tests on Python 2.7 and pass options (e.g. enter pdb on failure) to pytest you can do:

    $ tox -e py27 -- --pdb
    

    Or to only run tests in a particular test module on Python 3.5:

    $ tox -e py35 -- testing/test_config.py
    
  6. Commit and push once your tests pass and you are happy with your change(s):

    $ git commit -a -m "<commit message>"
    $ git push -u
    

    Make sure you add a message to CHANGELOG.rst and add yourself to AUTHORS. If you are unsure about either of these steps, submit your pull request and we’ll help you fix it up.

  7. Finally, submit a pull request through the GitHub website using this data:

    head-fork: YOUR_GITHUB_USERNAME/pytest
    compare: your-branch-name
    
    base-fork: pytest-dev/pytest
    base: master          # if it's a bugfix
    base: features        # if it's a feature
    

Talks and Tutorials

Talks and blog postings

Topics covered include:

  • Test parametrization
  • Assertion introspection
  • Distributed testing
  • Plugin specific examples

Project examples

Here are some examples of projects using pytest (please send notes via Contact channels):

  • PyPy, Python with a JIT compiler, running over 21000 tests
  • the MoinMoin Wiki Engine
  • sentry, realtime app-maintenance and exception tracking
  • Astropy and affiliated packages
  • tox, virtualenv/Hudson integration tool
  • PIDA framework for integrated development
  • PyPM ActiveState’s package manager
  • Fom a fluid object mapper for FluidDB
  • applib cross-platform utilities
  • six Python 2 and 3 compatibility utilities
  • pediapress MediaWiki articles
  • mwlib mediawiki parser and utility library
  • The Translate Toolkit for localization and conversion
  • execnet rapid multi-Python deployment
  • pylib cross-platform path, IO, dynamic code library
  • Pacha configuration management in five minutes
  • bbfreeze create standalone executables from Python scripts
  • pdb++ a fancier version of PDB
  • py-s3fuse Amazon S3 FUSE based filesystem
  • waskr WSGI Stats Middleware
  • guachi global persistent configs for Python modules
  • Circuits lightweight Event Driven Framework
  • pygtk-helpers easy interaction with PyGTK
  • QuantumCore statusmessage and repoze openid plugin
  • pydataportability libraries for managing the open web
  • XIST extensible HTML/XML generator
  • tiddlyweb optionally headless, extensible RESTful datastore
  • fancycompleter for colorful tab-completion
  • Paludis tools for Gentoo Paludis package manager
  • Gerald schema comparison tool
  • abjad Python API for Formalized Score control
  • bu a microscopic build system
  • katcp Telescope communication protocol over Twisted
  • kss plugin timer
  • pyudev a pure Python binding to the Linux library libudev
  • pytest-localserver a plugin for pytest that provides an httpserver and smtpserver
  • pytest-monkeyplus a plugin that extends monkeypatch

These projects help integrate pytest into other Python frameworks:

Some Issues and Questions

Note

This FAQ is here mostly for historic reasons. Check out the pytest Q&A at Stack Overflow for many questions and answers related to pytest and/or use Contact channels to get help.

On naming, nosetests, licensing and magic

How does pytest relate to nose and unittest?

pytest and nose share basic philosophy when it comes to running and writing Python tests. In fact, you can run many tests written for nose with pytest. nose was originally created as a clone of pytest when pytest was in the 0.8 release cycle. Note that starting with pytest-2.0, support for running unittest test suites is greatly improved.

How does pytest relate to twisted’s trial?

For some time pytest has had built-in support for running tests written using trial. It does not itself start a reactor, however, and does not handle Deferreds returned from a test in pytest style. If you are using trial’s unittest.TestCase, chances are that you can just run your tests even if you return Deferreds. In addition, there is a dedicated pytest-twisted plugin which allows you to return Deferreds from pytest-style tests, allowing the use of pytest fixtures: explicit, modular, scalable and other features.
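
For illustration, here is a minimal sketch of a pytest-style test that returns a Deferred; it assumes the pytest-twisted plugin is installed, and the compute() helper is hypothetical, standing in for real asynchronous code:

# content of test_deferred.py
from twisted.internet import defer

def compute():
    # an already-fired Deferred standing in for real asynchronous work
    return defer.succeed(42)

def check(result):
    assert result == 42

def test_deferred():
    d = compute()
    d.addCallback(check)  # the assertion runs once the Deferred fires
    return d              # pytest-twisted waits on the returned Deferred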

How does pytest work with Django?

Since 2012, work has been going into the pytest-django plugin. It substitutes the usage of Django’s manage.py test and allows the use of all pytest features, most of which are not available from Django directly.
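
As a rough sketch of what such a test can look like, assuming pytest-django is installed and configured (e.g. DJANGO_SETTINGS_MODULE is set); the myapp application and its Entry model are hypothetical:

import pytest
from myapp.models import Entry  # hypothetical app and model

@pytest.mark.django_db  # grants the test access to the test database
def test_entry_creation():
    Entry.objects.create(title="hello")
    assert Entry.objects.count() == 1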

What’s this “magic” with pytest? (historic notes)

Around 2007 (version 0.8) some people thought that pytest was using too much “magic”. It had been part of pylib, which contains a lot of unrelated Python library code. Around 2010 there was a major cleanup refactoring, which removed unused or deprecated code and resulted in the new pytest PyPI package which strictly contains only test-related code. This release also brought a complete pluginification such that the core is around 300 lines of code and everything else is implemented in plugins. Thus pytest today is a small, universally runnable and customizable testing framework for Python. Note, however, that pytest uses metaprogramming techniques, so reading its source is likely not something for Python beginners.

A second “magic” issue was the assert statement debugging feature. Nowadays, pytest explicitly rewrites assert statements in test modules in order to provide more useful assert feedback. This completely avoids previous issues of confusing assertion-reporting. It also means that you can use Python’s -O optimization without losing assertions in test modules.

You can also turn off all assertion interaction using the --assert=plain option.

Why can I use both pytest and py.test commands?

pytest used to be part of the py package, which provided several developer utilities, all starting with py.<TAB>, thus providing nice TAB-completion. If you run pip install pycmd you get these tools from a separate package. Once pytest became a separate package, the py.test name was retained to avoid a naming conflict with another tool. This conflict was eventually resolved, and the pytest command was therefore introduced. In future versions of pytest, we may deprecate and later remove the py.test command to avoid perpetuating the confusion.

pytest fixtures, parametrized tests

Is using pytest fixtures versus xUnit setup a style question?

For simple applications and for people experienced with nose or unittest-style test setup, using xUnit-style setup probably feels natural. For larger test suites, parametrized testing, or setup of complex test resources, using fixtures may feel more natural. Moreover, fixtures are ideal for writing advanced test support code (like e.g. the monkeypatch, the tmpdir or capture fixtures) because the support code can register setup/teardown functions in a managed class/module/function scope.
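
To make the last point concrete, here is a minimal sketch of a fixture that registers its own teardown in module scope; the FakeConnection class is invented for illustration:

# content of test_db.py
import pytest

class FakeConnection:
    """Stands in for a real database connection (purely illustrative)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

@pytest.fixture(scope="module")
def db(request):
    conn = FakeConnection()           # setup runs once per module
    request.addfinalizer(conn.close)  # teardown registered in the same managed scope
    return conn

def test_uses_db(db):
    assert not db.closed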

Can I yield multiple values from a fixture function?

There are two conceptual reasons why yielding from a factory function is not possible:

  • If multiple factories yielded values there would be no natural place to determine the combination policy - in real-world examples some combinations often should not run.
  • Calling factories for obtaining test function arguments is part of setting up and running a test. At that point it is not possible to add new test calls to the test collection anymore.

However, with pytest-2.3 you can use the Fixtures as Function arguments decorator and specify params so that all tests depending on the factory-created resource will run multiple times with different parameters.
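
A minimal sketch of such a parametrized fixture; the backend names are invented for illustration:

import pytest

@pytest.fixture(params=["sqlite", "postgres"])
def backend(request):
    # request.param is one entry from "params"; every test using this
    # fixture runs once per parameter
    return request.param

def test_backend_name(backend):
    assert backend in ("sqlite", "postgres")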

You can also use the pytest_generate_tests hook to implement the parametrization scheme of your choice. See also Parametrizing tests for more examples.
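
For instance, a custom parametrization scheme via the hook might look roughly like this; the pair data is invented for illustration:

# content of conftest.py
def pytest_generate_tests(metafunc):
    if "pair" in metafunc.fixturenames:
        metafunc.parametrize("pair", [(1, 2), (2, 3)])

# content of test_pairs.py
def test_increasing(pair):
    smaller, bigger = pair
    assert smaller < bigger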

pytest interaction with other packages

Issues with pytest, multiprocessing and setuptools?

On Windows the multiprocessing package will instantiate sub processes by pickling and thus implicitly re-import a lot of local modules. Unfortunately, setuptools-0.6.11 does not protect its generated command line script with if __name__=='__main__'. This leads to infinite recursion when running a test that instantiates Processes.
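
The usual remedy, sketched below, is to guard process creation with if __name__ == '__main__' so the implicit re-import cannot spawn processes recursively:

# content of script.py
import multiprocessing

def worker():
    print("hello from the subprocess")

if __name__ == "__main__":  # not executed on the implicit re-import
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()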

As of mid-2013, there shouldn’t be a problem anymore when you use the standard setuptools (note that distribute has been merged back into setuptools which is now shipped directly with virtualenv).

Contact channels

Release announcements

pytest-3.0.7

pytest 3.0.7 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade:

pip install --upgrade pytest

The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

  • Anthony Sottile
  • Barney Gale
  • Bruno Oliveira
  • Florian Bruhin
  • Floris Bruynooghe
  • Ionel Cristian Mărieș
  • Katerina Koukiou
  • NODA, Kai
  • Omer Hadari
  • Patrick Hayes
  • Ran Benita
  • Ronny Pfannschmidt
  • Victor Uriarte
  • Vidar Tonaas Fauske
  • Ville Skyttä
  • fbjorn
  • mbyt

Happy testing, The pytest Development Team

pytest-3.0.6

pytest 3.0.6 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade:

pip install --upgrade pytest

The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

  • Andreas Pelme
  • Bruno Oliveira
  • Dmitry Malinovsky
  • Eli Boyarski
  • Jakub Wilk
  • Jeff Widman
  • Loïc Estève
  • Luke Murphy
  • Miro Hrončok
  • Oscar Hellström
  • Peter Heatwole
  • Philippe Ombredanne
  • Ronny Pfannschmidt
  • Rutger Prins
  • Stefan Scherfke

Happy testing, The pytest Development Team

pytest-3.0.5

pytest 3.0.5 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade:

pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

  • Ana Vojnovic
  • Bruno Oliveira
  • Daniel Hahler
  • Duncan Betts
  • Igor Starikov
  • Ismail
  • Luke Murphy
  • Ned Batchelder
  • Ronny Pfannschmidt
  • Sebastian Ramacher
  • nmundar

Happy testing, The pytest Development Team

pytest-3.0.4

pytest 3.0.4 has just been released to PyPI.

This release fixes some regressions and bugs reported in the last version, being a drop-in replacement. To upgrade:

pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

  • Bruno Oliveira
  • Dan Wandschneider
  • Florian Bruhin
  • Georgy Dyuldin
  • Grigorii Eremeev
  • Jason R. Coombs
  • Manuel Jacob
  • Mathieu Clabaut
  • Michael Seifert
  • Nikolaus Rath
  • Ronny Pfannschmidt
  • Tom V

Happy testing, The pytest Development Team

pytest-3.0.3

pytest 3.0.3 has just been released to PyPI.

This release fixes some regressions and bugs reported in the last version, being a drop-in replacement. To upgrade:

pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

  • Bruno Oliveira
  • Florian Bruhin
  • Floris Bruynooghe
  • Huayi Zhang
  • Lev Maximov
  • Raquel Alegre
  • Ronny Pfannschmidt
  • Roy Williams
  • Tyler Goodlet
  • mbyt

Happy testing, The pytest Development Team

pytest-3.0.2

pytest 3.0.2 has just been released to PyPI.

This release fixes some regressions and bugs reported in version 3.0.1, being a drop-in replacement. To upgrade:

pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

  • Ahn Ki-Wook
  • Bruno Oliveira
  • Florian Bruhin
  • Jordan Guymon
  • Raphael Pierzina
  • Ronny Pfannschmidt
  • mbyt

Happy testing, The pytest Development Team

pytest-3.0.1

pytest 3.0.1 has just been released to PyPI.

This release fixes some regressions reported in version 3.0.0, being a drop-in replacement. To upgrade:

pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

  • Adam Chainz
  • Andrew Svetlov
  • Bruno Oliveira
  • Daniel Hahler
  • Dmitry Dygalo
  • Florian Bruhin
  • Marcin Bachry
  • Ronny Pfannschmidt
  • matthiasha

Happy testing, The py.test Development Team

pytest-3.0.0

The pytest team is proud to announce the 3.0.0 release!

pytest is a mature Python testing tool with more than 1600 tests against itself, passing on many different interpreters and platforms.

This release contains a lot of bug fixes and improvements, and much of the work done on it was possible because of the 2016 Sprint [1], which was funded by an Indiegogo campaign which raised over US$12,000 with nearly 100 backers.

There’s a “What’s new in pytest 3.0” [2] blog post highlighting the major features in this release.

To see the complete changelog and documentation, please visit:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

AbdealiJK Ana Ribeiro Antony Lee Brandon W Maister Brianna Laugher Bruno Oliveira Ceridwen Christian Boelsen Daniel Hahler Danielle Jenkins Dave Hunt Diego Russo Dmitry Dygalo Edoardo Batini Eli Boyarski Florian Bruhin Floris Bruynooghe Greg Price Guyzmo HEAD KANGAROO JJ Javi Romero Javier Domingo Cansino Kale Kundert Kalle Bronsen Marius Gedminas Matt Williams Mike Lundy Oliver Bestwalter Omar Kohl Raphael Pierzina RedBeardCode Roberto Polli Romain Dorgueil Roman Bolshakov Ronny Pfannschmidt Stefan Zimmermann Steffen Allner Tareq Alayan Ted Xiao Thomas Grainger Tom Viner TomV Vasily Kuznetsov aostr marscher palaviv satoru taschini

Happy testing, The Pytest Development Team

[1] http://blog.pytest.org/2016/pytest-development-sprint/ [2] http://blog.pytest.org/2016/whats-new-in-pytest-30/

python testing sprint June 20th-26th 2016


The pytest core group held the biggest sprint in its history in June 2016, taking place in the Black Forest town of Freiburg, Germany. In February 2016 we started a funding campaign on Indiegogo to cover expenses. The campaign page also mentions some preliminary topics:

  • improving pytest-xdist test scheduling to take into account fixture setups and explicit user hints.
  • provide info on fixture dependencies during --collect-only
  • tying pytest-xdist to tox so that you can do “py.test -e py34” to run tests in a particular tox-managed virtualenv. Also look into making pytest-xdist use tox environments on remote ssh-sides so that remote dependency management becomes easier.
  • refactoring the fixture system so more people understand it :)
  • integrating PyUnit setup methods as autouse fixtures. Possibly adding ways to influence ordering of same-scoped fixtures (so you can make a choice of which fixtures come before others)
  • fixing bugs and issues from the tracker, really an endless source :)

Participants

Over 20 participants took part from 4 continents, including employees from Splunk, Personalkollen, Cobe.io, FanDuel and Dolby. Some newcomers mixed with developers who have worked on pytest since its beginning, and of course everyone in between.

Sprint organisation, schedule

People arrived in Freiburg on the 19th, with sprint development taking place on 20th, 21st, 22nd, 24th and 25th. On the 23rd we took a break day for some hot hiking in the Black Forest.

Sprint activity was organised heavily around pairing, with plenty of group discussions to take advantage of the high bandwidth, and lightning talks as well.

Money / funding

The Indiegogo campaign aimed for 11000 USD and in the end raised over 12000 USD, used to reimburse travel costs and to pay for a sprint venue and catering.

Excess money is reserved for further sprint/travel funding for pytest/tox contributors.

pytest-2.9.2

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Adam Chainz Benjamin Dopplinger Bruno Oliveira Florian Bruhin John Towler Martin Prusse Meng Jue MengJueM Omar Kohl Quentin Pradet Ronny Pfannschmidt Thomas Güttler TomV Tyler Goodlet

Happy testing, The py.test Development Team

2.9.2 (compared to 2.9.1)

Bug Fixes

  • fix #510: skip tests where one parametrize dimension was empty. Thanks Alex Stapleton for the report and @RonnyPfannschmidt for the PR
  • Fix Xfail does not work with condition keyword argument. Thanks @astraw38 for reporting the issue (#1496) and @tomviner for the PR (#1524).
  • Fix win32 path issue when putting custom config file with absolute path in pytest.main("-c your_absolute_path").
  • Fix maximum recursion depth detection when raised error class is not aware of unicode/encoded bytes. Thanks @prusse-martin for the PR (#1506).
  • Fix pytest.mark.skip mark when used in strict mode. Thanks @pquentin for the PR and @RonnyPfannschmidt for showing how to fix the bug.
  • Minor improvements and fixes to the documentation. Thanks @omarkohl for the PR.
  • Fix --fixtures to show all fixture definitions as opposed to just one per fixture name. Thanks to @hackebrot for the PR.

pytest-2.9.1

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Bruno Oliveira Daniel Hahler Dmitry Malinovsky Florian Bruhin Floris Bruynooghe Matt Bachmann Ronny Pfannschmidt TomV Vladimir Bolshakov Zearin palaviv

Happy testing, The py.test Development Team

2.9.1 (compared to 2.9.0)

Bug Fixes

  • Improve error message when a plugin fails to load. Thanks @nicoddemus for the PR.
  • Fix (#1178): pytest.fail with non-ascii characters raises an internal pytest error. Thanks @nicoddemus for the PR.
  • Fix (#469): junit parses report.nodeid incorrectly, when params IDs contain ::. Thanks @tomviner for the PR (#1431).
  • Fix (#578): SyntaxErrors containing non-ascii lines at the point of failure generated an internal py.test error. Thanks @asottile for the report and @nicoddemus for the PR.
  • Fix (#1437): When passing in a bytestring regex pattern to parameterize attempt to decode it as utf-8 ignoring errors.
  • Fix (#649): parametrized test nodes cannot be specified to run on the command line.

pytest-2.9.0

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Anatoly Bubenkov Bruno Oliveira Buck Golemon David Vierra Florian Bruhin Galaczi Endre Georgy Dyuldin Lukas Bednar Luke Murphy Marcin Biernat Matt Williams Michael Aquilina Raphael Pierzina Ronny Pfannschmidt Ryan Wooden Tiemo Kieft TomV holger krekel jab

Happy testing, The py.test Development Team

2.9.0 (compared to 2.8.7)

New Features

  • New pytest.mark.skip mark, which unconditionally skips marked tests (see the sketch after this list). Thanks @MichaelAquilina for the complete PR (#1040).
  • --doctest-glob may now be passed multiple times in the command-line. Thanks @jab and @nicoddemus for the PR.
  • New -rp and -rP reporting options give the summary and full output of passing tests, respectively. Thanks to @codewarrior0 for the PR.
  • pytest.mark.xfail now has a strict option which makes XPASS tests fail the test suite, defaulting to False. There’s also an xfail_strict ini option that can be used to configure it project-wide. Thanks @rabbbit for the request and @nicoddemus for the PR (#1355).
  • Parser.addini now supports options of type bool. Thanks @nicoddemus for the PR.
  • New ALLOW_BYTES doctest option strips b prefixes from byte strings in doctest output (similar to ALLOW_UNICODE). Thanks @jaraco for the request and @nicoddemus for the PR (#1287).
  • give a hint on KeyboardInterrupt to use the --fulltrace option to show the errors; this fixes #1366. Thanks to @hpk42 for the report and @RonnyPfannschmidt for the PR.
  • catch IndexError exceptions when getting exception source location. This fixes a pytest internal error for dynamically generated code (fixtures and tests) where source lines are intentionally fake
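
A minimal sketch of the new skip mark and strict xfail described in this list (names as documented for pytest 2.9):

import pytest

@pytest.mark.skip(reason="unconditionally skipped")
def test_not_run():
    assert False  # never executed

@pytest.mark.xfail(strict=True, reason="known bug")
def test_known_bug():
    assert 1 + 1 == 3  # with strict=True, an unexpected pass (XPASS) fails the suite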

Changes

  • Important: py.code has been merged into the pytest repository as pytest._code. This decision was made because py.code had very few uses outside pytest and the fact that it was in a different repository made it difficult to fix bugs on its code in a timely manner. The team hopes with this to be able to better refactor out and improve that code. This change shouldn’t affect users, but it is useful to make users aware in case they encounter any strange behavior.

    Keep in mind that the code for pytest._code is private and experimental, so you definitely should not import it explicitly!

    Please note that the original py.code is still available in pylib.

  • pytest_enter_pdb now optionally receives the pytest config object. Thanks @nicoddemus for the PR.

  • Removed code and documentation for Python 2.5 or lower versions, including removal of the obsolete _pytest.assertion.oldinterpret module. Thanks @nicoddemus for the PR (#1226).

  • Comparisons now always show up in full when CI or BUILD_NUMBER is found in the environment, even when -vv isn’t used. Thanks @The-Compiler for the PR.

  • --lf and --ff now support long names: --last-failed and --failed-first respectively. Thanks @MichaelAquilina for the PR.

  • Added expected exceptions to pytest.raises fail message

  • Collection only displays progress (“collecting X items”) when in a terminal. This avoids cluttering the output when using --color=yes to obtain colors in CI integration systems (#1397).

Bug Fixes

  • The -s and -c options should now work under xdist; Config.fromdictargs now represents its input much more faithfully. Thanks to @bukzor for the complete PR (#680).
  • Fix (#1290): support Python 3.5’s @ operator in assertion rewriting. Thanks @Shinkenjoe for report with test case and @tomviner for the PR.
  • Fix formatting utf-8 explanation messages (#1379). Thanks @biern for the PR.
  • Fix traceback style docs to describe all of the available options (auto/long/short/line/native/no), with auto being the default since v2.6. Thanks @hackebrot for the PR.
  • Fix (#1422): junit record_xml_property doesn’t allow multiple records with same name.

pytest-2.8.7

This is a hotfix release to solve a regression in the builtin monkeypatch plugin that got introduced in 2.8.6.

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.8.5.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Ronny Pfannschmidt

Happy testing, The py.test Development Team

2.8.7 (compared to 2.8.6)

  • fix #1338: use predictable object resolution for monkeypatch

pytest-2.8.6

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.8.5.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

AMiT Kumar Bruno Oliveira Erik M. Bray Florian Bruhin Georgy Dyuldin Jeff Widman Kartik Singhal Loïc Estève Manu Phatak Peter Demin Rick van Hattem Ronny Pfannschmidt Ulrich Petri foxx

Happy testing, The py.test Development Team

2.8.6 (compared to 2.8.5)

  • fix #1259: allow for double nodeids in junitxml; this was a regression failing plugin combinations like pytest-pep8 + pytest-flakes
  • Workaround for exception that occurs in pyreadline when using --pdb with standard I/O capture enabled. Thanks Erik M. Bray for the PR.
  • fix #900: Better error message in case the target of a monkeypatch call raises an ImportError.
  • fix #1292: monkeypatch calls (setattr, setenv, etc.) are now O(1). Thanks David R. MacIver for the report and Bruno Oliveira for the PR.
  • fix #1223: captured stdout and stderr are now properly displayed before entering pdb when --pdb is used instead of being thrown away. Thanks Cal Leeming for the PR.
  • fix #1305: pytest warnings emitted during pytest_terminal_summary are now properly displayed. Thanks Ionel Maries Cristian for the report and Bruno Oliveira for the PR.
  • fix #628: fixed internal UnicodeDecodeError when doctests contain unicode. Thanks Jason R. Coombs for the report and Bruno Oliveira for the PR.
  • fix #1334: Add captured stdout to jUnit XML report on setup error. Thanks Georgy Dyuldin for the PR.

pytest-2.8.5

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.8.4.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Alex Gaynor aselus-hub Bruno Oliveira Ronny Pfannschmidt

Happy testing, The py.test Development Team

2.8.5 (compared to 2.8.4)

  • fix #1243: fixed issue where class attributes injected during collection could break pytest. PR by Alexei Kozlenok, thanks Ronny Pfannschmidt and Bruno Oliveira for the review and help.
  • fix #1074: precompute junitxml chunks instead of storing the whole tree in objects Thanks Bruno Oliveira for the report and Ronny Pfannschmidt for the PR
  • fix #1238: fix pytest.deprecated_call() receiving multiple arguments (Regression introduced in 2.8.4). Thanks Alex Gaynor for the report and Bruno Oliveira for the PR.

pytest-2.8.4

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.8.2.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Bruno Oliveira Florian Bruhin Jeff Widman Mehdy Khoshnoody Nicholas Chammas Ronny Pfannschmidt Tim Chan

Happy testing, The py.test Development Team

2.8.4 (compared to 2.8.3)

  • fix #1190: deprecated_call() now works when the deprecated function has been already called by another test in the same module. Thanks Mikhail Chernykh for the report and Bruno Oliveira for the PR.
  • fix #1198: --pastebin option now works on Python 3. Thanks Mehdy Khoshnoody for the PR.
  • fix #1219: --pastebin now works correctly when captured output contains non-ascii characters. Thanks Bruno Oliveira for the PR.
  • fix #1204: another error when collecting with a nasty __getattr__(). Thanks Florian Bruhin for the PR.
  • fix the summary printed when no tests did run. Thanks Florian Bruhin for the PR.
  • a number of documentation modernizations wrt good practices. Thanks Bruno Oliveira for the PR.

pytest-2.8.3: bug fixes

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.8.2.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Bruno Oliveira Florian Bruhin Gabe Hollombe Gabriel Reis Hartmut Goebel John Vandenberg Lee Kamentsky Michael Birtwell Raphael Pierzina Ronny Pfannschmidt William Martin Stewart

Happy testing, The py.test Development Team

2.8.3 (compared to 2.8.2)

  • fix #1169: add __name__ attribute to testcases in TestCaseFunction to support the @unittest.skip decorator on functions and methods. Thanks Lee Kamentsky for the PR.
  • fix #1035: collecting tests if test module level obj has __getattr__(). Thanks Suor for the report and Bruno Oliveira / Tom Viner for the PR.
  • fix #331: don’t collect tests if their failure cannot be reported correctly e.g. they are a callable instance of a class.
  • fix #1133: fixed internal error when filtering tracebacks where one entry belongs to a file which is no longer available. Thanks Bruno Oliveira for the PR.
  • enhancement made to highlight in red the name of the failing tests so they stand out in the output. Thanks Gabriel Reis for the PR.
  • add more talks to the documentation
  • extend documentation on the --ignore CLI option
  • use pytest-runner for setuptools integration
  • minor fixes for interaction with OS X El Capitan system integrity protection (thanks Florian)

pytest-2.8.2: bug fixes

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.8.1.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Bruno Oliveira Demian Brecht Florian Bruhin Ionel Cristian Mărieș Raphael Pierzina Ronny Pfannschmidt holger krekel

Happy testing, The py.test Development Team

2.8.2 (compared to 2.7.2)

  • fix #1085: proper handling of encoding errors when passing encoded byte strings to pytest.parametrize in Python 2. Thanks Themanwithoutaplan for the report and Bruno Oliveira for the PR.
  • fix #1087: handling SystemError when passing empty byte strings to pytest.parametrize in Python 3. Thanks Paul Kehrer for the report and Bruno Oliveira for the PR.
  • fix #995: fixed internal error when filtering tracebacks where one entry was generated by an exec() statement. Thanks Daniel Hahler, Ashley C Straw, Philippe Gauthier and Pavel Savchenko for contributing and Bruno Oliveira for the PR.

pytest-2.7.2: bug fixes

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.7.1.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Bruno Oliveira Floris Bruynooghe Punyashloka Biswal Aron Curzon Benjamin Peterson Thomas De Schampheleire Edison Gustavo Muenz Holger Krekel

Happy testing, The py.test Development Team

2.7.2 (compared to 2.7.1)

  • fix issue767: pytest.raises value attribute does not contain the exception instance on Python 2.6. Thanks Eric Siegerman for providing the test case and Bruno Oliveira for the PR.
  • Automatically create directory for junitxml and results log. Thanks Aron Curzon.
  • fix issue713: JUnit XML reports for doctest failures. Thanks Punyashloka Biswal.
  • fix issue735: assertion failures on debug versions of Python 3.4+ Thanks Benjamin Peterson.
  • fix issue114: skipif marker reports to internal skipping plugin; Thanks Floris Bruynooghe for reporting and Bruno Oliveira for the PR.
  • fix issue748: unittest.SkipTest reports to internal pytest unittest plugin. Thanks Thomas De Schampheleire for reporting and Bruno Oliveira for the PR.
  • fix issue718: failed to create representation of sets containing unsortable elements in python 2. Thanks Edison Gustavo Muenz
  • fix issue756, fix issue752 (and similar issues): depend on py-1.4.29 which has a refined algorithm for traceback generation.

pytest-2.7.1: bug fixes

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.7.0.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed to this release, among them:

Bruno Oliveira Holger Krekel Ionel Maries Cristian Floris Bruynooghe

Happy testing, The py.test Development Team

2.7.1 (compared to 2.7.0)

  • fix issue731: do not get confused by the braces which may be present and unbalanced in an object’s repr while collapsing False explanations. Thanks Carl Meyer for the report and test case.
  • fix issue553: properly handling inspect.getsourcelines failures in FixtureLookupError which would lead to an internal error, obfuscating the original problem. Thanks talljosh for initial diagnose/patch and Bruno Oliveira for final patch.
  • fix issue660: properly report scope-mismatch-access errors independently from ordering of fixture arguments. Also avoid the pytest internal traceback which does not provide information to the user. Thanks Holger Krekel.
  • streamlined and documented release process. Also all versions (in setup.py and documentation generation) are now read from _pytest/__init__.py. Thanks Holger Krekel.
  • fixed docs to remove the notion that yield-fixtures are experimental. They are here to stay :) Thanks Bruno Oliveira.
  • Support building wheels by using environment markers for the requirements. Thanks Ionel Maries Cristian.
  • fixed regression to 2.6.4 which surfaced e.g. in lost stdout capture printing when tests raised SystemExit. Thanks Holger Krekel.
  • reintroduced _pytest fixture of the pytester plugin which is used at least by pytest-xdist.

pytest-2.7.0: fixes, features, speed improvements

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible with 2.6.X.

See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed, among them:

Anatoly Bubenkoff Floris Bruynooghe Brianna Laugher Eric Siegerman Daniel Hahler Charles Cloud Tom Viner Holger Peters Ldiary Translations almarklein

have fun, holger krekel

2.7.0 (compared to 2.6.4)

  • fix issue435: make reload() work when assert rewriting is active. Thanks Daniel Hahler.
  • fix issue616: conftest.py files and their contained fixtures are now properly considered for visibility, independently from the exact current working directory and test arguments that are used. Many thanks to Eric Siegerman and his PR235 which contains systematic tests for conftest visibility and now passes. This change also introduces the concept of a rootdir which is printed as a new pytest header and documented in the pytest customize web page.
  • change reporting of “diverted” tests, i.e. tests that are collected in one file but actually come from another (e.g. when tests in a test class come from a base class in a different file). We now show the nodeid and indicate via a postfix the other file.
  • add ability to set command line options by environment variable PYTEST_ADDOPTS.
  • added documentation on the new pytest-dev teams on bitbucket and github. See https://pytest.org/latest/contributing.html . Thanks to Anatoly for pushing and initial work on this.
  • fix issue650: new option --doctest-ignore-import-errors which will turn import errors in doctests into skips. Thanks Charles Cloud for the complete PR.
  • fix issue655: work around different ways that cause python2/3 to leak sys.exc_info into fixtures/tests causing failures in 3rd party code
  • fix issue615: assertion re-writing did not correctly escape % signs when formatting boolean operations, which tripped over mixing booleans with modulo operators. Thanks to Tom Viner for the report, triaging and fix.
  • implement issue351: add ability to specify parametrize ids as a callable to generate custom test ids. Thanks Brianna Laugher for the idea and implementation.
  • introduce and document new hookwrapper mechanism useful for plugins which want to wrap the execution of certain hooks for their purposes. This supersedes the undocumented __multicall__ protocol which pytest itself and some external plugins use. Note that pytest-2.8 is scheduled to drop supporting the old __multicall__ and only support the hookwrapper protocol.
  • majorly speed up invocation of plugin hooks
  • use hookwrapper mechanism in builtin pytest plugins.
  • add a doctest ini option for doctest flags, thanks Holger Peters.
  • add note to docs that if you want to mark a parameter and the parameter is a callable, you also need to pass in a reason to disambiguate it from the “decorator” case. Thanks Tom Viner.
  • “python_classes” and “python_functions” options now support glob-patterns for test discovery, as discussed in issue600. Thanks Ldiary Translations.
  • allow to override parametrized fixtures with non-parametrized ones and vice versa (bubenkoff).
  • fix issue463: raise specific error for ‘parameterize’ misspelling (pfctdayelise).
  • On failure, the sys.last_value, sys.last_type and sys.last_traceback are set, so that a user can inspect the error via postmortem debugging (almarklein).

pytest-2.6.3: fixes and little improvements

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is drop-in compatible with 2.5.2 and 2.6.X. See below for the changes and see docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed, among them:

Floris Bruynooghe Oleg Sinyavskiy Uwe Schmitt Charles Cloud Wolfgang Schnerring

have fun, holger krekel

Changes 2.6.3

  • fix issue575: xunit-xml was reporting collection errors as failures instead of errors, thanks Oleg Sinyavskiy.
  • fix issue582: fix setuptools example, thanks Laszlo Papp and Ronny Pfannschmidt.
  • Fix infinite recursion bug when pickling capture.EncodedFile, thanks Uwe Schmitt.
  • fix issue589: fix bad interaction with numpy and others when showing exceptions. Check for the precise “maximum recursion depth exceeded” exception instead of presuming any RuntimeError is that one (implemented in py dep). Thanks Charles Cloud for analysing the issue.
  • fix conftest related fixture visibility issue: when running with a CWD outside of a test package pytest would get fixture discovery wrong. Thanks to Wolfgang Schnerring for figuring out a reproducible example.
  • Introduce pytest_enter_pdb hook (needed e.g. by pytest_timeout to cancel the timeout when interactively entering pdb). Thanks Wolfgang Schnerring.
  • check xfail/skip also with non-python function test items. Thanks Floris Bruynooghe.

pytest-2.6.2: few fixes and cx_freeze support

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. This release is drop-in compatible with 2.5.2 and 2.6.X. It also brings support for including pytest with cx_freeze or similar freezing tools into your single-file app distribution. For details see the CHANGELOG below.

See docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed, among them:

Floris Bruynooghe Benjamin Peterson Bruno Oliveira

have fun, holger krekel

2.6.2

  • Added function pytest.freeze_includes(), which makes it easy to embed pytest into executables using tools like cx_freeze. See docs for examples and rationale. Thanks Bruno Oliveira.
  • Improve assertion rewriting cache invalidation precision.
  • fixed issue561: adapt autouse fixture example for python3.
  • fixed issue453: assertion rewriting issue with __repr__ containing “\n{”, “\n}” and “\n~”.
  • fix issue560: correctly display code if an “else:” or “finally:” is followed by statements on the same line.
  • Fix example in monkeypatch documentation, thanks t-8ch.
  • fix issue572: correct tmpdir doc example for python3.
  • Do not mark as universal wheel because Python 2.6 is different from other builds due to the extra argparse dependency. Fixes issue566. Thanks sontek.

pytest-2.6.1: fixes and new xfail feature

pytest is a mature Python testing tool with more than 1100 tests against itself, passing on many different interpreters and platforms. The 2.6.1 release is drop-in compatible with 2.5.2 and actually fixes some regressions introduced with 2.6.0. It also brings a little feature to the xfail marker which now recognizes expected exceptions; see the CHANGELOG below.

See docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed, among them:

Floris Bruynooghe Bruno Oliveira Nicolas Delaby

have fun, holger krekel

Changes 2.6.1

  • No longer show line numbers in the --verbose output; the output is now purely the nodeid. The line number is still shown in failure reports. Thanks Floris Bruynooghe.
  • fix issue437 where assertion rewriting could cause pytest-xdist slaves to collect different tests. Thanks Bruno Oliveira.
  • fix issue555: add “errors” attribute to capture-streams to satisfy some distutils and possibly other code accessing sys.stdout.errors.
  • fix issue547 capsys/capfd also work when output capturing (“-s”) is disabled.
  • address issue170: allow pytest.mark.xfail(...) to specify expected exceptions via an optional “raises=EXC” argument where EXC can be a single exception or a tuple of exception classes. Thanks David Mohr for the complete PR.
  • fix integration of pytest with unittest.mock.patch decorator when it uses the “new” argument. Thanks Nicolas Delaby for test and PR.
  • fix issue with detecting conftest files if the arguments contain “::” node id specifications (copy-pasted from “-v” output)
  • fix issue544 by only removing “@NUM” at the end of “::” separated parts and if the part has a “.py” extension
  • don’t use py.std import helper, rather import things directly. Thanks Bruno Oliveira.

pytest-2.6.0: shorter tracebacks, new warning system, test runner compat

pytest is a mature Python testing tool with more than 1000 tests against itself, passing on many different interpreters and platforms.

The 2.6.0 release should be drop-in backward compatible to 2.5.2 and fixes a number of bugs and brings some new features, mainly:

  • shorter tracebacks by default: only the first (test function) entry and the last (failure location) entry are shown, the ones between only in “short” format. Use --tb=long to get back the old behaviour of showing “long” entries everywhere.
  • a new warning system which reports oddities during collection and execution. For example, ignoring collecting Test* classes with an __init__ now produces a warning.
  • various improvements to nose/mock/unittest integration

Note also that 2.6.0 departs from the “zero reported bugs” policy because it has been too hard to keep up with it, unfortunately. Instead we are for now bound to work on “upvoted” issues in the https://bitbucket.org/pytest-dev/pytest/issues?status=new&status=open&sort=-votes issue tracker.

See docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to all who contributed, among them:

Benjamin Peterson Jurko Gospodnetić Floris Bruynooghe Marc Abramowitz Marc Schlaich Trevor Bekolay Bruno Oliveira Alex Groenholm

have fun, holger krekel

2.6.0

  • fix issue537: Avoid importing old assertion reinterpretation code by default. Thanks Benjamin Peterson.
  • fix issue364: shorten and enhance traceback representation by default. The new “--tb=auto” option (default) will only display long tracebacks for the first and last entry. You can get the old behaviour of printing all entries as long entries with “--tb=long”. Also short entries by default are now printed very similarly to “--tb=native” ones.
  • fix issue514: teach assertion reinterpretation about private class attributes Thanks Benjamin Peterson.
  • change -v output to include full node IDs of tests. Users can copy a node ID from a test run, including line number, and use it as a positional argument in order to run only a single test.
  • fix issue 475: fail early and comprehensible if calling pytest.raises with wrong exception type.
  • fix issue516: tell in getting-started about current dependencies.
  • cleanup setup.py a bit and specify supported versions. Thanks Jurko Gospodnetic for the PR.
  • change XPASS colour to yellow rather than red when tests are run with -v.
  • fix issue473: work around mock putting an unbound method into a class dict when double-patching.
  • fix issue498: if a fixture finalizer fails, make sure that the fixture is still invalidated.
  • fix issue453: the result of the pytest_assertrepr_compare hook now gets its newlines escaped so that format_exception does not blow up.
  • internal new warning system: pytest will now produce warnings when it detects oddities in your test collection or execution. Warnings are ultimately sent to a new pytest_logwarning hook which is currently only implemented by the terminal plugin which displays warnings in the summary line and shows more details when -rw (report on warnings) is specified.
  • change skips into warnings for test classes with an __init__ and callables in test modules which look like a test but are not functions.
  • fix issue436: improved finding of initial conftest files from command line arguments by using the result of parse_known_args rather than the previous flaky heuristics. Thanks Marc Abramowitz for tests and initial fixing approaches in this area.
  • fix issue #479: properly handle nose/unittest(2) SkipTest exceptions during collection/loading of test modules. Thanks to Marc Schlaich for the complete PR.
  • fix issue490: include pytest_load_initial_conftests in documentation and improve docstring.
  • fix issue472: clarify that pytest.config.getvalue() cannot work if it’s triggered ahead of command line parsing.
  • merge PR123: improved integration with mock.patch decorator on tests.
  • fix issue412: messing with stdout/stderr FD-level streams is now captured without crashes.
  • fix issue483: trial/py33 works now properly. Thanks Daniel Grana for PR.
  • improve example for pytest integration with “python setup.py test” which now has a generic “-a” or “--pytest-args” option where you can pass additional options as a quoted string. Thanks Trevor Bekolay.
  • simplified internal capturing mechanism and made it more robust against tests or setups changing FD1/FD2, also better integrated now with pytest.pdb() in single tests.
  • improvements to pytest’s own test-suite leakage detection, courtesy of PRs from Marc Abramowitz
  • fix issue492: avoid leak in test_writeorg. Thanks Marc Abramowitz.
  • fix issue493: don’t run tests in doc directory with python setup.py test (use tox -e doctesting for that)
  • fix issue486: better reporting and handling of early conftest loading failures
  • some cleanup and simplification of internal conftest handling.
  • work a bit harder to break reference cycles when catching exceptions. Thanks Jurko Gospodnetic.
  • fix issue443: fix skip examples to use proper comparison. Thanks Alex Groenholm.
  • support nose-style __test__ attribute on modules, classes and functions, including unittest-style Classes. If set to False, the test will not be collected.
  • fix issue512: show “<notset>” for arguments which might not be set in monkeypatch plugin. Improves output in documentation.
  • avoid importing “py.test” (an old alias module for “pytest”)

pytest-2.5.2: fixes

pytest is a mature Python testing tool with more than 1000 tests against itself, passing on many different interpreters and platforms.

The 2.5.2 release fixes a few bugs with two maybe-bugs remaining and actively being worked on (and waiting for the bug reporter’s input). We also have a new contribution guide thanks to Piotr Banaszkiewicz and others.

See docs at:

As usual, you can upgrade from pypi via:

pip install -U pytest

Thanks to the following people who contributed to this release:

Anatoly Bubenkov Ronny Pfannschmidt Floris Bruynooghe Bruno Oliveira Andreas Pelme Jurko Gospodnetić Piotr Banaszkiewicz Simon Liedtke lakka Lukasz Balcerzak Philippe Muller Daniel Hahler

have fun, holger krekel

2.5.2

  • fix issue409 -- better interoperate with cx_freeze by not trying to import from collections.abc which causes problems for py27/cx_freeze. Thanks Wolfgang L. for reporting and tracking it down.
  • fixed docs and code to use “pytest” instead of “py.test” almost everywhere. Thanks Jurko Gospodnetic for the complete PR.
  • fix issue425: mention at end of “py.test -h” that --markers and --fixtures work according to specified test path (or current dir)
  • fix issue413: exceptions with unicode attributes are now printed correctly also on python2 and with pytest-xdist runs. (the fix requires py-1.4.20)
  • copy, cleanup and integrate py.io capture from pylib 1.4.20.dev2 (rev 13d9af95547e)
  • address issue416: clarify docs as to conftest.py loading semantics
  • fix issue429: comparing byte strings with non-ascii chars in assert expressions now work better. Thanks Floris Bruynooghe.
  • make capfd/capsys.capture private; it’s unused and shouldn’t be exposed

pytest-2.5.1: fixes and new home page styling

pytest is a mature Python testing tool with more than 1000 tests against itself, passing on many different interpreters and platforms.

The 2.5.1 release maintains the “zero-reported-bugs” promise by fixing the three bugs reported since the last release a few days ago. It also features a new home page styling implemented by Tobias Bieniek, based on the flask theme from Armin Ronacher:

If you have anything more to improve styling and docs, we’d be very happy to merge further pull requests.

On the coding side, the release also contains a little enhancement to fixture decorators allowing to directly influence generation of test ids, thanks to Floris Bruynooghe. Other thanks for helping with this release go to Anatoly Bubenkoff and Ronny Pfannschmidt.

As usual, you can upgrade from pypi via:

pip install -U pytest

have fun and a nice remaining “bug-free” time of the year :) holger krekel

2.5.1

  • merge new documentation styling PR from Tobias Bieniek.
  • fix issue403: allow parametrize of multiple same-name functions within a collection node. Thanks Andreas Kloeckner and Alex Gaynor for reporting and analysis.
  • Allow parameterized fixtures to specify the ID of the parameters by adding an ids argument to pytest.fixture() and pytest.yield_fixture(). Thanks Floris Bruynooghe.
  • fix issue404 by always using the binary xml escape in the junitxml plugin. Thanks Ronny Pfannschmidt.
  • fix issue407: fix addoption docstring to point to argparse instead of optparse. Thanks Daniel D. Wright.

pytest-2.5.0: now down to ZERO reported bugs!

pytest-2.5.0 is a big fixing release, the result of two community bug fixing days plus numerous additional works from many people and reporters. The release should be fully compatible to 2.4.2, existing plugins and test suites. We aim at maintaining this level of ZERO reported bugs because it’s no fun if your testing tool has bugs, is it? Under a condition, though: when submitting a bug report please provide clear information about the circumstances and a simple example which reproduces the problem.

The issue tracker is of course not empty now. We have many remaining “enhancement” issues which we hopefully can tackle in 2014 with your help.

For those who use older Python versions, please note that pytest is not automatically tested on python2.5 due to virtualenv, setuptools and tox not supporting it anymore. Manual verification shows that it mostly works fine but it’s not going to be part of the automated release process and thus likely to break in the future.

As usual, current docs are at

and you can upgrade from pypi via:

pip install -U pytest

Particular thanks for helping with this release go to Anatoly Bubenkoff, Floris Bruynooghe, Marc Abramowitz, Ralph Schmitt, Ronny Pfannschmidt, Donald Stufft, James Lan, Rob Dennis, Jason R. Coombs, Mathieu Agopian, Virgil Dupras, Bruno Oliveira, Alex Gaynor and others.

have fun, holger krekel

2.5.0

  • dropped python2.5 from automated release testing of pytest itself which means it’s probably going to break soon (but still works with this release we believe).

  • simplified and fixed implementation for calling finalizers when parametrized fixtures or function arguments are involved. finalization is now performed lazily at setup time instead of in the “teardown phase”. While this might sound odd at first, it helps to ensure that we are correctly handling setup/teardown even in complex code. User-level code should not be affected unless it’s implementing the pytest_runtest_teardown hook and expecting certain fixture instances are torn down within (very unlikely and would have been unreliable anyway).

  • PR90: add --color=yes|no|auto option to force terminal coloring mode (“auto” is default). Thanks Marc Abramowitz.

  • fix issue319 - correctly show unicode in assertion errors. Many thanks to Floris Bruynooghe for the complete PR. Also means we depend on py>=1.4.19 now.

  • fix issue396 - correctly sort and finalize class-scoped parametrized tests independently from number of methods on the class.

  • refix issue323 in a better way -- parametrization should now never cause Runtime Recursion errors because the underlying algorithm for re-ordering tests per-scope/per-fixture is not recursive anymore (it was tail-call recursive before which could lead to problems for more than 966 non-function scoped parameters).

  • fix issue290 - there is preliminary support now for parametrizing with repeated same values (sometimes useful to test if calling a second time works as with the first time).

  • close issue240 - document precisely how pytest module importing works, discuss the two common test directory layouts, and how it interacts with PEP420-namespace packages.

  • fix issue246 fix finalizer order to be LIFO on independent fixtures depending on a parametrized higher-than-function scoped fixture. (was quite some effort so please bear with the complexity of this sentence :) Thanks Ralph Schmitt for the precise failure example.

  • fix issue244 by implementing special index for parameters to only use indices for parametrized test ids

  • fix issue287 by running all finalizers but saving the exception from the first failing finalizer and re-raising it so teardown will still have failed. We reraise the first failing exception because it might be the cause for other finalizers to fail.

  • fix ordering when mock.patch or other standard decorator-wrappings are used with test methods. This fixes issue346 and should help with random “xdist” collection failures. Thanks to Ronny Pfannschmidt and Donald Stufft for helping to isolate it.

  • fix issue357 - special case “-k” expressions to allow for filtering with simple strings that are not valid python expressions. Examples: “-k 1.3” matches all tests parametrized with 1.3. “-k None” filters all tests that have “None” in their name and conversely “-k ‘not None’”. Previously these examples would raise syntax errors.

  • fix issue384 by removing the trial support code since the unittest compat enhancements allow trial to handle it on its own

  • don’t hide an ImportError when importing a plugin produces one. fixes issue375.

  • fix issue275 - allow usefixtures and autouse fixtures for running doctest text files.

  • fix issue380 by making --resultlog only rely on longrepr instead of the “reprcrash” attribute which only exists sometimes.

  • address issue122: allow @pytest.fixture(params=iterator) by exploding into a list early on.

  • fix pexpect-3.0 compatibility for pytest’s own tests. (fixes issue386)

  • allow nested parametrize-value markers, thanks James Lan for the PR.

  • fix unicode handling with new monkeypatch.setattr(import_path, value) API. Thanks Rob Dennis. Fixes issue371.

  • fix unicode handling with junitxml, fixes issue368.

  • In assertion rewriting mode on Python 2, fix the detection of coding cookies. See issue #330.

  • make “--runxfail” turn imperative pytest.xfail calls into no ops (it already did neutralize pytest.mark.xfail markers)

  • refine pytest / pkg_resources interactions: The AssertionRewritingHook PEP302 compliant loader now registers itself with setuptools/pkg_resources properly so that the pkg_resources.resource_stream method works properly. Fixes issue366. Thanks for the investigations and full PR to Jason R. Coombs.

  • pytestconfig fixture is now session-scoped as it is the same object during the whole test run. Fixes issue370.

  • avoid one surprising case of marker malfunction/confusion:

    @pytest.mark.some(lambda arg: ...)
    def test_function():
        pass
    

    would not work correctly because pytest assumes @pytest.mark.some gets a function to be decorated already. We now at least detect if this arg is a lambda and thus the example will work. Thanks Alex Gaynor for bringing it up.

  • xfail a test on pypy that checks wrong encoding/ascii (pypy does not error out). fixes issue385.

  • internally make varnames() deal with classes’ __init__, although it’s not needed by pytest itself atm. Also fix caching. Fixes issue376.

  • fix issue221 - handle importing of namespace-package with no __init__.py properly.

  • refactor internal FixtureRequest handling to avoid monkeypatching. One of the positive user-facing effects is that the “request” object can now be used in closures.

  • fixed version comparison in pytest.importorskip(modname, minverstring)

  • fix issue377 by clarifying in the nose-compat docs that pytest does not duplicate the unittest-API into the “plain” namespace.

  • fix verbose reporting for @mock’d test functions

pytest-2.4.2: colorama on windows, plugin/tmpdir fixes

pytest-2.4.2 is another bug-fixing release:

  • on Windows require colorama and a newer py lib so that py.io.TerminalWriter() now uses colorama instead of its own ctypes hacks. (fixes issue365) thanks Paul Moore for bringing it up.
  • fix “-k” matching of tests where “repr” and “attr” and other names would cause wrong matches because of an internal implementation quirk (don’t ask) which is now properly implemented. fixes issue345.
  • avoid tmpdir fixture to create too long filenames especially when parametrization is used (issue354)
  • fix pytest-pep8 and pytest-flakes / pytest interactions (collection names in mark plugin was assuming an item always has a function which is not true for those plugins etc.) Thanks Andi Zeidler.
  • introduce node.get_marker/node.add_marker API for plugins like pytest-pep8 and pytest-flakes to avoid the messy details of the node.keywords pseudo-dicts; see the sketch after this list. Adapted docs.
  • remove attempt to “dup” stdout at startup as it’s icky; the normal capturing should catch enough possibilities of tests messing up standard FDs.
  • add pluginmanager.do_configure(config) as a link to config.do_configure() for plugin-compatibility
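
A conftest.py sketch of the new marker API (the “slow” marker name is illustrative):

    import pytest

    def pytest_runtest_setup(item):
        # query markers through the API instead of poking node.keywords
        if item.get_marker("slow") is not None:
            pytest.skip("skipping slow tests in this run")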

as usual, docs at http://pytest.org and upgrades via:

pip install -U pytest

have fun, holger krekel

pytest-2.4.1: fixing three regressions compared to 2.3.5

pytest-2.4.1 is a quick follow up release to fix three regressions compared to 2.3.5 before they hit more people:

  • When using parser.addoption() unicode arguments to the “type” keyword should also be converted to the respective types. thanks Floris Bruynooghe, @dnozay. (fixes issue360 and issue362)
  • fix dotted filename completion when using argcomplete, thanks Anthon van der Neut. (fixes issue361)
  • fix regression when a 1-tuple (“arg”,) is used for specifying parametrization (the values of the parametrization were passed nested in a tuple). Thanks Donald Stufft.
  • also merge doc typo fixes, thanks Andy Dirnberger

as usual, docs at http://pytest.org and upgrades via:

pip install -U pytest

have fun, holger krekel

pytest-2.4.0: new fixture features/hooks and bug fixes

The just released pytest-2.4.0 brings many improvements and numerous bug fixes while remaining plugin- and test-suite compatible apart from a few supposedly very minor incompatibilities. See below for a full list of details. A few feature highlights:

  • new yield-style fixtures pytest.yield_fixture, allowing to use existing with-style context managers in fixture functions.
  • improved pdb support: import pdb ; pdb.set_trace() now works without requiring prior disabling of stdout/stderr capturing. Also the --pdb option now works on collection and internal errors, and we introduced a new experimental hook for IDEs/plugins to intercept debugging: pytest_exception_interact(node, call, report).
  • shorter monkeypatch variant to allow specifying an import path as a target, for example: monkeypatch.setattr("requests.get", myfunc)
  • better unittest/nose compatibility: all teardown methods are now only called if the corresponding setup method succeeded.
  • integrate tab-completion on command line options if you have argcomplete configured.
  • allow boolean expression directly with skipif/xfail if a “reason” is also specified.
  • a new hook pytest_load_initial_conftests allows plugins like pytest-django to influence the environment before conftest files import django.
  • reporting: color the last line red or green depending if failures/errors occurred or everything passed.

The documentation has been updated to accommodate the changes, see http://pytest.org

To install or upgrade pytest:

pip install -U pytest # or
easy_install -U pytest

Many thanks to all who helped, including Floris Bruynooghe, Brianna Laugher, Andreas Pelme, Anthon van der Neut, Anatoly Bubenkoff, Vladimir Keleshev, Mathieu Agopian, Ronny Pfannschmidt, Christian Theunert and many others.

may passing tests be with you,

holger krekel

Changes between 2.3.5 and 2.4

known incompatibilities:

  • if calling --genscript from python2.7 or above, you only get a standalone script which works on python2.7 or above. Use Python2.6 to also get a python2.5 compatible version.
  • all xunit-style teardown methods (nose-style, pytest-style, unittest-style) will not be called if the corresponding setup method failed, see issue322 below.
  • the pytest_plugin_unregister hook wasn’t ever properly called and there is no known implementation of the hook - so it got removed.
  • pytest.fixture-decorated functions cannot be generators (i.e. use yield) anymore. This change might be reversed in 2.4.1 if it causes unforeseen real-life issues. However, you can always write and return an inner function/generator and change the fixture consumer to iterate over the returned generator. This change was made in light of the new pytest.yield_fixture decorator, see below.

new features:

  • experimentally introduce a new pytest.yield_fixture decorator which accepts exactly the same parameters as pytest.fixture but mandates a yield statement instead of a return statement from fixture functions. This allows direct integration with “with-style” context managers in fixture functions and generally avoids registering of finalization callbacks in favour of treating the “after-yield” as teardown code. Thanks Andreas Pelme, Vladimir Keleshev, Floris Bruynooghe, Ronny Pfannschmidt and many others for discussions.
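
    A minimal sketch using a with-style context manager (the file name is illustrative):

    import pytest

    @pytest.yield_fixture
    def tmp_file(tmpdir):
        with tmpdir.join("data.txt").open("w") as f:
            yield f  # the test runs here; the with-block closes the file afterwards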

  • allow boolean expression directly with skipif/xfail if a “reason” is also specified. Rework skipping documentation to recommend “condition as booleans” because it prevents surprises when importing markers between modules. Specifying conditions as strings will remain fully supported.
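
    For example, a condition passed directly as a boolean together with a reason:

    import sys
    import pytest

    @pytest.mark.skipif(sys.version_info < (3, 3), reason="requires Python 3.3")
    def test_py33_feature():
        pass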

  • reporting: color the last line red or green depending if failures/errors occurred or everything passed. thanks Christian Theunert.

  • make “import pdb ; pdb.set_trace()” work natively with respect to capturing (no “-s” needed anymore), making pytest.set_trace() a mere shortcut.

  • fix issue181: --pdb now also works on collect errors (and on internal errors). This was implemented by a slight internal refactoring and the introduction of a new pytest_exception_interact hook (see next item).

  • fix issue341: introduce new experimental hook for IDEs/terminals to intercept debugging: pytest_exception_interact(node, call, report).

  • new monkeypatch.setattr() variant to provide a shorter invocation for patching out classes/functions from modules:

    monkeypatch.setattr("requests.get", myfunc)

    will replace the “get” function of the “requests” module with myfunc.

  • fix issue322: tearDownClass is not run if setUpClass failed. Thanks Mathieu Agopian for the initial fix. Also make all pytest/nose finalizers mimic the same generic behaviour: if a setupX exists and fails, don’t run teardownX. This internally introduces a new “node.addfinalizer()” helper method which can only be called during the setup phase of a node.

  • simplify pytest.mark.parametrize() signature: allow passing a comma-separated string to specify argnames. For example: pytest.mark.parametrize("input,expected", [(1,2), (2,3)]) works as well as the previous: pytest.mark.parametrize(("input", "expected"), ...).
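
    A complete runnable example using the new string form:

    import pytest

    @pytest.mark.parametrize("input,expected", [(1, 2), (2, 3)])
    def test_increment(input, expected):
        assert input + 1 == expected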

  • add support for setUpModule/tearDownModule detection, thanks Brian Okken.

  • integrate tab-completion on options through use of “argcomplete”. Thanks Anthon van der Neut for the PR.

  • change option names to be hyphen-separated long options but keep the old spelling backward compatible. py.test -h will only show the hyphenated version, for example “--collect-only” but “--collectonly” will remain valid as well (for backward-compat reasons). Many thanks to Anthon van der Neut for the implementation and to Hynek Schlawack for pushing us.

  • fix issue 308 - allow to mark/xfail/skip individual parameter sets when parametrizing. Thanks Brianna Laugher.

  • call new experimental pytest_load_initial_conftests hook to allow 3rd party plugins to do something before a conftest is loaded.
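
    A plugin sketch using the hook’s argument names (the injected option is illustrative):

    def pytest_load_initial_conftests(early_config, parser, args):
        # runs before any conftest.py is imported; a plugin like
        # pytest-django can prepare the environment here
        args[:] = ["--tb=short"] + args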

Bug fixes:

  • fix issue358 - capturing options are now parsed more properly by using a new parser.parse_known_args method.
  • pytest now uses argparse instead of optparse (thanks Anthon) which means that “argparse” is added as a dependency if installing into python2.6 environments or below.
  • fix issue333: fix a case of bad unittest/pytest hook interaction.
  • PR27: correctly handle nose.SkipTest during collection. Thanks Antonio Cuni, Ronny Pfannschmidt.
  • fix issue355: junitxml puts name="pytest" attribute to testsuite tag.
  • fix issue336: autouse fixture in plugins should work again.
  • fix issue279: improve object comparisons on assertion failure for standard datatypes and recognise collections.abc. Thanks to Brianna Laugher and Mathieu Agopian.
  • fix issue317: assertion rewriter support for the is_package method
  • fix issue335: document py.code.ExceptionInfo() object returned from pytest.raises(), thanks Mathieu Agopian.
  • remove implicit distribute_setup support from setup.py.
  • fix issue305: ignore any problems when writing pyc files.
  • SO-17664702: call fixture finalizers even if the fixture function partially failed (finalizers would not always be called before)
  • fix issue320 - fix class scope for fixtures when mixed with module-level functions. Thanks Anatoly Bubenkoff.
  • you can specify “-q” or “-qq” to get different levels of “quieter” reporting (thanks Katarzyna Jachim)
  • fix issue300 - Fix order of conftest loading when starting py.test in a subdirectory.
  • fix issue323 - sorting of many module-scoped arg parametrizations
  • make sessionfinish hooks execute with the same cwd-context as at session start (helps fix the behaviour of plugins that write output files with relative paths, such as pytest-cov)
  • fix issue316 - properly reference collection hooks in docs
  • fix issue 306 - cleanup of -k/-m options to only match markers/test names/keywords respectively. Thanks Wouter van Ackooy.
  • improved doctest counting for doctests in python modules: files without any doctest items will not show up anymore and doctest examples are counted as separate test items. thanks Danilo Bellini.
  • fix issue245 by depending on the released py-1.4.14 which fixes py.io.dupfile to work with files with no mode. Thanks Jason R. Coombs.
  • fix junitxml generation when test output contains control characters, addressing issue267, thanks Jaap Broekhuizen
  • fix issue338: honor --tb style for setup/teardown errors as well. Thanks Maho.
  • fix issue307 - use yaml.safe_load in example, thanks Mark Eichin.
  • better parametrize error messages, thanks Brianna Laugher
  • pytest_terminal_summary(terminalreporter) hooks can now use ".section(title)" and ".line(msg)" methods to print extra information at the end of a test run.
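
For example, a conftest.py sketch of these methods:

    def pytest_terminal_summary(terminalreporter):
        terminalreporter.section("project notes")
        terminalreporter.line("extra information shown at the end of the run")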

pytest-2.3.5: bug fixes and little improvements

pytest-2.3.5 is a maintenance release with many bug fixes and little improvements. See the changelog below for details. No backward compatibility issues are foreseen and all plugins which worked with the prior version are expected to work unmodified. Speaking of which, a few interesting new plugins saw the light last month:

  • pytest-instafail: show failure information while tests are running
  • pytest-qt: testing of GUI applications written with QT/Pyside
  • pytest-xprocess: managing external processes across test runs
  • pytest-random: randomize test ordering

And several others like pytest-django saw maintenance releases. For a more complete list, check out https://pypi.python.org/pypi?%3Aaction=search&term=pytest&submit=search.

For general information see http://pytest.org.

To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

Particular thanks to Floris, Ronny, Benjamin and the many bug reporters and fix providers.

may the fixtures be with you, holger krekel

Changes between 2.3.4 and 2.3.5

  • never consider a fixture function for test function collection
  • allow re-running of test items; helps the pytest-reruntests plugin and also helps keep fewer fixture/resource references alive
  • put captured stdout/stderr into junitxml output even for passing tests (thanks Adam Goucher)
  • Issue 265 - integrate nose setup/teardown with setupstate so it doesn’t try to teardown if it did not setup
  • issue 271 - don’t write junitxml on slave nodes
  • Issue 274 - don’t try to show full doctest example when doctest does not know the example location
  • issue 280 - disable assertion rewriting on buggy CPython 2.6.0
  • inject “getfixture()” helper to retrieve fixtures from doctests, thanks Andreas Zeidler
  • issue 259 - when assertion rewriting, be consistent with the default source encoding of ASCII on Python 2
  • issue 251 - report a skip instead of ignoring classes with an __init__ method
  • issue250 unicode/str mixes in parametrization names and values now works
  • issue257, assertion-triggered compilation of source ending in a comment line doesn’t blow up in python2.5 (fixed through py>=1.4.13.dev6)
  • fix --genscript option to generate standalone scripts that also work with python3.3 (importer ordering)
  • issue171 - in assertion rewriting, show the repr of some global variables
  • fix option help for “-k”
  • move long description of distribution into README.rst
  • improve docstring for metafunc.parametrize()
  • fix bug where using capsys with pytest.set_trace() in a test function would break when looking at capsys.readouterr()
  • allow to specify prefixes starting with “_” when customizing python_functions test discovery. (thanks Graham Horler)
  • improve PYTEST_DEBUG tracing output by putting extra data on new lines with additional indent
  • ensure OutcomeExceptions like skip/fail have initialized exception attributes
  • issue 260 - don’t use nose special setup on plain unittest cases
  • fix issue134 - print the collect errors that prevent running specified test items
  • fix issue266 - accept unicode in MarkEvaluator expressions

pytest-2.3.4: stabilization, more flexible selection via “-k expr”

pytest-2.3.4 is a small stabilization release of the py.test tool which offers uebersimple assertions, scalable fixture mechanisms and deep customization for testing with Python. This release comes with the following fixes and features:

  • make the “-k” option accept expressions the same way “-m” does, so that one can write: -k “name1 or name2” etc. This is a slight usage incompatibility if you used special syntax like “TestClass.test_method” which you now need to write as -k “TestClass and test_method” to match a certain method in a certain test class.
  • allow to dynamically define markers via item.keywords[...]=assignment integrating with “-m” option
  • yielded test functions will now have autouse-fixtures active but cannot accept fixtures as funcargs - it is recommended to use the post-2.0 parametrize features instead of yield, see: http://pytest.org/latest/example/parametrize.html
  • fix autouse-issue where autouse-fixtures would not be discovered if defined in a/conftest.py while tests live in a/tests/test_some.py
  • fix issue226 - LIFO ordering for fixture teardowns
  • fix issue224 - invocations with >256 char arguments now work
  • fix issue91 - add/discuss package/directory level setups in example
  • fixes related to autouse discovery and calling

Thanks in particular to Thomas Waldmann for spotting and reporting issues.

See http://pytest.org for general information. To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

best, holger krekel

pytest-2.3.3: integration fixes, py24 support, */** shown in traceback

pytest-2.3.3 is another stabilization release of the py.test tool which offers uebersimple assertions, scalable fixture mechanisms and deep customization for testing with Python. Particularly, this release provides:

  • integration fixes and improvements related to flask, numpy, nose, unittest, mock
  • makes pytest work on py24 again (yes, people sometimes still need to use it)
  • show *,** args in pytest tracebacks

Thanks to Manuel Jacob, Thomas Waldmann, Ronny Pfannschmidt, Pavel Repin and Andreas Taumoefolau for providing patches and all for the issues.

See http://pytest.org for general information. To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

best, holger krekel

Changes between 2.3.2 and 2.3.3

  • fix issue214 - parse modules that contain special objects like e.g. flask’s request object, which blows up on getattr access if no request is active. thanks Thomas Waldmann.
  • fix issue213 - allow to parametrize with values like numpy arrays that do not support an __eq__ operator
  • fix issue215 - split test_python.py into multiple files
  • fix issue148 - @unittest.skip on classes is now recognized and avoids calling setUpClass/tearDownClass, thanks Pavel Repin
  • fix issue209 - reintroduce python2.4 support by depending on newer pylib which re-introduced statement-finding for pre-AST interpreters
  • nose support: only call setup if it’s a callable, thanks Andrew Taumoefolau
  • fix issue219 - add py2.4-3.3 classifiers to TROVE list
  • in tracebacks, *arg values are now shown next to normal arguments (thanks Manuel Jacob)
  • fix issue217 - support mock.patch with pytest’s fixtures - note that you need either mock-1.0.1 or the python3.3 builtin unittest.mock.
  • fix issue127 - improve documentation for pytest_addoption() and add a config.getoption(name) helper function for consistency.

pytest-2.3.2: some fixes and more traceback-printing speed

pytest-2.3.2 is another stabilization release:

  • issue 205: fixes a regression with conftest detection
  • issue 208/29: fixes traceback-printing speed in some bad cases
  • fix teardown-ordering for parametrized setups
  • fix unittest and trial compat behaviour with respect to runTest() methods
  • issue 206 and others: some improvements to packaging
  • fix issue127 and others: improve some docs

See http://pytest.org for general information. To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

best, holger krekel

Changes between 2.3.1 and 2.3.2

  • fix issue208 and issue29: use a new py version to avoid long pauses when printing tracebacks in long modules
  • fix issue205 - conftests in subdirs customizing pytest_pycollect_makemodule and pytest_pycollect_makeitem now work properly
  • fix teardown-ordering for parametrized setups
  • fix issue127 - better documentation for pytest_addoption and related objects.
  • fix unittest behaviour: TestCase.runtest only called if there are test methods defined
  • improve trial support: don’t collect its empty unittest.TestCase.runTest() method
  • “python setup.py test” now works with pytest itself
  • fix/improve internal/packaging related bits:
    • exception message check of test_nose.py now passes on python3.3 as well
    • issue206 - fix test_assertrewrite.py to work when a global PYTHONDONTWRITEBYTECODE=1 is present
    • add tox.ini to pytest distribution so that ignore-dirs and others config bits are properly distributed for maintainers who run pytest-own tests

pytest-2.3.1: fix regression with factory functions

pytest-2.3.1 is a quick follow-up release:

  • fix issue202 - regression with fixture functions/funcarg factories: using “self” is now safe again and works as in 2.2.4. Thanks to Eduard Schettino for the quick bug report.
  • disable pexpect pytest self tests on Freebsd - thanks Koob for the quick reporting
  • fix/improve interactive docs with --markers

See http://pytest.org for general information. To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

best, holger krekel

Changes between 2.3.0 and 2.3.1

  • fix issue202 - fix regression: using “self” from fixture functions now works as expected (it’s the same “self” instance that a test method which uses the fixture sees)
  • skip pexpect using tests (test_pdb.py mostly) on freebsd* systems due to pexpect not supporting it properly (hanging)
  • link to web pages from --markers output which provides help for pytest.mark.* usage.

pytest-2.3: improved fixtures / better unittest integration

pytest-2.3 comes with many major improvements for fixture/funcarg management and parametrized testing in Python. It is now easier, more efficient and more predictable to re-run the same tests with different fixture instances. Also, you can directly declare the caching “scope” of fixtures so that dependent tests throughout your whole test suite can re-use database or other expensive fixture objects with ease. Lastly, it’s possible for fixture functions (formerly known as funcarg factories) to use other fixtures, allowing for a completely modular and re-usable fixture design.

For detailed info and tutorial-style examples, see http://pytest.org/latest/fixture.html

Moreover, there is now support for using pytest fixtures/funcargs with unittest-style suites; see http://pytest.org/latest/unittest.html for examples.

Besides, more unittest-test suites are now expected to “simply work” with pytest.

All changes are backward compatible and you should be able to continue to run your test suites and 3rd party plugins that worked with pytest-2.2.4.

If you are interested in the precise reasoning (including examples) of the pytest-2.3 fixture evolution, please consult http://pytest.org/latest/funcarg_compare.html

For general info on installation and getting started, see http://pytest.org/latest/getting-started.html

Docs and PDF access as usual at http://pytest.org

and more details for those already familiar with pytest can be found in the CHANGELOG below.

Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko, Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping to get the new features right and well integrated. Ronny and Floris also helped to fix a number of bugs and yet more people helped by providing bug reports.

have fun, holger krekel

Changes between 2.2.4 and 2.3.0

  • fix issue202 - better automatic names for parametrized test functions
  • fix issue139 - introduce @pytest.fixture which allows direct scoping and parametrization of funcarg factories. Introduce new @pytest.setup marker to allow the writing of setup functions which accept funcargs.
  • fix issue198 - conftest fixtures were not found on windows32 in some circumstances with nested directory structures due to path manipulation issues
  • fix issue193: skip test functions that were parametrized with empty parameter sets
  • fix python3.3 compat, mostly reporting bits that previously depended on dict ordering
  • introduce re-ordering of tests by resource and parametrization setup, which takes precedence over the usual file-ordering
  • fix issue185 monkeypatching time.time does not cause pytest to fail
  • fix issue172: duplicate call of pytest.setup-decorated setup_module functions
  • fix junitxml=path construction so that if tests change the current working directory and the path is a relative path it is constructed correctly from the original current working dir.
  • fix “python setup.py test” example to cause a proper “errno” return
  • fix issue165 - fix broken doc links and mention stackoverflow for FAQ
  • catch unicode-issues when writing failure representations to terminal to prevent the whole session from crashing
  • fix xfail/skip confusion: a skip-mark or an imperative pytest.skip will now take precedence before xfail-markers because we can’t determine xfail/xpass status in case of a skip. see also: http://stackoverflow.com/questions/11105828/in-py-test-when-i-explicitly-skip-a-test-that-is-marked-as-xfail-how-can-i-get
  • always report installed 3rd party plugins in the header of a test run
  • fix issue160: a failing setup of an xfail-marked tests should be reported as xfail (not xpass)
  • fix issue128: show captured output when capsys/capfd are used
  • fix issue179: properly show the dependency chain of factories
  • pluginmanager.register(...) now raises ValueError if the plugin has been already registered or the name is taken
  • fix issue159: improve http://pytest.org/latest/faq.html especially with respect to the “magic” history, also mention pytest-django, trial and unittest integration.
  • make request.keywords and node.keywords writable. All descendant collection nodes will see keyword values. Keywords are dictionaries containing markers and other info.
  • fix issue 178: xml binary escapes are now wrapped in py.xml.raw
  • fix issue 176: correctly catch the builtin AssertionError even when we replaced AssertionError with a subclass on the python level
  • factory discovery no longer fails with magic global callables that provide no sane __code__ object (mock.call for example)
  • fix issue 182: testdir.inprocess_run now considers passed plugins
  • fix issue 188: ensure sys.exc_info is clear on python2 before calling into a test
  • fix issue 191: add unittest TestCase runTest method support
  • fix issue 156: monkeypatch correctly handles class level descriptors
  • reporting refinements:
    • pytest_report_header now receives a “startdir” so that you can use startdir.bestrelpath(yourpath) to show nice relative paths; see the sketch after this list
    • allow plugins to implement both pytest_report_header and pytest_sessionstart (sessionstart is invoked first).
    • don’t show deselected reason line if there is none
    • py.test -vv will show assert comparisons in full instead of truncating
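
A conftest.py sketch of the startdir argument mentioned above (the data directory is illustrative):

    import py

    def pytest_report_header(config, startdir):
        datadir = py.path.local("/tmp/testdata")
        return "test data: %s" % startdir.bestrelpath(datadir)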

pytest-2.2.4: bug fixes, better junitxml/unittest/python3 compat

pytest-2.2.4 is a minor backward-compatible release of the versatile py.test testing tool. It contains bug fixes and a few refinements to junitxml reporting, better unittest- and python3 compatibility.

For general information see http://pytest.org.

To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

Special thanks for helping on this release to Ronny Pfannschmidt and Benjamin Peterson and the contributors of issues.

best, holger krekel

Changes between 2.2.3 and 2.2.4

  • fix error message for rewritten assertions involving the % operator
  • fix issue 126: correctly match all invalid xml characters for junitxml binary escape
  • fix issue with unittest: now @unittest.expectedFailure markers should be processed correctly (you can also use @pytest.mark markers)
  • document integration with the extended distribute/setuptools test commands
  • fix issue 140: properly get the real functions of bound classmethods for setup/teardown_class
  • fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
  • fix issue #143: call unconfigure/sessionfinish always when configure/sessionstart where called
  • fix issue #144: better mangle test ids to junitxml classnames
  • upgrade distribute_setup.py to 0.6.27

pytest-2.2.2: bug fixes

pytest-2.2.2 (updated to 2.2.3 to fix packaging issues) is a minor backward-compatible release of the versatile py.test testing tool. It contains bug fixes and a few refinements particularly to reporting with “--collectonly”, see below for details.

For general information see http://pytest.org.

To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

Special thanks for helping on this release to Ronny Pfannschmidt and Ralf Schmitt and the contributors of issues.

best, holger krekel

Changes between 2.2.1 and 2.2.2

  • fix issue101: wrong args to unittest.TestCase test function now produce better output
  • fix issue102: report more useful errors and hints for when a test directory was renamed and some pyc/__pycache__ remain
  • fix issue106: allow parametrize to be applied multiple times e.g. from module, class and at function level.
  • fix issue107: actually perform session scope finalization
  • don’t check in parametrize if indirect parameters are funcarg names
  • add chdir method to monkeypatch funcarg
  • fix crash resulting from calling monkeypatch undo a second time
  • fix issue115: make --collectonly robust against early failure (missing files/directories)
  • “-qq --collectonly” now shows only files and the number of tests in them
  • “-q --collectonly” now shows test ids
  • allow adding of attributes to test reports such that it also works with distributed testing (no upgrade of pytest-xdist needed)

pytest-2.2.1: bug fixes, perfect teardowns

pytest-2.2.1 is a minor backward-compatible release of the py.test testing tool. It contains bug fixes and little improvements, including documentation fixes. If you are using the distributed testing plugin, make sure to upgrade it to pytest-xdist-1.8.

For general information see http://pytest.org.

To install or upgrade pytest:

pip install -U pytest # or easy_install -U pytest

Special thanks for helping on this release to Ronny Pfannschmidt, Jurko Gospodnetic and Ralf Schmitt.

best, holger krekel

Changes between 2.2.0 and 2.2.1

  • fix issue99 (in pytest and py): internal errors with resultlog now produce better output - fixed by normalizing pytest_internalerror input arguments.
  • fix issue97 / traceback issues (in pytest and py): improve traceback output in conjunction with jinja2 and cython which hack tracebacks
  • fix issue93 (in pytest and pytest-xdist) avoid “delayed teardowns”: the final test in a test node will now run its teardown directly instead of waiting for the end of the session. Thanks Dave Hunt for the good reporting and feedback. The pytest_runtest_protocol as well as the pytest_runtest_teardown hooks now have “nextitem” available which will be None indicating the end of the test run.
  • fix collection crash due to unknown-source collected items, thanks to Ralf Schmitt (fixed by depending on a more recent pylib)

py.test 2.2.0: test marking++, parametrization++ and duration profiling

pytest-2.2.0 is a test-suite compatible release of the popular py.test testing tool. Plugins might need upgrades. It comes with these improvements:

  • easier and more powerful parametrization of tests:
    • new @pytest.mark.parametrize decorator to run tests with different arguments
    • new metafunc.parametrize() API for parametrizing arguments independently
    • see examples at http://pytest.org/latest/example/parametrize.html
    • NOTE that parametrize() related APIs are still a bit experimental and might change in future releases.
  • improved handling of test markers and refined marking mechanism:
    • “-m markexpr” option for selecting tests according to their mark
    • a new “markers” ini-variable for registering test markers for your project
    • the new “--strict” option bails out with an error if unregistered markers are used.
    • see examples at http://pytest.org/latest/example/markers.html
  • duration profiling: new “--durations=N” option showing the N slowest test executions or setup/teardown calls. This is most useful if you want to find out where your slowest test code is.
  • also 2.2.0 performs more eager calling of teardown/finalizer functions, resulting in better and more accurate reporting when they fail

Besides, there is the usual set of bug fixes along with a cleanup of pytest’s own test suite allowing it to run on a wider range of environments.

For general information, see the extensive docs with examples at http://pytest.org

If you want to install or upgrade pytest you might just type:

pip install -U pytest # or
easy_install -U pytest

Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri, Alfredo Deza and all who gave feedback or sent bug reports.

best, holger krekel

notes on incompatibility

While test suites should work unchanged you might need to upgrade plugins:

  • You need a new version of the pytest-xdist plugin (1.7) for distributing test runs.
  • Other plugins might need an upgrade if they implement the pytest_runtest_logreport hook which now is called unconditionally for the setup/teardown fixture phases of a test. You may choose to ignore setup/teardown failures by inserting "if rep.when != 'call': return" or something similar. Note that most code probably “just” works because the hook was already called for failing setup/teardown phases of a test so a plugin should have been ready to grok such reports already.

Changes between 2.1.3 and 2.2.0

  • fix issue90: introduce eager tearing down of test items so that teardown functions are called earlier.
  • add an all-powerful metafunc.parametrize function which allows parametrizing test function arguments in multiple steps and therefore from independent plugins and places; see the sketch after this list.
  • add a @pytest.mark.parametrize helper which allows to easily call a test function with different argument values.
  • Add examples to the “parametrize” example page, including a quick port of Test scenarios and the new parametrize function and decorator.
  • introduce registration for “pytest.mark.*” helpers via ini-files or through plugin hooks. Also introduce a “--strict” option which will treat unregistered markers as errors, allowing to avoid typos and maintain a well described set of markers for your test suite. See examples at http://pytest.org/latest/mark.html and its links.
  • issue50: introduce “-m marker” option to select tests based on markers (this is a stricter and more predictable version of “-k” in that “-m” only matches complete markers and has more obvious rules for and/or semantics).
  • new feature to help optimize the speed of your tests: --durations=N option for displaying the N slowest test calls and setup/teardown methods.
  • fix issue87: --pastebin now works with python3
  • fix issue89: --pdb with unexpected exceptions in doctests works more sensibly
  • fix and cleanup pytest’s own test suite to not leak FDs
  • fix issue83: link to generated funcarg list
  • fix issue74: pyarg module names are now checked against imp.find_module false positives
  • fix compatibility with twisted/trial-11.1.0 use cases
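
A conftest.py sketch of the metafunc.parametrize API mentioned above (names are illustrative):

    def pytest_generate_tests(metafunc):
        # parametrize the "backend" argument for any test that requests it
        if "backend" in metafunc.funcargnames:
            metafunc.parametrize("backend", ["sqlite", "postgres"])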

py.test 2.1.3: just some more fixes

pytest-2.1.3 is a minor backward compatible maintenance release of the popular py.test testing tool. It is commonly used for unit, functional- and integration testing. See extensive docs with examples at http://pytest.org

The release contains another fix to the perfected assertions introduced with the 2.1 series as well as the new possibility to customize reporting for assertion expressions on a per-directory level.

If you want to install or upgrade pytest, just type one of:

pip install -U pytest # or
easy_install -U pytest

Thanks to the bug reporters and to Ronny Pfannschmidt, Benjamin Peterson and Floris Bruynooghe who implemented the fixes.

best, holger krekel

Changes between 2.1.2 and 2.1.3

  • fix issue79: assertion rewriting failed on some comparisons in boolops
  • correctly handle zero length arguments (a la pytest '')
  • fix issue67 / junitxml now contains correct test durations
  • fix issue75 / skipping test failure on jython
  • fix issue77 / Allow assertrepr_compare hook to apply to a subset of tests

py.test 2.1.2: bug fixes and fixes for jython

pytest-2.1.2 is a minor backward compatible maintenance release of the popular py.test testing tool. pytest is commonly used for unit, functional- and integration testing. See extensive docs with examples at http://pytest.org

Most bug fixes address remaining issues with the perfected assertions introduced in the 2.1 series - many thanks to the bug reporters and to Benjamin Peterson for helping to fix them. pytest should also work better with Jython-2.5.1 (and Jython trunk).

If you want to install or upgrade pytest, just type one of:

pip install -U pytest # or
easy_install -U pytest

best, holger krekel / http://merlinux.eu

Changes between 2.1.1 and 2.1.2

  • fix assertion rewriting on files with windows newlines on some Python versions
  • refine test discovery by package/module name (--pyargs), thanks Florian Mayer
  • fix issue69 / assertion rewriting fixed on some boolean operations
  • fix issue68 / packages now work with assertion rewriting
  • fix issue66: use different assertion rewriting caches when the -O option is passed
  • don’t try assertion rewriting on Jython, use reinterp

py.test 2.1.1: assertion fixes and improved junitxml output

pytest-2.1.1 is a backward compatible maintenance release of the popular py.test testing tool. See extensive docs with examples at http://pytest.org

Most bug fixes address remaining issues with the perfected assertions introduced with 2.1.0 - many thanks to the bug reporters and to Benjamin Peterson for helping to fix them. Also, junitxml output now produces system-out/err tags which lead to better displays of tracebacks with Jenkins.

Also a quick note to package maintainers and others interested: there now is a “pytest” man page which can be generated with “make man” in doc/.

If you want to install or upgrade pytest, just type one of:

pip install -U pytest # or
easy_install -U pytest

best, holger krekel / http://merlinux.eu

Changes between 2.1.0 and 2.1.1

  • fix issue64 / pytest.set_trace now works within pytest_generate_tests hooks
  • fix issue60 / fix error conditions involving the creation of __pycache__
  • fix issue63 / assertion rewriting on inserts involving strings containing ‘%’
  • fix assertion rewriting on calls with a ** arg
  • don’t cache rewritten modules if bytecode generation is disabled
  • fix assertion rewriting in read-only directories
  • fix issue59: provide system-out/err tags for junitxml output
  • fix issue61: assertion rewriting on boolean operations with 3 or more operands
  • you can now build a man page with “cd doc ; make man”

py.test 2.1.0: perfected assertions and bug fixes

Welcome to the release of pytest-2.1, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See the improved extensive docs (now also as PDF!) with tested examples at http://pytest.org

The biggest news in this release is perfected assertions, courtesy of Benjamin Peterson. You can now safely use assert statements in test modules without having to worry about side effects or python optimization (“-OO”) options. This is achieved by rewriting assert statements in test modules upon import, using a PEP302 hook. See http://pytest.org/assert.html#advanced-assertion-introspection for detailed information. The work has been partly sponsored by my company, merlinux GmbH.

For further details on bug fixes and smaller enhancements see below.

If you want to install or upgrade pytest, just type one of:

pip install -U pytest # or
easy_install -U pytest

best, holger krekel / http://merlinux.eu

Changes between 2.0.3 and 2.1.0

  • fix issue53: call nose-style setup functions with correct ordering
  • fix issue58 and issue59: new assertion code fixes
  • merge Benjamin’s assertionrewrite branch: now assertions for test modules on python 2.6 and above are done by rewriting the AST and saving the pyc file before the test module is imported. see doc/assert.txt for more info.
  • fix issue43: improve doctests with better traceback reporting on unexpected exceptions
  • fix issue47: timing output in junitxml for test cases is now correct
  • fix issue48: typo in MarkInfo repr leading to exception
  • fix issue49: avoid confusing error when initialization partially fails
  • fix issue44: env/username expansion for junitxml file path
  • show releaselevel information in test runs for pypy
  • reworked doc pages for better navigation and PDF generation
  • report KeyboardInterrupt even if interrupted during session startup
  • fix issue 35 - provide PDF doc version and download link from index page

py.test 2.0.3: bug fixes and speed ups

Welcome to pytest-2.0.3, a maintenance and bug fix release of pytest, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See the extensive docs with tested examples at http://pytest.org

If you want to install or upgrade pytest, just type one of:

pip install -U pytest # or
easy_install -U pytest

There also is a bugfix release 1.6 of pytest-xdist, the plugin that enables seamless distributed and “looponfail” testing for Python.

best, holger krekel

Changes between 2.0.2 and 2.0.3

  • fix issue38: nicer tracebacks on calls to hooks, particularly early configure/sessionstart ones
  • fix missing skip reason/meta information in junitxml files, reported via http://lists.idyll.org/pipermail/testing-in-python/2011-March/003928.html
  • fix issue34: avoid collection failure with “test” prefixed classes deriving from object.
  • don’t require zlib (and other libs) for genscript plugin without --genscript actually being used.
  • speed up skips (by not doing a full traceback representation internally)
  • fix issue37: avoid invalid characters in junitxml’s output

py.test 2.0.2: bug fixes, improved xfail/skip expressions, speed ups

Welcome to pytest-2.0.2, a maintenance and bug fix release of pytest, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See the extensive docs with tested examples at http://pytest.org

If you want to install or upgrade pytest, just type one of:

pip install -U pytest # or
easy_install -U pytest

Many thanks to all issue reporters and people asking questions or complaining, particularly Jurko for his insistence, Laura, Victor and Brianna for helping with improving and Ronny for his general advice.

best, holger krekel

Changes between 2.0.1 and 2.0.2

  • tackle issue32 - speed up test runs of very quick test functions by reducing the relative overhead

  • fix issue30 - extended xfail/skipif handling and improved reporting. If you have a syntax error in your skip/xfail expressions you now get nice error reports.

    Also you can now access module globals from xfail/skipif expressions so that this for example works now:

    import pytest
    import mymodule
    @pytest.mark.skipif("mymodule.__version__[0] != '1'")
    def test_function():
        pass
    

    This will not run the test function if the module’s version string does not start with a “1”. Note that specifying a string instead of a boolean expression allows py.test to report meaningful information when summarizing a test run as to what conditions led to skipping (or xfail-ing) tests.

  • fix issue28 - setup_method and pytest_generate_tests work together The setup_method fixture method now gets called also for test function invocations generated from the pytest_generate_tests hook.

  • fix issue27 - collectonly and keyword-selection (-k) now work together. Also, if you do “py.test --collectonly -q” you now get a flat list of test ids that you can paste to the py.test commandline in order to execute a particular test.

  • fix issue25: avoid reported problems with --pdb and python3.2/encodings output

  • fix issue23 - tmpdir argument now works on Python3.2 and WindowsXP Starting with Python3.2 os.symlink may be supported. By requiring a newer py lib version the py.path.local() implementation acknowledges this.

  • fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular thanks to Laura Creighton who also reviewed parts of the documentation.

  • fix slightly wrong output of verbose progress reporting for classes (thanks Amaury)

  • more precise (avoiding of) deprecation warnings for node.Class|Function accesses

  • avoid std unittest assertion helper code in tracebacks (thanks Ronny)

py.test 2.0.1: bug fixes

Welcome to pytest-2.0.1, a maintenance and bug fix release of pytest, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See extensive docs with tested examples at http://pytest.org

If you want to install or upgrade pytest, just type one of:

pip install -U pytest # or
easy_install -U pytest

Many thanks to all issue reporters and people asking questions or complaining. Particular thanks to Floris Bruynooghe and Ronny Pfannschmidt for their great coding contributions and many others for feedback and help.

best, holger krekel

Changes between 2.0.0 and 2.0.1

  • refine and unify initial capturing so that it works nicely even if the logging module is used on an early-loaded conftest.py file or plugin.
  • fix issue12 - show plugin versions with “--version” and “--traceconfig” and also document how to add extra information to the reporting test header
  • fix issue17 (import-* reporting issue on python3) by requiring py>1.4.0 (1.4.1 is going to include it)
  • fix issue10 (numpy arrays truth checking) by refining assertion interpretation in py lib
  • fix issue15: make nose compatibility tests compatible with python3 (now that nose-1.0 supports python3)
  • remove somewhat surprising “same-conftest” detection because it ignores conftest.py files when they appear in several subdirs.
  • improve assertions (“not in”), thanks Floris Bruynooghe
  • improve behaviour/warnings when running on top of “python -OO” (assertions and docstrings are turned off, leading to potential false positives)
  • introduce a pytest_cmdline_processargs(args) hook to allow dynamic computation of command line arguments; see the sketch after this list. This fixes a regression because py.test prior to 2.0 allowed setting command line options from conftest.py files, which pytest-2.0 so far only allowed from ini-files.
  • fix issue7: assert failures in doctest modules. unexpected failures in doctests will now generally show nicer, i.e. within the failing doctest context.
  • fix issue9: setup/teardown functions for an xfail-marked test will report as xfail if they fail but report as normally passing (not xpassing) if they succeed. This only is true for “direct” setup/teardown invocations because teardown_class/ teardown_module cannot closely relate to a single test.
  • fix issue14: no logging errors at process exit
  • refinements to “collecting” output on non-ttys
  • refine internal plugin registration and --traceconfig output
  • introduce a mechanism to prevent/unregister plugins from the command line, see http://pytest.org/latest/plugins.html#cmdunregister
  • activate resultlog plugin by default
  • fix regression wrt yielded tests which due to the collection-before-running semantics were not setup as with pytest 1.3.4. Note, however, that the recommended and much cleaner way to do test parametrization remains the “pytest_generate_tests” mechanism, see the docs.
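
A conftest.py sketch of the pytest_cmdline_processargs hook mentioned above (the appended option is illustrative):

    def pytest_cmdline_processargs(args):
        # compute command line arguments dynamically
        args.append("--tb=short")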

py.test 2.0.0: asserts++, unittest++, reporting++, config++, docs++

Welcome to pytest-2.0.0, a major new release of “py.test”, the rapid easy Python testing tool. There are many new features and enhancements, see below for summary and detailed lists. A lot of long-deprecated code has been removed, resulting in a much smaller and cleaner implementation. See the new docs with examples at http://pytest.org

A note on packaging: pytest used to be part of the “py” distribution up until version py-1.3.4 but this has changed now: pytest-2.0.0 only contains py.test related code and is expected to be backward-compatible to existing test code. If you want to install pytest, just type one of:

pip install -U pytest
easy_install -U pytest

Many thanks to all issue reporters and people asking questions or complaining. Particular thanks to Floris Bruynooghe and Ronny Pfannschmidt for their great coding contributions and many others for feedback and help.

best, holger krekel

New Features

  • new invocations through Python interpreter and from Python:

    python -m pytest      # on all pythons >= 2.5
    

    or from a python program:

    import pytest ; pytest.main(arglist, pluginlist)
    

    see http://pytest.org/2.0.0/usage.html for details.

  • new and better reporting information in assert expressions if comparing lists, sequences or strings.

    see http://pytest.org/2.0.0/assert.html#newreport

  • new configuration through ini-files (setup.cfg or tox.ini recognized), for example:

    [pytest]
    norecursedirs = .hg data*  # don't ever recurse in such dirs
    addopts = -x --pyargs      # add these command line options by default
    

    see http://pytest.org/2.0.0/customize.html

  • improved standard unittest support. In general py.test should now be better able to run custom unittest.TestCases like twisted trial or Django based TestCases. Also you can now run the tests of an installed ‘unittest’ package with py.test:

    py.test --pyargs unittest
    
  • new “-q” option which decreases verbosity and prints a more nose/unittest-style “dot” output.

  • many more detailed improvements; see the Changelog history below for details

Fixes

  • fix issue126 - introduce py.test.set_trace() to trace execution via PDB during the running of tests even if capturing is ongoing.
  • fix issue124 - make reporting more resilient against tests opening files on filedescriptor 1 (stdout).
  • fix issue109 - sibling conftest.py files will not be loaded. (and Directory collectors cannot be customized anymore from a Directory’s conftest.py - this needs to happen at least one level up).
  • fix issue88 (finding custom test nodes from command line arg)
  • fix issue93 stdout/stderr is captured while importing conftest.py
  • fix bug: unittest collected functions now also can have “pytestmark” applied at class/module level

Important Notes

  • The usual way in pre-2.0 times to use py.test in python code was to import “py” and then e.g. use “py.test.raises” for the helper. This remains valid and is not planned to be deprecated. However, in most examples and internal code you’ll find “import pytest” and “pytest.raises” used as the recommended default way.
  • pytest now first performs collection of the complete test suite before running any test. This changes for example the semantics of when pytest_collectstart/pytest_collectreport are called. Some plugins may need upgrading.
  • The pytest package consists of a 400 LOC core.py and about 20 builtin plugins, summing up to roughly 5000 LOCs, including docstrings. To be fair, it also uses generic code from the “pylib”, and the new “py” package to help with filesystem and introspection/code manipulation.

(Incompatible) Removals

  • py.test.config is now only available if you are in a test run.

  • the following (mostly already deprecated) functionality was removed:

    • removed support for Module/Class/... collection node definitions in conftest.py files. They will cause nothing special.
    • removed support for calling the pre-1.0 collection API of “run()” and “join”
    • removed reading option values from conftest.py files or env variables. This can now be done much much better and easier through the ini-file mechanism and the “addopts” entry in particular.
    • removed the “disabled” attribute in test classes. Use the skipping and pytestmark mechanism to skip or xfail a test class.
  • py.test.collect.Directory does not exist anymore and it is not possible to provide an own “Directory” object. If you have used this and don’t know what to do, get in contact. We’ll figure something out.

    Note that pytest_collect_directory() is still called but any return value will be ignored. This allows to keep old code working that performed for example “py.test.skip()” in collect() to prevent recursion into directory trees if a certain dependency or command line option is missing.

see Changelog history for more detailed changes.

Changelog history

3.0.7 (2017-03-14)

  • Fix issue in assertion rewriting breaking due to modules silently discarding other modules when importing fails. Notably, importing the anydbm module is fixed. (#2248). Thanks @pfhayes for the PR.
  • junitxml: Fix problematic case where system-out tag occurred twice per testcase element in the XML report. Thanks @kkoukiou for the PR.
  • Fix regression, pytest now skips unittest correctly if run with --pdb (#2137). Thanks to @gst for the report and @mbyt for the PR.
  • Ignore exceptions raised from descriptors (e.g. properties) during Python test collection (#2234). Thanks to @bluetech.
  • --override-ini now correctly overrides some fundamental options like python_files (#2238). Thanks @sirex for the report and @nicoddemus for the PR.
  • Replace raise StopIteration usages in the code by simple returns to finish generators, in accordance to PEP-479 (#2160). Thanks @tgoodlet for the report and @nicoddemus for the PR.
  • Fix internal errors when an unprintable AssertionError is raised inside a test. Thanks @omerhadari for the PR.
  • Skipping plugin now also works with test items generated by custom collectors (#2231). Thanks to @vidartf.
  • Fix trailing whitespace in console output if no .ini file presented (#2281). Thanks @fbjorn for the PR.
  • Conditionless xfail markers no longer rely on the underlying test item being an instance of PyobjMixin, and can therefore apply to tests not collected by the built-in python test collector. Thanks @barneygale for the PR.

3.0.6 (2017-01-22)

  • pytest no longer generates PendingDeprecationWarning from its own operations, which was introduced by mistake in version 3.0.5 (#2118). Thanks to @nicoddemus for the report and @RonnyPfannschmidt for the PR.
  • pytest no longer recognizes coroutine functions as yield tests (#2129). Thanks to @malinoff for the PR.
  • Plugins loaded by the PYTEST_PLUGINS environment variable are now automatically considered for assertion rewriting (#2185). Thanks @nicoddemus for the PR.
  • Improve error message when pytest.warns fails (#2150). The type(s) of the expected warnings and the list of caught warnings is added to the error message. Thanks @lesteve for the PR.
  • Fix pytester internal plugin to work correctly with latest versions of zope.interface (#1989). Thanks @nicoddemus for the PR.
  • Assert statements of the pytester plugin again benefit from assertion rewriting (#1920). Thanks @RonnyPfannschmidt for the report and @nicoddemus for the PR.
  • Specifying tests with colons like test_foo.py::test_bar for tests in subdirectories with ini configuration files now uses the correct ini file (#2148). Thanks @pelme.
  • Fail testdir.runpytest().assert_outcomes() explicitly if the pytest terminal output it relies on is missing. Thanks to @eli-b for the PR.

3.0.5 (2016-12-05)

  • Add warning when not passing option=value correctly to -o/--override-ini (#2105). Also improved the help documentation. Thanks to @mbukatov for the report and @lwm for the PR.
  • Now --confcutdir and --junit-xml are properly validated if they are directories and filenames, respectively (#2089 and #2078). Thanks to @lwm for the PR.
  • Add hint to error message hinting possible missing __init__.py (#478). Thanks @DuncanBetts.
  • More accurately describe when fixture finalization occurs in documentation (#687). Thanks @DuncanBetts.
  • Provide :ref: targets for recwarn.rst so we can use intersphinx referencing. Thanks to @dupuy for the report and @lwm for the PR.
  • In Python 2, use a simple +- ASCII string in the string representation of pytest.approx (for example "4 +- 4.0e-06") because it is brittle to handle that in different contexts and representations internally in pytest which can result in bugs such as #2111. In Python 3, the representation still uses ± (for example 4 ± 4.0e-06). Thanks @kerrick-lyft for the report and @nicoddemus for the PR.
  • Using item.Function, item.Module, etc., is now issuing deprecation warnings, prefer pytest.Function, pytest.Module, etc., instead (#2034). Thanks @nmundar for the PR.
  • Fix error message using approx with complex numbers (#2082). Thanks @adler-j for the report and @nicoddemus for the PR.
  • Fixed false-positives warnings from assertion rewrite hook for modules imported more than once by the pytest_plugins mechanism. Thanks @nicoddemus for the PR.
  • Remove an internal cache which could cause hooks from conftest.py files in sub-directories to be called in other directories incorrectly (#2016). Thanks @d-b-w for the report and @nicoddemus for the PR.
  • Remove internal code meant to support earlier Python 3 versions that produced the side effect of leaving None in sys.modules when expressions were evaluated by pytest (for example passing a condition as a string to pytest.mark.skipif) (#2103). Thanks @jaraco for the report and @nicoddemus for the PR.
  • Cope gracefully with a .pyc file with no matching .py file (#2038). Thanks @nedbat.

3.0.4 (2016-11-09)

  • Import errors when collecting test modules now display the full traceback (#1976). Thanks @cwitty for the report and @nicoddemus for the PR.
  • Fix confusing command-line help message for custom options with two or more metavar properties (#2004). Thanks @okulynyak and @davehunt for the report and @nicoddemus for the PR.
  • When loading plugins, import errors which contain non-ascii messages are now properly handled in Python 2 (#1998). Thanks @nicoddemus for the PR.
  • Fixed cyclic reference when pytest.raises is used in context-manager form (#1965). Also as a result of this fix, sys.exc_info() is left empty in both context-manager and function call usages. Previously, sys.exc_info would contain the exception caught by the context manager, even when the expected exception occurred. Thanks @MSeifert04 for the report and the PR.
  • Fixed false-positives warnings from assertion rewrite hook for modules that were rewritten but were later marked explicitly by pytest.register_assert_rewrite or implicitly as a plugin (#2005). Thanks @RonnyPfannschmidt for the report and @nicoddemus for the PR.
  • Report teardown output on test failure (#442). Thanks @matclab for the PR.
  • Fix teardown error message in generated xUnit XML. Thanks @gdyuldin for the PR.
  • Properly handle exceptions in multiprocessing tasks (#1984). Thanks @adborden for the report and @nicoddemus for the PR.
  • Clean up unittest TestCase objects after tests are complete (#1649). Thanks @d_b_w for the report and PR.

3.0.3 (2016-09-28)

  • The ids argument to parametrize again accepts unicode strings in Python 2 (#1905). Thanks @philpep for the report and @nicoddemus for the PR.
  • Assertions are now being rewritten for plugins in development mode (pip install -e) (#1934). Thanks @nicoddemus for the PR.
  • Fix pkg_resources import error in Jython projects (#1853). Thanks @raquel-ucl for the PR.
  • Got rid of AttributeError: 'Module' object has no attribute '_obj' exception in Python 3 (#1944). Thanks @axil for the PR.
  • Explain a bad scope value passed to @fixture declarations or a MetaFunc.parametrize() call. Thanks @tgoodlet for the PR.
  • This version includes pluggy-0.4.0, which correctly handles VersionConflict errors in plugins (#704). Thanks @nicoddemus for the PR.

3.0.2 (2016-09-01)

  • Improve error message when passing non-string ids to pytest.mark.parametrize (#1857). Thanks @okken for the report and @nicoddemus for the PR.
  • Add buffer attribute to the stdin stub class pytest.capture.DontReadFromInput. Thanks @joguSD for the PR.
  • Fix UnicodeEncodeError when a string comparison with unicode failed (#1864). Thanks @AiOO for the PR.
  • pytest_plugins is now handled correctly if defined as a string (as opposed to a sequence of strings) when modules are considered for assertion rewriting. Due to this bug, many more modules were being rewritten than necessary if a test suite used pytest_plugins to load internal plugins (#1888). Thanks @jaraco for the report and @nicoddemus for the PR (#1891).
  • Do not call tearDown and cleanups when running tests from unittest.TestCase subclasses with --pdb enabled. This allows proper post-mortem debugging for all applications which have significant logic in their tearDown machinery (#1890). Thanks @mbyt for the PR.
  • Fix use of deprecated getfuncargvalue method in the internal doctest plugin. Thanks @ViviCoder for the report (#1898).

3.0.1 (2016-08-23)

  • Fix regression when importorskip is used at module level (#1822). Thanks @jaraco and @The-Compiler for the report and @nicoddemus for the PR.
  • Fix parametrization scope when session fixtures are used in conjunction with normal parameters in the same call (#1832). Thanks @The-Compiler for the report, @Kingdread and @nicoddemus for the PR.
  • Fix internal error when parametrizing tests or fixtures using an empty ids argument (#1849). Thanks @OPpuolitaival for the report and @nicoddemus for the PR.
  • Fix loader error when running pytest embedded in a zipfile. Thanks @mbachry for the PR.

3.0.0 (2016-08-18)

Incompatible changes

A number of incompatible changes were made in this release, with the intent of removing features that had been deprecated for a long time or changing existing behaviors to make them less surprising and more useful.

  • Reinterpretation mode has now been removed. Only plain and rewrite modes are available; consequently the --assert=reinterp option is no longer available. This also means files imported from plugins or conftest.py will not benefit from improved assertions by default; use pytest.register_assert_rewrite() to explicitly turn on assertion rewriting for those files, as in the sketch below. Thanks @flub for the PR.
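
A minimal sketch of opting a support module into assertion rewriting; the module name mypkg.checks is hypothetical:

# content of conftest.py
import pytest

# Must run before "mypkg.checks" (a hypothetical helper module that
# contains assert statements) is imported anywhere:
pytest.register_assert_rewrite("mypkg.checks")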

  • The following deprecated commandline options were removed:

    • --genscript: no longer supported;
    • --no-assert: use --assert=plain instead;
    • --nomagic: use --assert=plain instead;
    • --report: use -r instead;

    Thanks to @RedBeardCode for the PR (#1664).

  • ImportErrors in plugins are now a fatal error instead of issuing a pytest warning (#1479). Thanks to @The-Compiler for the PR.

  • Removed support code for Python 3 versions < 3.3 (#1627).

  • Removed all py.test-X* entry points. The versioned, suffixed entry points were never documented and were a leftover from a pre-virtualenv era. These entry points also created broken entry points in wheels, so removing them also removes a source of confusion for users (#1632). Thanks @obestwalter for the PR.

  • pytest.skip() now raises an error when used to decorate a test function, as opposed to its original intent (to imperatively skip a test inside a test function). Previously this usage would cause the entire module to be skipped (#607). Thanks @omarkohl for the complete PR (#1519).
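
For illustration, a sketch contrasting the two supported usages: the declarative pytest.mark.skip mark versus an imperative pytest.skip() call inside the test body:

import os
import pytest

@pytest.mark.skip(reason="not implemented yet")  # declarative: use the mark
def test_feature():
    assert False  # never runs

def test_fork():
    if not hasattr(os, "fork"):  # imperative: call pytest.skip() inside the test
        pytest.skip("requires os.fork")
    assert callable(os.fork)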

  • Exit tests if a collection error occurs. A poll indicated most users will hit CTRL-C anyway as soon as they see collection errors, so pytest might as well make that the default behavior (#1421). A --continue-on-collection-errors option has been added to restore the previous behavior. Thanks @olegpidsadnyi and @omarkohl for the complete PR (#1628).

  • Renamed the pytest pdb module (plugin) to debugging to avoid clashes with the built-in pdb module.

  • Raise a helpful failure message when requesting a parametrized fixture at runtime, e.g. with request.getfixturevalue. Previously these parameters were simply never defined, so a fixture decorated like @pytest.fixture(params=[0, 1, 2]) only ran once (#460). Thanks to @nikratio for the bug report, @RedBeardCode and @tomviner for the PR.
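
A short sketch of the supported, static way to request a parametrized fixture, which makes pytest generate one test per parameter:

import pytest

@pytest.fixture(params=[0, 1, 2])
def number(request):
    return request.param

def test_number(number):  # collected three times, once per parameter
    assert number in (0, 1, 2)

# Calling request.getfixturevalue("number") inside a test now fails with
# a helpful message instead of silently running with a single parameter.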

  • _pytest.monkeypatch.monkeypatch class has been renamed to _pytest.monkeypatch.MonkeyPatch so it doesn’t conflict with the monkeypatch fixture.

  • --exitfirst / -x can now be overridden by a following --maxfail=N and is just a synonym for --maxfail=1.

New Features

  • Support the nose-style __test__ attribute on methods of classes, including unittest-style classes. If set to False, the test will not be collected.
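
A minimal sketch of the opt-out on a method; the class and method names are hypothetical:

class TestThings:
    def test_helper(self):
        raise NotImplementedError
    # nose-style opt-out: this method is skipped during collection
    test_helper.__test__ = False

    def test_real(self):
        assert True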

  • New doctest_namespace fixture for injecting names into the namespace in which doctests run. Thanks @milliams for the complete PR (#1428).
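
A minimal sketch of the fixture's intended use: inject a name from conftest.py so all doctests can use it without an explicit import.

# content of conftest.py
import math
import pytest

@pytest.fixture(autouse=True)
def add_math(doctest_namespace):
    # every doctest can now refer to "math" directly
    doctest_namespace["math"] = math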

  • New --doctest-report option available to change the output format of diffs when running (failing) doctests (implements #1749). Thanks @hartym for the PR.

  • New name argument to pytest.fixture decorator which allows a custom name for a fixture (to solve the funcarg-shadowing-fixture problem). Thanks @novas0x2a for the complete PR (#1444).
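
A sketch of the name argument; the fixture function keeps a non-clashing name while tests request it as account (both names are hypothetical):

import pytest

@pytest.fixture(name="account")
def account_fixture():
    return {"balance": 0}

def test_new_account(account):  # requested via the custom name
    assert account["balance"] == 0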

  • New approx() function for easily comparing floating-point numbers in tests. Thanks @kalekundert for the complete PR (#1441).
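
A quick sketch of approx() in action; by default comparisons use a relative tolerance of 1e-6:

import pytest

def test_floats():
    assert 0.1 + 0.2 == pytest.approx(0.3)     # default tolerance
    assert 2.2 == pytest.approx(2.3, rel=0.1)  # custom relative tolerance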

  • Ability to add global properties in the final xunit output file by accessing the internal junitxml plugin (experimental). Thanks @tareqalayan for the complete PR (#1454).

  • New ExceptionInfo.match() method to match a regular expression on the string representation of an exception (#372). Thanks @omarkohl for the complete PR (#1502).
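
A short sketch; match() applies re.search() to the string representation of the caught exception:

import pytest

def test_message():
    with pytest.raises(ValueError) as excinfo:
        int("not a number")
    excinfo.match(r"invalid literal")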

  • __tracebackhide__ can now also be set to a callable which then can decide whether to filter the traceback based on the ExceptionInfo object passed to it. Thanks @The-Compiler for the complete PR (#1526).
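
A sketch of the callable form, which hides the helper frame only for the exception types it expects; the helper itself is hypothetical:

import operator

def check_positive(value):
    # Hide this frame from tracebacks only when the raised exception is a
    # ValueError; the callable receives the ExceptionInfo object.
    __tracebackhide__ = operator.methodcaller("errisinstance", ValueError)
    if value <= 0:
        raise ValueError("value must be positive, got %r" % value)

def test_values():
    check_positive(-1)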

  • New pytest_make_parametrize_id(config, val) hook which can be used by plugins to provide friendly strings for custom types. Thanks @palaviv for the PR.

  • capsys and capfd now have a disabled() context-manager method, which can be used to temporarily disable capture within a test. Thanks @nicoddemus for the PR.
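
A minimal sketch of the new context manager:

def test_output(capsys):
    with capsys.disabled():
        print("goes straight to the terminal, bypassing capture")
    print("captured as usual")
    out, err = capsys.readouterr()
    assert "captured" in out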

  • New CLI flag --fixtures-per-test: shows which fixtures are being used for each selected test item. Shows fixture docstrings by default, and can also show where fixtures are defined when combined with -v. Thanks @hackebrot for the PR.

  • Introduce pytest command as recommended entry point. Note that py.test still works and is not scheduled for removal. Closes proposal #1629. Thanks @obestwalter and @davehunt for the complete PR (#1633).

  • New CLI flags:

    • --setup-plan: performs normal collection and reports the potential setup and teardown, without executing any fixtures or tests;
    • --setup-only: performs normal collection, executes setup and teardown of fixtures and reports them;
    • --setup-show: performs normal test execution and additionally shows setup and teardown of fixtures;
    • --keep-duplicates: py.test now ignores duplicated paths given on the command line. To retain the previous behavior, where the same test could be run multiple times by specifying it on the command line multiple times, pass the --keep-duplicates argument (#1609);

    Thanks @d6e, @kvas-it, @sallner, @ioggstream and @omarkohl for the PRs.

  • New CLI flag --override-ini/-o: overrides values from the ini file. For example: "-o xfail_strict=True". Thanks @blueyed and @fengxx for the PR.

  • New hooks:

    • pytest_fixture_setup(fixturedef, request): executes fixture setup;
    • pytest_fixture_post_finalizer(fixturedef): called after the fixture’s finalizer and has access to the fixture’s result cache.

    Thanks @d6e, @sallner.

  • Issue warnings for asserts whose test is a tuple literal. Such asserts will never fail because non-empty tuples are always truthy and are usually a mistake (see #1562). Thanks @kvas-it for the PR.
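
For illustration, the kind of mistake the warning targets; the tuple form can never fail because a non-empty tuple is truthy:

def test_answer():
    value = 1
    # BUG: asserts a two-element tuple, which is always True -- pytest now warns
    assert (value == 2, "expected 2")
    # what was meant: an assert with a message
    assert value == 1, "expected 1"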

  • Allow passing a custom debugger class (e.g. --pdbcls=IPython.core.debugger:Pdb). Thanks to @anntzer for the PR.

Changes

  • Plugins now benefit from assertion rewriting. Thanks @sober7, @nicoddemus and @flub for the PR.
  • Change report.outcome for xpassed tests to "passed" in non-strict mode and "failed" in strict mode. Thanks to @hackebrot for the PR (#1795) and @gprasad84 for the report (#1546).
  • Tests marked with xfail(strict=False) (the default) now appear in JUnitXML reports as passing tests instead of skipped. Thanks to @hackebrot for the PR (#1795).
  • Highlight path of the file location in the error report to make it easier to copy/paste. Thanks @suzaku for the PR (#1778).
  • Fixtures marked with @pytest.fixture can now use yield statements exactly like those marked with the @pytest.yield_fixture decorator. This change renders @pytest.yield_fixture deprecated and makes @pytest.fixture with yield statements the preferred way to write teardown code (#1461); see the sketch after this list. Thanks @csaftoiu for bringing this to attention and @nicoddemus for the PR.
  • Explicitly passed parametrize ids do not get escaped to ASCII (#1351). Thanks @ceridwen for the PR.
  • Fixtures are now sorted in the error message displayed when an unknown fixture is declared in a test function. Thanks @nicoddemus for the PR.
  • pytest_terminal_summary hook now receives the exitstatus of the test session as argument. Thanks @blueyed for the PR (#1809).
  • Parametrize ids can accept None as specific test id, in which case the automatically generated id for that argument will be used. Thanks @palaviv for the complete PR (#1468).
  • The parameter to xunit-style setup/teardown methods (setup_method, setup_module, etc.) is now optional and may be omitted. Thanks @okken for bringing this to attention and @nicoddemus for the PR.
  • Improved automatic id generation selection in case of duplicate ids in parametrize. Thanks @palaviv for the complete PR (#1474).
  • The pytest warnings summary is now shown by default. Added a new flag --disable-pytest-warnings to explicitly disable the warnings summary (#1668).
  • Make ImportError during collection more explicit by reminding the user to check the name of the test module/package(s) (#1426). Thanks @omarkohl for the complete PR (#1520).
  • Add build/ and dist/ to the default --norecursedirs list. Thanks @mikofski for the report and @tomviner for the PR (#1544).
  • pytest.raises in the context manager form accepts a custom message to raise when no exception occurred. Thanks @palaviv for the complete PR (#1616).
  • conftest.py files now benefit from assertion rewriting; previously it was only available for test modules. Thanks @flub, @sober7 and @nicoddemus for the PR (#1619).
  • Text documents without any doctests no longer appear as “skipped”. Thanks @graingert for reporting and providing a full PR (#1580).
  • Ensure that a module within a namespace package can be found when it is specified on the command line together with the --pyargs option. Thanks to @taschini for the PR (#1597).
  • Always include full assertion explanation during assertion rewriting. The previous behaviour was hiding sub-expressions that happened to be False, assuming this was redundant information. Thanks @bagerard for reporting (#1503). Thanks to @davehunt and @tomviner for the PR.
  • OptionGroup.addoption() now checks whether option names were already added, to make it easier to track down issues like #1618. Previously, you only got exceptions later from the argparse library, giving no clue about the actual reason for double-added options.
  • yield-based tests are considered deprecated and will be removed in pytest-4.0. Thanks @nicoddemus for the PR.
  • [pytest] sections in setup.cfg files should now be named [tool:pytest] to avoid conflicts with other distutils commands (see #567). [pytest] sections in pytest.ini or tox.ini files are supported and unchanged. Thanks @nicoddemus for the PR.
  • Using pytest_funcarg__ prefix to declare fixtures is considered deprecated and will be removed in pytest-4.0 (#1684). Thanks @nicoddemus for the PR.
  • Passing a command-line string to pytest.main() is considered deprecated and scheduled for removal in pytest-4.0. It is recommended to pass a list of arguments instead (#1723).
  • Rename getfuncargvalue to getfixturevalue. getfuncargvalue is still present but is now considered deprecated. Thanks to @RedBeardCode and @tomviner for the PR (#1626).
  • optparse type usage now triggers DeprecationWarnings (#1740).
  • optparse backward compatibility supports float/complex types (#457).
  • Refined logic for determining the rootdir, considering only valid paths which fixes a number of issues: #1594, #1435 and #1471. Updated the documentation according to current behavior. Thanks to @blueyed, @davehunt and @matthiasha for the PR.
  • Better message when a parametrized variable is not used (see #1539). Thanks to @tramwaj29 for the PR.
  • Updated docstrings with a more uniform style.
  • Add stderr write for pytest.exit(msg) during startup. Previously the message was never shown. Thanks @BeyondEvil for reporting #1210. Thanks to @JonathonSonesen and @tomviner for the PR.
  • No longer display the incorrect test deselection reason (#1372). Thanks @ronnypfannschmidt for the PR.
  • The --resultlog command line option has been deprecated: it is little used and there are more modern and better alternatives (see #830). Thanks @nicoddemus for the PR.
  • Improve error message with fixture lookup errors: add an ‘E’ to the first line and ‘>’ to the rest. Fixes #717. Thanks @blueyed for reporting and a PR, @eolo999 for the initial PR and @tomviner for his guidance during EuroPython2016 sprint.
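
As referenced in the @pytest.fixture entry above, a minimal sketch of teardown code after a yield; the session dict is hypothetical:

import pytest

@pytest.fixture
def session():
    handle = {"open": True}   # setup
    yield handle              # the value provided to the test
    handle["open"] = False    # teardown, runs after the test finishes

def test_session_is_open(session):
    assert session["open"]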

Bug Fixes

  • Parametrize now correctly handles duplicated test ids.
  • Fix internal error issue when the method argument is missing for teardown_method() (#1605).
  • Fix exception visualization in case the current working directory (CWD) gets deleted during testing (#1235). Thanks @bukzor for reporting. PR by @marscher.
  • Improve test output for logical expression with brackets (#925). Thanks @DRMacIver for reporting and @RedBeardCode for the PR.
  • Create correct diff for strings ending with newlines (#1553). Thanks @Vogtinator for reporting and @RedBeardCode and @tomviner for the PR.
  • ConftestImportFailure now shows the traceback making it easier to identify bugs in conftest.py files (#1516). Thanks @txomon for the PR.
  • Fixed collection of classes with custom __new__ method. Fixes #1579. Thanks to @Stranger6667 for the PR.
  • Fixed scope overriding inside metafunc.parametrize (#634). Thanks to @Stranger6667 for the PR.
  • Fixed the total tests tally in junit xml output (#1798). Thanks to @cryporchild for the PR.
  • Fixed off-by-one error with lines from request.node.warn. Thanks to @blueyed for the PR.

2.9.2 (2016-05-31)

Bug Fixes

  • Fix #510: skip tests where one parametrize dimension was empty. Thanks to Alex Stapleton for the report and @RonnyPfannschmidt for the PR.
  • Fix xfail not working with the condition keyword argument. Thanks @astraw38 for reporting the issue (#1496) and @tomviner for the PR (#1524).
  • Fix win32 path issue when passing a custom config file with an absolute path via pytest.main("-c your_absolute_path").
  • Fix maximum recursion depth detection when raised error class is not aware of unicode/encoded bytes. Thanks @prusse-martin for the PR (#1506).
  • Fix pytest.mark.skip mark when used in strict mode. Thanks @pquentin for the PR and @RonnyPfannschmidt for showing how to fix the bug.
  • Minor improvements and fixes to the documentation. Thanks @omarkohl for the PR.
  • Fix --fixtures to show all fixture definitions as opposed to just one per fixture name. Thanks to @hackebrot for the PR.

2.9.1 (2016-03-17)

Bug Fixes

  • Improve error message when a plugin fails to load. Thanks @nicoddemus for the PR.
  • Fix (#1178): pytest.fail with non-ascii characters raises an internal pytest error. Thanks @nicoddemus for the PR.
  • Fix (#469): junit parses report.nodeid incorrectly, when params IDs contain ::. Thanks @tomviner for the PR (#1431).
  • Fix (#578): SyntaxErrors containing non-ascii lines at the point of failure generated an internal py.test error. Thanks @asottile for the report and @nicoddemus for the PR.
  • Fix (#1437): When passing a bytestring regex pattern to parametrize, attempt to decode it as UTF-8, ignoring errors.
  • Fix (#649): parametrized test nodes cannot be specified to run on the command line.
  • Fix (#138): better reporting for Python 3.3+ chained exceptions.

2.9.0 (2016-02-29)

New Features

  • New pytest.mark.skip mark, which unconditionally skips marked tests. Thanks @MichaelAquilina for the complete PR (#1040).
  • --doctest-glob may now be passed multiple times on the command line. Thanks