[PATCH 0/5] patman: test_util: Prettify test report outputs for python tools
Alper Nebi Yasak
alpernebiyasak at gmail.com
Sat Apr 2 19:06:03 CEST 2022
These are a few changes to the test utilities shared by binman, patman,
buildman and dtoc to make their printed test results cleaner. Simon has
sent a patch making test_fdt use these utils [1]; this series applies on
top of that and affects its output as well.
[1] dtoc: Update fdt tests to use test_util
https://patchwork.ozlabs.org/project/uboot/patch/20220319000150.1722595-1-sjg@chromium.org/
As a comparison, here's how the report currently looks when I add three
bad tests to patman (one failure, one error, one skip). Lines are
wrapped with backslashes at the ends for convenience; most of it is
otherwise jammed onto one line:
> Printing text block to stderr:
> **************
> * test_error *
> **************
> Printing text block to stdout:
> ++++++++++++++++
> + test_failure +
> ++++++++++++++++
> <unittest.result.TestResult run=48 errors=1 failures=1>
> patman.func_test.TestFunctional.test_error \
> testtools.testresult.real._StringException: \
> RuntimeError: Uncaught error during test
>
> testtools.testresult.real._StringException: \
> AssertionError: False is not true
>
> [(<subunit.RemotedTestCase \
> description='patman.func_test.TestFunctional.test_failure'>, \
> 'testtools.testresult.real._StringException: \
> AssertionError: False is not true\n\n')]
> 1 patman test SKIPPED:
> patman.func_test.TestFunctional.test_skipped \
> (subunit.RemotedTestCase): Example skip reason
>
> patman tests FAILED
This quickly gets much worse as more test failures pile up.
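For context, a report in that shape is roughly what you get when a
script prints the TestResult object and its errors/failures lists
directly. A purely illustrative sketch of that pattern (not the actual
tools/patman/test_util.py code):

    import unittest

    # Hypothetical example: run a suite with a bare TestResult and dump it.
    suite = unittest.defaultTestLoader.loadTestsFromName('patman.func_test')
    result = unittest.TestResult()
    suite.run(result)

    # Printing the result object gives "<unittest.result.TestResult run=48 ...>"
    print(result)
    for test, err in result.errors:   # one long unformatted string per error
        print(test.id(), err)
    print(result.failures)            # dumps the raw list of (test, traceback) tuples
    if result.skipped:
        print('%d patman test SKIPPED:' % len(result.skipped))
        for test, reason in result.skipped:
            print('%s: %s' % (test.id(), reason))
    print('patman tests FAILED' if result.errors or result.failures
          else 'patman tests PASSED')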
Here's what I get after this series:
> ======================== Running patman tests ========================
> ======================================================================
> ERROR: patman.func_test.TestFunctional.test_error (subunit.RemotedTestCase)
> patman.func_test.TestFunctional.test_error
> ----------------------------------------------------------------------
> testtools.testresult.real._StringException: stderr: {{{
> Printing text block to stderr:
> **************
> * test_error *
> **************
> }}}
>
> RuntimeError: Uncaught error during test
>
> ======================================================================
> FAIL: patman.func_test.TestFunctional.test_failure (subunit.RemotedTestCase)
> patman.func_test.TestFunctional.test_failure
> ----------------------------------------------------------------------
> testtools.testresult.real._StringException: stdout: {{{
> Printing text block to stdout:
> ++++++++++++++++
> + test_failure +
> ++++++++++++++++
> }}}
>
> AssertionError: False is not true
>
> ======================================================================
> SKIP: patman.func_test.TestFunctional.test_skipped (subunit.RemotedTestCase)
> patman.func_test.TestFunctional.test_skipped
> ----------------------------------------------------------------------
> Example skip reason
>
> ----------------------------------------------------------------------
> Ran 48 tests in 4.469s
>
> FAILED (failures=1, errors=1, skipped=1)
There are still some odd parts in this, like the stdout/stderr being
quoted with braces and the test descriptions being identical to the
test names, but those are quirks of the libraries used to run the tests
concurrently. The report looks slightly better when the tests are run
non-concurrently.
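To give a rough idea of the direction the series takes, here's a
minimal sketch of running a suite through unittest's text runner with a
customized result class. The names below are made up for illustration,
the real code in tools/patman/test_util.py is more involved, and
unittest's built-in buffer= option merely stands in for the
stdout/stderr handling described in the last patch:

    import sys
    import unittest

    class ExampleTextResult(unittest.TextTestResult):
        """Hypothetical tweak: describe tests by their full id."""
        def getDescription(self, test):
            return test.shortDescription() or test.id()

    def run_suite(suite):
        """Run 'suite' and print a summary like the one shown above."""
        runner = unittest.TextTestRunner(
            stream=sys.stdout,
            verbosity=2,                    # one line per test
            buffer=True,                    # show captured output only for bad tests
            resultclass=ExampleTextResult,
        )
        return runner.run(suite)

    if __name__ == '__main__':
        result = run_suite(
            unittest.defaultTestLoader.loadTestsFromName('patman.func_test'))
        sys.exit(0 if result.wasSuccessful() else 1)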
Alper Nebi Yasak (5):
patman: test_util: Fix printing results for failed tests
patman: test_util: Handle nonexistent tests while loading tests
patman: test_util: Use unittest text runner to print test results
patman: test_util: Customize unittest test results for more info
patman: test_util: Print test stdout/stderr within test summaries
 tools/binman/main.py                      |   8 +-
 tools/buildman/main.py                    |   8 +-
 tools/concurrencytest/concurrencytest.py  |  83 ++++++++++++-
 tools/dtoc/main.py                        |   9 +-
 tools/dtoc/test_fdt.py                    |   8 +-
 tools/patman/main.py                      |   8 +-
 tools/patman/test_util.py                 | 150 +++++++++++++++--------
 7 files changed, 197 insertions(+), 77 deletions(-)
--
2.35.1