[PATCH v2 14/45] test: Support tests which can only be run manually

Heinrich Schuchardt xypron.glpk at gmx.de
Thu Oct 13 18:14:06 CEST 2022


On 10/13/22 14:28, Simon Glass wrote:
> At present we normally write tests either in Python or in C. But most
> Python tests end up doing a lot of checks which would be better done in C.
> Checks done in C are orders of magnitude faster and it is possible to get
> full access to U-Boot's internal workings, rather than just relying on
> the command line.
>
> The model is to have a Python test set up some things and then use C code
> (in a unit test) to check that they were done correctly. But we don't want
> those checks to happen as part of normal test running, since each C unit
> test is dependent on its associated Python test, so cannot run without
> it.
>
> To achieve this, add a new UT_TESTF_MANUAL flag to use with the C 'check'
> tests, so that they can be skipped by default when the 'ut' command is
> used. Require that tests have a name ending with '_norun', so that pytest

Why do we want to use a naming convention?
What will we do when we want more flags, e.g. "slow"?

Adding more fields to struct unit_test would be more future-proof.

Best regards

Heinrich


> knows to skip them.
>
> Signed-off-by: Simon Glass <sjg at chromium.org>
> ---
>
> Changes in v2:
> - Rebase to master
> - Expand docs a little to clarify that manual tests are otherwise normal
>
>   arch/sandbox/cpu/spl.c        |  2 +-
>   doc/develop/tests_writing.rst | 27 +++++++++++++++++++++++++++
>   include/test/test.h           |  8 ++++++++
>   include/test/ut.h             |  4 +++-
>   test/cmd_ut.c                 | 16 +++++++++++++---
>   test/dm/test-dm.c             |  2 +-
>   test/py/conftest.py           |  8 +++++++-
>   test/test-main.c              | 27 ++++++++++++++++++++++++++-
>   8 files changed, 86 insertions(+), 8 deletions(-)
>
> diff --git a/arch/sandbox/cpu/spl.c b/arch/sandbox/cpu/spl.c
> index 1d49a9bd102..9c59cc26163 100644
> --- a/arch/sandbox/cpu/spl.c
> +++ b/arch/sandbox/cpu/spl.c
> @@ -89,7 +89,7 @@ void spl_board_init(void)
>   		int ret;
>
>   		ret = ut_run_list("spl", NULL, tests, count,
> -				  state->select_unittests, 1);
> +				  state->select_unittests, 1, false);
>   		/* continue execution into U-Boot */
>   	}
>   }
> diff --git a/doc/develop/tests_writing.rst b/doc/develop/tests_writing.rst
> index 1ddf7a353a7..bb1145da268 100644
> --- a/doc/develop/tests_writing.rst
> +++ b/doc/develop/tests_writing.rst
> @@ -74,6 +74,33 @@ NOT rely on running with sandbox, but instead should function correctly on any
>   board supported by U-Boot.
>
>
> +Mixing Python and C
> +-------------------
> +
> +The best of both worlds is sometimes to have a Python test set things up and
> +perform some operations, with a 'checker' C unit test doing the checks
> +afterwards. This can be achieved with these steps:
> +
> +- Add the `UT_TESTF_MANUAL` flag to the checker test so that the `ut` command
> +  does not run it by default
> +- Add a `_norun` suffix to the name so that pytest knows to skip it too
> +
> +In your Python test use the `-f` flag to the `ut` command to force the checker
> +test to run, e.g.::
> +
> +   # Do the Python part
> +   host load ...
> +   bootm ...
> +
> +   # Run the checker to make sure that everything worked
> +   ut -f bootstd vbe_test_fixup_norun
> +
> +Note that apart from the `UT_TESTF_MANUAL` flag, the code in a 'manual' C test
> +is just like any other C test. It still uses ut_assert...() and other such
> +constructs, in this case to check that the expected things happened in the
> +Python test.
> +
> +
>   How slow are Python tests?
>   --------------------------
>
> diff --git a/include/test/test.h b/include/test/test.h
> index c1853ce471b..4ad74614afc 100644
> --- a/include/test/test.h
> +++ b/include/test/test.h
> @@ -28,6 +28,7 @@
>    * @other_fdt_size: Size of the other FDT (UT_TESTF_OTHER_FDT)
>    * @of_other: Live tree for the other FDT
>    * @runs_per_test: Number of times to run each test (typically 1)
> + * @force_run: true to run tests marked with the UT_TESTF_MANUAL flag
>    * @expect_str: Temporary string used to hold expected string value
>    * @actual_str: Temporary string used to hold actual string value
>    */
> @@ -48,6 +49,7 @@ struct unit_test_state {
>   	int other_fdt_size;
>   	struct device_node *of_other;
>   	int runs_per_test;
> +	bool force_run;
>   	char expect_str[512];
>   	char actual_str[512];
>   };
> @@ -63,6 +65,12 @@ enum {
>   	/* do extra driver model init and uninit */
>   	UT_TESTF_DM		= BIT(6),
>   	UT_TESTF_OTHER_FDT	= BIT(7),	/* read in other device tree */
> +	/*
> +	 * Only run if explicitly requested with 'ut -f <suite> <test>'. The
> +	 * test name must end in "_norun" so that pytest detects this also,
> +	 * since it cannot access the flags.
> +	 */
> +	UT_TESTF_MANUAL		= BIT(8),
>   };
>
>   /**
> diff --git a/include/test/ut.h b/include/test/ut.h
> index f7217aa8ac5..e0e618b58c2 100644
> --- a/include/test/ut.h
> +++ b/include/test/ut.h
> @@ -409,9 +409,11 @@ void test_set_state(struct unit_test_state *uts);
>    * @select_name: Name of a single test to run (from the list provided). If NULL
>    *	then all tests are run
>    * @runs_per_test: Number of times to run each test (typically 1)
> + * @force_run: Run tests that are marked as manual-only (UT_TESTF_MANUAL)
>    * Return: 0 if all tests passed, -1 if any failed
>    */
>   int ut_run_list(const char *name, const char *prefix, struct unit_test *tests,
> -		int count, const char *select_name, int runs_per_test);
> +		int count, const char *select_name, int runs_per_test,
> +		bool force_run);
>
>   #endif
> diff --git a/test/cmd_ut.c b/test/cmd_ut.c
> index 11c219b48ac..3ea692fd31f 100644
> --- a/test/cmd_ut.c
> +++ b/test/cmd_ut.c
> @@ -19,16 +19,26 @@ int cmd_ut_category(const char *name, const char *prefix,
>   		    int argc, char *const argv[])
>   {
>   	int runs_per_text = 1;
> +	bool force_run = false;
>   	int ret;
>
> -	if (argc > 1 && !strncmp("-r", argv[1], 2)) {
> -		runs_per_text = dectoul(argv[1] + 2, NULL);
> +	while (argc > 1 && *argv[1] == '-') {
> +		const char *str = argv[1];
> +
> +		switch (str[1]) {
> +		case 'r':
> +			runs_per_text = dectoul(str + 2, NULL);
> +			break;
> +		case 'f':
> +			force_run = true;
> +			break;
> +		}
>   		argv++;
>   		argc--;
>   	}
>
>   	ret = ut_run_list(name, prefix, tests, n_ents,
> -			  argc > 1 ? argv[1] : NULL, runs_per_text);
> +			  argc > 1 ? argv[1] : NULL, runs_per_text, force_run);
>
>   	return ret ? CMD_RET_FAILURE : 0;
>   }
> diff --git a/test/dm/test-dm.c b/test/dm/test-dm.c
> index eb3581333b9..66cc2bc6cce 100644
> --- a/test/dm/test-dm.c
> +++ b/test/dm/test-dm.c
> @@ -36,7 +36,7 @@ static int dm_test_run(const char *test_name, int runs_per_text)
>   	int ret;
>
>   	ret = ut_run_list("driver model", "dm_test_", tests, n_ents, test_name,
> -			  runs_per_text);
> +			  runs_per_text, false);
>
>   	return ret ? CMD_RET_FAILURE : 0;
>   }
> diff --git a/test/py/conftest.py b/test/py/conftest.py
> index 304e93164aa..fc9dd3a83f8 100644
> --- a/test/py/conftest.py
> +++ b/test/py/conftest.py
> @@ -289,7 +289,13 @@ def generate_ut_subtest(metafunc, fixture_name, sym_path):
>           m = re_ut_test_list.search(l)
>           if not m:
>               continue
> -        vals.append(m.group(1) + ' ' + m.group(2))
> +        suite, name = m.groups()
> +
> +        # Tests marked with _norun should only be run manually using 'ut -f'
> +        if name.endswith('_norun'):
> +            continue
> +
> +        vals.append(f'{suite} {name}')
>
>       ids = ['ut_' + s.replace(' ', '_') for s in vals]
>       metafunc.parametrize(fixture_name, vals, ids=ids)
> diff --git a/test/test-main.c b/test/test-main.c
> index 312fa1a6a19..37cb1dd2379 100644
> --- a/test/test-main.c
> +++ b/test/test-main.c
> @@ -517,6 +517,30 @@ static int ut_run_tests(struct unit_test_state *uts, const char *prefix,
>
>   		if (!test_matches(prefix, test_name, select_name))
>   			continue;
> +
> +		if (test->flags & UT_TESTF_MANUAL) {
> +			int len;
> +
> +			/*
> +			 * manual tests must have a name ending "_norun" as this
> +			 * is how pytest knows to skip them. See
> +			 * generate_ut_subtest() for this check.
> +			 */
> +			len = strlen(test_name);
> +			if (len < 6 || strcmp(test_name + len - 6, "_norun")) {
> +				printf("Test %s is manual so must have a name ending in _norun\n",
> +				       test_name);
> +				uts->fail_count++;
> +				return -EBADF;
> +			}
> +			if (!uts->force_run) {
> +				if (select_name) {
> +					printf("Test %s skipped as it is manual (use -f to run it)\n",
> +					       test_name);
> +				}
> +				continue;
> +			}
> +		}
>   		old_fail_count = uts->fail_count;
>   		for (i = 0; i < uts->runs_per_test; i++)
>   			ret = ut_run_test_live_flat(uts, test, select_name);
> @@ -538,7 +562,7 @@ static int ut_run_tests(struct unit_test_state *uts, const char *prefix,
>
>   int ut_run_list(const char *category, const char *prefix,
>   		struct unit_test *tests, int count, const char *select_name,
> -		int runs_per_test)
> +		int runs_per_test, bool force_run)
>   {
>   	struct unit_test_state uts = { .fail_count = 0 };
>   	bool has_dm_tests = false;
> @@ -572,6 +596,7 @@ int ut_run_list(const char *category, const char *prefix,
>   		}
>   		memcpy(uts.fdt_copy, gd->fdt_blob, uts.fdt_size);
>   	}
> +	uts.force_run = force_run;
>   	ret = ut_run_tests(&uts, prefix, tests, count, select_name);
>
>   	/* Best efforts only...ignore errors */


