[PATCH v4 42/42] doc: Explain briefly how to write new tests

Heinrich Schuchardt xypron.glpk at gmx.de
Thu Mar 4 19:14:25 CET 2021


On 04.03.21 14:51, Simon Glass wrote:
> Add a section on writing tests, covering when to use Python and C, where
> to put the tests, etc. Add a link to the existing Python test
> documentation.
>
> Signed-off-by: Simon Glass <sjg at chromium.org>
> ---
>
> Changes in v4:
> - Add correct SPDX header
>
> Changes in v2:
> - Add new patches to cover running an SPL test
>
>  doc/develop/index.rst         |   1 +
>  doc/develop/py_testing.rst    |   3 +-
>  doc/develop/testing.rst       |   2 +
>  doc/develop/tests_sandbox.rst |   7 +
>  doc/develop/tests_writing.rst | 339 ++++++++++++++++++++++++++++++++++
>  5 files changed, 351 insertions(+), 1 deletion(-)
>  create mode 100644 doc/develop/tests_writing.rst
>
> diff --git a/doc/develop/index.rst b/doc/develop/index.rst
> index 9208668a2d4..8f0a598b5f2 100644
> --- a/doc/develop/index.rst
> +++ b/doc/develop/index.rst
> @@ -32,6 +32,7 @@ Testing
>
>     testing
>     py_testing
> +   tests_writing
>     tests_sandbox
>
>  Refactoring
> diff --git a/doc/develop/py_testing.rst b/doc/develop/py_testing.rst
> index 7f01858cfda..c4cecc0a01b 100644
> --- a/doc/develop/py_testing.rst
> +++ b/doc/develop/py_testing.rst
> @@ -13,7 +13,8 @@ results. Advantages of this approach are:
>    U-Boot; there can be no disconnect.
>  - There is no need to write or embed test-related code into U-Boot itself.
>    It is asserted that writing test-related code in Python is simpler and more
> -  flexible than writing it all in C.
> +  flexible than writing it all in C. But see :doc:`tests_writing` for caveats
> +  and more discussion and analysis.
>  - It is reasonably simple to interact with U-Boot in this way.
>
>  Requirements
> diff --git a/doc/develop/testing.rst b/doc/develop/testing.rst
> index b181c2e2e41..ced13ac8bb4 100644
> --- a/doc/develop/testing.rst
> +++ b/doc/develop/testing.rst
> @@ -117,6 +117,8 @@ or is covered sparingly. So here are some suggestions:
>    is much easier to add onto a test - writing a new large test can seem
>    daunting to most contributors.
>
> +See :doc:`tests_writing` for how to write tests.
> +
>
>  Future work
>  -----------
> diff --git a/doc/develop/tests_sandbox.rst b/doc/develop/tests_sandbox.rst
> index 8c2eac4c5f4..7e5cc84e43c 100644
> --- a/doc/develop/tests_sandbox.rst
> +++ b/doc/develop/tests_sandbox.rst
> @@ -199,3 +199,10 @@ linker_list::
>     000000000001f240 D _u_boot_list_2_dm_test_2_dm_test_of_plat_parent
>     000000000001f260 D _u_boot_list_2_dm_test_2_dm_test_of_plat_phandle
>     000000000001f280 D _u_boot_list_2_dm_test_2_dm_test_of_plat_props
> +
> +
> +Writing tests
> +-------------
> +
> +See :doc:`tests_writing` for how to write new tests.
> +
> diff --git a/doc/develop/tests_writing.rst b/doc/develop/tests_writing.rst
> new file mode 100644
> index 00000000000..d926c00f6e1
> --- /dev/null
> +++ b/doc/develop/tests_writing.rst
> @@ -0,0 +1,339 @@
> +.. SPDX-License-Identifier: GPL-2.0+
> +.. Copyright 2021 Google LLC
> +.. sectionauthor:: Simon Glass <sjg at chromium.org>
> +
> +Writing Tests
> +=============
> +
> +This describes how to write tests in U-Boot and outlines the possible options.
> +
> +Test types
> +----------
> +
> +There are two basic types of test in U-Boot:
> +
> +  - Python tests, in test/py/tests
> +  - C tests, in test/ and its subdirectories

You missed lib/efi_selftest/ for the UEFI tests.

> +
> +Python tests talk to U-Boot via the command line. They support both sandbox and
> +real hardware. They typically do not require building test code into U-Boot
> +itself. They are fairly slow to run, due to the command-line interface and there
> +being two separate processes. Python tests are fairly easy to write. They can
> +be a little tricky to debug sometimes due to the voluminous output of pytest.
> +
> +C tests are written directly in U-Boot. While they can be used on boards, they
> +are more commonly used with sandbox, as they obviously add to U-Boot code size.
> +C tests are easy to write so long as the required facilities exist. Where they
> +do not, it can involve refactoring or adding new features to sandbox. They are
> +fast to run and easy to debug.
> +
> +Regardless of which test type is used, all tests are collected and run by the
> +pytest framework, so there is typically no need to run them separately. This
> +means that C tests can be used when it makes sense, and Python tests when it
> +doesn't.
> +
> +
> +This table shows how to decide whether to write a C or Python test:
> +
> +=====================  ===========================  =============================
> +Attribute              C test                       Python test
> +=====================  ===========================  =============================
> +Fast to run?           Yes                          No (two separate processes)
> +Easy to write?         Yes, if required test        Yes
> +                       features exist in sandbox

"in sandbox"? Better "on the target system" which might be the sandbox.

You cannot test architecture-specific features on the sandbox, e.g.

* hardware drivers
* communication with firmware like TF-A and SBI
* FPGA code
* exception handling

These can be tested on real hardware using C tests.

We still have not succeeded in running file system tests in GitLab.

For both C and Python there are trivial cases and hard ones.

> +Needs code in U-Boot?  Yes                          No, provided the test can be
> +                                                    executed and the result
> +                                                    determined using the command
> +                                                    line
> +Easy to debug?         Yes                          Yes, but the amount of output can
> +                                                    sometimes require a bit of digging

Debugging the Python tests is non-trivial. Just try to find out why some
FAT tests fail. Though Python calls sync in the test preparation, the
Python test does not see the full generated file.

The problem is not the output but the non-output.

> +Can use gdb?           Yes, directly                Yes, with --gdbserver
> +Can run on boards?     Some can, but only if        Yes
> +                       compiled in and not
> +                       dependent on sandbox

Many Python tests cannot be run on boards.

> +=====================  ===========================  =============================
> +
> +
> +Python or C
> +-----------
> +
> +Typically in U-Boot we encourage C tests using sandbox for all features. This
> +allows fast testing, easy development and allows contributors to make changes
> +without needing dozens of boards to test with.
> +
> +When a test requires setup or interaction with the running host (such as to
> +generate images and then running U-Boot to check that they can be loaded), or
> +cannot be run on sandbox, Python tests should be used. These should typically

Why should I use Python if the sandbox cannot run it? I could still
write a C test invoked by the "ut" command.

> +NOT rely on running with sandbox, but instead should function correctly on any
> +board supported by U-Boot.
> +
> +
> +How slow are Python tests?
> +--------------------------
> +
> +Under the hood, Python tests work by starting a sandbox test and connecting to

Above you said that they should run on any hardware. So why do you refer
to the sandbox here?

The tests start by invoking U-Boot, be it on physical hardware or on the
sandbox.

> +it via a pipe. Each interaction with the U-Boot process requires at least a
> +context switch to handle the pipe interaction. The test sends a command to
> +U-Boot, which then reacts and shows some output, then the test sees that and
> +continues. Of course on real hardware, communications delays (e.g. with a serial
> +console) make this slower.
> +
> +For comparison, consider a test that checks the 'md' (memory dump) command. All
> +times below are approximate, as measured on an AMD 2950X system. Here is the
> +test in Python::
> +
> +   @pytest.mark.buildconfigspec('cmd_memory')
> +   def test_md(u_boot_console):
> +       """Test that md reads memory as expected, and that memory can be modified
> +       using the mw command."""
> +
> +       ram_base = u_boot_utils.find_ram_base(u_boot_console)
> +       addr = '%08x' % ram_base
> +       val = 'a5f09876'
> +       expected_response = addr + ': ' + val
> +       u_boot_console.run_command('mw ' + addr + ' 0 10')
> +       response = u_boot_console.run_command('md ' + addr + ' 10')
> +       assert expected_response not in response
> +       u_boot_console.run_command('mw ' + addr + ' ' + val)
> +       response = u_boot_console.run_command('md ' + addr + ' 10')
> +       assert expected_response in response
> +
> +This runs a few commands and checks the output. Note that it runs a command,
> +waits for the response and then checks it against what is expected. If run by
> +itself it takes around 800ms, including test collection. For 1000 runs it takes
> +19 seconds, or 19ms per run. Of course 1000 runs is not that useful since we
> +only want to run it once.
> +
> +There is no exactly equivalent C test, but here is a similar one that tests 'ms'
> +(memory search)::
> +
> +   /* Test 'ms' command with bytes */
> +   static int mem_test_ms_b(struct unit_test_state *uts)
> +   {
> +      u8 *buf;
> +
> +      buf = map_sysmem(0, BUF_SIZE + 1);
> +      memset(buf, '\0', BUF_SIZE);
> +      buf[0x0] = 0x12;
> +      buf[0x31] = 0x12;
> +      buf[0xff] = 0x12;
> +      buf[0x100] = 0x12;
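> +      /* Start recording console output so we can check it below */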
> +      ut_assertok(console_record_reset_enable());
> +      run_command("ms.b 1 ff 12", 0);
> +      ut_assert_nextline("00000030: 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................");
> +      ut_assert_nextline("--");
> +      ut_assert_nextline("000000f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12    ................");
> +      ut_assert_nextline("2 matches");
> +      ut_assert_console_end();
> +
> +      ut_asserteq(2, env_get_hex("memmatches", 0));
> +      ut_asserteq(0xff, env_get_hex("memaddr", 0));
> +      ut_asserteq(0xfe, env_get_hex("mempos", 0));
> +
> +      unmap_sysmem(buf);
> +
> +      return 0;
> +   }
> +   MEM_TEST(mem_test_ms_b, UT_TESTF_CONSOLE_REC);
> +
> +This runs the command directly in U-Boot, then checks the console output, also
> +directly in U-Boot. If run by itself this takes 100ms. For 1000 runs it takes
> +660ms, or 0.66ms per run.
> +
> +So overall running a C test is perhaps 8 times faster individually and the
> +interactions are perhaps 25 times faster.
> +
> +It should also be noted that the C test is fairly easy to debug. You can set a
> +breakpoint on do_mem_search(), which is what implements the 'ms' command,
> +single step to see what might be wrong, etc. That is also possible with
> +pytest, but requires two terminals and --gdbserver.
> +
> +
> +Why does speed matter?
> +----------------------
> +
> +Many development activities rely on running tests:
> +
> +  - 'git bisect run make qcheck' can be used to find a failing commit
> +  - test-driven development relies on quick iteration of build/test
> +  - U-Boot's continuous integration (CI) systems make use of tests. Running
> +    all sandbox tests typically takes 90 seconds and running each qemu test
> +    takes about 30 seconds. This is currently dwarfed by the time taken to
> +    build all boards.
> +
> +As U-Boot continues to grow its feature set, fast and reliable tests are a
> +critical factor in developer productivity and happiness.
> +
> +
> +Writing C tests
> +---------------
> +
> +C tests are arranged into suites which are typically executed by the 'ut'
> +command. Each suite is in its own file. This section describes how to accomplish
> +some common test tasks.

UEFI tests are written in C, started with 'bootefi selftest', and are in
none of the suites you refer to.

Best regards

Heinrich

> +
> +
> +Add a new driver model test
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Use this when adding a test for a new or existing uclass, adding new operations
> +or features to a uclass, adding new ofnode or dev_read_() functions, or anything
> +else related to driver model.
> +
> +Find a suitable place for your test, perhaps near other test functions in
> +existing code, or in a new file. Each uclass should have its own test file.
> +
> +Declare the test with::
> +
> +   /* Test that ... */
> +   static int dm_test_uclassname_what(struct unit_test_state *uts)
> +   {
> +      /* test code here */
> +
> +      return 0;
> +   }
> +   DM_TEST(dm_test_uclassname_what, UT_TESTF_SCAN_FDT);
> +
> +Replace 'uclassname' with the name of your uclass, if applicable. Replace 'what'
> +with what you are testing.
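> +
> +As an illustration, here is a minimal sketch of what a filled-in test might
> +look like, assuming a 'demo' uclass (sandbox provides one; the exact
> +assertions are illustrative only)::
> +
> +   /* Test that the demo uclass finds its first device */
> +   static int dm_test_demo_base(struct unit_test_state *uts)
> +   {
> +      struct udevice *dev;
> +
> +      /* Get the first demo device, failing the test on any error */
> +      ut_assertok(uclass_first_device_err(UCLASS_DEMO, &dev));
> +      ut_assertnonnull(dev);
> +
> +      return 0;
> +   }
> +   DM_TEST(dm_test_demo_base, UT_TESTF_SCAN_FDT);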
> +
> +The flags for DM_TEST() are defined in test/test.h and you typically want
> +UT_TESTF_SCAN_FDT so that the devicetree is scanned and all devices are bound
> +and ready for use. The DM_TEST macro adds UT_TESTF_DM automatically so that
> +the test runner knows it is a driver model test.
> +
> +Driver model tests are special in that the entire driver model state is
> +recreated anew for each test. This ensures that if a previous test deletes a
> +device, for example, it does not affect subsequent tests. Driver model tests
> +also run both with livetree and flattree, to ensure that both devicetree
> +implementations work as expected.
> +
> +Example commit: c48cb7ebfb4 ("sandbox: add ADC unit tests") [1]
> +
> +[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/c48cb7ebfb4
> +
> +
> +Add a C test to an existing suite
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Use this when you are adding to or modifying an existing feature outside driver
> +model. An example is bloblist.
> +
> +Add a new function in the same file as the rest of the suite and register it
> +with the suite. For example, to add a new mem_search test::
> +
> +   /* Test 'ms' command with 32-bit values */
> +   static int mem_test_ms_new_thing(struct unit_test_state *uts)
> +   {
> +         /* test code here*/
> +
> +         return 0;
> +   }
> +   MEM_TEST(mem_test_ms_new_thing, UT_TESTF_CONSOLE_REC);
> +
> +Note that the MEM_TEST() macro is defined at the top of the file.
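> +
> +For reference, such a macro typically follows the UNIT_TEST() pattern shown in
> +the next section, something like this (a sketch)::
> +
> +   #define MEM_TEST(_name, _flags)   UNIT_TEST(_name, _flags, mem_test)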
> +
> +Example commit: 9fe064646d2 ("bloblist: Support relocating to a larger space") [1]
> +
> +[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/9fe064646d2
> +
> +
> +Add a new test suite
> +~~~~~~~~~~~~~~~~~~~~
> +
> +Each suite should focus on one feature or subsystem, so if you are writing a
> +new one of those, you should add a new suite.
> +
> +Create a new file in test/ or a subdirectory and define a macro to register the
> +suite. For example::
> +
> +   #include <common.h>
> +   #include <console.h>
> +   #include <mapmem.h>
> +   #include <dm/test.h>
> +   #include <test/ut.h>
> +
> +   /* Declare a new wibble test */
> +   #define WIBBLE_TEST(_name, _flags)   UNIT_TEST(_name, _flags, wibble_test)
> +
> +   /* Tests go here */
> +
> +   /* At the bottom of the file: */
> +
> +   int do_ut_wibble(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
> +   {
> +      struct unit_test *tests = UNIT_TEST_SUITE_START(wibble_test);
> +      const int n_ents = UNIT_TEST_SUITE_COUNT(wibble_test);
> +
> +      return cmd_ut_category("cmd_wibble", "wibble_test_", tests, n_ents, argc, argv);
> +   }
> +
> +Then add new tests to it as above.
> +
> +Register this new suite in test/cmd_ut.c by adding to cmd_ut_sub[]::
> +
> +  /* Within cmd_ut_sub[]... */
> +
> +  U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),
> +
> +and adding new help to ut_help_text[]::
> +
> +  "ut wibble - Test the wibble feature\n"
> +
> +If your feature is conditional on a particular Kconfig, then you can use #ifdef
> +to control that.
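> +
> +For example (a sketch, assuming the feature is guarded by a hypothetical
> +CONFIG_WIBBLE option)::
> +
> +  #ifdef CONFIG_WIBBLE
> +  U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),
> +  #endif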
> +
> +Finally, add the test to the build by adding to the Makefile in the same
> +directory::
> +
> +  obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o
> +
> +Note that CMDLINE is never enabled in SPL, so this test will only be present in
> +U-Boot proper. See below for how to do SPL tests.
> +
> +As before, you can add an extra Kconfig check if needed::
> +
> +  ifneq ($(CONFIG_$(SPL_)WIBBLE),)
> +  obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o
> +  endif
> +
> +
> +Example commit: 919e7a8fb64 ("test: Add a simple test for bloblist") [1]
> +
> +[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/919e7a8fb64
> +
> +
> +Making the test run from pytest
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +All C tests must run from pytest. Typically this is automatic, since pytest
> +scans the U-Boot executable for available tests to run. So long as you have a
> +'ut' subcommand for your test suite, it will run. The same applies for driver
> +model tests since they use the 'ut dm' subcommand.
> +
> +See test/py/tests/test_ut.py for how unit tests are run.
> +
> +
> +Add a C test for SPL
> +~~~~~~~~~~~~~~~~~~~~
> +
> +Note: C tests are only available for sandbox_spl at present. There is currently
> +no mechanism in other boards to run existing SPL tests even if they are built
> +into the image.
> +
> +SPL tests cannot be run from the 'ut' command since there are no commands
> +available in SPL. Instead, sandbox (only) calls ut_run_list() on start-up, when
> +the -u flag is given. This runs the available unit tests, no matter what suite
> +they are in.
> +
> +To create a new SPL test, follow the same rules as above, either adding to an
> +existing suite or creating a new one.
> +
> +An example SPL test is spl_test_load().
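> +
> +As a sketch, a minimal SPL test might look like this (the suite name and test
> +body are illustrative only, following the pattern above)::
> +
> +   #include <common.h>
> +   #include <test/ut.h>
> +
> +   /* Declare a new SPL test */
> +   #define SPL_TEST(_name, _flags)   UNIT_TEST(_name, _flags, spl_test)
> +
> +   /* Check something simple; a real test would exercise an SPL feature */
> +   static int spl_test_wibble(struct unit_test_state *uts)
> +   {
> +      char buf[8];
> +
> +      strcpy(buf, "SPL");
> +      ut_asserteq_str("SPL", buf);
> +
> +      return 0;
> +   }
> +   SPL_TEST(spl_test_wibble, 0);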
> +
> +
> +Writing Python tests
> +--------------------
> +
> +See :doc:`py_testing` for brief notes on how to write Python tests. You
> +should be able to use the existing tests in test/py/tests as examples.
>


