[U-Boot] Sandbox DT for testing (unit tests)
Stephen Warren
swarren at wwwdotorg.org
Wed Jan 27 00:28:06 CET 2016
On 01/26/2016 04:08 PM, Simon Glass wrote:
> Hi Stephen,
>
> On 26 January 2016 at 15:36, Stephen Warren <swarren at wwwdotorg.org> wrote:
>> Simon,
>>
>> I noticed that under sandbox, "ut dm" needs sandbox to have been started
>> with arch/sandbox/dts/test.dtb. A few questions related to that:
>>
>> a) Is it safe and does it make sense to always use that DT when running
>> sandbox for tests (e.g. under test/py)?
>
> Yes.
>
>>
>> b) Does it make sense for that DT to be the default (perhaps bundled into
>> the executable like other DT-using platforms, or perhaps the default value
>> for the -d option if the user supplies none)?
>
> There is a separate sandbox.dts which is the default with the -D
> option. I don't think the test.dts should be used by default at
> present.
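
For test/py, then, the harness would need to pass test.dtb explicitly
when spawning sandbox rather than relying on -D. A rough sketch of
that, where the environment variable and paths are just illustrative
assumptions:

import os
import subprocess

# Start sandbox against test.dtb via -d instead of letting -D pick the
# default sandbox.dtb. U_BOOT_BUILD_DIR and the paths are assumptions.
build_dir = os.environ.get('U_BOOT_BUILD_DIR', '.')
cmd = [
    os.path.join(build_dir, 'u-boot'),
    '-d', os.path.join(build_dir, 'arch/sandbox/dts/test.dtb'),
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)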
>
>>
>> c) Is it possible for "ut dm" to detect if the correct DT has been loaded
>> (e.g. by reading some property only in that file as a marker) and only
>> execute tests that don't rely on test.dtb if test.dtb isn't in use?
>
> Sure - just look for something that should be there, or perhaps check
> the compatible string or model in the root node?
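
That marker could equally be consumed from the test/py side, to decide
which pytests to run at all. A rough, untested sketch (it assumes the
sandbox binary has the "fdt" command enabled, that $fdtcontroladdr is
set, and it uses a made-up "sandbox_test" model string as the marker;
the "Failures: 0" check is likewise an assumption about ut's output):

import pytest

def dtb_is_test_dtb(u_boot_console):
    # Point the fdt command at the control DT, then read the root
    # node's model property to use as a marker for test.dtb.
    u_boot_console.run_command('fdt addr $fdtcontroladdr')
    output = u_boot_console.run_command('fdt print / model')
    return 'sandbox_test' in output  # made-up marker value

def test_ut_dm(u_boot_console):
    if not dtb_is_test_dtb(u_boot_console):
        pytest.skip('test.dtb not in use; skipping DT-dependent tests')
    output = u_boot_console.run_command('ut dm')
    assert 'Failures: 0' in output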
>
>>
>> I think running "ut env" and "ut time" under test/py should be very easy,
>> although the test log will only report overall status, not the status of
>> each individual test within the ut invocation. That information will still
>> be in the log file though. I'll go add tests for those two at least.
>
> Sounds good. But presumably it would not be too hard to report the
> status of each individual test?
Unfortunately, this would be quite hard.
The way pytest works is that it first scans the source tree (or some
designated tree; currently test/py/tests/) for files, classes, and
functions that define tests, and "collects" a list of them. Then, it
iterates over that list and executes each test. Each collected test
maps to one test invocation and one reported test status. In other
words, the full set of tests to execute is determined before any test
actually runs.
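
As a small generic illustration of that collect-then-run split (plain
pytest, nothing U-Boot-specific): the parametrize list below is
evaluated at collection time, so the full set of tests, and their
reported names, exists before any test body executes.

import pytest

# Evaluated at collection time; each entry becomes one separately
# collected and separately reported test.
@pytest.mark.parametrize('suite', ['env', 'time'])
def test_example(suite):
    assert suite in ('env', 'time')

Running "py.test --collect-only" on such a file lists test_example[env]
and test_example[time] without executing either of them.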
Right now, I'm writing a single test_ut_env() function (and hence a
single pytest test) which executes "ut env" on the U-Boot command line
and reports a single status. That appears to be the minimum granularity
the U-Boot ut shell interface offers, so I can't easily create pytests
at any finer granularity than that. There are obviously other
functions/tests for "ut time", "ut cmd", etc.
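
In simplified form, such a function amounts to something like this
sketch (it assumes the u_boot_console fixture from test/py, and that
"ut" prints a failure count at the end of its output):

def test_ut_env(u_boot_console):
    # One pytest, and hence one reported status, for the whole suite.
    output = u_boot_console.run_command('ut env')
    assert 'Failures: 0' in output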
About the only way to do anything better would be to write a custom
"collector" implementation that parsed the C source at collection time
to determine the set of sub-tests (or perhaps hard-coded the list in
Python instead) in order to generate more pytests. Then we could have
one pytest that executed "ut env" and saved the results, and all the
other tests would each parse just their part of the console output to
determine that individual test's status. I expect this would be very
fragile, though.
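
For completeness, the hard-coded-list variant might look something like
this sketch; the sub-test names and the output format being matched are
invented here, which is exactly why it would be fragile:

import pytest

# Hypothetical sub-test names; a real list would have to mirror the C
# sources (or be parsed out of them) and track every change to them.
ENV_SUBTESTS = ['env_test_one', 'env_test_two']

@pytest.fixture
def ut_env_output(u_boot_console):
    # Re-runs "ut env" for each pytest; caching a single run's output
    # would need a console fixture with wider-than-function scope.
    return u_boot_console.run_command('ut env')

@pytest.mark.parametrize('subtest', ENV_SUBTESTS)
def test_ut_env_subtest(ut_env_output, subtest):
    # Fragile: depends on the exact per-test lines "ut env" prints.
    assert subtest in ut_env_output
    assert '%s: failed' % subtest not in ut_env_output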