[tbot] Problem with Linux boot to prompt

Harald Seiler hws at denx.de
Thu Jan 24 12:00:13 UTC 2019

Hello Wolfgang,

On Thu, 2019-01-24 at 12:20 +0100, Wolfgang Denk wrote:
> Dear Harald,
> In message <1548326964.2436.22.camel at denx.de> you wrote:
> > 
> > Skip does not really make sense in tbot's design, where other testcases
> > might depend on one testcase's return value.
> Can we please add this as a "nice to have"?
> Maybe even as a general option/flag similar to what "make -k" 
> (--keep-going) is doing?
> Assume you have a system that has been thoroughly tested and then
> released to the customer.  All tests are running fine.  Later, we
> update to a new U-Boot/Linux/Yocto version.  In such a situation a
> quick test run that shows "8 tests are broken" is very useful - for
> example, you can give the customer quick feedback about schedule and
> expected efforts.

I was referring to testcases which have hard dependencies:  For example,
it does not make sense to attempt calling `make` to build U-Boot, if the
`git clone` failed.  This is why tbot defaults to aborting the whole run.

What you are thinking of is a "test-suite" [1], which tbot also has,
for exactly the reason you brought up.  For example, `tbot selftest`
is such a test-suite and after running, it reports:

	Success: 17/17 tests passed

Or, in case some of them don't work:

	Failure: 2/17 tests failed

	(Followed by a list of failed testcases and some diagnostics)

The test-suite implementation is actually no magic at all:


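A minimal sketch of the pattern (the helper name `run_testsuite` and the dummy testcases are illustrative only, not tbot's actual API):

```python
# Sketch of the test-suite pattern: run every testcase, collect
# failures instead of aborting on the first one.

def run_testsuite(*testcases):
    errors = []

    for tc in testcases:
        try:
            tc()
        except Exception as exc:
            # Collect the error and keep going with the next test.
            errors.append((tc.__name__, exc))

    total = len(testcases)
    if errors:
        print(f"Failure: {len(errors)}/{total} tests failed")
        for name, exc in errors:
            print(f"    {name}: {exc!r}")
    else:
        print(f"Success: {total}/{total} tests passed")
    return errors


# Example usage with two dummy testcases:
def tc_ok():
    pass

def tc_broken():
    raise AssertionError("U-Boot version mismatch")

failed = run_testsuite(tc_ok, tc_broken)
```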
As you can see, each testcase is called in a try-block and if it
fails, the error is collected and then the next test is run.  You
can use the same pattern if you want to tolerate certain failures
in your own tests.  The use of exceptions also has a nice benefit:
you can tolerate only some specific types of failure by restricting
the except-block.  For example, you might want to tolerate the wrong
U-Boot version being installed, but not the power-switch failing ...
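To illustrate restricting the except-block, here is a hedged sketch; the exception classes and the check below are made up for illustration and are not tbot's real exceptions:

```python
# Only a tolerated failure type is caught; anything else propagates
# and aborts the run as usual.  These names are hypothetical.

class WrongUBootVersionError(Exception):
    pass

class PowerSwitchError(Exception):
    pass

def check_uboot_version():
    # Pretend the installed U-Boot is older than expected.
    raise WrongUBootVersionError("expected 2019.01, found 2018.11")

tolerated = []
try:
    check_uboot_version()
except WrongUBootVersionError as exc:
    # A version mismatch is tolerated and recorded; a PowerSwitchError
    # raised here would NOT be caught and would abort the whole run.
    tolerated.append(exc)
```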

> If each failure terminates the test sequence, this means you have to
> actually fix each problem as it occurs, one after the other, and you
> will never know how many breakages are still ahead.
> > If you want to skip, do it
> > like `selftest_board_linux`:
> > 
> > 	https://github.com/Rahix/tbot/blob/master/tbot/tc/selftest/board_machine.py#L136
> This seems unpractical to me if I think at the scenario above.  It
> would be nice to be able to switch on such behaviour without
> changing the test case code, just by flipping a switch :-)

I think these are two different types of 'SKIP'.  What you are talking about
is the kind known from other frameworks, e.g. pytest:  You can set the maximum
number of tests that are allowed to fail before a run is aborted, or filter
which tests in a test-suite are actually attempted at all.

The example I showed is an 'intelligent' skip.  The `selftest_board_linux` test
automatically returns early (and shows the skip message) if it detects that the
selected board does not support Linux (most of the time because no board was selected).
I implemented this behavior to allow running selftests in Travis CI where we
don't have access to actual hardware to test tbot with.

'Your SKIP' is partially implemented right now: The test-suite I showed above
can attempt running *all* tests, but does not have filtering or a max-fail
parameter.  This is something I will look into implementing!

(I hope I was able to convey what I am trying to say ... If not, please tell me!)

> Thanks!
> Best regards,
> Wolfgang Denk

[1]: https://rahix.de/tbot/module-tc.html#tbot.tc.testsuite


DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-62  Fax: +49-8142-66989-80   Email: hws at denx.de
