[tbot] [DENX] tbot: board hangs if no autoload

Stefano Babic sbabic at denx.de
Thu Nov 15 14:51:54 UTC 2018


Hello Heiko,

On 15/11/18 13:01, Heiko Schocher wrote:
> Hello Stefano,
> 

[snip]

>>
>> Well, developers have their own ways and each of us does things in a
>> different, preferred way. It would just start a flame war, like using
>> Vim instead of Emacs...
> 
> Of course! I cannot force anyone to use tbot ...
> 
> But I hope that others are also lazy, and want to automate as many tasks
> as they can.
> 
>> But what developers surely need (and this is why I put functional tests
>> on top of priorities) is a way to validate what they did and to have
>> regression tests without a lot of effort. And in both of them, tbot
>> excels.
> 
> Isn't it also a valid testcase to ensure, for example, that U-Boot
> compiles?
> 
> Just yesterday I posted a patch on the U-Boot ML which compiled, but with
> warnings I did not check, because I build U-Boot with bitbake ... :-(

As maintainer, my current work-flow is with buildman and/or travis. I
get an e-mail if travis reports an error; if not, I send a PR...

> 
> Or if customer uses swupdate for updating, write a testcase for it?

Exactly, this is the functional test case. And this is why I am not so
interested in a specific setup for installing U-Boot and/or the kernel
on the target. The specific install details are already hidden by
SWUpdate, so I do not have to take care of them. My testcase is to push
the SWU to the target, and this can be done generically because the
project-specific parts are already handled.
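Since the board-specific details are hidden, the generic part of such a
testcase reduces to building and running one upload command. A minimal
sketch, assuming the target runs SWUpdate's embedded webserver on port
8080 with the documented /upload endpoint (defaults which may differ in
a real setup):

```python
def swu_push_command(target_ip: str, swu_path: str) -> list:
    """Build the argv list that uploads an SWU image to the target.

    Assumes SWUpdate's embedded webserver accepts uploads on /upload
    (the documented default); adjust host, port and endpoint as needed.
    """
    return [
        "curl",
        "-F", "file=@%s" % swu_path,
        "http://%s:8080/upload" % target_ip,
    ]
```

In a tbot testcase this command could then be executed on the lab host;
only the target IP and the SWU file name are project specific.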

> 
>> I would not say that there won't be a customer who wants to have this,
>> but as far as I know, most customers rely on an already known way to
>> build software (Jenkins / bitbake / ..) and I guess that building from
>> U-Boot is not their first priority. But testing that the build works is
>> at the top of the list.
> 
> Ok. But that's the benefit of using tbot. You (as developer) can automate
> *all* the tasks you have ... and pass the customer, for example, only the
> testcase which starts testing functionality for his board ...

Yes, I think anyone should find the most profitable way to use the tool.

> 
>>> And if I have one command for doing all the boring stuff from
>>> scratch ... this is nice. Also if you get at the end a documentation
>>> with all the steps for the customer, how to reproduce this.
>>>
>>>> If we start to convert how to install software on the board, we start
>>>> with a lot of single different cases, because this is absolutely board
>>>> specific.
>>>
>>> Yes ... so write for the board specific part a board specific testcase
>>> which is called from a generic part ...
>>
>> I am just looking at the current status and what we have available. To
>> do this, I would expect that class Board has additional methods like
>> "install_uboot" and/or "install_linux" next to poweron / poweroff / ..,
>> see machine/board/board.py. So I guess we are not ready for it, and it
>> is better to start with testcases that do not require a very specific
>> setup for each board.
> 
> I rather have in mind not to fill the class with a lot of tasks, but
> instead to keep tbot as simple as possible and do the hard work in
> testcases ...

ok

> 
> But maybe I am on the wrong track here ...
> 
>>>> My vote goes to start with the more general cases, that is: Software is
>>>> on the board, does the board work as expected ? Things like:
>>>>
>>>> - U-Boot:
>>>>      - does network work ?
>>>>      - does storage work ?
>>>>      - do other u-boot peripherals work ?
>>>
>>> Of course also a valid starting point!
>>>
>>> But you also must define a way to find out what devices are
>>> on the board... I did for example "help date" and if this is
>>> successful, I can test the date command ...
>>
>> I think this can be ok to put into the board configuration file. It is a
>> static configuration and does not depend on the runtime.
> 
> Hmm... really .. think on the capes of the beagleboneblack ...
> 
> I would say, write a board specific testcase, which calls all the (maybe
> generic) testcases you want to run on the board ... or tests what
> testcases it can run ...
>>> Or parse help output and decide then?
>>
>> Also a good idea.
>>
>>> Parse U-Boot's config and/or DTS ?
>>>
>>>> Such cases - they are unaware of which board is running, and we can at
>>>> the early beginning have more general test cases. Same thing for linux,
>>>> but of course we can have much more.
>>>
>>> see above.
>>>
>>> Also, as you can call testcases from another testcase, you can write
>>> a board specific testcase, in which you (as board maintainer) should
>>> know which generic testcases you can call ...
>>
>> That is nice! I'll wait for tomorrow when the testcases are put into
>> tbot-denx. It will help me understand better.
> 
> At least with the old tbot you can do this ... and I am sure Harald's
> newer version can do it too!
> 
> I had/have variables which hold the name of a testcase ... so you can
> write a generic testcase, which calls testcases you can configure
> in the board config file ...
> 
> For example:
> https://github.com/hsdenx/tbot/blob/master/src/tc/demo/u-boot/tc_demo_compile_install_test.py#L134
> 
> 
> if tb.config.tc_demo_uboot_test_update != 'none':
> 
> call the testcase whose name is in this variable ... so you can write a
> board specific testcase, which installs SPL/U-Boot on your specific
> board ...
> 
> so you can set (old tbot!) in your board or lab config file:
> 
> tc_demo_uboot_test_update = 'tc_install_uboot_on_p2020rdb.py'

ok, maybe now you see my point, because this is exactly what I do not
like. I prefer a more generic approach, as I see in the example. I can
test any Linux command simply with:


    with contextlib.ExitStack() as cx:
        lh = cx.enter_context(tbot.acquire_lab())
        b = cx.enter_context(tbot.acquire_board(lh))
        lnx = cx.enter_context(tbot.acquire_linux(b))

This hides which board (mira in my case), which lab, how U-Boot is
started and how Linux is started. There is no "boot_linux_on_mira";
it is simply tbot.acquire_linux(b) ! Very nice !

The class hides the specific parts (which U-Boot variables I must set,
which commands, ..). That means this testcase runs as it is on any
board, from power-off until it is turned off again, and full reuse is
possible. Something like:

@tbot.testcase
def install_uboot() -> None:
    with tbot.acquire_lab() as lh:
        with tbot.acquire_board(lh) as b:
            with tbot.acquire_uboot(b) as ub:
                ub.install_uboot()

And this is generic; I just need to define a set of commands for my
board to install U-Boot (like the "boot_command" array, I mean).

In your example, I cannot make a generic testcase like
tc_demo_uboot_test_update work for any board, because it must
reference a specific file / function (install_uboot_p2020rdb). IMHO it
is better to have an abstraction that hides the board-specific parts
(as the enter_context above lets me do).
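To make the difference concrete, here is a minimal sketch of such an
abstraction (the class and method names are illustrative, not the actual
tbot API): the generic testcase only ever calls install_uboot(), and
each board supplies its own command list, just like a boot_command
array:

```python
import abc

class BoardBase(abc.ABC):
    """Hypothetical base class: hides the board-specific install part."""

    @abc.abstractmethod
    def install_commands(self) -> list:
        """Board-specific console commands to install U-Boot."""

    def install_uboot(self) -> list:
        # A real implementation would send each command to the board's
        # console; here we only collect them to show the generic flow.
        return list(self.install_commands())

class Mira(BoardBase):
    def install_commands(self) -> list:
        return [
            "tftp ${loadaddr} u-boot.img",
            "sf update ${loadaddr} 0 ${filesize}",
        ]
```

A generic testcase can now call install_uboot() on any board object
without knowing which commands are behind it.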

> 
> and the generic testcase will call this board specific function for
> installing SPL/U-Boot for the p2020rdb board ...
> 
> You got the idea ?
> 
> Hope I do not make Harald now headaches :-P

Maybe I have added some headaches...

> 
>>>>> - create a register dump file
>>>>>     write register content into a register dump file
>>>>> - do register checks
>>>>>     open register dump file and check if register content
>>>>>     is the same as in the file
>>>>> - convert all DUTS testcases
>>>>>     http://git.denx.de/?p=duts.git
>>>>
>>>> I do not think this is a great idea. This "duts" is obsolete, and I
>>>> think we now have a more generic and better concept with tbot. I
>>>> think we should just have a list of test cases and then translate
>>>> them into @tbot.testcase, without looking at the past. IMHO duts is
>>>> quite broken and we should not care about it; it would just confuse
>>>> us and could be a waste of time.
>>>
>>> But there are a lot of valid tests!
>>
>> That is the reason I think we should have a list of testcases, and then
>> implement them as @tbot.testcase
> 
> Yes!
> 
>>> It is just an idea ... I converted some of them (not finished all)
>>> and made based on the results a U-Boot cmdline documentation, as
>>> we had with the DULG.
>>
>> ok, I'll wait for it ;-)
> 
> :-P
> 
> Not ready for the new tbot ... patches are welcome!
> 
>>>>>     goal, create at the end a u-boot commandline documentation
>>>>>
>>>>> - call pytest from u-boot?
>>>>
>>>> Do we ?
>>>
>>> I meant: call U-Boot testframework which is in "tools/py"
>>> from tbot.
>>>
>>>>> - if new u-boot does not boot, switch bootmode and unbreak it
>>>>
>>>> This is also very board specific and it does not always work. I prefer
>>>> to start with a more generic approach.
>>>>
>>>> For example, start with testing the network in U-Boot. How can I split
>>>> between lab setup and board setup ? Let's say the tftp server. I can
>>>> set a "setenv serverip" in the board file, but this is broken, because
>>>> a board could belong to different labs (I have a mira here and I have
>>>> my own lab setup). Is there a way to do this ? Where should I look
>>>> for such cases ?
>>>
>>> Then the serverip should be a lab specific variable.
>>
>> Shouldn't it be an attribute of the UbootMachine class, which I can
>> override in my lab.py ?
> 
> Or better, maybe it is detectable through a testcase executed on the
> lab PC ?
> 
> The tftp serverip is configured somewhere on the lab PC ... so write a
> testcase

But a lab PC is not strictly required, and we do not have one. If you
look at the code, I managed to add my lab simply by having my own
board/lab.py instead of board/denx.py, and I inherit my board (mira)
from it.
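As a sketch of this inheritance idea (class and attribute names are
illustrative, not the real tbot classes), the private lab file only
overrides the lab-specific settings, such as the serverip discussed
above, and the board picks them up automatically:

```python
class DenxLab:
    """Settings for the shared lab (illustrative)."""
    serverip = "192.168.1.1"   # tftp server in the shared lab

class MyLab(DenxLab):
    """My own lab.py: override only what differs."""
    serverip = "10.0.0.5"      # my local tftp server

class Mira(MyLab):
    """The board inherits all lab settings it does not override."""
    name = "mira"
```

A generic network testcase can then use Mira.serverip without caring
which lab the board currently sits in.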

> for it, which returns the ip ... and you do not need to configure it!
> 
>>>>> Linux:
>>>>>
>>>>> - get sources
>>>>> - may apply patches to it
>>>>> - install linux on the board
>>>>> - check if booted version is the expected one
>>>>> - create a register dump file
>>>>>     write register content into a register dump file
>>>>> - do register checks
>>>>
>>>> See above. I think this is useful during a porting, but it is less
>>>> useful for a customer who wants to test functionalities. I would
>>>> like to
>>>
>>> I have here another opinion.
>>
>> Well, of course ;-). We should not always agree, we get more improvement
>> when we discuss and have different opinions ! ;-)
> 
> Yep!
> 
> I like this discussion ... for nearly 4 years almost nobody was
> interested in my old tbot.
> Ok, it was a big misuse of python ... but it worked ;-)
> 
> I cannot say it often enough ... many thanks to Harald!
> 
>>> This is also interesting for a customer.
>>>
>>> Which customer never changes a DTS or never tries a linux update
>>> on his own?
>>>
>>> If he has an automated check that all important registers are set up
>>> as expected ... this is nice.
>>>
>>> This testcase could be done very generic...
>>>
>>>> have first a catalog of testcases with functionalities, like:
>>>>
>>>>      - is network working ?
>>>>      - are peripherals working (SPI / I2C /....) ?
>>>
>>> Yes. My hope is, that we get a lot of users, so we will get a lot of
>>> testcases ;-)
>>
>> ok
>>
>>>
>>>> In the ideal case, DT is parsed to get a list of testcases...
>>>
>>> Yes.
>>>
>>>>>     open register dump file and check if register content
>>>>>     is the same as in the file
>>>>> - look if a list of string are in dmesg output
>>>>>
>>>>> - look for example at the LTP project, what they test
>>>>>
>>>>
>>>> +1
>>>>
>>>> LTP contains a lot of useful testcases, but of course they are meant to
>>>> run as scripts directly on the target / host. Anyway, they have
>>>> testcases for a lot of things.
>>>
>>> Yes, and we may be able to use these scripts! Start them and analyse
>>> the results.
>>>
>>
>> ok, I'll leave this for later; it is not clear to me how...
> 
> I am also just speculating. But executing a script on the board is easy...

I see a lot of calls to something LTP-related (tst_res, tst_resm, ..).
Most testcases are simple; we could have most of them in tbot as their
own testcases, written in Python.
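As one example of how such a converted testcase could look, the
dmesg-string check mentioned earlier needs only a small helper: the
output would be fetched from the target (e.g. via the Linux machine
handle) and the check itself is plain Python. The helper name and flow
are my own sketch, not an existing tbot testcase:

```python
def missing_dmesg_strings(dmesg: str, required: list) -> list:
    """Return the required strings that do not appear in the dmesg output."""
    return [s for s in required if s not in dmesg]
```

A tbot testcase would run dmesg on the target and then fail if the
returned list is non-empty.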

> 
>>>>> - check if ptest-runner is in the rootfs and call it
>>>>
>>>> ptest-runner means python. Do we have it on most projects ? Some yes,
>>>> some not...
>>>
>>> That is why it says "check if ptest-runner exists" ;-)
>>>
>>>>> ...
>>>>>
>>>>> yocto:
>>>>> - get the sources
>>>>> - configure
>>>>> - bake
>>>>> - check if files you are interested in are created
>>>>> - install new images
>>>>> - boot them
>>>>> - check if rootfsversion is correct
>>>>
>>>> See above - IMHO it is better to split between functional tests on
>>>> target and build, and to start with the functional tests.
>>>
>>> Of course. Both parts can be done independently
>>
>> Sure !

Regards,
Stefano


-- 
=====================================================================
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-53 Fax: +49-8142-66989-80 Email: sbabic at denx.de
=====================================================================

