[tbot] [DENX] tbot: board hangs if no autoload
Heiko Schocher
hs at denx.de
Thu Nov 15 12:01:47 UTC 2018
Hello Stefano,
Am 15.11.2018 um 12:16 schrieb Stefano Babic:
> Hi Heiko,
>
> On 15/11/18 11:44, Heiko Schocher wrote:
>> Hello Stefano,
>>
>> Am 15.11.2018 um 11:23 schrieb Stefano Babic:
>>> Hi Heiko, Harald,
>>>
>>> On 15/11/18 10:49, Heiko Schocher wrote:
>>>> Hello Harald,
>>>>
>>>> Am 15.11.2018 um 10:28 schrieb Harald Seiler:
>>>>> On Thu, 2018-11-15 at 10:23 +0100, Lukasz Majewski wrote:
>>>>>> On Thu, 15 Nov 2018 10:19:17 +0100
>>>>>> Harald Seiler <hws at denx.de> wrote:
>>>>>>
>>>>>>> On Thu, 2018-11-15 at 10:10 +0100, Stefano Babic wrote:
>>>>>>>> Hi Harald,
>>>>>>>>
>>>>>>>> On 15/11/18 09:36, Harald Seiler wrote:
>>>>>>>>> Hi Stefano!
>>>>>>>>>
>>>>>>>>> Yes, TBot waits for an autoboot prompt. You can disable this by
>>>>>>>>> setting `autoboot_prompt` in your UBootMachine to the U-Boot
>>>>>>>>> prompt.
>>>>>>>>>
>>>>>>>>> class MyUBoot(board.UBootMachine):
>>>>>>>>>     prompt = "=> "
>>>>>>>>>     autoboot_prompt = "=> "
>>>>>>>>>
>>>>>>>>> I know this is more of a hack
>>>>>>>>
>>>>>>>> Yes, because it is not a fixed property of the board. It depends on
>>>>>>>> whether autoload is active and bootcmd is set on the board. The "mira"
>>>>>>>> board does not have "bootcmd" set in its default environment, and the
>>>>>>>> behavior is different just after setting bootcmd.
>>>>>>>
>>>>>>> Hmm, good point, I will think about it ...
>>>>>>>
>>>>>>>>
>>>>>>>>> and I will add a proper way to do this in
>>>>>>>>> a future release
>>>>>>>>
>>>>>>>> Nice !
>>>>>>>>
>>>>>>>>> (keep an eye on the CHANGELOG, there will be a lot of
>>>>>>>>> small convenience features like this in the next versions!)
>>>>>>>>
>>>>>>>> Do we have a repo for testcases in the new format ? I really
>>>>>>>> appreciate that the new tbot has a clean split between software
>>>>>>>> (tbot), setup (boards and lab) and testcases. We have repos for the
>>>>>>>> first two, but I cannot find a set of common tc. I mean, tc that are
>>>>>>>> already converted to @tbot.testcase - if not, I would start to
>>>>>>>> write them myself, but I do not want to reinvent the wheel (and of
>>>>>>>> course, I will make more mistakes..)
>>>>>>>
>>>>>>> That is what my `tbot-denx` repo is supposed to be (DENX-Internal
>>>>>>> only):
>>>>>>>
>>>>>>> https://gitlab.denx.de/HaraldSeiler/tbot-denx
>>>>>>>
>>>>>>> However, it doesn't contain any testcases yet.
>>>>>>
>>>>>> Is there a way to convert (or directly re-use) Heiko's test cases?
>>>>>
>>>>> Unfortunately not, there is absolutely no compatibility between the two
>>>>> versions ... So it needs a human to do it.
>>>>>
>>>>> I guess, at the moment I am in the best position to do so, I just need
>>>>> input about which testcases have the highest priority to you.
>>>>
>>>> Instead, it may make more sense to first discuss what we want to test
>>>> and then how to write the testcases?
>>>
>>> Right, let's see.
>>>
>>>>
>>>> Proposal what I have already:
>>>>
>>>> U-Boot
>>>>
>>>> - get sources
>>>> - may apply patches to it
>>>>   list of patches in a directory ?
>>>>   get a list of patches from the patchwork todo list?
>>>>
>>>> - install u-boot on the board
>>>> - check if really the new version of u-boot boots
>>>
>>> I admit that I am not very interested in how to build and push software
>>> onto the board. There are a lot of different paths, and this is not what
>>> the customers ask for. They have their own build server (Yocto /
>>> buildroot / Jenkins / custom / whatever...), and the process of getting
>>> and applying patches is more interesting for a U-Boot maintainer than
>>> for a customer.
>>
>> Yes ... why should developers not use tbot??
>
> Well, developers have their own ways, and each of us does things in a
> different, preferred way. Arguing about it would just start a flame war,
> like Vim vs. Emacs...
Of course! I cannot force anyone to use tbot ...
But I hope that others are also lazy and want to automate as many tasks
as they can.
> But what developers surely need (and this is why I put functional tests
> on top of priorities) is a way to validate what they did and to have
> regression tests without a lot of effort. And in both of them, tbot excels.
Isn't it, for example, also a valid testcase to ensure that U-Boot
compiles?
Just yesterday I posted a patch on the U-Boot ML which compiled, but
emitted warnings I did not check, because I build U-Boot with bitbake
... :-(
Or, if a customer uses swupdate for updating, write a testcase for that?
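Such a compile check could boil down to scanning the captured build log for compiler warnings; a minimal, tbot-independent sketch (the function name and log format are illustrative assumptions, not actual tbot API):

```python
def find_warnings(build_log: str) -> list:
    """Return all lines of a build log that look like compiler warnings.

    A testcase could run the build, capture its output, and fail if this
    list is non-empty -- so warnings are never silently dropped again.
    """
    return [
        line.strip()
        for line in build_log.splitlines()
        if "warning:" in line or line.lstrip().startswith("WARNING")
    ]
```

The same check works no matter whether the build was done via bitbake, a plain `make`, or a Jenkins job, as long as the output is captured.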
> I would not say that there won't be a customer who wants to have this,
> but as far as I know, most customers rely on already known way to build
> software (Jenkins / bitbake /..) and I guess that building from U-Boot
> is not the first priority for them. But testing that the build works is
> on the top of the list.
Ok. But that's the benefit of using tbot. You (as a developer) can
automate *all* the tasks you have ... and pass the customer only, for
example, the testcase which tests functionality for his board ...
>> And if I have one command for doing all the boring stuff from
>> scratch ... that is nice. Also nice if, at the end, you get
>> documentation for the customer with all the steps to reproduce this.
>>
>>> If we start to convert how to install software on the board, we start
>>> with a lot of single different cases, because this is absolutely board
>>> specific.
>>
>> Yes ... so write a board-specific testcase for the board-specific part,
>> which is called from a generic part ...
>
> I am just looking to the current status and what we have available. To
> do this, I am expecting that class Board has additional methods like
> "install_uboot" and/or "install_linux" near poweron / poweroff/.., see
> machine/board/board.py. So I guess we are not ready for it and it is
> better to start with testcases that do not imply to have a very specific
> setup for each board.
I rather have in mind not to fill the class with a lot of tasks; instead,
keep tbot as simple as possible and do the hard work in testcases ...
But maybe I am on the wrong track here ...
>>> My vote goes to start with the more general cases, that is: Software is
>>> on the board, does the board work as expected ? Things like:
>>>
>>> - U-Boot:
>>> - does network work ?
>>> - does storage work ?
>>> - do other u-boot peripherals work ?
>>
>> Of course also a valid starting point!
>>
>> But you must also define a way to find out what devices are
>> on the board... For example, I ran "help date", and if that was
>> successful, I could test the date command ...
>
> I think this can be ok to put into the board configuration file. It is a
> static configuration and does not depend on the runtime.
Hmm... really? .. think of the capes of the BeagleBone Black ...
I would say, write a board-specific testcase which calls all the (maybe
generic) testcases you want to run on the board ... or which tests what
testcases it can run ...
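The probing approach could be as simple as parsing U-Boot's `help` output and matching the available commands against a catalog of generic testcases; a sketch (the command-to-testcase mapping is a made-up example):

```python
def parse_help(help_output: str) -> set:
    """Extract available command names from U-Boot `help` output.

    Each line looks like: "date      - get/set/reset date & time".
    """
    commands = set()
    for line in help_output.splitlines():
        if " - " in line:
            commands.add(line.split(" - ", 1)[0].strip())
    return commands

def runnable_testcases(commands, catalog):
    """Pick the generic testcases whose required command is available."""
    return [tc for cmd, tc in catalog.items() if cmd in commands]
```

This way nothing about capes or other runtime-dependent hardware has to be hard-coded in the board config.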
>> Or parse help output and decide then?
>
> Also a good idea, too.
>
>> Parse U-Boots config and/or DTS ?
>>
>>> Such cases - they are unaware of which board is running, and we can at
>>> the early beginning have more general test cases. Same thing for linux,
>>> but of course we can have much more.
>>
>> see above.
>>
>> Also, as you can call testcases from another testcase, you can write
>> a board-specific testcase, in which you (as board maintainer) should
>> know which generic testcases you can call ...
>
> That is nice ! I wait for tomorrow when testcases will be put into
> tbot-denx. It will help me to understand better.
At least with the old tbot you can do this ... and I am sure Harald's
newer version can do this too!
I had/have variables which hold the name of a testcase ... so you can
write a generic testcase which calls testcases that you can configure
in the board config file ...
For example:
https://github.com/hsdenx/tbot/blob/master/src/tc/demo/u-boot/tc_demo_compile_install_test.py#L134
    if tb.config.tc_demo_uboot_test_update != 'none':
        # call the testcase whose name is in this variable

... so you can write a board-specific testcase which installs SPL/U-Boot
on your specific board ...
so you can set (old tbot!) in your board or lab config file:
tc_demo_uboot_test_update = 'tc_install_uboot_on_p2020rdb.py'
and the generic testcase will call this board-specific function to
install SPL/U-Boot on the p2020rdb board ...
You got the idea ?
I hope I am not giving Harald headaches now :-P
>>>> - create a register dump file
>>>>   write register content into a register dump file
>>>> - do register checks
>>>>   open the register dump file and check if the register content
>>>>   is the same as in the file
>>>> - convert all DUTS testcases
>>>> http://git.denx.de/?p=duts.git
>>>
>>> I do not think this is a great idea. This "duts" is obsolete, and I
>>> think we now have a more generic and better concept with tbot. I think
>>> we should just have a list of test cases and then translate them into
>>> @tbot.testcase, without looking at the past. IMHO duts is quite broken
>>> and we should not care about it; it would just confuse us and could be
>>> a waste of time.
>>
>> But there are a lot of valid tests!
>
> That is the reason I think we should have a list of testcases, and then
> implement them as @tbot.testcase
Yes!
>> It is just an idea ... I converted some of them (not all of them yet)
>> and, based on the results, generated a U-Boot command-line
>> documentation, like we had with the DULG.
>
> ok, I'll wait for ;-)
:-P
Not ready for the new tbot ... patches are welcome!
>>>> goal: create, at the end, a U-Boot command-line documentation
>>>>
>>>> - call pytest from u-boot?
>>>
>>> Do we ?
>>
>> I meant: call the U-Boot test framework, which is in "test/py",
>> from tbot.
>>
>>>> - if new u-boot does not boot, switch bootmode and unbreak it
>>>
>>> This is also very board specific and it does not always work. I prefer
>>> to start with a more generic approach.
>>>
>>> For example, start with testing network in U-Boot. How can I split
>>> between lab setup and board setup ? Let's say the tftp server. I can
>>> set a "setenv serverip" in the board file, but this is broken, because
>>> a board could belong to different labs (I have a mira here and I have
>>> my own lab setup). Is there a way to do this ? Where should I look for
>>> such cases ?
>>
>> Then the serverip should be a lab-specific variable.
>
> Shouldn't it be an attribute of the UBootMachine class that I can
> overwrite in my lab.py ?
Or better, maybe it is detectable through a testcase executed on the
lab PC ?
The tftp serverip is configured somewhere on the lab PC ... so write a
testcase for it which returns the IP ... and you do not need to
configure it at all!
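One way such a detection testcase could work: run `ip route get <board-ip>` on the lab PC and take the `src` address as serverip. The parsing part is a sketch (the route output shown in the docstring is an example, and actually running the command on the lab host is left out):

```python
import re

def serverip_from_route(route_output: str) -> str:
    """Pull the lab host's source address out of `ip route get` output.

    Example line: "192.168.1.50 dev eth0 src 192.168.1.1 uid 1000"
    """
    match = re.search(r"\bsrc\s+(\d{1,3}(?:\.\d{1,3}){3})", route_output)
    if match is None:
        raise ValueError("no 'src' address in route output")
    return match.group(1)
```

Since the address comes from the lab host's own routing table, the same board config works unchanged in every lab.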
>>>> Linux:
>>>>
>>>> - get sources
>>>> - may apply patches to it
>>>> - install linux on the board
>>>> - check if booted version is the expected one
>>>> - create a register dump file
>>>>   write register content into a register dump file
>>>> - do register checks
>>>
>>> See above. I think this is useful during a porting, but it is less
>>> useful for a customer who wants to test functionalities. I would like to
>>
>> I have here another opinion.
>
> Well, of course ;-). We should not always agree, we get more improvement
> when we discuss and have different opinions ! ;-)
Yep!
I like this discussion ... for nearly 4 years almost nobody was
interested in my old tbot.
Ok, it was a big misuse of Python ... but it worked ;-)
I cannot say it often enough ... many thanks to Harald!
>> This is also interesting for a customer.
>>
>> Which customer never changes a DTS or never tries a Linux update
>> on his own?
>>
>> If he has an automated check that all important registers are set up
>> as expected ... that is nice.
>>
>> This testcase could be done very generically...
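The generic part of such a register check really is small: compare a saved dump against freshly read values. A sketch (register names in the test data are just examples; reading the actual registers on the board is board-specific and left out):

```python
def check_registers(expected, actual):
    """Compare a saved register dump against freshly read values.

    Both arguments map register names to values. Returns a list of
    (name, expected, actual) tuples for every mismatch; an empty list
    means the current setup matches the dump.
    """
    return [
        (name, value, actual.get(name))
        for name, value in expected.items()
        if actual.get(name) != value
    ]
```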
>>
>>> have first a catalog of testcases with functionalities, like:
>>>
>>> - is network working ?
>>> - are peripherals working (SPI / I2C /....) ?
>>
>> Yes. My hope is that we get a lot of users, so we will get a lot of
>> testcases ;-)
>
> ok
>
>>
>>> In the ideal case, DT is parsed to get a list of testcases...
>>
>> Yes.
>>
>>>>   open the register dump file and check if the register content
>>>>   is the same as in the file
>>>> - check if a list of strings appears in the dmesg output
>>>>
>>>> - look for example at the LTP project, what they test
>>>>
>>>
>>> +1
>>>
>>> LTP contains a lot of useful testcases, but of course they are meant to
>>> run as scripts directly on the target / host. Anyway, they have
>>> testcases for a lot of things.
>>
>> Yes, and we may be able to use these scripts! Start them and analyse
>> the results.
>>
>
> ok, I let this for later, it is not clear to me how...
I am also just speculating. But executing a script on the board is easy...
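Analysing the results could mean counting LTP's TPASS/TFAIL markers in the captured output after running a script on the board; a sketch of that parsing step (the sample output line in the test is illustrative):

```python
def summarize_ltp(output: str) -> dict:
    """Count LTP result markers (TPASS/TFAIL/TBROK/TCONF) in test output."""
    counts = {"TPASS": 0, "TFAIL": 0, "TBROK": 0, "TCONF": 0}
    for line in output.splitlines():
        for marker in counts:
            if marker in line:
                counts[marker] += 1
    return counts
```

A testcase could then fail whenever `TFAIL` or `TBROK` is non-zero, without tbot having to understand each individual LTP test.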
>>>> - check if ptest-runner is in the rootfs and call it
>>>
>>> ptest-runner means Python. Do we have it on most projects ? On some
>>> yes, on some not...
>>
>> That is why it says "check if ptest-runner exists" ;-)
>>
>>>> ...
>>>>
>>>> yocto:
>>>> - get the sources
>>>> - configure
>>>> - bake
>>>> - check if files you are interested in are created
>>>> - install new images
>>>> - boot them
>>>> - check if the rootfs version is correct
>>>
>>> See above - IMHO it is better to split between functional tests on
>>> target and build, and to start with the functional tests.
>>
>> Of course. Both parts can be done independently
>
> Sure !
bye,
Heiko
--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-52 Fax: +49-8142-66989-80 Email: hs at denx.de