[tbot] Next steps

Stefano Babic sbabic at denx.de
Thu Jan 17 21:58:15 UTC 2019


Hi Harald,

On 17/01/19 20:59, Harald Seiler wrote:
> Hi Stefano,
> 
> On Wed, 2019-01-16 at 12:31 +0100, Stefano Babic wrote:
>> Hi Harald,
>>
>> On 16/01/19 11:01, Harald Seiler wrote:
>>> Hello!
>>>
>>> I want to ask for some input on the priorities of the following
>>> features.  Which of the following features/changes would you need
>>> the most?
>>>
>>> ## Documentation Generation
>>>    ------------------------
>>> Old tbot had this feature and I definitely want it in new tbot as well.
>>> The basic idea is the following:  A generator that takes a test-run's logfile
>>> and a 'template' and uses these to generate a PDF documenting the process
>>> of reproducing the results.  The use case is, for example, automatically
>>> generating documentation for board bring-up.
>>>
>>
>> Just my two cents: this has a high priority. The JSON log file is nice
>> but unreadable for customers. I fully agree with your approach.
>>
>> I would also like to have an automatic "performance" output, as the log
>> already contains timing information.
>>
>> For target output separated by "\n", it would be nice to have the
>> related timestamp. Or at least, let tbot record this if requested.
>>
>> Now I see for example:
>>
>> "time": 3.7857260500022676,
>>    "data": {
>>      "output": "Trying 192.168.178.37...\nConnected to
>> raspbx.fritz.box.\nEscape character is '^]'.\n\nser2net port 2014 device
>> /dev/ttyUSB14 [115200 N81] (Debian   GNU/Linux)\n\n\u0000\nU-Boot SPL
>> 2016.05-00276-g176c732-dirty (Jun 21 2016 - 20:04:29)\nBoot device
>> 1\nTrying to boot from MMC1\nmmc_load_image_raw_sector: mmc     block
>> read error\n ** ext4fs_devread read error - block\nFailed to mount ext2
>> filesystem...\nspl_load_image_ext: ext4fs mount err - 0\n\nU-Boot SPL
>> 2016.05-00276-  g176c732-dirty (Jun 21 2016 - 20:04:29)\nBoot device
>> 1\nTrying to boot from MMC1\nmmc_load_image_raw_sector: mmc block read
>> error\n** Can't read partition table    on 0:0 **\nspl
>>
>> But it would be nice to know how much time elapses for each output
>> sent from the hardware, in order to find bottlenecks, similar to the
>> "grabserial" tool (which is also a small Python script).
>>
>> I would also record something like:
>>
>> [timestamp] U-Boot 2018.07 (Nov 19 2018 - 23:01:09 +0000)
>>
>> [timestamp] CPU:   Freescale i.MX6Q rev1.5 996 MHz (running at 792 MHz)
>> [timestamp] CPU:   Automotive temperature grade (-40C to 125C) at 25C
>> [timestamp] Reset cause: WDOG
>> [timestamp] I2C:   ready
>> [timestamp] DRAM:  1 GiB
>> [timestamp] NAND:  1024 MiB
>> [timestamp] MMC:   FSL_SDHC: 0
>>
>> This helps to measure boottime instead of dropping tbot and using other
>> tools.
> 
> You can already get this info, at least kind of.  If you run tbot with -vvv,
> the log will be filled with __debug__ events detailing the timestamps of
> each stream fragment.
> 

Thanks, I was not aware of this!
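
I will have a look at the __debug__ events. What I had in mind is
roughly the following post-processing of the JSON log (just a sketch,
assuming the log is a list of event objects shaped like the excerpt
above, i.e. a relative "time" field plus the console text in
data["output"]; the loading code is my assumption, not something tbot
provides):

#!/usr/bin/env python3
# Rough sketch, not tbot code: turn the JSON log into a grabserial-like
# view.  Assumes the log is a list of event objects with a relative
# "time" field and the console text in data["output"], as in the
# excerpt above; adjust the loading part if the real format differs.
import json
import sys

def main(logfile: str) -> None:
    with open(logfile) as f:
        events = json.load(f)

    last = None
    for ev in events:
        output = (ev.get("data") or {}).get("output")
        if output is None:
            continue
        t = ev["time"]
        delta = 0.0 if last is None else t - last
        last = t
        for line in output.splitlines():
            # event timestamp plus delta to the previous output event,
            # to see where the boot spends its time
            print(f"[{t:10.4f}] (+{delta:7.4f}) {line}")

if __name__ == "__main__":
    main(sys.argv[1])

Something like this would already give a grabserial-style view without
leaving tbot.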


>>> ## Refactored Configuration
>>>    ------------------------
>>> Currently tbot takes two config files, one with `-l` and one with `-b`.
>>> The new system would only have one parameter `-c` which can be specified
>>> multiple times.  Each config file can then specify any of:
>>>
>>> 	* LAB
>>> 	* BOARD
>>> 	* UBOOT
>>> 	* LINUX
>>>
>>> tbot will read the files in order and, for each of these names, take
>>> the last one that was defined.
>>>    This would allow a more modularized config approach which (hopefully)
>>> makes sharing configs easier.
>>>
>>
>> IMHO it is very important to have orthogonal configuration. If this is a
>> way to achieve that, nice. A board configuration file (even if small)
>> should not depend on the lab setup, and so on. My goal remains to have an
>> unmodified board file and to test the target again in another lab, just
>> passing the new lab configuration to tbot.
>>
>>
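
Just to be sure I understand the "last one wins" semantics, this is how
I picture the merge, purely as an illustration (the function and file
names below are made up, this is not actual tbot code):

import importlib.util
import typing

NAMES = ["LAB", "BOARD", "UBOOT", "LINUX"]

def load_configs(paths: typing.List[str]) -> dict:
    """Illustration only: read each -c file in order; for every known
    name, the last file that defines it wins."""
    result: dict = {}
    for i, path in enumerate(paths):
        spec = importlib.util.spec_from_file_location(f"cfg{i}", path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        for name in NAMES:
            if hasattr(mod, name):
                result[name] = getattr(mod, name)
    return result

# e.g. `tbot -c lab.py -c board.py` would then behave like
# load_configs(["lab.py", "board.py"]): the board file can override
# UBOOT or LINUX while leaving LAB from the lab file untouched.

If that is the idea, then an unmodified board file plus the lab file of
another lab should indeed be enough.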
>>> ## JTAG Debugger Integration
>>>    -------------------------
>>> New machine flavors, e.g. for BDI debuggers.
>>
>> I would put this at a lower priority for now. It is maybe important to
>> show one way to do this, and contributions can then come from users.
>>
>>> ## More Documentation
>>>    ------------------
>>> Right now, the docs are pretty lacking, especially for onboarding and
>>> getting started with tbot.  This has to change sooner rather than later
>>> but might not be the biggest prio right now ... You decide!
>>>    Another point to include here is "marketing":  I got feedback that
>>> while the docs provide a reference, there is way too little explanation
>>> of what tbot is actually useful for.  "Why should I even use tbot?"  The
>>> reason I haven't written much about this is, to be quite honest:  I am
>>> not yet sure.  Right now people are experimenting with what tbot is good
>>> at, where it is still lacking, or where it might not be the right choice
>>> at all.  I need some input here ... Tell me what you think!
>>>
>>
>> What is really missing is a database of test cases and / or boards.
> 
> Missing that as well; the problem is that there are as good as no
> testcases written yet ...

As long as there is no repo, nobody will try to push anything. And
everybody assumes that tbot's users / developers already have a set of
testcases.

> 
>> Users could look into the test cases instead of the documentation. There
>> are no test cases at all; the only exceptions are interactive_uboot and
>> interactive_linux.
>>
>> What I strongly recommend is to set up a repo (outside tbot core) where
>> test cases can be stored and sorted, like
>>
>> tc/time
>> tc/network/
>> tc/network/bridging
>> tc/network/routing
>>
>> and so on. Even the simplest use cases are missing, like "test and
>> report U-Boot version" or "test and report kernel version".
>>
>>
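
To make the "test and report U-Boot version" example concrete, such a
testcase could look roughly like this (a sketch in the style of the
interactive_* testcases; the acquire_*/exec0 names are what I remember
from the docs and may need adjusting):

import tbot

@tbot.testcase
def uboot_version() -> str:
    """Report the version of U-Boot running on the board."""
    with tbot.acquire_lab() as lh:
        with tbot.acquire_board(lh) as b:
            with tbot.acquire_uboot(b) as ub:
                # `version` prints e.g. "U-Boot 2018.07 (Nov 19 2018 ...)"
                version = ub.exec0("version").strip().splitlines()[0]
                tbot.log.message(f"U-Boot version: {version}")
                return version

A tc/ tree full of small testcases like this would already be a good
starting point for such a repo.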
>>> ## Refactored Build-System
>>>    -----------------------
>>> As I previously mentioned, the current build testcases are not really
>>> in a state that is fun to use.  Before making them more feature rich
>>> there should be an overhaul of the design.
>>>
>>> ## More builtin Testcases
>>>    ----------------------
>>> Right now, there are about 5 testcases included with tbot (not counting
>>> selftests).  While I don't want to stick every possible testcase into
>>> tbot core, there should definitely be a few more.  For example a testcase
>>> to build linux or to run U-Boot's test/py.
>>
>> I tend to keep the tbot "core" thin and to push testcases into a separate
>> (not tbot-denx) repo. This presupposes that testcases are general enough,
>> like the interactive_* ones are; see above.
>>
> 
> I completely agree.
> 
>>> ## Examples
>>>    --------
>>> While the docs contain some code scattered about, there is no official
>>> working demo yet.  I think such a repo would help beginners a lot with
>>> understanding the basics of tbot.
>>>
>>> ## DENX Internal: CI
>>>    -----------------
>>> New tbot should run in a CI for all our hardware at some point.  This needs
>>> to be set up so you can add your CI testcases. (Discussion about this should
>>> probably be moved to the internal ML, especially security considerations)
>>>
>>>
>>> If there is anything I missed, please mention that as well!
>>>
>>

Regards,
Stefano

-- 
=====================================================================
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-53 Fax: +49-8142-66989-80 Email: sbabic at denx.de
=====================================================================

