[tbot] [Discussion] Calling tbot from within the source directory

Stefano Babic sbabic at denx.de
Mon Dec 3 17:58:42 UTC 2018

Hi Harald,

On 03/12/18 17:48, Harald Seiler wrote:
> Hi Stefano,
> [disclaimer: long!]
> On Mon, 2018-12-03 at 12:39 +0100, Stefano Babic wrote:
>> Hi Harald,
>> thanks for your clarifications:
>> On 03/12/18 10:27, Harald Seiler wrote:
>>> Hello Claudius,
> [...]
>>>> My problem here I think is that lab-specific configuration, user specific
>>>> configuration and project specific configuration are sort of orthogonal and
>>>> all of them come together at the command line of tbot.
>>> Don't get me wrong, I can also see this issue.  But as you mentioned before,
>>> bloating the commandline isn't really a nice solution ...
>> I agree that this should not be done extending the command line - and
>> board, lab, project *are* configuration file.
>> But as Claudius says they are orthogonal - that means they should not
>> have any dependencies. I am unsure which is the best way to derive from
>> base class.
> Hmm, I am not so sure about that.  There is quite a complicated net of
> dependencies between the different "configs", which is why this question
> is so difficult.  If they were completely orthogonal, we wouldn't be
> talking about this ...

We are talking about how we can make them fully orthogonal, how we can
share data between users, and which kinds of data (boards, labs,
testcases) can be shared.
> Examples:
> - Power is both lab AND board dependent.  You can't put it just into one
>   of the two, because it is different for each combination.

I disagree here. Power is IMHO entirely a lab property. From the board's
perspective, it just needs to be turned on or off. The lab needs to know
the name of the board and provide a way to do it.

But the same board can be put into a lab where a relay is provided to
turn it on, or a Zigbee device, or...

This is very lab-specific and not board-specific.
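To make the argument concrete, here is a minimal sketch of what "power lives entirely in the lab" could look like. All class and method names (`LabHost`, `RelayLab`, `ZigbeeLab`, the `log` list standing in for real shell commands) are hypothetical illustrations, not current tbot API:

```python
import abc


class LabHost(abc.ABC):
    """Hypothetical lab base class: every lab must know how to power boards."""

    @abc.abstractmethod
    def poweron(self, board_name: str) -> None:
        ...


class RelayLab(LabHost):
    """A lab that switches boards through a relay controller."""

    def __init__(self):
        self.log = []

    def poweron(self, board_name: str) -> None:
        # A real lab would run something like `remote_power <board> on`;
        # here we only record the command for illustration.
        self.log.append(("remote_power", board_name, "on"))


class ZigbeeLab(LabHost):
    """The same board in another lab might be powered via a Zigbee plug."""

    def __init__(self):
        self.log = []

    def poweron(self, board_name: str) -> None:
        self.log.append(("zigbee-plug", board_name, "on"))


class Board:
    """The board stays passive: it only knows its name and asks the lab."""

    def __init__(self, name: str, lab: LabHost):
        self.name = name
        self.lab = lab

    def poweron(self) -> None:
        self.lab.poweron(self.name)
```

The board config is identical in both labs; only the lab class changes, which is exactly the portability being argued for here.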

> - Toolchains are definitely lab-specific, but a board requires the
>   right toolchain to be available.


>> In fact, let's assume I have a board. I should be able to test myboard
>> without change anything in the board configuration file if I test in
>> another environment (lab). So I would like to have:
>> 	tbot -b <myboard> -l <mylab> testcase
>> or
>> 	tbot -b <myboard> -l <company lab> testcase
> Hmm, only as long as there is compatibility, ie the board actually exists
> in this lab.
> I'd say this is possible right now, as long as the board doesn't require
> any special fancy features from the labhost.

There are some properties that really are specific to the board, for
example "boot_commands", "prompt", and so on.

It would be nice to have a board file that allows testing the board in a
different environment, that is, in a different lab.

>  But as soon as it does,
> they need to know of each other, ie where the kernel is found or the
> serverip ... We can't abstract that away ...

Yes, I have already asked for "serverip", and IMHO this belongs to the
lab, too. A board can require it, of course, but that just means the
board asks the lab for an attribute.

>> This allows to create a database of boards that can be reused in any
>> lab.
> Wolfgang seems to like the idea of a database, too:

If we want to push tbot as a general tool for many users, it makes sense
to have a "database" of boards that also serves as a set of examples. It
would be nice if I could just send my board's configuration to a
customer (unchanged, or maybe from a repo) and the customer could test
it with his own lab configuration without changes.

>> What I would like to see is that lab configurations, board
>> configurations and test cases can be made publicly available.
> I agree about testcases, because they are supposed to be shareable.

Right, absolutely agree.

>  Sharing
> testcases also works right now, because I specifically wrote tbot in a way
> to allow for generic testcases.


>  There might be some board-specific testcases, but
> those don't make sense sharing anyway.


>   For sharing testcases, you'd just pack them into a python module and ship
> that to anyone who wants to use them/make it available online.
> With lab-configs I'd argue, sharing them does not make that much sense. 

Agreed. A lab-config is always custom and cannot be replicated. I agree
that we should not try to share these - it is enough to provide an
example and document which attributes / methods a lab should export.

> A
> lab config doesn't contain much "valuable" information, for the most part
> it is "where do I store and find things on this specific machine?", which
> will never be portable to any other machine.
>   Now, to not confuse anyone, sharing here is in the sense of sharing one
> config between multiple projects/setups.  Multiple developers working in
> our VLAB can definitely use the same lab-config.

Agreed - we are discussing sharing between multiple sites, that is,
multiple labs (at least my side and the customer's...).

> If a downstream project is developed in our VLAB, it will probably require
> changes to the lab-config.  To allow for this, I'd suggest the downstream
> lab inherits the pollux config and overwrites things as needed:
> 	[tbot_denx.pollux] <- [my_project.lab]
> In code:
> 	from tbot_denx.labs import pollux
> 	class MyProjectLab(pollux.PolluxLab):
> 	    # A project-specific config, our projects testcases might depend
> 	    # on this, but this would mean they can't be shared with other
> 	    # projects anymore ...
> 	    def tftp_folder(self, board: str) -> "linux.Path[MyProjectLab]":
> 	        return linux.Path(self, "/tftpboot") / board / "hws"
> 	    @property
> 	    def workdir(self) -> "linux.Path[MyProjectLab]":
> 	        return linux.Workdir.athome(self, "my-project")

Ok - we have tftp_folder and workdir; both are lab-specific. It is nice
to have them in the lab config.

> Now, let's say I don't like the files residing in ~/my-project:  I write
> another, developer-specific lab-config:
> 	# hws-lab.py
> 	import lab
> 	class HwsLab(lab.MyProjectLab):
> 	    @property
> 	    def workdir(self) -> "linux.Path[HwsLab]":
> 	        return linux.Workdir.athome(self, "Projects") / "my-project"
> Everyone content with the default config for the project just uses:
> 	tbot -l lab.py
> And I can use my custom one without any changes to git-tracked files:
> 	tbot -l hws-lab.py

Ok, fine.

> Now, this will "break down" in the following scenario:  You have done your work
> and want to give a customer your tbot config so he can use it as well.  The
> customer doesn't have access to pollux, they have their own lab.

I do not think this is a problem - if nobody complains, labs are always
specific. The customer just has to know that he must provide
tftp_folder() and workdir().
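A minimal standalone customer lab config could then look like the sketch below. This is illustrative only: `pathlib.PurePosixPath` stands in for tbot's `linux.Path`, and the class/attribute names (`CustomerLab`, the paths) are invented for the example:

```python
import pathlib


class CustomerLab:
    """Hypothetical standalone lab config, independent of pollux.

    The customer only has to provide the two methods the project's
    testcases rely on: tftp_folder() and workdir.
    """

    name = "customer-lab"

    def tftp_folder(self, board: str) -> pathlib.PurePosixPath:
        # Wherever this particular lab serves TFTP files from.
        return pathlib.PurePosixPath("/srv/tftp") / board

    @property
    def workdir(self) -> pathlib.PurePosixPath:
        # Per-lab scratch space for builds and downloads.
        return pathlib.PurePosixPath("/home/tbot") / "work"
```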

>  Now I'd say
> this means the customer needs to write their own lab-config, not dependent on
> pollux but their own stuff.


>  To make tbot notice if they forget something, we
> can make use of pythons multiple-inheritance:

Do you mean tbot raises an error if a method is not implemented?

> We define an abstract base class for our project lab that requires all custom
> lab-host definitions we have as well:
> 	#lab_base.py
> 	class BaseMyProjectLab(linux.LabHost):
> 	    @abc.abstractmethod
> 	    def tftp_folder(self, board: str) -> linux.Path:
> 	        ...
> Now we make this another base in our "pollux-project" config:
> 	class MyProjectLab(pollux.PolluxLab, lab_base.BaseMyProjectLab):
> 	    ...
> This would enforce a correct lab-config ... As a diagram:
> 	[project lab base class]
> 		A
> 		|
> 	[pollux project lab] <- [possibly a developer specific lab config]
> 		|
> 		V
> 	[pollux generic lab]

Ah, ok.

> The paradigm here is to keep base classes generic and specialize them in each
> level ...

Fine - but the base class should at least define which "basic" methods /
attributes it exports, like tftp_folder() (and workdir, serverip, ...).

> Although ... This is quite a complex case.  We should wait and see how often
> these situations actually arise before deciding on a best practice.

Mmhhh... a different configuration / lab between developer and customer
*is* the standard case. I need guidelines for how the customer should
adapt the lab config to his own needs. Of course, this is a different
use case from the one you are describing. You are focussing on "many
developers working in the same lab", while my focus is more on delivery
and acceptance between us and the customer (if the customer can
reproduce the same tests in "quite" the same environment, he can accept
what I send him more quickly).

> Now, about board-configs:
> First of all, my bad:  The code you quoted below, which I provided as a quick
> solution some time ago might not be the best in the long run ... As you demonstrated.

Glad to have proof of this ;-)

> I think, the board-config right now is two things at the same time:
> A config for the physical board (board.Board) and a config for the abstract idea
> of this hardware/software configuration (board.UBootMachine and board.LinuxMachine).

Ok, but we already have different classes, haven't we? The "physical"
board is board.Board, and then we have board.UBootMachine and
board.LinuxMachine.

> We should separate this, because that way we get a cleaner config split.  The abstract
> board, let's call it "virtual", should be completely orthogonal to the rest.  It is also
> the part that would belong in a database if we create one.  There might still be some
> parts in it which depend on the lab, for example a serverip.

Why can't "serverip" be a service of the lab? Something that I can call
from "general" board code, like:

	lab = tbot.selectable.LabHost()

	serverip = lab.serverip()

	bootcmd = [
		"setenv", "serverip", serverip,
		...
	]

(Disclaimer: I have no idea whether this makes sense - I just want to
give you the idea.)

>  I'd deal with this by just
> trying to access it and failing if it isn't provided:
> 	class MyBoardLinux(board.LinuxWithUBootMachine):
> 	    def do_boot(self, ub):
> 	        serverip = ub.board.lh.serverip  # Raises an AttributeError if no serverip is set
> 	        ub.exec0("ping", serverip)


> The physical board config however is completely lab dependent.  We can't and in my opinion
> shouldn't try to force it to be generic.  I would propose the following approach:
> To use a board with a lab in a certain project, create a config like this:
> 	from boards_db import bbb
> 	import denx
                ^--- what is this? Is it still lab-dependent?

> 	class MyBoard(bbb.BeagleBoneBlack, denx.DenxBoard):
> 		pass
> 	BOARD = MyBoard
> 	UBOOT = bbb.BeagleBoneUBoot
> 	LINUX = bbb.BeagleBoneLinux
> This would belong into tbot-denx.  boards_db is a new repo where the shareable "virtual" board
> configs would reside.  To summarize:
> 	boards-db:
> 		bbb.py			# Virtual bbb config
> 		mira.py			# Virtual mira config
> 	tbot-denx:
> 		labs/pollux.py		# Standalone config for pollux
> 		board/denx.py		# Base class for boards in our VLAB
> 		board/bbb.py		-> boards-db/bbb.py
> 					# The bbb is available in our VLAB and this config "marks" this.
> 					# It also contains lab-specific changes
> 		boards/mira.py		-> boards-db/mira.py
> 					# Same as bbb
> 	augsburg:
> 		labs/augsburg.py	# Standalone config for your lab
> 		board/augsburg_board.py	# Base class for your boards
> 		board/mira.py		# You have a mira at your lab
> 	mira-project:
> 		labs/base_lab.py	# Abstract Base Class defining settings for this project
> 		labs/pollux.py		-> tbot-denx/labs/pollux.py & labs/base_lab.py
> 					# + definition of pollux-specific mira configs
> 		labs/augsburg.py	-> augsburg/labs/augsburg.py & labs/base_lab.py
> 					# + definition of augsburg-specific mira configs
> 		board/mira.py		-> boards-db/mira.py
> 					# + project-specific changes.
> 					# Also contains the if guards you quoted below to
> 					# switch between tbot-denx/boards/mira.py
> 					# and augsburg/boards/mira.py ...
> Whew this is quite a lot ... I hope I got my idea across, if not, please tell me!

I think I get the idea, but I cannot judge how much would then go into
the "database" board file. The goal is to have as much as possible in
this file, and just some "tuning" in the lab-board files.

> What do you think? Does this cover all your usecases so far? Is there something I missed?
> Also, as mentioned above, this is supporting quite a complex use-case.  For most times,
> I feel like this is overkill ... Right now, board-configs don't contain much useful info
> anyway ... 

A board config can contain the set of U-Boot environment variables
necessary to boot the board, in case the environment is not already
provided by U-Boot. And because we cannot be sure about the state of the
board, setting these variables could be a main feature. This is
completely board-dependent and has nothing to do with the lab.
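Such a board-only environment could be as simple as a dict in the "virtual" board config, replayed as `setenv` commands on every boot. The structure below is a hypothetical sketch (variable names and values invented), not a tbot interface:

```python
class MyBoardConfig:
    """Purely board-specific data: the U-Boot environment needed to
    boot, independent of any lab (hypothetical structure)."""

    env = {
        "bootdelay": "3",
        "bootcmd": "run netboot",
    }


def env_commands(config) -> list:
    """Turn the env dict into the `setenv` commands to replay on the
    board, since we cannot trust the environment already stored there."""
    return [["setenv", key, value] for key, value in config.env.items()]
```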

> This might change if we decide to add more build-info in there but even that
> isn't much ...
> I'd say, at the moment, having two standalone configs in tbot-denx/boards/mira.py and
> augsburg/boards/mira.py is acceptable because there isn't much to be shared yet.

Right, I see it the same way. But that is also a reason to avoid sharing
(why share it if it does not work without changes?).

>  But as this
> changes, we can introduce the virtual board config database as a third repo.
>> This is currently not the case because the board file should be
>> aware of the lab, as it was discussed in previous thread. So I have
>> something in my board file like:
>> if tbot.selectable.LabHost.name == "pollux":
>>     # Use pollux specific config
>>     import denx
>>     BaseBoard = denx.DenxBoard
>>     BaseUBootBuild = denx.DenxUBootBuildInfo
>> elif tbot.selectable.LabHost.name == "local" or
>> tbot.selectable.LabHost.name == "papero":
>>     import augsburg
>>     # Use your personal lab config
>>     BaseBoard = augsburg.MylabBoard
>>     BaseUBootBuild = augsburg.MylabUBootBuildInfo
>> else:
>>     raise NotImplementedError("Board not available on this labhost!")
>> and this does not scale well because everyone has its own lab that must
>> be imported. I do not know if this is the only case, but the "poweron"
>> and "poweroff" methods are bound to the board - should they not be bound
>> to the lab ?
> Hmm, good point ... Well, I'd argue "poweron" is sort of difficult, because
> it depends on both the board and the lab.

This is something I disagree with, or cannot understand. Why does it
depend on the board? The board does nothing; it is "passive" and is
turned on or off via "lab" commands.

>  Making it a lab-only thing is bad,
> because you now need to edit your lab-config when adding a new board.

Yes, but let's say... you have to edit more files in any case. Adding a
new board means physically connecting it to a slot in a rack, connecting
its power to a "physical" switch, and so on. Some other configuration
files must be updated in any case anyway. Why is it bad to edit the
lab config? You have a new board in your environment - I think it is
worse to change the board file just because it was put into a different lab.

>  Making
> it only board-dependent is also not good, because in different labs, toggling
> power looks very different ... See my above approach for a suggestion on how
> to solve this ...

I already have several different ways: GPIOs, relays, "fhem", ...
anyway, this setup is related to my lab and cannot be exported.
So I have one board with:

    def poweron(self) -> None:
        self.lh.exec0("remote_power", self.name, "on")

and another one with:

    def poweron(self) -> None:
        self.lh.exec0("fhem", "paperina2:7072", "set NETIO3 on")

If I put one of these boards into pollux, I have to change the board
configuration file - but IMHO this is just the lab's setup.

> My issue with just making it a lab-function is the following:  Not everyone has
> a setup as tidy as ours, where every board is powered on and off in the same way.

I already have such a mixed setup - boards are turned on and off in
different ways. IMHO that is a reason to move this into the lab, not the
board.

> By making poweron a lab thing, we would make this sort of setup mandatory.

It is already mandatory - a board must have poweron() and poweroff();
the setup remains mandatory, just inside the lab. We can raise an
exception if the lab reports that the board is not found.
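The "raise an exception if the board is unknown" idea could be sketched like this: the lab keeps a registry mapping board names to their power mechanism and fails loudly for boards it does not host. Everything here (the `Lab` class, the registry dict, the command lists) is a hypothetical illustration:

```python
class Lab:
    """Hypothetical lab keeping a registry of the boards it hosts and
    how each one is powered (different mechanisms per board are fine)."""

    power_commands = {
        "bbb": ["remote_power", "bbb", "on"],
        "mira": ["fhem", "paperina2:7072", "set", "NETIO3", "on"],
    }

    def poweron(self, board_name: str) -> list:
        # Returns the command to run; a real lab would execute it.
        try:
            return self.power_commands[board_name]
        except KeyError:
            raise KeyError(
                f"board {board_name!r} is not known to this lab"
            ) from None


# A board the lab does not know about fails immediately and clearly:
try:
    Lab().poweron("some-unknown-board")
    unknown_rejected = False
except KeyError:
    unknown_rejected = True
```

With this split, moving a board to another lab means adding one registry entry there; the board file never changes.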

>> I could write in the board something like:
>> class MylabBoard(board.Board):
>>     def poweron(self) -> None:
>> 	with lab or tbot.acquire_lab() as lh:
>>         	lh.poweron(self.name)
>> but it is not portable if a lab does not export the method. Should it do
>> it ?
>>>> In this example:
>>>> my-project.py:
>>>>   - contains the paths to the project sources
>>>> /path/to/my/user/tbot.py:
>>>>   - contains customization to the base lab classes, so that login etc.
>>>>     works (ssh-key key path, choosen build server etc.)
>>>> /path/to/tbot-denx/labs/pollux.py:
>>>>   - contains base defintions of the available lab hardware/build server
>>>> I think thats just a lot of composition necessary. So that either has to
>>>> be so easy, like specifying it via the command line or handled
>>>> seperatly. That was one thing we learned with kas, at some point
>>>> creating a file for every permutation is just to much, so we implemented
>>>> a mechanism to combine configuration files with colons in the command
>>>> line.
>>>> Maybe I am just missing a good example of how things should work if you
>>>> have multiple users build multiple projects in different labs and
>>>> trying to share as much as possible with each other.
>>> I don't have one either at this point ... But I think we need to find a pythonic
>>> solution for this and not force a custom one on the user as I did in my first
>>> version.  At the moment, I have the following ideas floating around:
>>> * As I did in the dcu repo, use a configparser.  I think this works well enough
>>>   for this specific case.
>>> * Add a config.py that is either imported or sourced using `eval`.  I have seen
>>>   this pattern in a lot of other projects, where it seems to work really well.
>>>   The advantage is, that you can now modify literally everything from your config
>>>   because of pythons dynamic nature.  On the flipside, confining the room what a
>>>   config can change might also be good, to keep the code simple to reason about.
>>> * Each user creates their own lab.py which inherits the projects lab.py.  This
>>>   would also allow all possible customizations and would have one nice side-effect:
>>>     If the projects lab-config has sane defaults, most people could get started right
>>>   away without needing any custom config file at all ... The downside I see is that
>>>   you need to document elsewhere, what should be configured per user, which can be
>>>   seen right away with the other two solutions.
>>> What I do not like is adding some fixed solution to tbot.  I have done this in the
>>> past and it was for the worse.  You will always stumble upon an edge-case where the
>>> system is not expressive enough and with a fixed solution, you have no option to
>>> change it (easily).  By going this route, you force downstream users to implement
>>> hacks for solving their problems ...
>>> I think you know what I am talking about (we have this exact same issue in isar all
>>> the time!), but for others, I want to make an example:
>>>    Just a few days ago, I was writing a testcase to bisect U-Boot.  The current
>>> design for building U-Boot in tbot is one that is strictly configured (and does not
>>> really use composition).  The issue arose that the `uboot.build` testcase internally
>>> checks out the U-Boot repository.  During a git-bisect, this will make troubles because
>>> we do not want to checkout the current master but build on the revision the bisect
>>> put us on.  How to solve this?  Well, you can't.  You have to change the `uboot.build`
>>> testcase.  Luckily you can do this without actually touching tbot code, but the solution
>>> is still less than ideal:
>>> 	class BuildInfo(getattr(tbot.selectable.UBootMachine, "build")):
>>> 	    def checkout(self, clean):
>>> 	        return super().checkout(False)
>>> 	builddir = uboot.build(
>>> 	    bh,
>>> 	    BuildInfo,
>>> 	    clean=True,
>>> 	)
>>> This code overwrites the checkout method in the U-Boot build-info (which is a composable
>>> part of this config!) and tells tbot to always do a dirty checkout, even if we supply
>>> clean=True to `uboot.build`.  This is arguably not good programming and definitely not
>>> pythonic in any way.  If the U-Boot build was more composable instead of this config-approach,
>>> this issue would never arise.
>>> The reason I am so adamant about this is because I believe it is really *really* hard to
>>> think about everything when using the config approach so nobody will ever have the issue
>>> that his needs are not supported.  If you implement a compositing approach, however, you
>>> nicely delegate this responsibility to the downstream user and in doing so solve a lot of
>>> headaches implicitly.
>>> I think I am getting off topic ... And this mail is getting way too long anyway.
>>> Hope I was able to clear things up a little ... And that I haven't hurt anyones
>>> feelings by having such a strong opinion about this ...
>> Regards,
>> Stefano
> One more thing:  I think we are seeing a few different use-cases for tbot here.

You're right.

> On the one hand the way I use tbot right now:  To support me while hacking on some
> customer issue.  In this "mode", tbot just supports me by automating eg compile+flash+run_test.
> The config and testcases I create during working like this are all "throw-away" code, I don't
> intend it to be pretty or used later on for anything else.  A lot of it is copy pasted between
> different projects for the quickest way to solving whatever the issue is.  I really hack
> into tbot to get solutions quick and dirty.  I try, however, to find acceptable solutions
> that don't require internal knowledge of tbot and if I can't, I will come up with upstream
> solutions ...


> There is also the "nightly-tests/ci" mode, which would be writing tests that are intended to be
> run from ci.  Code written here should be robust and adhere to coding styles but doesn't need to
> be as generic as the following:


> Finally there is what Claudius and Stefano seem to intend, a use-case where tbot is more
> than just a helper and code written is intended to be "distributable" and extendable.

This is also a goal for a FOSS project. If it is not, its usage remains
limited.
>  Personally,
> I don't quite see this being applicable yet, but I see that it might come in the future and we should
> think about it when making design decisions.  Wolfgangs public board database also plays in this
> direction.
> I think the first thing we should get "public" is a bunch of testcases.


>  Right now the only real
> builtin testcase in tbot is building U-Boot (and git-bisect if you want to count that).  There is
> definitely room for improvements here, like testing a running U-Boot and the whole GNU+Linux stack ...
> I'd prioritise testcases for A) ci and B) supporting a developer during his daily work, because
> those are what I see tbot being used for the most right now.  And to be perfectly honest,
> it is what I'd focus tbot on in general as it is the original niche Heiko tried to fit in.
> Anyway, I got carried away again ... have a nice evening!


DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-53 Fax: +49-8142-66989-80 Email: sbabic at denx.de
