[tbot] [Discussion] Calling tbot from within the source directory
Harald Seiler
hws at denx.de
Mon Dec 3 16:48:06 UTC 2018
Hi Stefano,
[disclaimer: long!]
On Mon, 2018-12-03 at 12:39 +0100, Stefano Babic wrote:
> Hi Harald,
>
> thanks for your clarifications:
>
> On 03/12/18 10:27, Harald Seiler wrote:
> > Hello Claudius,
[...]
> > > My problem here I think is that lab-specific configuration, user specific
> > > configuration and project specific configuration are sort of orthogonal and
> > > all of them come together at the command line of tbot.
> >
> > Don't get me wrong, I can also see this issue. But as you mentioned before,
> > bloating the commandline isn't really a nice solution ...
>
> I agree that this should not be done by extending the command line - and
> board, lab, project *are* configuration files.
>
> But as Claudius says they are orthogonal - that means they should not
> have any dependencies. I am unsure which is the best way to derive from
> a base class.
Hmm, I am not so sure about that. There is quite a complicated net of
dependencies between the different "configs", which is why this question
is so difficult. If they were completely orthogonal, we wouldn't be
talking about this ...
Examples:
- Power is both lab AND board dependent. You can't put it just into one
of the two, because it is different for each combination.
- Toolchains are definitely lab-specific, but a board requires the
right toolchain to be available.
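Just to make the power example concrete, here is a purely illustrative sketch (none of
these classes or commands are actual tbot code): the same board might be switched through
a central remote-power tool in one lab and through a USB relay in another, so the power
handling can live in neither the board config nor the lab config alone.

    # Purely illustrative, not actual tbot code: the same board needs a
    # different power-on procedure depending on which lab it sits in.
    import subprocess

    class RemotePowerLab:
        """Hypothetical lab where a central `remote_power` tool switches boards."""
        def poweron(self, board_name: str) -> None:
            subprocess.run(["remote_power", board_name, "on"], check=True)

    class UsbRelayLab:
        """Hypothetical home lab where the same board hangs off a USB relay."""
        def poweron(self, board_name: str) -> None:
            subprocess.run(["usbrelay", "RELAY1=1"], check=True)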
> In fact, let's assume I have a board. I should be able to test myboard
> without change anything in the board configuration file if I test in
> another environment (lab). So I would like to have:
>
> tbot -b <myboard> -l <mylab> testcase
>
> or
>
> tbot -b <myboard> -l <company lab> testcase
>
Hmm, only as long as there is compatibility, i.e. the board actually exists
in this lab.
I'd say this is possible right now, as long as the board doesn't require
any special fancy features from the labhost. But as soon as it does,
they need to know about each other, i.e. where the kernel is found or what
the serverip is ... We can't abstract that away ...
> This allows to create a database of boards that can be reused in any
> lab.
Wolfgang seems to like the idea of a database, too:
> What I would like to see is that lab configurations, board
> configurations and test cases can be made publicly available.
I agree about testcases, because they are supposed to be shareable. Sharing
testcases also works right now, because I specifically wrote tbot in a way
that allows for generic testcases. There might be some board-specific testcases, but
those don't make sense to share anyway.
To share testcases, you'd just pack them into a Python module and ship
that to anyone who wants to use them, or make it available online.
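Roughly, such a shareable module could look like this (the testcase itself is just a
sketch; the `tbot.testcase` decorator and the `acquire_*` helpers follow tbot's usual
pattern, but treat the exact calls as an assumption):

    # my_testcases.py - sketch of a generic, shareable testcase module.
    import tbot

    @tbot.testcase
    def uboot_version_check() -> None:
        """Boot into U-Boot on whatever board is selected and run `version`."""
        with tbot.acquire_lab() as lh:
            with tbot.acquire_board(lh) as b:
                with tbot.acquire_uboot(b) as ub:
                    ub.exec0("version")

Anyone who gets hold of the module can then run the testcase against their own lab/board
selection, which is exactly the kind of sharing that already works today.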
With lab-configs, I'd argue sharing them does not make that much sense. A
lab config doesn't contain much "valuable" information; for the most part
it is "where do I store and find things on this specific machine?", which
will never be portable to any other machine.
Now, to not confuse anyone, sharing here is in the sense of sharing one
config between multiple projects/setups. Multiple developers working in
our VLAB can definitely use the same lab-config.
If a downstream project is developed in our VLAB, it will probably require
changes to the lab-config. To allow for this, I'd suggest the downstream
lab inherits the pollux config and overwrites things as needed:
[tbot_denx.pollux] <- [my_project.lab]
In code:
from tbot.machine import linux
from tbot_denx.labs import pollux

class MyProjectLab(pollux.PolluxLab):
    # A project-specific config; our project's testcases might depend
    # on this, but this would mean they can't be shared with other
    # projects anymore ...
    def tftp_folder(self, board: str) -> "linux.Path[MyProjectLab]":
        return linux.Path(self, "/tftpboot") / board / "hws"

    @property
    def workdir(self) -> "linux.Path[MyProjectLab]":
        return linux.Workdir.athome(self, "my-project")
Now, let's say I don't like the files residing in ~/my-project: I write
another, developer-specific lab-config:
# hws-lab.py
from tbot.machine import linux
import lab

class HwsLab(lab.MyProjectLab):
    @property
    def workdir(self) -> "linux.Path[HwsLab]":
        return linux.Workdir.athome(self, "Projects") / "my-project"
Everyone content with the default config for the project just uses:
tbot -l lab.py
And I can use my custom one without any changes to git-tracked files:
tbot -l hws-lab.py
Now, this will "break down" in the following scenario: You have done your work
and want to give a customer your tbot config so he can use it as well. The
customer doesn't have access to pollux; they have their own lab. Now I'd say
this means the customer needs to write their own lab-config, depending not on
pollux but on their own stuff. To make tbot notice if they forget something, we
can make use of Python's multiple inheritance:
We define an abstract base class for our project lab that requires all custom
lab-host definitions we have as well:
# lab_base.py
import abc
from tbot.machine import linux

class BaseMyProjectLab(linux.LabHost):
    @abc.abstractmethod
    def tftp_folder(self, board: str) -> linux.Path:
        ...
Now we make this another base in our "pollux-project" config:
class MyProjectLab(pollux.PolluxLab, lab_base.BaseMyProjectLab):
    ...
This would enforce a correct lab-config ... As a diagram:
    [project lab base class]
               ^
               |
    [pollux project lab]  <-  [possibly a developer-specific lab config]
               |
               v
    [pollux generic lab]
The paradigm here is to keep base classes generic and specialize them in each
level ...
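For illustration, here is a tiny self-contained sketch of what the enforcement buys us
(using plain abc.ABC instead of linux.LabHost so it runs on its own): a lab-config that
forgets tftp_folder cannot even be instantiated.

    import abc

    class BaseMyProjectLab(abc.ABC):
        # In the real config this would additionally inherit linux.LabHost.
        @abc.abstractmethod
        def tftp_folder(self, board: str) -> str: ...

    class CustomerLab(BaseMyProjectLab):
        pass  # forgot to implement tftp_folder()

    try:
        CustomerLab()
    except TypeError as err:
        print(err)  # complains about the missing abstract method tftp_folder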
Although ... This is quite a complex case. We should wait and see how often
these situations actually arise before deciding on a best practice.
Now, about board-configs:
First of all, my bad: the code you quoted below, which I provided as a quick
solution some time ago, might not be the best in the long run ... as you demonstrated.
I think the board-config right now is two things at the same time:
A config for the physical board (board.Board) and a config for the abstract idea
of this hardware/software configuration (board.UBootMachine and board.LinuxMachine).
We should separate this, because that way we get a cleaner config split. The abstract
board, let's call it "virtual", should be completely orthogonal to the rest. It is also
the part that would belong in a database if we create one. There might still be some
parts in it which depend on the lab, for example a serverip. I'd deal with this by just
trying to access it and failing if it isn't provided:
from tbot.machine import board

class MyBoardLinux(board.LinuxWithUBootMachine):
    def do_boot(self, ub):
        serverip = ub.board.lh.serverip  # Raises an AttributeError if no serverip is set
        ub.exec0("ping", serverip)
The physical board config, however, is completely lab-dependent. We can't, and in my opinion
shouldn't try to force it to be generic. I would propose the following approach:
To use a board with a lab in a certain project, create a config like this:
from boards_db import bbb
import denx

class MyBoard(bbb.BeagleBoneBlack, denx.DenxBoard):
    pass

BOARD = MyBoard
UBOOT = bbb.BeagleBoneUBoot
LINUX = bbb.BeagleBoneLinux
This would belong in tbot-denx. boards_db is a new repo where the shareable "virtual" board
configs would reside. To summarize:
boards-db:
    bbb.py   # Virtual bbb config
    mira.py  # Virtual mira config

tbot-denx:
    labs/pollux.py  # Standalone config for pollux
    board/denx.py   # Base class for boards in our VLAB
    board/bbb.py    -> boards-db/bbb.py
        # The bbb is available in our VLAB and this config "marks" this.
        # It also contains lab-specific changes
    boards/mira.py  -> boards-db/mira.py
        # Same as bbb

augsburg:
    labs/augsburg.py         # Standalone config for your lab
    board/augsburg_board.py  # Base class for your boards
    board/mira.py            # You have a mira at your lab

mira-project:
    labs/base_lab.py  # Abstract Base Class defining settings for this project
    labs/pollux.py    -> tbot-denx/labs/pollux.py & labs/base_lab.py
        # + definition of pollux-specific mira configs
    labs/augsburg.py  -> augsburg/labs/augsburg.py & labs/base_lab.py
        # + definition of augsburg-specific mira configs
    board/mira.py     -> boards-db/mira.py
        # + project-specific changes.
        # Also contains the if guards you quoted below to switch
        # between tbot-denx/boards/mira.py and augsburg/boards/mira.py ...
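As an example of how the pieces would fit together, a hypothetical sketch of
mira-project/board/mira.py under this layout could look like this (all module and class
names are made up, following the tree above; only tbot.selectable.LabHost.name is real):

    # mira-project/board/mira.py (hypothetical): pick the matching
    # lab-specific mira config depending on the selected lab and add
    # project-specific changes on top.
    import tbot

    if tbot.selectable.LabHost.name == "pollux":
        from tbot_denx.boards import mira as lab_mira
    elif tbot.selectable.LabHost.name == "augsburg":
        from augsburg.boards import mira as lab_mira
    else:
        raise NotImplementedError("The mira is not available in this lab!")

    class Mira(lab_mira.Mira):
        # project-specific changes go here
        pass

    BOARD = Mira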
Whew, this is quite a lot ... I hope I got my idea across; if not, please tell me!
What do you think? Does this cover all your use-cases so far? Is there something I missed?
Also, as mentioned above, this is supporting quite a complex use-case. Most of the time,
I feel like this is overkill ... Right now, board-configs don't contain much useful info
anyway ... This might change if we decide to add more build-info in there, but even that
isn't much ...
I'd say, at the moment, having two standalone configs in tbot-denx/boards/mira.py and
augsburg/boards/mira.py is acceptable because there isn't much to be shared yet. But as this
changes, we can introduce the virtual board config database as a third repo.
> This is currently not the case because the board file should be
> aware of the lab, as was discussed in a previous thread. So I have
> something in my board file like:
>
> if tbot.selectable.LabHost.name == "pollux":
>     # Use pollux specific config
>     import denx
>     BaseBoard = denx.DenxBoard
>     BaseUBootBuild = denx.DenxUBootBuildInfo
> elif tbot.selectable.LabHost.name == "local" or \
>      tbot.selectable.LabHost.name == "papero":
>     import augsburg
>     # Use your personal lab config
>     BaseBoard = augsburg.MylabBoard
>     BaseUBootBuild = augsburg.MylabUBootBuildInfo
> else:
>     raise NotImplementedError("Board not available on this labhost!")
>
> and this does not scale well because everyone has their own lab that must
> be imported. I do not know if this is the only case, but the "poweron"
> and "poweroff" methods are bound to the board - should they not be bound
> to the lab?
Hmm, good point ... Well, I'd argue "poweron" is sort of difficult, because
it depends on both the board and the lab. Making it a lab-only thing is bad,
because you now need to edit your lab-config when adding a new board. Making
it only board-dependent is also not good, because in different labs, toggling
power looks very different ... See my above approach for a suggestion on how
to solve this ...
My issue with just making it a lab-function is the following: Not everyone has
a setup as tidy as ours, where every board is powered on and off in the same way.
By making poweron a lab thing, we would make this sort of setup mandatory.
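A sketch of what I mean (class and command names below are hypothetical, only the
board.Board base, self.name and self.lh are meant as actual tbot pieces): the lab-specific
board base class knows how power works in that lab, so individual boards only supply their
name, while a board in a less tidy lab can still define poweron itself.

    from tbot.machine import board

    class DenxBoard(board.Board):
        """Hypothetical base for boards in our VLAB: power handling lives here,
        so each board config only needs to set its name."""
        def poweron(self) -> None:
            # `remote_power` is a made-up lab-side helper command
            self.lh.exec0("remote_power", self.name, "on")

        def poweroff(self) -> None:
            self.lh.exec0("remote_power", self.name, "off")

    class MessyLabBoard(board.Board):
        """A board in a lab without uniform power switching does its own thing."""
        name = "some-board"

        def poweron(self) -> None:
            self.lh.exec0("sispmctl", "-o", "1")

        def poweroff(self) -> None:
            self.lh.exec0("sispmctl", "-f", "1")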
> I could write in the board something like:
>
>
> class MylabBoard(board.Board):
>
>     def poweron(self) -> None:
>         with lab or tbot.acquire_lab() as lh:
>             lh.poweron(self.name)
>
> but it is not portable if a lab does not export the method. Should it do
> it ?
>
> > > In this example:
> > >
> > > my-project.py:
> > > - contains the paths to the project sources
> > >
> > > /path/to/my/user/tbot.py:
> > > - contains customizations to the base lab classes, so that login etc.
> > > works (ssh-key path, chosen build server etc.)
> > >
> > > /path/to/tbot-denx/labs/pollux.py:
> > > - contains base definitions of the available lab hardware/build server
> > >
> > > I think that's just a lot of composition necessary. So that either has to
> > > be so easy, like specifying it via the command line, or handled
> > > separately. That was one thing we learned with kas: at some point,
> > > creating a file for every permutation is just too much, so we implemented
> > > a mechanism to combine configuration files with colons on the command
> > > line.
> > >
> > > Maybe I am just missing a good example of how things should work if you
> > > have multiple users building multiple projects in different labs and
> > > trying to share as much as possible with each other.
> >
> > I don't have one either at this point ... But I think we need to find a pythonic
> > solution for this and not force a custom one on the user as I did in my first
> > version. At the moment, I have the following ideas floating around:
> >
> > * As I did in the dcu repo, use a configparser. I think this works well enough
> > for this specific case.
> > * Add a config.py that is either imported or sourced using `eval`. I have seen
> > this pattern in a lot of other projects, where it seems to work really well.
> > The advantage is that you can now modify literally everything from your config
> > because of Python's dynamic nature. On the flip side, confining what a
> > config can change might also be good, to keep the code simple to reason about.
> > * Each user creates their own lab.py which inherits the project's lab.py. This
> > would also allow all possible customizations and would have one nice side-effect:
> > If the project's lab-config has sane defaults, most people could get started right
> > away without needing any custom config file at all ... The downside I see is that
> > you need to document elsewhere what should be configured per user, which can be
> > seen right away with the other two solutions.
> >
> > What I do not like is adding some fixed solution to tbot. I have done this in the
> > past and it was for the worse. You will always stumble upon an edge-case where the
> > system is not expressive enough and with a fixed solution, you have no option to
> > change it (easily). By going this route, you force downstream users to implement
> > hacks for solving their problems ...
> >
> > I think you know what I am talking about (we have this exact same issue in isar all
> > the time!), but for others, I want to make an example:
> > Just a few days ago, I was writing a testcase to bisect U-Boot. The current
> > design for building U-Boot in tbot is one that is strictly configured (and does not
> > really use composition). The issue arose that the `uboot.build` testcase internally
> > checks out the U-Boot repository. During a git-bisect, this causes trouble because
> > we do not want to check out the current master but build the revision the bisect
> > put us on. How to solve this? Well, you can't. You have to change the `uboot.build`
> > testcase. Luckily you can do this without actually touching tbot code, but the solution
> > is still less than ideal:
> >
> > class BuildInfo(getattr(tbot.selectable.UBootMachine, "build")):
> >     def checkout(self, clean):
> >         return super().checkout(False)
> >
> > builddir = uboot.build(
> >     bh,
> >     BuildInfo,
> >     clean=True,
> > )
> >
> > This code overwrites the checkout method in the U-Boot build-info (which is a composable
> > part of this config!) and tells tbot to always do a dirty checkout, even if we supply
> > clean=True to `uboot.build`. This is arguably not good programming and definitely not
> > pythonic in any way. If the U-Boot build was more composable instead of this config-approach,
> > this issue would never arise.
> >
> > The reason I am so adamant about this is that I believe it is really *really* hard to
> > think of everything when using the config approach, so that nobody ever has the issue
> > that their needs are not supported. If you implement a composition approach, however, you
> > nicely delegate this responsibility to the downstream user and in doing so solve a lot of
> > headaches implicitly.
> >
> > I think I am getting off topic ... And this mail is getting way too long anyway.
> >
> > Hope I was able to clear things up a little ... And that I haven't hurt anyone's
> > feelings by having such a strong opinion about this ...
> >
>
> Regards,
> Stefano
>
>
One more thing: I think we are seeing a few different use-cases for tbot here.
On the one hand, the way I use tbot right now: to support me while hacking on some
customer issue. In this "mode", tbot just supports me by automating e.g. compile+flash+run_test.
The configs and testcases I create while working like this are all "throw-away" code; I don't
intend them to be pretty or reused later for anything else. A lot of it is copy-pasted between
different projects as the quickest way to solve whatever the issue is. I really hack
into tbot to get solutions quick and dirty. I try, however, to find acceptable solutions
that don't require internal knowledge of tbot, and if I can't, I will come up with upstream
solutions ...
There is also the "nightly-tests/CI" mode: writing tests that are intended to be
run from CI. Code written here should be robust and adhere to coding styles, but doesn't need to
be as generic as in the following use-case:
Finally, there is what Claudius and Stefano seem to intend: a use-case where tbot is more
than just a helper and the code written is intended to be "distributable" and extendable. Personally,
I don't quite see this being applicable yet, but I can see that it might come in the future, and we
should keep it in mind when making design decisions. Wolfgang's public board database also plays in
this direction.
I think the first thing we should get "public" is a bunch of testcases. Right now, the only real
builtin testcase in tbot is building U-Boot (and git-bisect, if you want to count that). There is
definitely room for improvement here, like testing a running U-Boot and the whole GNU+Linux stack ...
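For example, a generic testcase exercising the booted Linux side could look roughly like
this (just a sketch; I'm assuming the usual acquire_* helpers and exec0 here):

    import tbot

    @tbot.testcase
    def linux_boot_smoke_test() -> None:
        """Boot the selected board all the way into Linux and poke around a bit."""
        with tbot.acquire_lab() as lh:
            with tbot.acquire_board(lh) as b:
                with tbot.acquire_linux(b) as lnx:
                    lnx.exec0("uname", "-a")
                    lnx.exec0("cat", "/proc/cmdline")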
I'd prioritise testcases for A) CI and B) supporting a developer during their daily work, because
those are what I see tbot being used for the most right now. And to be perfectly honest,
it is what I'd focus tbot on in general, as it is the original niche Heiko tried to fit it into.
Anyway, I got carried away again ... have a nice evening!
--
Harald
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-62 Fax: +49-8142-66989-80 Email: hws at denx.de