[tbot] [DENX] tbot: board hangs if no autoload

Stefano Babic sbabic at denx.de
Fri Nov 16 09:38:45 UTC 2018


Hi Heiko, Harald,

On 16/11/18 06:41, Heiko Schocher wrote:
> Hello Harald, Stefano,
> 
> Am 15.11.2018 um 19:22 schrieb Stefano Babic:
>> Hi Harald,
>>
>> On 15/11/18 16:59, Harald Seiler wrote:
>>> Hi Stefano, Heiko,
>>>
>>> you two keep posting so much that I can't even keep up
>>> with it ;) I'll try to chime in on some aspects to maybe
>>> clear things up.  I hope it's not too confusing ...
>>
>> :-)
> 
> :-P
> 
>>> On Thu, 2018-11-15 at 15:51 +0100, Stefano Babic wrote:
>>>> Hallo Heiko,
>>>>
>>>> On 15/11/18 13:01, Heiko Schocher wrote:
>>>>> Hello Stefano,
>>>>>
>>>>
>>>> [snip]
>>>>
>>>>>> Well, developers each have their own way, and each of us does
>>>>>> things in a different, preferred way. It would be like starting
>>>>>> a flame war over using Vim instead of Emacs...
>>>>>
>>>>> Of course! I can not force anyone to use tbot ...
>>>>>
>>>>> But I hope that others are also lazy, and want to automate as
>>>>> many tasks as they can.
>>>
>>> TBot lives in the middle of two worlds: On one hand, it is an
>>> automation tool solely to support a developer in his everyday work.
>>> That means automating repetitive tasks.
>>>
>>> As an example, you might be trying to get some U-Boot feature
>>> working on new hardware. In that case, without TBot, you'd manually
>>> compile, copy, flash, reboot, try if it works, repeat. This is what
>>> TBot tries to help you with. With TBot, you'd have testcases for
>>> all these steps and also one that does them all at once.
>>>
>>> So instead of having to run 10 commands that are always the same
>>> (or almost the same), you just run one: tbot. While in theory you
>>> could also just use a shell script, TBot has the nice benefit of
>>> also managing your hardware and juggling multiple hosts (labhost,
>>> buildhost, localhost, ...) for you.
>>>
>>>
>>> On the other hand, TBot is also made for testing/CI use. As a test
>>> tool for a customer, I think, TBot would be used more for deploying
>>> artifacts that are built elsewhere and verifying that they work.
>>
>> Fine.
> 
> Yup.
> 
>>
>>>
>>>>>> But what developers surely need (and this is why I put
>>>>>> functional tests at the top of my priorities) is a way to
>>>>>> validate what they did and to have regression tests without a
>>>>>> lot of effort. And in both of these, tbot excels.
>>>>>
>>>>> Isn't it also a valid testcase to ensure, for example, that
>>>>> U-Boot compiles?
>>>>>
>>>>> Just yesterday I posted a patch on the U-Boot ML which compiled,
>>>>> but produced warnings I did not check, because I build U-Boot
>>>>> with bitbake ... :-(
>>>>
>>>> As maintainer, my current workflow is with buildman and/or travis.
>>>> I get an e-mail if travis reports an error; if not, I send a PR...
>>>>
>>>>> Or if a customer uses swupdate for updating, write a testcase
>>>>> for it?
>>>>
>>>> Exactly, this is the functional test case. And you understand why
>>>> I am not so interested in a specific setup for installing U-Boot
>>>> and/or the kernel on the target. The specific install details are
>>>> already hidden by SWUpdate and I do not have to take care of them.
>>>> My testcase is to push the SWU to the target, and this can be done
>>>> generically because the project-specific parts are already handled.
> 
> Yes.
> 
>>>>>> I would not say that there won't be a customer who wants to have
>>>>>> this, but as far as I know, most customers rely on already known
>>>>>> ways to build software (Jenkins / bitbake / ...) and I guess that
>>>>>> building U-Boot is not the first priority for them. But testing
>>>>>> that the build works is at the top of the list.
>>>>>
>>>>> Ok. But that's the benefit of using tbot. You (as a developer)
>>>>> can automate *all* the tasks you have ... and pass the customer
>>>>> only the testcase which, for example, starts testing the
>>>>> functionality of his board ...
>>>>
>>>> Yes, I think anyone should find the most profitable way to use
>>>> the tool.
>>>>
>>>>>>> And if I have one command for doing all the boring stuff from
>>>>>>> scratch ... this is nice. Also if, at the end, you get
>>>>>>> documentation with all the steps for the customer on how to
>>>>>>> reproduce this.
>>>>>>>
>>>>>>>> If we start to cover how to install software on the board, we
>>>>>>>> start with a lot of single, different cases, because this is
>>>>>>>> absolutely board specific.
>>>>>>>
>>>>>>> Yes ... so for the board-specific part, write a board-specific
>>>>>>> testcase which is called from a generic part ...
>>>>>>
>>>>>> I am just looking at the current status and what we have
>>>>>> available. To do this, I would expect class Board to have
>>>>>> additional methods like "install_uboot" and/or "install_linux"
>>>>>> next to poweron / poweroff / ..., see machine/board/board.py. So
>>>>>> I guess we are not ready for it, and it is better to start with
>>>>>> testcases that do not imply having a very specific setup for
>>>>>> each board.
>>>>>
>>>>> I rather have in mind not to fill the class with a lot of tasks,
>>>>> but instead to keep tbot as simple as possible and do the hard
>>>>> work in testcases ...
>>>>
>>>> ok
>>>
>>> The Board class is part of TBot's core and should stay as small as
>>> possible. `install_uboot` should definitely be a testcase, and
>>> because it is very board specific, I don't even think it belongs in
>>> TBot at all.
>>>
>>
>> ok, got it.
> 
> Yes, I think also, we must keep the core as small as possible.
> 
>>>>> But maybe I am on the wrong track here ...
>>>>>
>>>>>>>> My vote goes to starting with the more general cases, that is:
>>>>>>>> the software is on the board; does the board work as expected?
>>>>>>>> Things like:
>>>>>>>>
>>>>>>>> - U-Boot:
>>>>>>>>       - does network work ?
>>>>>>>>       - does storage work ?
>>>>>>>>       - do other u-boot peripherals work ?
>>>>>>>
>>>>>>> Of course also a valid starting point!
>>>>>>>
>>>>>>> But you must also define a way to find out what devices are on
>>>>>>> the board... I did, for example, "help date", and if this is
>>>>>>> successful, I can test the date command ...
>>>>>>
>>>>>> I think this can be ok to put into the board configuration
>>>>>> file. It is a static configuration and does not depend on the
>>>>>> runtime.
>>>
>>> One could technically detect these features at runtime, but this
>>> might be error prone, as a failing detection would lead to a
>>> feature not being tested when it should be.
>>>
>>> I think the board's features should be explicitly stated in the
>>> config and in board-specific testcases.  For example:
>>>
>>>     from tbot import tc
>>>     from tbot.tc import uboot
>>>
>>>     def p2020rdb_uboot_tests() -> None:
>>>         with tbot.acquire_lab() as lh:
>>>             tc.testsuite(
>>>                 # These should be testcases provided by TBot;
>>>                 # you list them in this board-specific testcase
>>>                 # to explicitly run them for this board
>>>
>>>                 uboot.tests.i2c,
>>>                 uboot.tests.spi,
>>>                 uboot.tests.hush,
>>>                 uboot.tests.network,
>>>                 uboot.tests.mmc,
>>>             )
>>>
>>> I'll add some info about this to our Repo's README ...
>>
>> ok, thanks !
> 
> Maybe a practical example would help?
> 
> Maybe we can use the BeagleBoneBlack as a reference board, as yocto
> does, and write examples for it?
> 
>>>>> Hmm... really ... think of the capes of the BeagleBoneBlack ...
>>>>>
>>>>> I would say, write a board-specific testcase which calls all the
>>>>> (maybe generic) testcases you want to run on the board ... or
>>>>> test which testcases it can run ...
>>>>>>> Or parse help output and decide then?
>>>>>>
>>>>>> Also a good idea, too.
>>>>>>
>>>>>>> Parse U-Boots config and/or DTS ?
>>>>>>>
>>>>>>>> Such cases are unaware of which board is running, and at the
>>>>>>>> very beginning we can have more general test cases. The same
>>>>>>>> goes for Linux, but of course there we can have many more.
>>>>>>>
>>>>>>> see above.
>>>>>>>
>>>>>>> Also, as you can call testcases from another testcase, you can
>>>>>>> write a board-specific testcase in which you (as the board
>>>>>>> maintainer) should know which generic testcases you can call ...
>>>>>>
>>>>>> That is nice! I'll wait for tomorrow, when the testcases will be
>>>>>> put into tbot-denx. It will help me understand better.
>>>>>
>>>>> At least with the old tbot you can do this ... and I am sure
>>>>> Harald's newer version can do this too!
>>>>>
>>>>> I had/have variables which hold the name of a testcase ... so you
>>>>> can write a generic testcase which calls testcases you can
>>>>> configure in the board config file ...
>>>>>
>>>>> For example:
>>>>> https://github.com/hsdenx/tbot/blob/master/src/tc/demo/u-boot/tc_demo_compile_install_test.py#L134
>>>>>
>>>>> if tb.config.tc_demo_uboot_test_update != 'none':
>>>>>
>>>>> call the testcase with the name in this variable ... so you can
>>>>> write a board-specific testcase which installs SPL/U-Boot on your
>>>>> specific board ...
>>>>>
>>>>> so you can set (old tbot!) in your board or lab config file:
>>>>>
>>>>> tc_demo_uboot_test_update = 'tc_install_uboot_on_p2020rdb.py'
>>>
>>> Yes, I know you make use of this design pattern.  I would argue
>>> against it though, as this callback style unnecessarily limits what
>>> you can do.  You are much more flexible if TBot just provides
>>> generic helpers for the things that are generic, and the rest is
>>> implemented as board-specific testcases.  In this case, a
>>> `p2020rdb_install_uboot` testcase which does the specific install
>>> routine for the p2020rdb and finally calls a testcase provided by
>>> TBot that might be called `uboot.check_version`.
> 
> Yes. I had no separation between generic tbot testcases and
> board-specific ones.

I think this is a great improvement by Harald. A clean split between
boards and testcases lets us reuse testcases.
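
As a rough sketch of how I understand this composition pattern
(untested; I assume the U-Boot machine offers an exec0 method like the
linux one, and `uboot.check_version` is just the name Harald suggested
above):

    import tbot

    @tbot.testcase
    def mira_install_uboot() -> None:
        with tbot.acquire_lab() as lh:
            with tbot.acquire_board(lh) as b:
                with tbot.acquire_uboot(b) as ub:
                    # Board-specific commands to write the new U-Boot
                    # into SPI flash (addresses purely illustrative)
                    ub.exec0("sf", "probe")
                    ub.exec0("sf", "update", "0x82000000", "0", "0x80000")
                    # ... and at the end call the generic helper
                    # provided by tbot, e.g. uboot.check_version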

> Maybe we can have more than one testcase "repo" usable with tbot's
> "-T" parameter?

+1

This is the same approach as in LTP - and some testcases are really
generic. For example, testing if ssh works must be generic and should
not be modified simply because it runs on another board. The current
approach allows this.
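
Something like this is what I have in mind (just a sketch, untested):

    import tbot

    @tbot.testcase
    def linux_test_ssh() -> None:
        """Generic test: is an ssh daemon running on the target?"""
        with tbot.acquire_lab() as lh:
            with tbot.acquire_board(lh) as b:
                with tbot.acquire_linux(b) as lnx:
                    # exec0 asserts a zero exit code, so the testcase
                    # fails if no sshd process is found - no
                    # board-specific knowledge needed at all
                    lnx.exec0("pgrep", "sshd")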

> 
>>>> ok, maybe you are getting the point, because this is what I do
>>>> not like. I prefer a more generic approach, as I see in the
>>>> example. I can test any linux command simply with:
>>>>
>>>>              lh = cx.enter_context(tbot.acquire_lab())
>>>>              b = cx.enter_context(tbot.acquire_board(lh))
>>>>              lnx = cx.enter_context(tbot.acquire_linux(b))
>>>>
>>>> This hides which board (mira in my case), which lab, how U-Boot
>>>> is started and how Linux is started. There is no
>>>> "boot_linux_on_mira", it is simply tbot.acquire_linux(b) ! Very
>>>> nice !
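>>>>
>>>> For completeness, the full testcase around that fragment would be
>>>> something like this (untested sketch):
>>>>
>>>>     import contextlib
>>>>     import tbot
>>>>
>>>>     @tbot.testcase
>>>>     def linux_run_command() -> None:
>>>>         with contextlib.ExitStack() as cx:
>>>>             lh = cx.enter_context(tbot.acquire_lab())
>>>>             b = cx.enter_context(tbot.acquire_board(lh))
>>>>             lnx = cx.enter_context(tbot.acquire_linux(b))
>>>>             # any linux command, fully board independent
>>>>             lnx.exec0("uname", "-a")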
>>>
>>> Yes, the idea is to be able to write testcases as generic as possible.
>>> (If applicable!)
>>
>> Right
> 
> Correct.
> 
>>>> The class hides the specific parts (which U-Boot variables I must
>>>> set, which commands, ...). That means that this testcase runs
>>>> as-is on any board, from power-on until it is turned off again,
>>>> and full reuse is possible. Something like:
>>>>
>>>> @tbot.testcase
>>>> def install_uboot() -> None:
>>>>     with tbot.acquire_lab() as lh:
>>>>         with tbot.acquire_board(lh) as b:
>>>>             with tbot.acquire_uboot(b) as ub:
>>>>                 ub.install_uboot()
>>>>
>>>> And this is generic; I just need to define a set of commands for
>>>> my board to install U-Boot (like the "boot_command" array, I mean).
>>>
>>> `install_uboot` is something I would not implement generically, as
>>> this can be vastly different depending on the hardware.  In this
>>> case I'd rather add board-specific
>>> `p2020rdb_install_uboot`/`mira_install_uboot` testcases that just
>>> directly run the commands necessary for installation.
>>
>> ok, understood, it is fine.
>>
>>>
>>> I guess my approach can be summarized as: prefer composition over
>>> configuration.
>>>
>>> Prefer writing testcases that call smaller, generic testcases over
>>> writing one monolith that can be configured to do everything.
> 
> Full Ack, that was also my approach ...
> 
> But I ended up with testcases which did all tasks for a specific
> board ... which called all the small generic testcases, where
> possible.
> 
> And I had a lot of duplication in them, so I tried to write generic
> testcases which here and there call board-specific testcases when a
> board-specific task is needed.
> 
> But I am an old C programmer, so I hope you can show me the correct
> way with Python :-D
> 
>>>
>>> If a testcase is generic enough in its nature, configuration is
>>> ok.  For example, I have implemented `uboot.build` as a testcase
>>> that takes configuration from the board.  I made this choice
>>> because the U-Boot build process doesn't differ much between
>>> different hardware (at least in my experience).
> 
> Yes, this should be possible as a generic testcase.
> 
>>> `uboot.install`, however, doesn't make much sense as a configurable
>>> testcase in my opinion.  I'd rather write board-specific testcases
>>> each time, as there isn't much shared functionality anyway.  The
>>> shared pieces that do exist can be refactored into TBot's builtin
>>> testcases, of course!
> 
> Yes ... in my U-Boot environment variable setup I defined a
> tbot_upd_spl and a tbot_upd_uboot variable, which do the necessary
> steps, and called them from the generic approach ... but I was never
> happy with it.
> 
>>>> In your example, I cannot set a generic testcase like
>>>> tc_demo_uboot_test_update to work for any board, because it must
>>>> reference a specific file / function (install_uboot_p2020rdb).
>>>> IMHO I like to have an abstraction that hides the board-specific
>>>> parts (as the enter_context above lets me do).
>>>>
>>>>> and the generic testcase will call this board specific function for
>>>>> installing SPL/U-Boot for the p2020rdb board ...
>>>>>
>>>>> You got the idea ?
>>>>>
>>>>> I hope I am not giving Harald headaches now :-P
>>>>
>>>> Maybe I have added some headaches...
>>>
>>> No worries!
> 
> Fine!
> 
>>>>>>>>> - create a register dump file
>>>>>>>>>      write register content into a register dump file
>>>>>>>>> - do register checks
>>>>>>>>>      open register dump file and check if register content
>>>>>>>>>      is the same as in the file
>>>>>>>>> - convert all DUTS testcases
>>>>>>>>>      http://git.denx.de/?p=duts.git
>>>>>>>>
>>>>>>>> I do not think this is a great idea. This "duts" is obsolete,
>>>>>>>> and I think we now have a more generic and better concept with
>>>>>>>> tbot. I think we should just have a list of test cases and
>>>>>>>> then translate them into @tbot.testcase, without looking at
>>>>>>>> the past. IMHO duts is quite broken and we should not care
>>>>>>>> about it; it can just confuse us and it could be a waste of
>>>>>>>> time.
>>>>>>>
>>>>>>> But there are a lot of valid tests!
>>>>>>
>>>>>> That is the reason I think we should have a list of testcases,
>>>>>> and then implement them as @tbot.testcase
>>>>>
>>>>> Yes!
>>>>>
>>>>>>> It is just an idea ... I converted some of them (not all are
>>>>>>> finished) and, based on the results, made a U-Boot command-line
>>>>>>> documentation, as we had with the DULG.
>>>>>>
>>>>>> ok, I'll wait for it ;-)
>>>>>
>>>>> :-P
>>>>>
>>>>> Not ready for the new tbot ... patches are welcome!
>>>>>
>>>>>>>>>      goal, create at the end a u-boot commandline documentation
>>>>>>>>>
>>>>>>>>> - call pytest from u-boot?
>>>>>>>>
>>>>>>>> Do we ?
>>>>>>>
>>>>>>> I meant: call the U-Boot test framework, which is in "test/py",
>>>>>>> from tbot.
>>>>>>>
>>>>>>>>> - if new u-boot does not boot, switch bootmode and unbreak it
>>>>>>>>
>>>>>>>> This is also very board specific and it does not always work.
>>>>>>>> I prefer to start with a more generic approach.
>>>>>>>>
>>>>>>>> For example, start with testing the network in U-Boot. How
>>>>>>>> can I split between lab setup and board setup? Let's say the
>>>>>>>> tftp server. I can set a "setenv serverip" in the board file,
>>>>>>>> but this is broken, because a board could belong to different
>>>>>>>> labs (I have a mira here and I have my own lab setup). Is
>>>>>>>> there a way to do this? Where should I look for such cases?
>>>>>>>
>>>>>>> Then the serverip should be a lab-specific variable.
>>>>>>
>>>>>> Shouldn't it be an attribute of the UbootMachine class, which I
>>>>>> can overwrite in my lab.py?
>>>>>
>>>>> Or better, maybe it is detectable through a testcase executed on
>>>>> the lab PC?
>>>>>
>>>>> The tftp serverip is configured somewhere on the lab PC ... so
>>>>> write a testcase
>>>>
>>>> But a lab PC is not strictly required, and we do not have one. If
>>>> you look at the code, I managed to add my lab simply by having my
>>>> own board/lab.py instead of board/denx.py, and I inherit my board
>>>> (mira) from it.
>>>
>>> By default, TBot just uses your localhost as a "lab PC".
>>>
>>> In the denx repo, board/denx.py is just a convenience to reduce
>>> code duplication.
>>
>> I find it a nice convenience that I can reuse to redefine my internal
>> "lab" commands
> 
> Yes, and this is good!
> 
>>> If you want to add your own lab, you'd be better off creating
>>> a labs/sbabic.py.
>>
>> Yes, this is what I did.
> 
> Me too :-P
> 
>>> But I am not sure if that should reside in our denx repo.
>>
>> No, it makes no sense - it is just a definition for my own lab.
> 
> No ... you should create your own repo ... Maybe we can have more
> than one repo to use with tbot. Isn't that what the "-T" parameter
> is for ... ?

Why should I set up a new repo ?

It would be one thing to have a repo with boards if we could
completely separate boards from labs. As I understand from Harald,
this is not possible, and I derive the board from my "convenience"
file, as Harald suggested (denx.py and sbabic.py in the previous
e-mail).

This means that board definitions are quite the same, but differ due
to the class they inherit from:

if tbot.selectable.LabHost.name == <something>:
    ..
elif ....

This depends on one's own environment (I would call it a Lab, but
"lab" is used here in another context) and differs for each of us.

The rest is the definition of the board and is common code. Maybe we
can set up a repo for this common code if we agree on the name of the
class to derive from:

if tbot.selectable.LabHost.name == <something>:
    BaseBoard = <.....>

[snip]


import mira

> 
> Maybe we should have something like the meta-layer construct in
> yocto?
> 
> Again old tbot: there you just had to create a subdir in src/tc for
> adding "private" testcases ... but forget this fast ;-)

"tc" is automatically search in the current directory, as far as I
understand.

> 
>>> If you want a board to work in multiple labs, you could make
>>> the board-config check which lab was selected and use a different
>>> base class depending on that:
>>>
>>>     if tbot.selectable.LabHost.name == "pollux":
>>>         # Use pollux specific config
>>>         import denx
>>>         BoardBase = denx.DenxBoard
>>>     elif tbot.selectable.LabHost.name == "sbabic":
>>>         # Use your personal lab config
>>>         import sbabic
>>>         BoardBase = sbabic.SbabicBoard
>>>     else:
>>>         raise NotImplementedError("Board not available on this labhost!")
>>>
>>>     class Mira(BoardBase):
>>>         ...
>>
>> Good idea, thanks, I will do in this way.
> 
> I try it.
> 
>>>>> for it, which returns the ip ... and you do not need to configure it!
>>>
>>> If `serverip` proves to be needed often enough, we could add it as
>>> an option to the labhost:
>>>
>>>     class PolluxLab(lab.SSHLabHost):
>>>         ...
>>>         serverip = "192.168.1.1"
>>
>> That means, if I use the default host, should I derive my own class
>> (let's say SbabicLab) from LabHost, adding serverip or whatever I
>> want, and then at the end set LAB = SbabicLab ? Is this the correct
>> way ?
>>
>>>
>>> and check it using
>>>
>>>     # Board config
>>>     tbot.selectable.LabHost.serverip
>>>     # or cleaner (maybe even with a default value?):
>>>     getattr(tbot.selectable.LabHost, "serverip", "192.168.0.1")
>>
>> ok
>>
>>>
>>>>>>>>> Linux:
>>>>>>>>>
>>>>>>>>> - get sources
>>>>>>>>> - may apply patches to it
>>>>>>>>> - install linux on the board
>>>>>>>>> - check if booted version is the expected one
>>>>>>>>> - create a register dump file
>>>>>>>>>      write register content into a register dump file
>>>>>>>>> - do register checks
>>>>>>>>
>>>>>>>> See above. I think this is useful during a porting, but it is
>>>>>>>> less useful for a customer who wants to test functionality. I
>>>>>>>> would like to
>>>>>>>
>>>>>>> I have here another opinion.
>>>>>>
>>>>>> Well, of course ;-). We should not always agree; we get more
>>>>>> improvement when we discuss and have different opinions ! ;-)
>>>>>
>>>>> Yep!
>>>>>
>>>>> I like this discussion ... for nearly 4 years almost nobody was
>>>>> interested in my old tbot. Ok, it was a big misuse of python ...
>>>>> but it worked ;-)
>>>>>
>>>>> I cannot say it often enough ... many thanks to Harald!
>>>>>
>>>>>>> This is also interesting for a customer.
>>>>>>>
>>>>>>> Which customer never changes a DTS or never tries a Linux
>>>>>>> update on his own?
>>>>>>>
>>>>>>> If he has an automated check that all important registers are
>>>>>>> set up as expected ... this is nice.
>>>>>>>
>>>>>>> This testcase could be made very generic...
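>>>>>>>
>>>>>>> Maybe something like this (untested sketch; the dump file
>>>>>>> format, one "address value" pair per line, is an assumption):
>>>>>>>
>>>>>>>     import tbot
>>>>>>>
>>>>>>>     @tbot.testcase
>>>>>>>     def uboot_check_registers() -> None:
>>>>>>>         with open("regdump.txt") as f:
>>>>>>>             expected = [line.split() for line in f if line.strip()]
>>>>>>>
>>>>>>>         with tbot.acquire_lab() as lh:
>>>>>>>             with tbot.acquire_board(lh) as b:
>>>>>>>                 with tbot.acquire_uboot(b) as ub:
>>>>>>>                     for addr, value in expected:
>>>>>>>                         # read one 32-bit word and compare it
>>>>>>>                         # against the recorded value
>>>>>>>                         out = ub.exec0("md.l", addr, "1")
>>>>>>>                         assert value in out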
>>>>>>>
>>>>>>>> first have a catalog of testcases covering functionality, like:
>>>>>>>>
>>>>>>>>       - is network working ?
>>>>>>>>       - are peripherals working (SPI / I2C /....) ?
>>>>>>>
>>>>>>> Yes. My hope is, that we get a lot of users, so we will get a lot of
>>>>>>> testcases ;-)
>>>>>>
>>>>>> ok
>>>>>>
>>>>>>>> In the ideal case, DT is parsed to get a list of testcases...
>>>>>>>
>>>>>>> Yes.
>>>>>>>
>>>>>>>>>      open register dump file and check if register content
>>>>>>>>>      is the same as in the file
>>>>>>>>> - look if a list of strings is in the dmesg output
>>>>>>>>>
>>>>>>>>> - look for example at the LTP project, what they test
>>>>>>>>>
>>>>>>>>
>>>>>>>> +1
>>>>>>>>
>>>>>>>> LTP contains a lot of useful testcases, but of course they
>>>>>>>> are meant to run as scripts directly on the target / host.
>>>>>>>> Anyway, they have testcases for a lot of things.
>>>>>>>
>>>>>>> Yes, and maybe we can use these scripts! Start them and
>>>>>>> analyse the results.
>>>>>>>
>>>>>>
>>>>>> ok, I'll leave this for later; it is not yet clear to me how...
>>>>>
>>>>> I am also just speculating. But executing a script on the board
>>>>> is easy...
>>>>
>>>> I see a lot of calls to something related to LTP (tst_res,
>>>> tst_resm, ...). Most testcases are simple; we could have most of
>>>> them in tbot as our own testcases in python code.
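>>>>
>>>> Running a single LTP binary on the target could already be as
>>>> simple as this (untested; the LTP install path is hypothetical):
>>>>
>>>>     import tbot
>>>>
>>>>     @tbot.testcase
>>>>     def ltp_single_test() -> None:
>>>>         with tbot.acquire_lab() as lh:
>>>>             with tbot.acquire_board(lh) as b:
>>>>                 with tbot.acquire_linux(b) as lnx:
>>>>                     # LTP binaries report success via their exit
>>>>                     # code, and exec0 fails on non-zero exit
>>>>                     lnx.exec0("/opt/ltp/testcases/bin/mkdir01")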
>>>>
>>>>>>>>> - check if ptest-runner is in the rootfs and call it
>>>>>>>>
>>>>>>>> ptest-runner means python. Do we have it on most projects ?
>>>>>>>> Some yes, some not...
>>>>>>>
>>>>>>> That's why it says "check if ptest-runner exists" ;-)
>>>>>>>
>>>>>>>>> ...
>>>>>>>>>
>>>>>>>>> yocto:
>>>>>>>>> - get the sources
>>>>>>>>> - configure
>>>>>>>>> - bake
>>>>>>>>> - check if files you are interested in are created
>>>>>>>>> - install new images
>>>>>>>>> - boot them
>>>>>>>>> - check if rootfsversion is correct
>>>>>>>>
>>>>>>>> See above - IMHO it is better to split between functional
>>>>>>>> tests on the target and build tests, and to start with the
>>>>>>>> functional tests.
>>>>>>>
>>>>>>> Of course. Both parts can be done independently
>>>>>>
>>>>>> Sure !
>>>

Cheers,
Stefano

-- 
=====================================================================
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-53 Fax: +49-8142-66989-80 Email: sbabic at denx.de
=====================================================================

