[U-Boot] [RFC] Continuous integration with DUTS v2

Niklaus Giger niklaus.giger at member.fsf.org
Tue Oct 6 14:59:01 CEST 2009


Hi

I consider testing an important part of ensuring high code quality for any
product; it should form part of the overall development process.

1) When adding a new board or feature to U-Boot, running tests to ensure that
it works as advertised should be mandatory, but not time-consuming.

2) If one has a board up and running, I consider it advisable to ensure that
it always compiles against the current state of the U-Boot code and passes
all the tests.

3) It would be nice to have a central repository somewhere where everybody
(especially the maintainers) could see whether the different boards pass all
the tests or not.

To address points 1) and 2), I am working on a solution which should achieve
these goals if you meet the following requirements:
- you have a spare board to run the tests on.
- you have some HW to switch the power on/off (there are solutions that
  switch 4 or 8 outlets for 100 or 200 Euros); see the sketch after this list.
- you have a system running GNU/Linux with some spare background processing
  power (my system runs Debian Lenny).
- you have some time (at least weekly) to look after the results and report/fix
  the problems.
- you have 1 to 4 hours to set up a testing environment.
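
To give an idea of what the power-switch requirement amounts to, here is a
minimal sketch of how such an outlet could be driven from Ruby. The
"sispmctl" command (for Gembird SIS-PM outlets), the outlet number and the
class/method names are assumptions for illustration only; adapt them to
whatever switchable outlet you actually own.

#!/usr/bin/env ruby
# Minimal sketch of a power-cycle hook for the test board.
# The "sispmctl" CLI and the outlet number are assumptions;
# replace them with whatever controls your outlet.

class PowerSwitch
  def initialize(outlet)
    @outlet = outlet
  end

  def off
    system("sispmctl -f #{@outlet}") or raise "cannot switch outlet #{@outlet} off"
  end

  def on
    system("sispmctl -o #{@outlet}") or raise "cannot switch outlet #{@outlet} on"
  end

  # Power-cycle the board before each test run so it always boots
  # from a well-defined state.
  def cycle(pause = 5)
    off
    sleep pause
    on
  end
end

PowerSwitch.new(1).cycle if $0 == __FILE__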

I would be interested in what you think about the points raised above, and
whether following the instructions given under
http://ngiger.dyndns.org/duts_v2/doc/index.html and
http://ngiger.dyndns.org/hudson.doc/doc/index.html
allows you to achieve the above-mentioned goals 1) and 2).


Background:
-----------

In 2004 - 2006 I ran a buildbot for the Xenomai project for a while.

In the last two months I used my pre-existing Ruby scripts and the
DENX Universal Test Suite (http://www.denx.de/wiki/DUTS/DUTSDocs) as the basis
for a rewrite, DUTS-v2 (http://ngiger.dyndns.org/duts_v2/doc/index.html).
Quite some effort went into writing unit tests for the test suite itself.
Easy adaptation to various environments was an important design factor.

My previous experiences with the continuous integration tools
http://buildbot.net/trac and http://cruisecontrol.sourceforge.net/
made me have a look at https://hudson.dev.java.net/.

Hudson proved to be really simple to set up and administer. My experiences
are documented at http://ngiger.dyndns.org/hudson.doc/doc/index.html.
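
As an illustration of how little glue is needed, the following is a minimal
sketch of the kind of build step a Hudson job could execute for one board.
The board name, the toolchain prefix and the way DUTS-v2 is invoked
(duts.rb, --board, --image) are assumptions for illustration, not documented
interfaces.

#!/usr/bin/env ruby
# Sketch of a Hudson "execute shell" build step, written as a small
# Ruby script.  Board name, toolchain prefix and the DUTS-v2 command
# line below are assumptions.

board = ENV.fetch('BOARD', 'sequoia')
cross = ENV.fetch('CROSS_COMPILE', 'ppc_4xx-')

def run(cmd)
  puts "+ #{cmd}"
  system(cmd) or abort "build step failed: #{cmd}"
end

run "make distclean"
run "make CROSS_COMPILE=#{cross} #{board}_config"   # configure the board
run "make CROSS_COMPILE=#{cross} all"               # build u-boot.bin
# hand the fresh image to the (hypothetical) DUTS-v2 runner
run "ruby duts.rb --board #{board} --image u-boot.bin"

Hudson itself only needs to know the repository URL and how often to poll;
keeping everything else in such a script keeps the job definition trivial.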

Future extensions:
------------------

a) Use the same infrastructure not only for U-Boot but also for related
projects like Xenomai, RT-preempt and LTP.

b) Regarding goal 3), I could not find any project around the Linux kernel
which publishes test results in an automated way.

If I don't receive other suggestions I plan the following steps:
- Create an XML file with the test results (a minimal sketch follows below),
  containing
  - version of the test suite/scripts
  - information about the target, such as
    - HW configuration (CPU, RAM, devices)
    - SW version/configuration
  - name and result of each test step
  - performance information (e.g. min/max/avg, histogram) for parameters like
    latency, throughput
  - ???
- Send the XML file at the end of each test run to a special mailing list
- Process the mailing list to feed all the XML files into a database
- Web front-end to this database using ??? (Ruby on Rails comes to my mind)
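
To make the discussion more concrete, here is a minimal sketch of how such a
results file could be generated with REXML from the Ruby standard library.
All element names (duts_results, target, teststep, ...) and the sample values
are pure assumptions; the actual schema is exactly what I would like to
discuss.

#!/usr/bin/env ruby
# Sketch of the proposed XML results file, generated with REXML.
# Element names and sample values are assumptions, not a fixed schema.
require 'rexml/document'

doc  = REXML::Document.new
doc << REXML::XMLDecl.new('1.0', 'UTF-8')
root = doc.add_element('duts_results', 'suite_version' => '2.0')

target = root.add_element('target', 'name' => 'sequoia')
target.add_element('hw').add_attributes('cpu' => 'PPC440EPx', 'ram' => '256M')
target.add_element('sw').add_attributes('u_boot' => '2009.08',
                                        'config' => 'sequoia_config')

steps = root.add_element('teststeps')
steps.add_element('teststep', 'name' => 'ping',     'result' => 'pass')
steps.add_element('teststep', 'name' => 'tftpboot', 'result' => 'fail')

perf = root.add_element('performance', 'parameter' => 'latency', 'unit' => 'us')
perf.add_attributes('min' => '12', 'avg' => '19', 'max' => '87')

doc.write($stdout, 2)   # pretty-print with an indent of two spaces
puts

Such a file is small enough to travel as a mail attachment to the proposed
mailing list and is trivial to parse again on the database side.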

Any comments on these points?


Next steps:
-----------

- Listen to your suggestions
- Fix reported bugs
- Move the code from my SVN repository to git.denx.de?
- Open wiki entries for points 1) to 3)?

Best regards

Niklaus

-- 
Niklaus Giger
Wieshoschet 6
CH-8753 Mollis
+41  (0)55 612 20 54 P
+41  (0)55 618 64 68 G

