[tbot] Test Groups and Dependencies
hws at denx.de
Thu Aug 18 10:39:17 CEST 2022
On Sat, 2022-07-30 at 15:29 -0700, Anthony Needles wrote:
> Hi all,
> Hopefully this is the right place to ask this. I am currently using tbot at
> my work and like it very much. However, we require two specific features
> that I am manually implementing:
> 1. Test cases can be marked to be in a group (at the test case definition,
> with a python decorator). This way, groups of tests can be run in a
> simpler, unified way.
> 2. With the same decorator method as #1, define dependencies for tests. If
> a certain test is invoked, and it is found to have another test case (or
> group) as a dependency, those are automatically run first.
> For example:
> @tbot.testcase(groups=["networking"], depends=["reach_login"])
> def wifi_connectivity_test():
> While something like grouping can be done with the existing
> tbot.tc.testsuite method, it seems like better organization to define
> all attributes of a testcase at the actual testcase definition.
> I was wondering if these features are something that would be desirable to
> have natively in tbot. Since I’m already writing the general logic, I could
> possibly contribute to tbot to add these. I’m new to open source
> contributions, but I think this would be pretty cool to do. Let me know if
> these features are in-scope with the desired functionality of tbot, and I
> can start contributions.
In general, I think what you are trying to implement is a large
testsuite with many tests, right? And you want the ability to run a
subset of the tests instead of having to run the entire testsuite every
time?
If that is correct, then I think you might be interested in the pytest
integration that I just documented with the latest release. It is
something that I have been doing for a long while but never got around
to documenting properly upstream until now.
The idea is that pytest is a framework that already has a lot of the
features that one might need when writing a testsuite. You can filter
which tests should run, specify "dependencies" (called test fixtures),
generate reports, etc.
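To make the mapping concrete, here is a minimal pytest sketch of the two requested features (the marker and fixture names are invented for illustration and are not part of tbot): a group becomes a marker, a dependency becomes a fixture.

```python
import pytest

# "Dependency": a fixture takes the place of depends=["reach_login"].
# pytest runs it before every test that lists it as an argument.
@pytest.fixture
def reach_login():
    # In a real testsuite this would bring the board to a login prompt,
    # e.g. via a tbot machine handle.
    yield

# "Group": a marker takes the place of groups=["networking"].
# `pytest -m networking` then selects only this group.
@pytest.mark.networking
def test_wifi_connectivity(reach_login):
    # The test body would exercise the WiFi link here.
    pass
```

Markers should be registered (e.g. in pytest.ini) to avoid warnings, and the fixture's scope= argument controls how often the "dependency" is re-established.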
So my idea was that it is better to integrate tbot with pytest than to
reimplement all those things on our own. tbot provides the mechanism
for interacting with the hardware and pytest provides the test-runner.
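As an illustration of pytest in the test-runner role, the following sketch generates a tiny stand-in testsuite and then runs only one "group" of it; the file, test, and marker names are invented for this example, and pytest.main mirrors the command line `pytest -q -m networking test_demo.py`.

```python
import pathlib
import pytest

# A throwaway testsuite: one test in the "networking" group, one outside it.
pathlib.Path("test_demo.py").write_text(
    "import pytest\n"
    "\n"
    "@pytest.mark.networking\n"
    "def test_wifi():\n"
    "    pass\n"
    "\n"
    "def test_unrelated():\n"
    "    pass\n"
)

# Run only the "networking" group; the other test is deselected.
exit_code = pytest.main(["-q", "-m", "networking", "test_demo.py"])
```

Selection by name (`-k` expression) and dry-run previews (`--collect-only`) work the same way, without any extra support code in the testsuite.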
What do you think?
If there is something not covered by this approach, I would argue it is
better to try improving the pytest integration than to provide a
downstream mechanism for it in tbot.
More information about the tbot pytest integration can be found in the
tbot documentation.