[PATCH] test: make tests should use pytest -ra
Simon Glass
sjg at chromium.org
Tue Apr 21 20:13:08 CEST 2020
Hi Tom,
On Tue, 21 Apr 2020 at 11:47, Tom Rini <trini at konsulko.com> wrote:
>
> On Tue, Apr 21, 2020 at 11:36:32AM -0600, Simon Glass wrote:
> > Hi Heinrich,
> >
> > On Mon, 20 Apr 2020 at 13:03, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > >
> > > On 4/20/20 8:45 PM, Simon Glass wrote:
> > > > Hi Heinrich,
> > > >
> > > > On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > >>
> > > >> On 4/20/20 1:38 AM, Simon Glass wrote:
> > > >>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > >>>>
> > > >>>> By passing -ra to pytest we get a summary indicating which tests failed
> > > >>>> and why tests were skipped.
> > > >>>>
> > > >>>> Here is an example output:
> > > >>>>
> > > >>>> ======================== short test summary info =========================
> > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> > > >>>> configuration is defined
> > > >>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> > > >>>>
> > > >>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> > > >>>> ---
> > > >>>> test/run | 6 +++---
> > > >>>> 1 file changed, 3 insertions(+), 3 deletions(-)
> > > >>>>
> > > >>>
> > > >>> This is really noisy - I get lots of extra output. Can we make this an option?
> > > >>
> > > >> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> > > >> and failed tests.
> > > >>
> > > >> Lines like these are noise because there is no actionable information:
> > > >>
> > > >> test/py/tests/test_fs/test_basic.py
> > > >> sssssssssssssssssssssssssssssssssssssss [ 0%]
> > > >> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [ 0%]
> > > >> test/py/tests/test_fs/test_mkdir.py ssssssssssss [ 0%]
> > > >> test/py/tests/test_fs/test_symlink.py ssss [ 0%]
> > > >> test/py/tests/test_fs/test_unlink.py ssssssssssssss [ 0%]
> > > >>
> > > >> This new line has actionable information:
> > > >>
> > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > >> filesystem: fat16
> > > >>
> > > >> The next step is to change this line to provide more useful output, e.g.:
> > > >>
> > > >> -    except CalledProcessError:
> > > >> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> > > >> +    except CalledProcessError as err:
> > > >> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> > > >> +                    ', {}'.format(err))
> > > >>
> > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > >> filesystem: fat16, Command 'mkfs.vfat -F 16
> > > >> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> > > >> status 127.
> > > >>
> > > >> Now we know that the test is wrong by assuming that mkfs.vfat is in
> > > >> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> > > >> can fix it.
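> > > >>
> > > >> One possible shape for the fix (just a sketch; find_tool() is made up
> > > >> for illustration and is not in conftest.py today):
> > > >>
> > > >> import shutil
> > > >>
> > > >> def find_tool(name):
> > > >>     # Search PATH first, then the sbin directories that are often
> > > >>     # missing from a non-root user's PATH.
> > > >>     return (shutil.which(name) or
> > > >>             shutil.which(name, path='/usr/sbin:/sbin'))
> > > >>
> > > >> # pytest and fs_type are already available in conftest.py
> > > >> mkfs_vfat = find_tool('mkfs.vfat')
> > > >> if not mkfs_vfat:
> > > >>     pytest.skip('Setup failed for filesystem: ' + fs_type +
> > > >>                 ', no mkfs.vfat found')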
> > > >>
> > > >> We should get rid of all skipped tests, especially on Travis CI and
> > > >> GitLab CI. Further, we should provide instructions for setting up a
> > > >> local system so that no tests are skipped.
> > > >>
> > > >> Simon, why do you want to remove the actionable information?
> > > >
> > > > I don't want to remove it. It isn't there now!
> > > >
> > > > Let's fix it before we enable it. Otherwise it is just noise. The
> > > > device tree fiasco is a real pain.
> > > >
> > > > BTW the fs tests are flaky for me so I seldom run them.
> > >
> > > What do you mean by flaky?
> > >
> > > Do you mean unreliable (cf.
> > > https://www.urbandictionary.com/define.php?term=flaky)?
> >
> > Yes!
> >
> > >
> > > What is unreliable about the tests?
> >
> > You have it above - the filesystem tests sometimes fail for me.
>
> I think I've seen some of the FAT tests fail historically, but not recently.
> The biggest problem I see with them is that they are skipped or run for
> seemingly random reasons, under gitlab. And the root patch here would
> help to see why. For example:
> https://gitlab.denx.de/u-boot/u-boot/-/jobs/80886 skips them but
> https://gitlab.denx.de/u-boot/u-boot/-/jobs/80850 runs them. Maybe a
> runner config problem?
Oh gosh that is odd.
>
> > I think all the other tests are good, although I think there is one
> > that has a time delay in it that needs to be fixed.
> >
> > Also we should really run the tests concurrently like binman does (see
> > tools/concurrencytest).
>
> I'm not sure how we could run most tests concurrently and keep things
> generic. We can spawn N sandbox binaries but only one physical board.
Yes, this is only for sandbox. It is pretty easy to turn on when it is allowed.
The current code in binman does this:
use_concurrent = True
try:
    from concurrencytest import ConcurrentTestSuite, fork_for_tests
except:
    use_concurrent = False

then:

if use_concurrent and processes != 1:
    concurrent_suite = ConcurrentTestSuite(suite,
            fork_for_tests(processes or multiprocessing.cpu_count()))
    concurrent_suite.run(result)
else:
    suite.run(result)
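
For reference, here is a self-contained sketch of the same pattern (the
run_suite() wrapper and its arguments are made up for illustration; binman
builds its suite and result elsewhere):

import multiprocessing
import unittest

# concurrencytest is the copy shipped in tools/concurrencytest
use_concurrent = True
try:
    from concurrencytest import ConcurrentTestSuite, fork_for_tests
except ImportError:
    use_concurrent = False

def run_suite(suite, processes=None):
    # Fork one worker per CPU when concurrencytest is available and more
    # than one process is wanted; otherwise just run the suite serially.
    result = unittest.TestResult()
    if use_concurrent and processes != 1:
        concurrent_suite = ConcurrentTestSuite(suite,
                fork_for_tests(processes or multiprocessing.cpu_count()))
        concurrent_suite.run(result)
    else:
        suite.run(result)
    return result

The nice property is that it degrades gracefully: if concurrencytest is
missing, or only one process is requested, the tests still run, just not
in parallel.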
Regards,
Simon