[PATCH 1/1] gitlab: show skipped Python tests
Simon Glass
sjg at chromium.org
Mon Jun 29 19:26:07 CEST 2020
Hi Tom,
On Fri, 26 Jun 2020 at 14:16, Tom Rini <trini at konsulko.com> wrote:
>
> On Wed, Jun 24, 2020 at 02:04:37PM -0600, Simon Glass wrote:
> > Hi Tom,
> >
> > On Wed, 24 Jun 2020 at 09:53, Tom Rini <trini at konsulko.com> wrote:
> > >
> > > On Wed, Jun 24, 2020 at 09:17:51AM -0600, Simon Glass wrote:
> > > > Hi Heinrich,
> > > >
> > > > On Wed, 24 Jun 2020 at 07:56, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > >
> > > > > On 24.06.20 15:49, Simon Glass wrote:
> > > > > > Hi,
> > > > > >
> > > > > > On Mon, 22 Jun 2020 at 12:46, Tom Rini <trini at konsulko.com> wrote:
> > > > > >>
> > > > > >> On Mon, Jun 22, 2020 at 12:23:35PM -0600, Simon Glass wrote:
> > > > > >>> Hi Heinrich,
> > > > > >>>
> > > > > >>> On Mon, 22 Jun 2020 at 10:40, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > > >>>>
> > > > > >>>> On 22.06.20 18:17, Simon Glass wrote:
> > > > > >>>>> Hi Heinrich,
> > > > > >>>>>
> > > > > >>>>> On Mon, 22 Jun 2020 at 10:07, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > > >>>>>>
> > > > > >>>>>> Call pytest3 with the argument -ra to display the reasons why Python
> > > > > >>>>>> tests are skipped.
> > > > > >>>>>>
> > > > > >>>>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> > > > > >>>>>> ---
> > > > > >>>>>> .gitlab-ci.yml | 2 +-
> > > > > >>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > >>>>>>
> > > > > >>>>>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> > > > > >>>>>> index f2e491c117..f53098ea5f 100644
> > > > > >>>>>> --- a/.gitlab-ci.yml
> > > > > >>>>>> +++ b/.gitlab-ci.yml
> > > > > >>>>>> @@ -46,7 +46,7 @@ stages:
> > > > > >>>>>> # "${var:+"-k $var"}" expands to "" if $var is empty, "-k $var" if not
> > > > > >>>>>> - export PATH=/opt/qemu/bin:/tmp/uboot-test-hooks/bin:${PATH};
> > > > > >>>>>> export PYTHONPATH=/tmp/uboot-test-hooks/py/travis-ci;
> > > > > >>>>>> - ./test/py/test.py --bd ${TEST_PY_BD} ${TEST_PY_ID}
> > > > > >>>>>> + ./test/py/test.py -ra --bd ${TEST_PY_BD} ${TEST_PY_ID}
> > > > > >>>>>> ${TEST_PY_TEST_SPEC:+"-k ${TEST_PY_TEST_SPEC}"}
> > > > > >>>>>> --build-dir "$UBOOT_TRAVIS_BUILD_DIR"
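For anyone not familiar with the flag: -r asks pytest for a short
summary of non-passing outcomes, and the 'a' selects everything except
passes, so skips are listed together with their reason strings. A
minimal standalone sketch (not U-Boot code; the test name and reason
string are made up):

import pytest

# A deliberately skipped test, standing in for the many U-Boot tests
# that skip themselves when a feature or board is not available.
@pytest.mark.skip(reason='feature not enabled in this build')
def test_example():
    pass

# Running "pytest -ra" on a file containing this test appends a "short
# test summary info" section listing the skipped test and the reason
# above, which is what passing -ra through test/py/test.py enables here.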
> > > > > >>>>>
> > > > > >>>>> Do you have a link showing the current output with this patch?
> > > > > >>>>
> > > > > >>>> Hello Simon,
> > > > > >>>>
> > > > > >>>> here is an example output:
> > > > > >>>>
> > > > > >>>> https://gitlab.denx.de/u-boot/custodians/u-boot-efi/-/jobs/112385
> > > > > >>>
> > > > > >>> That's what I was afraid of. The skip output is longer than the normal
> > > > > >>> output, and if we don't intend to fix it, I'd rather not have
> > > > > >>> unactionable warnings in the output.
> > > > > >>>
> > > > > >>> Having said that, we need to enable SPI flash, FPGA and MMC
> > > > > >>> environment tests by the look of it.
> > > > > >>
> > > > > >> On a different note, I think we should look at:
> > > > > >> https://gitlab.denx.de/u-boot/custodians/u-boot-efi/-/jobs/112376
> > > > > >> and:
> > > > > >> https://gitlab.denx.de/u-boot/custodians/u-boot-efi/-/jobs/112380
> > > > > >>
> > > > > >> as it shows that the reason we probably skip the test_fs/test_mkdir.py
> > > > > >> tests is that, since the board name is matched literally, we don't match
> > > > > >> sandbox on sandbox_flattree. That answers at least one outstanding
> > > > > >> question about why we skip some tests and not others.
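If I follow, the effect is roughly the sketch below; I haven't
re-checked whether the fs tests use a marker or an explicit check in
their conftest, so treat the names as illustrative only:

import pytest

# Asking for the literal board name "sandbox" means that
# sandbox_flattree (and the other sandbox variants) do not match, so
# the test is reported as skipped on those boards.
@pytest.mark.boardspec('sandbox')
def test_mkdir_example(u_boot_console):
    pass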
> > > > > >
> > > > > > Hmm yes.
> > > > > >
> > > > > > It is definitely good to have this output so we can figure out what
> > > > > > should not be skipped.
> > > > > >
> > > > > > But outputting things which we know should be skipped just means we
> > > > > > won't notice those that are not supposed to be skipped. How do we
> > > > > > handle that?
> > > > > >
> > > > > > Regards,
> > > > > > Simon
> > > > > >
> > > > > If you have lines like:
> > > > >
> > > > > .config feature "cmd_fpga_loadbp" not enabled
> > > > > board "qemu_arm64" not supported
> > > > >
> > > > > you know the test is skipped due to configuration.
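(For reference, those messages correspond to the requirements the tests
already declare; roughly, and from memory, something like the sketch
below, so treat the marker argument as approximate:)

import pytest

# The buildconfigspec marker skips the test unless the named option is
# enabled in the board's .config, giving the '.config feature ... not
# enabled' message; boardspec (as in the earlier sketch) produces the
# 'board ... not supported' message in the same way.
@pytest.mark.buildconfigspec('cmd_fpga_loadbp')
def test_fpga_loadbp_example(u_boot_console):
    pass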
> > > >
> > > > OK then can we avoid printing this useless information by default?
> > >
> > > It's not useless information. It's what I pointed to in another part of
> > > the thread as to why we're skipping tests we didn't expect to skip.
> >
> > I thought these ones were intended to be skipped? I'm perhaps just
> > confused about what is going on here.
>
> We have a number of "sandbox" targets which should be running the FS
> tests, but do not. Figuring out why has been low on my TODO list, but
> the reason is now very clear with this extra information.
>
> However, I still haven't figured out why we do see this:
> https://travis-ci.org/github/u-boot/u-boot/jobs/702114991 (which is all
> of the FS tests being run).
>
> > > > > Other messages clearly tell you that something is not correctly set up:
> > > > >
> > > > > No env__efi_loader_grub_file binary specified in environment
> > > > > got empty parameter set ['env__mmc_dev_config']
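(For reference, these come from tests looking up optional entries in
the board environment; a rough sketch of the usual pattern, quoting the
names from the messages above rather than from the code:)

import pytest

# Read an optional env__* entry from the board environment and skip,
# with a reason, when it is missing. Each such skip has its own source
# line, so each one currently gets its own line in the -ra summary.
def test_grub_example(u_boot_console):
    grub = u_boot_console.config.env.get('env__efi_loader_grub_file', None)
    if not grub:
        pytest.skip('No env__efi_loader_grub_file binary specified in environment')
    # ... the real test would exercise the binary here ...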
> > > >
> > > > OK then this is what we should display.
> > >
> > > This one is actually one I dug into a bit, and I don't like how pytest
> > > handles it. You can't add a custom parameter checker, AFAICT; you can
> > > only have empty parameter sets marked as skip or xfail.
> > >
> > > I think we could condense the output a little, since @pytest.mark skips
> > > get condensed but pytest.skip calls inside a test do not (each call site
> > > counts as a separate line number). That's what got me looking for a way
> > > to mark that a config needs to exist, but that isn't supported. We could
> > > condense some of the network-related stuff, though, by having a single
> > > test / helper for network configuration rather than duplicating it, and
> > > then marking network tests as depending on it.
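Something like the sketch below would presumably get most of the way
there; the fixture name and the boardenv key are made up, so take it as
an illustration rather than a patch:

import pytest

@pytest.fixture
def net_setup(u_boot_console):
    """Shared network-configuration check for tests that need the network."""
    cfg = u_boot_console.config.env.get('env__net_dhcp_server', None)
    if not cfg:
        # Every dependent test then skips from this one line with this
        # one reason, so -ra should be able to collapse them into a
        # single summary entry instead of printing N of them.
        pytest.skip('No network configured in the board environment')
    return cfg

def test_dhcp_example(u_boot_console, net_setup):
    # Network tests (and the EFI network tests) just request the fixture
    # instead of duplicating the check.
    pass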
> >
> > That sounds good.
> >
> > But in general, my point is that we should avoid displaying a message
> > when things are working as intended, only when some action has to be
> > taken. When everything is right, there should not be any warnings or
> > failures IMO.
>
> Sure. But this is CI, where it's better to have more information rather
> than less, and explaining why we're skipping tests, even in a perhaps
> slightly too verbose fashion, will perhaps motivate cleaning up and
> re-organizing our tests. That we have N "no network configured" items
> today, rather than perhaps 2 (one saying "not configured" and one line
> noting that the other N-1 tests were skipped because of it), is because
> there's little reason to do otherwise. An 's' is an 's'. Even if it
> means that the EFI tests duplicate the network setup. Even if it means
> that the EFI networking tests can be fragile, and the networking tests
> can also be fragile, since when you pass in specific tests to run you
> can end up without the network configured. I also suspect that having a
> valid run with 93 tests run and 213 skipped (qemu-ppce500) is violating
> some best practices for pytest.
>
> I very much see why we don't want to include this in "make qcheck" and
> similar. But for CI this is something we want.
OK well let's go with it and see if we get the hoped-for clean-up.
Merging tests is a double-edged sword though. It is nice to start a
test from a known initial state rather than lumping lots of code
together that depends on previous code.
I think it would be better to collect together the tests blocked by a
condition and show them in a list.
Reviewed-by: Simon Glass <sjg at chromium.org>
Regards,
Simon