Two jobs at once on denx-vulcan?

Tom Rini trini at konsulko.com
Fri Sep 24 16:55:53 CEST 2021


On Fri, Sep 24, 2021 at 08:38:49AM -0600, Simon Glass wrote:
> Hi Tom,
> 
> On Fri, 24 Sept 2021 at 08:20, Tom Rini <trini at konsulko.com> wrote:
> >
> > On Fri, Sep 24, 2021 at 04:01:21PM +0200, Harald Seiler wrote:
> > > Hi Simon,
> > >
> > > On Mon, 2021-09-20 at 08:06 -0600, Simon Glass wrote:
> > > > Hi Harald,
> > > >
> > > > On Mon, 20 Sept 2021 at 02:12, Harald Seiler <hws at denx.de> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > On Sat, 2021-09-18 at 10:37 -0600, Simon Glass wrote:
> > > > > > Hi,
> > > > > >
> > > > > > Is there something screwy with this? It seems that denx-vulcan does
> > > > > > two builds at once?
> > > > > >
> > > > > > https://source.denx.de/u-boot/custodians/u-boot-dm/-/jobs/323540
> > > > >
> > > > > Hm, I did some changes to the vulcan runner which might have caused
> > > > > this... But still, even if it is running multiple jobs in parallel, they
> > > > > should still be isolated, so how does this lead to a build failure?
> > > >
> > > > I'm not sure that it does, but I do see this at the above link:
> > > >
> > > > Error: Unable to create
> > > > '/builds/u-boot/custodians/u-boot-dm/.git/logs/HEAD.lock': File
> > > > exists.
> > >
> > > This is super strange... Each build should be running in its own
> > > container so there should never be a way for such a race to occur.  No
> > > clue what is going on here...
> >
> > I know this from having to track down a different oddball failure with
> > konsulko-bootbake.  It comes down to something along the lines of
> > volumes being re-used.  Good in that not every job has to do a whole
> > clone of the U-Boot tree every time.  Bad in that if the job gets
> > wedged/killed at an awkward spot you end up with problems like
> > this.  If you run a 'find' on vulcan you'll figure out which overlay has
> > a problem.  Or you can stop the runner for a moment and tell docker to
> > purge unused volumes and it'll clear it up.
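> >
> > As a rough sketch (the exact commands depend on how the runner is
> > installed, so treat this as an approximation rather than a recipe):
> > stop the runner, prune the volumes Docker no longer references, and
> > start it again:
> >
> >       # stop the runner so no new job grabs a stale volume
> >       gitlab-runner stop
> >       # remove all volumes not referenced by any container
> >       docker volume prune -f
> >       gitlab-runner start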
> >
> > > > Re doing multiple builds, have you set it up so it doesn't take on the
> > > > very large builds? I would love to enable multiple builds for the qemu
> > > > steps since they mostly use a single CPU, but am not sure how to do
> > > > it.
> > >
> > > Actually, this was more a mistake than an intentional change.  I updated
> > > the runner on vulcan to also take jobs for some other repos and wanted
> > > those jobs to run in parallel.  It looks like I just forgot to set the
> > > `limit = 1` option for the U-Boot runner.
> > >
> > > Now, I think doing what you suggest is possible.  We need to tag build
> > > and "test" jobs differently and then define multiple runners with
> > > different limits.  E.g. in `.gitlab-ci.yml`:
> > >
> > >       build all 32bit ARM platforms:
> > >         stage: world build
> > >         tags:
> > >           - build
> > >
> > >       cppcheck:
> > >         stage: testsuites
> > >         tags:
> > >           - test
> > >
> > > And then define two runners in `/etc/gitlab-runner/config.toml`:
> > >
> > >       concurrent = 4
> > >
> > >       [[runners]]
> > >         name = "u-boot builder on vulcan"
> > >         limit = 1
> > >         ...
> > >
> > >       [[runners]]
> > >         name = "u-boot tester on vulcan"
> > >         limit = 4
> > >         ...
> > >
> > > and during registration they get the `build` and `test` tags
> > > respectively.  This would allow running (in this example) up to 4 test
> > > jobs concurrently, but only ever one large build job at once.
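> > >
> > > For completeness, the registration step could look something like
> > > this (the URL and token here are placeholders):
> > >
> > >       gitlab-runner register \
> > >         --non-interactive \
> > >         --url "https://source.denx.de/" \
> > >         --registration-token "<TOKEN>" \
> > >         --executor docker \
> > >         --description "u-boot builder on vulcan" \
> > >         --tag-list "build"
> > >
> > > and the same again with `--tag-list "test"` for the tester runner.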
> >
> > Yes, but this would also make it harder for people to use the CI as-is
> > with their own runners.  For example, the only thing stopping people
> > from using the free gitlab CI runners on their own is that squashfs
> > test being broken.
> 
> Thanks for the info Harald.
> 
> Would it just mean that they would need to add both 'build' and 'test'
> tags to their runners? If so, that does not sound onerous.

Along with not being able to use the gitlab free runners.

> I believe it would speed up CI quite a bit.

I'm not sure?  First, did you upgrade your runners recently?  I started
by looking at
https://source.denx.de/u-boot/u-boot/-/pipelines/9238/builds and all of
the last stage jobs went super quick.  But second, assuming the time
there includes spinning up the runner, sandbox+clang took twice as long
as regular sandbox while running fewer tests:
https://source.denx.de/u-boot/u-boot/-/jobs/326772
https://source.denx.de/u-boot/u-boot/-/jobs/326773

But we might save a minute or two if all of the other much quicker
tests ran to completion sooner; we'd still be stuck waiting on the
longest-running test.

So while I think splitting the job into stages is worth keeping, so that
if something fails early we can call the whole run off, a timing test
with just a single stage would be interesting: more stuff would run in
parallel and it might well be quicker, especially once we have more free
runners.  And to me, sadly, that's our biggest gating factor, and the
one that can be solved with money rather than technical wizardry.

-- 
Tom