[U-Boot] [U-Boot-Board-Maintainers] [U-Boot-Custodians] [ANN] U-Boot v2019.07-rc4 released
Matwey V. Kornilov
matwey.kornilov at gmail.com
Sun Jun 30 10:34:50 UTC 2019
On 25.06.2019 15:04, Tom Rini wrote:
> On Tue, Jun 25, 2019 at 01:10:26PM +0200, Neil Armstrong wrote:
>> On 24/06/2019 17:29, Tom Rini wrote:
>>> On Sat, Jun 22, 2019 at 09:43:42PM +0200, Marek Vasut wrote:
>>>> On 6/22/19 9:12 PM, Heinrich Schuchardt wrote:
>>>>> On 6/22/19 8:15 PM, Simon Glass wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On Sat, 22 Jun 2019 at 16:10, Andreas Färber
>>>>>> <afaerber at suse.de> wrote:
>>>>>>>
>>>>>>> Hi Simon,
>>>>>>>
>>>>>>>> On 22.06.19 at 16:55, Simon Glass wrote:
>>>>>>>> I'd like to better understand the benefits of the
>>>>>>>> 3-month timeline.
>>>>>>>
>>>>>>> It takes time to learn about a release, package and
>>>>>>> build it, test it on various hardware, investigate and
>>>>>>> report errors, wait for feedback and fixes, rinse and
>>>>>>> repeat with the next -rc. Many people don't do this as
>>>>>>> their main job.
>>>>>>>
>>>>>>> If we shorten the release cycle, newer boards will get
>>>>>>> out faster (which is good), but the overall quality of
>>>>>>> boards not actively worked on (because they were
>>>>>>> working well enough before) will decay, which is bad.
>>>>>>> The only way to counteract that would be to
>>>>>>> automatically test on real hardware rather than just
>>>>>>> building, and doing that for all these masses of boards
>>>>>>> seems unrealistic.
>>>>>>
>>>>>> Here I think you are talking about distributions. But why
>>>>>> not just take every second release?
>>>>>>
>>>>>> I have certainly had the experience of getting a board out
>>>>>> of the cupboard and finding that the latest U-Boot doesn't
>>>>>> work, nor the one before, nor the three before that.
>>>>>>
>>>>>> Are we actually seeing fewer regressions? I feel that
>>>>>> testing is the only way to get there.
>>>>>>
>>>>>> Perhaps we should select a small subset of boards which
>>>>>> do get tested, and actually have custodians build/test on
>>>>>> those for every rc?
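
For illustration, the build half of such a per-rc run could be a very
small script. A minimal sketch, assuming U-Boot's buildman in its
usual location and a hand-maintained boards.txt listing one board
term per line (both paths are placeholders, not something from this
thread):

    #!/usr/bin/env python3
    # Build a hand-picked subset of boards for each -rc.
    # Assumes tools/buildman/buildman from the U-Boot tree and a
    # boards.txt with one board/defconfig term per line (both are
    # assumptions about the local setup).
    import subprocess
    import sys

    def build_subset(boards_file="boards.txt"):
        with open(boards_file) as f:
            boards = [line.strip() for line in f if line.strip()]
        failed = []
        for board in boards:
            # buildman selects boards by positional terms; -o keeps
            # a separate output directory per board
            ret = subprocess.run(["tools/buildman/buildman",
                                  "-o", "build/" + board, board])
            if ret.returncode != 0:
                failed.append(board)
        return failed

    if __name__ == "__main__":
        failed = build_subset()
        print("failed:", ", ".join(failed) or "none")
        sys.exit(1 if failed else 0)

The test half would then have to drive the boards themselves, which
is what the rest of this thread is about.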
>>>>>
>>>>> What I have been doing before all my recent pull requests
>>>>> is to boot both an arm32 (Orange Pi) and an aarch64
>>>>> (Pine A64 LTS) board via bootefi and GRUB. To make this
>>>>> easier I am using a Raspberry Pi with a relay board and a
>>>>> Tizen SDWire card (https://wiki.tizen.org/SDWire)
>>>>> controlling the system under test, cf.
>>>>> https://pbs.twimg.com/media/D5ugi3iX4AAh1bn.jpg:large
>>>>> What would be needed are scripts to automate the testing,
>>>>> including all the Python tests.
>>>>>
>>>>> It would make sense to have such test automation for all of
>>>>> our architectures, similar to what KernelCI
>>>>> (https://kernelci.org/) does.
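
The glue script meant above can stay quite small. A rough sketch of
the idea, assuming RPi.GPIO for the relay, the Tizen sd-mux-ctrl tool
for the SDWire, and U-Boot's test/py/test.py; the pin number, SDWire
serial and board name are made-up examples:

    #!/usr/bin/env python3
    # Power-cycle a board through a relay on a Raspberry Pi GPIO
    # pin, hand the SD card to the device under test through the
    # SDWire, then run U-Boot's Python test suite against it.
    import subprocess
    import time

    import RPi.GPIO as GPIO  # relay is driven from a Pi GPIO pin

    RELAY_PIN = 17            # hypothetical BCM pin wired to the relay
    SDWIRE_SERIAL = "sdw-01"  # hypothetical SDWire serial number

    def power(on):
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(RELAY_PIN, GPIO.OUT)
        GPIO.output(RELAY_PIN, GPIO.HIGH if on else GPIO.LOW)

    def sd_to_dut():
        # sd-mux-ctrl is the Tizen tool for the SDWire; --dut hands
        # the card to the device under test, --ts back to the host
        subprocess.run(["sd-mux-ctrl", "--device-serial",
                        SDWIRE_SERIAL, "--dut"], check=True)

    def run_tests(board="orangepi_pc"):  # example board name
        return subprocess.run(["test/py/test.py", "--bd", board,
                               "--build-dir",
                               "build/" + board]).returncode

    if __name__ == "__main__":
        power(False)          # cold start: cut power first
        sd_to_dut()           # present the freshly written SD card
        time.sleep(1)
        power(True)
        raise SystemExit(run_tests())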
>>>>
>>>> So who's gonna set it up and host it?
>>>
>>> My hope is that we can make use of the GitLab CI features to
>>> carefully (!!!!) expose some labs and setups.
>>
>> Yes, the GitLab CI could send jobs to LAVA instances to run
>> physical boot tests; we (BayLibre) plan to investigate this at
>> some point, re-using our KernelCI infrastructure.
>
> That seems like overkill, possibly. How hard would it be to have
> LAVA kick off our test.py code? In the .gitlab-ci.yml I posted, I
> migrated the logic we have for Travis to run our tests. I wonder
> how hard it would be to have test.py "check out" (or whatever)
> machines from LAVA?
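
For what it's worth, LAVA exposes an XML-RPC API, so a thin wrapper
could submit a job that runs test.py in the lab and wait for the
verdict, instead of owning a board directly. A rough sketch, assuming
a LAVA v2 server and a prepared job definition; the server URL,
credentials, file name and the job_health call are assumptions about
the setup:

    #!/usr/bin/env python3
    # Submit a LAVA job that runs U-Boot's test.py on a lab board,
    # then poll until it finishes, via LAVA's XML-RPC interface.
    # The URL, token and job file below are placeholders.
    import time
    import xmlrpc.client

    SERVER = "https://user:token@lava.example.org/RPC2"

    def run_lava_job(job_file="uboot-testpy-job.yaml"):
        server = xmlrpc.client.ServerProxy(SERVER)
        with open(job_file) as f:
            job_id = server.scheduler.submit_job(f.read())
        while True:
            # job_health staying "Unknown" until the job ends is an
            # assumption about the LAVA v2 server version in use
            health = server.scheduler.job_health(job_id)["job_health"]
            if health != "Unknown":
                return health
            time.sleep(30)

    if __name__ == "__main__":
        print("LAVA job finished:", run_lava_job())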
>
Isn't it possible to kick off LAVA from GitLab webhooks?
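
Something small along these lines might already do it: a toy endpoint
that GitLab's push webhook POSTs to, which then submits a pre-canned
LAVA job over the same XML-RPC interface as above (host, port, ref
filter and job file are made up):

    #!/usr/bin/env python3
    # Toy webhook endpoint: GitLab POSTs a push event here and we
    # answer by submitting a pre-canned LAVA job over XML-RPC.
    # Host, port, LAVA URL and job file are placeholders.
    import json
    import xmlrpc.client
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LAVA = "https://user:token@lava.example.org/RPC2"

    class Hook(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers["Content-Length"])
            payload = json.loads(self.rfile.read(length))
            # react only to tag pushes (e.g. -rc tags); "Push Hook"
            # is the event name GitLab sends for push events
            if (self.headers.get("X-Gitlab-Event") == "Push Hook"
                    and payload.get("ref", "").startswith("refs/tags/")):
                server = xmlrpc.client.ServerProxy(LAVA)
                with open("uboot-testpy-job.yaml") as f:
                    server.scheduler.submit_job(f.read())
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), Hook).serve_forever()

A GitLab CI job could of course call the same submit code directly
from a pipeline stage, which avoids running a separate listener.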