[U-Boot] [PATCH 38/60] ARM: tegra: remove tegra_get_chip()

Simon Glass sjg at chromium.org
Sun May 1 21:16:51 CEST 2016


Hi Stephen,

On 29 April 2016 at 13:21, Stephen Warren <swarren at wwwdotorg.org> wrote:
> On 04/29/2016 11:42 AM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 29 April 2016 at 10:53, Simon Glass <sjg at chromium.org> wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 29 April 2016 at 10:27, Stephen Warren <swarren at wwwdotorg.org> wrote:
>>>>
>>>> On 04/29/2016 08:02 AM, Simon Glass wrote:
>>>>>
>>>>>
>>>>> Hi Stephen,
>>>>>
>>>>> On 27 April 2016 at 10:13, Stephen Warren <swarren at wwwdotorg.org>
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> On 04/27/2016 08:50 AM, Simon Glass wrote:
>>>>>>>
>>>>>>> Hi Stephen,
>>>>>>>
>>>>>>> On 25 April 2016 at 13:25, Stephen Warren <swarren at wwwdotorg.org>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> On 04/23/2016 11:14 AM, Simon Glass wrote:
>>>>>>>>>
>>>>>>>>> Hi Stephen,
>>>>>>>>>
>>>>>>>>> On 19 April 2016 at 14:59, Stephen Warren <swarren at wwwdotorg.org>
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> From: Stephen Warren <swarren at nvidia.com>
>>>>>>>>>>
>>>>>>>>>> U-Boot is compiled for a single board, which in turn uses a
>>>>>>>>>> specific
>>>>>>>>>> SoC.
>>>>>>>>>> There's no need to make runtime decisions based on SoC ID. While
>>>>>>>>>> there's
>>>>>>>>>> certainly an argument for making the code support different SoCs
>>>>>>>>>> at
>>>>>>>>>> run-time, the Tegra code is so far from that possible ideal that
>>>>>>>>>> the
>>>>>>>>>> existing runtime code is an anomaly. If this changes in the
>>>>>>>>>> future,
>>>>>>>>>> all
>>>>>>>>>> runtime decisions should likely be based on DT anyway.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Stephen Warren <swarren at nvidia.com>
>>>>>>>>>> ---
>>>>>>>>>>      arch/arm/mach-tegra/ap.c               | 106
>>>>>>>>>> ++++++++++-----------------------
>>>>>>>>>>      arch/arm/mach-tegra/cache.c            |  20 +++----
>>>>>>>>>>      arch/arm/mach-tegra/cpu.c              |  16 ++---
>>>>>>>>>>      arch/arm/mach-tegra/cpu.h              |   6 --
>>>>>>>>>>      arch/arm/mach-tegra/tegra20/warmboot.c |  20 ++-----
>>>>>>>>>>      5 files changed, 51 insertions(+), 117 deletions(-)
>>>>>>>>>
>>>>>>>>> What exactly is missing to prevent multi-arch support?
>>>>>>>>
>>>>>>>> In a word: everything :-)
>>>>>>>>
>>>>>>>> Pretty much all decisions in core architecture code, core Tegra
>>>>>>>> code,
>>>>>>>> drivers, and even board files are currently made at compile time.
>>>>>>>> For
>>>>>>>> example, consider drivers where the register layouts are different
>>>>>>>> between
>>>>>>>> different SoCs; not just new fields added, but existing fields moved
>>>>>>>> to
>>>>>>>> different offsets. Right now, we handle this by changing the
>>>>>>>> register
>>>>>>>> struct
>>>>>>>> definition at compile time. To support multiple chips, we'd have to
>>>>>>>> either
>>>>>>>> (a) link in n copies of the driver, one per register layout, or (b)
>>>>>>>> rework
>>>>>>>> the driver to use #defines and runtime calculations for register
>>>>>>>> offsets,
>>>>>>>> like the Linux kernel drivers do. Tegra USB is one example. The
>>>>>>>> pinmux
>>>>>>>> and
>>>>>>>> clock drivers have significantly different sets of
>>>>>>>> pins/clocks/resets/...
>>>>>>>> per SoC, and enums/tables describing those sets are currently
>>>>>>>> configured at
>>>>>>>> compile time. Some PMIC constants (e.g. vdd_cpu voltage) are
>>>>>>>> configured
>>>>>>>> at
>>>>>>>> compile-time, and even differ per board.
>>>>>>>
>>>>>>> I wonder how far we would get by converting clock, pinctrl, reset to
>>>>>>> driver model drivers?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Well, I expect we'll find out soon. The next SoC has radically
>>>>>> different
>>>>>> clock/reset mechanisms, so we'll need to switch to standardized APIs
>>>>>> for
>>>>>> clock/reset on Tegra to isolate drivers from those differences, and I
>>>>>> imagine that conversion would also involve conversion to DM since any
>>>>>> standard APIs probably assume use of DM. I haven't investigated this
>>>>>> in
>>>>>> detail yet though.
>>>>>
>>>>> That sounds like a good move.
>>>>
>>>>
>>>> I'm not sure if you still object to this patch for now, or would be
>>>> happy
>>>> giving it an ack?
>>
>>
>> And just to be clear, I'm always keen to move things forward and I can
>> see you have put a lot of work into this series. It would be easier to
>> just apply it as is. What I am looking for is how we might get closer
to the goal, perhaps after this series. This series has exposed this,
which is why I have brought it up.
>
>
> So, just to be clear: TomW can apply this (once I post V2) even if you're
> not giving an ack?

Provided you are OK with using drivers for clock, pinctrl etc. with
your new Tegra work. I'd really like to move away from all this code
in arch/arm, private APIs, etc.

>
>>> Sorry, I still feel this is going in the wrong
>>> direction...CONFIG_TEGRA124 should mean that the code supports it,
>>> rather than the code exclusively supports it.
>
>
> That's certainly not what it means right now, and my intention in this
> series was not to address any shortcomings in the current meanings of
> Kconfig.

Well, it means that in quite a few places - e.g. this one that you are removing.

>
> No matter what, if we do drive all this core SoC code from DT eventually, I
> don't imagine tegra_get_chip() will be the way code gets dispatched between
> multiple chips in the future, so removing it now shouldn't hurt any later
> conversion to runtime multi-chip support.
>
> In practice in a DT-driven world, I'd expect:
>
> - The code in enable_scu() to be part of some Tegra20-specific "SoC driver"
> or similar, and hence not need any runtime support restored.
>
> - The code in cache.c to be part of some specific cache drivers, which would
> only be instantiated on Tegra20, and hence not need any runtime support
> restored.
>
> - The code in cpu.c to be moved into a clock driver, and base on some
> per-SoC table in the clock driver, and hence not need any runtime
> special-casing.
>
> As such, I don't see any of this patch being reverted later, and I
> don't believe leaving in these runtime calls would make it any easier to
> refactor the code into drivers later.

Sounds reasonable. So long as we are agreed on using drivers then I am
OK with it as a means to an end.

>
>>> In your commit message you say:
>>>
>>> "While there's
>>> certainly an argument for making the code support different SoCs at
>>> run-time, the Tegra code is so far from that possible ideal that the
>>> existing runtime code is an anomaly. If this changes in the future, all
>>> runtime decisions should likely be based on DT anyway."
>>>
>>> Or even build time... That statement seems like saying that
>>> everything is so terrible that we might as well give up.
>
>
> I'm just being practical. Please note in particular the comment about basing
> such decisions on DT; that and refactoring the code into DT-instantiated
> drivers would leave it so different that the tiny number of current runtime
> switches doesn't help anything.

Right, but you are actually removing functions that detect the SoC.
That is surely heading the wrong way.

>
>>> What's your plan to move code into drivers? I am happy to help move
>>> things over and get towards the goal rather than further away. But
>>> really that should happen first.
>
>
> I don't have a plan for anything besides the clock and reset drivers at the
> moment. I /think/ that most of the rest of the code in arch/arm/mach-tegra
> simply won't be needed on Tegra186. We'll see; I only have UART and GPIO
> working right now, well and a very hacked-up MMC to avoid clock/reset
> access. There is a large amount of work left that I haven't started looking
> at yet.

OK good.

>
>>> Also, the model that seems to be emerging is that SPL detects the
>>> platform and passes the correct DT to U-Boot proper.
>
>
> There is no SPL on Tegra210 or later. It would be a pity to have to
> introduce one just to do runtime DT selection when there's no need to have
> runtime DT selection.

How does the platform start up? Does it use BCT and then jump straight
into U-Boot?

>
>>> Specifically I think the end state should be:
>>> - Most SoC code is in drivers using driver model
>
>
> That is fine in many cases. There's plenty of low-level bootstrap code where
> I'm not sure that it's worth it. Equally, given that you have stated DM==DT,
> I worry about the implications of that statement if universally applied,
> rather than selectively/pragmatically applied.

I have not stated that DM==DT. Where there are reasons to use platform
data we can do that. See this comment in the driver-model README:

*** Note: platform data is the old way of doing things. It is
*** basically a C structure which is passed to drivers to tell them about
*** platform-specific settings like the address of its registers, bus
*** speed, etc. Device tree is now the preferred way of handling this.
*** Unless you have a good reason not to use device tree (the main one
*** being you need serial support in SPL and don't have enough SRAM for
*** the cut-down device tree and libfdt libraries) you should stay away
*** from platform data.

This is a boot loader. We need to be pragmatic. This is why I push
back on core driver-model code bloat.

>
>>> - It is possible to build most of the tegra code without building each
>>> SoC (just by having a 'tegra' board which enables all drivers)
>
>
> I'm not convinced this is worth it universally.
>
> For the purposes of actually running a binary on a device, I believe there
> is zero need for this.
>
> For compile coverage it might be nice, especially for driver code, although
> running buildman on one Tegra board for each SoC generation doesn't take
> much time.
>
> Even for drivers that solely use generic (non-SoC-version-specific) APIs,
> this would be difficult in some cases. This plays back into my earlier
> comments elsewhere about structs-vs-defines for register address
> calculations. Register layouts of some devices vary incompatibly between
> Tegra SoC generations. With a define-based driver, this can all be handled
> at run-time. With a struct-based driver, one must compile the code once per
> register layout. Rather than invest in making a "compile test" build that
> does this, which would require various build system work, it's much simpler
> to just compile the driver multiple times as part of a set of board builds,
> which already works today. The result is still compiling the code n times,
> so there wouldn't be much time saving wrapping everything into one build.

Well, your hardware guys really should make up their minds. Is there any
code review? And s/w people should have a system (with serial driver)
running before tape-out and catch this stuff.

But in any case, struct-based stuff is not a requirement now. I
believe a lot of people prefer it (including me) but in the case you
raise it isn't practical. See ns16550 for how this can be done badly.

>
>>> - If someone wanted to support multiple SoCs it would not be
>>> impossible to do so (even if mainline doesn't make it)
>
>
> I don't think it's a good idea to support features that mainline doesn't
> use. They'll bit-rot and not work anyway. I also can't see any use-case for
> such a build.

While SoC vendors think of the SoC as the most important factor in
the platform, we are starting to see similar devices using different
SoCs, or at least different variants. From a user point of view I'd
rather not worry about which Raspberry Pi I have, for example, so long
as I can build something that boots.

>
>>> - New SoCs are supported mostly by adding new drivers (although there
>>> is always of course some core/start-up code)
>
>
> Yes, it certainly makes sense to use drivers for functionality which other
> pieces of the code must interact with in a generic way. For example, calls
> from I2C/MMC drivers to clock/reset APIs.
>
> I expect some debate over what code is core/start-up code though. For
> example, static pinmux setup fits that category for me, since there's no
> generic code that needs to interface with it.

So please put your pinmux code in a pinctrl driver, even if it is just in
the probe() method.

Also I have a Tegra124 platform now. Once the smoke has cleared with
your work I'd like to take a look at how to improve things.

Regards,
Simon

