[PATCH 3/3] arm: caches: manage phys_addr_t overflow in mmu_set_region_dcache_behaviour
Marek Vasut
marex at denx.de
Fri Apr 10 10:05:37 CEST 2020
On 4/9/20 4:18 PM, Patrick DELAUNAY wrote:
>
>
>> -----Original Message-----
>> From: Marek Vasut <marex at denx.de>
>> Sent: Friday, April 3, 2020 23:31
>> To: Patrick DELAUNAY <patrick.delaunay at st.com>; u-boot at lists.denx.de
>> Cc: Simon Glass <sjg at chromium.org>; Alexey Brodkin
>> <abrodkin at synopsys.com>; Lokesh Vutla <lokeshvutla at ti.com>; Tom Rini
>> <trini at konsulko.com>; Trevor Woerner <trevor at toganlabs.com>; U-Boot STM32
>> <uboot-stm32 at st-md-mailman.stormreply.com>
>> Subject: Re: [PATCH 3/3] arm: caches: manage phys_addr_t overflow in
>> mmu_set_region_dcache_behaviour
>>
>> On 4/3/20 10:28 AM, Patrick Delaunay wrote:
>>> Detect and solve the overflow of the phys_addr_t type for start + size
>>> in the mmu_set_region_dcache_behaviour() function.
>>>
>>> This issue occurs, for example, on ARM32 with start = 0xC0000000 and
>>> size = 0x40000000: start + size = 0x100000000 wraps around to end = 0x0.
>>>
>>> Overflow is detected when end < start.
>>> In the normal case the previous behavior is kept: when start is not
>>> aligned on an MMU section, the end address is only aligned after the
>>> sum start + size is computed.
>>>
>>> Signed-off-by: Patrick Delaunay <patrick.delaunay at st.com>
>>> ---
>>>
>>> arch/arm/lib/cache-cp15.c | 5 +++++
>>> 1 file changed, 5 insertions(+)
>>>
>>> diff --git a/arch/arm/lib/cache-cp15.c b/arch/arm/lib/cache-cp15.c
>>> index d15144188b..e5a7fd0ef4 100644
>>> --- a/arch/arm/lib/cache-cp15.c
>>> +++ b/arch/arm/lib/cache-cp15.c
>>> @@ -63,6 +63,11 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
>>>
>>> end = ALIGN(start + size, MMU_SECTION_SIZE) >> MMU_SECTION_SHIFT;
>>> start = start >> MMU_SECTION_SHIFT;
>>> +
>>> + /* phys_addr_t overflow detected */
>>> + if (end < start)
>>> + end = (~(phys_addr_t)0x0 >> MMU_SECTION_SHIFT) + 1;
>>> +
>>
>> Or, you can divide $start and $size separately by MMU_SECTION_SIZE and
>> then add them up.
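Concretely, a sketch of that alternative (hypothetical, not a tested
patch; it reuses end, start, size and MMU_SECTION_SHIFT from the function
above):

	/*
	 * Shift start and size separately, then add: two values already
	 * shifted right by MMU_SECTION_SHIFT cannot overflow phys_addr_t
	 * when added, but any sub-section remainder of start and size is
	 * dropped before the addition.
	 */
	end = (start >> MMU_SECTION_SHIFT) + (size >> MMU_SECTION_SHIFT);
	start = start >> MMU_SECTION_SHIFT;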
>
> It was my first idea, but that changes the function's behavior,
> because today start and size may not be MMU_SECTION aligned.
>
> I think it is strange, but I preferred not to change this part.
>
> Example with shift = 21 and a 2MB section size: 0x200000
>
> Start = 0x100000
> Size = 0x100000
>
> End = 0x200000
>
> => after alignment start = 0x0, end = 0x1
>
> But if we align start and size separately before the addition, as proposed, the final result changes:
>
> Start = 0x100000 => 0
> Size = 0x100000 => 0
>
> End = 0x0
>
> I prefer not to modify this current (strange) behavior, to avoid a regression.
>
> But if it is acceptable (because the caller MUST always use MMU_SECTION-aligned start and size),
> I will change the proposal.
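The difference is easy to check on the host (a sketch with the example
values above; the harness itself is not U-Boot code):

#include <stdint.h>
#include <stdio.h>

#define SHIFT	21			/* 2MB sections: 0x200000 */
#define SECTION	(1U << SHIFT)
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	uint32_t start = 0x100000, size = 0x100000;

	/* current behavior: align the sum, then shift */
	uint32_t end_cur = ALIGN(start + size, SECTION) >> SHIFT;

	/* proposed behavior: shift each term, then add */
	uint32_t end_new = (start >> SHIFT) + (size >> SHIFT);

	printf("end_cur=0x%x end_new=0x%x\n", end_cur, end_new);	/* 0x1 vs 0x0 */
	return 0;
}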
The minimum page size is 4k, right? Then divide both by 4k, add them up,
and shift by the rest of MMU_SECTION_SHIFT.
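A sketch of that variant (hypothetical and untested; the helper name is
made up, and it assumes start and size are at least 4k-aligned, per the
minimum page size):

#define PAGE_SHIFT	12	/* 4k minimum page size */

/*
 * Divide start and size by the 4k minimum page size before adding, so
 * the sum cannot overflow phys_addr_t, then round up by the remaining
 * section bits, as ALIGN() does today.
 */
static phys_addr_t mmu_end_section(phys_addr_t start, size_t size)
{
	phys_addr_t pages = (start >> PAGE_SHIFT) +
			    ((phys_addr_t)size >> PAGE_SHIFT);
	unsigned int rest = MMU_SECTION_SHIFT - PAGE_SHIFT;

	return (pages + (1UL << rest) - 1) >> rest;
}

For 4k-aligned start and size this yields the same end section index as
ALIGN(start + size, MMU_SECTION_SIZE) >> MMU_SECTION_SHIFT, without the
wrap.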