[U-Boot] [RFC PATCH] armv8: cache: Switch MMU table without turning off MMU

Stephen Warren swarren at wwwdotorg.org
Fri Feb 10 17:04:02 UTC 2017


On 02/03/2017 03:22 PM, York Sun wrote:
> We don't have to completely turn off the MMU and cache to switch to
> another MMU table, as long as the data is coherent before and after
> the switch. This patch relaxes the procedure.

This seems plausible, but I'm not well enough immersed in the details 
to really review it. I've CC'd a couple of Linux people from ARM who 
should be able to review.

> Signed-off-by: York Sun <york.sun at nxp.com>
> CC: Alexander Graf <agraf at suse.de>
> ---
> I found this issue while trying to change the MMU table for an SPL boot.
> The RAM version of U-Boot was copied to memory while the d-cache was
> enabled. The SPL code never flushed the cache (I will send another patch
> to fix that). With the existing code, U-Boot stops running as soon as
> the MMU is disabled. With the proposed change below, U-Boot continues
> to run. I have been in contact with ARM support and got very useful
> information. However, this method of switching the TTBR is "behaviour
> that should work" but has not been well verified by ARM. During my
> debugging I found another minor issue, which convinced me this code
> wasn't being exercised. I don't intend to use this method in the long
> term, but figured fixing it may help others.
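
A rough sketch of the SPL-side fix alluded to above, assuming the 
struct spl_image_info fields and the flush_dcache_range() helper 
available in U-Boot; where exactly the call lands is illustrative, not 
the actual follow-up patch:

    /*
     * After copying the RAM U-Boot image with the d-cache enabled,
     * write the image back to memory so it remains visible once the
     * cache/MMU configuration changes.
     */
    flush_dcache_range(spl_image->load_addr,
                       spl_image->load_addr + spl_image->size);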
>
>  arch/arm/cpu/armv8/cache.S | 44 +++++++++-----------------------------------
>  1 file changed, 9 insertions(+), 35 deletions(-)
>
> diff --git a/arch/arm/cpu/armv8/cache.S b/arch/arm/cpu/armv8/cache.S
> index f1deaa7..63fb112 100644
> --- a/arch/arm/cpu/armv8/cache.S
> +++ b/arch/arm/cpu/armv8/cache.S
> @@ -171,35 +171,10 @@ ENDPROC(__asm_invalidate_l3_icache)
>  /*
>   * void __asm_switch_ttbr(ulong new_ttbr)
>   *
> - * Safely switches to a new page table.
> + * Switches to a new page table. Cache coherency must be maintained
> + * before calling this function.
>   */
>  ENTRY(__asm_switch_ttbr)
> -	/* x2 = SCTLR (alive throghout the function) */
> -	switch_el x4, 3f, 2f, 1f
> -3:	mrs	x2, sctlr_el3
> -	b	0f
> -2:	mrs	x2, sctlr_el2
> -	b	0f
> -1:	mrs	x2, sctlr_el1
> -0:
> -
> -	/* Unset CR_M | CR_C | CR_I from SCTLR to disable all caches */
> -	movn	x1, #(CR_M | CR_C | CR_I)
> -	and	x1, x2, x1
> -	switch_el x4, 3f, 2f, 1f
> -3:	msr	sctlr_el3, x1
> -	b	0f
> -2:	msr	sctlr_el2, x1
> -	b	0f
> -1:	msr	sctlr_el1, x1
> -0:	isb
> -
> -	/* This call only clobbers x30 (lr) and x9 (unused) */
> -	mov	x3, x30
> -	bl	__asm_invalidate_tlb_all
> -
> -	/* From here on we're running safely with caches disabled */
> -
>  	/* Set TTBR to our first argument */
>  	switch_el x4, 3f, 2f, 1f
>  3:	msr	ttbr0_el3, x0
> @@ -209,14 +184,13 @@ ENTRY(__asm_switch_ttbr)
>  1:	msr	ttbr0_el1, x0
>  0:	isb
>
> -	/* Restore original SCTLR and thus enable caches again */
> -	switch_el x4, 3f, 2f, 1f
> -3:	msr	sctlr_el3, x2
> -	b	0f
> -2:	msr	sctlr_el2, x2
> -	b	0f
> -1:	msr	sctlr_el1, x2
> -0:	isb
> +	/* invalidate i-cache */
> +	ic      ialluis
> +	isb     sy
> +
> +	/* This call only clobbers x30 (lr) and x9 (unused) */
> +	mov	x3, x30
> +	bl	__asm_invalidate_tlb_all
>
>  	ret	x3
>  ENDPROC(__asm_switch_ttbr)
>
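
For reference, the patched routine now boils down to the sequence 
below, shown as GCC inline asm for the EL3 case only. This is just a 
rough C rendering of the assembly above, with the tail call expanded 
to what __asm_invalidate_tlb_all does at EL3:

    static inline void switch_ttbr_el3(unsigned long new_ttbr)
    {
            /* Point TTBR0_EL3 at the new table */
            asm volatile("msr ttbr0_el3, %0" : : "r" (new_ttbr));
            asm volatile("isb");

            /* Discard stale i-cache contents (inner-shareable scope) */
            asm volatile("ic ialluis");
            asm volatile("isb sy");

            /* Flush translations cached under the old table */
            asm volatile("tlbi alle3");
            asm volatile("dsb sy");
            asm volatile("isb");
    }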
