[U-Boot] [PATCH 5/8] ARMv8: PSCI: Add generic ARMv8 PSCI code

Mark Rutland mark.rutland at arm.com
Thu Aug 28 13:37:02 CEST 2014


Hi Arnab,

On Wed, Aug 27, 2014 at 09:29:58PM +0100, Arnab Basu wrote:
> Implement core support for PSCI. As this is generic code, it doesn't
> implement anything really useful (all the functions are returning
> Not Implemented).

This is really nice to see! Thanks for working on this.

Some functions which return NOT_IMPLEMENTED below are required to be
implemented per the PSCI 0.2 spec.

I hope that the plan is to implement all the required functions before
we turn this on?

Otherwise, comments below.

> This is largely ported from the similar code that exists for ARMv7
> 
> Signed-off-by: Arnab Basu <arnab.basu at freescale.com>
> Reviewed-by: Bhupesh Sharma <bhupesh.sharma at freescale.com>
> Cc: Marc Zyngier <marc.zyngier at arm.com>
> ---
>  arch/arm/cpu/armv8/Makefile |    1 +
>  arch/arm/cpu/armv8/psci.S   |  171 +++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 172 insertions(+), 0 deletions(-)
>  create mode 100644 arch/arm/cpu/armv8/psci.S
> 
> diff --git a/arch/arm/cpu/armv8/Makefile b/arch/arm/cpu/armv8/Makefile
> index 4f0ea87..8f6988d 100644
> --- a/arch/arm/cpu/armv8/Makefile
> +++ b/arch/arm/cpu/armv8/Makefile
> @@ -15,3 +15,4 @@ obj-y	+= cache.o
>  obj-y	+= tlb.o
>  obj-y	+= transition.o
>  obj-y	+= cpu-dt.o
> +obj-$(CONFIG_ARMV8_PSCI)	+= psci.o
> diff --git a/arch/arm/cpu/armv8/psci.S b/arch/arm/cpu/armv8/psci.S
> new file mode 100644
> index 0000000..5f4e3b2
> --- /dev/null
> +++ b/arch/arm/cpu/armv8/psci.S
> @@ -0,0 +1,171 @@
> +/*
> + * (C) Copyright 2014
> + * Arnab Basu <arnab.basu at freescale.com>
> + *
> + * Based on arch/arm/cpu/armv7/psci.S
> + *
> + * SPDX-License-Identifier:	GPL-2.0+
> + */
> +
> +#include <linux/linkage.h>
> +#include <asm/psci.h>
> +
> +.pushsection ._secure.text, "ax"
> +
> +ENTRY(psci_0_2_cpu_suspend_64)
> +ENTRY(psci_0_2_cpu_on_64)
> +ENTRY(psci_0_2_affinity_info_64)
> +ENTRY(psci_0_2_migrate_64)
> +ENTRY(psci_0_2_migrate_info_up_cpu_64)
> +	mov	x0, #ARM_PSCI_RET_NI	/* Return -1 (Not Implemented) */
> +	ret
> +ENDPROC(psci_0_2_cpu_suspend_64)
> +ENDPROC(psci_0_2_cpu_on_64)
> +ENDPROC(psci_0_2_affinity_info_64)
> +ENDPROC(psci_0_2_migrate_64)
> +ENDPROC(psci_0_2_migrate_info_up_cpu_64)
> +.weak psci_0_2_cpu_suspend_64
> +.weak psci_0_2_cpu_on_64
> +.weak psci_0_2_affinity_info_64
> +.weak psci_0_2_migrate_64
> +.weak psci_0_2_migrate_info_up_cpu_64
> +
> +ENTRY(psci_0_2_psci_version)
> +	mov	x0, #2			/* Return Major = 0, Minor = 2*/
> +	ret
> +ENDPROC(psci_0_2_psci_version)
> +
> +.align 4
> +_psci_0_2_table:
> +	.quad	PSCI_0_2_FN_PSCI_VERSION
> +	.quad	psci_0_2_psci_version
> +	.quad	PSCI_0_2_FN64_CPU_SUSPEND
> +	.quad	psci_0_2_cpu_suspend_64
> +	.quad	PSCI_0_2_FN64_CPU_ON
> +	.quad	psci_0_2_cpu_on_64
> +	.quad	PSCI_0_2_FN64_AFFINITY_INFO
> +	.quad	psci_0_2_affinity_info_64
> +	.quad	PSCI_0_2_FN64_MIGRATE
> +	.quad	psci_0_2_migrate_64
> +	.quad	PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU
> +	.quad	psci_0_2_migrate_info_up_cpu_64
> +	.quad	0
> +	.quad	0

It would be nice if we could reorganise this into something like:

	.quad PSCI_0_2_FN_PSCI_VERSION,		psci_0_2_psci_version
	.quad PSCI_0_2_FN64_CPU_SUSPEND,	psci_0_2_cpu_suspend_64
	.quad PSCI_0_2_FN64_CPU_ON,		psci_0_2_cpu_on_64
	.quad PSCI_0_2_FN64_AFFINITY_INFO,	psci_0_2_affinity_info_64
	.quad PSCI_0_2_FN64_MIGRATE,		psci_0_2_migrate_64
	.quad PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU,	psci_0_2_migrate_info_up_cpu_64
	.quad 0, 				0

That would make the relationship between the function IDs and their
handlers clearer (at least to me). Maybe a macro could make this less
painful.
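
Perhaps something like this (completely untested, and the macro name is
just a suggestion):

	.macro	psci_fn, id, handler
	.quad	\id
	.quad	\handler
	.endm

	_psci_0_2_table:
	psci_fn	PSCI_0_2_FN_PSCI_VERSION,	psci_0_2_psci_version
	psci_fn	PSCI_0_2_FN64_CPU_SUSPEND,	psci_0_2_cpu_suspend_64
	psci_fn	PSCI_0_2_FN64_CPU_ON,		psci_0_2_cpu_on_64
	psci_fn	0,				0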

> +.macro	psci_enter
> +	stp	x29, x30, [sp, #-16]!
> +	stp	x27, x28, [sp, #-16]!
> +	stp	x25, x26, [sp, #-16]!
> +	stp	x23, x24, [sp, #-16]!
> +	stp	x21, x22, [sp, #-16]!
> +	stp	x19, x20, [sp, #-16]!
> +	stp	x17, x18, [sp, #-16]!
> +	stp	x15, x16, [sp, #-16]!
> +	stp	x13, x14, [sp, #-16]!
> +	stp	x11, x12, [sp, #-16]!
> +	stp	x9, x10, [sp, #-16]!
> +	stp	x7, x8, [sp, #-16]!
> +	stp	x5, x6, [sp, #-16]!
> +	mrs	x5, elr_el3
> +	stp	x5, x4, [sp, #-16]!
> +
> +	/* EL0 and El1 will execute in secure */

I think this would be better as:

	/* U-Boot will run on the secure side */

> +	mrs	x4, scr_el3
> +	bic	x4, x4, #1
> +	msr	scr_el3, x4
> +.endm
> +
> +.macro	psci_return
> +	/* EL0 and El1 will execute in non-secure */

Similarly:

	/* The OS will run on the non-secure side */

> +	mrs	x4, scr_el3
> +	orr	x4, x4, #1
> +	msr	scr_el3, x4
> +
> +	ldp	x5, x4, [sp], #16
> +	msr	elr_el3, x5
> +	ldp	x5, x6, [sp], #16
> +	ldp	x7, x8, [sp], #16
> +	ldp	x9, x10, [sp], #16
> +	ldp	x11, x12, [sp], #16
> +	ldp	x13, x14, [sp], #16
> +	ldp	x15, x16, [sp], #16
> +	ldp	x17, x18, [sp], #16
> +	ldp	x19, x20, [sp], #16
> +	ldp	x21, x22, [sp], #16
> +	ldp	x23, x24, [sp], #16
> +	ldp	x25, x26, [sp], #16
> +	ldp	x27, x28, [sp], #16
> +	ldp	x29, x30, [sp], #16
> +	eret
> +.endm

PSCI 0.2 follows the SMC calling convention, where x0-x17 are
caller-saved as with the usual AArch64 calling conventions, so we don't
need to save/restore them here for AArch64 callers.

We do need to ensure we don't clobber x18-x30, sp_elx and sp_el0. The
calling convention is ambiguous as to whether we can clobber sp_el1, but
for the moment I'd assume we shouldn't.

For AArch32 callers, only x0-x3 are caller-saved.
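
So for AArch64 callers, psci_enter could shrink to something like
(untested):

	.macro	psci_enter
	stp	x29, x30, [sp, #-16]!
	stp	x27, x28, [sp, #-16]!
	stp	x25, x26, [sp, #-16]!
	stp	x23, x24, [sp, #-16]!
	stp	x21, x22, [sp, #-16]!
	stp	x19, x20, [sp, #-16]!
	mrs	x4, elr_el3
	stp	x18, x4, [sp, #-16]!
	...
	.endm

with psci_return restoring the same set, though supporting AArch32
callers (where only x0-x3 are caller-saved) would need more care.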

> +ENTRY(_smc_psci)
> +	psci_enter
> +	adr	x4, _psci_0_2_table
> +1:	ldr	x5, [x4]		/* Load PSCI function ID */
> +	ldr	x6, [x4, #8]		/* Load target PC */

1:	ldp	x5, x6, [x4]		/* Load function ID and handler addr */

> +	cmp	x5, #0			/* If reach the end, bail out */
> +	b.eq	fn_not_found

cbz	x5, fn_not_found		/* If reach the end, bail out */

> +	cmp	x0, x5			/* If not matching, try next entry */
> +	b.eq	fn_call
> +	add	x4, x4, #16
> +	b	1b
> +
> +fn_call:
> +	blr	x6
> +	psci_return
> +
> +fn_not_found:
> +	mov	x0, #ARM_PSCI_RET_INVAL  /* Return -2 (Invalid) */

I believe this should be NOT_SUPPORTED (-1), which states that the PSCI
implementation doesn't implement this function, rather than
INVALID_PARAMETERS (-2), which states that the arguments to the
function were invalid.
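
i.e. something like:

	mov	x0, #ARM_PSCI_RET_NI	/* Return -1 (Not Supported) */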

> +	psci_return
> +ENDPROC(_smc_psci)
> +
> +ENTRY(default_psci_vector)
> +	eret
> +ENDPROC(default_psci_vector)

This looks misnamed. This is the vector for exceptions triggered by
something other than PSCI.

That said I think we need to do something else here. If we haven't
handled the exception we're likely to just return to whatever caused it
and take the same exception again.

A fail-stop behaviour would seem better if the only other exceptions we
expect to take are fatal anyway.
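
e.g. spin in a low-power loop, something like (untested; the name is
just a suggestion):

	ENTRY(psci_bad_exception)
	1:	wfe
		b	1b
	ENDPROC(psci_bad_exception)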

> +.align 2

I don't think this is necessary; we had code immediately before.

> +__handle_sync:
> +	str 	x4, [sp, #-8]!
> +	mrs	x4, esr_el3
> +	ubfx	x4, x4, #26, #6
> +	cmp	x4, #0x17

Could we not have a macro like ESR_EC_SMC64 instead of the raw #0x17?
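
e.g. something like (the names here are just suggestions):

	#define ESR_EL3_EC_SHIFT	26
	#define ESR_EL3_EC_WIDTH	6
	#define ESR_EL3_EC_SMC64	0x17

and then:

	ubfx	x4, x4, #ESR_EL3_EC_SHIFT, #ESR_EL3_EC_WIDTH
	cmp	x4, #ESR_EL3_EC_SMC64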

Thanks,
Mark.

> +	b.eq	smc_found
> +	ldr	x4, [sp], #8
> +	b	default_psci_vector
> +smc_found:
> +	ldr     x4, [sp], #8
> +	b	_smc_psci
> +
> +/*
> + * PSCI Exception vectors.
> + */
> +	.align	11
> +	.globl	psci_vectors
> +psci_vectors:
> +	.align	7
> +	b	default_psci_vector	/* Current EL Synchronous Thread */
> +	.align	7
> +	b	default_psci_vector	/* Current EL IRQ Thread */
> +	.align	7
> +	b	default_psci_vector	/* Current EL FIQ Thread */
> +	.align	7
> +	b	default_psci_vector	/* Current EL Error Thread */
> +	.align	7
> +	b	default_psci_vector	/* Current EL Synchronous Handler */
> +	.align	7
> +	b	default_psci_vector	/* Current EL IRQ Handler */
> +	.align	7
> +	b	default_psci_vector	/* Current EL FIQ Handler */
> +	.align	7
> +	b	default_psci_vector	/* Current EL Error Handler */
> +	.align	7
> +	b	__handle_sync		/* Lower EL Synchronous (64b) */
> +	.align	7
> +	b	default_psci_vector	/* Lower EL IRQ (64b) */
> +	.align	7
> +	b	default_psci_vector	/* Lower EL FIQ (64b) */
> +	.align	7
> +	b	default_psci_vector	/* Lower EL Error (64b) */
> +
> +.popsection
> -- 
> 1.7.7.4
> 
> 

