[PATCH 3/3] pci: layerscape: add a way of specifying additional iommu mappings

laurentiu.tudor at nxp.com
Tue Jun 9 12:45:10 CEST 2020


From: Laurentiu Tudor <laurentiu.tudor@nxp.com>

In the current implementation, U-Boot creates IOMMU mappings only
for PCI devices enumerated at boot time and thus does not account
for more dynamic scenarios such as SR-IOV or PCI hot-plug.
Add a U-Boot env var and a device tree property (to be used, for
example, in more static scenarios such as hardwired PCI endpoints
that get initialized later in the system setup) that allow two
things:
 - for an SR-IOV capable PCI EP identified by its B.D.F, specify
   the maximum number of VFs that will ever be created for it
 - for the hot-plug case, specify the B.D.F with which the device
   will show up on the PCI bus

The env var consists of a list of <bdf>,<action> pairs for a given
PCI bus, identified by its controller's base register address as
defined in the "reg" property in the device tree.

pci_iommu_extra = pci@<addr1>,<bdf>,<action>,<bdf>,<action>,
		  pci@<addr2>,<bdf>,<action>,<bdf>,<action>,...

where:
 <addr> is the register base address of the PCI controller to which
the subsequent <bdf>,<action> pairs apply
 <bdf> identifies the B.D.F to which the action applies
 <action> can be one of:
    - "vfs=<number>" to create mappings for <number> VFs of the PCI
      EP identified by the preceding <bdf> (see the note below)
    - "hp" to specify that a device will be hot-plugged at this <bdf>,
      so it needs a mapping
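
Note: for the "vfs" action, the VF B.D.Fs that end up being mapped are
derived from the PF's SR-IOV capability (First VF Offset and VF Stride),
as defined by the SR-IOV spec:

  VF<n> routing ID = PF routing ID + First VF Offset + (n - 1) * VF Stride

For illustration only: with First VF Offset = 4 and VF Stride = 1,
"6.0.0,vfs=3" would cover 6.0.4, 6.0.5 and 6.0.6; the actual values are
read from the device's config space.
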
The device tree property must be placed under the correct PCI
controller node; only the <bdf>,<action> pairs need to be specified,
like this:

pci-iommu-extra = "<bdf>,<action>,<bdf>,<action>,...";

For example, given this configuration on bus 6:

=> pci 6
Scanning PCI devices on bus 6
BusDevFun  VendorId   DeviceId   Device Class       Sub-Class
_____________________________________________________________
06.00.00   0x8086     0x1572     Network controller      0x00
06.00.01   0x8086     0x1572     Network controller      0x00

The following U-Boot env var will create IOMMU mappings for 3 VFs
for each PF:

=> setenv pci_iommu_extra pci@0x3800000,6.0.0,vfs=3,6.0.1,vfs=3

For the device tree case, this would be specified like this:

pci-iommu-extra = "6.0.0,vfs=3,6.0.1,vfs=3";

To add an IOMMU mapping for a hot-plugged device, see the following
example:

=> setenv pci_iommu_extra pci@0x3800000,6.2.0,hp

For the device tree case, this would be specified like this:

pci-iommu-extra = "6.2.0,hp";

Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
---
 .../fsl-layerscape/doc/README.pci_iommu_extra |  68 +++++++
 drivers/pci/Kconfig                           |  12 ++
 drivers/pci/pcie_layerscape_fixup.c           | 178 ++++++++++++++++++
 3 files changed, 258 insertions(+)
 create mode 100644 arch/arm/cpu/armv8/fsl-layerscape/doc/README.pci_iommu_extra

diff --git a/arch/arm/cpu/armv8/fsl-layerscape/doc/README.pci_iommu_extra b/arch/arm/cpu/armv8/fsl-layerscape/doc/README.pci_iommu_extra
new file mode 100644
index 0000000000..cb1388796b
--- /dev/null
+++ b/arch/arm/cpu/armv8/fsl-layerscape/doc/README.pci_iommu_extra
@@ -0,0 +1,68 @@
+#
+# Copyright 2020 NXP
+#
+# SPDX-License-Identifier:      GPL-2.0+
+#
+
+Specifying extra IOMMU mappings for PCI controllers
+
+This feature can be enabled through the PCI_IOMMU_EXTRA_MAPPINGS
+Kconfig option.
+
+The "pci_iommu_extra" env var or "pci-iommu-extra" device tree
+property  (to be used for example in more static scenarios such
+as hardwired PCI endpoints that get initialized later in the system
+setup) allows two things:
+ - for a SRIOV capable PCI EP identified by its B.D.F specify
+   the maximum number of VFs that will ever be created for it
+ - for hot-plug case, specify the B.D.F with which the device
+   will show up on the PCI bus
+
+The env var consists of a list of <bdf>,<action> pairs for a given
+PCI bus, identified by its controller's base register address as
+defined in the "reg" property in the device tree.
+
+pci_iommu_extra = pci@<addr1>,<bdf>,<action>,<bdf>,<action>,
+		  pci@<addr2>,<bdf>,<action>,<bdf>,<action>,...
+
+where:
+ <addr> is the register base address of the PCI controller to which
+the subsequent <bdf>,<action> pairs apply
+ <bdf> identifies the B.D.F to which the action applies
+ <action> can be one of:
+    - "vfs=<number>" to create mappings for <number> VFs of the PCI
+      EP identified by the preceding <bdf>
+    - "hp" to specify that a device will be hot-plugged at this
+      <bdf>, so it needs a mapping
+The device tree property must be placed under the correct PCI
+controller node; only the <bdf>,<action> pairs need to be specified,
+like this:
+
+pci-iommu-extra = "<bdf>,<action>,<bdf>,<action>,...";
+
+For example, given this configuration on bus 6:
+
+=> pci 6
+Scanning PCI devices on bus 6
+BusDevFun  VendorId   DeviceId   Device Class       Sub-Class
+_____________________________________________________________
+06.00.00   0x8086     0x1572     Network controller      0x00
+06.00.01   0x8086     0x1572     Network controller      0x00
+
+The following U-Boot env var will create IOMMU mappings for 3 VFs
+for each PF:
+
+=> setenv pci_iommu_extra pci@0x3800000,6.0.0,vfs=3,6.0.1,vfs=3
+
+For the device tree case, this would be specified like this:
+
+pci-iommu-extra = "6.0.0,vfs=3,6.0.1,vfs=3";
+
+To add an IOMMU mapping for a hot-plugged device, see the following
+example:
+
+=> setenv pci_iommu_extra pci@0x3800000,6.2.0,hp
+
+For the device tree case, this would be specified like this:
+
+pci-iommu-extra = "6.2.0,hp";
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 6d8c22aacf..2697879dec 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -135,6 +135,18 @@ config PCIE_LAYERSCAPE
 	  PCIe controllers. The PCIe may works in RC or EP mode according to
 	  RCW[HOST_AGT_PEX] setting.
 
+config PCI_IOMMU_EXTRA_MAPPINGS
+	bool "Support for specifying extra IOMMU mappings for PCI"
+	depends on PCIE_LAYERSCAPE
+	help
+	  Enable support for specifying extra IOMMU mappings for PCI
+	  controllers through a special env var called "pci_iommu_extra" or
+	  through a device tree property named "pci-iommu-extra" placed in
+	  the node describing the PCI controller.
+	  The intent is to cover SR-IOV scenarios which need mappings for VFs
+	  and PCI hot-plug scenarios. More documentation can be found under:
+	    arch/arm/cpu/armv8/fsl-layerscape/doc/README.pci_iommu_extra
+
 config PCIE_LAYERSCAPE_GEN4
 	bool "Layerscape Gen4 PCIe support"
 	depends on DM_PCI
diff --git a/drivers/pci/pcie_layerscape_fixup.c b/drivers/pci/pcie_layerscape_fixup.c
index 64738453e1..72233eee3d 100644
--- a/drivers/pci/pcie_layerscape_fixup.c
+++ b/drivers/pci/pcie_layerscape_fixup.c
@@ -18,6 +18,7 @@
 #ifdef CONFIG_ARM
 #include <asm/arch/clock.h>
 #endif
+#include <malloc.h>
 #include "pcie_layerscape.h"
 #include "pcie_layerscape_fixup_common.h"
 
@@ -187,11 +188,119 @@ static int fdt_fixup_pcie_device_ls(void *blob, pci_dev_t bdf,
 	return 0;
 }
 
+#ifdef CONFIG_PCI_IOMMU_EXTRA_MAPPINGS
+struct extra_iommu_entry {
+	int action;
+	pci_dev_t bdf;
+	int num_vfs;
+};
+
+#define EXTRA_IOMMU_ENTRY_HOTPLUG	1
+#define EXTRA_IOMMU_ENTRY_VFS		2
+
+static struct extra_iommu_entry *get_extra_iommu_ents(void *blob,
+						      int nodeoffset,
+						      phys_addr_t addr,
+						      int *cnt)
+{
+	const char *s, *p, *tok;
+	struct extra_iommu_entry *entries;
+	int i = 0, b, d, f;
+
+	s = env_get("pci_iommu_extra");
+	if (!s) {
+		s = fdt_getprop(blob, nodeoffset, "pci-iommu-extra", NULL);
+	} else {
+		phys_addr_t pci_base;
+		char *endp;
+
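+
+		/* Look for the "pci@<addr>" group whose base address
+		 * matches this controller's register base address.
+		 */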
+		tok = s;
+		p = strchrnul(s + 1, ',');
+		s = NULL;
+		do {
+			if (!strncmp(tok, "pci", 3)) {
+				pci_base = simple_strtoul(tok + 4, &endp, 0);
+				if (pci_base == addr) {
+					s = endp + 1;
+					break;
+				}
+			}
+			p = strchrnul(p + 1, ',');
+			tok = p + 1;
+		} while (*p);
+	}
+
+	if (!s)
+		return NULL;
+
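+	/* Count the comma-separated tokens up to the next "pci@" group
+	 * (or the end of the string); they must form <bdf>,<action> pairs.
+	 */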
+	*cnt = 0;
+	p = s;
+	while (*p && strncmp(p, "pci", 3)) {
+		if (*p == ',')
+			(*cnt)++;
+		p++;
+	}
+	if (!(*p))
+		(*cnt)++;
+
+	if (!(*cnt) || (*cnt) % 2) {
+		printf("ERROR: invalid or odd extra iommu token count %d\n",
+		       *cnt);
+		return NULL;
+	}
+	*cnt = (*cnt) / 2;
+
+	entries = malloc((*cnt) * sizeof(*entries));
+	if (!entries) {
+		printf("ERROR: fail to allocate extra iommu entries\n");
+		return NULL;
+	}
+
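+	/* Parse each <bdf>,<action> pair into an extra_iommu_entry */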
+	p = s;
+	while (p) {
+		b = simple_strtoul(p, (char **)&p, 0); p++;
+		d = simple_strtoul(p, (char **)&p, 0); p++;
+		f = simple_strtoul(p, (char **)&p, 0); p++;
+		entries[i].bdf = PCI_BDF(b, d, f);
+
+		if (!strncmp(p, "hp", 2)) {
+			entries[i].action = EXTRA_IOMMU_ENTRY_HOTPLUG;
+			p += 3;
+		} else if (!strncmp(p, "vfs", 3)) {
+			entries[i].action = EXTRA_IOMMU_ENTRY_VFS;
+
+			p = strchr(p, '=');
+			entries[i].num_vfs = simple_strtoul(p + 1, (char **)&p,
+							    0);
+			if (*p)
+				p++;
+		} else {
+			printf("ERROR: invalid action in extra iommu entry\n");
+			free(entries);
+
+			return NULL;
+		}
+
+		if (!(*p) || !strncmp(p, "pci", 3))
+			break;
+
+		i++;
+	}
+
+	return entries;
+}
+#endif /* CONFIG_PCI_IOMMU_EXTRA_MAPPINGS */
+
 static void fdt_fixup_pcie_ls(void *blob)
 {
 	struct udevice *dev, *bus;
 	struct ls_pcie *pcie;
 	pci_dev_t bdf;
+#ifdef CONFIG_PCI_IOMMU_EXTRA_MAPPINGS
+	struct extra_iommu_entry *entries;
+	unsigned short vf_offset, vf_stride;
+	int i, j, cnt, sriov_pos, nodeoffset;
+#endif
 
 	/* Scan all known buses */
 	for (pci_find_first_device(&dev);
@@ -207,6 +316,75 @@ static void fdt_fixup_pcie_ls(void *blob)
 		if (fdt_fixup_pcie_device_ls(blob, bdf, pcie) < 0)
 			break;
 	}
+
+#ifdef CONFIG_PCI_IOMMU_EXTRA_MAPPINGS
+	list_for_each_entry(pcie, &ls_pcie_list, list) {
+		nodeoffset = fdt_pcie_get_nodeoffset(blob, pcie);
+		if (nodeoffset < 0) {
+			printf("ERROR: couldn't find pci node\n");
+			continue;
+		}
+
+		entries = get_extra_iommu_ents(blob, nodeoffset,
+					       pcie->dbi_res.start, &cnt);
+		if (!entries)
+			continue;
+
+		for (i = 0; i < cnt; i++) {
+			if (entries[i].action == EXTRA_IOMMU_ENTRY_HOTPLUG) {
+				bdf = entries[i].bdf -
+					PCI_BDF(pcie->bus->seq + 1, 0, 0);
+				printf("Added iommu map for hotplug %d.%d.%d\n",
+				       PCI_BUS(entries[i].bdf),
+				       PCI_DEV(entries[i].bdf),
+				       PCI_FUNC(entries[i].bdf));
+				if (fdt_fixup_pcie_device_ls(blob,
+							     bdf, pcie) < 0) {
+					free(entries);
+					return;
+				}
+				continue;
+			}
+
+			/* EXTRA_IOMMU_ENTRY_VFS case */
+			if (dm_pci_bus_find_bdf(entries[i].bdf, &dev)) {
+				printf("ERROR: BDF %d.%d.%d not found\n",
+				       PCI_BUS(entries[i].bdf),
+				       PCI_DEV(entries[i].bdf),
+				       PCI_FUNC(entries[i].bdf));
+				continue;
+			}
+			sriov_pos = dm_pci_find_ext_capability
+						(dev, PCI_EXT_CAP_ID_SRIOV);
+			if (!sriov_pos) {
+				printf("WARN: setting VFs on non-SRIOV dev\n");
+				continue;
+			}
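+			/* First VF Offset (0x14) and VF Stride (0x16) from
+			 * the PF's SR-IOV extended capability
+			 */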
+			dm_pci_read_config16(dev, sriov_pos + 0x14,
+					     &vf_offset);
+			dm_pci_read_config16(dev, sriov_pos + 0x16,
+					     &vf_stride);
+
+			bdf = entries[i].bdf -
+				PCI_BDF(pcie->bus->seq + 1, 0, 0) +
+				(vf_offset << 8);
+			printf("Added %d iommu VF mappings for PF %d.%d.%d\n",
+			       entries[i].num_vfs, PCI_BUS(entries[i].bdf),
+			       PCI_DEV(entries[i].bdf),
+			       PCI_FUNC(entries[i].bdf));
+			for (j = 0; j < entries[i].num_vfs; j++) {
+				if (fdt_fixup_pcie_device_ls(blob,
+							     bdf, pcie) < 0) {
+					free(entries);
+					return;
+				}
+				bdf += vf_stride << 8;
+			}
+		}
+		free(entries);
+	}
+#endif /* CONFIG_PCI_IOMMU_EXTRA_MAPPINGS */
+
 	pcie_board_fix_fdt(blob);
 }
 #endif
-- 
2.17.1


