pci-v4.11-changes

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJYrvYlAAoJEFmIoMA60/r8FuQQAMDpia3kacyCAJpa+zjmyMNF
 1slytaoIvP37dFq9XF1em031lwGNr5sahZ7nP1EKgALz4odZUzait7BUABcfviIn
 Uesz2E1s/miMo4/0X1j9DqY9xV649DmmSIgk1yn3kvCkH/+Ix27dexu47auGzPEb
 H/sEfd1RZidjZ5EWaG0ww5FrHcuge+JHtcH6vFQtWsTOspcx++IhaVIGjC0JCpqK
 DnlQKilsJ38KUkvuDcxWjtFKxAc8De9jvCR4kX96OvbHahfAWwBO4AtUv7U3JpJN
 2nyQk+I5kRagbfBucaXZISUtWM7h4peLiL+TGkvKg8eOVlOCedjYlrZW4SWkbAN+
 0qwcHRQ8lwhNmgp3VYq7pmnugIvW4P2Fh3uqaplCAIwlpODxWPDQP7HLM2kyzmvq
 gPGi0R4Yo2PdIXqfbilrzbFVeyqkIFECr287a6+5PekC0DxsqZvOG0uA1mWKLIaH
 pRQMT0FO2SCCSOpcxRExeIj+XxhXlDVOrIBP6eMiFXAMgzUAyU8fLSZVMtXAvsTS
 02hVDOc/Fq2jKlCSoJRIiRp5aj1QDFS/DjBhOnW7pXuvUTCrfYBXY5NCdT9UV3Q7
 W6qHWkizRmRDGxUzqSODRt5aU7VOKbWvZnp10eJyKt5s2Iawe6We5V1NX+u18UIS
 Scc1nbuPTL6u1n8PsaBG
 =4Owc
 -----END PGP SIGNATURE-----

Merge tag 'pci-v4.11-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

 - add ASPM L1 substate support

 - enable PCIe Extended Tags when supported

 - configure PCIe MPS settings on iProc, Versatile, X-Gene, and Xilinx

 - increase VPD access timeout

 - add ACS quirks for Intel Union Point, Qualcomm QDF2400 and QDF2432

 - use new pci_alloc_irq_vectors() in more drivers

 - fix MSI affinity memory leak

 - remove unused MSI interfaces and update documentation

 - remove unused AER .link_reset() callback

 - avoid pci_lock / p->pi_lock deadlock seen with perf

 - serialize sysfs enable/disable num_vfs operations

 - move DesignWare IP from drivers/pci/host/ to drivers/pci/dwc/ and
   refactor so we can support both hosts and endpoints

 - add DT ECAM-like support for HiSilicon Hip06/Hip07 controllers

 - add Rockchip system power management support

 - add Thunder-X cn81xx and cn83xx support

 - add Exynos 5440 PCIe PHY support

* tag 'pci-v4.11-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (93 commits)
  PCI: dwc: Remove dependency of designware on CONFIG_PCI
  PCI: dwc: Add CONFIG_PCIE_DW_HOST to enable PCI dwc host
  PCI: dwc: Split pcie-designware.c into host and core files
  PCI: dwc: designware: Fix style errors in pcie-designware.c
  PCI: dwc: designware: Parse "num-lanes" property in dw_pcie_setup_rc()
  PCI: dwc: all: Split struct pcie_port into host-only and core structures
  PCI: dwc: designware: Get device pointer at the start of dw_pcie_host_init()
  PCI: dwc: all: Rename cfg_read/cfg_write to read/write
  PCI: dwc: all: Use platform_set_drvdata() to save private data
  PCI: dwc: designware: Move register defines to designware header file
  PCI: dwc: Use PTR_ERR_OR_ZERO to simplify code
  PCI: dra7xx: Group PHY API invocations
  PCI: dra7xx: Enable MSI and legacy interrupts simultaneously
  PCI: dra7xx: Add support to force RC to work in GEN1 mode
  PCI: dra7xx: Simplify probe code with devm_gpiod_get_optional()
  PCI: Move DesignWare IP support to new drivers/pci/dwc/ directory
  PCI: exynos: Support the PHY generic framework
  Documentation: binding: Modify the exynos5440 PCIe binding
  phy: phy-exynos-pcie: Add support for Exynos PCIe PHY
  Documentation: samsung-phy: Add exynos-pcie-phy binding
  ...
commit 60e8d3e116
Linus Torvalds, 2017-02-23 11:53:22 -08:00
74 changed files with 3352 additions and 2262 deletions


@ -162,8 +162,6 @@ The following old APIs to enable and disable MSI or MSI-X interrupts should
not be used in new code:
pci_enable_msi() /* deprecated */
pci_enable_msi_range() /* deprecated */
pci_enable_msi_exact() /* deprecated */
pci_disable_msi() /* deprecated */
pci_enable_msix_range() /* deprecated */
pci_enable_msix_exact() /* deprecated */
@ -268,5 +266,5 @@ or disabled (0). If 0 is found in any of the msi_bus files belonging
to bridges between the PCI root and the device, MSIs are disabled.
It is also worth checking the device driver to see whether it supports MSIs.
For example, it may contain calls to pci_enable_msi_range() or
pci_enable_msix_range().
For example, it may contain calls to pci_alloc_irq_vectors() with the
PCI_IRQ_MSI or PCI_IRQ_MSIX flags.
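As a minimal sketch of the pattern this documentation describes (the
foo_* names are hypothetical), a driver would allocate its vectors and
request the first one like this:

static int foo_setup_irq(struct pci_dev *pdev)
{
	int nvec;

	/* Accept MSI-X or MSI, and fall back to legacy INTx. */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8,
			PCI_IRQ_MSI | PCI_IRQ_MSIX | PCI_IRQ_LEGACY);
	if (nvec < 0)
		return nvec;

	/* pci_irq_vector() maps vector index 0 to a Linux IRQ number.
	 * A legacy INTx IRQ may additionally need IRQF_SHARED. */
	return request_irq(pci_irq_vector(pdev, 0), foo_handler, 0,
			   "foo", pdev);
}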


@ -161,21 +161,13 @@ Since all service drivers of a PCI-PCI Bridge Port device are
allowed to run simultaneously, the following lists a few possible
resource conflicts with proposed solutions.
6.1 MSI Vector Resource
6.1 MSI and MSI-X Vector Resource
The MSI capability structure enables a device software driver to call
pci_enable_msi to request MSI based interrupts. Once MSI interrupts
are enabled on a device, it stays in this mode until a device driver
calls pci_disable_msi to disable MSI interrupts and revert back to
INTx emulation mode. Since service drivers of the same PCI-PCI Bridge
port share the same physical device, if an individual service driver
calls pci_enable_msi/pci_disable_msi it may result in unpredictable
behavior. For example, two service drivers run simultaneously on the
same physical Root Port. Both service drivers call pci_enable_msi to
request MSI based interrupts. A service driver may not know whether
any other service drivers have run on this Root Port. If either one
of them calls pci_disable_msi, it puts the other service driver
in a wrong interrupt mode.
Once MSI or MSI-X interrupts are enabled on a device, it stays in this
mode until they are disabled again. Since service drivers of the same
PCI-PCI Bridge port share the same physical device, if an individual
service driver enables or disables MSI/MSI-X mode, it may result in
unpredictable behavior.
To avoid this situation, service drivers are not permitted to switch
the interrupt mode on their device. The PCI Express Port Bus driver
@ -187,17 +179,6 @@ driver. Service drivers should use (struct pcie_device*)dev->irq to
call request_irq/free_irq. In addition, the interrupt mode is stored
in the field interrupt_mode of struct pcie_device.
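As a hedged illustration of this rule (the foo_* names are
hypothetical), a port service driver simply uses the IRQ that the Port
Bus driver has already set up:

static int foo_service_probe(struct pcie_device *dev)
{
	/* Use the IRQ assigned by the Port Bus driver; a service
	 * driver must never switch the interrupt mode itself. */
	return request_irq(dev->irq, foo_service_handler, IRQF_SHARED,
			   "foo_service", dev);
}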
6.2 MSI-X Vector Resources
Similar to the MSI a device driver for an MSI-X capable device can
call pci_enable_msix to request MSI-X interrupts. All service drivers
are not permitted to switch interrupt mode on its device. The PCI
Express Port Bus driver is responsible for determining the interrupt
mode and this should be transparent to service drivers. Any attempt
by a service driver to call pci_enable_msix/pci_disable_msix may
result in unpredictable behavior. Service drivers should use
(struct pcie_device*)dev->irq and call request_irq/free_irq.
6.3 PCI Memory/IO Mapped Regions
Service drivers for PCI Express Power Management (PME), Advanced


@ -78,7 +78,6 @@ struct pci_error_handlers
{
int (*error_detected)(struct pci_dev *dev, enum pci_channel_state);
int (*mmio_enabled)(struct pci_dev *dev);
int (*link_reset)(struct pci_dev *dev);
int (*slot_reset)(struct pci_dev *dev);
void (*resume)(struct pci_dev *dev);
};
@ -104,8 +103,7 @@ if it implements any, it must implement error_detected(). If a callback
is not implemented, the corresponding feature is considered unsupported.
For example, if mmio_enabled() and resume() aren't there, then it
is assumed that the driver is not doing any direct recovery and requires
a slot reset. If link_reset() is not implemented, the card is assumed to
not care about link resets. Typically a driver will want to know about
a slot reset. Typically a driver will want to know about
a slot_reset().
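As a hedged sketch of a driver's error-handler table after this change
(the foo_* callbacks are hypothetical):

static const struct pci_error_handlers foo_err_handlers = {
	.error_detected	= foo_error_detected,
	.mmio_enabled	= foo_mmio_enabled,	/* optional */
	.slot_reset	= foo_slot_reset,
	.resume		= foo_resume,		/* optional */
};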
The actual steps taken by a platform to recover from a PCI error
@ -232,25 +230,9 @@ proceeds to STEP 4 (Slot Reset)
STEP 3: Link Reset
------------------
The platform resets the link, and then calls the link_reset() callback
on all affected device drivers. This is a PCI-Express specific state
The platform resets the link. This is a PCI-Express specific step
and is done whenever a non-fatal error has been detected that can be
"solved" by resetting the link. This call informs the driver of the
reset and the driver should check to see if the device appears to be
in working condition.
The driver is not supposed to restart normal driver I/O operations
at this point. It should limit itself to "probing" the device to
check its recoverability status. If all is right, then the platform
will call resume() once all drivers have ack'd link_reset().
Result codes:
(identical to STEP 3 (MMIO Enabled)
The platform then proceeds to either STEP 4 (Slot Reset) or STEP 5
(Resume Operations).
>>> The current powerpc implementation does not implement this callback.
"solved" by resetting the link.
STEP 4: Slot Reset
------------------


@ -382,18 +382,18 @@ The fundamental difference between MSI and MSI-X is how multiple
"vectors" get allocated. MSI requires contiguous blocks of vectors
while MSI-X can allocate several individual ones.
MSI capability can be enabled by calling pci_enable_msi() or
pci_enable_msix() before calling request_irq(). This causes
the PCI support to program CPU vector data into the PCI device
capability registers.
MSI capability can be enabled by calling pci_alloc_irq_vectors() with the
PCI_IRQ_MSI and/or PCI_IRQ_MSIX flags before calling request_irq(). This
causes the PCI support to program CPU vector data into the PCI device
capability registers. Many architectures, chip-sets, or BIOSes do NOT
support MSI or MSI-X and a call to pci_alloc_irq_vectors with just
the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
specify PCI_IRQ_LEGACY as well.
If your PCI device supports both, try to enable MSI-X first.
Only one can be enabled at a time. Many architectures, chip-sets,
or BIOSes do NOT support MSI or MSI-X and the call to pci_enable_msi/msix
will fail. This is important to note since many drivers have
two (or more) interrupt handlers: one for MSI/MSI-X and another for IRQs.
They choose which handler to register with request_irq() based on the
return value from pci_enable_msi/msix().
Drivers that have different interrupt handlers for MSI/MSI-X and
legacy INTx should choose the right one based on the msi_enabled
and msix_enabled flags in the pci_dev structure after calling
pci_alloc_irq_vectors.
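A minimal sketch of that selection (the foo_* names are hypothetical):

static int foo_request_irq(struct pci_dev *pdev)
{
	bool msi = pdev->msi_enabled || pdev->msix_enabled;

	/* Legacy INTx lines are shared; MSI/MSI-X vectors are not. */
	return request_irq(pci_irq_vector(pdev, 0),
			   msi ? foo_msi_handler : foo_intx_handler,
			   msi ? 0 : IRQF_SHARED, "foo", pdev);
}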
There are (at least) two really good reasons for using MSI:
1) MSI is an exclusive interrupt vector by definition.


@ -42,3 +42,40 @@ Hip05 Example (note that Hip06 is the same except compatible):
0x0 0 0 4 &mbigen_pcie 4 13>;
status = "ok";
};
HiSilicon Hip06/Hip07 PCIe host bridge DT (almost-ECAM) description.
The properties and their meanings are identical to those described in
host-generic-pci.txt except as listed below.
Properties of the host controller node that differ from
host-generic-pci.txt:
- compatible : Must be "hisilicon,pcie-almost-ecam"
- reg : Two entries: the first is the ECAM configuration space for
any other bus underneath the root bus; the second is the base and
size of the HiSilicon host bridge registers, which include the
RC's own config space.
Example:
pcie0: pcie@a0090000 {
compatible = "hisilicon,pcie-almost-ecam";
reg = <0 0xb0000000 0 0x2000000>, /* ECAM configuration space */
<0 0xa0090000 0 0x10000>; /* host bridge registers */
bus-range = <0 31>;
msi-map = <0x0000 &its_dsa 0x0000 0x2000>;
msi-map-mask = <0xffff>;
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
dma-coherent;
ranges = <0x02000000 0 0xb2000000 0x0 0xb2000000 0 0x5ff0000
0x01000000 0 0 0 0xb7ff0000 0 0x10000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0xf800 0 0 7>;
interrupt-map = <0x0 0 0 1 &mbigen_pcie0 650 4
0x0 0 0 2 &mbigen_pcie0 650 4
0x0 0 0 3 &mbigen_pcie0 650 4
0x0 0 0 4 &mbigen_pcie0 650 4>;
status = "ok";
};


@ -78,7 +78,8 @@ and the following optional properties:
multiple lanes. If this property is not found, we assume that the
value is 0.
- reset-gpios: optional gpio to PERST#
- reset-delay-us: delay in us to wait after reset de-assertion
- reset-delay-us: delay in us to wait after reset de-assertion; if not
specified, it defaults to 100ms, as required by the PCIe specification.
Example:


@ -6,6 +6,7 @@ compatible: "renesas,pcie-r8a7779" for the R8A7779 SoC;
"renesas,pcie-r8a7791" for the R8A7791 SoC;
"renesas,pcie-r8a7793" for the R8A7793 SoC;
"renesas,pcie-r8a7795" for the R8A7795 SoC;
"renesas,pcie-r8a7796" for the R8A7796 SoC;
"renesas,pcie-rcar-gen2" for a generic R-Car Gen2 compatible device.
"renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.


@ -43,6 +43,8 @@ Required properties:
- interrupt-map-mask and interrupt-map: standard PCI properties
Optional Property:
- aspm-no-l0s: RC won't support ASPM L0s. This property is needed if
using 24MHz OSC for RC's PHY.
- ep-gpios: contain the entry for pre-reset gpio
- num-lanes: number of lanes to use
- vpcie3v3-supply: The phandle to the 3.3v regulator to use for PCIe.


@ -7,8 +7,19 @@ Required properties:
- compatible: "samsung,exynos5440-pcie"
- reg: base addresses and lengths of the pcie controller,
the phy controller, additional register for the phy controller.
(Registers for the phy controller are DEPRECATED.
Use the PHY framework.)
- reg-names : The first name should be set to "elbi".
Use the "config" entry instead of getting the configuration
address space from "ranges".
NOTE: When using the "config" property, reg-names must be set.
- interrupts: A list of interrupt outputs for level interrupt,
pulse interrupt, special interrupt.
- phys: From PHY binding. Phandle for the Generic PHY.
Refer to Documentation/devicetree/bindings/phy/samsung-phy.txt
For other common properties, refer to
Documentation/devicetree/bindings/pci/designware-pcie.txt
Example:
@ -54,6 +65,24 @@ SoC specific DT Entry:
num-lanes = <4>;
};
When using the PHY framework:
pcie_phy0: pcie-phy@270000 {
...
reg = <0x270000 0x1000>, <0x271000 0x40>;
reg-names = "phy", "block";
...
};
pcie@290000 {
...
reg = <0x290000 0x1000>, <0x40000000 0x1000>;
reg-names = "elbi", "config";
phys = <&pcie_phy0>;
ranges = <0x81000000 0 0 0x60001000 0 0x00010000
0x82000000 0 0x60011000 0x60011000 0 0x1ffef000>;
...
};
Board specific DT Entry:
pcie@290000 {


@ -191,3 +191,20 @@ Example:
usbdrdphy0 = &usb3_phy0;
usbdrdphy1 = &usb3_phy1;
};
Samsung Exynos SoC series PCIe PHY controller
--------------------------------------------------
Required properties:
- compatible : Should be set to "samsung,exynos5440-pcie-phy"
- #phy-cells : Must be zero
- reg : registers used by the PHY driver.
- First is the phy register region, second is the block register region.
- reg-names : Must be set to "phy" and "block".
Example:
pcie_phy0: pcie-phy@270000 {
#phy-cells = <0>;
compatible = "samsung,exynos5440-pcie-phy";
reg = <0x270000 0x1000>, <0x271000 0x40>;
reg-names = "phy", "block";
};


@ -9551,7 +9551,7 @@ L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/pci/pci-armada8k.txt
F: drivers/pci/host/pcie-armada8k.c
F: drivers/pci/dwc/pcie-armada8k.c
PCI DRIVER FOR APPLIEDMICRO XGENE
M: Tanmay Inamdar <tinamdar@apm.com>
@ -9569,7 +9569,7 @@ L: linuxppc-dev@lists.ozlabs.org
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: drivers/pci/host/*layerscape*
F: drivers/pci/dwc/*layerscape*
PCI DRIVER FOR IMX6
M: Richard Zhu <hongxing.zhu@nxp.com>
@ -9578,14 +9578,14 @@ L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
F: drivers/pci/host/*imx6*
F: drivers/pci/dwc/*imx6*
PCI DRIVER FOR TI KEYSTONE
M: Murali Karicheri <m-karicheri2@ti.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/*keystone*
F: drivers/pci/dwc/*keystone*
PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
M: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
@ -9617,7 +9617,7 @@ L: linux-omap@vger.kernel.org
L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/ti-pci.txt
F: drivers/pci/host/pci-dra7xx.c
F: drivers/pci/dwc/pci-dra7xx.c
PCI DRIVER FOR RENESAS R-CAR
M: Simon Horman <horms@verge.net.au>
@ -9632,7 +9632,7 @@ L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/pci-exynos.c
F: drivers/pci/dwc/pci-exynos.c
PCI DRIVER FOR SYNOPSYS DESIGNWARE
M: Jingoo Han <jingoohan1@gmail.com>
@ -9640,7 +9640,7 @@ M: Joao Pinto <Joao.Pinto@synopsys.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/designware-pcie.txt
F: drivers/pci/host/*designware*
F: drivers/pci/dwc/*designware*
PCI DRIVER FOR GENERIC OF HOSTS
M: Will Deacon <will.deacon@arm.com>
@ -9661,7 +9661,7 @@ PCIE DRIVER FOR ST SPEAR13XX
M: Pratyush Anand <pratyush.anand@gmail.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: drivers/pci/host/*spear*
F: drivers/pci/dwc/*spear*
PCI MSI DRIVER FOR ALTERA MSI IP
M: Ley Foon Tan <lftan@altera.com>
@ -9686,7 +9686,7 @@ L: linux-arm-kernel@axis.com
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/axis,artpec*
F: drivers/pci/host/*artpec*
F: drivers/pci/dwc/*artpec*
PCIE DRIVER FOR HISILICON
M: Zhou Wang <wangzhou1@hisilicon.com>
@ -9694,7 +9694,7 @@ M: Gabriele Paoloni <gabriele.paoloni@huawei.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
F: drivers/pci/host/pcie-hisi.c
F: drivers/pci/dwc/pcie-hisi.c
PCIE DRIVER FOR ROCKCHIP
M: Shawn Lin <shawn.lin@rock-chips.com>
@ -9710,7 +9710,7 @@ M: Stanimir Varbanov <svarbanov@mm-sol.com>
L: linux-pci@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: drivers/pci/host/*qcom*
F: drivers/pci/dwc/*qcom*
PCIE DRIVER FOR CAVIUM THUNDERX
M: David Daney <david.daney@cavium.com>


@ -82,7 +82,7 @@ int native_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
if (domain == NULL)
return -ENOSYS;
return pci_msi_domain_alloc_irqs(domain, dev, nvec, type);
return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
}
void native_teardown_msi_irq(unsigned int irq)


@ -15,6 +15,9 @@ obj-$(CONFIG_PINCTRL) += pinctrl/
obj-$(CONFIG_GPIOLIB) += gpio/
obj-y += pwm/
obj-$(CONFIG_PCI) += pci/
# PCI dwc controller drivers
obj-y += pci/dwc/
obj-$(CONFIG_PARISC) += parisc/
obj-$(CONFIG_RAPIDIO) += rapidio/
obj-y += video/


@ -195,11 +195,10 @@ int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
goto skip_lookup;
/*
* We expect exact match, unless MCFG entry end bus covers more than
* specified by caller.
* We expect the bus range in bus_res to be within the coverage of
* the MCFG bus range.
*/
list_for_each_entry(e, &pci_mcfg_list, list) {
if (e->segment == seg && e->bus_start == bus_res->start &&
if (e->segment == seg && e->bus_start <= bus_res->start &&
e->bus_end >= bus_res->end) {
root->mcfg_addr = e->addr;
}


@ -122,104 +122,40 @@
#include "xgbe.h"
#include "xgbe-common.h"
static int xgbe_config_msi(struct xgbe_prv_data *pdata)
static int xgbe_config_multi_msi(struct xgbe_prv_data *pdata)
{
unsigned int msi_count;
unsigned int vector_count;
unsigned int i, j;
int ret;
msi_count = XGBE_MSIX_BASE_COUNT;
msi_count += max(pdata->rx_ring_count,
pdata->tx_ring_count);
msi_count = roundup_pow_of_two(msi_count);
vector_count = XGBE_MSI_BASE_COUNT;
vector_count += max(pdata->rx_ring_count,
pdata->tx_ring_count);
ret = pci_enable_msi_exact(pdata->pcidev, msi_count);
ret = pci_alloc_irq_vectors(pdata->pcidev, XGBE_MSI_MIN_COUNT,
vector_count, PCI_IRQ_MSI | PCI_IRQ_MSIX);
if (ret < 0) {
dev_info(pdata->dev, "MSI request for %u interrupts failed\n",
msi_count);
ret = pci_enable_msi(pdata->pcidev);
if (ret < 0) {
dev_info(pdata->dev, "MSI enablement failed\n");
return ret;
}
msi_count = 1;
}
pdata->irq_count = msi_count;
pdata->dev_irq = pdata->pcidev->irq;
if (msi_count > 1) {
pdata->ecc_irq = pdata->pcidev->irq + 1;
pdata->i2c_irq = pdata->pcidev->irq + 2;
pdata->an_irq = pdata->pcidev->irq + 3;
for (i = XGBE_MSIX_BASE_COUNT, j = 0;
(i < msi_count) && (j < XGBE_MAX_DMA_CHANNELS);
i++, j++)
pdata->channel_irq[j] = pdata->pcidev->irq + i;
pdata->channel_irq_count = j;
pdata->per_channel_irq = 1;
pdata->channel_irq_mode = XGBE_IRQ_MODE_LEVEL;
} else {
pdata->ecc_irq = pdata->pcidev->irq;
pdata->i2c_irq = pdata->pcidev->irq;
pdata->an_irq = pdata->pcidev->irq;
}
if (netif_msg_probe(pdata))
dev_dbg(pdata->dev, "MSI interrupts enabled\n");
return 0;
}
static int xgbe_config_msix(struct xgbe_prv_data *pdata)
{
unsigned int msix_count;
unsigned int i, j;
int ret;
msix_count = XGBE_MSIX_BASE_COUNT;
msix_count += max(pdata->rx_ring_count,
pdata->tx_ring_count);
pdata->msix_entries = devm_kcalloc(pdata->dev, msix_count,
sizeof(struct msix_entry),
GFP_KERNEL);
if (!pdata->msix_entries)
return -ENOMEM;
for (i = 0; i < msix_count; i++)
pdata->msix_entries[i].entry = i;
ret = pci_enable_msix_range(pdata->pcidev, pdata->msix_entries,
XGBE_MSIX_MIN_COUNT, msix_count);
if (ret < 0) {
dev_info(pdata->dev, "MSI-X enablement failed\n");
devm_kfree(pdata->dev, pdata->msix_entries);
pdata->msix_entries = NULL;
dev_info(pdata->dev, "multi MSI/MSI-X enablement failed\n");
return ret;
}
pdata->irq_count = ret;
pdata->dev_irq = pdata->msix_entries[0].vector;
pdata->ecc_irq = pdata->msix_entries[1].vector;
pdata->i2c_irq = pdata->msix_entries[2].vector;
pdata->an_irq = pdata->msix_entries[3].vector;
pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
pdata->ecc_irq = pci_irq_vector(pdata->pcidev, 1);
pdata->i2c_irq = pci_irq_vector(pdata->pcidev, 2);
pdata->an_irq = pci_irq_vector(pdata->pcidev, 3);
for (i = XGBE_MSIX_BASE_COUNT, j = 0; i < ret; i++, j++)
pdata->channel_irq[j] = pdata->msix_entries[i].vector;
for (i = XGBE_MSI_BASE_COUNT, j = 0; i < ret; i++, j++)
pdata->channel_irq[j] = pci_irq_vector(pdata->pcidev, i);
pdata->channel_irq_count = j;
pdata->per_channel_irq = 1;
pdata->channel_irq_mode = XGBE_IRQ_MODE_LEVEL;
if (netif_msg_probe(pdata))
dev_dbg(pdata->dev, "MSI-X interrupts enabled\n");
dev_dbg(pdata->dev, "multi %s interrupts enabled\n",
pdata->pcidev->msix_enabled ? "MSI-X" : "MSI");
return 0;
}
@ -228,21 +164,28 @@ static int xgbe_config_irqs(struct xgbe_prv_data *pdata)
{
int ret;
ret = xgbe_config_msix(pdata);
ret = xgbe_config_multi_msi(pdata);
if (!ret)
goto out;
ret = xgbe_config_msi(pdata);
if (!ret)
goto out;
ret = pci_alloc_irq_vectors(pdata->pcidev, 1, 1,
PCI_IRQ_LEGACY | PCI_IRQ_MSI);
if (ret < 0) {
dev_info(pdata->dev, "single IRQ enablement failed\n");
return ret;
}
pdata->irq_count = 1;
pdata->irq_shared = 1;
pdata->channel_irq_count = 1;
pdata->dev_irq = pdata->pcidev->irq;
pdata->ecc_irq = pdata->pcidev->irq;
pdata->i2c_irq = pdata->pcidev->irq;
pdata->an_irq = pdata->pcidev->irq;
pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
pdata->ecc_irq = pci_irq_vector(pdata->pcidev, 0);
pdata->i2c_irq = pci_irq_vector(pdata->pcidev, 0);
pdata->an_irq = pci_irq_vector(pdata->pcidev, 0);
if (netif_msg_probe(pdata))
dev_dbg(pdata->dev, "single %s interrupt enabled\n",
pdata->pcidev->msi_enabled ? "MSI" : "legacy");
out:
if (netif_msg_probe(pdata)) {
@ -425,12 +368,15 @@ static int xgbe_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
/* Configure the netdev resource */
ret = xgbe_config_netdev(pdata);
if (ret)
goto err_pci_enable;
goto err_irq_vectors;
netdev_notice(pdata->netdev, "net device enabled\n");
return 0;
err_irq_vectors:
pci_free_irq_vectors(pdata->pcidev);
err_pci_enable:
xgbe_free_pdata(pdata);
@ -446,6 +392,8 @@ static void xgbe_pci_remove(struct pci_dev *pdev)
xgbe_deconfig_netdev(pdata);
pci_free_irq_vectors(pdata->pcidev);
xgbe_free_pdata(pdata);
}


@ -211,9 +211,9 @@
#define XGBE_MAC_PROP_OFFSET 0x1d000
#define XGBE_I2C_CTRL_OFFSET 0x1e000
/* PCI MSIx support */
#define XGBE_MSIX_BASE_COUNT 4
#define XGBE_MSIX_MIN_COUNT (XGBE_MSIX_BASE_COUNT + 1)
/* PCI MSI/MSIx support */
#define XGBE_MSI_BASE_COUNT 4
#define XGBE_MSI_MIN_COUNT (XGBE_MSI_BASE_COUNT + 1)
/* PCI clock frequencies */
#define XGBE_V2_DMA_CLOCK_FREQ 500000000 /* 500 MHz */
@ -982,14 +982,12 @@ struct xgbe_prv_data {
unsigned int desc_ded_count;
unsigned int desc_sec_count;
struct msix_entry *msix_entries;
int dev_irq;
int ecc_irq;
int i2c_irq;
int channel_irq[XGBE_MAX_DMA_CHANNELS];
unsigned int per_channel_irq;
unsigned int irq_shared;
unsigned int irq_count;
unsigned int channel_irq_count;
unsigned int channel_irq_mode;


@ -132,4 +132,5 @@ config PCI_HYPERV
PCI devices from a PCI backend to support PCI driver domains.
source "drivers/pci/hotplug/Kconfig"
source "drivers/pci/dwc/Kconfig"
source "drivers/pci/host/Kconfig"


@ -367,7 +367,7 @@ static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size)
static int pci_vpd_wait(struct pci_dev *dev)
{
struct pci_vpd *vpd = dev->vpd;
unsigned long timeout = jiffies + msecs_to_jiffies(50);
unsigned long timeout = jiffies + msecs_to_jiffies(125);
unsigned long max_sleep = 16;
u16 status;
int ret;
@ -684,8 +684,9 @@ void pci_cfg_access_unlock(struct pci_dev *dev)
WARN_ON(!dev->block_cfg_access);
dev->block_cfg_access = 0;
wake_up_all(&pci_cfg_wait);
raw_spin_unlock_irqrestore(&pci_lock, flags);
wake_up_all(&pci_cfg_wait);
}
EXPORT_SYMBOL_GPL(pci_cfg_access_unlock);

drivers/pci/dwc/Kconfig (new file, 132 lines)

@ -0,0 +1,132 @@
menu "DesignWare PCI Core Support"
config PCIE_DW
bool
config PCIE_DW_HOST
bool
depends on PCI
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
config PCI_DRA7XX
bool "TI DRA7xx PCIe controller"
depends on PCI
depends on OF && HAS_IOMEM && TI_PIPE3
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
help
Enables support for the PCIe controller in the DRA7xx SoC. There
are two instances of PCIe controller in DRA7xx. This controller can
act both as EP and RC. This reuses the Designware core.
config PCIE_DW_PLAT
bool "Platform bus based DesignWare PCIe Controller"
depends on PCI
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
---help---
This selects the DesignWare PCIe controller support. Select this if
you have a PCIe controller on Platform bus.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config PCI_EXYNOS
bool "Samsung Exynos PCIe controller"
depends on PCI
depends on SOC_EXYNOS5440
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
config PCI_IMX6
bool "Freescale i.MX6 PCIe controller"
depends on PCI
depends on SOC_IMX6Q
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
config PCIE_SPEAR13XX
bool "STMicroelectronics SPEAr PCIe controller"
depends on PCI
depends on ARCH_SPEAR13XX
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want PCIe support on SPEAr13XX SoCs.
config PCI_KEYSTONE
bool "TI Keystone PCIe controller"
depends on PCI
depends on ARCH_KEYSTONE
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want to enable PCI controller support on Keystone
SoCs. The PCI controller on Keystone is based on Designware hardware
and therefore the driver re-uses the Designware core functions to
implement the driver.
config PCI_LAYERSCAPE
bool "Freescale Layerscape PCIe controller"
depends on PCI
depends on OF && (ARM || ARCH_LAYERSCAPE)
depends on PCI_MSI_IRQ_DOMAIN
select MFD_SYSCON
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support on Layerscape SoCs.
config PCI_HISI
depends on OF && ARM64
bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers"
depends on PCI
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support on HiSilicon
Hip05 and Hip06 SoCs
config PCIE_QCOM
bool "Qualcomm PCIe controller"
depends on PCI
depends on ARCH_QCOM && OF
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here to enable PCIe controller support on Qualcomm SoCs. The
PCIe controller uses the Designware core plus Qualcomm-specific
hardware wrappers.
config PCIE_ARMADA_8K
bool "Marvell Armada-8K PCIe controller"
depends on PCI
depends on ARCH_MVEBU
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want to enable PCIe controller support on
Armada-8K SoCs. The PCIe controller on Armada-8K is based on
Designware hardware and therefore the driver re-uses the
Designware core functions to implement the driver.
config PCIE_ARTPEC6
bool "Axis ARTPEC-6 PCIe controller"
depends on PCI
depends on MACH_ARTPEC6
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here to enable PCIe controller support on Axis ARTPEC-6
SoCs. This PCIe controller uses the DesignWare core.
endmenu

drivers/pci/dwc/Makefile (new file, 24 lines)

@ -0,0 +1,24 @@
obj-$(CONFIG_PCIE_DW) += pcie-designware.o
obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
# The following drivers are for devices that use the generic ACPI
# pci_root.c driver but don't support standard ECAM config access.
# They contain MCFG quirks to replace the generic ECAM accessors with
# device-specific ones that are shared with the DT driver.
# The ACPI driver is generic and should not require driver-specific
# config options to be enabled, so we always build these drivers on
# ARM64 and use internal ifdefs to only build the pieces we need
# depending on whether ACPI, the DT driver, or both are enabled.
obj-$(CONFIG_ARM64) += pcie-hisi.o


@ -17,6 +17,7 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/of_gpio.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
@ -63,14 +64,18 @@
#define LINK_UP BIT(16)
#define DRA7XX_CPU_TO_BUS_ADDR 0x0FFFFFFF
#define EXP_CAP_ID_OFFSET 0x70
struct dra7xx_pcie {
struct pcie_port pp;
struct dw_pcie *pci;
void __iomem *base; /* DT ti_conf */
int phy_count; /* DT phy-names count */
struct phy **phy;
int link_gen;
struct irq_domain *irq_domain;
};
#define to_dra7xx_pcie(x) container_of((x), struct dra7xx_pcie, pp)
#define to_dra7xx_pcie(x) dev_get_drvdata((x)->dev)
static inline u32 dra7xx_pcie_readl(struct dra7xx_pcie *pcie, u32 offset)
{
@ -83,9 +88,9 @@ static inline void dra7xx_pcie_writel(struct dra7xx_pcie *pcie, u32 offset,
writel(value, pcie->base + offset);
}
static int dra7xx_pcie_link_up(struct pcie_port *pp)
static int dra7xx_pcie_link_up(struct dw_pcie *pci)
{
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp);
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
u32 reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_PHY_CS);
return !!(reg & LINK_UP);
@ -93,20 +98,41 @@ static int dra7xx_pcie_link_up(struct pcie_port *pp)
static int dra7xx_pcie_establish_link(struct dra7xx_pcie *dra7xx)
{
struct pcie_port *pp = &dra7xx->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = dra7xx->pci;
struct device *dev = pci->dev;
u32 reg;
u32 exp_cap_off = EXP_CAP_ID_OFFSET;
if (dw_pcie_link_up(pp)) {
if (dw_pcie_link_up(pci)) {
dev_err(dev, "link is already up\n");
return 0;
}
if (dra7xx->link_gen == 1) {
dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCAP,
4, &reg);
if ((reg & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
reg &= ~((u32)PCI_EXP_LNKCAP_SLS);
reg |= PCI_EXP_LNKCAP_SLS_2_5GB;
dw_pcie_write(pci->dbi_base + exp_cap_off +
PCI_EXP_LNKCAP, 4, reg);
}
dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCTL2,
2, &reg);
if ((reg & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
reg &= ~((u32)PCI_EXP_LNKCAP_SLS);
reg |= PCI_EXP_LNKCAP_SLS_2_5GB;
dw_pcie_write(pci->dbi_base + exp_cap_off +
PCI_EXP_LNKCTL2, 2, reg);
}
}
reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD);
reg |= LTSSM_EN;
dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD, reg);
return dw_pcie_wait_for_link(pp);
return dw_pcie_wait_for_link(pci);
}
static void dra7xx_pcie_enable_interrupts(struct dra7xx_pcie *dra7xx)
@ -117,19 +143,14 @@ static void dra7xx_pcie_enable_interrupts(struct dra7xx_pcie *dra7xx)
PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MAIN, INTERRUPTS);
dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI,
~LEG_EP_INTERRUPTS & ~MSI);
if (IS_ENABLED(CONFIG_PCI_MSI))
dra7xx_pcie_writel(dra7xx,
PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MSI, MSI);
else
dra7xx_pcie_writel(dra7xx,
PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MSI,
LEG_EP_INTERRUPTS);
dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MSI,
MSI | LEG_EP_INTERRUPTS);
}
static void dra7xx_pcie_host_init(struct pcie_port *pp)
{
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
pp->io_base &= DRA7XX_CPU_TO_BUS_ADDR;
pp->mem_base &= DRA7XX_CPU_TO_BUS_ADDR;
@ -139,13 +160,11 @@ static void dra7xx_pcie_host_init(struct pcie_port *pp)
dw_pcie_setup_rc(pp);
dra7xx_pcie_establish_link(dra7xx);
if (IS_ENABLED(CONFIG_PCI_MSI))
dw_pcie_msi_init(pp);
dw_pcie_msi_init(pp);
dra7xx_pcie_enable_interrupts(dra7xx);
}
static struct pcie_host_ops dra7xx_pcie_host_ops = {
.link_up = dra7xx_pcie_link_up,
static struct dw_pcie_host_ops dra7xx_pcie_host_ops = {
.host_init = dra7xx_pcie_host_init,
};
@ -164,7 +183,9 @@ static const struct irq_domain_ops intx_domain_ops = {
static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
{
struct device *dev = pp->dev;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
struct device_node *node = dev->of_node;
struct device_node *pcie_intc_node = of_get_next_child(node, NULL);
@ -173,9 +194,9 @@ static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
return -ENODEV;
}
pp->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
&intx_domain_ops, pp);
if (!pp->irq_domain) {
dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
&intx_domain_ops, pp);
if (!dra7xx->irq_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return -ENODEV;
}
@ -186,7 +207,8 @@ static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
{
struct dra7xx_pcie *dra7xx = arg;
struct pcie_port *pp = &dra7xx->pp;
struct dw_pcie *pci = dra7xx->pci;
struct pcie_port *pp = &pci->pp;
u32 reg;
reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI);
@ -199,7 +221,8 @@ static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
case INTB:
case INTC:
case INTD:
generic_handle_irq(irq_find_mapping(pp->irq_domain, ffs(reg)));
generic_handle_irq(irq_find_mapping(dra7xx->irq_domain,
ffs(reg)));
break;
}
@ -212,7 +235,8 @@ static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg)
{
struct dra7xx_pcie *dra7xx = arg;
struct device *dev = dra7xx->pp.dev;
struct dw_pcie *pci = dra7xx->pci;
struct device *dev = pci->dev;
u32 reg;
reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MAIN);
@ -267,8 +291,9 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
struct platform_device *pdev)
{
int ret;
struct pcie_port *pp = &dra7xx->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = dra7xx->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
struct resource *res;
pp->irq = platform_get_irq(pdev, 1);
@ -285,15 +310,13 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
return ret;
}
if (!IS_ENABLED(CONFIG_PCI_MSI)) {
ret = dra7xx_pcie_init_irq_domain(pp);
if (ret < 0)
return ret;
}
ret = dra7xx_pcie_init_irq_domain(pp);
if (ret < 0)
return ret;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbics");
pp->dbi_base = devm_ioremap(dev, res->start, resource_size(res));
if (!pp->dbi_base)
pci->dbi_base = devm_ioremap(dev, res->start, resource_size(res));
if (!pci->dbi_base)
return -ENOMEM;
ret = dw_pcie_host_init(pp);
@ -305,6 +328,49 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = dra7xx_pcie_link_up,
};
static void dra7xx_pcie_disable_phy(struct dra7xx_pcie *dra7xx)
{
int phy_count = dra7xx->phy_count;
while (phy_count--) {
phy_power_off(dra7xx->phy[phy_count]);
phy_exit(dra7xx->phy[phy_count]);
}
}
static int dra7xx_pcie_enable_phy(struct dra7xx_pcie *dra7xx)
{
int phy_count = dra7xx->phy_count;
int ret;
int i;
for (i = 0; i < phy_count; i++) {
ret = phy_init(dra7xx->phy[i]);
if (ret < 0)
goto err_phy;
ret = phy_power_on(dra7xx->phy[i]);
if (ret < 0) {
phy_exit(dra7xx->phy[i]);
goto err_phy;
}
}
return 0;
err_phy:
while (--i >= 0) {
phy_power_off(dra7xx->phy[i]);
phy_exit(dra7xx->phy[i]);
}
return ret;
}
static int __init dra7xx_pcie_probe(struct platform_device *pdev)
{
u32 reg;
@ -315,21 +381,26 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
struct phy **phy;
void __iomem *base;
struct resource *res;
struct dra7xx_pcie *dra7xx;
struct dw_pcie *pci;
struct pcie_port *pp;
struct dra7xx_pcie *dra7xx;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
char name[10];
int gpio_sel;
enum of_gpio_flags flags;
unsigned long gpio_flags;
struct gpio_desc *reset;
dra7xx = devm_kzalloc(dev, sizeof(*dra7xx), GFP_KERNEL);
if (!dra7xx)
return -ENOMEM;
pp = &dra7xx->pp;
pp->dev = dev;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pp = &pci->pp;
pp->ops = &dra7xx_pcie_host_ops;
irq = platform_get_irq(pdev, 0);
@ -365,22 +436,21 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
phy[i] = devm_phy_get(dev, name);
if (IS_ERR(phy[i]))
return PTR_ERR(phy[i]);
ret = phy_init(phy[i]);
if (ret < 0)
goto err_phy;
ret = phy_power_on(phy[i]);
if (ret < 0) {
phy_exit(phy[i]);
goto err_phy;
}
}
dra7xx->base = base;
dra7xx->phy = phy;
dra7xx->pci = pci;
dra7xx->phy_count = phy_count;
ret = dra7xx_pcie_enable_phy(dra7xx);
if (ret) {
dev_err(dev, "failed to enable phy\n");
return ret;
}
platform_set_drvdata(pdev, dra7xx);
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
@ -388,19 +458,10 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
goto err_get_sync;
}
gpio_sel = of_get_gpio_flags(dev->of_node, 0, &flags);
if (gpio_is_valid(gpio_sel)) {
gpio_flags = (flags & OF_GPIO_ACTIVE_LOW) ?
GPIOF_OUT_INIT_LOW : GPIOF_OUT_INIT_HIGH;
ret = devm_gpio_request_one(dev, gpio_sel, gpio_flags,
"pcie_reset");
if (ret) {
dev_err(dev, "gpio%d request failed, ret %d\n",
gpio_sel, ret);
goto err_gpio;
}
} else if (gpio_sel == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
reset = devm_gpiod_get_optional(dev, NULL, GPIOD_OUT_HIGH);
if (IS_ERR(reset)) {
ret = PTR_ERR(reset);
dev_err(&pdev->dev, "gpio request failed, ret %d\n", ret);
goto err_gpio;
}
@ -408,11 +469,14 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
reg &= ~LTSSM_EN;
dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD, reg);
dra7xx->link_gen = of_pci_get_max_link_speed(np);
if (dra7xx->link_gen < 0 || dra7xx->link_gen > 2)
dra7xx->link_gen = 2;
ret = dra7xx_add_pcie_port(dra7xx, pdev);
if (ret < 0)
goto err_gpio;
platform_set_drvdata(pdev, dra7xx);
return 0;
err_gpio:
@ -420,12 +484,7 @@ err_gpio:
err_get_sync:
pm_runtime_disable(dev);
err_phy:
while (--i >= 0) {
phy_power_off(phy[i]);
phy_exit(phy[i]);
}
dra7xx_pcie_disable_phy(dra7xx);
return ret;
}
@ -434,13 +493,13 @@ err_phy:
static int dra7xx_pcie_suspend(struct device *dev)
{
struct dra7xx_pcie *dra7xx = dev_get_drvdata(dev);
struct pcie_port *pp = &dra7xx->pp;
struct dw_pcie *pci = dra7xx->pci;
u32 val;
/* clear MSE */
val = dw_pcie_readl_rc(pp, PCI_COMMAND);
val = dw_pcie_readl_dbi(pci, PCI_COMMAND);
val &= ~PCI_COMMAND_MEMORY;
dw_pcie_writel_rc(pp, PCI_COMMAND, val);
dw_pcie_writel_dbi(pci, PCI_COMMAND, val);
return 0;
}
@ -448,13 +507,13 @@ static int dra7xx_pcie_suspend(struct device *dev)
static int dra7xx_pcie_resume(struct device *dev)
{
struct dra7xx_pcie *dra7xx = dev_get_drvdata(dev);
struct pcie_port *pp = &dra7xx->pp;
struct dw_pcie *pci = dra7xx->pci;
u32 val;
/* set MSE */
val = dw_pcie_readl_rc(pp, PCI_COMMAND);
val = dw_pcie_readl_dbi(pci, PCI_COMMAND);
val |= PCI_COMMAND_MEMORY;
dw_pcie_writel_rc(pp, PCI_COMMAND, val);
dw_pcie_writel_dbi(pci, PCI_COMMAND, val);
return 0;
}
@ -462,12 +521,8 @@ static int dra7xx_pcie_resume(struct device *dev)
static int dra7xx_pcie_suspend_noirq(struct device *dev)
{
struct dra7xx_pcie *dra7xx = dev_get_drvdata(dev);
int count = dra7xx->phy_count;
while (count--) {
phy_power_off(dra7xx->phy[count]);
phy_exit(dra7xx->phy[count]);
}
dra7xx_pcie_disable_phy(dra7xx);
return 0;
}
@ -475,31 +530,15 @@ static int dra7xx_pcie_suspend_noirq(struct device *dev)
static int dra7xx_pcie_resume_noirq(struct device *dev)
{
struct dra7xx_pcie *dra7xx = dev_get_drvdata(dev);
int phy_count = dra7xx->phy_count;
int ret;
int i;
for (i = 0; i < phy_count; i++) {
ret = phy_init(dra7xx->phy[i]);
if (ret < 0)
goto err_phy;
ret = phy_power_on(dra7xx->phy[i]);
if (ret < 0) {
phy_exit(dra7xx->phy[i]);
goto err_phy;
}
ret = dra7xx_pcie_enable_phy(dra7xx);
if (ret) {
dev_err(dev, "failed to enable phy\n");
return ret;
}
return 0;
err_phy:
while (--i >= 0) {
phy_power_off(dra7xx->phy[i]);
phy_exit(dra7xx->phy[i]);
}
return ret;
}
#endif


@ -0,0 +1,751 @@
/*
* PCIe host controller driver for Samsung EXYNOS SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Jingoo Han <jg1.han@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/phy/phy.h>
#include <linux/resource.h>
#include <linux/signal.h>
#include <linux/types.h>
#include "pcie-designware.h"
#define to_exynos_pcie(x) dev_get_drvdata((x)->dev)
/* PCIe ELBI registers */
#define PCIE_IRQ_PULSE 0x000
#define IRQ_INTA_ASSERT BIT(0)
#define IRQ_INTB_ASSERT BIT(2)
#define IRQ_INTC_ASSERT BIT(4)
#define IRQ_INTD_ASSERT BIT(6)
#define PCIE_IRQ_LEVEL 0x004
#define PCIE_IRQ_SPECIAL 0x008
#define PCIE_IRQ_EN_PULSE 0x00c
#define PCIE_IRQ_EN_LEVEL 0x010
#define IRQ_MSI_ENABLE BIT(2)
#define PCIE_IRQ_EN_SPECIAL 0x014
#define PCIE_PWR_RESET 0x018
#define PCIE_CORE_RESET 0x01c
#define PCIE_CORE_RESET_ENABLE BIT(0)
#define PCIE_STICKY_RESET 0x020
#define PCIE_NONSTICKY_RESET 0x024
#define PCIE_APP_INIT_RESET 0x028
#define PCIE_APP_LTSSM_ENABLE 0x02c
#define PCIE_ELBI_RDLH_LINKUP 0x064
#define PCIE_ELBI_LTSSM_ENABLE 0x1
#define PCIE_ELBI_SLV_AWMISC 0x11c
#define PCIE_ELBI_SLV_ARMISC 0x120
#define PCIE_ELBI_SLV_DBI_ENABLE BIT(21)
/* PCIe Purple registers */
#define PCIE_PHY_GLOBAL_RESET 0x000
#define PCIE_PHY_COMMON_RESET 0x004
#define PCIE_PHY_CMN_REG 0x008
#define PCIE_PHY_MAC_RESET 0x00c
#define PCIE_PHY_PLL_LOCKED 0x010
#define PCIE_PHY_TRSVREG_RESET 0x020
#define PCIE_PHY_TRSV_RESET 0x024
/* PCIe PHY registers */
#define PCIE_PHY_IMPEDANCE 0x004
#define PCIE_PHY_PLL_DIV_0 0x008
#define PCIE_PHY_PLL_BIAS 0x00c
#define PCIE_PHY_DCC_FEEDBACK 0x014
#define PCIE_PHY_PLL_DIV_1 0x05c
#define PCIE_PHY_COMMON_POWER 0x064
#define PCIE_PHY_COMMON_PD_CMN BIT(3)
#define PCIE_PHY_TRSV0_EMP_LVL 0x084
#define PCIE_PHY_TRSV0_DRV_LVL 0x088
#define PCIE_PHY_TRSV0_RXCDR 0x0ac
#define PCIE_PHY_TRSV0_POWER 0x0c4
#define PCIE_PHY_TRSV0_PD_TSV BIT(7)
#define PCIE_PHY_TRSV0_LVCC 0x0dc
#define PCIE_PHY_TRSV1_EMP_LVL 0x144
#define PCIE_PHY_TRSV1_RXCDR 0x16c
#define PCIE_PHY_TRSV1_POWER 0x184
#define PCIE_PHY_TRSV1_PD_TSV BIT(7)
#define PCIE_PHY_TRSV1_LVCC 0x19c
#define PCIE_PHY_TRSV2_EMP_LVL 0x204
#define PCIE_PHY_TRSV2_RXCDR 0x22c
#define PCIE_PHY_TRSV2_POWER 0x244
#define PCIE_PHY_TRSV2_PD_TSV BIT(7)
#define PCIE_PHY_TRSV2_LVCC 0x25c
#define PCIE_PHY_TRSV3_EMP_LVL 0x2c4
#define PCIE_PHY_TRSV3_RXCDR 0x2ec
#define PCIE_PHY_TRSV3_POWER 0x304
#define PCIE_PHY_TRSV3_PD_TSV BIT(7)
#define PCIE_PHY_TRSV3_LVCC 0x31c
struct exynos_pcie_mem_res {
void __iomem *elbi_base; /* DT 0th resource: PCIe CTRL */
void __iomem *phy_base; /* DT 1st resource: PHY CTRL */
void __iomem *block_base; /* DT 2nd resource: PHY ADDITIONAL CTRL */
};
struct exynos_pcie_clk_res {
struct clk *clk;
struct clk *bus_clk;
};
struct exynos_pcie {
struct dw_pcie *pci;
struct exynos_pcie_mem_res *mem_res;
struct exynos_pcie_clk_res *clk_res;
const struct exynos_pcie_ops *ops;
int reset_gpio;
/* For Generic PHY Framework */
bool using_phy;
struct phy *phy;
};
struct exynos_pcie_ops {
int (*get_mem_resources)(struct platform_device *pdev,
struct exynos_pcie *ep);
int (*get_clk_resources)(struct exynos_pcie *ep);
int (*init_clk_resources)(struct exynos_pcie *ep);
void (*deinit_clk_resources)(struct exynos_pcie *ep);
};
static int exynos5440_pcie_get_mem_resources(struct platform_device *pdev,
struct exynos_pcie *ep)
{
struct dw_pcie *pci = ep->pci;
struct device *dev = pci->dev;
struct resource *res;
/* If using the PHY framework, we don't need to get the other resources */
if (ep->using_phy)
return 0;
ep->mem_res = devm_kzalloc(dev, sizeof(*ep->mem_res), GFP_KERNEL);
if (!ep->mem_res)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ep->mem_res->elbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(ep->mem_res->elbi_base))
return PTR_ERR(ep->mem_res->elbi_base);
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
ep->mem_res->phy_base = devm_ioremap_resource(dev, res);
if (IS_ERR(ep->mem_res->phy_base))
return PTR_ERR(ep->mem_res->phy_base);
res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
ep->mem_res->block_base = devm_ioremap_resource(dev, res);
if (IS_ERR(ep->mem_res->block_base))
return PTR_ERR(ep->mem_res->block_base);
return 0;
}
static int exynos5440_pcie_get_clk_resources(struct exynos_pcie *ep)
{
struct dw_pcie *pci = ep->pci;
struct device *dev = pci->dev;
ep->clk_res = devm_kzalloc(dev, sizeof(*ep->clk_res), GFP_KERNEL);
if (!ep->clk_res)
return -ENOMEM;
ep->clk_res->clk = devm_clk_get(dev, "pcie");
if (IS_ERR(ep->clk_res->clk)) {
dev_err(dev, "Failed to get pcie rc clock\n");
return PTR_ERR(ep->clk_res->clk);
}
ep->clk_res->bus_clk = devm_clk_get(dev, "pcie_bus");
if (IS_ERR(ep->clk_res->bus_clk)) {
dev_err(dev, "Failed to get pcie bus clock\n");
return PTR_ERR(ep->clk_res->bus_clk);
}
return 0;
}
static int exynos5440_pcie_init_clk_resources(struct exynos_pcie *ep)
{
struct dw_pcie *pci = ep->pci;
struct device *dev = pci->dev;
int ret;
ret = clk_prepare_enable(ep->clk_res->clk);
if (ret) {
dev_err(dev, "cannot enable pcie rc clock");
return ret;
}
ret = clk_prepare_enable(ep->clk_res->bus_clk);
if (ret) {
dev_err(dev, "cannot enable pcie bus clock");
goto err_bus_clk;
}
return 0;
err_bus_clk:
clk_disable_unprepare(ep->clk_res->clk);
return ret;
}
static void exynos5440_pcie_deinit_clk_resources(struct exynos_pcie *ep)
{
clk_disable_unprepare(ep->clk_res->bus_clk);
clk_disable_unprepare(ep->clk_res->clk);
}
static const struct exynos_pcie_ops exynos5440_pcie_ops = {
.get_mem_resources = exynos5440_pcie_get_mem_resources,
.get_clk_resources = exynos5440_pcie_get_clk_resources,
.init_clk_resources = exynos5440_pcie_init_clk_resources,
.deinit_clk_resources = exynos5440_pcie_deinit_clk_resources,
};
static void exynos_pcie_writel(void __iomem *base, u32 val, u32 reg)
{
writel(val, base + reg);
}
static u32 exynos_pcie_readl(void __iomem *base, u32 reg)
{
return readl(base + reg);
}
static void exynos_pcie_sideband_dbi_w_mode(struct exynos_pcie *ep, bool on)
{
u32 val;
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_ELBI_SLV_AWMISC);
if (on)
val |= PCIE_ELBI_SLV_DBI_ENABLE;
else
val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
exynos_pcie_writel(ep->mem_res->elbi_base, val, PCIE_ELBI_SLV_AWMISC);
}
static void exynos_pcie_sideband_dbi_r_mode(struct exynos_pcie *ep, bool on)
{
u32 val;
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_ELBI_SLV_ARMISC);
if (on)
val |= PCIE_ELBI_SLV_DBI_ENABLE;
else
val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
exynos_pcie_writel(ep->mem_res->elbi_base, val, PCIE_ELBI_SLV_ARMISC);
}
static void exynos_pcie_assert_core_reset(struct exynos_pcie *ep)
{
u32 val;
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_CORE_RESET);
val &= ~PCIE_CORE_RESET_ENABLE;
exynos_pcie_writel(ep->mem_res->elbi_base, val, PCIE_CORE_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 0, PCIE_PWR_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 0, PCIE_STICKY_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 0, PCIE_NONSTICKY_RESET);
}
static void exynos_pcie_deassert_core_reset(struct exynos_pcie *ep)
{
u32 val;
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_CORE_RESET);
val |= PCIE_CORE_RESET_ENABLE;
exynos_pcie_writel(ep->mem_res->elbi_base, val, PCIE_CORE_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 1, PCIE_STICKY_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 1, PCIE_NONSTICKY_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 1, PCIE_APP_INIT_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 0, PCIE_APP_INIT_RESET);
exynos_pcie_writel(ep->mem_res->block_base, 1, PCIE_PHY_MAC_RESET);
}
static void exynos_pcie_assert_phy_reset(struct exynos_pcie *ep)
{
exynos_pcie_writel(ep->mem_res->block_base, 0, PCIE_PHY_MAC_RESET);
exynos_pcie_writel(ep->mem_res->block_base, 1, PCIE_PHY_GLOBAL_RESET);
}
static void exynos_pcie_deassert_phy_reset(struct exynos_pcie *ep)
{
exynos_pcie_writel(ep->mem_res->block_base, 0, PCIE_PHY_GLOBAL_RESET);
exynos_pcie_writel(ep->mem_res->elbi_base, 1, PCIE_PWR_RESET);
exynos_pcie_writel(ep->mem_res->block_base, 0, PCIE_PHY_COMMON_RESET);
exynos_pcie_writel(ep->mem_res->block_base, 0, PCIE_PHY_CMN_REG);
exynos_pcie_writel(ep->mem_res->block_base, 0, PCIE_PHY_TRSVREG_RESET);
exynos_pcie_writel(ep->mem_res->block_base, 0, PCIE_PHY_TRSV_RESET);
}
static void exynos_pcie_power_on_phy(struct exynos_pcie *ep)
{
u32 val;
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_COMMON_POWER);
val &= ~PCIE_PHY_COMMON_PD_CMN;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_COMMON_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV0_POWER);
val &= ~PCIE_PHY_TRSV0_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV0_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV1_POWER);
val &= ~PCIE_PHY_TRSV1_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV1_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV2_POWER);
val &= ~PCIE_PHY_TRSV2_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV2_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV3_POWER);
val &= ~PCIE_PHY_TRSV3_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV3_POWER);
}
static void exynos_pcie_power_off_phy(struct exynos_pcie *ep)
{
u32 val;
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_COMMON_POWER);
val |= PCIE_PHY_COMMON_PD_CMN;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_COMMON_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV0_POWER);
val |= PCIE_PHY_TRSV0_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV0_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV1_POWER);
val |= PCIE_PHY_TRSV1_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV1_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV2_POWER);
val |= PCIE_PHY_TRSV2_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV2_POWER);
val = exynos_pcie_readl(ep->mem_res->phy_base, PCIE_PHY_TRSV3_POWER);
val |= PCIE_PHY_TRSV3_PD_TSV;
exynos_pcie_writel(ep->mem_res->phy_base, val, PCIE_PHY_TRSV3_POWER);
}
static void exynos_pcie_init_phy(struct exynos_pcie *ep)
{
/* DCC feedback control off */
exynos_pcie_writel(ep->mem_res->phy_base, 0x29, PCIE_PHY_DCC_FEEDBACK);
/* set TX/RX impedance */
exynos_pcie_writel(ep->mem_res->phy_base, 0xd5, PCIE_PHY_IMPEDANCE);
/* set 50Mhz PHY clock */
exynos_pcie_writel(ep->mem_res->phy_base, 0x14, PCIE_PHY_PLL_DIV_0);
exynos_pcie_writel(ep->mem_res->phy_base, 0x12, PCIE_PHY_PLL_DIV_1);
/* set TX Differential output for lane 0 */
exynos_pcie_writel(ep->mem_res->phy_base, 0x7f, PCIE_PHY_TRSV0_DRV_LVL);
/* set TX Pre-emphasis Level Control for lane 0 to minimum */
exynos_pcie_writel(ep->mem_res->phy_base, 0x0, PCIE_PHY_TRSV0_EMP_LVL);
/* set RX clock and data recovery bandwidth */
exynos_pcie_writel(ep->mem_res->phy_base, 0xe7, PCIE_PHY_PLL_BIAS);
exynos_pcie_writel(ep->mem_res->phy_base, 0x82, PCIE_PHY_TRSV0_RXCDR);
exynos_pcie_writel(ep->mem_res->phy_base, 0x82, PCIE_PHY_TRSV1_RXCDR);
exynos_pcie_writel(ep->mem_res->phy_base, 0x82, PCIE_PHY_TRSV2_RXCDR);
exynos_pcie_writel(ep->mem_res->phy_base, 0x82, PCIE_PHY_TRSV3_RXCDR);
/* change TX Pre-emphasis Level Control for lanes */
exynos_pcie_writel(ep->mem_res->phy_base, 0x39, PCIE_PHY_TRSV0_EMP_LVL);
exynos_pcie_writel(ep->mem_res->phy_base, 0x39, PCIE_PHY_TRSV1_EMP_LVL);
exynos_pcie_writel(ep->mem_res->phy_base, 0x39, PCIE_PHY_TRSV2_EMP_LVL);
exynos_pcie_writel(ep->mem_res->phy_base, 0x39, PCIE_PHY_TRSV3_EMP_LVL);
/* set LVCC */
exynos_pcie_writel(ep->mem_res->phy_base, 0x20, PCIE_PHY_TRSV0_LVCC);
exynos_pcie_writel(ep->mem_res->phy_base, 0xa0, PCIE_PHY_TRSV1_LVCC);
exynos_pcie_writel(ep->mem_res->phy_base, 0xa0, PCIE_PHY_TRSV2_LVCC);
exynos_pcie_writel(ep->mem_res->phy_base, 0xa0, PCIE_PHY_TRSV3_LVCC);
}
static void exynos_pcie_assert_reset(struct exynos_pcie *ep)
{
struct dw_pcie *pci = ep->pci;
struct device *dev = pci->dev;
if (ep->reset_gpio >= 0)
devm_gpio_request_one(dev, ep->reset_gpio,
GPIOF_OUT_INIT_HIGH, "RESET");
}
static int exynos_pcie_establish_link(struct exynos_pcie *ep)
{
struct dw_pcie *pci = ep->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
u32 val;
if (dw_pcie_link_up(pci)) {
dev_err(dev, "Link already up\n");
return 0;
}
exynos_pcie_assert_core_reset(ep);
if (ep->using_phy) {
phy_reset(ep->phy);
exynos_pcie_writel(ep->mem_res->elbi_base, 1,
PCIE_PWR_RESET);
phy_power_on(ep->phy);
phy_init(ep->phy);
} else {
exynos_pcie_assert_phy_reset(ep);
exynos_pcie_deassert_phy_reset(ep);
exynos_pcie_power_on_phy(ep);
exynos_pcie_init_phy(ep);
/* pulse for common reset */
exynos_pcie_writel(ep->mem_res->block_base, 1,
PCIE_PHY_COMMON_RESET);
udelay(500);
exynos_pcie_writel(ep->mem_res->block_base, 0,
PCIE_PHY_COMMON_RESET);
}
/* pulse for common reset */
exynos_pcie_writel(ep->mem_res->block_base, 1, PCIE_PHY_COMMON_RESET);
udelay(500);
exynos_pcie_writel(ep->mem_res->block_base, 0, PCIE_PHY_COMMON_RESET);
exynos_pcie_deassert_core_reset(ep);
dw_pcie_setup_rc(pp);
exynos_pcie_assert_reset(ep);
/* assert LTSSM enable */
exynos_pcie_writel(ep->mem_res->elbi_base, PCIE_ELBI_LTSSM_ENABLE,
PCIE_APP_LTSSM_ENABLE);
/* check if the link is up or not */
if (!dw_pcie_wait_for_link(pci))
return 0;
if (ep->using_phy) {
phy_power_off(ep->phy);
return -ETIMEDOUT;
}
while (exynos_pcie_readl(ep->mem_res->phy_base,
PCIE_PHY_PLL_LOCKED) == 0) {
val = exynos_pcie_readl(ep->mem_res->block_base,
PCIE_PHY_PLL_LOCKED);
dev_info(dev, "PLL Locked: 0x%x\n", val);
}
exynos_pcie_power_off_phy(ep);
return -ETIMEDOUT;
}
static void exynos_pcie_clear_irq_pulse(struct exynos_pcie *ep)
{
u32 val;
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_IRQ_PULSE);
exynos_pcie_writel(ep->mem_res->elbi_base, val, PCIE_IRQ_PULSE);
}
static void exynos_pcie_enable_irq_pulse(struct exynos_pcie *ep)
{
u32 val;
/* enable INTX interrupt */
val = IRQ_INTA_ASSERT | IRQ_INTB_ASSERT |
IRQ_INTC_ASSERT | IRQ_INTD_ASSERT;
exynos_pcie_writel(ep->mem_res->elbi_base, val, PCIE_IRQ_EN_PULSE);
}
static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg)
{
struct exynos_pcie *ep = arg;
exynos_pcie_clear_irq_pulse(ep);
return IRQ_HANDLED;
}
static irqreturn_t exynos_pcie_msi_irq_handler(int irq, void *arg)
{
struct exynos_pcie *ep = arg;
struct dw_pcie *pci = ep->pci;
struct pcie_port *pp = &pci->pp;
return dw_handle_msi_irq(pp);
}
static void exynos_pcie_msi_init(struct exynos_pcie *ep)
{
struct dw_pcie *pci = ep->pci;
struct pcie_port *pp = &pci->pp;
u32 val;
dw_pcie_msi_init(pp);
/* enable MSI interrupt */
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_IRQ_EN_LEVEL);
val |= IRQ_MSI_ENABLE;
exynos_pcie_writel(ep->mem_res->elbi_base, val, PCIE_IRQ_EN_LEVEL);
}
static void exynos_pcie_enable_interrupts(struct exynos_pcie *ep)
{
exynos_pcie_enable_irq_pulse(ep);
if (IS_ENABLED(CONFIG_PCI_MSI))
exynos_pcie_msi_init(ep);
}
static u32 exynos_pcie_readl_dbi(struct dw_pcie *pci, u32 reg)
{
struct exynos_pcie *ep = to_exynos_pcie(pci);
u32 val;
exynos_pcie_sideband_dbi_r_mode(ep, true);
val = readl(pci->dbi_base + reg);
exynos_pcie_sideband_dbi_r_mode(ep, false);
return val;
}
static void exynos_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
{
struct exynos_pcie *ep = to_exynos_pcie(pci);
exynos_pcie_sideband_dbi_w_mode(ep, true);
writel(val, pci->dbi_base + reg);
exynos_pcie_sideband_dbi_w_mode(ep, false);
}
static int exynos_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
u32 *val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct exynos_pcie *ep = to_exynos_pcie(pci);
int ret;
exynos_pcie_sideband_dbi_r_mode(ep, true);
ret = dw_pcie_read(pci->dbi_base + where, size, val);
exynos_pcie_sideband_dbi_r_mode(ep, false);
return ret;
}
static int exynos_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
u32 val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct exynos_pcie *ep = to_exynos_pcie(pci);
int ret;
exynos_pcie_sideband_dbi_w_mode(ep, true);
ret = dw_pcie_write(pci->dbi_base + where, size, val);
exynos_pcie_sideband_dbi_w_mode(ep, false);
return ret;
}
static int exynos_pcie_link_up(struct dw_pcie *pci)
{
struct exynos_pcie *ep = to_exynos_pcie(pci);
u32 val;
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_ELBI_RDLH_LINKUP);
if (val == PCIE_ELBI_LTSSM_ENABLE)
return 1;
return 0;
}
static void exynos_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct exynos_pcie *ep = to_exynos_pcie(pci);
exynos_pcie_establish_link(ep);
exynos_pcie_enable_interrupts(ep);
}
static struct dw_pcie_host_ops exynos_pcie_host_ops = {
.rd_own_conf = exynos_pcie_rd_own_conf,
.wr_own_conf = exynos_pcie_wr_own_conf,
.host_init = exynos_pcie_host_init,
};
static int __init exynos_add_pcie_port(struct exynos_pcie *ep,
struct platform_device *pdev)
{
struct dw_pcie *pci = ep->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
pp->irq = platform_get_irq(pdev, 1);
if (!pp->irq) {
dev_err(dev, "failed to get irq\n");
return -ENODEV;
}
ret = devm_request_irq(dev, pp->irq, exynos_pcie_irq_handler,
IRQF_SHARED, "exynos-pcie", ep);
if (ret) {
dev_err(dev, "failed to request irq\n");
return ret;
}
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->msi_irq = platform_get_irq(pdev, 0);
if (!pp->msi_irq) {
dev_err(dev, "failed to get msi irq\n");
return -ENODEV;
}
ret = devm_request_irq(dev, pp->msi_irq,
exynos_pcie_msi_irq_handler,
IRQF_SHARED | IRQF_NO_THREAD,
"exynos-pcie", ep);
if (ret) {
dev_err(dev, "failed to request msi irq\n");
return ret;
}
}
pp->root_bus_nr = -1;
pp->ops = &exynos_pcie_host_ops;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
}
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.readl_dbi = exynos_pcie_readl_dbi,
.writel_dbi = exynos_pcie_writel_dbi,
.link_up = exynos_pcie_link_up,
};
static int __init exynos_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
struct exynos_pcie *ep;
struct device_node *np = dev->of_node;
int ret;
ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
if (!ep)
return -ENOMEM;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
ep->ops = (const struct exynos_pcie_ops *)
of_device_get_match_data(dev);
ep->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0);
/* Assume that controller doesn't use the PHY framework */
ep->using_phy = false;
ep->phy = devm_of_phy_get(dev, np, NULL);
if (IS_ERR(ep->phy)) {
if (PTR_ERR(ep->phy) == -EPROBE_DEFER)
return PTR_ERR(ep->phy);
dev_warn(dev, "Use the 'phy' property. Current DT of pci-exynos was deprecated!!\n");
} else
ep->using_phy = true;
if (ep->ops && ep->ops->get_mem_resources) {
ret = ep->ops->get_mem_resources(pdev, ep);
if (ret)
return ret;
}
if (ep->ops && ep->ops->get_clk_resources) {
ret = ep->ops->get_clk_resources(ep);
if (ret)
return ret;
ret = ep->ops->init_clk_resources(ep);
if (ret)
return ret;
}
platform_set_drvdata(pdev, ep);
ret = exynos_add_pcie_port(ep, pdev);
if (ret < 0)
goto fail_probe;
return 0;
fail_probe:
if (ep->using_phy)
phy_exit(ep->phy);
if (ep->ops && ep->ops->deinit_clk_resources)
ep->ops->deinit_clk_resources(ep);
return ret;
}
static int __exit exynos_pcie_remove(struct platform_device *pdev)
{
struct exynos_pcie *ep = platform_get_drvdata(pdev);
if (ep->ops && ep->ops->deinit_clk_resources)
ep->ops->deinit_clk_resources(ep);
return 0;
}
static const struct of_device_id exynos_pcie_of_match[] = {
{
.compatible = "samsung,exynos5440-pcie",
.data = &exynos5440_pcie_ops
},
{},
};
static struct platform_driver exynos_pcie_driver = {
.remove = __exit_p(exynos_pcie_remove),
.driver = {
.name = "exynos-pcie",
.of_match_table = exynos_pcie_of_match,
},
};
/* Exynos PCIe driver does not allow module unload */
static int __init exynos_pcie_init(void)
{
return platform_driver_probe(&exynos_pcie_driver, exynos_pcie_probe);
}
subsys_initcall(exynos_pcie_init);
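The i.MX6 diff that follows opens with the accessor change at the heart of this refactor, repeated in every DesignWare host driver in the series: the old to_imx6_pcie() recovered the driver state from an embedded struct pcie_port via container_of(), while the new one stores the state pointer with platform_set_drvdata() and fetches it back through the device. A minimal standalone sketch of the two patterns (plain C, illustrative names only, not the kernel code itself):

#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel structures. */
struct device { void *driver_data; };
struct dw_pcie { struct device *dev; };

struct old_pcie {
	struct dw_pcie core;	/* embedded: container_of() works */
	int reset_gpio;
};

struct new_pcie {
	struct dw_pcie *core;	/* referenced: container_of() cannot work */
	int reset_gpio;
};

/* Old pattern: walk back from the embedded member to the wrapper. */
#define to_old_pcie(x) \
	((struct old_pcie *)((char *)(x) - offsetof(struct old_pcie, core)))

/* New pattern: probe stored the wrapper in the device; fetch it back. */
#define to_new_pcie(x) ((struct new_pcie *)((x)->dev->driver_data))

int main(void)
{
	struct device dev_a = { 0 }, dev_b = { 0 };
	struct old_pcie a = { .core = { .dev = &dev_a }, .reset_gpio = 1 };
	struct dw_pcie core_b = { .dev = &dev_b };
	struct new_pcie b = { .core = &core_b, .reset_gpio = 2 };

	dev_b.driver_data = &b;	/* what platform_set_drvdata() does */

	printf("%d %d\n", to_old_pcie(&a.core)->reset_gpio,
	       to_new_pcie(&core_b)->reset_gpio);
	return 0;
}

The drvdata form is what lets struct dw_pcie be shared by different wrappers (host or endpoint) without baking the wrapper type into the core, which is the point of this series.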


@@ -30,7 +30,7 @@
#include "pcie-designware.h"
#define to_imx6_pcie(x) container_of(x, struct imx6_pcie, pp)
#define to_imx6_pcie(x) dev_get_drvdata((x)->dev)
enum imx6_pcie_variants {
IMX6Q,
@@ -39,7 +39,7 @@ enum imx6_pcie_variants {
};
struct imx6_pcie {
struct pcie_port pp; /* pp.dbi_base is DT 0th resource */
struct dw_pcie *pci;
int reset_gpio;
bool gpio_active_high;
struct clk *pcie_bus;
@@ -97,13 +97,13 @@ struct imx6_pcie {
static int pcie_phy_poll_ack(struct imx6_pcie *imx6_pcie, int exp_val)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct dw_pcie *pci = imx6_pcie->pci;
u32 val;
u32 max_iterations = 10;
u32 wait_counter = 0;
do {
val = dw_pcie_readl_rc(pp, PCIE_PHY_STAT);
val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);
val = (val >> PCIE_PHY_STAT_ACK_LOC) & 0x1;
wait_counter++;
@@ -118,22 +118,22 @@ static int pcie_phy_poll_ack(struct imx6_pcie *imx6_pcie, int exp_val)
static int pcie_phy_wait_ack(struct imx6_pcie *imx6_pcie, int addr)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct dw_pcie *pci = imx6_pcie->pci;
u32 val;
int ret;
val = addr << PCIE_PHY_CTRL_DATA_LOC;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, val);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);
val |= (0x1 << PCIE_PHY_CTRL_CAP_ADR_LOC);
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, val);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);
ret = pcie_phy_poll_ack(imx6_pcie, 1);
if (ret)
return ret;
val = addr << PCIE_PHY_CTRL_DATA_LOC;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, val);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);
return pcie_phy_poll_ack(imx6_pcie, 0);
}
@@ -141,7 +141,7 @@ static int pcie_phy_wait_ack(struct imx6_pcie *imx6_pcie, int addr)
/* Read from the 16-bit PCIe PHY control registers (not memory-mapped) */
static int pcie_phy_read(struct imx6_pcie *imx6_pcie, int addr, int *data)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct dw_pcie *pci = imx6_pcie->pci;
u32 val, phy_ctl;
int ret;
@@ -151,24 +151,24 @@ static int pcie_phy_read(struct imx6_pcie *imx6_pcie, int addr, int *data)
/* assert Read signal */
phy_ctl = 0x1 << PCIE_PHY_CTRL_RD_LOC;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, phy_ctl);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, phy_ctl);
ret = pcie_phy_poll_ack(imx6_pcie, 1);
if (ret)
return ret;
val = dw_pcie_readl_rc(pp, PCIE_PHY_STAT);
val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);
*data = val & 0xffff;
/* deassert Read signal */
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, 0x00);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, 0x00);
return pcie_phy_poll_ack(imx6_pcie, 0);
}
static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct dw_pcie *pci = imx6_pcie->pci;
u32 var;
int ret;
@@ -179,11 +179,11 @@ static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
return ret;
var = data << PCIE_PHY_CTRL_DATA_LOC;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, var);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
/* capture data */
var |= (0x1 << PCIE_PHY_CTRL_CAP_DAT_LOC);
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, var);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
ret = pcie_phy_poll_ack(imx6_pcie, 1);
if (ret)
@@ -191,7 +191,7 @@ static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
/* deassert cap data */
var = data << PCIE_PHY_CTRL_DATA_LOC;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, var);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
/* wait for ack de-assertion */
ret = pcie_phy_poll_ack(imx6_pcie, 0);
@@ -200,7 +200,7 @@ static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
/* assert wr signal */
var = 0x1 << PCIE_PHY_CTRL_WR_LOC;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, var);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
/* wait for ack */
ret = pcie_phy_poll_ack(imx6_pcie, 1);
@@ -209,14 +209,14 @@ static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
/* deassert wr signal */
var = data << PCIE_PHY_CTRL_DATA_LOC;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, var);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
/* wait for ack de-assertion */
ret = pcie_phy_poll_ack(imx6_pcie, 0);
if (ret)
return ret;
dw_pcie_writel_rc(pp, PCIE_PHY_CTRL, 0x0);
dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, 0x0);
return 0;
}
@@ -247,9 +247,6 @@ static int imx6q_pcie_abort_handler(unsigned long addr,
static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
{
struct pcie_port *pp = &imx6_pcie->pp;
u32 val, gpr1, gpr12;
switch (imx6_pcie->variant) {
case IMX6SX:
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
@@ -266,33 +263,6 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
IMX6Q_GPR1_PCIE_SW_RST);
break;
case IMX6Q:
/*
* If the bootloader already enabled the link we need some
* special handling to get the core back into a state where
* it is safe to touch it for configuration. As there is
* no dedicated reset signal wired up for MX6QDL, we need
* to manually force LTSSM into "detect" state before
* completely disabling LTSSM, which is a prerequisite for
* core configuration.
*
* If both LTSSM_ENABLE and REF_SSP_ENABLE are active we
* have a strong indication that the bootloader activated
* the link.
*/
regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, &gpr1);
regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, &gpr12);
if ((gpr1 & IMX6Q_GPR1_PCIE_REF_CLK_EN) &&
(gpr12 & IMX6Q_GPR12_PCIE_CTL_2)) {
val = dw_pcie_readl_rc(pp, PCIE_PL_PFLR);
val &= ~PCIE_PL_PFLR_LINK_STATE_MASK;
val |= PCIE_PL_PFLR_FORCE_LINK;
dw_pcie_writel_rc(pp, PCIE_PL_PFLR, val);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
}
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
@@ -303,8 +273,8 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = imx6_pcie->pci;
struct device *dev = pci->dev;
int ret = 0;
switch (imx6_pcie->variant) {
@@ -340,8 +310,8 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = imx6_pcie->pci;
struct device *dev = pci->dev;
int ret;
ret = clk_prepare_enable(imx6_pcie->pcie_phy);
@@ -440,28 +410,28 @@ static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie)
static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = imx6_pcie->pci;
struct device *dev = pci->dev;
/* check if the link is up or not */
if (!dw_pcie_wait_for_link(pp))
if (!dw_pcie_wait_for_link(pci))
return 0;
dev_dbg(dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
dw_pcie_readl_rc(pp, PCIE_PHY_DEBUG_R0),
dw_pcie_readl_rc(pp, PCIE_PHY_DEBUG_R1));
dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R0),
dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1));
return -ETIMEDOUT;
}
static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = imx6_pcie->pci;
struct device *dev = pci->dev;
u32 tmp;
unsigned int retries;
for (retries = 0; retries < 200; retries++) {
tmp = dw_pcie_readl_rc(pp, PCIE_LINK_WIDTH_SPEED_CONTROL);
tmp = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
/* Test if the speed change finished. */
if (!(tmp & PORT_LOGIC_SPEED_CHANGE))
return 0;
@@ -475,15 +445,16 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
{
struct imx6_pcie *imx6_pcie = arg;
struct pcie_port *pp = &imx6_pcie->pp;
struct dw_pcie *pci = imx6_pcie->pci;
struct pcie_port *pp = &pci->pp;
return dw_handle_msi_irq(pp);
}
static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = imx6_pcie->pci;
struct device *dev = pci->dev;
u32 tmp;
int ret;
@@ -492,27 +463,25 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
* started in Gen2 mode, there is a possibility the devices on the
* bus will not be detected at all. This happens with PCIe switches.
*/
tmp = dw_pcie_readl_rc(pp, PCIE_RC_LCR);
tmp = dw_pcie_readl_dbi(pci, PCIE_RC_LCR);
tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK;
tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN1;
dw_pcie_writel_rc(pp, PCIE_RC_LCR, tmp);
dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp);
/* Start LTSSM. */
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
ret = imx6_pcie_wait_for_link(imx6_pcie);
if (ret) {
dev_info(dev, "Link never came up\n");
if (ret)
goto err_reset_phy;
}
if (imx6_pcie->link_gen == 2) {
/* Allow Gen2 mode after the link is up. */
tmp = dw_pcie_readl_rc(pp, PCIE_RC_LCR);
tmp = dw_pcie_readl_dbi(pci, PCIE_RC_LCR);
tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK;
tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2;
dw_pcie_writel_rc(pp, PCIE_RC_LCR, tmp);
dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp);
} else {
dev_info(dev, "Link: Gen2 disabled\n");
}
@@ -521,9 +490,9 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
* Start Directed Speed Change so the best possible speed both link
* partners support can be negotiated.
*/
tmp = dw_pcie_readl_rc(pp, PCIE_LINK_WIDTH_SPEED_CONTROL);
tmp = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
tmp |= PORT_LOGIC_SPEED_CHANGE;
dw_pcie_writel_rc(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp);
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp);
ret = imx6_pcie_wait_for_speed_change(imx6_pcie);
if (ret) {
@@ -538,21 +507,22 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
goto err_reset_phy;
}
tmp = dw_pcie_readl_rc(pp, PCIE_RC_LCSR);
tmp = dw_pcie_readl_dbi(pci, PCIE_RC_LCSR);
dev_info(dev, "Link up, Gen%i\n", (tmp >> 16) & 0xf);
return 0;
err_reset_phy:
dev_dbg(dev, "PHY DEBUG_R0=0x%08x DEBUG_R1=0x%08x\n",
dw_pcie_readl_rc(pp, PCIE_PHY_DEBUG_R0),
dw_pcie_readl_rc(pp, PCIE_PHY_DEBUG_R1));
dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R0),
dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1));
imx6_pcie_reset_phy(imx6_pcie);
return ret;
}
static void imx6_pcie_host_init(struct pcie_port *pp)
{
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci);
imx6_pcie_assert_core_reset(imx6_pcie);
imx6_pcie_init_phy(imx6_pcie);
@@ -564,22 +534,22 @@ static void imx6_pcie_host_init(struct pcie_port *pp)
dw_pcie_msi_init(pp);
}
static int imx6_pcie_link_up(struct pcie_port *pp)
static int imx6_pcie_link_up(struct dw_pcie *pci)
{
return dw_pcie_readl_rc(pp, PCIE_PHY_DEBUG_R1) &
return dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1) &
PCIE_PHY_DEBUG_R1_XMLH_LINK_UP;
}
static struct pcie_host_ops imx6_pcie_host_ops = {
.link_up = imx6_pcie_link_up,
static struct dw_pcie_host_ops imx6_pcie_host_ops = {
.host_init = imx6_pcie_host_init,
};
static int __init imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
struct platform_device *pdev)
{
struct pcie_port *pp = &imx6_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = imx6_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
@@ -611,11 +581,15 @@ static int __init imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = imx6_pcie_link_up,
};
static int __init imx6_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
struct imx6_pcie *imx6_pcie;
struct pcie_port *pp;
struct resource *dbi_base;
struct device_node *node = dev->of_node;
int ret;
@@ -624,8 +598,12 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
if (!imx6_pcie)
return -ENOMEM;
pp = &imx6_pcie->pp;
pp->dev = dev;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
imx6_pcie->variant =
(enum imx6_pcie_variants)of_device_get_match_data(dev);
@@ -635,9 +613,9 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
"imprecise external abort");
dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pp->dbi_base = devm_ioremap_resource(dev, dbi_base);
if (IS_ERR(pp->dbi_base))
return PTR_ERR(pp->dbi_base);
pci->dbi_base = devm_ioremap_resource(dev, dbi_base);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
/* Fetch GPIOs */
imx6_pcie->reset_gpio = of_get_named_gpio(node, "reset-gpio", 0);
@@ -678,8 +656,7 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
imx6_pcie->pcie_inbound_axi = devm_clk_get(dev,
"pcie_inbound_axi");
if (IS_ERR(imx6_pcie->pcie_inbound_axi)) {
dev_err(dev,
"pcie_incbound_axi clock missing or invalid\n");
dev_err(dev, "pcie_inbound_axi clock missing or invalid\n");
return PTR_ERR(imx6_pcie->pcie_inbound_axi);
}
}
@@ -719,11 +696,12 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
if (ret)
imx6_pcie->link_gen = 1;
platform_set_drvdata(pdev, imx6_pcie);
ret = imx6_add_pcie_port(imx6_pcie, pdev);
if (ret < 0)
return ret;
platform_set_drvdata(pdev, imx6_pcie);
return 0;
}
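Note the ordering change at the end of probe: platform_set_drvdata() now runs before imx6_add_pcie_port(). Since to_imx6_pcie() is now built on dev_get_drvdata(), the host_init callback invoked while the port is being registered would otherwise dereference a NULL drvdata pointer.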


@@ -72,7 +72,7 @@
/* Config space registers */
#define DEBUG0 0x728
#define to_keystone_pcie(x) container_of(x, struct keystone_pcie, pp)
#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset,
u32 *bit_pos)
@@ -83,7 +83,8 @@ static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset,
phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp)
{
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
return ks_pcie->app.start + MSI_IRQ;
}
@@ -100,8 +101,9 @@ static void ks_dw_app_writel(struct keystone_pcie *ks_pcie, u32 offset, u32 val)
void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset)
{
struct pcie_port *pp = &ks_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
u32 pending, vector;
int src, virq;
@@ -128,10 +130,12 @@ static void ks_dw_pcie_msi_irq_ack(struct irq_data *d)
struct keystone_pcie *ks_pcie;
struct msi_desc *msi;
struct pcie_port *pp;
struct dw_pcie *pci;
msi = irq_data_get_msi_desc(d);
pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
ks_pcie = to_keystone_pcie(pp);
pci = to_dw_pcie_from_pp(pp);
ks_pcie = to_keystone_pcie(pci);
offset = d->irq - irq_linear_revmap(pp->irq_domain, 0);
update_reg_offset_bit_pos(offset, &reg_offset, &bit_pos);
@@ -143,7 +147,8 @@ static void ks_dw_pcie_msi_irq_ack(struct irq_data *d)
void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
{
u32 reg_offset, bit_pos;
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4),
@@ -153,7 +158,8 @@ void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
{
u32 reg_offset, bit_pos;
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4),
@@ -165,11 +171,13 @@ static void ks_dw_pcie_msi_irq_mask(struct irq_data *d)
struct keystone_pcie *ks_pcie;
struct msi_desc *msi;
struct pcie_port *pp;
struct dw_pcie *pci;
u32 offset;
msi = irq_data_get_msi_desc(d);
pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
ks_pcie = to_keystone_pcie(pp);
pci = to_dw_pcie_from_pp(pp);
ks_pcie = to_keystone_pcie(pci);
offset = d->irq - irq_linear_revmap(pp->irq_domain, 0);
/* Mask the end point if PVM implemented */
@ -186,11 +194,13 @@ static void ks_dw_pcie_msi_irq_unmask(struct irq_data *d)
struct keystone_pcie *ks_pcie;
struct msi_desc *msi;
struct pcie_port *pp;
struct dw_pcie *pci;
u32 offset;
msi = irq_data_get_msi_desc(d);
pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
ks_pcie = to_keystone_pcie(pp);
pci = to_dw_pcie_from_pp(pp);
ks_pcie = to_keystone_pcie(pci);
offset = d->irq - irq_linear_revmap(pp->irq_domain, 0);
/* Mask the end point if PVM implemented */
@@ -225,8 +235,9 @@ static const struct irq_domain_ops ks_dw_pcie_msi_domain_ops = {
int ks_dw_pcie_msi_host_init(struct pcie_port *pp, struct msi_controller *chip)
{
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct device *dev = pp->dev;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
struct device *dev = pci->dev;
int i;
pp->irq_domain = irq_domain_add_linear(ks_pcie->msi_intc_np,
@@ -254,8 +265,8 @@ void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie)
void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset)
{
struct pcie_port *pp = &ks_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
u32 pending;
int virq;
@ -285,7 +296,7 @@ irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie)
return IRQ_NONE;
if (status & ERR_FATAL_IRQ)
dev_err(ks_pcie->pp.dev, "fatal error (status %#010x)\n",
dev_err(ks_pcie->pci->dev, "fatal error (status %#010x)\n",
status);
/* Ack the IRQ; status bits are RW1C */
@@ -366,15 +377,16 @@ static void ks_dw_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
{
struct pcie_port *pp = &ks_pcie->pp;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
u32 start = pp->mem->start, end = pp->mem->end;
int i, tr_size;
u32 val;
/* Disable BARs for inbound access */
ks_dw_pcie_set_dbi_mode(ks_pcie);
dw_pcie_writel_rc(pp, PCI_BASE_ADDRESS_0, 0);
dw_pcie_writel_rc(pp, PCI_BASE_ADDRESS_1, 0);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0);
ks_dw_pcie_clear_dbi_mode(ks_pcie);
/* Set outbound translation size per window division */
@@ -415,11 +427,12 @@ static void __iomem *ks_pcie_cfg_setup(struct keystone_pcie *ks_pcie, u8 bus,
unsigned int devfn)
{
u8 device = PCI_SLOT(devfn), function = PCI_FUNC(devfn);
struct pcie_port *pp = &ks_pcie->pp;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
u32 regval;
if (bus == 0)
return pp->dbi_base;
return pci->dbi_base;
regval = (bus << 16) | (device << 8) | function;
@@ -438,25 +451,27 @@ static void __iomem *ks_pcie_cfg_setup(struct keystone_pcie *ks_pcie, u8 bus,
int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 *val)
{
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u8 bus_num = bus->number;
void __iomem *addr;
addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
return dw_pcie_cfg_read(addr + where, size, val);
return dw_pcie_read(addr + where, size, val);
}
int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 val)
{
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u8 bus_num = bus->number;
void __iomem *addr;
addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
return dw_pcie_cfg_write(addr + where, size, val);
return dw_pcie_write(addr + where, size, val);
}
/**
@@ -466,14 +481,15 @@ int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
*/
void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp)
{
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
/* Configure and set up BAR0 */
ks_dw_pcie_set_dbi_mode(ks_pcie);
/* Enable BAR0 */
dw_pcie_writel_rc(pp, PCI_BASE_ADDRESS_0, 1);
dw_pcie_writel_rc(pp, PCI_BASE_ADDRESS_0, SZ_4K - 1);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
ks_dw_pcie_clear_dbi_mode(ks_pcie);
@@ -481,17 +497,17 @@ void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp)
* For BAR0, just setting bus address for inbound writes (MSI) should
* be sufficient. Use physical address to avoid any conflicts.
*/
dw_pcie_writel_rc(pp, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
}
/**
* ks_dw_pcie_link_up() - Check if link up
*/
int ks_dw_pcie_link_up(struct pcie_port *pp)
int ks_dw_pcie_link_up(struct dw_pcie *pci)
{
u32 val;
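/* The link is up once the LTSSM has reached L0, the fully operational
 * link state. */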
val = dw_pcie_readl_rc(pp, DEBUG0);
val = dw_pcie_readl_dbi(pci, DEBUG0);
return (val & LTSSM_STATE_MASK) == LTSSM_STATE_L0;
}
@@ -519,22 +535,23 @@ void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie)
int __init ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie,
struct device_node *msi_intc_np)
{
struct pcie_port *pp = &ks_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res;
/* Index 0 is the config reg. space address */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pp->dbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pp->dbi_base))
return PTR_ERR(pp->dbi_base);
pci->dbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
/*
* We set these to the same value; they are used by the pcie
* rd/wr_other_conf functions
*/
pp->va_cfg0_base = pp->dbi_base + SPACE0_REMOTE_CFG_OFFSET;
pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET;
pp->va_cfg1_base = pp->va_cfg0_base;
/* Index 1 is the application reg. space address */


@@ -44,7 +44,7 @@
#define PCIE_RC_K2E 0xb009
#define PCIE_RC_K2L 0xb00a
#define to_keystone_pcie(x) container_of(x, struct keystone_pcie, pp)
#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
static void quirk_limit_mrrs(struct pci_dev *dev)
{
@@ -88,13 +88,14 @@ DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, quirk_limit_mrrs);
static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie)
{
struct pcie_port *pp = &ks_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
unsigned int retries;
dw_pcie_setup_rc(pp);
if (dw_pcie_link_up(pp)) {
if (dw_pcie_link_up(pci)) {
dev_err(dev, "Link already up\n");
return 0;
}
@@ -102,7 +103,7 @@ static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie)
/* check if the link is up or not */
for (retries = 0; retries < 5; retries++) {
ks_dw_pcie_initiate_link_train(ks_pcie);
if (!dw_pcie_wait_for_link(pp))
if (!dw_pcie_wait_for_link(pci))
return 0;
}
@@ -115,8 +116,8 @@ static void ks_pcie_msi_irq_handler(struct irq_desc *desc)
unsigned int irq = irq_desc_get_irq(desc);
struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc);
u32 offset = irq - ks_pcie->msi_host_irqs[0];
struct pcie_port *pp = &ks_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
struct irq_chip *chip = irq_desc_get_chip(desc);
dev_dbg(dev, "%s, irq %d\n", __func__, irq);
@@ -143,8 +144,8 @@ static void ks_pcie_legacy_irq_handler(struct irq_desc *desc)
{
unsigned int irq = irq_desc_get_irq(desc);
struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc);
struct pcie_port *pp = &ks_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
u32 irq_offset = irq - ks_pcie->legacy_host_irqs[0];
struct irq_chip *chip = irq_desc_get_chip(desc);
@@ -164,7 +165,7 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
char *controller, int *num_irqs)
{
int temp, max_host_irqs, legacy = 1, *host_irqs;
struct device *dev = ks_pcie->pp.dev;
struct device *dev = ks_pcie->pci->dev;
struct device_node *np_pcie = dev->of_node, **np_temp;
if (!strcmp(controller, "msi-interrupt-controller"))
@@ -262,24 +263,25 @@ static int keystone_pcie_fault(unsigned long addr, unsigned int fsr,
static void __init ks_pcie_host_init(struct pcie_port *pp)
{
struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u32 val;
ks_pcie_establish_link(ks_pcie);
ks_dw_pcie_setup_rc_app_regs(ks_pcie);
ks_pcie_setup_interrupts(ks_pcie);
writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8),
pp->dbi_base + PCI_IO_BASE);
pci->dbi_base + PCI_IO_BASE);
/* update the Vendor ID */
writew(ks_pcie->device_id, pp->dbi_base + PCI_DEVICE_ID);
writew(ks_pcie->device_id, pci->dbi_base + PCI_DEVICE_ID);
/* update the DEV_STAT_CTRL to publish right mrrs */
val = readl(pp->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
val = readl(pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
val &= ~PCI_EXP_DEVCTL_READRQ;
/* set the mrrs to 256 bytes */
val |= BIT(12);
writel(val, pp->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
writel(val, pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
/*
* PCIe access errors that result in OCP errors are caught by ARM as
@@ -289,10 +291,9 @@ static void __init ks_pcie_host_init(struct pcie_port *pp)
"Asynchronous external abort");
}
static struct pcie_host_ops keystone_pcie_host_ops = {
static struct dw_pcie_host_ops keystone_pcie_host_ops = {
.rd_other_conf = ks_dw_pcie_rd_other_conf,
.wr_other_conf = ks_dw_pcie_wr_other_conf,
.link_up = ks_dw_pcie_link_up,
.host_init = ks_pcie_host_init,
.msi_set_irq = ks_dw_pcie_msi_set_irq,
.msi_clear_irq = ks_dw_pcie_msi_clear_irq,
@@ -311,8 +312,9 @@ static irqreturn_t pcie_err_irq_handler(int irq, void *priv)
static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
struct platform_device *pdev)
{
struct pcie_port *pp = &ks_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
ret = ks_pcie_get_irq_controller_info(ks_pcie,
@@ -365,6 +367,10 @@ static const struct of_device_id ks_pcie_of_match[] = {
{ },
};
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = ks_dw_pcie_link_up,
};
static int __exit ks_pcie_remove(struct platform_device *pdev)
{
struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
@@ -377,8 +383,8 @@ static int __exit ks_pcie_remove(struct platform_device *pdev)
static int __init ks_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
struct keystone_pcie *ks_pcie;
struct pcie_port *pp;
struct resource *res;
void __iomem *reg_p;
struct phy *phy;
@@ -388,8 +394,12 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
if (!ks_pcie)
return -ENOMEM;
pp = &ks_pcie->pp;
pp->dev = dev;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
/* initialize SerDes Phy if present */
phy = devm_phy_get(dev, "pcie-phy");
@@ -422,6 +432,8 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
platform_set_drvdata(pdev, ks_pcie);
ret = ks_add_pcie_port(ks_pcie, pdev);
if (ret < 0)
goto fail_clk;


@@ -17,7 +17,7 @@
#define MAX_LEGACY_HOST_IRQS 4
struct keystone_pcie {
struct pcie_port pp; /* pp.dbi_base is DT 0th res */
struct dw_pcie *pci;
struct clk *clk;
/* PCI Device ID */
u32 device_id;
@@ -54,10 +54,10 @@ int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 *val);
void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie);
int ks_dw_pcie_link_up(struct pcie_port *pp);
void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie);
void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp);
int ks_dw_pcie_msi_host_init(struct pcie_port *pp,
struct msi_controller *chip);
int ks_dw_pcie_link_up(struct dw_pcie *pci);


@@ -39,24 +39,26 @@ struct ls_pcie_drvdata {
u32 lut_offset;
u32 ltssm_shift;
u32 lut_dbg;
struct pcie_host_ops *ops;
struct dw_pcie_host_ops *ops;
const struct dw_pcie_ops *dw_pcie_ops;
};
struct ls_pcie {
struct pcie_port pp; /* pp.dbi_base is DT regs */
struct dw_pcie *pci;
void __iomem *lut;
struct regmap *scfg;
const struct ls_pcie_drvdata *drvdata;
int index;
};
#define to_ls_pcie(x) container_of(x, struct ls_pcie, pp)
#define to_ls_pcie(x) dev_get_drvdata((x)->dev)
static bool ls_pcie_is_bridge(struct ls_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
u32 header_type;
header_type = ioread8(pcie->pp.dbi_base + PCI_HEADER_TYPE);
header_type = ioread8(pci->dbi_base + PCI_HEADER_TYPE);
header_type &= 0x7f;
return header_type == PCI_HEADER_TYPE_BRIDGE;
@@ -65,29 +67,34 @@ static bool ls_pcie_is_bridge(struct ls_pcie *pcie)
/* Clear multi-function bit */
static void ls_pcie_clear_multifunction(struct ls_pcie *pcie)
{
iowrite8(PCI_HEADER_TYPE_BRIDGE, pcie->pp.dbi_base + PCI_HEADER_TYPE);
struct dw_pcie *pci = pcie->pci;
iowrite8(PCI_HEADER_TYPE_BRIDGE, pci->dbi_base + PCI_HEADER_TYPE);
}
/* Fix class value */
static void ls_pcie_fix_class(struct ls_pcie *pcie)
{
iowrite16(PCI_CLASS_BRIDGE_PCI, pcie->pp.dbi_base + PCI_CLASS_DEVICE);
struct dw_pcie *pci = pcie->pci;
iowrite16(PCI_CLASS_BRIDGE_PCI, pci->dbi_base + PCI_CLASS_DEVICE);
}
/* Drop MSG TLP except for Vendor MSG */
static void ls_pcie_drop_msg_tlp(struct ls_pcie *pcie)
{
u32 val;
struct dw_pcie *pci = pcie->pci;
val = ioread32(pcie->pp.dbi_base + PCIE_STRFMR1);
val = ioread32(pci->dbi_base + PCIE_STRFMR1);
val &= 0xDFFFFFFF;
iowrite32(val, pcie->pp.dbi_base + PCIE_STRFMR1);
iowrite32(val, pci->dbi_base + PCIE_STRFMR1);
}
static int ls1021_pcie_link_up(struct pcie_port *pp)
static int ls1021_pcie_link_up(struct dw_pcie *pci)
{
u32 state;
struct ls_pcie *pcie = to_ls_pcie(pp);
struct ls_pcie *pcie = to_ls_pcie(pci);
if (!pcie->scfg)
return 0;
@@ -103,8 +110,9 @@ static int ls1021_pcie_link_up(struct pcie_port *pp)
static void ls1021_pcie_host_init(struct pcie_port *pp)
{
struct device *dev = pp->dev;
struct ls_pcie *pcie = to_ls_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct ls_pcie *pcie = to_ls_pcie(pci);
struct device *dev = pci->dev;
u32 index[2];
pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node,
@@ -127,9 +135,9 @@ static void ls1021_pcie_host_init(struct pcie_port *pp)
ls_pcie_drop_msg_tlp(pcie);
}
static int ls_pcie_link_up(struct pcie_port *pp)
static int ls_pcie_link_up(struct dw_pcie *pci)
{
struct ls_pcie *pcie = to_ls_pcie(pp);
struct ls_pcie *pcie = to_ls_pcie(pci);
u32 state;
state = (ioread32(pcie->lut + pcie->drvdata->lut_dbg) >>
@@ -144,19 +152,21 @@ static int ls_pcie_link_up(struct pcie_port *pp)
static void ls_pcie_host_init(struct pcie_port *pp)
{
struct ls_pcie *pcie = to_ls_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct ls_pcie *pcie = to_ls_pcie(pci);
iowrite32(1, pcie->pp.dbi_base + PCIE_DBI_RO_WR_EN);
iowrite32(1, pci->dbi_base + PCIE_DBI_RO_WR_EN);
ls_pcie_fix_class(pcie);
ls_pcie_clear_multifunction(pcie);
ls_pcie_drop_msg_tlp(pcie);
iowrite32(0, pcie->pp.dbi_base + PCIE_DBI_RO_WR_EN);
iowrite32(0, pci->dbi_base + PCIE_DBI_RO_WR_EN);
}
static int ls_pcie_msi_host_init(struct pcie_port *pp,
struct msi_controller *chip)
{
struct device *dev = pp->dev;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
struct device_node *msi_node;
@@ -175,20 +185,27 @@ static int ls_pcie_msi_host_init(struct pcie_port *pp,
return 0;
}
static struct pcie_host_ops ls1021_pcie_host_ops = {
.link_up = ls1021_pcie_link_up,
static struct dw_pcie_host_ops ls1021_pcie_host_ops = {
.host_init = ls1021_pcie_host_init,
.msi_host_init = ls_pcie_msi_host_init,
};
static struct pcie_host_ops ls_pcie_host_ops = {
.link_up = ls_pcie_link_up,
static struct dw_pcie_host_ops ls_pcie_host_ops = {
.host_init = ls_pcie_host_init,
.msi_host_init = ls_pcie_msi_host_init,
};
static const struct dw_pcie_ops dw_ls1021_pcie_ops = {
.link_up = ls1021_pcie_link_up,
};
static const struct dw_pcie_ops dw_ls_pcie_ops = {
.link_up = ls_pcie_link_up,
};
static struct ls_pcie_drvdata ls1021_drvdata = {
.ops = &ls1021_pcie_host_ops,
.dw_pcie_ops = &dw_ls1021_pcie_ops,
};
static struct ls_pcie_drvdata ls1043_drvdata = {
@@ -196,6 +213,7 @@ static struct ls_pcie_drvdata ls1043_drvdata = {
.ltssm_shift = 24,
.lut_dbg = 0x7fc,
.ops = &ls_pcie_host_ops,
.dw_pcie_ops = &dw_ls_pcie_ops,
};
static struct ls_pcie_drvdata ls1046_drvdata = {
@@ -203,6 +221,7 @@ static struct ls_pcie_drvdata ls1046_drvdata = {
.ltssm_shift = 24,
.lut_dbg = 0x407fc,
.ops = &ls_pcie_host_ops,
.dw_pcie_ops = &dw_ls_pcie_ops,
};
static struct ls_pcie_drvdata ls2080_drvdata = {
@@ -210,6 +229,7 @@ static struct ls_pcie_drvdata ls2080_drvdata = {
.ltssm_shift = 0,
.lut_dbg = 0x7fc,
.ops = &ls_pcie_host_ops,
.dw_pcie_ops = &dw_ls_pcie_ops,
};
static const struct of_device_id ls_pcie_of_match[] = {
@@ -223,10 +243,13 @@ static const struct of_device_id ls_pcie_of_match[] = {
static int __init ls_add_pcie_port(struct ls_pcie *pcie)
{
struct pcie_port *pp = &pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
int ret;
pp->ops = pcie->drvdata->ops;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
@@ -239,35 +262,36 @@ static int __init ls_add_pcie_port(struct ls_pcie *pcie)
static int __init ls_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
const struct of_device_id *match;
struct dw_pcie *pci;
struct ls_pcie *pcie;
struct pcie_port *pp;
struct resource *dbi_base;
int ret;
match = of_match_device(ls_pcie_of_match, dev);
if (!match)
return -ENODEV;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pp = &pcie->pp;
pp->dev = dev;
pcie->drvdata = match->data;
pp->ops = pcie->drvdata->ops;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pcie->drvdata = of_device_get_match_data(dev);
pci->dev = dev;
pci->ops = pcie->drvdata->dw_pcie_ops;
dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
pcie->pp.dbi_base = devm_ioremap_resource(dev, dbi_base);
if (IS_ERR(pcie->pp.dbi_base))
return PTR_ERR(pcie->pp.dbi_base);
pci->dbi_base = devm_ioremap_resource(dev, dbi_base);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
pcie->lut = pcie->pp.dbi_base + pcie->drvdata->lut_offset;
pcie->lut = pci->dbi_base + pcie->drvdata->lut_offset;
if (!ls_pcie_is_bridge(pcie))
return -ENODEV;
platform_set_drvdata(pdev, pcie);
ret = ls_add_pcie_port(pcie);
if (ret < 0)
return ret;
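
The Layerscape probe above also swaps of_match_device() for of_device_get_match_data(), another pattern applied across the series: the per-SoC ls_pcie_drvdata hangs off the .data field of the matched of_device_id, and the helper returns that pointer directly. A rough userspace sketch of the lookup it wraps (illustrative names, not the kernel implementation):

#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the OF match table machinery. */
struct of_device_id { const char *compatible; const void *data; };

struct drvdata { int ltssm_shift; };

static const struct drvdata ls1043 = { .ltssm_shift = 24 };
static const struct drvdata ls2080 = { .ltssm_shift = 0 };

static const struct of_device_id match_table[] = {
	{ .compatible = "fsl,ls1043a-pcie", .data = &ls1043 },
	{ .compatible = "fsl,ls2080a-pcie", .data = &ls2080 },
	{ NULL, NULL },
};

/* Roughly what of_device_get_match_data() reduces to: find the entry
 * that matches the device and return its .data pointer. */
static const void *get_match_data(const char *compatible)
{
	const struct of_device_id *id;

	for (id = match_table; id->compatible; id++)
		if (!strcmp(id->compatible, compatible))
			return id->data;
	return NULL;
}

int main(void)
{
	const struct drvdata *dd = get_match_data("fsl,ls1043a-pcie");

	printf("ltssm_shift=%d\n", dd ? dd->ltssm_shift : -1);
	return 0;
}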


@@ -29,7 +29,7 @@
#include "pcie-designware.h"
struct armada8k_pcie {
struct pcie_port pp; /* pp.dbi_base is DT ctrl */
struct dw_pcie *pci;
struct clk *clk;
};
@@ -67,76 +67,77 @@ struct armada8k_pcie {
#define AX_USER_DOMAIN_MASK 0x3
#define AX_USER_DOMAIN_SHIFT 4
#define to_armada8k_pcie(x) container_of(x, struct armada8k_pcie, pp)
#define to_armada8k_pcie(x) dev_get_drvdata((x)->dev)
static int armada8k_pcie_link_up(struct pcie_port *pp)
static int armada8k_pcie_link_up(struct dw_pcie *pci)
{
u32 reg;
u32 mask = PCIE_GLB_STS_RDLH_LINK_UP | PCIE_GLB_STS_PHY_LINK_UP;
reg = dw_pcie_readl_rc(pp, PCIE_GLOBAL_STATUS_REG);
reg = dw_pcie_readl_dbi(pci, PCIE_GLOBAL_STATUS_REG);
if ((reg & mask) == mask)
return 1;
dev_dbg(pp->dev, "No link detected (Global-Status: 0x%08x).\n", reg);
dev_dbg(pci->dev, "No link detected (Global-Status: 0x%08x).\n", reg);
return 0;
}
static void armada8k_pcie_establish_link(struct armada8k_pcie *pcie)
{
struct pcie_port *pp = &pcie->pp;
struct dw_pcie *pci = pcie->pci;
u32 reg;
if (!dw_pcie_link_up(pp)) {
if (!dw_pcie_link_up(pci)) {
/* Disable LTSSM state machine to enable configuration */
reg = dw_pcie_readl_rc(pp, PCIE_GLOBAL_CONTROL_REG);
reg = dw_pcie_readl_dbi(pci, PCIE_GLOBAL_CONTROL_REG);
reg &= ~(PCIE_APP_LTSSM_EN);
dw_pcie_writel_rc(pp, PCIE_GLOBAL_CONTROL_REG, reg);
dw_pcie_writel_dbi(pci, PCIE_GLOBAL_CONTROL_REG, reg);
}
/* Set the device to root complex mode */
reg = dw_pcie_readl_rc(pp, PCIE_GLOBAL_CONTROL_REG);
reg = dw_pcie_readl_dbi(pci, PCIE_GLOBAL_CONTROL_REG);
reg &= ~(PCIE_DEVICE_TYPE_MASK << PCIE_DEVICE_TYPE_SHIFT);
reg |= PCIE_DEVICE_TYPE_RC << PCIE_DEVICE_TYPE_SHIFT;
dw_pcie_writel_rc(pp, PCIE_GLOBAL_CONTROL_REG, reg);
dw_pcie_writel_dbi(pci, PCIE_GLOBAL_CONTROL_REG, reg);
/* Set the PCIe master AxCache attributes */
dw_pcie_writel_rc(pp, PCIE_ARCACHE_TRC_REG, ARCACHE_DEFAULT_VALUE);
dw_pcie_writel_rc(pp, PCIE_AWCACHE_TRC_REG, AWCACHE_DEFAULT_VALUE);
dw_pcie_writel_dbi(pci, PCIE_ARCACHE_TRC_REG, ARCACHE_DEFAULT_VALUE);
dw_pcie_writel_dbi(pci, PCIE_AWCACHE_TRC_REG, AWCACHE_DEFAULT_VALUE);
/* Set the PCIe master AxDomain attributes */
reg = dw_pcie_readl_rc(pp, PCIE_ARUSER_REG);
reg = dw_pcie_readl_dbi(pci, PCIE_ARUSER_REG);
reg &= ~(AX_USER_DOMAIN_MASK << AX_USER_DOMAIN_SHIFT);
reg |= DOMAIN_OUTER_SHAREABLE << AX_USER_DOMAIN_SHIFT;
dw_pcie_writel_rc(pp, PCIE_ARUSER_REG, reg);
dw_pcie_writel_dbi(pci, PCIE_ARUSER_REG, reg);
reg = dw_pcie_readl_rc(pp, PCIE_AWUSER_REG);
reg = dw_pcie_readl_dbi(pci, PCIE_AWUSER_REG);
reg &= ~(AX_USER_DOMAIN_MASK << AX_USER_DOMAIN_SHIFT);
reg |= DOMAIN_OUTER_SHAREABLE << AX_USER_DOMAIN_SHIFT;
dw_pcie_writel_rc(pp, PCIE_AWUSER_REG, reg);
dw_pcie_writel_dbi(pci, PCIE_AWUSER_REG, reg);
/* Enable INT A-D interrupts */
reg = dw_pcie_readl_rc(pp, PCIE_GLOBAL_INT_MASK1_REG);
reg = dw_pcie_readl_dbi(pci, PCIE_GLOBAL_INT_MASK1_REG);
reg |= PCIE_INT_A_ASSERT_MASK | PCIE_INT_B_ASSERT_MASK |
PCIE_INT_C_ASSERT_MASK | PCIE_INT_D_ASSERT_MASK;
dw_pcie_writel_rc(pp, PCIE_GLOBAL_INT_MASK1_REG, reg);
dw_pcie_writel_dbi(pci, PCIE_GLOBAL_INT_MASK1_REG, reg);
if (!dw_pcie_link_up(pp)) {
if (!dw_pcie_link_up(pci)) {
/* Configuration done. Start LTSSM */
reg = dw_pcie_readl_rc(pp, PCIE_GLOBAL_CONTROL_REG);
reg = dw_pcie_readl_dbi(pci, PCIE_GLOBAL_CONTROL_REG);
reg |= PCIE_APP_LTSSM_EN;
dw_pcie_writel_rc(pp, PCIE_GLOBAL_CONTROL_REG, reg);
dw_pcie_writel_dbi(pci, PCIE_GLOBAL_CONTROL_REG, reg);
}
/* Wait until the link becomes active again */
if (dw_pcie_wait_for_link(pp))
dev_err(pp->dev, "Link not up after reconfiguration\n");
if (dw_pcie_wait_for_link(pci))
dev_err(pci->dev, "Link not up after reconfiguration\n");
}
static void armada8k_pcie_host_init(struct pcie_port *pp)
{
struct armada8k_pcie *pcie = to_armada8k_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct armada8k_pcie *pcie = to_armada8k_pcie(pci);
dw_pcie_setup_rc(pp);
armada8k_pcie_establish_link(pcie);
@@ -145,7 +146,7 @@ static void armada8k_pcie_host_init(struct pcie_port *pp)
static irqreturn_t armada8k_pcie_irq_handler(int irq, void *arg)
{
struct armada8k_pcie *pcie = arg;
struct pcie_port *pp = &pcie->pp;
struct dw_pcie *pci = pcie->pci;
u32 val;
/*
@@ -153,21 +154,21 @@ static irqreturn_t armada8k_pcie_irq_handler(int irq, void *arg)
* PCI device. However, they are also latched into the PCIe
* controller, so we simply discard them.
*/
val = dw_pcie_readl_rc(pp, PCIE_GLOBAL_INT_CAUSE1_REG);
dw_pcie_writel_rc(pp, PCIE_GLOBAL_INT_CAUSE1_REG, val);
val = dw_pcie_readl_dbi(pci, PCIE_GLOBAL_INT_CAUSE1_REG);
dw_pcie_writel_dbi(pci, PCIE_GLOBAL_INT_CAUSE1_REG, val);
return IRQ_HANDLED;
}
static struct pcie_host_ops armada8k_pcie_host_ops = {
.link_up = armada8k_pcie_link_up,
static struct dw_pcie_host_ops armada8k_pcie_host_ops = {
.host_init = armada8k_pcie_host_init,
};
static int armada8k_add_pcie_port(struct armada8k_pcie *pcie,
struct platform_device *pdev)
{
struct pcie_port *pp = &pcie->pp;
struct dw_pcie *pci = pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
@@ -196,10 +197,14 @@ static int armada8k_add_pcie_port(struct armada8k_pcie *pcie,
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = armada8k_pcie_link_up,
};
static int armada8k_pcie_probe(struct platform_device *pdev)
{
struct dw_pcie *pci;
struct armada8k_pcie *pcie;
struct pcie_port *pp;
struct device *dev = &pdev->dev;
struct resource *base;
int ret;
@@ -208,24 +213,30 @@ static int armada8k_pcie_probe(struct platform_device *pdev)
if (!pcie)
return -ENOMEM;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pcie->clk = devm_clk_get(dev, NULL);
if (IS_ERR(pcie->clk))
return PTR_ERR(pcie->clk);
clk_prepare_enable(pcie->clk);
pp = &pcie->pp;
pp->dev = dev;
/* Get the dw-pcie unit configuration/control registers base. */
base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl");
pp->dbi_base = devm_ioremap_resource(dev, base);
if (IS_ERR(pp->dbi_base)) {
pci->dbi_base = devm_ioremap_resource(dev, base);
if (IS_ERR(pci->dbi_base)) {
dev_err(dev, "couldn't remap regs base %p\n", base);
ret = PTR_ERR(pp->dbi_base);
ret = PTR_ERR(pci->dbi_base);
goto fail;
}
platform_set_drvdata(pdev, pcie);
ret = armada8k_add_pcie_port(pcie, pdev);
if (ret)
goto fail;


@@ -24,10 +24,10 @@
#include "pcie-designware.h"
#define to_artpec6_pcie(x) container_of(x, struct artpec6_pcie, pp)
#define to_artpec6_pcie(x) dev_get_drvdata((x)->dev)
struct artpec6_pcie {
struct pcie_port pp; /* pp.dbi_base is DT dbi */
struct dw_pcie *pci;
struct regmap *regmap; /* DT axis,syscon-pcie */
void __iomem *phy_base; /* DT phy */
};
@@ -80,7 +80,8 @@ static void artpec6_pcie_writel(struct artpec6_pcie *artpec6_pcie, u32 offset, u
static int artpec6_pcie_establish_link(struct artpec6_pcie *artpec6_pcie)
{
struct pcie_port *pp = &artpec6_pcie->pp;
struct dw_pcie *pci = artpec6_pcie->pci;
struct pcie_port *pp = &pci->pp;
u32 val;
unsigned int retries;
@@ -139,7 +140,7 @@ static int artpec6_pcie_establish_link(struct artpec6_pcie *artpec6_pcie)
* Enable writing to config regs. This is required as the Synopsys
* driver changes the class code. That register needs DBI write enable.
*/
dw_pcie_writel_rc(pp, MISC_CONTROL_1_OFF, DBI_RO_WR_EN);
dw_pcie_writel_dbi(pci, MISC_CONTROL_1_OFF, DBI_RO_WR_EN);
pp->io_base &= ARTPEC6_CPU_TO_BUS_ADDR;
pp->mem_base &= ARTPEC6_CPU_TO_BUS_ADDR;
@@ -155,19 +156,20 @@ static int artpec6_pcie_establish_link(struct artpec6_pcie *artpec6_pcie)
artpec6_pcie_writel(artpec6_pcie, PCIECFG, val);
/* check if the link is up or not */
if (!dw_pcie_wait_for_link(pp))
if (!dw_pcie_wait_for_link(pci))
return 0;
dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
dw_pcie_readl_rc(pp, PCIE_PHY_DEBUG_R0),
dw_pcie_readl_rc(pp, PCIE_PHY_DEBUG_R1));
dev_dbg(pci->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R0),
dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1));
return -ETIMEDOUT;
}
static void artpec6_pcie_enable_interrupts(struct artpec6_pcie *artpec6_pcie)
{
struct pcie_port *pp = &artpec6_pcie->pp;
struct dw_pcie *pci = artpec6_pcie->pci;
struct pcie_port *pp = &pci->pp;
if (IS_ENABLED(CONFIG_PCI_MSI))
dw_pcie_msi_init(pp);
@@ -175,20 +177,22 @@ static void artpec6_pcie_enable_interrupts(struct artpec6_pcie *artpec6_pcie)
static void artpec6_pcie_host_init(struct pcie_port *pp)
{
struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pp);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
artpec6_pcie_establish_link(artpec6_pcie);
artpec6_pcie_enable_interrupts(artpec6_pcie);
}
static struct pcie_host_ops artpec6_pcie_host_ops = {
static struct dw_pcie_host_ops artpec6_pcie_host_ops = {
.host_init = artpec6_pcie_host_init,
};
static irqreturn_t artpec6_pcie_msi_handler(int irq, void *arg)
{
struct artpec6_pcie *artpec6_pcie = arg;
struct pcie_port *pp = &artpec6_pcie->pp;
struct dw_pcie *pci = artpec6_pcie->pci;
struct pcie_port *pp = &pci->pp;
return dw_handle_msi_irq(pp);
}
@@ -196,8 +200,9 @@ static irqreturn_t artpec6_pcie_msi_handler(int irq, void *arg)
static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie,
struct platform_device *pdev)
{
struct pcie_port *pp = &artpec6_pcie->pp;
struct device *dev = pp->dev;
struct dw_pcie *pci = artpec6_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
int ret;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
@@ -232,8 +237,8 @@ static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie,
static int artpec6_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
struct artpec6_pcie *artpec6_pcie;
struct pcie_port *pp;
struct resource *dbi_base;
struct resource *phy_base;
int ret;
@@ -242,13 +247,16 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
if (!artpec6_pcie)
return -ENOMEM;
pp = &artpec6_pcie->pp;
pp->dev = dev;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
pp->dbi_base = devm_ioremap_resource(dev, dbi_base);
if (IS_ERR(pp->dbi_base))
return PTR_ERR(pp->dbi_base);
pci->dbi_base = devm_ioremap_resource(dev, dbi_base);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
phy_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "phy");
artpec6_pcie->phy_base = devm_ioremap_resource(dev, phy_base);
@@ -261,6 +269,8 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
if (IS_ERR(artpec6_pcie->regmap))
return PTR_ERR(artpec6_pcie->regmap);
platform_set_drvdata(pdev, artpec6_pcie);
ret = artpec6_add_pcie_port(artpec6_pcie, pdev);
if (ret < 0)
return ret;


@@ -11,239 +11,38 @@
* published by the Free Software Foundation.
*/
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci_regs.h>
#include <linux/platform_device.h>
#include <linux/types.h>
#include <linux/delay.h>
#include "pcie-designware.h"
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_USLEEP_MIN 90000
#define LINK_WAIT_USLEEP_MAX 100000
/* Parameters for the waiting for iATU enabled routine */
#define LINK_WAIT_MAX_IATU_RETRIES 5
#define LINK_WAIT_IATU_MIN 9000
#define LINK_WAIT_IATU_MAX 10000
/* Synopsys-specific PCIe configuration registers */
#define PCIE_PORT_LINK_CONTROL 0x710
#define PORT_LINK_MODE_MASK (0x3f << 16)
#define PORT_LINK_MODE_1_LANES (0x1 << 16)
#define PORT_LINK_MODE_2_LANES (0x3 << 16)
#define PORT_LINK_MODE_4_LANES (0x7 << 16)
#define PORT_LINK_MODE_8_LANES (0xf << 16)
#define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C
#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17)
#define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8)
#define PORT_LOGIC_LINK_WIDTH_1_LANES (0x1 << 8)
#define PORT_LOGIC_LINK_WIDTH_2_LANES (0x2 << 8)
#define PORT_LOGIC_LINK_WIDTH_4_LANES (0x4 << 8)
#define PORT_LOGIC_LINK_WIDTH_8_LANES (0x8 << 8)
#define PCIE_MSI_ADDR_LO 0x820
#define PCIE_MSI_ADDR_HI 0x824
#define PCIE_MSI_INTR0_ENABLE 0x828
#define PCIE_MSI_INTR0_MASK 0x82C
#define PCIE_MSI_INTR0_STATUS 0x830
#define PCIE_ATU_VIEWPORT 0x900
#define PCIE_ATU_REGION_INBOUND (0x1 << 31)
#define PCIE_ATU_REGION_OUTBOUND (0x0 << 31)
#define PCIE_ATU_REGION_INDEX2 (0x2 << 0)
#define PCIE_ATU_REGION_INDEX1 (0x1 << 0)
#define PCIE_ATU_REGION_INDEX0 (0x0 << 0)
#define PCIE_ATU_CR1 0x904
#define PCIE_ATU_TYPE_MEM (0x0 << 0)
#define PCIE_ATU_TYPE_IO (0x2 << 0)
#define PCIE_ATU_TYPE_CFG0 (0x4 << 0)
#define PCIE_ATU_TYPE_CFG1 (0x5 << 0)
#define PCIE_ATU_CR2 0x908
#define PCIE_ATU_ENABLE (0x1 << 31)
#define PCIE_ATU_BAR_MODE_ENABLE (0x1 << 30)
#define PCIE_ATU_LOWER_BASE 0x90C
#define PCIE_ATU_UPPER_BASE 0x910
#define PCIE_ATU_LIMIT 0x914
#define PCIE_ATU_LOWER_TARGET 0x918
#define PCIE_ATU_BUS(x) (((x) & 0xff) << 24)
#define PCIE_ATU_DEV(x) (((x) & 0x1f) << 19)
#define PCIE_ATU_FUNC(x) (((x) & 0x7) << 16)
#define PCIE_ATU_UPPER_TARGET 0x91C
/*
* iATU Unroll-specific register definitions
* From 4.80 core version the address translation will be made by unroll
*/
#define PCIE_ATU_UNR_REGION_CTRL1 0x00
#define PCIE_ATU_UNR_REGION_CTRL2 0x04
#define PCIE_ATU_UNR_LOWER_BASE 0x08
#define PCIE_ATU_UNR_UPPER_BASE 0x0C
#define PCIE_ATU_UNR_LIMIT 0x10
#define PCIE_ATU_UNR_LOWER_TARGET 0x14
#define PCIE_ATU_UNR_UPPER_TARGET 0x18
/* Register address builder */
#define PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(region) ((0x3 << 20) | (region << 9))
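/* Each outbound region gets a 512-byte ((region) << 9) register window
 * inside the unroll space that starts at 0x300000 (0x3 << 20). */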
/* PCIe Port Logic registers */
#define PLR_OFFSET 0x700
#define PCIE_PHY_DEBUG_R1 (PLR_OFFSET + 0x2c)
#define PCIE_PHY_DEBUG_R1_LINK_UP (0x1 << 4)
#define PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING (0x1 << 29)
static struct pci_ops dw_pcie_ops;
int dw_pcie_cfg_read(void __iomem *addr, int size, u32 *val)
{
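/* Natural-alignment check: for the supported sizes (1, 2 and 4),
 * (addr & (size - 1)) is nonzero exactly when addr is misaligned for
 * that access width. */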
if ((uintptr_t)addr & (size - 1)) {
*val = 0;
return PCIBIOS_BAD_REGISTER_NUMBER;
}
if (size == 4)
*val = readl(addr);
else if (size == 2)
*val = readw(addr);
else if (size == 1)
*val = readb(addr);
else {
*val = 0;
return PCIBIOS_BAD_REGISTER_NUMBER;
}
return PCIBIOS_SUCCESSFUL;
}
int dw_pcie_cfg_write(void __iomem *addr, int size, u32 val)
{
if ((uintptr_t)addr & (size - 1))
return PCIBIOS_BAD_REGISTER_NUMBER;
if (size == 4)
writel(val, addr);
else if (size == 2)
writew(val, addr);
else if (size == 1)
writeb(val, addr);
else
return PCIBIOS_BAD_REGISTER_NUMBER;
return PCIBIOS_SUCCESSFUL;
}
u32 dw_pcie_readl_rc(struct pcie_port *pp, u32 reg)
{
if (pp->ops->readl_rc)
return pp->ops->readl_rc(pp, reg);
return readl(pp->dbi_base + reg);
}
void dw_pcie_writel_rc(struct pcie_port *pp, u32 reg, u32 val)
{
if (pp->ops->writel_rc)
pp->ops->writel_rc(pp, reg, val);
else
writel(val, pp->dbi_base + reg);
}
static u32 dw_pcie_readl_unroll(struct pcie_port *pp, u32 index, u32 reg)
{
u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index);
return dw_pcie_readl_rc(pp, offset + reg);
}
static void dw_pcie_writel_unroll(struct pcie_port *pp, u32 index, u32 reg,
u32 val)
{
u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index);
dw_pcie_writel_rc(pp, offset + reg, val);
}
static int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
u32 *val)
{
struct dw_pcie *pci;
if (pp->ops->rd_own_conf)
return pp->ops->rd_own_conf(pp, where, size, val);
return dw_pcie_cfg_read(pp->dbi_base + where, size, val);
pci = to_dw_pcie_from_pp(pp);
return dw_pcie_read(pci->dbi_base + where, size, val);
}
static int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
u32 val)
{
struct dw_pcie *pci;
if (pp->ops->wr_own_conf)
return pp->ops->wr_own_conf(pp, where, size, val);
return dw_pcie_cfg_write(pp->dbi_base + where, size, val);
}
static void dw_pcie_prog_outbound_atu(struct pcie_port *pp, int index,
int type, u64 cpu_addr, u64 pci_addr, u32 size)
{
u32 retries, val;
if (pp->iatu_unroll_enabled) {
dw_pcie_writel_unroll(pp, index, PCIE_ATU_UNR_LOWER_BASE,
lower_32_bits(cpu_addr));
dw_pcie_writel_unroll(pp, index, PCIE_ATU_UNR_UPPER_BASE,
upper_32_bits(cpu_addr));
dw_pcie_writel_unroll(pp, index, PCIE_ATU_UNR_LIMIT,
lower_32_bits(cpu_addr + size - 1));
dw_pcie_writel_unroll(pp, index, PCIE_ATU_UNR_LOWER_TARGET,
lower_32_bits(pci_addr));
dw_pcie_writel_unroll(pp, index, PCIE_ATU_UNR_UPPER_TARGET,
upper_32_bits(pci_addr));
dw_pcie_writel_unroll(pp, index, PCIE_ATU_UNR_REGION_CTRL1,
type);
dw_pcie_writel_unroll(pp, index, PCIE_ATU_UNR_REGION_CTRL2,
PCIE_ATU_ENABLE);
} else {
dw_pcie_writel_rc(pp, PCIE_ATU_VIEWPORT,
PCIE_ATU_REGION_OUTBOUND | index);
dw_pcie_writel_rc(pp, PCIE_ATU_LOWER_BASE,
lower_32_bits(cpu_addr));
dw_pcie_writel_rc(pp, PCIE_ATU_UPPER_BASE,
upper_32_bits(cpu_addr));
dw_pcie_writel_rc(pp, PCIE_ATU_LIMIT,
lower_32_bits(cpu_addr + size - 1));
dw_pcie_writel_rc(pp, PCIE_ATU_LOWER_TARGET,
lower_32_bits(pci_addr));
dw_pcie_writel_rc(pp, PCIE_ATU_UPPER_TARGET,
upper_32_bits(pci_addr));
dw_pcie_writel_rc(pp, PCIE_ATU_CR1, type);
dw_pcie_writel_rc(pp, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
}
/*
* Make sure ATU enable takes effect before any subsequent config
* and I/O accesses.
*/
for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) {
if (pp->iatu_unroll_enabled)
val = dw_pcie_readl_unroll(pp, index,
PCIE_ATU_UNR_REGION_CTRL2);
else
val = dw_pcie_readl_rc(pp, PCIE_ATU_CR2);
if (val == PCIE_ATU_ENABLE)
return;
usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
}
dev_err(pp->dev, "iATU is not being enabled\n");
}
static struct irq_chip dw_msi_irq_chip = {
@ -263,16 +62,15 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
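	/* Scan each MSI controller's 32-bit status word and dispatch every pending vector to its mapped IRQ. */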
for (i = 0; i < MAX_MSI_CTRLS; i++) {
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4,
				    (u32 *)&val);
		if (val) {
			ret = IRQ_HANDLED;
			pos = 0;
			while ((pos = find_next_bit(&val, 32, pos)) != 32) {
				irq = irq_find_mapping(pp->irq_domain,
						       i * 32 + pos);
				dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS +
							i * 12, 4, 1 << pos);
generic_handle_irq(irq);
pos++;
}
@ -338,8 +136,9 @@ static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
{
int irq, pos0, i;
	struct pcie_port *pp;

	pp = (struct pcie_port *)msi_desc_to_pci_sysdata(desc);
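	/* Multi-MSI grants must be a naturally aligned power-of-two block, hence the order-based region allocation. */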
pos0 = bitmap_find_free_region(pp->msi_irq_in_use, MAX_MSI_IRQS,
order_base_2(no_irqs));
if (pos0 < 0)
@ -401,7 +200,7 @@ static void dw_msi_setup_msg(struct pcie_port *pp, unsigned int irq, u32 pos)
}
static int dw_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev,
			    struct msi_desc *desc)
{
int irq, pos;
struct pcie_port *pp = pdev->bus->sysdata;
@ -449,7 +248,7 @@ static void dw_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
{
struct irq_data *data = irq_get_irq_data(irq);
struct msi_desc *msi = irq_data_get_msi_desc(data);
	struct pcie_port *pp = (struct pcie_port *)msi_desc_to_pci_sysdata(msi);
clear_irq_range(pp, irq, 1, data->hwirq);
}
@ -460,38 +259,8 @@ static struct msi_controller dw_pcie_msi_chip = {
.teardown_irq = dw_msi_teardown_irq,
};
int dw_pcie_wait_for_link(struct pcie_port *pp)
{
int retries;
/* check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (dw_pcie_link_up(pp)) {
dev_info(pp->dev, "link up\n");
return 0;
}
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
}
dev_err(pp->dev, "phy link never came up\n");
return -ETIMEDOUT;
}
int dw_pcie_link_up(struct pcie_port *pp)
{
u32 val;
if (pp->ops->link_up)
return pp->ops->link_up(pp);
val = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1);
return ((val & PCIE_PHY_DEBUG_R1_LINK_UP) &&
(!(val & PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING)));
}
static int dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
			   irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &dw_msi_irq_chip, handle_simple_irq);
irq_set_chip_data(irq, domain->host_data);
@ -503,21 +272,12 @@ static const struct irq_domain_ops msi_domain_ops = {
.map = dw_pcie_msi_map,
};
static u8 dw_pcie_iatu_unroll_enabled(struct pcie_port *pp)
{
u32 val;
val = dw_pcie_readl_rc(pp, PCIE_ATU_VIEWPORT);
if (val == 0xffffffff)
return 1;
return 0;
}
int dw_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
struct platform_device *pdev = to_platform_device(dev);
struct pci_bus *bus, *child;
struct resource *cfg_res;
int i, ret;
@ -526,19 +286,19 @@ int dw_pcie_host_init(struct pcie_port *pp)
cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
if (cfg_res) {
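		/* Split the "config" window in half: the lower half serves type 0 and the upper half type 1 accesses. */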
		pp->cfg0_size = resource_size(cfg_res) / 2;
		pp->cfg1_size = resource_size(cfg_res) / 2;
pp->cfg0_base = cfg_res->start;
pp->cfg1_base = cfg_res->start + pp->cfg0_size;
} else if (!pp->va_cfg0_base) {
dev_err(pp->dev, "missing *config* reg space\n");
dev_err(dev, "missing *config* reg space\n");
}
ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &res, &pp->io_base);
if (ret)
return ret;
	ret = devm_request_pci_bus_resources(dev, &res);
if (ret)
goto error;
@ -548,7 +308,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
case IORESOURCE_IO:
ret = pci_remap_iospace(win->res, pp->io_base);
if (ret) {
				dev_warn(dev, "error %d: failed to map resource %pR\n",
ret, win->res);
resource_list_destroy_entry(win);
} else {
@ -566,8 +326,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
break;
case 0:
pp->cfg = win->res;
			pp->cfg0_size = resource_size(pp->cfg) / 2;
			pp->cfg1_size = resource_size(pp->cfg) / 2;
pp->cfg0_base = pp->cfg->start;
pp->cfg1_base = pp->cfg->start + pp->cfg0_size;
break;
@ -577,11 +337,11 @@ int dw_pcie_host_init(struct pcie_port *pp)
}
}
	if (!pci->dbi_base) {
		pci->dbi_base = devm_ioremap(dev, pp->cfg->start,
					     resource_size(pp->cfg));
		if (!pci->dbi_base) {
			dev_err(dev, "error with ioremap\n");
ret = -ENOMEM;
goto error;
}
@ -590,40 +350,36 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->mem_base = pp->mem->start;
if (!pp->va_cfg0_base) {
		pp->va_cfg0_base = devm_ioremap(dev, pp->cfg0_base,
pp->cfg0_size);
if (!pp->va_cfg0_base) {
dev_err(pp->dev, "error with ioremap in function\n");
dev_err(dev, "error with ioremap in function\n");
ret = -ENOMEM;
goto error;
}
}
if (!pp->va_cfg1_base) {
		pp->va_cfg1_base = devm_ioremap(dev, pp->cfg1_base,
pp->cfg1_size);
if (!pp->va_cfg1_base) {
dev_err(pp->dev, "error with ioremap\n");
dev_err(dev, "error with ioremap\n");
ret = -ENOMEM;
goto error;
}
}
ret = of_property_read_u32(np, "num-lanes", &pp->lanes);
ret = of_property_read_u32(np, "num-viewport", &pci->num_viewport);
if (ret)
pp->lanes = 0;
ret = of_property_read_u32(np, "num-viewport", &pp->num_viewport);
if (ret)
pp->num_viewport = 2;
pci->num_viewport = 2;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
if (!pp->ops->msi_host_init) {
			pp->irq_domain = irq_domain_add_linear(dev->of_node,
MAX_MSI_IRQS, &msi_domain_ops,
&dw_pcie_msi_chip);
if (!pp->irq_domain) {
dev_err(pp->dev, "irq domain init failed\n");
dev_err(dev, "irq domain init failed\n");
ret = -ENXIO;
goto error;
}
@ -642,12 +398,12 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->root_bus_nr = pp->busn->start;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
		bus = pci_scan_root_bus_msi(dev, pp->root_bus_nr,
&dw_pcie_ops, pp, &res,
&dw_pcie_msi_chip);
		dw_pcie_msi_chip.dev = dev;
} else
		bus = pci_scan_root_bus(dev, pp->root_bus_nr, &dw_pcie_ops,
pp, &res);
if (!bus) {
ret = -ENOMEM;
@ -677,12 +433,13 @@ error:
}
static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
				 u32 devfn, int where, int size, u32 *val)
{
int ret, type;
u32 busdev, cfg_size;
u64 cpu_addr;
void __iomem *va_cfg_base;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
if (pp->ops->rd_other_conf)
return pp->ops->rd_other_conf(pp, bus, devfn, where, size, val);
@ -702,12 +459,12 @@ static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
va_cfg_base = pp->va_cfg1_base;
}
	dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1,
				  type, cpu_addr,
				  busdev, cfg_size);
	ret = dw_pcie_read(va_cfg_base + where, size, val);
	if (pci->num_viewport <= 2)
		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1,
PCIE_ATU_TYPE_IO, pp->io_base,
pp->io_bus_addr, pp->io_size);
@ -715,12 +472,13 @@ static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
}
static int dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
				 u32 devfn, int where, int size, u32 val)
{
int ret, type;
u32 busdev, cfg_size;
u64 cpu_addr;
void __iomem *va_cfg_base;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
if (pp->ops->wr_other_conf)
return pp->ops->wr_other_conf(pp, bus, devfn, where, size, val);
@ -740,12 +498,12 @@ static int dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
va_cfg_base = pp->va_cfg1_base;
}
	dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1,
				  type, cpu_addr,
				  busdev, cfg_size);
	ret = dw_pcie_write(va_cfg_base + where, size, val);
	if (pci->num_viewport <= 2)
		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1,
PCIE_ATU_TYPE_IO, pp->io_base,
pp->io_bus_addr, pp->io_size);
@ -755,9 +513,11 @@ static int dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
static int dw_pcie_valid_device(struct pcie_port *pp, struct pci_bus *bus,
int dev)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
/* If there is no link, then there is no device */
if (bus->number != pp->root_bus_nr) {
		if (!dw_pcie_link_up(pci))
return 0;
}
@ -769,7 +529,7 @@ static int dw_pcie_valid_device(struct pcie_port *pp, struct pci_bus *bus,
}
static int dw_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
			   int size, u32 *val)
{
struct pcie_port *pp = bus->sysdata;
@ -785,7 +545,7 @@ static int dw_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
}
static int dw_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
			   int where, int size, u32 val)
{
struct pcie_port *pp = bus->sysdata;
@ -803,73 +563,46 @@ static struct pci_ops dw_pcie_ops = {
.write = dw_pcie_wr_conf,
};
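/*
 * On cores with iATU unroll support (4.80 and later), the legacy
 * viewport register does not exist and reads back as all ones.
 */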
static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
{
	u32 val;

	val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT);
	if (val == 0xffffffff)
		return 1;

	return 0;
}
void dw_pcie_setup_rc(struct pcie_port *pp)
{
u32 val;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
dw_pcie_setup(pci);
/* setup RC BARs */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
/* setup interrupt pins */
	val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
val &= 0xffff00ff;
val |= 0x00000100;
	dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
/* setup bus numbers */
	val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
val &= 0xff000000;
val |= 0x00010100;
	dw_pcie_writel_dbi(pci, PCI_PRIMARY_BUS, val);
/* setup command register */
	val = dw_pcie_readl_dbi(pci, PCI_COMMAND);
val &= 0xffff0000;
val |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
PCI_COMMAND_MASTER | PCI_COMMAND_SERR;
	dw_pcie_writel_dbi(pci, PCI_COMMAND, val);
/*
* If the platform provides ->rd_other_conf, it means the platform
@ -878,15 +611,15 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
*/
if (!pp->ops->rd_other_conf) {
/* get iATU unroll support */
		pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci);
		dev_dbg(pci->dev, "iATU unroll: %s\n",
			pci->iatu_unroll_enabled ? "enabled" : "disabled");
		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX0,
PCIE_ATU_TYPE_MEM, pp->mem_base,
pp->mem_bus_addr, pp->mem_size);
		if (pci->num_viewport > 2)
			dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX2,
PCIE_ATU_TYPE_IO, pp->io_base,
pp->io_bus_addr, pp->io_size);
}


@ -25,7 +25,7 @@
#include "pcie-designware.h"
struct dw_plat_pcie {
	struct dw_pcie *pci;
};
static irqreturn_t dw_plat_pcie_msi_irq_handler(int irq, void *arg)
@ -37,21 +37,23 @@ static irqreturn_t dw_plat_pcie_msi_irq_handler(int irq, void *arg)
static void dw_plat_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
dw_pcie_setup_rc(pp);
	dw_pcie_wait_for_link(pci);
if (IS_ENABLED(CONFIG_PCI_MSI))
dw_pcie_msi_init(pp);
}
static struct dw_pcie_host_ops dw_plat_pcie_host_ops = {
.host_init = dw_plat_pcie_host_init,
};
static int dw_plat_add_pcie_port(struct pcie_port *pp,
struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
int ret;
pp->irq = platform_get_irq(pdev, 1);
@ -88,7 +90,7 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_plat_pcie *dw_plat_pcie;
	struct dw_pcie *pci;
struct resource *res; /* Resource from DT */
int ret;
@ -96,15 +98,20 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
if (!dw_plat_pcie)
return -ENOMEM;
	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
	if (!pci)
		return -ENOMEM;

	pci->dev = dev;

	dw_plat_pcie->pci = pci;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pci->dbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
platform_set_drvdata(pdev, dw_plat_pcie);
ret = dw_plat_add_pcie_port(&pci->pp, pdev);
if (ret < 0)
return ret;


@ -0,0 +1,233 @@
/*
* Synopsys Designware PCIe host controller driver
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Jingoo Han <jg1.han@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/of.h>
#include <linux/types.h>
#include "pcie-designware.h"
/* PCIe Port Logic registers */
#define PLR_OFFSET 0x700
#define PCIE_PHY_DEBUG_R1 (PLR_OFFSET + 0x2c)
#define PCIE_PHY_DEBUG_R1_LINK_UP (0x1 << 4)
#define PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING (0x1 << 29)
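/*
 * Generic byte/word/dword accessors; the address must be naturally
 * aligned to 'size', otherwise PCIBIOS_BAD_REGISTER_NUMBER is returned.
 */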
int dw_pcie_read(void __iomem *addr, int size, u32 *val)
{
if ((uintptr_t)addr & (size - 1)) {
*val = 0;
return PCIBIOS_BAD_REGISTER_NUMBER;
}
if (size == 4) {
*val = readl(addr);
} else if (size == 2) {
*val = readw(addr);
} else if (size == 1) {
*val = readb(addr);
} else {
*val = 0;
return PCIBIOS_BAD_REGISTER_NUMBER;
}
return PCIBIOS_SUCCESSFUL;
}
int dw_pcie_write(void __iomem *addr, int size, u32 val)
{
if ((uintptr_t)addr & (size - 1))
return PCIBIOS_BAD_REGISTER_NUMBER;
if (size == 4)
writel(val, addr);
else if (size == 2)
writew(val, addr);
else if (size == 1)
writeb(val, addr);
else
return PCIBIOS_BAD_REGISTER_NUMBER;
return PCIBIOS_SUCCESSFUL;
}
u32 dw_pcie_readl_dbi(struct dw_pcie *pci, u32 reg)
{
if (pci->ops->readl_dbi)
return pci->ops->readl_dbi(pci, reg);
return readl(pci->dbi_base + reg);
}
void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
{
if (pci->ops->writel_dbi)
pci->ops->writel_dbi(pci, reg, val);
else
writel(val, pci->dbi_base + reg);
}
static u32 dw_pcie_readl_unroll(struct dw_pcie *pci, u32 index, u32 reg)
{
u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index);
return dw_pcie_readl_dbi(pci, offset + reg);
}
static void dw_pcie_writel_unroll(struct dw_pcie *pci, u32 index, u32 reg,
u32 val)
{
u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index);
dw_pcie_writel_dbi(pci, offset + reg, val);
}
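/*
 * Program outbound iATU region 'index': translate a 'size'-byte CPU
 * window at 'cpu_addr' to 'pci_addr' on the bus, via the unrolled
 * per-region registers or the legacy viewport, then poll until the
 * enable bit reads back.
 */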
void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
u64 cpu_addr, u64 pci_addr, u32 size)
{
u32 retries, val;
if (pci->iatu_unroll_enabled) {
dw_pcie_writel_unroll(pci, index, PCIE_ATU_UNR_LOWER_BASE,
lower_32_bits(cpu_addr));
dw_pcie_writel_unroll(pci, index, PCIE_ATU_UNR_UPPER_BASE,
upper_32_bits(cpu_addr));
dw_pcie_writel_unroll(pci, index, PCIE_ATU_UNR_LIMIT,
lower_32_bits(cpu_addr + size - 1));
dw_pcie_writel_unroll(pci, index, PCIE_ATU_UNR_LOWER_TARGET,
lower_32_bits(pci_addr));
dw_pcie_writel_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET,
upper_32_bits(pci_addr));
dw_pcie_writel_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1,
type);
dw_pcie_writel_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
PCIE_ATU_ENABLE);
} else {
dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT,
PCIE_ATU_REGION_OUTBOUND | index);
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_BASE,
lower_32_bits(cpu_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_BASE,
upper_32_bits(cpu_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT,
lower_32_bits(cpu_addr + size - 1));
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET,
lower_32_bits(pci_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET,
upper_32_bits(pci_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type);
dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
}
/*
* Make sure ATU enable takes effect before any subsequent config
* and I/O accesses.
*/
for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) {
if (pci->iatu_unroll_enabled)
val = dw_pcie_readl_unroll(pci, index,
PCIE_ATU_UNR_REGION_CTRL2);
else
val = dw_pcie_readl_dbi(pci, PCIE_ATU_CR2);
if (val == PCIE_ATU_ENABLE)
return;
usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
}
dev_err(pci->dev, "iATU is not being enabled\n");
}
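/* Polls dw_pcie_link_up() roughly every 100 ms; with 10 retries this gives about a one-second timeout. */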
int dw_pcie_wait_for_link(struct dw_pcie *pci)
{
int retries;
/* check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (dw_pcie_link_up(pci)) {
dev_info(pci->dev, "link up\n");
return 0;
}
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
}
dev_err(pci->dev, "phy link never came up\n");
return -ETIMEDOUT;
}
int dw_pcie_link_up(struct dw_pcie *pci)
{
u32 val;
if (pci->ops->link_up)
return pci->ops->link_up(pci);
val = readl(pci->dbi_base + PCIE_PHY_DEBUG_R1);
return ((val & PCIE_PHY_DEBUG_R1_LINK_UP) &&
(!(val & PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING)));
}
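/*
 * Core-level link setup: reads "num-lanes" from DT and programs the
 * port link mode and link-width speed control registers accordingly.
 */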
void dw_pcie_setup(struct dw_pcie *pci)
{
int ret;
u32 val;
u32 lanes;
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
ret = of_property_read_u32(np, "num-lanes", &lanes);
if (ret)
lanes = 0;
/* set the number of lanes */
val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
val &= ~PORT_LINK_MODE_MASK;
switch (lanes) {
case 1:
val |= PORT_LINK_MODE_1_LANES;
break;
case 2:
val |= PORT_LINK_MODE_2_LANES;
break;
case 4:
val |= PORT_LINK_MODE_4_LANES;
break;
case 8:
val |= PORT_LINK_MODE_8_LANES;
break;
default:
dev_err(pci->dev, "num-lanes %u: invalid value\n", lanes);
return;
}
dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
/* set link width speed control register */
val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
val &= ~PORT_LOGIC_LINK_WIDTH_MASK;
switch (lanes) {
case 1:
val |= PORT_LOGIC_LINK_WIDTH_1_LANES;
break;
case 2:
val |= PORT_LOGIC_LINK_WIDTH_2_LANES;
break;
case 4:
val |= PORT_LOGIC_LINK_WIDTH_4_LANES;
break;
case 8:
val |= PORT_LOGIC_LINK_WIDTH_8_LANES;
break;
}
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
}


@ -0,0 +1,198 @@
/*
* Synopsys Designware PCIe host controller driver
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Jingoo Han <jg1.han@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _PCIE_DESIGNWARE_H
#define _PCIE_DESIGNWARE_H
#include <linux/irq.h>
#include <linux/msi.h>
#include <linux/pci.h>
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_USLEEP_MIN 90000
#define LINK_WAIT_USLEEP_MAX 100000
/* Parameters for the waiting for iATU enabled routine */
#define LINK_WAIT_MAX_IATU_RETRIES 5
#define LINK_WAIT_IATU_MIN 9000
#define LINK_WAIT_IATU_MAX 10000
/* Synopsys-specific PCIe configuration registers */
#define PCIE_PORT_LINK_CONTROL 0x710
#define PORT_LINK_MODE_MASK (0x3f << 16)
#define PORT_LINK_MODE_1_LANES (0x1 << 16)
#define PORT_LINK_MODE_2_LANES (0x3 << 16)
#define PORT_LINK_MODE_4_LANES (0x7 << 16)
#define PORT_LINK_MODE_8_LANES (0xf << 16)
#define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C
#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17)
#define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8)
#define PORT_LOGIC_LINK_WIDTH_1_LANES (0x1 << 8)
#define PORT_LOGIC_LINK_WIDTH_2_LANES (0x2 << 8)
#define PORT_LOGIC_LINK_WIDTH_4_LANES (0x4 << 8)
#define PORT_LOGIC_LINK_WIDTH_8_LANES (0x8 << 8)
#define PCIE_MSI_ADDR_LO 0x820
#define PCIE_MSI_ADDR_HI 0x824
#define PCIE_MSI_INTR0_ENABLE 0x828
#define PCIE_MSI_INTR0_MASK 0x82C
#define PCIE_MSI_INTR0_STATUS 0x830
#define PCIE_ATU_VIEWPORT 0x900
#define PCIE_ATU_REGION_INBOUND (0x1 << 31)
#define PCIE_ATU_REGION_OUTBOUND (0x0 << 31)
#define PCIE_ATU_REGION_INDEX2 (0x2 << 0)
#define PCIE_ATU_REGION_INDEX1 (0x1 << 0)
#define PCIE_ATU_REGION_INDEX0 (0x0 << 0)
#define PCIE_ATU_CR1 0x904
#define PCIE_ATU_TYPE_MEM (0x0 << 0)
#define PCIE_ATU_TYPE_IO (0x2 << 0)
#define PCIE_ATU_TYPE_CFG0 (0x4 << 0)
#define PCIE_ATU_TYPE_CFG1 (0x5 << 0)
#define PCIE_ATU_CR2 0x908
#define PCIE_ATU_ENABLE (0x1 << 31)
#define PCIE_ATU_BAR_MODE_ENABLE (0x1 << 30)
#define PCIE_ATU_LOWER_BASE 0x90C
#define PCIE_ATU_UPPER_BASE 0x910
#define PCIE_ATU_LIMIT 0x914
#define PCIE_ATU_LOWER_TARGET 0x918
#define PCIE_ATU_BUS(x) (((x) & 0xff) << 24)
#define PCIE_ATU_DEV(x) (((x) & 0x1f) << 19)
#define PCIE_ATU_FUNC(x) (((x) & 0x7) << 16)
#define PCIE_ATU_UPPER_TARGET 0x91C
/*
* iATU Unroll-specific register definitions
 * From core version 4.80 onwards, address translation is done through
 * these unrolled per-region registers instead of the single viewport
*/
#define PCIE_ATU_UNR_REGION_CTRL1 0x00
#define PCIE_ATU_UNR_REGION_CTRL2 0x04
#define PCIE_ATU_UNR_LOWER_BASE 0x08
#define PCIE_ATU_UNR_UPPER_BASE 0x0C
#define PCIE_ATU_UNR_LIMIT 0x10
#define PCIE_ATU_UNR_LOWER_TARGET 0x14
#define PCIE_ATU_UNR_UPPER_TARGET 0x18
/* Register address builder */
#define PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(region) \
((0x3 << 20) | ((region) << 9))
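/* e.g. region 0 -> 0x300000, region 1 -> 0x300200: each unrolled region gets a 512-byte register block */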
/*
 * The hardware supports up to 256 MSI IRQs per controller, but keep it
 * at 32 for now; more have never been needed. If they are, increase
 * this in multiples of 32.
*/
#define MAX_MSI_IRQS 32
#define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32)
struct pcie_port;
struct dw_pcie;
struct dw_pcie_host_ops {
int (*rd_own_conf)(struct pcie_port *pp, int where, int size, u32 *val);
int (*wr_own_conf)(struct pcie_port *pp, int where, int size, u32 val);
int (*rd_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 *val);
int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 val);
void (*host_init)(struct pcie_port *pp);
void (*msi_set_irq)(struct pcie_port *pp, int irq);
void (*msi_clear_irq)(struct pcie_port *pp, int irq);
phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
u32 (*get_msi_data)(struct pcie_port *pp, int pos);
void (*scan_bus)(struct pcie_port *pp);
int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip);
};
struct pcie_port {
u8 root_bus_nr;
u64 cfg0_base;
void __iomem *va_cfg0_base;
u32 cfg0_size;
u64 cfg1_base;
void __iomem *va_cfg1_base;
u32 cfg1_size;
resource_size_t io_base;
phys_addr_t io_bus_addr;
u32 io_size;
u64 mem_base;
phys_addr_t mem_bus_addr;
u32 mem_size;
struct resource *cfg;
struct resource *io;
struct resource *mem;
struct resource *busn;
int irq;
struct dw_pcie_host_ops *ops;
int msi_irq;
struct irq_domain *irq_domain;
unsigned long msi_data;
DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
};
struct dw_pcie_ops {
u32 (*readl_dbi)(struct dw_pcie *pcie, u32 reg);
void (*writel_dbi)(struct dw_pcie *pcie, u32 reg, u32 val);
int (*link_up)(struct dw_pcie *pcie);
};
struct dw_pcie {
struct device *dev;
void __iomem *dbi_base;
u32 num_viewport;
u8 iatu_unroll_enabled;
struct pcie_port pp;
const struct dw_pcie_ops *ops;
};
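/* struct pcie_port is embedded in struct dw_pcie, so container_of() recovers the core state from a host pointer. */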
#define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
int dw_pcie_read(void __iomem *addr, int size, u32 *val);
int dw_pcie_write(void __iomem *addr, int size, u32 val);
u32 dw_pcie_readl_dbi(struct dw_pcie *pci, u32 reg);
void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val);
int dw_pcie_link_up(struct dw_pcie *pci);
int dw_pcie_wait_for_link(struct dw_pcie *pci);
void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
int type, u64 cpu_addr, u64 pci_addr,
u32 size);
void dw_pcie_setup(struct dw_pcie *pci);
#ifdef CONFIG_PCIE_DW_HOST
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
void dw_pcie_msi_init(struct pcie_port *pp);
void dw_pcie_setup_rc(struct pcie_port *pp);
int dw_pcie_host_init(struct pcie_port *pp);
#else
static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
{
return IRQ_NONE;
}
static inline void dw_pcie_msi_init(struct pcie_port *pp)
{
}
static inline void dw_pcie_setup_rc(struct pcie_port *pp)
{
}
static inline int dw_pcie_host_init(struct pcie_port *pp)
{
return 0;
}
#endif
#endif /* _PCIE_DESIGNWARE_H */


@ -24,10 +24,10 @@
#include <linux/regmap.h>
#include "../pci.h"
#if defined(CONFIG_PCI_HISI) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
static int hisi_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
			     int size, u32 *val)
{
struct pci_config_window *cfg = bus->sysdata;
int dev = PCI_SLOT(devfn);
@ -44,8 +44,8 @@ static int hisi_pcie_acpi_rd_conf(struct pci_bus *bus, u32 devfn, int where,
return pci_generic_config_read(bus, devfn, where, size, val);
}
static int hisi_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
			     int where, int size, u32 val)
{
struct pci_config_window *cfg = bus->sysdata;
int dev = PCI_SLOT(devfn);
@ -74,6 +74,8 @@ static void __iomem *hisi_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
return pci_ecam_map_bus(bus, devfn, where);
}
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
static int hisi_pcie_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
@ -110,8 +112,8 @@ struct pci_ecam_ops hisi_pcie_ops = {
.init = hisi_pcie_init,
.pci_ops = {
.map_bus = hisi_pcie_map_bus,
.read = hisi_pcie_rd_conf,
.write = hisi_pcie_wr_conf,
}
};
@ -127,7 +129,7 @@ struct pci_ecam_ops hisi_pcie_ops = {
#define PCIE_LTSSM_LINKUP_STATE 0x11
#define PCIE_LTSSM_STATE_MASK 0x3F
#define to_hisi_pcie(x)	dev_get_drvdata((x)->dev)
struct hisi_pcie;
@ -136,10 +138,10 @@ struct pcie_soc_ops {
};
struct hisi_pcie {
	struct dw_pcie *pci;
struct regmap *subctrl;
u32 port_id;
	const struct pcie_soc_ops *soc_ops;
};
/* HipXX PCIe host only supports 32-bit config access */
@ -149,10 +151,11 @@ static int hisi_pcie_cfg_read(struct pcie_port *pp, int where, int size,
u32 reg;
u32 reg_val;
void *walker = &reg_val;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
walker += (where & 0x3);
reg = where & ~0x3;
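	/* Read the aligned 32-bit register and extract the requested bytes through 'walker'. */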
	reg_val = dw_pcie_readl_dbi(pci, reg);
if (size == 1)
*val = *(u8 __force *) walker;
@ -173,19 +176,20 @@ static int hisi_pcie_cfg_write(struct pcie_port *pp, int where, int size,
u32 reg_val;
u32 reg;
void *walker = &reg_val;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
walker += (where & 0x3);
reg = where & ~0x3;
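	/* Sub-word writes become a read-modify-write of the aligned 32-bit register. */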
	if (size == 4)
		dw_pcie_writel_dbi(pci, reg, val);
	else if (size == 2) {
		reg_val = dw_pcie_readl_dbi(pci, reg);
		*(u16 __force *) walker = val;
		dw_pcie_writel_dbi(pci, reg, reg_val);
	} else if (size == 1) {
		reg_val = dw_pcie_readl_dbi(pci, reg);
		*(u8 __force *) walker = val;
		dw_pcie_writel_dbi(pci, reg, reg_val);
} else
return PCIBIOS_BAD_REGISTER_NUMBER;
@ -204,32 +208,32 @@ static int hisi_pcie_link_up_hip05(struct hisi_pcie *hisi_pcie)
static int hisi_pcie_link_up_hip06(struct hisi_pcie *hisi_pcie)
{
	struct dw_pcie *pci = hisi_pcie->pci;
	u32 val;

	val = dw_pcie_readl_dbi(pci, PCIE_SYS_STATE4);
return ((val & PCIE_LTSSM_STATE_MASK) == PCIE_LTSSM_LINKUP_STATE);
}
static int hisi_pcie_link_up(struct dw_pcie *pci)
{
	struct hisi_pcie *hisi_pcie = to_hisi_pcie(pci);
return hisi_pcie->soc_ops->hisi_pcie_link_up(hisi_pcie);
}
static struct dw_pcie_host_ops hisi_pcie_host_ops = {
	.rd_own_conf = hisi_pcie_cfg_read,
	.wr_own_conf = hisi_pcie_cfg_write,
};
static int hisi_add_pcie_port(struct hisi_pcie *hisi_pcie,
struct platform_device *pdev)
{
	struct dw_pcie *pci = hisi_pcie->pci;
	struct pcie_port *pp = &pci->pp;
	struct device *dev = &pdev->dev;
int ret;
u32 port_id;
@ -254,12 +258,15 @@ static int hisi_add_pcie_port(struct hisi_pcie *hisi_pcie,
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = hisi_pcie_link_up,
};
static int hisi_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
struct hisi_pcie *hisi_pcie;
	struct resource *reg;
int ret;
@ -268,24 +275,30 @@ static int hisi_pcie_probe(struct platform_device *pdev)
if (!hisi_pcie)
return -ENOMEM;
	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
	if (!pci)
		return -ENOMEM;

	pci->dev = dev;
	pci->ops = &dw_pcie_ops;

	hisi_pcie->pci = pci;

	hisi_pcie->soc_ops = of_device_get_match_data(dev);
hisi_pcie->subctrl =
syscon_regmap_lookup_by_compatible("hisilicon,pcie-sas-subctrl");
syscon_regmap_lookup_by_compatible("hisilicon,pcie-sas-subctrl");
if (IS_ERR(hisi_pcie->subctrl)) {
dev_err(dev, "cannot get subctrl base\n");
return PTR_ERR(hisi_pcie->subctrl);
}
reg = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbi");
pci->dbi_base = devm_ioremap_resource(dev, reg);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
platform_set_drvdata(pdev, hisi_pcie);
ret = hisi_add_pcie_port(hisi_pcie, pdev);
if (ret)
@ -323,4 +336,62 @@ static struct platform_driver hisi_pcie_driver = {
};
builtin_platform_driver(hisi_pcie_driver);
static int hisi_pcie_almost_ecam_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct pci_ecam_ops *ops;
ops = (struct pci_ecam_ops *)of_device_get_match_data(dev);
return pci_host_common_probe(pdev, ops);
}
static int hisi_pcie_platform_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res;
void __iomem *reg_base;
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res) {
dev_err(dev, "missing \"reg[1]\"property\n");
return -EINVAL;
}
reg_base = devm_ioremap(dev, res->start, resource_size(res));
if (!reg_base)
return -ENOMEM;
cfg->priv = reg_base;
return 0;
}
struct pci_ecam_ops hisi_pcie_platform_ops = {
.bus_shift = 20,
.init = hisi_pcie_platform_init,
.pci_ops = {
.map_bus = hisi_pcie_map_bus,
.read = hisi_pcie_rd_conf,
.write = hisi_pcie_wr_conf,
}
};
static const struct of_device_id hisi_pcie_almost_ecam_of_match[] = {
{
.compatible = "hisilicon,pcie-almost-ecam",
.data = (void *) &hisi_pcie_platform_ops,
},
{},
};
static struct platform_driver hisi_pcie_almost_ecam_driver = {
.probe = hisi_pcie_almost_ecam_probe,
.driver = {
.name = "hisi-pcie-almost-ecam",
.of_match_table = hisi_pcie_almost_ecam_of_match,
},
};
builtin_platform_driver(hisi_pcie_almost_ecam_driver);
#endif
#endif


@ -103,7 +103,7 @@ struct qcom_pcie_ops {
};
struct qcom_pcie {
	struct dw_pcie *pci;
void __iomem *parf; /* DT parf */
void __iomem *elbi; /* DT elbi */
union qcom_pcie_resources res;
@ -112,7 +112,7 @@ struct qcom_pcie {
struct qcom_pcie_ops *ops;
};
#define to_qcom_pcie(x)	dev_get_drvdata((x)->dev)
static void qcom_ep_reset_assert(struct qcom_pcie *pcie)
{
@ -155,21 +155,23 @@ static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie)
static int qcom_pcie_establish_link(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
	if (dw_pcie_link_up(pci))
return 0;
/* Enable Link Training state machine */
if (pcie->ops->ltssm_enable)
pcie->ops->ltssm_enable(pcie);
	return dw_pcie_wait_for_link(pci);
}
static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v0 *res = &pcie->res.v0;
	struct dw_pcie *pci = pcie->pci;
	struct device *dev = pci->dev;
res->vdda = devm_regulator_get(dev, "vdda");
if (IS_ERR(res->vdda))
@ -212,16 +214,14 @@ static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie)
return PTR_ERR(res->por_reset);
res->phy_reset = devm_reset_control_get(dev, "phy");
	return PTR_ERR_OR_ZERO(res->phy_reset);
}
static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v1 *res = &pcie->res.v1;
	struct dw_pcie *pci = pcie->pci;
	struct device *dev = pci->dev;
res->vdda = devm_regulator_get(dev, "vdda");
if (IS_ERR(res->vdda))
@ -244,10 +244,7 @@ static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie)
return PTR_ERR(res->slave_bus);
res->core = devm_reset_control_get(dev, "core");
	return PTR_ERR_OR_ZERO(res->core);
}
static void qcom_pcie_deinit_v0(struct qcom_pcie *pcie)
@ -270,7 +267,8 @@ static void qcom_pcie_deinit_v0(struct qcom_pcie *pcie)
static int qcom_pcie_init_v0(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v0 *res = &pcie->res.v0;
	struct dw_pcie *pci = pcie->pci;
	struct device *dev = pci->dev;
u32 val;
int ret;
@ -392,7 +390,8 @@ static void qcom_pcie_deinit_v1(struct qcom_pcie *pcie)
static int qcom_pcie_init_v1(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v1 *res = &pcie->res.v1;
	struct dw_pcie *pci = pcie->pci;
	struct device *dev = pci->dev;
int ret;
ret = reset_control_deassert(res->core);
@ -459,7 +458,8 @@ err_res:
static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
	struct dw_pcie *pci = pcie->pci;
	struct device *dev = pci->dev;
res->aux_clk = devm_clk_get(dev, "aux");
if (IS_ERR(res->aux_clk))
@ -478,16 +478,14 @@ static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie)
return PTR_ERR(res->slave_clk);
res->pipe_clk = devm_clk_get(dev, "pipe");
	return PTR_ERR_OR_ZERO(res->pipe_clk);
}
static int qcom_pcie_init_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
	struct dw_pcie *pci = pcie->pci;
	struct device *dev = pci->dev;
u32 val;
int ret;
@ -551,7 +549,8 @@ err_cfg_clk:
static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
	struct dw_pcie *pci = pcie->pci;
	struct device *dev = pci->dev;
int ret;
ret = clk_prepare_enable(res->pipe_clk);
@ -563,10 +562,9 @@ static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie)
return 0;
}
static int qcom_pcie_link_up(struct dw_pcie *pci)
{
	u16 val = readw(pci->dbi_base + PCIE20_CAP + PCI_EXP_LNKSTA);
return !!(val & PCI_EXP_LNKSTA_DLLLA);
}
@ -584,7 +582,8 @@ static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie)
static void qcom_pcie_host_init(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	struct qcom_pcie *pcie = to_qcom_pcie(pci);
int ret;
qcom_ep_reset_assert(pcie);
@ -622,19 +621,20 @@ err_deinit:
static int qcom_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
u32 *val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
/* the device class is not reported correctly from the register */
if (where == PCI_CLASS_REVISION && size == 4) {
		*val = readl(pci->dbi_base + PCI_CLASS_REVISION);
*val &= 0xff; /* keep revision id */
*val |= PCI_CLASS_BRIDGE_PCI << 16;
return PCIBIOS_SUCCESSFUL;
}
	return dw_pcie_read(pci->dbi_base + where, size, val);
}
static struct dw_pcie_host_ops qcom_pcie_dw_ops = {
.host_init = qcom_pcie_host_init,
.rd_own_conf = qcom_pcie_rd_own_conf,
};
@ -661,19 +661,31 @@ static const struct qcom_pcie_ops ops_v2 = {
.ltssm_enable = qcom_pcie_v2_ltssm_enable,
};
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = qcom_pcie_link_up,
};
static int qcom_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct resource *res;
	struct pcie_port *pp;
	struct dw_pcie *pci;
	struct qcom_pcie *pcie;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
	if (!pci)
		return -ENOMEM;

	pci->dev = dev;
	pci->ops = &dw_pcie_ops;

	pcie->pci = pci;

	pp = &pci->pp;
pcie->ops = (struct qcom_pcie_ops *)of_device_get_match_data(dev);
pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
@ -686,9 +698,9 @@ static int qcom_pcie_probe(struct platform_device *pdev)
return PTR_ERR(pcie->parf);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
pci->dbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
pcie->elbi = devm_ioremap_resource(dev, res);
@ -699,7 +711,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
if (IS_ERR(pcie->phy))
return PTR_ERR(pcie->phy);
ret = pcie->ops->get_resources(pcie);
if (ret)
return ret;
@ -725,6 +736,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
platform_set_drvdata(pdev, pcie);
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "cannot initialize host\n");


@ -25,7 +25,7 @@
#include "pcie-designware.h"
struct spear13xx_pcie {
	struct dw_pcie *pci;
void __iomem *app_base;
struct phy *phy;
struct clk *clk;
@ -70,17 +70,18 @@ struct pcie_app_reg {
#define EXP_CAP_ID_OFFSET 0x70
#define to_spear13xx_pcie(x)	dev_get_drvdata((x)->dev)
static int spear13xx_pcie_establish_link(struct spear13xx_pcie *spear13xx_pcie)
{
	struct dw_pcie *pci = spear13xx_pcie->pci;
	struct pcie_port *pp = &pci->pp;
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
u32 val;
u32 exp_cap_off = EXP_CAP_ID_OFFSET;
	if (dw_pcie_link_up(pci)) {
		dev_err(pci->dev, "link already up\n");
return 0;
}
@ -91,34 +92,34 @@ static int spear13xx_pcie_establish_link(struct spear13xx_pcie *spear13xx_pcie)
* default value in capability register is 512 bytes. So force
* it to 128 here.
*/
	dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_DEVCTL, 2, &val);
	val &= ~PCI_EXP_DEVCTL_READRQ;
	dw_pcie_write(pci->dbi_base + exp_cap_off + PCI_EXP_DEVCTL, 2, val);

	dw_pcie_write(pci->dbi_base + PCI_VENDOR_ID, 2, 0x104A);
	dw_pcie_write(pci->dbi_base + PCI_DEVICE_ID, 2, 0xCD80);
/*
 * if is_gen1 is set, force the link to Gen1 speed so that some
 * buggy cards also work
*/
if (spear13xx_pcie->is_gen1) {
		dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCAP,
			     4, &val);
if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
val &= ~((u32)PCI_EXP_LNKCAP_SLS);
val |= PCI_EXP_LNKCAP_SLS_2_5GB;
			dw_pcie_write(pci->dbi_base + exp_cap_off +
				      PCI_EXP_LNKCAP, 4, val);
}
		dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCTL2,
			     2, &val);
if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
val &= ~((u32)PCI_EXP_LNKCAP_SLS);
val |= PCI_EXP_LNKCAP_SLS_2_5GB;
			dw_pcie_write(pci->dbi_base + exp_cap_off +
				      PCI_EXP_LNKCTL2, 2, val);
}
}
@ -128,14 +129,15 @@ static int spear13xx_pcie_establish_link(struct spear13xx_pcie *spear13xx_pcie)
| ((u32)1 << REG_TRANSLATION_ENABLE),
&app_reg->app_ctrl_0);
	return dw_pcie_wait_for_link(pci);
}
static irqreturn_t spear13xx_pcie_irq_handler(int irq, void *arg)
{
struct spear13xx_pcie *spear13xx_pcie = arg;
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
	struct dw_pcie *pci = spear13xx_pcie->pci;
	struct pcie_port *pp = &pci->pp;
unsigned int status;
status = readl(&app_reg->int_sts);
@ -152,7 +154,8 @@ static irqreturn_t spear13xx_pcie_irq_handler(int irq, void *arg)
static void spear13xx_pcie_enable_interrupts(struct spear13xx_pcie *spear13xx_pcie)
{
	struct dw_pcie *pci = spear13xx_pcie->pci;
	struct pcie_port *pp = &pci->pp;
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
/* Enable MSI interrupt */
@ -163,9 +166,9 @@ static void spear13xx_pcie_enable_interrupts(struct spear13xx_pcie *spear13xx_pc
}
}
static int spear13xx_pcie_link_up(struct dw_pcie *pci)
{
	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci);
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
if (readl(&app_reg->app_status_1) & XMLH_LINK_UP)
@ -176,22 +179,23 @@ static int spear13xx_pcie_link_up(struct pcie_port *pp)
static void spear13xx_pcie_host_init(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci);
spear13xx_pcie_establish_link(spear13xx_pcie);
spear13xx_pcie_enable_interrupts(spear13xx_pcie);
}
static struct dw_pcie_host_ops spear13xx_pcie_host_ops = {
.host_init = spear13xx_pcie_host_init,
};
static int spear13xx_add_pcie_port(struct spear13xx_pcie *spear13xx_pcie,
struct platform_device *pdev)
{
	struct dw_pcie *pci = spear13xx_pcie->pci;
	struct pcie_port *pp = &pci->pp;
	struct device *dev = &pdev->dev;
int ret;
pp->irq = platform_get_irq(pdev, 0);
@ -219,11 +223,15 @@ static int spear13xx_add_pcie_port(struct spear13xx_pcie *spear13xx_pcie,
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = spear13xx_pcie_link_up,
};
static int spear13xx_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
struct spear13xx_pcie *spear13xx_pcie;
struct device_node *np = dev->of_node;
struct resource *dbi_base;
int ret;
@ -232,6 +240,13 @@ static int spear13xx_pcie_probe(struct platform_device *pdev)
if (!spear13xx_pcie)
return -ENOMEM;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
spear13xx_pcie->phy = devm_phy_get(dev, "pcie-phy");
if (IS_ERR(spear13xx_pcie->phy)) {
ret = PTR_ERR(spear13xx_pcie->phy);
@ -255,26 +270,24 @@ static int spear13xx_pcie_probe(struct platform_device *pdev)
return ret;
}
	spear13xx_pcie->pci = pci;
dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
	pci->dbi_base = devm_ioremap_resource(dev, dbi_base);
	if (IS_ERR(pci->dbi_base)) {
		dev_err(dev, "couldn't remap dbi base %p\n", dbi_base);
		ret = PTR_ERR(pci->dbi_base);
goto fail_clk;
}
	spear13xx_pcie->app_base = pci->dbi_base + 0x2000;
if (of_property_read_bool(np, "st,pcie-is-gen1"))
spear13xx_pcie->is_gen1 = true;
platform_set_drvdata(pdev, spear13xx_pcie);
ret = spear13xx_add_pcie_port(spear13xx_pcie, pdev);
if (ret < 0)
goto fail_clk;
return 0;
fail_clk:


@ -1,16 +1,6 @@
menu "PCI host controller drivers"
depends on PCI
config PCI_DRA7XX
bool "TI DRA7xx PCIe controller"
depends on OF && HAS_IOMEM && TI_PIPE3
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
help
Enables support for the PCIe controller in the DRA7xx SoC. There
are two instances of PCIe controller in DRA7xx. This controller can
act both as EP and RC. This reuses the Designware core.
config PCI_MVEBU
bool "Marvell EBU PCIe controller"
depends on ARCH_MVEBU || ARCH_DOVE
@ -37,36 +27,6 @@ config PCIE_XILINX_NWL
or End Point. The current option selection will only
support root port enabling.
config PCIE_DW_PLAT
bool "Platform bus based DesignWare PCIe Controller"
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
---help---
This selects the DesignWare PCIe controller support. Select this if
you have a PCIe controller on Platform bus.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config PCIE_DW
bool
depends on PCI_MSI_IRQ_DOMAIN
config PCI_EXYNOS
bool "Samsung Exynos PCIe controller"
depends on SOC_EXYNOS5440
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
config PCI_IMX6
bool "Freescale i.MX6 PCIe controller"
depends on SOC_IMX6Q
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
config PCI_TEGRA
bool "NVIDIA Tegra PCIe controller"
depends on ARCH_TEGRA
@ -103,27 +63,6 @@ config PCI_HOST_GENERIC
Say Y here if you want to support a simple generic PCI host
controller, such as the one emulated by kvmtool.
config PCIE_SPEAR13XX
bool "STMicroelectronics SPEAr PCIe controller"
depends on ARCH_SPEAR13XX
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
help
Say Y here if you want PCIe support on SPEAr13XX SoCs.
config PCI_KEYSTONE
bool "TI Keystone PCIe controller"
depends on ARCH_KEYSTONE
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
Say Y here if you want to enable PCI controller support on Keystone
SoCs. The PCI controller on Keystone is based on Designware hardware
and therefore the driver re-uses the Designware core functions to
implement the driver.
config PCIE_XILINX
bool "Xilinx AXI PCIe host bridge support"
depends on ARCH_ZYNQ || MICROBLAZE
@ -150,15 +89,6 @@ config PCI_XGENE_MSI
Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC.
This MSI driver supports 5 PCIe ports on the APM X-Gene v1 SoC.
config PCI_LAYERSCAPE
bool "Freescale Layerscape PCIe controller"
depends on OF && (ARM || ARCH_LAYERSCAPE)
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select MFD_SYSCON
help
Say Y here if you want PCIe controller support on Layerscape SoCs.
config PCI_VERSATILE
bool "ARM Versatile PB PCI controller"
depends on ARCH_VERSATILE
@ -217,27 +147,6 @@ config PCIE_ALTERA_MSI
Say Y here if you want PCIe MSI support for the Altera FPGA.
This MSI driver supports Altera MSI to GIC controller IP.
config PCI_HISI
depends on OF && ARM64
bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers"
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
help
Say Y here if you want PCIe controller support on HiSilicon
Hip05 and Hip06 SoCs
config PCIE_QCOM
bool "Qualcomm PCIe controller"
depends on ARCH_QCOM && OF
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
Say Y here to enable PCIe controller support on Qualcomm SoCs. The
PCIe controller uses the Designware core plus Qualcomm-specific
hardware wrappers.
config PCI_HOST_THUNDER_PEM
bool "Cavium Thunder PCIe controller to off-chip devices"
depends on ARM64
@ -254,28 +163,6 @@ config PCI_HOST_THUNDER_ECAM
help
Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs.
config PCIE_ARMADA_8K
bool "Marvell Armada-8K PCIe controller"
depends on ARCH_MVEBU
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
Say Y here if you want to enable PCIe controller support on
Armada-8K SoCs. The PCIe controller on Armada-8K is based on
Designware hardware and therefore the driver re-uses the
Designware core functions to implement the driver.
config PCIE_ARTPEC6
bool "Axis ARTPEC-6 PCIe controller"
depends on MACH_ARTPEC6
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
Say Y here to enable PCIe controller support on Axis ARTPEC-6
SoCs. This PCIe controller uses the DesignWare core.
config PCIE_ROCKCHIP
bool "Rockchip PCIe controller"
depends on ARCH_ROCKCHIP || COMPILE_TEST


@ -1,8 +1,3 @@
obj-$(CONFIG_PCIE_DW) += pcie-designware.o
obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCI_HYPERV) += pci-hyperv.o
obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o
obj-$(CONFIG_PCI_AARDVARK) += pci-aardvark.o
@ -11,12 +6,9 @@ obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o
obj-$(CONFIG_PCIE_RCAR) += pcie-rcar.o
obj-$(CONFIG_PCI_HOST_COMMON) += pci-host-common.o
obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o
obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o
obj-$(CONFIG_PCIE_IPROC) += pcie-iproc.o
obj-$(CONFIG_PCIE_IPROC_MSI) += pcie-iproc-msi.o
@ -24,9 +16,6 @@ obj-$(CONFIG_PCIE_IPROC_PLATFORM) += pcie-iproc-platform.o
obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o
obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o
obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_VMD) += vmd.o
@ -40,7 +29,6 @@ obj-$(CONFIG_VMD) += vmd.o
# ARM64 and use internal ifdefs to only build the pieces we need
# depending on whether ACPI, the DT driver, or both are enabled.
obj-$(CONFIG_ARM64) += pcie-hisi.o
obj-$(CONFIG_ARM64) += pci-thunder-ecam.o
obj-$(CONFIG_ARM64) += pci-thunder-pem.o
obj-$(CONFIG_ARM64) += pci-xgene.o


@ -1,629 +0,0 @@
/*
* PCIe host controller driver for Samsung EXYNOS SoCs
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Jingoo Han <jg1.han@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/signal.h>
#include <linux/types.h>
#include "pcie-designware.h"
#define to_exynos_pcie(x) container_of(x, struct exynos_pcie, pp)
struct exynos_pcie {
struct pcie_port pp;
void __iomem *elbi_base; /* DT 0th resource */
void __iomem *phy_base; /* DT 1st resource */
void __iomem *block_base; /* DT 2nd resource */
int reset_gpio;
struct clk *clk;
struct clk *bus_clk;
};
/* PCIe ELBI registers */
#define PCIE_IRQ_PULSE 0x000
#define IRQ_INTA_ASSERT (0x1 << 0)
#define IRQ_INTB_ASSERT (0x1 << 2)
#define IRQ_INTC_ASSERT (0x1 << 4)
#define IRQ_INTD_ASSERT (0x1 << 6)
#define PCIE_IRQ_LEVEL 0x004
#define PCIE_IRQ_SPECIAL 0x008
#define PCIE_IRQ_EN_PULSE 0x00c
#define PCIE_IRQ_EN_LEVEL 0x010
#define IRQ_MSI_ENABLE (0x1 << 2)
#define PCIE_IRQ_EN_SPECIAL 0x014
#define PCIE_PWR_RESET 0x018
#define PCIE_CORE_RESET 0x01c
#define PCIE_CORE_RESET_ENABLE (0x1 << 0)
#define PCIE_STICKY_RESET 0x020
#define PCIE_NONSTICKY_RESET 0x024
#define PCIE_APP_INIT_RESET 0x028
#define PCIE_APP_LTSSM_ENABLE 0x02c
#define PCIE_ELBI_RDLH_LINKUP 0x064
#define PCIE_ELBI_LTSSM_ENABLE 0x1
#define PCIE_ELBI_SLV_AWMISC 0x11c
#define PCIE_ELBI_SLV_ARMISC 0x120
#define PCIE_ELBI_SLV_DBI_ENABLE (0x1 << 21)
/* PCIe Purple registers */
#define PCIE_PHY_GLOBAL_RESET 0x000
#define PCIE_PHY_COMMON_RESET 0x004
#define PCIE_PHY_CMN_REG 0x008
#define PCIE_PHY_MAC_RESET 0x00c
#define PCIE_PHY_PLL_LOCKED 0x010
#define PCIE_PHY_TRSVREG_RESET 0x020
#define PCIE_PHY_TRSV_RESET 0x024
/* PCIe PHY registers */
#define PCIE_PHY_IMPEDANCE 0x004
#define PCIE_PHY_PLL_DIV_0 0x008
#define PCIE_PHY_PLL_BIAS 0x00c
#define PCIE_PHY_DCC_FEEDBACK 0x014
#define PCIE_PHY_PLL_DIV_1 0x05c
#define PCIE_PHY_COMMON_POWER 0x064
#define PCIE_PHY_COMMON_PD_CMN (0x1 << 3)
#define PCIE_PHY_TRSV0_EMP_LVL 0x084
#define PCIE_PHY_TRSV0_DRV_LVL 0x088
#define PCIE_PHY_TRSV0_RXCDR 0x0ac
#define PCIE_PHY_TRSV0_POWER 0x0c4
#define PCIE_PHY_TRSV0_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV0_LVCC 0x0dc
#define PCIE_PHY_TRSV1_EMP_LVL 0x144
#define PCIE_PHY_TRSV1_RXCDR 0x16c
#define PCIE_PHY_TRSV1_POWER 0x184
#define PCIE_PHY_TRSV1_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV1_LVCC 0x19c
#define PCIE_PHY_TRSV2_EMP_LVL 0x204
#define PCIE_PHY_TRSV2_RXCDR 0x22c
#define PCIE_PHY_TRSV2_POWER 0x244
#define PCIE_PHY_TRSV2_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV2_LVCC 0x25c
#define PCIE_PHY_TRSV3_EMP_LVL 0x2c4
#define PCIE_PHY_TRSV3_RXCDR 0x2ec
#define PCIE_PHY_TRSV3_POWER 0x304
#define PCIE_PHY_TRSV3_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV3_LVCC 0x31c
static void exynos_elb_writel(struct exynos_pcie *exynos_pcie, u32 val, u32 reg)
{
writel(val, exynos_pcie->elbi_base + reg);
}
static u32 exynos_elb_readl(struct exynos_pcie *exynos_pcie, u32 reg)
{
return readl(exynos_pcie->elbi_base + reg);
}
static void exynos_phy_writel(struct exynos_pcie *exynos_pcie, u32 val, u32 reg)
{
writel(val, exynos_pcie->phy_base + reg);
}
static u32 exynos_phy_readl(struct exynos_pcie *exynos_pcie, u32 reg)
{
return readl(exynos_pcie->phy_base + reg);
}
static void exynos_blk_writel(struct exynos_pcie *exynos_pcie, u32 val, u32 reg)
{
writel(val, exynos_pcie->block_base + reg);
}
static u32 exynos_blk_readl(struct exynos_pcie *exynos_pcie, u32 reg)
{
return readl(exynos_pcie->block_base + reg);
}
static void exynos_pcie_sideband_dbi_w_mode(struct exynos_pcie *exynos_pcie,
bool on)
{
u32 val;
if (on) {
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_AWMISC);
val |= PCIE_ELBI_SLV_DBI_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_AWMISC);
} else {
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_AWMISC);
val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_AWMISC);
}
}
static void exynos_pcie_sideband_dbi_r_mode(struct exynos_pcie *exynos_pcie,
bool on)
{
u32 val;
if (on) {
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_ARMISC);
val |= PCIE_ELBI_SLV_DBI_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_ARMISC);
} else {
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_ARMISC);
val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_ARMISC);
}
}
static void exynos_pcie_assert_core_reset(struct exynos_pcie *exynos_pcie)
{
u32 val;
val = exynos_elb_readl(exynos_pcie, PCIE_CORE_RESET);
val &= ~PCIE_CORE_RESET_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_CORE_RESET);
exynos_elb_writel(exynos_pcie, 0, PCIE_PWR_RESET);
exynos_elb_writel(exynos_pcie, 0, PCIE_STICKY_RESET);
exynos_elb_writel(exynos_pcie, 0, PCIE_NONSTICKY_RESET);
}
static void exynos_pcie_deassert_core_reset(struct exynos_pcie *exynos_pcie)
{
u32 val;
val = exynos_elb_readl(exynos_pcie, PCIE_CORE_RESET);
val |= PCIE_CORE_RESET_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_CORE_RESET);
exynos_elb_writel(exynos_pcie, 1, PCIE_STICKY_RESET);
exynos_elb_writel(exynos_pcie, 1, PCIE_NONSTICKY_RESET);
exynos_elb_writel(exynos_pcie, 1, PCIE_APP_INIT_RESET);
exynos_elb_writel(exynos_pcie, 0, PCIE_APP_INIT_RESET);
exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_MAC_RESET);
}
static void exynos_pcie_assert_phy_reset(struct exynos_pcie *exynos_pcie)
{
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_MAC_RESET);
exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_GLOBAL_RESET);
}
static void exynos_pcie_deassert_phy_reset(struct exynos_pcie *exynos_pcie)
{
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_GLOBAL_RESET);
exynos_elb_writel(exynos_pcie, 1, PCIE_PWR_RESET);
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_CMN_REG);
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSVREG_RESET);
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSV_RESET);
}
static void exynos_pcie_power_on_phy(struct exynos_pcie *exynos_pcie)
{
u32 val;
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_COMMON_POWER);
val &= ~PCIE_PHY_COMMON_PD_CMN;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_COMMON_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV0_POWER);
val &= ~PCIE_PHY_TRSV0_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV0_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV1_POWER);
val &= ~PCIE_PHY_TRSV1_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV1_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV2_POWER);
val &= ~PCIE_PHY_TRSV2_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV2_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV3_POWER);
val &= ~PCIE_PHY_TRSV3_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV3_POWER);
}
static void exynos_pcie_power_off_phy(struct exynos_pcie *exynos_pcie)
{
u32 val;
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_COMMON_POWER);
val |= PCIE_PHY_COMMON_PD_CMN;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_COMMON_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV0_POWER);
val |= PCIE_PHY_TRSV0_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV0_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV1_POWER);
val |= PCIE_PHY_TRSV1_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV1_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV2_POWER);
val |= PCIE_PHY_TRSV2_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV2_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV3_POWER);
val |= PCIE_PHY_TRSV3_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV3_POWER);
}
static void exynos_pcie_init_phy(struct exynos_pcie *exynos_pcie)
{
/* DCC feedback control off */
exynos_phy_writel(exynos_pcie, 0x29, PCIE_PHY_DCC_FEEDBACK);
/* set TX/RX impedance */
exynos_phy_writel(exynos_pcie, 0xd5, PCIE_PHY_IMPEDANCE);
/* set 50Mhz PHY clock */
exynos_phy_writel(exynos_pcie, 0x14, PCIE_PHY_PLL_DIV_0);
exynos_phy_writel(exynos_pcie, 0x12, PCIE_PHY_PLL_DIV_1);
/* set TX Differential output for lane 0 */
exynos_phy_writel(exynos_pcie, 0x7f, PCIE_PHY_TRSV0_DRV_LVL);
/* set TX Pre-emphasis Level Control for lane 0 to minimum */
exynos_phy_writel(exynos_pcie, 0x0, PCIE_PHY_TRSV0_EMP_LVL);
/* set RX clock and data recovery bandwidth */
exynos_phy_writel(exynos_pcie, 0xe7, PCIE_PHY_PLL_BIAS);
exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV0_RXCDR);
exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV1_RXCDR);
exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV2_RXCDR);
exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV3_RXCDR);
/* change TX Pre-emphasis Level Control for lanes */
exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV0_EMP_LVL);
exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV1_EMP_LVL);
exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV2_EMP_LVL);
exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV3_EMP_LVL);
/* set LVCC */
exynos_phy_writel(exynos_pcie, 0x20, PCIE_PHY_TRSV0_LVCC);
exynos_phy_writel(exynos_pcie, 0xa0, PCIE_PHY_TRSV1_LVCC);
exynos_phy_writel(exynos_pcie, 0xa0, PCIE_PHY_TRSV2_LVCC);
exynos_phy_writel(exynos_pcie, 0xa0, PCIE_PHY_TRSV3_LVCC);
}
static void exynos_pcie_assert_reset(struct exynos_pcie *exynos_pcie)
{
struct pcie_port *pp = &exynos_pcie->pp;
struct device *dev = pp->dev;
if (exynos_pcie->reset_gpio >= 0)
devm_gpio_request_one(dev, exynos_pcie->reset_gpio,
GPIOF_OUT_INIT_HIGH, "RESET");
}
static int exynos_pcie_establish_link(struct exynos_pcie *exynos_pcie)
{
struct pcie_port *pp = &exynos_pcie->pp;
struct device *dev = pp->dev;
u32 val;
if (dw_pcie_link_up(pp)) {
dev_err(dev, "Link already up\n");
return 0;
}
exynos_pcie_assert_core_reset(exynos_pcie);
exynos_pcie_assert_phy_reset(exynos_pcie);
exynos_pcie_deassert_phy_reset(exynos_pcie);
exynos_pcie_power_on_phy(exynos_pcie);
exynos_pcie_init_phy(exynos_pcie);
/* pulse for common reset */
exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_COMMON_RESET);
udelay(500);
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);
exynos_pcie_deassert_core_reset(exynos_pcie);
dw_pcie_setup_rc(pp);
exynos_pcie_assert_reset(exynos_pcie);
/* assert LTSSM enable */
exynos_elb_writel(exynos_pcie, PCIE_ELBI_LTSSM_ENABLE,
PCIE_APP_LTSSM_ENABLE);
/* check if the link is up or not */
if (!dw_pcie_wait_for_link(pp))
return 0;
while (exynos_phy_readl(exynos_pcie, PCIE_PHY_PLL_LOCKED) == 0) {
val = exynos_blk_readl(exynos_pcie, PCIE_PHY_PLL_LOCKED);
dev_info(dev, "PLL Locked: 0x%x\n", val);
}
exynos_pcie_power_off_phy(exynos_pcie);
return -ETIMEDOUT;
}
static void exynos_pcie_clear_irq_pulse(struct exynos_pcie *exynos_pcie)
{
u32 val;
val = exynos_elb_readl(exynos_pcie, PCIE_IRQ_PULSE);
exynos_elb_writel(exynos_pcie, val, PCIE_IRQ_PULSE);
}
static void exynos_pcie_enable_irq_pulse(struct exynos_pcie *exynos_pcie)
{
u32 val;
/* enable INTX interrupt */
val = IRQ_INTA_ASSERT | IRQ_INTB_ASSERT |
IRQ_INTC_ASSERT | IRQ_INTD_ASSERT;
exynos_elb_writel(exynos_pcie, val, PCIE_IRQ_EN_PULSE);
}
static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg)
{
struct exynos_pcie *exynos_pcie = arg;
exynos_pcie_clear_irq_pulse(exynos_pcie);
return IRQ_HANDLED;
}
static irqreturn_t exynos_pcie_msi_irq_handler(int irq, void *arg)
{
struct exynos_pcie *exynos_pcie = arg;
struct pcie_port *pp = &exynos_pcie->pp;
return dw_handle_msi_irq(pp);
}
static void exynos_pcie_msi_init(struct exynos_pcie *exynos_pcie)
{
struct pcie_port *pp = &exynos_pcie->pp;
u32 val;
dw_pcie_msi_init(pp);
/* enable MSI interrupt */
val = exynos_elb_readl(exynos_pcie, PCIE_IRQ_EN_LEVEL);
val |= IRQ_MSI_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_IRQ_EN_LEVEL);
}
static void exynos_pcie_enable_interrupts(struct exynos_pcie *exynos_pcie)
{
exynos_pcie_enable_irq_pulse(exynos_pcie);
if (IS_ENABLED(CONFIG_PCI_MSI))
exynos_pcie_msi_init(exynos_pcie);
}
static u32 exynos_pcie_readl_rc(struct pcie_port *pp, u32 reg)
{
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
u32 val;
exynos_pcie_sideband_dbi_r_mode(exynos_pcie, true);
val = readl(pp->dbi_base + reg);
exynos_pcie_sideband_dbi_r_mode(exynos_pcie, false);
return val;
}
static void exynos_pcie_writel_rc(struct pcie_port *pp, u32 reg, u32 val)
{
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
exynos_pcie_sideband_dbi_w_mode(exynos_pcie, true);
writel(val, pp->dbi_base + reg);
exynos_pcie_sideband_dbi_w_mode(exynos_pcie, false);
}
static int exynos_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
u32 *val)
{
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
int ret;
exynos_pcie_sideband_dbi_r_mode(exynos_pcie, true);
ret = dw_pcie_cfg_read(pp->dbi_base + where, size, val);
exynos_pcie_sideband_dbi_r_mode(exynos_pcie, false);
return ret;
}
static int exynos_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
u32 val)
{
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
int ret;
exynos_pcie_sideband_dbi_w_mode(exynos_pcie, true);
ret = dw_pcie_cfg_write(pp->dbi_base + where, size, val);
exynos_pcie_sideband_dbi_w_mode(exynos_pcie, false);
return ret;
}
static int exynos_pcie_link_up(struct pcie_port *pp)
{
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
u32 val;
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_RDLH_LINKUP);
if (val == PCIE_ELBI_LTSSM_ENABLE)
return 1;
return 0;
}
static void exynos_pcie_host_init(struct pcie_port *pp)
{
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
exynos_pcie_establish_link(exynos_pcie);
exynos_pcie_enable_interrupts(exynos_pcie);
}
static struct pcie_host_ops exynos_pcie_host_ops = {
.readl_rc = exynos_pcie_readl_rc,
.writel_rc = exynos_pcie_writel_rc,
.rd_own_conf = exynos_pcie_rd_own_conf,
.wr_own_conf = exynos_pcie_wr_own_conf,
.link_up = exynos_pcie_link_up,
.host_init = exynos_pcie_host_init,
};
static int __init exynos_add_pcie_port(struct exynos_pcie *exynos_pcie,
struct platform_device *pdev)
{
struct pcie_port *pp = &exynos_pcie->pp;
struct device *dev = pp->dev;
int ret;
pp->irq = platform_get_irq(pdev, 1);
if (!pp->irq) {
dev_err(dev, "failed to get irq\n");
return -ENODEV;
}
ret = devm_request_irq(dev, pp->irq, exynos_pcie_irq_handler,
IRQF_SHARED, "exynos-pcie", exynos_pcie);
if (ret) {
dev_err(dev, "failed to request irq\n");
return ret;
}
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->msi_irq = platform_get_irq(pdev, 0);
if (!pp->msi_irq) {
dev_err(dev, "failed to get msi irq\n");
return -ENODEV;
}
ret = devm_request_irq(dev, pp->msi_irq,
exynos_pcie_msi_irq_handler,
IRQF_SHARED | IRQF_NO_THREAD,
"exynos-pcie", exynos_pcie);
if (ret) {
dev_err(dev, "failed to request msi irq\n");
return ret;
}
}
pp->root_bus_nr = -1;
pp->ops = &exynos_pcie_host_ops;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
}
return 0;
}
static int __init exynos_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct exynos_pcie *exynos_pcie;
struct pcie_port *pp;
struct device_node *np = dev->of_node;
struct resource *elbi_base;
struct resource *phy_base;
struct resource *block_base;
int ret;
exynos_pcie = devm_kzalloc(dev, sizeof(*exynos_pcie), GFP_KERNEL);
if (!exynos_pcie)
return -ENOMEM;
pp = &exynos_pcie->pp;
pp->dev = dev;
exynos_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0);
exynos_pcie->clk = devm_clk_get(dev, "pcie");
if (IS_ERR(exynos_pcie->clk)) {
dev_err(dev, "Failed to get pcie rc clock\n");
return PTR_ERR(exynos_pcie->clk);
}
ret = clk_prepare_enable(exynos_pcie->clk);
if (ret)
return ret;
exynos_pcie->bus_clk = devm_clk_get(dev, "pcie_bus");
if (IS_ERR(exynos_pcie->bus_clk)) {
dev_err(dev, "Failed to get pcie bus clock\n");
ret = PTR_ERR(exynos_pcie->bus_clk);
goto fail_clk;
}
ret = clk_prepare_enable(exynos_pcie->bus_clk);
if (ret)
goto fail_clk;
elbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
exynos_pcie->elbi_base = devm_ioremap_resource(dev, elbi_base);
if (IS_ERR(exynos_pcie->elbi_base)) {
ret = PTR_ERR(exynos_pcie->elbi_base);
goto fail_bus_clk;
}
phy_base = platform_get_resource(pdev, IORESOURCE_MEM, 1);
exynos_pcie->phy_base = devm_ioremap_resource(dev, phy_base);
if (IS_ERR(exynos_pcie->phy_base)) {
ret = PTR_ERR(exynos_pcie->phy_base);
goto fail_bus_clk;
}
block_base = platform_get_resource(pdev, IORESOURCE_MEM, 2);
exynos_pcie->block_base = devm_ioremap_resource(dev, block_base);
if (IS_ERR(exynos_pcie->block_base)) {
ret = PTR_ERR(exynos_pcie->block_base);
goto fail_bus_clk;
}
ret = exynos_add_pcie_port(exynos_pcie, pdev);
if (ret < 0)
goto fail_bus_clk;
platform_set_drvdata(pdev, exynos_pcie);
return 0;
fail_bus_clk:
clk_disable_unprepare(exynos_pcie->bus_clk);
fail_clk:
clk_disable_unprepare(exynos_pcie->clk);
return ret;
}
static int __exit exynos_pcie_remove(struct platform_device *pdev)
{
struct exynos_pcie *exynos_pcie = platform_get_drvdata(pdev);
clk_disable_unprepare(exynos_pcie->bus_clk);
clk_disable_unprepare(exynos_pcie->clk);
return 0;
}
static const struct of_device_id exynos_pcie_of_match[] = {
{ .compatible = "samsung,exynos5440-pcie", },
{},
};
static struct platform_driver exynos_pcie_driver = {
.remove = __exit_p(exynos_pcie_remove),
.driver = {
.name = "exynos-pcie",
.of_match_table = exynos_pcie_of_match,
},
};
/* Exynos PCIe driver does not allow module unload */
static int __init exynos_pcie_init(void)
{
return platform_driver_probe(&exynos_pcie_driver, exynos_pcie_probe);
}
subsys_initcall(exynos_pcie_init);

View File

@ -145,7 +145,9 @@ int pci_host_common_probe(struct platform_device *pdev,
return -ENODEV;
}
#ifdef CONFIG_ARM
pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
#endif
/*
* We insert PCI resources into the iomem_resource and

View File

@ -130,7 +130,8 @@ union pci_version {
*/
union win_slot_encoding {
struct {
u32 func:8;
u32 dev:5;
u32 func:3;
u32 reserved:24;
} bits;
u32 slot;
@ -485,7 +486,8 @@ static u32 devfn_to_wslot(int devfn)
union win_slot_encoding wslot;
wslot.slot = 0;
wslot.bits.func = PCI_SLOT(devfn) | (PCI_FUNC(devfn) << 5);
wslot.bits.dev = PCI_SLOT(devfn);
wslot.bits.func = PCI_FUNC(devfn);
return wslot.slot;
}
@ -503,7 +505,7 @@ static int wslot_to_devfn(u32 wslot)
union win_slot_encoding slot_no;
slot_no.slot = wslot;
return PCI_DEVFN(0, slot_no.bits.func);
return PCI_DEVFN(slot_no.bits.dev, slot_no.bits.func);
}
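The new encoding packs the PCI slot into a 5-bit dev field and the function into a 3-bit func field, so a wslot value now round-trips cleanly through devfn. A minimal user-space sketch of that round trip; the PCI_DEVFN/PCI_SLOT/PCI_FUNC macros are restated here, and the bitfield layout assumes the usual little-endian ABI:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)		(((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)		((devfn) & 0x07)

union win_slot_encoding {
	struct {
		uint32_t dev:5;
		uint32_t func:3;
		uint32_t reserved:24;
	} bits;
	uint32_t slot;
};

int main(void)
{
	int devfn = PCI_DEVFN(3, 5);	/* device 3, function 5 */
	union win_slot_encoding wslot = { .slot = 0 };

	wslot.bits.dev = PCI_SLOT(devfn);
	wslot.bits.func = PCI_FUNC(devfn);
	assert(PCI_DEVFN(wslot.bits.dev, wslot.bits.func) == devfn);
	printf("devfn 0x%02x <-> wslot 0x%08x\n", devfn, wslot.slot);
	return 0;
}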
/*
@ -1315,6 +1317,18 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
get_pcichild(hpdev, hv_pcidev_ref_initial);
get_pcichild(hpdev, hv_pcidev_ref_childlist);
spin_lock_irqsave(&hbus->device_list_lock, flags);
/*
* When a device is being added to the bus, we set the PCI domain
* number to be the device serial number, which is non-zero and
* unique on the same VM. The serial numbers start with 1, and
* increase by 1 for each device. Device names that include the
* domain can therefore be shorter than names based on the bus
* instance UUID.
* Only the first device serial number is used for domain, so the
* domain number will not change after the first device is added.
*/
if (list_empty(&hbus->children))
hbus->sysdata.domain = desc->ser;
list_add_tail(&hpdev->list_entry, &hbus->children);
spin_unlock_irqrestore(&hbus->device_list_lock, flags);
return hpdev;

View File

@ -133,6 +133,12 @@ struct mvebu_pcie {
int nports;
};
struct mvebu_pcie_window {
phys_addr_t base;
phys_addr_t remap;
size_t size;
};
/* Structure representing one PCIe interface */
struct mvebu_pcie_port {
char *name;
@ -150,10 +156,8 @@ struct mvebu_pcie_port {
struct mvebu_sw_pci_bridge bridge;
struct device_node *dn;
struct mvebu_pcie *pcie;
phys_addr_t memwin_base;
size_t memwin_size;
phys_addr_t iowin_base;
size_t iowin_size;
struct mvebu_pcie_window memwin;
struct mvebu_pcie_window iowin;
u32 saved_pcie_stat;
};
@ -379,23 +383,45 @@ static void mvebu_pcie_add_windows(struct mvebu_pcie_port *port,
}
}
static void mvebu_pcie_set_window(struct mvebu_pcie_port *port,
unsigned int target, unsigned int attribute,
const struct mvebu_pcie_window *desired,
struct mvebu_pcie_window *cur)
{
if (desired->base == cur->base && desired->remap == cur->remap &&
desired->size == cur->size)
return;
if (cur->size != 0) {
mvebu_pcie_del_windows(port, cur->base, cur->size);
cur->size = 0;
cur->base = 0;
/*
* If something tries to change the window while it is enabled,
* the change will not be done atomically; that would be
* difficult to do in the general case.
*/
}
if (desired->size == 0)
return;
mvebu_pcie_add_windows(port, target, attribute, desired->base,
desired->size, desired->remap);
*cur = *desired;
}
static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
{
phys_addr_t iobase;
struct mvebu_pcie_window desired = {};
/* Are the new iobase/iolimit values invalid? */
if (port->bridge.iolimit < port->bridge.iobase ||
port->bridge.iolimitupper < port->bridge.iobaseupper ||
!(port->bridge.command & PCI_COMMAND_IO)) {
/* If a window was configured, remove it */
if (port->iowin_base) {
mvebu_pcie_del_windows(port, port->iowin_base,
port->iowin_size);
port->iowin_base = 0;
port->iowin_size = 0;
}
mvebu_pcie_set_window(port, port->io_target, port->io_attr,
&desired, &port->iowin);
return;
}
@ -412,32 +438,27 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
* specifications. iobase is the bus address, port->iowin_base
* is the CPU address.
*/
iobase = ((port->bridge.iobase & 0xF0) << 8) |
(port->bridge.iobaseupper << 16);
port->iowin_base = port->pcie->io.start + iobase;
port->iowin_size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) |
(port->bridge.iolimitupper << 16)) -
iobase) + 1;
desired.remap = ((port->bridge.iobase & 0xF0) << 8) |
(port->bridge.iobaseupper << 16);
desired.base = port->pcie->io.start + desired.remap;
desired.size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) |
(port->bridge.iolimitupper << 16)) -
desired.remap) +
1;
mvebu_pcie_add_windows(port, port->io_target, port->io_attr,
port->iowin_base, port->iowin_size,
iobase);
mvebu_pcie_set_window(port, port->io_target, port->io_attr, &desired,
&port->iowin);
}
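The remap/base/size arithmetic follows standard PCI-to-PCI bridge I/O decoding: bits 7:4 of the 8-bit base/limit registers supply address bits 15:12, and the 16-bit upper registers supply bits 31:16. A worked sketch with hypothetical bridge values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* hypothetical bridge values: base 0x10, limit 0x20, uppers 0 */
	uint8_t iobase = 0x10, iolimit = 0x20;
	uint16_t iobaseupper = 0, iolimitupper = 0;

	uint32_t remap = ((iobase & 0xF0) << 8) | ((uint32_t)iobaseupper << 16);
	uint32_t size = ((0xFFF | ((iolimit & 0xF0) << 8) |
			  ((uint32_t)iolimitupper << 16)) - remap) + 1;

	/* bus window 0x1000..0x2fff: remap=0x1000 size=0x2000 */
	printf("remap=0x%x size=0x%x\n", remap, size);
	return 0;
}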
static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
{
struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP};
/* Are the new membase/memlimit values invalid? */
if (port->bridge.memlimit < port->bridge.membase ||
!(port->bridge.command & PCI_COMMAND_MEMORY)) {
/* If a window was configured, remove it */
if (port->memwin_base) {
mvebu_pcie_del_windows(port, port->memwin_base,
port->memwin_size);
port->memwin_base = 0;
port->memwin_size = 0;
}
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr,
&desired, &port->memwin);
return;
}
@ -447,14 +468,12 @@ static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
* window to setup, according to the PCI-to-PCI bridge
* specifications.
*/
port->memwin_base = ((port->bridge.membase & 0xFFF0) << 16);
port->memwin_size =
(((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) -
port->memwin_base + 1;
desired.base = ((port->bridge.membase & 0xFFF0) << 16);
desired.size = (((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) -
desired.base + 1;
mvebu_pcie_add_windows(port, port->mem_target, port->mem_attr,
port->memwin_base, port->memwin_size,
MVEBU_MBUS_NO_REMAP);
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired,
&port->memwin);
}
/*
@ -1162,7 +1181,7 @@ static int mvebu_pcie_powerup(struct mvebu_pcie_port *port)
return ret;
if (port->reset_gpio) {
u32 reset_udelay = 20000;
u32 reset_udelay = PCI_PM_D3COLD_WAIT * 1000;
of_property_read_u32(port->dn, "reset-delay-us",
&reset_udelay);

View File

@ -36,7 +36,7 @@ struct thunder_pem_pci {
static int thunder_pem_bridge_read(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *val)
{
u64 read_val;
u64 read_val, tmp_val;
struct pci_config_window *cfg = bus->sysdata;
struct thunder_pem_pci *pem_pci = (struct thunder_pem_pci *)cfg->priv;
@ -65,13 +65,28 @@ static int thunder_pem_bridge_read(struct pci_bus *bus, unsigned int devfn,
read_val |= 0x00007000; /* Skip MSI CAP */
break;
case 0x70: /* Express Cap */
/* PME interrupt on vector 2*/
read_val |= (2u << 25);
/*
* On T88 the PME interrupt vector reads as 0, so change it
* to vector 2; otherwise leave it alone.
*/
if (!(read_val & (0x1f << 25)))
read_val |= (2u << 25);
break;
case 0xb0: /* MSI-X Cap */
/* TableSize=4, Next Cap is EA */
/* TableSize=2 or 4, Next Cap is EA */
read_val &= 0xc00000ff;
read_val |= 0x0003bc00;
/*
* If the Express Cap (0x70) raw PME vector reads as 0, we are
* on T88 and TableSize is reported as 4; otherwise TableSize
* is 2.
*/
writeq(0x70, pem_pci->pem_reg_base + PEM_CFG_RD);
tmp_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD);
tmp_val >>= 32;
if (!(tmp_val & (0x1f << 25)))
read_val |= 0x0003bc00;
else
read_val |= 0x0001bc00;
break;
case 0xb4:
/* Table offset=0, BIR=0 */
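The synthesized dword at 0xb0 carries MSI-X Message Control in its upper 16 bits, where Table Size is encoded as N-1, so 0x0003bc00 reports 4 entries and 0x0001bc00 reports 2. A quick decode sketch:

#include <stdint.h>
#include <stdio.h>

static unsigned int msix_table_size(uint32_t cap_dword)
{
	/* Message Control is the upper 16 bits of the dword at the
	 * capability base; Table Size is encoded as N-1 in bits 10:0.
	 */
	return ((cap_dword >> 16) & 0x7ff) + 1;
}

int main(void)
{
	printf("T88:   %u entries\n", msix_table_size(0x0003bc00));	/* 4 */
	printf("other: %u entries\n", msix_table_size(0x0001bc00));	/* 2 */
	return 0;
}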

View File

@ -124,7 +124,7 @@ static int versatile_pci_probe(struct platform_device *pdev)
int ret, i, myslot = -1;
u32 val;
void __iomem *local_pci_cfg_base;
struct pci_bus *bus;
struct pci_bus *bus, *child;
LIST_HEAD(pci_res);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@ -204,6 +204,8 @@ static int versatile_pci_probe(struct platform_device *pdev)
pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
pci_assign_unassigned_bus_resources(bus);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
pci_bus_add_devices(bus);
return 0;

View File

@ -246,14 +246,11 @@ static int xgene_pcie_ecam_init(struct pci_config_window *cfg, u32 ipversion)
ret = xgene_get_csr_resource(adev, &csr);
if (ret) {
dev_err(dev, "can't get CSR resource\n");
kfree(port);
return ret;
}
port->csr_base = devm_ioremap_resource(dev, &csr);
if (IS_ERR(port->csr_base)) {
kfree(port);
return -ENOMEM;
}
if (IS_ERR(port->csr_base))
return PTR_ERR(port->csr_base);
port->cfg_base = cfg->win;
port->version = ipversion;
@ -638,7 +635,7 @@ static int xgene_pcie_probe_bridge(struct platform_device *pdev)
struct device_node *dn = dev->of_node;
struct xgene_pcie_port *port;
resource_size_t iobase = 0;
struct pci_bus *bus;
struct pci_bus *bus, *child;
int ret;
LIST_HEAD(res);
@ -681,6 +678,8 @@ static int xgene_pcie_probe_bridge(struct platform_device *pdev)
pci_scan_child_bus(bus);
pci_assign_unassigned_bus_resources(bus);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
pci_bus_add_devices(bus);
return 0;

View File

@ -65,7 +65,7 @@
(((TLP_REQ_ID(pcie->root_bus_nr, RP_DEVFN)) << 16) | (tag << 8) | (be))
#define TLP_CFG_DW2(bus, devfn, offset) \
(((bus) << 24) | ((devfn) << 16) | (offset))
#define TLP_COMP_STATUS(s) (((s) >> 12) & 7)
#define TLP_COMP_STATUS(s) (((s) >> 13) & 7)
#define TLP_HDR_SIZE 3
#define TLP_LOOP 500
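In a completion TLP, header dword 1 holds Byte Count in bits 11:0, BCM in bit 12, and Completion Status in bits 15:13, so the old shift of 12 landed on the BCM bit instead of the status field. A small sketch of the corrected macro (status 0b001 is Unsupported Request):

#include <stdint.h>
#include <stdio.h>

#define TLP_COMP_STATUS(s)	(((s) >> 13) & 7)

int main(void)
{
	uint32_t dw1 = 1u << 13;	/* status 0b001: Unsupported Request */

	printf("status = %u\n", TLP_COMP_STATUS(dw1));	/* 1 */
	return 0;
}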

View File

@ -1,86 +0,0 @@
/*
* Synopsys Designware PCIe host controller driver
*
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Jingoo Han <jg1.han@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _PCIE_DESIGNWARE_H
#define _PCIE_DESIGNWARE_H
/*
* The maximum number of MSI IRQs is 256 per controller, but keep
* it at 32 for now; we will probably never need more than 32. If
* needed, increase it in multiples of 32.
*/
#define MAX_MSI_IRQS 32
#define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32)
struct pcie_port {
struct device *dev;
u8 root_bus_nr;
void __iomem *dbi_base;
u64 cfg0_base;
void __iomem *va_cfg0_base;
u32 cfg0_size;
u64 cfg1_base;
void __iomem *va_cfg1_base;
u32 cfg1_size;
resource_size_t io_base;
phys_addr_t io_bus_addr;
u32 io_size;
u64 mem_base;
phys_addr_t mem_bus_addr;
u32 mem_size;
struct resource *cfg;
struct resource *io;
struct resource *mem;
struct resource *busn;
int irq;
u32 lanes;
u32 num_viewport;
struct pcie_host_ops *ops;
int msi_irq;
struct irq_domain *irq_domain;
unsigned long msi_data;
u8 iatu_unroll_enabled;
DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
};
struct pcie_host_ops {
u32 (*readl_rc)(struct pcie_port *pp, u32 reg);
void (*writel_rc)(struct pcie_port *pp, u32 reg, u32 val);
int (*rd_own_conf)(struct pcie_port *pp, int where, int size, u32 *val);
int (*wr_own_conf)(struct pcie_port *pp, int where, int size, u32 val);
int (*rd_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 *val);
int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 val);
int (*link_up)(struct pcie_port *pp);
void (*host_init)(struct pcie_port *pp);
void (*msi_set_irq)(struct pcie_port *pp, int irq);
void (*msi_clear_irq)(struct pcie_port *pp, int irq);
phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
u32 (*get_msi_data)(struct pcie_port *pp, int pos);
void (*scan_bus)(struct pcie_port *pp);
int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip);
};
u32 dw_pcie_readl_rc(struct pcie_port *pp, u32 reg);
void dw_pcie_writel_rc(struct pcie_port *pp, u32 reg, u32 val);
int dw_pcie_cfg_read(void __iomem *addr, int size, u32 *val);
int dw_pcie_cfg_write(void __iomem *addr, int size, u32 val);
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
void dw_pcie_msi_init(struct pcie_port *pp);
int dw_pcie_wait_for_link(struct pcie_port *pp);
int dw_pcie_link_up(struct pcie_port *pp);
void dw_pcie_setup_rc(struct pcie_port *pp);
int dw_pcie_host_init(struct pcie_port *pp);
#endif /* _PCIE_DESIGNWARE_H */
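The deleted header above spells out the contract a DesignWare-based host driver implements: fill in a struct pcie_port, point pp->ops at a pcie_host_ops, and call dw_pcie_host_init(), as the Exynos driver earlier in this commit does. A minimal hedged sketch of that wiring; the my_* names are purely hypothetical:

#include "pcie-designware.h"

static int my_pcie_link_up(struct pcie_port *pp)
{
	/* platform-specific link-status check goes here */
	return 1;
}

static struct pcie_host_ops my_pcie_host_ops = {
	.link_up = my_pcie_link_up,
};

static int my_add_pcie_port(struct pcie_port *pp)
{
	pp->root_bus_nr = -1;
	pp->ops = &my_pcie_host_ops;
	return dw_pcie_host_init(pp);
}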

View File

@ -47,7 +47,6 @@ MODULE_DEVICE_TABLE(of, iproc_pcie_of_match_table);
static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
const struct of_device_id *of_id;
struct iproc_pcie *pcie;
struct device_node *np = dev->of_node;
struct resource reg;
@ -55,16 +54,12 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
LIST_HEAD(res);
int ret;
of_id = of_match_device(iproc_pcie_of_match_table, dev);
if (!of_id)
return -EINVAL;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pcie->dev = dev;
pcie->type = (enum iproc_pcie_type)of_id->data;
pcie->type = (enum iproc_pcie_type) of_device_get_match_data(dev);
ret = of_address_to_resource(np, 0, &reg);
if (ret < 0) {

View File

@ -1205,7 +1205,7 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res)
struct device *dev;
int ret;
void *sysdata;
struct pci_bus *bus;
struct pci_bus *bus, *child;
dev = pcie->dev;
@ -1278,6 +1278,9 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res)
if (pcie->map_irq)
pci_fixup_irqs(pci_common_swizzle, pcie->map_irq);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
pci_bus_add_devices(bus);
return 0;

View File

@ -1125,7 +1125,6 @@ static int rcar_pcie_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct rcar_pcie *pcie;
unsigned int data;
const struct of_device_id *of_id;
int err;
int (*hw_init_fn)(struct rcar_pcie *);
@ -1149,11 +1148,6 @@ static int rcar_pcie_probe(struct platform_device *pdev)
if (err)
return err;
of_id = of_match_device(rcar_pcie_of_match, dev);
if (!of_id || !of_id->data)
return -EINVAL;
hw_init_fn = of_id->data;
pm_runtime_enable(dev);
err = pm_runtime_get_sync(dev);
if (err < 0) {
@ -1162,10 +1156,11 @@ static int rcar_pcie_probe(struct platform_device *pdev)
}
/* Failure to get a link might just be that no cards are inserted */
hw_init_fn = of_device_get_match_data(dev);
err = hw_init_fn(pcie);
if (err) {
dev_info(dev, "PCIe link down\n");
err = 0;
err = -ENODEV;
goto err_pm_put;
}

View File

@ -20,6 +20,7 @@
#include <linux/gpio/consumer.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
@ -55,6 +56,10 @@
#define PCIE_CLIENT_MODE_RC HIWORD_UPDATE_BIT(0x0040)
#define PCIE_CLIENT_GEN_SEL_1 HIWORD_UPDATE(0x0080, 0)
#define PCIE_CLIENT_GEN_SEL_2 HIWORD_UPDATE_BIT(0x0080)
#define PCIE_CLIENT_DEBUG_OUT_0 (PCIE_CLIENT_BASE + 0x3c)
#define PCIE_CLIENT_DEBUG_LTSSM_MASK GENMASK(5, 0)
#define PCIE_CLIENT_DEBUG_LTSSM_L1 0x18
#define PCIE_CLIENT_DEBUG_LTSSM_L2 0x19
#define PCIE_CLIENT_BASIC_STATUS1 (PCIE_CLIENT_BASE + 0x48)
#define PCIE_CLIENT_LINK_STATUS_UP 0x00300000
#define PCIE_CLIENT_LINK_STATUS_MASK 0x00300000
@ -120,6 +125,7 @@
#define PCIE_CORE_INT_CT BIT(11)
#define PCIE_CORE_INT_UTC BIT(18)
#define PCIE_CORE_INT_MMVC BIT(19)
#define PCIE_CORE_CONFIG_VENDOR (PCIE_CORE_CTRL_MGMT_BASE + 0x44)
#define PCIE_CORE_INT_MASK (PCIE_CORE_CTRL_MGMT_BASE + 0x210)
#define PCIE_RC_BAR_CONF (PCIE_CORE_CTRL_MGMT_BASE + 0x300)
@ -133,13 +139,14 @@
PCIE_CORE_INT_MMVC)
#define PCIE_RC_CONFIG_BASE 0xa00000
#define PCIE_RC_CONFIG_VENDOR (PCIE_RC_CONFIG_BASE + 0x00)
#define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08)
#define PCIE_RC_CONFIG_SCC_SHIFT 16
#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4)
#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18
#define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff
#define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26
#define PCIE_RC_CONFIG_LINK_CAP (PCIE_RC_CONFIG_BASE + 0xcc)
#define PCIE_RC_CONFIG_LINK_CAP_L0S BIT(10)
#define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0)
#define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c)
#define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
@ -167,9 +174,11 @@
#define IB_ROOT_PORT_REG_SIZE_SHIFT 3
#define AXI_WRAPPER_IO_WRITE 0x6
#define AXI_WRAPPER_MEM_WRITE 0x2
#define AXI_WRAPPER_NOR_MSG 0xc
#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3
#define MIN_AXI_ADDR_BITS_PASSED 8
#define PCIE_RC_SEND_PME_OFF 0x11960
#define ROCKCHIP_VENDOR_ID 0x1d87
#define PCIE_ECAM_BUS(x) (((x) & 0xff) << 20)
#define PCIE_ECAM_DEV(x) (((x) & 0x1f) << 15)
@ -178,6 +187,12 @@
#define PCIE_ECAM_ADDR(bus, dev, func, reg) \
(PCIE_ECAM_BUS(bus) | PCIE_ECAM_DEV(dev) | \
PCIE_ECAM_FUNC(func) | PCIE_ECAM_REG(reg))
#define PCIE_LINK_IS_L2(x) \
(((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
#define PCIE_LINK_UP(x) \
(((x) & PCIE_CLIENT_LINK_STATUS_MASK) == PCIE_CLIENT_LINK_STATUS_UP)
#define PCIE_LINK_IS_GEN2(x) \
(((x) & PCIE_CORE_PL_CONF_SPEED_MASK) == PCIE_CORE_PL_CONF_SPEED_5G)
#define RC_REGION_0_ADDR_TRANS_H 0x00000000
#define RC_REGION_0_ADDR_TRANS_L 0x00000000
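PCIE_ECAM_ADDR composes the standard ECAM offset from bus/device/function/register fields. A worked sketch; the PCIE_ECAM_FUNC/PCIE_ECAM_REG defines are not shown in this hunk and are reconstructed here on the assumption that they follow the standard ECAM layout (function << 12, register in bits 11:0):

#include <stdint.h>
#include <stdio.h>

#define PCIE_ECAM_BUS(x)	(((x) & 0xff) << 20)
#define PCIE_ECAM_DEV(x)	(((x) & 0x1f) << 15)
#define PCIE_ECAM_FUNC(x)	(((x) & 0x7) << 12)	/* assumed layout */
#define PCIE_ECAM_REG(x)	((x) & 0xfff)		/* assumed layout */
#define PCIE_ECAM_ADDR(bus, dev, func, reg) \
	(PCIE_ECAM_BUS(bus) | PCIE_ECAM_DEV(dev) | \
	 PCIE_ECAM_FUNC(func) | PCIE_ECAM_REG(reg))

int main(void)
{
	/* bus 1, device 0, function 0, config register 0 */
	printf("0x%08x\n", (uint32_t)PCIE_ECAM_ADDR(1, 0, 0, 0));	/* 0x00100000 */
	return 0;
}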
@ -211,7 +226,9 @@ struct rockchip_pcie {
u32 io_size;
int offset;
phys_addr_t io_bus_addr;
void __iomem *msg_region;
u32 mem_size;
phys_addr_t msg_bus_addr;
phys_addr_t mem_bus_addr;
};
@ -449,7 +466,6 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
struct device *dev = rockchip->dev;
int err;
u32 status;
unsigned long timeout;
gpiod_set_value(rockchip->ep_gpio, 0);
@ -590,23 +606,12 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
gpiod_set_value(rockchip->ep_gpio, 1);
/* 500ms timeout value should be enough for Gen1/2 training */
timeout = jiffies + msecs_to_jiffies(500);
for (;;) {
status = rockchip_pcie_read(rockchip,
PCIE_CLIENT_BASIC_STATUS1);
if ((status & PCIE_CLIENT_LINK_STATUS_MASK) ==
PCIE_CLIENT_LINK_STATUS_UP) {
dev_dbg(dev, "PCIe link training gen1 pass!\n");
break;
}
if (time_after(jiffies, timeout)) {
dev_err(dev, "PCIe link training gen1 timeout!\n");
return -ETIMEDOUT;
}
msleep(20);
err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1,
status, PCIE_LINK_UP(status), 20,
500 * USEC_PER_MSEC);
if (err) {
dev_err(dev, "PCIe link training gen1 timeout!\n");
return -ETIMEDOUT;
}
if (rockchip->link_gen == 2) {
@ -618,22 +623,11 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
status |= PCI_EXP_LNKCTL_RL;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
timeout = jiffies + msecs_to_jiffies(500);
for (;;) {
status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
if ((status & PCIE_CORE_PL_CONF_SPEED_MASK) ==
PCIE_CORE_PL_CONF_SPEED_5G) {
dev_dbg(dev, "PCIe link training gen2 pass!\n");
break;
}
if (time_after(jiffies, timeout)) {
dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n");
break;
}
msleep(20);
}
err = readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL,
status, PCIE_LINK_IS_GEN2(status), 20,
500 * USEC_PER_MSEC);
if (err)
dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n");
}
/* Check the final link width from negotiated lane counter from MGMT */
@ -643,7 +637,7 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
dev_dbg(dev, "current link width is x%d\n", status);
rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
PCIE_RC_CONFIG_VENDOR);
PCIE_CORE_CONFIG_VENDOR);
rockchip_pcie_write(rockchip,
PCI_CLASS_BRIDGE_PCI << PCIE_RC_CONFIG_SCC_SHIFT,
PCIE_RC_CONFIG_RID_CCR);
@ -653,6 +647,13 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
status &= ~PCIE_RC_CONFIG_THP_CAP_NEXT_MASK;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_THP_CAP);
/* Clear L0s from RC's link cap */
if (of_property_read_bool(dev->of_node, "aspm-no-l0s")) {
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LINK_CAP);
status &= ~PCIE_RC_CONFIG_LINK_CAP_L0S;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LINK_CAP);
}
rockchip_pcie_write(rockchip, 0x0, PCIE_RC_BAR_CONF);
rockchip_pcie_write(rockchip,
@ -1186,6 +1187,85 @@ static int rockchip_cfg_atu(struct rockchip_pcie *rockchip)
}
}
/* assign message regions */
rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1 + offset,
AXI_WRAPPER_NOR_MSG,
20 - 1, 0, 0);
rockchip->msg_bus_addr = rockchip->mem_bus_addr +
((reg_no + offset) << 20);
return err;
}
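The "20 - 1" passed address bits appear to size the message region at 2^20 bytes (1 MiB), which matches the SZ_1M devm_ioremap() of msg_bus_addr later in probe. A worked sketch of the address arithmetic with hypothetical reg_no/offset/mem_bus_addr values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t reg_no = 2, offset = 1;	/* hypothetical */
	uint64_t mem_bus_addr = 0xf8000000;	/* hypothetical */

	/* one 2^20-byte (1 MiB) region past the last memory region */
	uint64_t msg_bus_addr = mem_bus_addr +
				((uint64_t)(reg_no + offset) << 20);

	printf("msg_bus_addr = 0x%llx\n",
	       (unsigned long long)msg_bus_addr);	/* 0xf8300000 */
	return 0;
}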
static int rockchip_pcie_wait_l2(struct rockchip_pcie *rockchip)
{
u32 value;
int err;
/* send PME_TURN_OFF message */
writel(0x0, rockchip->msg_region + PCIE_RC_SEND_PME_OFF);
/* read LTSSM and wait for falling into L2 link state */
err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_DEBUG_OUT_0,
value, PCIE_LINK_IS_L2(value), 20,
jiffies_to_usecs(5 * HZ));
if (err) {
dev_err(rockchip->dev, "PCIe link enter L2 timeout!\n");
return err;
}
return 0;
}
static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev)
{
struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
int ret;
/* disable core and cli int since we don't need to ack PME_ACK */
rockchip_pcie_write(rockchip, (PCIE_CLIENT_INT_CLI << 16) |
PCIE_CLIENT_INT_CLI, PCIE_CLIENT_INT_MASK);
rockchip_pcie_write(rockchip, (u32)PCIE_CORE_INT, PCIE_CORE_INT_MASK);
ret = rockchip_pcie_wait_l2(rockchip);
if (ret) {
rockchip_pcie_enable_interrupts(rockchip);
return ret;
}
phy_power_off(rockchip->phy);
phy_exit(rockchip->phy);
clk_disable_unprepare(rockchip->clk_pcie_pm);
clk_disable_unprepare(rockchip->hclk_pcie);
clk_disable_unprepare(rockchip->aclk_perf_pcie);
clk_disable_unprepare(rockchip->aclk_pcie);
return ret;
}
static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev)
{
struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
int err;
clk_prepare_enable(rockchip->clk_pcie_pm);
clk_prepare_enable(rockchip->hclk_pcie);
clk_prepare_enable(rockchip->aclk_perf_pcie);
clk_prepare_enable(rockchip->aclk_pcie);
err = rockchip_pcie_init_port(rockchip);
if (err)
return err;
err = rockchip_cfg_atu(rockchip);
if (err)
return err;
/* Need this to enter L1 again */
rockchip_pcie_update_txcredit_mui(rockchip);
rockchip_pcie_enable_interrupts(rockchip);
return 0;
}
@ -1209,6 +1289,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
if (!rockchip)
return -ENOMEM;
platform_set_drvdata(pdev, rockchip);
rockchip->dev = dev;
err = rockchip_pcie_parse_dt(rockchip);
@ -1262,7 +1343,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
err = devm_request_pci_bus_resources(dev, &res);
if (err)
goto err_vpcie;
goto err_free_res;
/* Get the I/O and memory ranges from DT */
resource_list_for_each_entry(win, &res) {
@ -1295,11 +1376,19 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
err = rockchip_cfg_atu(rockchip);
if (err)
goto err_vpcie;
goto err_free_res;
rockchip->msg_region = devm_ioremap(rockchip->dev,
rockchip->msg_bus_addr, SZ_1M);
if (!rockchip->msg_region) {
err = -ENOMEM;
goto err_free_res;
}
bus = pci_scan_root_bus(&pdev->dev, 0, &rockchip_pcie_ops, rockchip, &res);
if (!bus) {
err = -ENOMEM;
goto err_vpcie;
goto err_free_res;
}
pci_bus_size_bridges(bus);
@ -1310,6 +1399,8 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
pci_bus_add_devices(bus);
return err;
err_free_res:
pci_free_resource_list(&res);
err_vpcie:
if (!IS_ERR(rockchip->vpcie3v3))
regulator_disable(rockchip->vpcie3v3);
@ -1329,6 +1420,11 @@ err_aclk_pcie:
return err;
}
static const struct dev_pm_ops rockchip_pcie_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(rockchip_pcie_suspend_noirq,
rockchip_pcie_resume_noirq)
};
static const struct of_device_id rockchip_pcie_of_match[] = {
{ .compatible = "rockchip,rk3399-pcie", },
{}
@ -1338,6 +1434,7 @@ static struct platform_driver rockchip_pcie_driver = {
.driver = {
.name = "rockchip-pcie",
.of_match_table = rockchip_pcie_of_match,
.pm = &rockchip_pcie_pm_ops,
},
.probe = rockchip_pcie_probe,

View File

@ -62,21 +62,9 @@
#define CFG_ENABLE_PM_MSG_FWD BIT(1)
#define CFG_ENABLE_INT_MSG_FWD BIT(2)
#define CFG_ENABLE_ERR_MSG_FWD BIT(3)
#define CFG_ENABLE_SLT_MSG_FWD BIT(5)
#define CFG_ENABLE_VEN_MSG_FWD BIT(7)
#define CFG_ENABLE_OTH_MSG_FWD BIT(13)
#define CFG_ENABLE_VEN_MSG_EN BIT(14)
#define CFG_ENABLE_VEN_MSG_VEN_INV BIT(15)
#define CFG_ENABLE_VEN_MSG_VEN_ID GENMASK(31, 16)
#define CFG_ENABLE_MSG_FILTER_MASK (CFG_ENABLE_PM_MSG_FWD | \
CFG_ENABLE_INT_MSG_FWD | \
CFG_ENABLE_ERR_MSG_FWD | \
CFG_ENABLE_SLT_MSG_FWD | \
CFG_ENABLE_VEN_MSG_FWD | \
CFG_ENABLE_OTH_MSG_FWD | \
CFG_ENABLE_VEN_MSG_EN | \
CFG_ENABLE_VEN_MSG_VEN_INV | \
CFG_ENABLE_VEN_MSG_VEN_ID)
CFG_ENABLE_ERR_MSG_FWD)
/* Misc interrupt status mask bits */
#define MSGF_MISC_SR_RXMSG_AVAIL BIT(0)

View File

@ -632,7 +632,7 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct xilinx_pcie_port *port;
struct pci_bus *bus;
struct pci_bus *bus, *child;
int err;
resource_size_t iobase = 0;
LIST_HEAD(res);
@ -686,6 +686,8 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
#ifndef CONFIG_MICROBLAZE
pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
#endif
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
pci_bus_add_devices(bus);
return 0;

View File

@ -107,7 +107,7 @@ static void __exit ibm_acpiphp_exit(void);
static acpi_handle ibm_acpi_handle;
static struct notification ibm_note;
static struct bin_attribute ibm_apci_table_attr = {
static struct bin_attribute ibm_apci_table_attr __ro_after_init = {
.attr = {
.name = "apci_table",
.mode = S_IRUGO,

View File

@ -463,7 +463,6 @@ static inline int is_dlpar_capable(void)
int __init rpadlpar_io_init(void)
{
int rc = 0;
if (!is_dlpar_capable()) {
printk(KERN_WARNING "%s: partition not DLPAR capable\n",
@ -471,8 +470,7 @@ int __init rpadlpar_io_init(void)
return -EPERM;
}
rc = dlpar_sysfs_init();
return rc;
return dlpar_sysfs_init();
}
void rpadlpar_io_exit(void)

View File

@ -124,7 +124,6 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id, int reset)
struct pci_sriov *iov = dev->sriov;
struct pci_bus *bus;
mutex_lock(&iov->dev->sriov->lock);
bus = virtfn_add_bus(dev->bus, pci_iov_virtfn_bus(dev, id));
if (!bus)
goto failed;
@ -162,7 +161,6 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id, int reset)
__pci_reset_function(virtfn);
pci_device_add(virtfn, virtfn->bus);
mutex_unlock(&iov->dev->sriov->lock);
pci_bus_add_device(virtfn);
sprintf(buf, "virtfn%u", id);
@ -181,12 +179,10 @@ failed2:
sysfs_remove_link(&dev->dev.kobj, buf);
failed1:
pci_dev_put(dev);
mutex_lock(&iov->dev->sriov->lock);
pci_stop_and_remove_bus_device(virtfn);
failed0:
virtfn_remove_bus(dev->bus, bus);
failed:
mutex_unlock(&iov->dev->sriov->lock);
return rc;
}
@ -195,7 +191,6 @@ void pci_iov_remove_virtfn(struct pci_dev *dev, int id, int reset)
{
char buf[VIRTFN_ID_LEN];
struct pci_dev *virtfn;
struct pci_sriov *iov = dev->sriov;
virtfn = pci_get_domain_bus_and_slot(pci_domain_nr(dev->bus),
pci_iov_virtfn_bus(dev, id),
@ -218,10 +213,8 @@ void pci_iov_remove_virtfn(struct pci_dev *dev, int id, int reset)
if (virtfn->dev.kobj.sd)
sysfs_remove_link(&virtfn->dev.kobj, "physfn");
mutex_lock(&iov->dev->sriov->lock);
pci_stop_and_remove_bus_device(virtfn);
virtfn_remove_bus(dev->bus, virtfn->bus);
mutex_unlock(&iov->dev->sriov->lock);
/* balance pci_get_domain_bus_and_slot() */
pci_dev_put(virtfn);

View File

@ -32,32 +32,13 @@ int pci_msi_ignore_mask;
#define msix_table_size(flags) ((flags & PCI_MSIX_FLAGS_QSIZE) + 1)
#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
static struct irq_domain *pci_msi_default_domain;
static DEFINE_MUTEX(pci_msi_domain_lock);
struct irq_domain * __weak arch_get_pci_msi_domain(struct pci_dev *dev)
{
return pci_msi_default_domain;
}
static struct irq_domain *pci_msi_get_domain(struct pci_dev *dev)
{
struct irq_domain *domain;
domain = dev_get_msi_domain(&dev->dev);
if (domain)
return domain;
return arch_get_pci_msi_domain(dev);
}
static int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
struct irq_domain *domain;
domain = pci_msi_get_domain(dev);
domain = dev_get_msi_domain(&dev->dev);
if (domain && irq_domain_is_hierarchy(domain))
return pci_msi_domain_alloc_irqs(domain, dev, nvec, type);
return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
return arch_setup_msi_irqs(dev, nvec, type);
}
@ -66,9 +47,9 @@ static void pci_msi_teardown_msi_irqs(struct pci_dev *dev)
{
struct irq_domain *domain;
domain = pci_msi_get_domain(dev);
domain = dev_get_msi_domain(&dev->dev);
if (domain && irq_domain_is_hierarchy(domain))
pci_msi_domain_free_irqs(domain, dev);
msi_domain_free_irqs(domain, &dev->dev);
else
arch_teardown_msi_irqs(dev);
}
@ -379,7 +360,7 @@ static void free_msi_irqs(struct pci_dev *dev)
}
list_del(&entry->list);
kfree(entry);
free_msi_entry(entry);
}
if (dev->msi_irq_groups) {
@ -610,7 +591,7 @@ static int msi_verify_entries(struct pci_dev *dev)
* msi_capability_init - configure device's MSI capability structure
* @dev: pointer to the pci_dev data structure of MSI device function
* @nvec: number of interrupts to allocate
* @affinity: flag to indicate cpu irq affinity mask should be set
* @affd: description of automatic irq affinity assignments (may be %NULL)
*
* Setup the MSI capability structure of the device with the requested
* number of interrupts. A return value of zero indicates the successful
@ -731,7 +712,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
ret = 0;
out:
kfree(masks);
return 0;
return ret;
}
static void msix_program_entries(struct pci_dev *dev,
@ -1084,7 +1065,7 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
if (nvec < 0)
return nvec;
if (nvec < minvec)
return -EINVAL;
return -ENOSPC;
if (nvec > maxvec)
nvec = maxvec;
@ -1109,23 +1090,15 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
}
}
/**
* pci_enable_msi_range - configure device's MSI capability structure
* @dev: device to configure
* @minvec: minimal number of interrupts to configure
* @maxvec: maximum number of interrupts to configure
*
* This function tries to allocate a maximum possible number of interrupts in a
* range between @minvec and @maxvec. It returns a negative errno if an error
* occurs. If it succeeds, it returns the actual number of interrupts allocated
* and updates the @dev's irq member to the lowest new interrupt number;
* the other interrupt numbers allocated to this device are consecutive.
**/
int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
/* deprecated, don't use */
int pci_enable_msi(struct pci_dev *dev)
{
return __pci_enable_msi_range(dev, minvec, maxvec, NULL);
int rc = __pci_enable_msi_range(dev, 1, 1, NULL);
if (rc < 0)
return rc;
return 0;
}
EXPORT_SYMBOL(pci_enable_msi_range);
EXPORT_SYMBOL(pci_enable_msi);
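New callers should use pci_alloc_irq_vectors() instead, which covers MSI-X, MSI, and legacy INTx in a single call. A hedged sketch of the replacement pattern; mydev_setup_irqs() and the vector count are placeholders, not code from this commit:

#include <linux/pci.h>

static int mydev_setup_irqs(struct pci_dev *pdev)
{
	int nvecs;

	/* try MSI-X first, then MSI, then legacy INTx */
	nvecs = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_ALL_TYPES);
	if (nvecs < 0)
		return nvecs;

	/* request_irq() against pci_irq_vector(pdev, 0..nvecs-1) */
	return 0;
}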
static int __pci_enable_msix_range(struct pci_dev *dev,
struct msix_entry *entries, int minvec,
@ -1235,9 +1208,11 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
}
/* use legacy irq if allowed */
if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1) {
pci_intx(dev, 1);
return 1;
if (flags & PCI_IRQ_LEGACY) {
if (min_vecs == 1 && dev->irq) {
pci_intx(dev, 1);
return 1;
}
}
return vecs;
@ -1391,7 +1366,7 @@ int pci_msi_domain_check_cap(struct irq_domain *domain,
{
struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev));
/* Special handling to support pci_enable_msi_range() */
/* Special handling to support __pci_enable_msi_range() */
if (pci_msi_desc_is_multi_msi(desc) &&
!(info->flags & MSI_FLAG_MULTI_PCI_MSI))
return 1;
@ -1404,7 +1379,7 @@ int pci_msi_domain_check_cap(struct irq_domain *domain,
static int pci_msi_domain_handle_error(struct irq_domain *domain,
struct msi_desc *desc, int error)
{
/* Special handling to support pci_enable_msi_range() */
/* Special handling to support __pci_enable_msi_range() */
if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC)
return 1;
@ -1491,59 +1466,6 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
}
EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain);
/**
* pci_msi_domain_alloc_irqs - Allocate interrupts for @dev in @domain
* @domain: The interrupt domain to allocate from
* @dev: The device for which to allocate
* @nvec: The number of interrupts to allocate
* @type: Unused to allow simpler migration from the arch_XXX interfaces
*
* Returns:
* A virtual interrupt number or an error code in case of failure
*/
int pci_msi_domain_alloc_irqs(struct irq_domain *domain, struct pci_dev *dev,
int nvec, int type)
{
return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
}
/**
* pci_msi_domain_free_irqs - Free interrupts for @dev in @domain
* @domain: The interrupt domain
* @dev: The device for which to free interrupts
*/
void pci_msi_domain_free_irqs(struct irq_domain *domain, struct pci_dev *dev)
{
msi_domain_free_irqs(domain, &dev->dev);
}
/**
* pci_msi_create_default_irq_domain - Create a default MSI interrupt domain
* @fwnode: Optional fwnode of the interrupt controller
* @info: MSI domain info
* @parent: Parent irq domain
*
* Returns: A domain pointer or NULL in case of failure. If successful
* the default PCI/MSI irqdomain pointer is updated.
*/
struct irq_domain *pci_msi_create_default_irq_domain(struct fwnode_handle *fwnode,
struct msi_domain_info *info, struct irq_domain *parent)
{
struct irq_domain *domain;
mutex_lock(&pci_msi_domain_lock);
if (pci_msi_default_domain) {
pr_err("PCI: default irq domain for PCI MSI has already been created.\n");
domain = NULL;
} else {
domain = pci_msi_create_irq_domain(fwnode, info, parent);
pci_msi_default_domain = domain;
}
mutex_unlock(&pci_msi_domain_lock);
return domain;
}
static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data)
{
u32 *pa = data;

View File

@ -381,8 +381,6 @@ static int __pci_device_probe(struct pci_driver *drv, struct pci_dev *pci_dev)
id = pci_match_device(drv, pci_dev);
if (id)
error = pci_call_probe(drv, pci_dev, id);
if (error >= 0)
error = 0;
}
return error;
}

View File

@ -472,6 +472,7 @@ static ssize_t sriov_numvfs_store(struct device *dev,
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct pci_sriov *iov = pdev->sriov;
int ret;
u16 num_vfs;
@ -482,38 +483,46 @@ static ssize_t sriov_numvfs_store(struct device *dev,
if (num_vfs > pci_sriov_get_totalvfs(pdev))
return -ERANGE;
mutex_lock(&iov->dev->sriov->lock);
if (num_vfs == pdev->sriov->num_VFs)
return count; /* no change */
goto exit;
/* is PF driver loaded w/callback */
if (!pdev->driver || !pdev->driver->sriov_configure) {
dev_info(&pdev->dev, "Driver doesn't support SRIOV configuration via sysfs\n");
return -ENOSYS;
ret = -ENOENT;
goto exit;
}
if (num_vfs == 0) {
/* disable VFs */
ret = pdev->driver->sriov_configure(pdev, 0);
if (ret < 0)
return ret;
return count;
goto exit;
}
/* enable VFs */
if (pdev->sriov->num_VFs) {
dev_warn(&pdev->dev, "%d VFs already enabled. Disable before enabling %d VFs\n",
pdev->sriov->num_VFs, num_vfs);
return -EBUSY;
ret = -EBUSY;
goto exit;
}
ret = pdev->driver->sriov_configure(pdev, num_vfs);
if (ret < 0)
return ret;
goto exit;
if (ret != num_vfs)
dev_warn(&pdev->dev, "%d VFs requested; only %d enabled\n",
num_vfs, ret);
exit:
mutex_unlock(&iov->dev->sriov->lock);
if (ret < 0)
return ret;
return count;
}

View File

@ -270,7 +270,7 @@ struct pci_sriov {
u16 driver_max_VFs; /* max num VFs driver supports */
struct pci_dev *dev; /* lowest numbered PF */
struct pci_dev *self; /* this PF */
struct mutex lock; /* lock for VF bus */
struct mutex lock; /* lock for setting sriov_numvfs in sysfs */
resource_size_t barsz[PCI_SRIOV_NUM_BARS]; /* VF BAR size */
};

View File

@ -71,6 +71,14 @@ config PCIEASPM_POWERSAVE
Enable PCI Express ASPM L0s and L1 where possible, even if the
BIOS did not.
config PCIEASPM_POWER_SUPERSAVE
bool "Power Supersave"
depends on PCIEASPM
help
Same as PCIEASPM_POWERSAVE, except it also enables L1 substates where
possible. This results in higher power savings while in L1, where the
components support it.
config PCIEASPM_PERFORMANCE
bool "Performance"
depends on PCIEASPM

View File

@ -30,8 +30,29 @@
#define ASPM_STATE_L0S_UP (1) /* Upstream direction L0s state */
#define ASPM_STATE_L0S_DW (2) /* Downstream direction L0s state */
#define ASPM_STATE_L1 (4) /* L1 state */
#define ASPM_STATE_L1_1 (8) /* ASPM L1.1 state */
#define ASPM_STATE_L1_2 (0x10) /* ASPM L1.2 state */
#define ASPM_STATE_L1_1_PCIPM (0x20) /* PCI PM L1.1 state */
#define ASPM_STATE_L1_2_PCIPM (0x40) /* PCI PM L1.2 state */
#define ASPM_STATE_L1_SS_PCIPM (ASPM_STATE_L1_1_PCIPM | ASPM_STATE_L1_2_PCIPM)
#define ASPM_STATE_L1_2_MASK (ASPM_STATE_L1_2 | ASPM_STATE_L1_2_PCIPM)
#define ASPM_STATE_L1SS (ASPM_STATE_L1_1 | ASPM_STATE_L1_1_PCIPM |\
ASPM_STATE_L1_2_MASK)
#define ASPM_STATE_L0S (ASPM_STATE_L0S_UP | ASPM_STATE_L0S_DW)
#define ASPM_STATE_ALL (ASPM_STATE_L0S | ASPM_STATE_L1)
#define ASPM_STATE_ALL (ASPM_STATE_L0S | ASPM_STATE_L1 | \
ASPM_STATE_L1SS)
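With the four substate bits added, ASPM_STATE_ALL grows to 0x7f, which is why the aspm_* bitfields below are widened from 3 to 7 bits. A quick check, restating the values from the defines above:

#include <stdio.h>

#define ASPM_STATE_L0S		(1 | 2)
#define ASPM_STATE_L1		4
#define ASPM_STATE_L1SS		(8 | 0x10 | 0x20 | 0x40)
#define ASPM_STATE_ALL		(ASPM_STATE_L0S | ASPM_STATE_L1 | ASPM_STATE_L1SS)

int main(void)
{
	printf("ASPM_STATE_ALL = 0x%x\n", ASPM_STATE_ALL);	/* 0x7f: needs 7 bits */
	return 0;
}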
/*
* When L1 substates are enabled, the LTR L1.2 threshold is a timing parameter
* that decides whether L1.1 or L1.2 is entered (refer to the PCIe spec
* for details). It is not clear whether this can be calculated on the
* fly, but maybe we could turn it into a parameter in the future. This
* value has been taken from
* the following files from Intel's coreboot (which is the only code I found
* to have used this):
* https://www.coreboot.org/pipermail/coreboot-gerrit/2015-March/021134.html
* https://review.coreboot.org/#/c/8832/
*/
#define LTR_L1_2_THRESHOLD_BITS ((1 << 21) | (1 << 23) | (1 << 30))
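In L1SS CTL1 the LTR L1.2 threshold value sits in bits 25:16 and the scale in bits 31:29, so this constant encodes value 160 at scale 2. Assuming the LTR scale encoding of 1 ns << (5 * scale), that works out to 160 * 1024 ns, roughly 164 us; a decode sketch:

#include <stdint.h>
#include <stdio.h>

#define LTR_L1_2_THRESHOLD_BITS ((1 << 21) | (1 << 23) | (1 << 30))

int main(void)
{
	uint32_t ctl1 = LTR_L1_2_THRESHOLD_BITS;
	uint32_t value = (ctl1 >> 16) & 0x3ff;	/* CTL1 bits 25:16 */
	uint32_t scale = (ctl1 >> 29) & 0x7;	/* CTL1 bits 31:29 */
	uint64_t ns = (uint64_t)value << (5 * scale);	/* 1 ns << (5 * scale) */

	printf("value=%u scale=%u -> %llu ns\n",
	       value, scale, (unsigned long long)ns);	/* 160, 2 -> 163840 ns */
	return 0;
}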
struct aspm_latency {
u32 l0s; /* L0s latency (nsec) */
@ -40,6 +61,7 @@ struct aspm_latency {
struct pcie_link_state {
struct pci_dev *pdev; /* Upstream component of the Link */
struct pci_dev *downstream; /* Downstream component, function 0 */
struct pcie_link_state *root; /* pointer to the root port link */
struct pcie_link_state *parent; /* pointer to the parent Link state */
struct list_head sibling; /* node in link_list */
@ -47,11 +69,11 @@ struct pcie_link_state {
struct list_head link; /* node in parent's children list */
/* ASPM state */
u32 aspm_support:3; /* Supported ASPM state */
u32 aspm_enabled:3; /* Enabled ASPM state */
u32 aspm_capable:3; /* Capable ASPM state with latency */
u32 aspm_default:3; /* Default ASPM state by BIOS */
u32 aspm_disable:3; /* Disabled ASPM state */
u32 aspm_support:7; /* Supported ASPM state */
u32 aspm_enabled:7; /* Enabled ASPM state */
u32 aspm_capable:7; /* Capable ASPM state with latency */
u32 aspm_default:7; /* Default ASPM state by BIOS */
u32 aspm_disable:7; /* Disabled ASPM state */
/* Clock PM state */
u32 clkpm_capable:1; /* Clock PM capable? */
@ -66,6 +88,14 @@ struct pcie_link_state {
* has one slot under it, so at most there are 8 functions.
*/
struct aspm_latency acceptable[8];
/* L1 PM Substate info */
struct {
u32 up_cap_ptr; /* L1SS cap ptr in upstream dev */
u32 dw_cap_ptr; /* L1SS cap ptr in downstream dev */
u32 ctl1; /* value to be programmed in ctl1 */
u32 ctl2; /* value to be programmed in ctl2 */
} l1ss;
};
static int aspm_disabled, aspm_force;
@ -76,11 +106,14 @@ static LIST_HEAD(link_list);
#define POLICY_DEFAULT 0 /* BIOS default setting */
#define POLICY_PERFORMANCE 1 /* high performance */
#define POLICY_POWERSAVE 2 /* high power saving */
#define POLICY_POWER_SUPERSAVE 3 /* possibly even more power saving */
#ifdef CONFIG_PCIEASPM_PERFORMANCE
static int aspm_policy = POLICY_PERFORMANCE;
#elif defined CONFIG_PCIEASPM_POWERSAVE
static int aspm_policy = POLICY_POWERSAVE;
#elif defined CONFIG_PCIEASPM_POWER_SUPERSAVE
static int aspm_policy = POLICY_POWER_SUPERSAVE;
#else
static int aspm_policy;
#endif
@ -88,7 +121,8 @@ static int aspm_policy;
static const char *policy_str[] = {
[POLICY_DEFAULT] = "default",
[POLICY_PERFORMANCE] = "performance",
[POLICY_POWERSAVE] = "powersave"
[POLICY_POWERSAVE] = "powersave",
[POLICY_POWER_SUPERSAVE] = "powersupersave"
};
#define LINK_RETRAIN_TIMEOUT HZ
@ -101,6 +135,9 @@ static int policy_to_aspm_state(struct pcie_link_state *link)
return 0;
case POLICY_POWERSAVE:
/* Enable ASPM L0s/L1 */
return (ASPM_STATE_L0S | ASPM_STATE_L1);
case POLICY_POWER_SUPERSAVE:
/* Enable Everything */
return ASPM_STATE_ALL;
case POLICY_DEFAULT:
return link->aspm_default;
@ -115,7 +152,8 @@ static int policy_to_clkpm_state(struct pcie_link_state *link)
/* Disable ASPM and Clock PM */
return 0;
case POLICY_POWERSAVE:
/* Disable Clock PM */
case POLICY_POWER_SUPERSAVE:
/* Enable Clock PM */
return 1;
case POLICY_DEFAULT:
return link->clkpm_default;
@ -278,11 +316,33 @@ static u32 calc_l1_acceptable(u32 encoding)
return (1000 << encoding);
}
/* Convert L1SS T_pwr encoding to usec */
static u32 calc_l1ss_pwron(struct pci_dev *pdev, u32 scale, u32 val)
{
switch (scale) {
case 0:
return val * 2;
case 1:
return val * 10;
case 2:
return val * 100;
}
dev_err(&pdev->dev, "%s: Invalid T_PwrOn scale: %u\n",
__func__, scale);
return 0;
}
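A user-space mirror of the decode; for example, a T_PwrOn value of 5 at scale 1 yields 50 usec (scale 3 is reserved):

#include <stdint.h>
#include <stdio.h>

/* mirror of calc_l1ss_pwron(): T_PwrOn scale 0/1/2 -> x2/x10/x100 usec */
static uint32_t t_pwron_usec(uint32_t scale, uint32_t val)
{
	switch (scale) {
	case 0: return val * 2;
	case 1: return val * 10;
	case 2: return val * 100;
	}
	return 0;	/* scale 3 is reserved */
}

int main(void)
{
	printf("%u usec\n", t_pwron_usec(1, 5));	/* 50 usec */
	return 0;
}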
struct aspm_register_info {
u32 support:2;
u32 enabled:2;
u32 latency_encoding_l0s;
u32 latency_encoding_l1;
/* L1 substates */
u32 l1ss_cap_ptr;
u32 l1ss_cap;
u32 l1ss_ctl1;
u32 l1ss_ctl2;
};
static void pcie_get_aspm_reg(struct pci_dev *pdev,
@ -297,6 +357,22 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
info->latency_encoding_l1 = (reg32 & PCI_EXP_LNKCAP_L1EL) >> 15;
pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &reg16);
info->enabled = reg16 & PCI_EXP_LNKCTL_ASPMC;
/* Read L1 PM substate capabilities */
info->l1ss_cap = info->l1ss_ctl1 = info->l1ss_ctl2 = 0;
info->l1ss_cap_ptr = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS);
if (!info->l1ss_cap_ptr)
return;
pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CAP,
&info->l1ss_cap);
if (!(info->l1ss_cap & PCI_L1SS_CAP_L1_PM_SS)) {
info->l1ss_cap = 0;
return;
}
pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL1,
&info->l1ss_ctl1);
pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL2,
&info->l1ss_ctl2);
}
static void pcie_aspm_check_latency(struct pci_dev *endpoint)
@ -327,6 +403,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
* Check L1 latency.
* Every switch on the path to the root complex needs 1
* more microsecond for L1. The spec doesn't mention L0s.
*
* The exit latencies for L1 substates are not advertised
* by a device. Since the spec also doesn't mention a way
* to determine max latencies introduced by enabling L1
* substates on the components, it is not clear how to do
* an L1 substate exit latency check. We assume that the
* L1 exit latencies advertised by a device include L1
* substate latencies (and hence do not do any check).
*/
latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
if ((link->aspm_capable & ASPM_STATE_L1) &&
@ -338,6 +422,60 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
}
}
/*
* The L1 PM substate capability is only implemented in function 0 in a
* multi-function device.
*/
static struct pci_dev *pci_function_0(struct pci_bus *linkbus)
{
struct pci_dev *child;
list_for_each_entry(child, &linkbus->devices, bus_list)
if (PCI_FUNC(child->devfn) == 0)
return child;
return NULL;
}
/* Calculate L1.2 PM substate timing parameters */
static void aspm_calc_l1ss_info(struct pcie_link_state *link,
struct aspm_register_info *upreg,
struct aspm_register_info *dwreg)
{
u32 val1, val2, scale1, scale2;
link->l1ss.up_cap_ptr = upreg->l1ss_cap_ptr;
link->l1ss.dw_cap_ptr = dwreg->l1ss_cap_ptr;
link->l1ss.ctl1 = link->l1ss.ctl2 = 0;
if (!(link->aspm_support & ASPM_STATE_L1_2_MASK))
return;
/* Choose the greater of the two T_cmn_mode_rstr_time */
val1 = (upreg->l1ss_cap >> 8) & 0xFF;
val2 = (dwreg->l1ss_cap >> 8) & 0xFF;
if (val1 > val2)
link->l1ss.ctl1 |= val1 << 8;
else
link->l1ss.ctl1 |= val2 << 8;
/*
* We currently use a fixed LTR L1.2 threshold, a constant picked from
* Intel's coreboot.
*/
link->l1ss.ctl1 |= LTR_L1_2_THRESHOLD_BITS;
/* Choose the greater of the two T_pwr_on */
val1 = (upreg->l1ss_cap >> 19) & 0x1F;
scale1 = (upreg->l1ss_cap >> 16) & 0x03;
val2 = (dwreg->l1ss_cap >> 19) & 0x1F;
scale2 = (dwreg->l1ss_cap >> 16) & 0x03;
if (calc_l1ss_pwron(link->pdev, scale1, val1) >
calc_l1ss_pwron(link->downstream, scale2, val2))
link->l1ss.ctl2 |= scale1 | (val1 << 3);
else
link->l1ss.ctl2 |= scale2 | (val2 << 3);
}
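/*
 * A sanity check for the shifts and masks above, with a hypothetical
 * capability value: T_PwrOn value 5 (bits 23:19) and scale code 1, i.e.
 * x10 (bits 17:16), decode to 50 usec:
 *
 *	u32 cap = (5 << 19) | (1 << 16);
 *	u32 val = (cap >> 19) & 0x1F;		val == 5
 *	u32 scale = (cap >> 16) & 0x03;		scale == 1 -> 5 * 10 = 50 usec
 */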
static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
{
struct pci_dev *child, *parent = link->pdev;
@@ -353,8 +491,9 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
/* Get upstream/downstream components' register state */
pcie_get_aspm_reg(parent, &upreg);
child = pci_function_0(linkbus);
pcie_get_aspm_reg(child, &dwreg);
link->downstream = child;
/*
* If ASPM not supported, don't mess with the clocks and link,
@@ -397,6 +536,28 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
link->latency_up.l1 = calc_l1_latency(upreg.latency_encoding_l1);
link->latency_dw.l1 = calc_l1_latency(dwreg.latency_encoding_l1);
/* Setup L1 substate */
if (upreg.l1ss_cap & dwreg.l1ss_cap & PCI_L1SS_CAP_ASPM_L1_1)
link->aspm_support |= ASPM_STATE_L1_1;
if (upreg.l1ss_cap & dwreg.l1ss_cap & PCI_L1SS_CAP_ASPM_L1_2)
link->aspm_support |= ASPM_STATE_L1_2;
if (upreg.l1ss_cap & dwreg.l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_1)
link->aspm_support |= ASPM_STATE_L1_1_PCIPM;
if (upreg.l1ss_cap & dwreg.l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_2)
link->aspm_support |= ASPM_STATE_L1_2_PCIPM;
if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_1)
link->aspm_enabled |= ASPM_STATE_L1_1;
if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_2)
link->aspm_enabled |= ASPM_STATE_L1_2;
if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_1)
link->aspm_enabled |= ASPM_STATE_L1_1_PCIPM;
if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_2)
link->aspm_enabled |= ASPM_STATE_L1_2_PCIPM;
if (link->aspm_support & ASPM_STATE_L1SS)
aspm_calc_l1ss_info(link, &upreg, &dwreg);
/* Save default state */
link->aspm_default = link->aspm_enabled;
@@ -435,6 +596,92 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
}
}
static void pci_clear_and_set_dword(struct pci_dev *pdev, int pos,
u32 clear, u32 set)
{
u32 val;
pci_read_config_dword(pdev, pos, &val);
val &= ~clear;
val |= set;
pci_write_config_dword(pdev, pos, val);
}
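/*
 * A usage sketch for this read-modify-write helper (the offset and value
 * are hypothetical): replace the T_cmn_mode field (bits 15:8) of L1SS
 * Control 1 without disturbing the other bits:
 *
 *	pci_clear_and_set_dword(pdev, pos + PCI_L1SS_CTL1,
 *				0xFF00, t_cmn_mode << 8);
 */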
/* Configure the ASPM L1 substates */
static void pcie_config_aspm_l1ss(struct pcie_link_state *link, u32 state)
{
u32 val, enable_req;
struct pci_dev *child = link->downstream, *parent = link->pdev;
u32 up_cap_ptr = link->l1ss.up_cap_ptr;
u32 dw_cap_ptr = link->l1ss.dw_cap_ptr;
enable_req = (link->aspm_enabled ^ state) & state;
/*
* Here are the rules specified in the PCIe spec for enabling L1SS:
* - When enabling L1.x, enable bit at parent first, then at child
* - When disabling L1.x, disable bit at child first, then at parent
* - When enabling ASPM L1.x, need to disable L1
* (at child followed by parent).
* - The ASPM/PCIPM L1.2 must be disabled while programming timing
* parameters
*
* To keep it simple, disable all L1SS bits first, and later enable
* what is needed.
*/
/* Disable all L1 substates */
pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
PCI_L1SS_CTL1_L1SS_MASK, 0);
pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
PCI_L1SS_CTL1_L1SS_MASK, 0);
/*
* If needed, disable L1, and it gets enabled later
* in pcie_config_aspm_link().
*/
if (enable_req & (ASPM_STATE_L1_1 | ASPM_STATE_L1_2)) {
pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_ASPM_L1, 0);
pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_ASPM_L1, 0);
}
if (enable_req & ASPM_STATE_L1_2_MASK) {
/* Program T_pwr_on in both ports */
pci_write_config_dword(parent, up_cap_ptr + PCI_L1SS_CTL2,
link->l1ss.ctl2);
pci_write_config_dword(child, dw_cap_ptr + PCI_L1SS_CTL2,
link->l1ss.ctl2);
/* Program T_cmn_mode in parent */
pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
0xFF00, link->l1ss.ctl1);
/* Program LTR L1.2 threshold in both ports */
pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
0xE3FF0000, link->l1ss.ctl1);
pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
0xE3FF0000, link->l1ss.ctl1);
}
val = 0;
if (state & ASPM_STATE_L1_1)
val |= PCI_L1SS_CTL1_ASPM_L1_1;
if (state & ASPM_STATE_L1_2)
val |= PCI_L1SS_CTL1_ASPM_L1_2;
if (state & ASPM_STATE_L1_1_PCIPM)
val |= PCI_L1SS_CTL1_PCIPM_L1_1;
if (state & ASPM_STATE_L1_2_PCIPM)
val |= PCI_L1SS_CTL1_PCIPM_L1_2;
/* Enable what we need to enable */
pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
PCI_L1SS_CTL1_L1SS_MASK, val);
pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
PCI_L1SS_CTL1_L1SS_MASK, val);
}
static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val)
{
pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL,
@@ -444,11 +691,23 @@ static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val)
static void pcie_config_aspm_link(struct pcie_link_state *link, u32 state)
{
u32 upstream = 0, dwstream = 0;
struct pci_dev *child = link->downstream, *parent = link->pdev;
struct pci_bus *linkbus = parent->subordinate;
/* Enable only the states that were not explicitly disabled */
state &= (link->aspm_capable & ~link->aspm_disable);
/* Can't enable any substates if L1 is not enabled */
if (!(state & ASPM_STATE_L1))
state &= ~ASPM_STATE_L1SS;
/* Spec says both ports must be in D0 before enabling PCI PM substates */
if (parent->current_state != PCI_D0 || child->current_state != PCI_D0) {
state &= ~ASPM_STATE_L1_SS_PCIPM;
state |= (link->aspm_enabled & ASPM_STATE_L1_SS_PCIPM);
}
/* Nothing to do if the link is already in the requested state */
if (link->aspm_enabled == state)
return;
/* Convert ASPM state to upstream/downstream ASPM register state */
@ -460,6 +719,10 @@ static void pcie_config_aspm_link(struct pcie_link_state *link, u32 state)
upstream |= PCI_EXP_LNKCTL_ASPM_L1;
dwstream |= PCI_EXP_LNKCTL_ASPM_L1;
}
if (link->aspm_capable & ASPM_STATE_L1SS)
pcie_config_aspm_l1ss(link, state);
/*
* Spec 2.0 suggests all functions should be configured the
* same setting for ASPM. Enabling ASPM L1 should be done in
@@ -619,7 +882,8 @@ void pcie_aspm_init_link_state(struct pci_dev *pdev)
* the BIOS's expectation, we'll do so once pci_enable_device() is
* called.
*/
if (aspm_policy != POLICY_POWERSAVE &&
aspm_policy != POLICY_POWER_SUPERSAVE) {
pcie_config_aspm_path(link);
pcie_set_clkpm(link, policy_to_clkpm_state(link));
}
@@ -719,7 +983,8 @@ void pcie_aspm_powersave_config_link(struct pci_dev *pdev)
if (aspm_disabled || !link)
return;
if (aspm_policy != POLICY_POWERSAVE &&
aspm_policy != POLICY_POWER_SUPERSAVE)
return;
down_read(&pci_bus_sem);


@@ -19,8 +19,28 @@ struct dpc_dev {
struct pcie_device *dev;
struct work_struct work;
int cap_pos;
bool rp;
};
static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
{
unsigned long timeout = jiffies + HZ;
struct pci_dev *pdev = dpc->dev->port;
u16 status;
pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status);
while (status & PCI_EXP_DPC_RP_BUSY &&
!time_after(jiffies, timeout)) {
msleep(10);
pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status);
}
if (status & PCI_EXP_DPC_RP_BUSY) {
dev_warn(&pdev->dev, "DPC root port still busy\n");
return -EBUSY;
}
return 0;
}
static void dpc_wait_link_inactive(struct pci_dev *pdev)
{
unsigned long timeout = jiffies + HZ;
@@ -33,7 +53,7 @@ static void dpc_wait_link_inactive(struct pci_dev *pdev)
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
}
if (lnk_status & PCI_EXP_LNKSTA_DLLLA)
dev_warn(&pdev->dev, "Link state not disabled for DPC event");
dev_warn(&pdev->dev, "Link state not disabled for DPC event\n");
}
static void interrupt_event_handler(struct work_struct *work)
@@ -52,6 +72,8 @@ static void interrupt_event_handler(struct work_struct *work)
pci_unlock_rescan_remove();
dpc_wait_link_inactive(pdev);
if (dpc->rp && dpc_wait_rp_inactive(dpc))
return;
pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS,
PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT);
}
@@ -73,11 +95,15 @@ static irqreturn_t dpc_irq(int irq, void *context)
if (status & PCI_EXP_DPC_STATUS_TRIGGER) {
u16 reason = (status >> 1) & 0x3;
u16 ext_reason = (status >> 5) & 0x3;
dev_warn(&dpc->dev->device, "DPC %s triggered, remove downstream devices\n",
dev_warn(&dpc->dev->device, "DPC %s detected, remove downstream devices\n",
(reason == 0) ? "unmasked uncorrectable error" :
(reason == 1) ? "ERR_NONFATAL" :
(reason == 2) ? "ERR_FATAL" : "extended error");
(reason == 2) ? "ERR_FATAL" :
(ext_reason == 0) ? "RP PIO error" :
(ext_reason == 1) ? "software trigger" :
"reserved error");
schedule_work(&dpc->work);
}
return IRQ_HANDLED;
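/*
 * A worked example of the decoding above (status value hypothetical):
 * status = 0x0005 has PCI_EXP_DPC_STATUS_TRIGGER (bit 0) set and
 * reason = (0x0005 >> 1) & 0x3 = 2, which prints as "ERR_FATAL".
 */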
@@ -111,6 +137,8 @@ static int dpc_probe(struct pcie_device *dev)
pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap);
pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
dpc->rp = (cap & PCI_EXP_DPC_CAP_RP_EXT);
ctl |= PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN;
pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);


@@ -43,53 +43,17 @@ static void release_pcie_device(struct device *dev)
kfree(to_pcie_device(dev));
}
/**
* pcie_port_enable_msix - try to set up MSI-X as interrupt mode for given port
* @dev: PCI Express port to handle
* @irqs: Array of interrupt vectors to populate
* @mask: Bitmask of port capabilities returned by get_port_device_capability()
*
* Return value: 0 on success, error code on failure
*/
static int pcie_port_enable_msix(struct pci_dev *dev, int *irqs, int mask)
{
int nr_entries, entry, nvec = 0;
/*
* Allocate as many entries as the port wants, so that we can check
@@ -97,20 +61,13 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
* equal to the number of entries this port actually uses, we'll happily
* go through without any tricks.
*/
nr_entries = pci_alloc_irq_vectors(dev, 1, PCIE_PORT_MAX_MSIX_ENTRIES,
PCI_IRQ_MSIX);
if (nr_entries < 0)
return nr_entries;
if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) {
u16 reg16;
/*
* The code below follows the PCI Express Base Specification 2.0
@@ -125,18 +82,16 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
entry = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;
if (entry >= nr_entries)
goto out_free_irqs;
irqs[PCIE_PORT_SERVICE_PME_SHIFT] = pci_irq_vector(dev, entry);
irqs[PCIE_PORT_SERVICE_HP_SHIFT] = pci_irq_vector(dev, entry);
nvec = max(nvec, entry + 1);
}
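/*
 * A worked example of the Interrupt Message Number lookup above (register
 * value hypothetical): PCI_EXP_FLAGS_IRQ covers bits 13:9, so a flags
 * value of 0x0600 selects MSI-X table entry (0x0600 & 0x3e00) >> 9 = 3.
 */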
if (mask & PCIE_PORT_SERVICE_AER) {
u32 reg32, pos;
/*
* The code below follows Section 7.10.10 of the PCI Express
@@ -151,13 +106,11 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);
entry = reg32 >> 27;
if (entry >= nr_entries)
goto out_free_irqs;
irqs[PCIE_PORT_SERVICE_AER_SHIFT] = pci_irq_vector(dev, entry);
nvec = max(nvec, entry + 1);
}
/*
@@ -165,41 +118,39 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
* what we have. Otherwise, the port has some extra entries not for the
* services we know and we need to work around that.
*/
if (nvec != nr_entries) {
/* Drop the temporary MSI-X setup */
pci_free_irq_vectors(dev);
/* Now allocate the MSI-X vectors for real */
nr_entries = pci_alloc_irq_vectors(dev, nvec, nvec,
PCI_IRQ_MSIX);
if (nr_entries < 0)
return nr_entries;
}
return 0;
out_free_irqs:
pci_free_irq_vectors(dev);
return -EIO;
}
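/*
 * The pci_alloc_irq_vectors()/pci_irq_vector() pair used above is the
 * generic interface this series migrates drivers to. A minimal consumer
 * sketch (handler and cookie names are hypothetical):
 *
 *	int nvec = pci_alloc_irq_vectors(pdev, 1, 4,
 *					 PCI_IRQ_MSIX | PCI_IRQ_MSI);
 *	if (nvec < 0)
 *		return nvec;
 *	err = request_irq(pci_irq_vector(pdev, 0), my_handler, 0,
 *			  "my_dev", my_cookie);
 */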
/**
* pcie_init_service_irqs - initialize irqs for PCI Express port services
* @dev: PCI Express port to handle
* @irqs: Array of irqs to populate
* @mask: Bitmask of port capabilities returned by get_port_device_capability()
*
* Return value: Interrupt mode associated with the port
*/
static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
{
unsigned flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
int ret, i;
for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
irqs[i] = -1;
/*
* If MSI cannot be used for PCIe PME or hotplug, we have to use
@@ -207,39 +158,23 @@ static int init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
*/
if (((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi()) ||
((mask & PCIE_PORT_SERVICE_HP) && pciehp_no_msi())) {
flags &= ~PCI_IRQ_MSI;
} else {
/* Try to use MSI-X if supported */
if (!pcie_port_enable_msix(dev, irqs, mask))
return 0;
}
/*
* We're not going to use MSI-X, so try MSI and fall back to INTx.
* If neither MSI/MSI-X nor INTx available, try other interrupt. On
* some platforms, root port doesn't support MSI/MSI-X/INTx in RC mode.
*/
ret = pci_alloc_irq_vectors(dev, 1, 1, flags);
if (ret < 0)
return -ENODEV;
for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) {
if (i != PCIE_PORT_SERVICE_VC_SHIFT)
irqs[i] = pci_irq_vector(dev, 0);
}
return 0;
}
/**
@@ -378,7 +313,7 @@ int pcie_port_device_register(struct pci_dev *dev)
* that can be used in the absence of irqs. Allow them to determine
* if that is to be used.
*/
status = pcie_init_service_irqs(dev, irqs, capabilities);
if (status) {
capabilities &= PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_HP;
if (!capabilities)
@@ -401,7 +336,7 @@ int pcie_port_device_register(struct pci_dev *dev)
return 0;
error_cleanup_irqs:
pci_free_irq_vectors(dev);
error_disable:
pci_disable_device(dev);
return status;
@@ -469,7 +404,7 @@ static int remove_iter(struct device *dev, void *data)
void pcie_port_device_remove(struct pci_dev *dev)
{
device_for_each_child(&dev->dev, NULL, remove_iter);
pci_free_irq_vectors(dev);
pci_disable_device(dev);
}


@@ -1556,8 +1556,16 @@ static void program_hpp_type0(struct pci_dev *dev, struct hpp_type0 *hpp)
static void program_hpp_type1(struct pci_dev *dev, struct hpp_type1 *hpp)
{
int pos;
if (!hpp)
return;
pos = pci_find_capability(dev, PCI_CAP_ID_PCIX);
if (!pos)
return;
dev_warn(&dev->dev, "PCI-X settings not supported\n");
}
static bool pcie_root_rcb_set(struct pci_dev *dev)
@@ -1583,6 +1591,9 @@ static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
if (!hpp)
return;
if (!pci_is_pcie(dev))
return;
if (hpp->revision > 1) {
dev_warn(&dev->dev, "PCIe settings rev %d not supported\n",
hpp->revision);
@@ -1652,12 +1663,30 @@ static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
*/
}
static void pci_configure_extended_tags(struct pci_dev *dev)
{
u32 dev_cap;
int ret;
if (!pci_is_pcie(dev))
return;
ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &dev_cap);
if (ret)
return;
if (dev_cap & PCI_EXP_DEVCAP_EXT_TAG)
pcie_capability_set_word(dev, PCI_EXP_DEVCTL,
PCI_EXP_DEVCTL_EXT_TAG);
}
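/*
 * Context, not part of the diff: with the Extended Tag Field Enable bit
 * clear, a requester is limited to 5-bit tags (32 outstanding non-posted
 * requests); setting PCI_EXP_DEVCTL_EXT_TAG allows 8-bit tags (256). A
 * sketch to verify the control bit after the call:
 *
 *	u16 devctl;
 *	pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &devctl);
 *	(devctl & PCI_EXP_DEVCTL_EXT_TAG) is now set when supported
 */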
static void pci_configure_device(struct pci_dev *dev)
{
struct hotplug_params hpp;
int ret;
pci_configure_mps(dev);
pci_configure_extended_tags(dev);
memset(&hpp, 0, sizeof(hpp));
ret = pci_get_hp_params(dev, &hpp);


@@ -1634,6 +1634,7 @@ static void quirk_pcie_mch(struct pci_dev *pdev)
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, quirk_pcie_mch);
/*
@@ -2239,6 +2240,27 @@ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_BROADCOM,
PCI_DEVICE_ID_TIGON3_5719,
quirk_brcm_5719_limit_mrrs);
#ifdef CONFIG_PCIE_IPROC_PLATFORM
static void quirk_paxc_bridge(struct pci_dev *pdev)
{
/* The PCI config space is shared with the PAXC root port and the first
* Ethernet device. So, we need to work around this by telling the PCI
* code that the bridge is not an Ethernet device.
*/
if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
pdev->class = PCI_CLASS_BRIDGE_PCI << 8;
/* MPSS is not being set properly (as it is currently 0). This is
* because that area of the PCI config space is hard coded to zero, and
* is not modifiable by firmware. Set this to 2 (i.e., 512-byte MPS)
* so that the MPS can be set to the real max value.
*/
pdev->pcie_mpss = 2;
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16cd, quirk_paxc_bridge);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16f0, quirk_paxc_bridge);
#endif
/* Originally in EDAC sources for i82875P:
* Intel tells BIOS developers to hide device 6 which
* configures the overflow device access containing
@@ -3113,30 +3135,32 @@ static void quirk_remove_d3_delay(struct pci_dev *dev)
{
dev->d3_delay = 0;
}
/* C600 Series devices do not need 10ms d3_delay */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0412, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c00, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c0c, quirk_remove_d3_delay);
/* Lynxpoint-H PCH devices do not need 10ms d3_delay */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c02, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c18, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c1c, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c20, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c22, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c26, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c2d, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c31, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3a, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3d, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c4e, quirk_remove_d3_delay);
/* Intel Cherrytrail devices do not need 10ms d3_delay */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2280, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2298, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x229c, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b0, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b5, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b7, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b8, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22d8, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22dc, quirk_remove_d3_delay);
/*
* Some devices may pass our check in pci_intx_mask_supported() if
@@ -4135,6 +4159,26 @@ static int pci_quirk_intel_pch_acs(struct pci_dev *dev, u16 acs_flags)
return acs_flags & ~flags ? 0 : 1;
}
/*
* These QCOM root ports do provide ACS-like features to disable peer
* transactions and validate bus numbers in requests, but do not provide an
* actual PCIe ACS capability. Hardware supports source validation but it
* will report the issue as Completer Abort instead of ACS Violation.
* Hardware doesn't support peer-to-peer and each root port is a root
* complex with unique segment numbers. It is not possible for one root
* port to pass traffic to another root port. All PCIe transactions are
* terminated inside the root port.
*/
static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
{
u16 flags = (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_SV);
int ret = acs_flags & ~flags ? 0 : 1;
dev_info(&dev->dev, "Using QCOM ACS Quirk (%d)\n", ret);
return ret;
}
/*
* Sunrise Point PCH root ports implement ACS, but unfortunately as shown in
* the datasheet (Intel 100 Series Chipset Family PCH Datasheet, Vol. 2,
@@ -4150,15 +4194,35 @@ static int pci_quirk_intel_pch_acs(struct pci_dev *dev, u16 acs_flags)
*
* N.B. This doesn't fix what lspci shows.
*
* The 100 series chipset specification update includes this as errata #23[3].
*
* The 200 series chipset (Union Point) has the same bug according to the
* specification update (Intel 200 Series Chipset Family Platform Controller
* Hub, Specification Update, January 2017, Revision 001, Document# 335194-001,
* Errata 22)[4]. Per the datasheet[5], root port PCI Device IDs for this
* chipset include:
*
* 0xa290-0xa29f PCI Express Root port #{0-16}
* 0xa2e7-0xa2ee PCI Express Root port #{17-24}
*
* [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html
* [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html
* [3] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-spec-update.html
* [4] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-spec-update.html
* [5] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-datasheet-vol-1.html
*/
static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev)
{
if (!pci_is_pcie(dev) || pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT)
return false;
switch (dev->device) {
case 0xa110 ... 0xa11f: case 0xa167 ... 0xa16a: /* Sunrise Point */
case 0xa290 ... 0xa29f: case 0xa2e7 ... 0xa2ee: /* Union Point */
return true;
}
return false;
}
#define INTEL_SPT_ACS_CTRL (PCI_ACS_CAP + 4)
@@ -4271,6 +4335,9 @@ static const struct pci_dev_acs_enabled {
/* I219 */
{ PCI_VENDOR_ID_INTEL, 0x15b7, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs },
/* QCOM QDF2xxx root ports */
{ 0x17cb, 0x400, pci_quirk_qcom_rp_acs },
{ 0x17cb, 0x401, pci_quirk_qcom_rp_acs },
/* Intel PCH root ports */
{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs },
{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_spt_pch_acs },


@@ -105,17 +105,8 @@ static struct pci_dev_resource *res_to_dev_res(struct list_head *head,
struct pci_dev_resource *dev_res;
list_for_each_entry(dev_res, head, list) {
if (dev_res->res == res)
return dev_res;
}
return NULL;


@@ -331,6 +331,14 @@ config PHY_EXYNOS5_USBDRD
This driver provides PHY interface for USB 3.0 DRD controller
present on Exynos5 SoC series.
config PHY_EXYNOS_PCIE
bool "Exynos PCIe PHY driver"
depends on OF && (ARCH_EXYNOS || COMPILE_TEST)
select GENERIC_PHY
help
Enable PCIe PHY support for Exynos SoC series.
This driver provides PHY interface for Exynos PCIe controller.
config PHY_PISTACHIO_USB
tristate "IMG Pistachio USB2.0 PHY driver"
depends on MACH_PISTACHIO


@@ -37,6 +37,7 @@ phy-exynos-usb2-$(CONFIG_PHY_EXYNOS4X12_USB2) += phy-exynos4x12-usb2.o
phy-exynos-usb2-$(CONFIG_PHY_EXYNOS5250_USB2) += phy-exynos5250-usb2.o
phy-exynos-usb2-$(CONFIG_PHY_S5PV210_USB2) += phy-s5pv210-usb2.o
obj-$(CONFIG_PHY_EXYNOS5_USBDRD) += phy-exynos5-usbdrd.o
obj-$(CONFIG_PHY_EXYNOS_PCIE) += phy-exynos-pcie.o
obj-$(CONFIG_PHY_QCOM_APQ8064_SATA) += phy-qcom-apq8064-sata.o
obj-$(CONFIG_PHY_ROCKCHIP_USB) += phy-rockchip-usb.o
obj-$(CONFIG_PHY_ROCKCHIP_INNO_USB2) += phy-rockchip-inno-usb2.o


@@ -0,0 +1,285 @@
/*
* Samsung EXYNOS SoC series PCIe PHY driver
*
* Phy provider for PCIe controller on Exynos SoC series
*
* Copyright (C) 2017 Samsung Electronics Co., Ltd.
* Jaehoon Chung <jh80.chung@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/phy/phy.h>
#include <linux/regmap.h>
/* PCIe Purple registers */
#define PCIE_PHY_GLOBAL_RESET 0x000
#define PCIE_PHY_COMMON_RESET 0x004
#define PCIE_PHY_CMN_REG 0x008
#define PCIE_PHY_MAC_RESET 0x00c
#define PCIE_PHY_PLL_LOCKED 0x010
#define PCIE_PHY_TRSVREG_RESET 0x020
#define PCIE_PHY_TRSV_RESET 0x024
/* PCIe PHY registers */
#define PCIE_PHY_IMPEDANCE 0x004
#define PCIE_PHY_PLL_DIV_0 0x008
#define PCIE_PHY_PLL_BIAS 0x00c
#define PCIE_PHY_DCC_FEEDBACK 0x014
#define PCIE_PHY_PLL_DIV_1 0x05c
#define PCIE_PHY_COMMON_POWER 0x064
#define PCIE_PHY_COMMON_PD_CMN BIT(3)
#define PCIE_PHY_TRSV0_EMP_LVL 0x084
#define PCIE_PHY_TRSV0_DRV_LVL 0x088
#define PCIE_PHY_TRSV0_RXCDR 0x0ac
#define PCIE_PHY_TRSV0_POWER 0x0c4
#define PCIE_PHY_TRSV0_PD_TSV BIT(7)
#define PCIE_PHY_TRSV0_LVCC 0x0dc
#define PCIE_PHY_TRSV1_EMP_LVL 0x144
#define PCIE_PHY_TRSV1_RXCDR 0x16c
#define PCIE_PHY_TRSV1_POWER 0x184
#define PCIE_PHY_TRSV1_PD_TSV BIT(7)
#define PCIE_PHY_TRSV1_LVCC 0x19c
#define PCIE_PHY_TRSV2_EMP_LVL 0x204
#define PCIE_PHY_TRSV2_RXCDR 0x22c
#define PCIE_PHY_TRSV2_POWER 0x244
#define PCIE_PHY_TRSV2_PD_TSV BIT(7)
#define PCIE_PHY_TRSV2_LVCC 0x25c
#define PCIE_PHY_TRSV3_EMP_LVL 0x2c4
#define PCIE_PHY_TRSV3_RXCDR 0x2ec
#define PCIE_PHY_TRSV3_POWER 0x304
#define PCIE_PHY_TRSV3_PD_TSV BIT(7)
#define PCIE_PHY_TRSV3_LVCC 0x31c
struct exynos_pcie_phy_data {
const struct phy_ops *ops;
};
/* For Exynos PCIe PHY */
struct exynos_pcie_phy {
const struct exynos_pcie_phy_data *drv_data;
void __iomem *phy_base;
void __iomem *blk_base; /* For exynos5440 */
};
static void exynos_pcie_phy_writel(void __iomem *base, u32 val, u32 offset)
{
writel(val, base + offset);
}
static u32 exynos_pcie_phy_readl(void __iomem *base, u32 offset)
{
return readl(base + offset);
}
/* Exynos5440-specific functions */
static int exynos5440_pcie_phy_init(struct phy *phy)
{
struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
/* DCC feedback control off */
exynos_pcie_phy_writel(ep->phy_base, 0x29, PCIE_PHY_DCC_FEEDBACK);
/* set TX/RX impedance */
exynos_pcie_phy_writel(ep->phy_base, 0xd5, PCIE_PHY_IMPEDANCE);
/* set 50MHz PHY clock */
exynos_pcie_phy_writel(ep->phy_base, 0x14, PCIE_PHY_PLL_DIV_0);
exynos_pcie_phy_writel(ep->phy_base, 0x12, PCIE_PHY_PLL_DIV_1);
/* set TX Differential output for lane 0 */
exynos_pcie_phy_writel(ep->phy_base, 0x7f, PCIE_PHY_TRSV0_DRV_LVL);
/* set TX Pre-emphasis Level Control for lane 0 to minimum */
exynos_pcie_phy_writel(ep->phy_base, 0x0, PCIE_PHY_TRSV0_EMP_LVL);
/* set RX clock and data recovery bandwidth */
exynos_pcie_phy_writel(ep->phy_base, 0xe7, PCIE_PHY_PLL_BIAS);
exynos_pcie_phy_writel(ep->phy_base, 0x82, PCIE_PHY_TRSV0_RXCDR);
exynos_pcie_phy_writel(ep->phy_base, 0x82, PCIE_PHY_TRSV1_RXCDR);
exynos_pcie_phy_writel(ep->phy_base, 0x82, PCIE_PHY_TRSV2_RXCDR);
exynos_pcie_phy_writel(ep->phy_base, 0x82, PCIE_PHY_TRSV3_RXCDR);
/* change TX Pre-emphasis Level Control for lanes */
exynos_pcie_phy_writel(ep->phy_base, 0x39, PCIE_PHY_TRSV0_EMP_LVL);
exynos_pcie_phy_writel(ep->phy_base, 0x39, PCIE_PHY_TRSV1_EMP_LVL);
exynos_pcie_phy_writel(ep->phy_base, 0x39, PCIE_PHY_TRSV2_EMP_LVL);
exynos_pcie_phy_writel(ep->phy_base, 0x39, PCIE_PHY_TRSV3_EMP_LVL);
/* set LVCC */
exynos_pcie_phy_writel(ep->phy_base, 0x20, PCIE_PHY_TRSV0_LVCC);
exynos_pcie_phy_writel(ep->phy_base, 0xa0, PCIE_PHY_TRSV1_LVCC);
exynos_pcie_phy_writel(ep->phy_base, 0xa0, PCIE_PHY_TRSV2_LVCC);
exynos_pcie_phy_writel(ep->phy_base, 0xa0, PCIE_PHY_TRSV3_LVCC);
/* pulse for common reset */
exynos_pcie_phy_writel(ep->blk_base, 1, PCIE_PHY_COMMON_RESET);
udelay(500);
exynos_pcie_phy_writel(ep->blk_base, 0, PCIE_PHY_COMMON_RESET);
return 0;
}
static int exynos5440_pcie_phy_power_on(struct phy *phy)
{
struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
u32 val;
exynos_pcie_phy_writel(ep->blk_base, 0, PCIE_PHY_COMMON_RESET);
exynos_pcie_phy_writel(ep->blk_base, 0, PCIE_PHY_CMN_REG);
exynos_pcie_phy_writel(ep->blk_base, 0, PCIE_PHY_TRSVREG_RESET);
exynos_pcie_phy_writel(ep->blk_base, 0, PCIE_PHY_TRSV_RESET);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_COMMON_POWER);
val &= ~PCIE_PHY_COMMON_PD_CMN;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_COMMON_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV0_POWER);
val &= ~PCIE_PHY_TRSV0_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV0_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV1_POWER);
val &= ~PCIE_PHY_TRSV1_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV1_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV2_POWER);
val &= ~PCIE_PHY_TRSV2_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV2_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV3_POWER);
val &= ~PCIE_PHY_TRSV3_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV3_POWER);
return 0;
}
static int exynos5440_pcie_phy_power_off(struct phy *phy)
{
struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
u32 val;
if (readl_poll_timeout(ep->phy_base + PCIE_PHY_PLL_LOCKED, val,
(val != 0), 1, 500)) {
dev_err(&phy->dev, "PLL Locked: 0x%x\n", val);
return -ETIMEDOUT;
}
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_COMMON_POWER);
val |= PCIE_PHY_COMMON_PD_CMN;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_COMMON_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV0_POWER);
val |= PCIE_PHY_TRSV0_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV0_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV1_POWER);
val |= PCIE_PHY_TRSV1_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV1_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV2_POWER);
val |= PCIE_PHY_TRSV2_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV2_POWER);
val = exynos_pcie_phy_readl(ep->phy_base, PCIE_PHY_TRSV3_POWER);
val |= PCIE_PHY_TRSV3_PD_TSV;
exynos_pcie_phy_writel(ep->phy_base, val, PCIE_PHY_TRSV3_POWER);
return 0;
}
static int exynos5440_pcie_phy_reset(struct phy *phy)
{
struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
exynos_pcie_phy_writel(ep->blk_base, 0, PCIE_PHY_MAC_RESET);
exynos_pcie_phy_writel(ep->blk_base, 1, PCIE_PHY_GLOBAL_RESET);
exynos_pcie_phy_writel(ep->blk_base, 0, PCIE_PHY_GLOBAL_RESET);
return 0;
}
static const struct phy_ops exynos5440_phy_ops = {
.init = exynos5440_pcie_phy_init,
.power_on = exynos5440_pcie_phy_power_on,
.power_off = exynos5440_pcie_phy_power_off,
.reset = exynos5440_pcie_phy_reset,
.owner = THIS_MODULE,
};
static const struct exynos_pcie_phy_data exynos5440_pcie_phy_data = {
.ops = &exynos5440_phy_ops,
};
static const struct of_device_id exynos_pcie_phy_match[] = {
{
.compatible = "samsung,exynos5440-pcie-phy",
.data = &exynos5440_pcie_phy_data,
},
{},
};
MODULE_DEVICE_TABLE(of, exynos_pcie_phy_match);
static int exynos_pcie_phy_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct exynos_pcie_phy *exynos_phy;
struct phy *generic_phy;
struct phy_provider *phy_provider;
struct resource *res;
const struct exynos_pcie_phy_data *drv_data;
drv_data = of_device_get_match_data(dev);
if (!drv_data)
return -ENODEV;
exynos_phy = devm_kzalloc(dev, sizeof(*exynos_phy), GFP_KERNEL);
if (!exynos_phy)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
exynos_phy->phy_base = devm_ioremap_resource(dev, res);
if (IS_ERR(exynos_phy->phy_base))
return PTR_ERR(exynos_phy->phy_base);
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
exynos_phy->blk_base = devm_ioremap_resource(dev, res);
if (IS_ERR(exynos_phy->blk_base))
return PTR_ERR(exynos_phy->blk_base);
exynos_phy->drv_data = drv_data;
generic_phy = devm_phy_create(dev, dev->of_node, drv_data->ops);
if (IS_ERR(generic_phy)) {
dev_err(dev, "failed to create PHY\n");
return PTR_ERR(generic_phy);
}
phy_set_drvdata(generic_phy, exynos_phy);
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
return PTR_ERR_OR_ZERO(phy_provider);
}
static struct platform_driver exynos_pcie_phy_driver = {
.probe = exynos_pcie_phy_probe,
.driver = {
.of_match_table = exynos_pcie_phy_match,
.name = "exynos_pcie_phy",
}
};
module_platform_driver(exynos_pcie_phy_driver);
MODULE_DESCRIPTION("Samsung S5P/EXYNOS SoC PCIe PHY driver");
MODULE_AUTHOR("Jaehoon Chung <jh80.chung@samsung.com>");
MODULE_LICENSE("GPL v2");
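/*
 * Consumer-side sketch (not part of this file): how a PCIe controller
 * driver obtains and brings up this PHY through the generic PHY framework.
 * The lookup name "pcie-phy" is an assumption; the real name comes from
 * the controller's devicetree binding:
 *
 *	struct phy *phy = devm_phy_get(dev, "pcie-phy");
 *	if (IS_ERR(phy))
 *		return PTR_ERR(phy);
 *	phy_init(phy);		invokes exynos5440_pcie_phy_init()
 *	phy_power_on(phy);
 */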


@@ -325,12 +325,6 @@ void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg);
struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
struct msi_domain_info *info,
struct irq_domain *parent);
irq_hw_number_t pci_msi_domain_calc_hwirq(struct pci_dev *dev,
struct msi_desc *desc);
int pci_msi_domain_check_cap(struct irq_domain *domain,


@@ -678,9 +678,6 @@ struct pci_error_handlers {
/* MMIO has been re-enabled, but not DMA */
pci_ers_result_t (*mmio_enabled)(struct pci_dev *dev);
/* PCI slot has been reset */
pci_ers_result_t (*slot_reset)(struct pci_dev *dev);
@@ -1308,14 +1305,7 @@ void pci_msix_shutdown(struct pci_dev *dev);
void pci_disable_msix(struct pci_dev *dev);
void pci_restore_msi_state(struct pci_dev *dev);
int pci_msi_enabled(void);
int pci_enable_msi(struct pci_dev *dev);
int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
int minvec, int maxvec);
static inline int pci_enable_msix_exact(struct pci_dev *dev,
@@ -1346,10 +1336,7 @@ static inline void pci_msix_shutdown(struct pci_dev *dev) { }
static inline void pci_disable_msix(struct pci_dev *dev) { }
static inline void pci_restore_msi_state(struct pci_dev *dev) { }
static inline int pci_msi_enabled(void) { return 0; }
static inline int pci_enable_msi(struct pci_dev *dev)
{ return -ENOSYS; }
static inline int pci_enable_msix_range(struct pci_dev *dev,
struct msix_entry *entries, int minvec, int maxvec)
@@ -1425,8 +1412,6 @@ static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
static inline void pcie_ecrc_get_policy(char *str) { }
#endif
#ifdef CONFIG_HT_IRQ
/* The functions a driver should call */
int ht_create_irq(struct pci_dev *dev, int idx);


@@ -2516,6 +2516,8 @@
#define PCI_DEVICE_ID_KORENIX_JETCARDF2 0x1700
#define PCI_DEVICE_ID_KORENIX_JETCARDF3 0x17ff
#define PCI_VENDOR_ID_HUAWEI 0x19e5
#define PCI_VENDOR_ID_NETRONOME 0x19ee
#define PCI_DEVICE_ID_NETRONOME_NFP3200 0x3200
#define PCI_DEVICE_ID_NETRONOME_NFP3240 0x3240


@@ -682,6 +682,7 @@
#define PCI_EXT_CAP_ID_PMUX 0x1A /* Protocol Multiplexing */
#define PCI_EXT_CAP_ID_PASID 0x1B /* Process Address Space ID */
#define PCI_EXT_CAP_ID_DPC 0x1D /* Downstream Port Containment */
#define PCI_EXT_CAP_ID_L1SS 0x1E /* L1 PM Substates */
#define PCI_EXT_CAP_ID_PTM 0x1F /* Precision Time Measurement */
#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PTM
@@ -973,6 +974,7 @@
#define PCI_EXP_DPC_STATUS 8 /* DPC Status */
#define PCI_EXP_DPC_STATUS_TRIGGER 0x01 /* Trigger Status */
#define PCI_EXP_DPC_STATUS_INTERRUPT 0x08 /* Interrupt Status */
#define PCI_EXP_DPC_RP_BUSY 0x10 /* Root Port Busy */
#define PCI_EXP_DPC_SOURCE_ID 10 /* DPC Source Identifier */
@@ -985,4 +987,19 @@
#define PCI_PTM_CTRL_ENABLE 0x00000001 /* PTM enable */
#define PCI_PTM_CTRL_ROOT 0x00000002 /* Root select */
/* L1 PM Substates */
#define PCI_L1SS_CAP 4 /* capability register */
#define PCI_L1SS_CAP_PCIPM_L1_2 1 /* PCI PM L1.2 Support */
#define PCI_L1SS_CAP_PCIPM_L1_1 2 /* PCI PM L1.1 Support */
#define PCI_L1SS_CAP_ASPM_L1_2 4 /* ASPM L1.2 Support */
#define PCI_L1SS_CAP_ASPM_L1_1 8 /* ASPM L1.1 Support */
#define PCI_L1SS_CAP_L1_PM_SS 16 /* L1 PM Substates Support */
#define PCI_L1SS_CTL1 8 /* Control Register 1 */
#define PCI_L1SS_CTL1_PCIPM_L1_2 1 /* PCI PM L1.2 Enable */
#define PCI_L1SS_CTL1_PCIPM_L1_1 2 /* PCI PM L1.1 Enable */
#define PCI_L1SS_CTL1_ASPM_L1_2 4 /* ASPM L1.2 Enable */
#define PCI_L1SS_CTL1_ASPM_L1_1 8 /* ASPM L1.1 Enable */
#define PCI_L1SS_CTL1_L1SS_MASK 0x0000000F
#define PCI_L1SS_CTL2 0xC /* Control Register 2 */
#endif /* LINUX_PCI_REGS_H */
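/*
 * Putting the new defines together, discovery and inspection of the L1 PM
 * Substates capability follows the usual extended-capability pattern (a
 * sketch mirroring the aspm.c changes above):
 *
 *	int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS);
 *	if (pos) {
 *		u32 cap;
 *		pci_read_config_dword(pdev, pos + PCI_L1SS_CAP, &cap);
 *		if (cap & PCI_L1SS_CAP_ASPM_L1_2)
 *			both link partners support ASPM L1.2
 *	}
 */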