Merge 3.19-rc7 into staging-next

We want those fixes in here for testing.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Author: Greg Kroah-Hartman
Date:   2015-02-02 08:41:02 -08:00
commit 178cf7de6f
471 changed files with 4859 additions and 3616 deletions

View File

@ -1,60 +0,0 @@
What: /sys/class/leds/dell::kbd_backlight/als_setting
Date: December 2014
KernelVersion: 3.19
Contact: Gabriele Mazzotta <gabriele.mzt@gmail.com>,
Pali Rohár <pali.rohar@gmail.com>
Description:
This file allows to control the automatic keyboard
illumination mode on some systems that have an ambient
light sensor. Write 1 to this file to enable the auto
mode, 0 to disable it.
What: /sys/class/leds/dell::kbd_backlight/start_triggers
Date: December 2014
KernelVersion: 3.19
Contact: Gabriele Mazzotta <gabriele.mzt@gmail.com>,
Pali Rohár <pali.rohar@gmail.com>
Description:
This file allows to control the input triggers that
turn on the keyboard backlight illumination that is
disabled because of inactivity.
Read the file to see the triggers available. The ones
enabled are preceded by '+', those disabled by '-'.
To enable a trigger, write its name preceded by '+' to
this file. To disable a trigger, write its name preceded
by '-' instead.
For example, to enable the keyboard as trigger run:
echo +keyboard > /sys/class/leds/dell::kbd_backlight/start_triggers
To disable it:
echo -keyboard > /sys/class/leds/dell::kbd_backlight/start_triggers
Note that not all the available triggers can be configured.
What: /sys/class/leds/dell::kbd_backlight/stop_timeout
Date: December 2014
KernelVersion: 3.19
Contact: Gabriele Mazzotta <gabriele.mzt@gmail.com>,
Pali Rohár <pali.rohar@gmail.com>
Description:
This file allows to specify the interval after which the
keyboard illumination is disabled because of inactivity.
The timeouts are expressed in seconds, minutes, hours and
days, for which the symbols are 's', 'm', 'h' and 'd'
respectively.
To configure the timeout, write to this file a value along
with any the above units. If no unit is specified, the value
is assumed to be expressed in seconds.
For example, to set the timeout to 10 minutes run:
echo 10m > /sys/class/leds/dell::kbd_backlight/stop_timeout
Note that when this file is read, the returned value might be
expressed in a different unit than the one used when the timeout
was set.
Also note that only some timeouts are supported and that
some systems might fall back to a specific timeout in case
an invalid timeout is written to this file.
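For illustration only (this sketch is not part of the ABI file or of the commit), a minimal C example of how a userspace tool might drive the three attributes documented above; the helper is hypothetical and error handling is kept to a minimum:

/* Hypothetical userspace helper, shown for illustration; not kernel code. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define KBD_LED "/sys/class/leds/dell::kbd_backlight/"

static int write_attr(const char *attr, const char *val)
{
	char path[256];
	int fd, ret;

	snprintf(path, sizeof(path), KBD_LED "%s", attr);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	ret = write(fd, val, strlen(val)) < 0 ? -1 : 0;
	close(fd);
	return ret;
}

int main(void)
{
	write_attr("als_setting", "1");             /* enable automatic (ALS) mode */
	write_attr("start_triggers", "+keyboard");  /* keyboard activity re-enables the backlight */
	write_attr("stop_timeout", "10m");          /* switch off after 10 minutes of inactivity */
	return 0;
}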

View File

@ -23,7 +23,7 @@ Required nodes:
range of 0x200 bytes.
- syscon: the root node of the Integrator platforms must have a
system controller node pointong to the control registers,
system controller node pointing to the control registers,
with the compatible string
"arm,integrator-ap-syscon"
"arm,integrator-cp-syscon"

View File

@ -0,0 +1,72 @@
* QEMU Firmware Configuration bindings for ARM
QEMU's arm-softmmu and aarch64-softmmu emulation / virtualization targets
provide the following Firmware Configuration interface on the "virt" machine
type:
- A write-only, 16-bit wide selector (or control) register,
- a read-write, 64-bit wide data register.
QEMU exposes the control and data register to ARM guests as memory mapped
registers; their location is communicated to the guest's UEFI firmware in the
DTB that QEMU places at the bottom of the guest's DRAM.
The guest writes a selector value (a key) to the selector register, and then
can read the corresponding data (produced by QEMU) via the data register. If
the selected entry is writable, the guest can rewrite it through the data
register.
The selector register takes keys in big endian byte order.
The data register allows accesses with 8, 16, 32 and 64-bit width (only at
offset 0 of the register). Accesses larger than a byte are interpreted as
arrays, bundled together only for better performance. The bytes constituting
such a word, in increasing address order, correspond to the bytes that would
have been transferred by byte-wide accesses in chronological order.
The interface allows guest firmware to download various parameters and blobs
that affect how the firmware works and what tables it installs for the guest
OS. For example, boot order of devices, ACPI tables, SMBIOS tables, kernel and
initrd images for direct kernel booting, virtual machine UUID, SMP information,
virtual NUMA topology, and so on.
The authoritative registry of the valid selector values and their meanings is
the QEMU source code; the structure of the data blobs corresponding to the
individual key values is also defined in the QEMU source code.
The presence of the registers can be verified by selecting the "signature" blob
with key 0x0000, and reading four bytes from the data register. The returned
signature is "QEMU".
The outermost protocol (involving the write / read sequences of the control and
data registers) is expected to be versioned, and/or described by feature bits.
The interface revision / feature bitmap can be retrieved with key 0x0001. The
blob to be read from the data register has size 4, and it is to be interpreted
as a uint32_t value in little endian byte order. The current value
(corresponding to the above outer protocol) is zero.
The guest kernel is not expected to use these registers (although it is
certainly allowed to); the device tree bindings are documented here because
this is where device tree bindings reside in general.
Required properties:
- compatible: "qemu,fw-cfg-mmio".
- reg: the MMIO region used by the device.
* Bytes 0x0 to 0x7 cover the data register.
* Bytes 0x8 to 0x9 cover the selector register.
* Further registers may be appended to the region in case of future interface
revisions / feature bits.
Example:
/ {
#size-cells = <0x2>;
#address-cells = <0x2>;
fw-cfg@9020000 {
compatible = "qemu,fw-cfg-mmio";
reg = <0x0 0x9020000 0x0 0xa>;
};
};
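For context only (this sketch is neither part of the binding nor of QEMU), a bare-metal C fragment showing the probe sequence the text above describes: select key 0x0000, read four signature bytes, then fetch the interface revision with key 0x0001. The base address is taken from the example node and a little-endian guest is assumed.

/* Hypothetical bare-metal sketch; register offsets follow the binding text. */
#include <stdint.h>

#define FW_CFG_BASE      0x09020000UL   /* from the example node above */
#define FW_CFG_DATA      (FW_CFG_BASE + 0x0)
#define FW_CFG_SELECTOR  (FW_CFG_BASE + 0x8)

#define FW_CFG_SIGNATURE 0x0000         /* "QEMU" */
#define FW_CFG_ID        0x0001         /* interface revision / feature bitmap */

static inline void fw_cfg_select(uint16_t key)
{
	/* The selector register takes keys in big-endian byte order;
	 * swap the bytes on a little-endian CPU. */
	*(volatile uint16_t *)FW_CFG_SELECTOR = (uint16_t)((key >> 8) | (key << 8));
}

static int fw_cfg_present(void)
{
	char sig[4];
	int i;

	fw_cfg_select(FW_CFG_SIGNATURE);
	for (i = 0; i < 4; i++)          /* byte-wide reads, chronological order */
		sig[i] = *(volatile uint8_t *)FW_CFG_DATA;

	return sig[0] == 'Q' && sig[1] == 'E' && sig[2] == 'M' && sig[3] == 'U';
}

static uint32_t fw_cfg_revision(void)
{
	fw_cfg_select(FW_CFG_ID);
	/* 4-byte blob, interpreted as a little-endian uint32_t per the text. */
	return *(volatile uint32_t *)FW_CFG_DATA;
}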

View File

@ -19,7 +19,7 @@ type of the connections, they just map their existence. Specific properties
may be described by specialized bindings depending on the type of connection.
To see how this binding applies to video pipelines, for example, see
Documentation/device-tree/bindings/media/video-interfaces.txt.
Documentation/devicetree/bindings/media/video-interfaces.txt.
Here the ports describe data interfaces, and the links between them are
the connecting data buses. A single port with multiple connections can
correspond to multiple devices being connected to the same physical bus.

View File

@ -31,7 +31,7 @@ i2c0: i2c@fed40000 {
compatible = "st,comms-ssc4-i2c";
reg = <0xfed40000 0x110>;
interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&CLK_S_ICN_REG_0>;
clocks = <&clk_s_a0_ls CLK_ICN_REG>;
clock-names = "ssc";
clock-frequency = <400000>;
pinctrl-names = "default";

View File

@ -48,6 +48,7 @@ dallas,ds3232 Extremely Accurate I²C RTC with Integrated Crystal and SRAM
dallas,ds4510 CPU Supervisor with Nonvolatile Memory and Programmable I/O
dallas,ds75 Digital Thermometer and Thermostat
dlg,da9053 DA9053: flexible system level PMIC with multicore support
dlg,da9063 DA9063: system PMIC for quad-core application processors
epson,rx8025 High-Stability. I2C-Bus INTERFACE REAL TIME CLOCK MODULE
epson,rx8581 I2C-BUS INTERFACE REAL TIME CLOCK MODULE
fsl,mag3110 MAG3110: Xtrinsic High Accuracy, 3D Magnetometer

View File

@ -4,7 +4,8 @@ This file provides information, what the device node
for the davinci_emac interface contains.
Required properties:
- compatible: "ti,davinci-dm6467-emac" or "ti,am3517-emac"
- compatible: "ti,davinci-dm6467-emac", "ti,am3517-emac" or
"ti,dm816-emac"
- reg: Offset and length of the register set for the device
- ti,davinci-ctrl-reg-offset: offset to control register
- ti,davinci-ctrl-mod-reg-offset: offset to control module register

View File

@ -9,7 +9,6 @@ ad Avionic Design GmbH
adapteva Adapteva, Inc.
adi Analog Devices, Inc.
aeroflexgaisler Aeroflex Gaisler AB
ak Asahi Kasei Corp.
allwinner Allwinner Technology Co., Ltd.
altr Altera Corp.
amcc Applied Micro Circuits Corporation (APM, formally AMCC)
@ -20,6 +19,7 @@ amstaos AMS-Taos Inc.
apm Applied Micro Circuits Corporation (APM)
arm ARM Ltd.
armadeus ARMadeus Systems SARL
asahi-kasei Asahi Kasei Corp.
atmel Atmel Corporation
auo AU Optronics Corporation
avago Avago Technologies
@ -128,6 +128,7 @@ pixcir PIXCIR MICROELECTRONICS Co., Ltd
powervr PowerVR (deprecated, use img)
qca Qualcomm Atheros, Inc.
qcom Qualcomm Technologies, Inc
qemu QEMU, a generic and open source machine emulator and virtualizer
qnap QNAP Systems, Inc.
radxa Radxa
raidsonic RaidSonic Technology GmbH
@ -169,6 +170,7 @@ usi Universal Scientific Industrial Co., Ltd.
v3 V3 Semiconductor
variscite Variscite Ltd.
via VIA Technologies, Inc.
virtio Virtual I/O Device Specification, developed by the OASIS consortium
voipac Voipac Technologies s.r.o.
winbond Winbond Electronics corp.
wlf Wolfson Microelectronics

View File

@ -1277,6 +1277,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
i8042.notimeout [HW] Ignore timeout condition signalled by controller
i8042.reset [HW] Reset the controller during init and cleanup
i8042.unlock [HW] Unlock (ignore) the keylock
i8042.kbdreset [HW] Reset device connected to KBD port
i810= [HW,DRM]

View File

@ -696,7 +696,7 @@ L: alsa-devel@alsa-project.org (moderated for non-subscribers)
W: http://blackfin.uclinux.org/
S: Supported
F: sound/soc/blackfin/*
ANALOG DEVICES INC IIO DRIVERS
M: Lars-Peter Clausen <lars@metafoo.de>
M: Michael Hennerich <Michael.Hennerich@analog.com>
@ -708,6 +708,16 @@ X: drivers/iio/*/adjd*
F: drivers/staging/iio/*/ad*
F: staging/iio/trigger/iio-trig-bfin-timer.c
ANDROID DRIVERS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
M: Arve Hjønnevåg <arve@android.com>
M: Riley Andrews <riandrews@android.com>
T: git git://git.kernel.org/pub/scm/linux/kernel/gregkh/staging.git
L: devel@driverdev.osuosl.org
S: Supported
F: drivers/android/
F: drivers/staging/android/
AOA (Apple Onboard Audio) ALSA DRIVER
M: Johannes Berg <johannes@sipsolutions.net>
L: linuxppc-dev@lists.ozlabs.org
@ -754,13 +764,6 @@ L: linux-media@vger.kernel.org
S: Maintained
F: drivers/media/i2c/aptina-pll.*
ARASAN COMPACT FLASH PATA CONTROLLER
M: Viresh Kumar <viresh.linux@gmail.com>
L: linux-ide@vger.kernel.org
S: Maintained
F: include/linux/pata_arasan_cf_data.h
F: drivers/ata/pata_arasan_cf.c
ARC FRAMEBUFFER DRIVER
M: Jaya Kumar <jayalk@intworks.biz>
S: Maintained
@ -2346,7 +2349,8 @@ CAN NETWORK LAYER
M: Oliver Hartkopp <socketcan@hartkopp.net>
L: linux-can@vger.kernel.org
W: http://gitorious.org/linux-can
T: git git://gitorious.org/linux-can/linux-can-next.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
S: Maintained
F: Documentation/networking/can.txt
F: net/can/
@ -2361,7 +2365,8 @@ M: Wolfgang Grandegger <wg@grandegger.com>
M: Marc Kleine-Budde <mkl@pengutronix.de>
L: linux-can@vger.kernel.org
W: http://gitorious.org/linux-can
T: git git://gitorious.org/linux-can/linux-can-next.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
S: Maintained
F: drivers/net/can/
F: include/linux/can/dev.h
@ -4767,14 +4772,14 @@ S: Supported
F: drivers/net/ethernet/ibm/ibmveth.*
IBM Power Virtual SCSI Device Drivers
M: Nathan Fontenot <nfont@linux.vnet.ibm.com>
M: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/ibmvscsi/ibmvscsi*
F: drivers/scsi/ibmvscsi/viosrp.h
IBM Power Virtual FC Device Drivers
M: Brian King <brking@linux.vnet.ibm.com>
M: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/ibmvscsi/ibmvfc*
@ -4942,7 +4947,6 @@ F: include/uapi/linux/inotify.h
INPUT (KEYBOARD, MOUSE, JOYSTICK, TOUCHSCREEN) DRIVERS
M: Dmitry Torokhov <dmitry.torokhov@gmail.com>
M: Dmitry Torokhov <dtor@mail.ru>
L: linux-input@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-input/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
@ -4964,7 +4968,6 @@ K: \b(ABS|SYN)_MT_
INTEL C600 SERIES SAS CONTROLLER DRIVER
M: Intel SCU Linux support <intel-linux-scu@intel.com>
M: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
M: Dave Jiang <dave.jiang@intel.com>
L: linux-scsi@vger.kernel.org
T: git git://git.code.sf.net/p/intel-sas/isci
S: Supported
@ -5715,6 +5718,49 @@ F: drivers/lguest/
F: include/linux/lguest*.h
F: tools/lguest/
LIBATA SUBSYSTEM (Serial and Parallel ATA drivers)
M: Tejun Heo <tj@kernel.org>
L: linux-ide@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
S: Maintained
F: drivers/ata/
F: include/linux/ata.h
F: include/linux/libata.h
LIBATA PATA ARASAN COMPACT FLASH CONTROLLER
M: Viresh Kumar <viresh.linux@gmail.com>
L: linux-ide@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
S: Maintained
F: include/linux/pata_arasan_cf_data.h
F: drivers/ata/pata_arasan_cf.c
LIBATA PATA DRIVERS
M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
M: Tejun Heo <tj@kernel.org>
L: linux-ide@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
S: Maintained
F: drivers/ata/pata_*.c
F: drivers/ata/ata_generic.c
LIBATA SATA AHCI PLATFORM devices support
M: Hans de Goede <hdegoede@redhat.com>
M: Tejun Heo <tj@kernel.org>
L: linux-ide@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
S: Maintained
F: drivers/ata/ahci_platform.c
F: drivers/ata/libahci_platform.c
F: include/linux/ahci_platform.h
LIBATA SATA PROMISE TX2/TX4 CONTROLLER DRIVER
M: Mikael Pettersson <mikpelinux@gmail.com>
L: linux-ide@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
S: Maintained
F: drivers/ata/sata_promise.*
LIBLOCKDEP
M: Sasha Levin <sasha.levin@oracle.com>
S: Maintained
@ -6999,14 +7045,12 @@ OPEN FIRMWARE AND FLATTENED DEVICE TREE
M: Grant Likely <grant.likely@linaro.org>
M: Rob Herring <robh+dt@kernel.org>
L: devicetree@vger.kernel.org
W: http://fdt.secretlab.ca
T: git git://git.secretlab.ca/git/linux-2.6.git
W: http://www.devicetree.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux.git
S: Maintained
F: drivers/of/
F: include/linux/of*.h
F: scripts/dtc/
K: of_get_property
K: of_match_table
OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS
M: Rob Herring <robh+dt@kernel.org>
@ -7251,7 +7295,7 @@ S: Maintained
F: drivers/pci/host/*layerscape*
PCI DRIVER FOR IMX6
M: Richard Zhu <r65037@freescale.com>
M: Richard Zhu <Richard.Zhu@freescale.com>
M: Lucas Stach <l.stach@pengutronix.de>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -7421,6 +7465,7 @@ F: drivers/crypto/picoxcell*
PIN CONTROL SUBSYSTEM
M: Linus Walleij <linus.walleij@linaro.org>
L: linux-gpio@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl.git
S: Maintained
F: drivers/pinctrl/
F: include/linux/pinctrl/
@ -7588,12 +7633,6 @@ W: http://wireless.kernel.org/en/users/Drivers/p54
S: Obsolete
F: drivers/net/wireless/prism54/
PROMISE SATA TX2/TX4 CONTROLLER LIBATA DRIVER
M: Mikael Pettersson <mikpelinux@gmail.com>
L: linux-ide@vger.kernel.org
S: Maintained
F: drivers/ata/sata_promise.*
PS3 NETWORK SUPPORT
M: Geoff Levand <geoff@infradead.org>
L: netdev@vger.kernel.org
@ -8567,25 +8606,6 @@ S: Maintained
F: drivers/misc/phantom.c
F: include/uapi/linux/phantom.h
SERIAL ATA (SATA) SUBSYSTEM
M: Tejun Heo <tj@kernel.org>
L: linux-ide@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
S: Supported
F: drivers/ata/
F: include/linux/ata.h
F: include/linux/libata.h
SERIAL ATA AHCI PLATFORM devices support
M: Hans de Goede <hdegoede@redhat.com>
M: Tejun Heo <tj@kernel.org>
L: linux-ide@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
S: Supported
F: drivers/ata/ahci_platform.c
F: drivers/ata/libahci_platform.c
F: include/linux/ahci_platform.h
SERVER ENGINES 10Gbps iSCSI - BladeEngine 2 DRIVER
M: Jayamohan Kallickal <jayamohan.kallickal@emulex.com>
L: linux-scsi@vger.kernel.org
@ -10176,6 +10196,7 @@ USERSPACE I/O (UIO)
M: "Hans J. Koch" <hjk@hansjkoch.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
F: Documentation/DocBook/uio-howto.tmpl
F: drivers/uio/
F: include/linux/uio*.h

View File

@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc7
NAME = Diseased Newt
# *DOCUMENTATION*

View File

@ -285,8 +285,12 @@ pcibios_claim_one_bus(struct pci_bus *b)
if (r->parent || !r->start || !r->flags)
continue;
if (pci_has_flag(PCI_PROBE_ONLY) ||
(r->flags & IORESOURCE_PCI_FIXED))
pci_claim_resource(dev, i);
(r->flags & IORESOURCE_PCI_FIXED)) {
if (pci_claim_resource(dev, i) == 0)
continue;
pci_claim_bridge_resource(dev, i);
}
}
}

View File

@ -156,6 +156,8 @@ retry:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -161,6 +161,8 @@ good_area:
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;

View File

@ -1257,6 +1257,8 @@
tx-fifo-resize;
maximum-speed = "super-speed";
dr_mode = "otg";
snps,dis_u3_susphy_quirk;
snps,dis_u2_susphy_quirk;
};
};
@ -1278,6 +1280,8 @@
tx-fifo-resize;
maximum-speed = "high-speed";
dr_mode = "otg";
snps,dis_u3_susphy_quirk;
snps,dis_u2_susphy_quirk;
};
};
@ -1299,6 +1303,8 @@
tx-fifo-resize;
maximum-speed = "high-speed";
dr_mode = "otg";
snps,dis_u3_susphy_quirk;
snps,dis_u2_susphy_quirk;
};
};

View File

@ -369,7 +369,7 @@
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
#pwm-cells = <2>;
reg = <0x53fa0000 0x4000>;
clocks = <&clks 106>, <&clks 36>;
clocks = <&clks 106>, <&clks 52>;
clock-names = "ipg", "per";
interrupts = <36>;
};
@ -388,7 +388,7 @@
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
#pwm-cells = <2>;
reg = <0x53fa8000 0x4000>;
clocks = <&clks 107>, <&clks 36>;
clocks = <&clks 107>, <&clks 52>;
clock-names = "ipg", "per";
interrupts = <41>;
};
@ -429,7 +429,7 @@
pwm4: pwm@53fc8000 {
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
reg = <0x53fc8000 0x4000>;
clocks = <&clks 108>, <&clks 36>;
clocks = <&clks 108>, <&clks 52>;
clock-names = "ipg", "per";
interrupts = <42>;
};
@ -476,7 +476,7 @@
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
#pwm-cells = <2>;
reg = <0x53fe0000 0x4000>;
clocks = <&clks 105>, <&clks 36>;
clocks = <&clks 105>, <&clks 52>;
clock-names = "ipg", "per";
interrupts = <26>;
};

View File

@ -166,12 +166,12 @@
#address-cells = <1>;
#size-cells = <0>;
ethphy1: ethernet-phy@0 {
reg = <0>;
ethphy1: ethernet-phy@1 {
reg = <1>;
};
ethphy2: ethernet-phy@1 {
reg = <1>;
ethphy2: ethernet-phy@2 {
reg = <2>;
};
};
};

View File

@ -17,14 +17,6 @@
aliases {
ethernet0 = &emac;
serial0 = &uart0;
serial1 = &uart1;
serial2 = &uart2;
serial3 = &uart3;
serial4 = &uart4;
serial5 = &uart5;
serial6 = &uart6;
serial7 = &uart7;
};
chosen {
@ -39,6 +31,14 @@
<&ahb_gates 44>;
status = "disabled";
};
framebuffer@1 {
compatible = "allwinner,simple-framebuffer", "simple-framebuffer";
allwinner,pipeline = "de_fe0-de_be0-lcd0-hdmi";
clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 43>,
<&ahb_gates 44>, <&ahb_gates 46>;
status = "disabled";
};
};
cpus {
@ -438,8 +438,8 @@
reg-names = "phy_ctrl", "pmu1", "pmu2";
clocks = <&usb_clk 8>;
clock-names = "usb_phy";
resets = <&usb_clk 1>, <&usb_clk 2>;
reset-names = "usb1_reset", "usb2_reset";
resets = <&usb_clk 0>, <&usb_clk 1>, <&usb_clk 2>;
reset-names = "usb0_reset", "usb1_reset", "usb2_reset";
status = "disabled";
};

View File

@ -55,6 +55,12 @@
model = "Olimex A10s-Olinuxino Micro";
compatible = "olimex,a10s-olinuxino-micro", "allwinner,sun5i-a10s";
aliases {
serial0 = &uart0;
serial1 = &uart2;
serial2 = &uart3;
};
soc@01c00000 {
emac: ethernet@01c0b000 {
pinctrl-names = "default";

View File

@ -18,10 +18,6 @@
aliases {
ethernet0 = &emac;
serial0 = &uart0;
serial1 = &uart1;
serial2 = &uart2;
serial3 = &uart3;
};
chosen {
@ -390,8 +386,8 @@
reg-names = "phy_ctrl", "pmu1";
clocks = <&usb_clk 8>;
clock-names = "usb_phy";
resets = <&usb_clk 1>;
reset-names = "usb1_reset";
resets = <&usb_clk 0>, <&usb_clk 1>;
reset-names = "usb0_reset", "usb1_reset";
status = "disabled";
};

View File

@ -53,6 +53,10 @@
model = "HSG H702";
compatible = "hsg,h702", "allwinner,sun5i-a13";
aliases {
serial0 = &uart1;
};
soc@01c00000 {
mmc0: mmc@01c0f000 {
pinctrl-names = "default";

View File

@ -54,6 +54,10 @@
model = "Olimex A13-Olinuxino Micro";
compatible = "olimex,a13-olinuxino-micro", "allwinner,sun5i-a13";
aliases {
serial0 = &uart1;
};
soc@01c00000 {
mmc0: mmc@01c0f000 {
pinctrl-names = "default";

View File

@ -55,6 +55,10 @@
model = "Olimex A13-Olinuxino";
compatible = "olimex,a13-olinuxino", "allwinner,sun5i-a13";
aliases {
serial0 = &uart1;
};
soc@01c00000 {
mmc0: mmc@01c0f000 {
pinctrl-names = "default";

View File

@ -16,11 +16,6 @@
/ {
interrupt-parent = <&intc>;
aliases {
serial0 = &uart1;
serial1 = &uart3;
};
cpus {
#address-cells = <1>;
#size-cells = <0>;
@ -349,8 +344,8 @@
reg-names = "phy_ctrl", "pmu1";
clocks = <&usb_clk 8>;
clock-names = "usb_phy";
resets = <&usb_clk 1>;
reset-names = "usb1_reset";
resets = <&usb_clk 0>, <&usb_clk 1>;
reset-names = "usb0_reset", "usb1_reset";
status = "disabled";
};

View File

@ -53,12 +53,6 @@
interrupt-parent = <&gic>;
aliases {
serial0 = &uart0;
serial1 = &uart1;
serial2 = &uart2;
serial3 = &uart3;
serial4 = &uart4;
serial5 = &uart5;
ethernet0 = &gmac;
};

View File

@ -55,6 +55,12 @@
model = "LeMaker Banana Pi";
compatible = "lemaker,bananapi", "allwinner,sun7i-a20";
aliases {
serial0 = &uart0;
serial1 = &uart3;
serial2 = &uart7;
};
soc@01c00000 {
spi0: spi@01c05000 {
pinctrl-names = "default";

View File

@ -19,6 +19,14 @@
model = "Merrii A20 Hummingbird";
compatible = "merrii,a20-hummingbird", "allwinner,sun7i-a20";
aliases {
serial0 = &uart0;
serial1 = &uart2;
serial2 = &uart3;
serial3 = &uart4;
serial4 = &uart5;
};
soc@01c00000 {
mmc0: mmc@01c0f000 {
pinctrl-names = "default";

View File

@ -20,6 +20,9 @@
compatible = "olimex,a20-olinuxino-micro", "allwinner,sun7i-a20";
aliases {
serial0 = &uart0;
serial1 = &uart6;
serial2 = &uart7;
spi0 = &spi1;
spi1 = &spi2;
};

View File

@ -54,14 +54,6 @@
aliases {
ethernet0 = &gmac;
serial0 = &uart0;
serial1 = &uart1;
serial2 = &uart2;
serial3 = &uart3;
serial4 = &uart4;
serial5 = &uart5;
serial6 = &uart6;
serial7 = &uart7;
};
chosen {

View File

@ -55,6 +55,10 @@
model = "Ippo Q8H Dual Core Tablet (v5)";
compatible = "ippo,q8h-v5", "allwinner,sun8i-a23";
aliases {
serial0 = &r_uart;
};
chosen {
bootargs = "earlyprintk console=ttyS0,115200";
};

View File

@ -52,15 +52,6 @@
/ {
interrupt-parent = <&gic>;
aliases {
serial0 = &uart0;
serial1 = &uart1;
serial2 = &uart2;
serial3 = &uart3;
serial4 = &uart4;
serial5 = &r_uart;
};
cpus {
#address-cells = <1>;
#size-cells = <0>;

View File

@ -54,6 +54,11 @@
model = "Merrii A80 Optimus Board";
compatible = "merrii,a80-optimus", "allwinner,sun9i-a80";
aliases {
serial0 = &uart0;
serial1 = &uart4;
};
chosen {
bootargs = "earlyprintk console=ttyS0,115200";
};

View File

@ -52,16 +52,6 @@
/ {
interrupt-parent = <&gic>;
aliases {
serial0 = &uart0;
serial1 = &uart1;
serial2 = &uart2;
serial3 = &uart3;
serial4 = &uart4;
serial5 = &uart5;
serial6 = &r_uart;
};
cpus {
#address-cells = <1>;
#size-cells = <0>;

View File

@ -406,7 +406,7 @@
clock-frequency = <400000>;
magnetometer@c {
compatible = "ak,ak8975";
compatible = "asahi-kasei,ak8975";
reg = <0xc>;
interrupt-parent = <&gpio>;
interrupts = <TEGRA_GPIO(N, 5) IRQ_TYPE_LEVEL_HIGH>;

View File

@ -38,6 +38,16 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr = HCR_GUEST_MASK;
}
static inline unsigned long vcpu_get_hcr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.hcr;
}
static inline void vcpu_set_hcr(struct kvm_vcpu *vcpu, unsigned long hcr)
{
vcpu->arch.hcr = hcr;
}
static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
{
return 1;

View File

@ -125,9 +125,6 @@ struct kvm_vcpu_arch {
* Anything that is not used directly from assembly code goes
* here.
*/
/* dcache set/way operation pending */
int last_pcpu;
cpumask_t require_dcache_flush;
/* Don't run the guest on this vcpu */
bool pause;

View File

@ -44,6 +44,7 @@
#ifndef __ASSEMBLY__
#include <linux/highmem.h>
#include <asm/cacheflush.h>
#include <asm/pgalloc.h>
@ -161,13 +162,10 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
return (vcpu->arch.cp15[c1_SCTLR] & 0b101) == 0b101;
}
static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
unsigned long size,
bool ipa_uncached)
static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
unsigned long size,
bool ipa_uncached)
{
if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
kvm_flush_dcache_to_poc((void *)hva, size);
/*
* If we are going to insert an instruction page and the icache is
* either VIPT or PIPT, there is a potential problem where the host
@ -179,18 +177,77 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
*
* VIVT caches are tagged using both the ASID and the VMID and doesn't
* need any kind of flushing (DDI 0406C.b - Page B3-1392).
*
* We need to do this through a kernel mapping (using the
* user-space mapping has proved to be the wrong
* solution). For that, we need to kmap one page at a time,
* and iterate over the range.
*/
if (icache_is_pipt()) {
__cpuc_coherent_user_range(hva, hva + size);
} else if (!icache_is_vivt_asid_tagged()) {
bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached;
VM_BUG_ON(size & ~PAGE_MASK);
if (!need_flush && !icache_is_pipt())
goto vipt_cache;
while (size) {
void *va = kmap_atomic_pfn(pfn);
if (need_flush)
kvm_flush_dcache_to_poc(va, PAGE_SIZE);
if (icache_is_pipt())
__cpuc_coherent_user_range((unsigned long)va,
(unsigned long)va + PAGE_SIZE);
size -= PAGE_SIZE;
pfn++;
kunmap_atomic(va);
}
vipt_cache:
if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
/* any kind of VIPT cache */
__flush_icache_all();
}
}
static inline void __kvm_flush_dcache_pte(pte_t pte)
{
void *va = kmap_atomic(pte_page(pte));
kvm_flush_dcache_to_poc(va, PAGE_SIZE);
kunmap_atomic(va);
}
static inline void __kvm_flush_dcache_pmd(pmd_t pmd)
{
unsigned long size = PMD_SIZE;
pfn_t pfn = pmd_pfn(pmd);
while (size) {
void *va = kmap_atomic_pfn(pfn);
kvm_flush_dcache_to_poc(va, PAGE_SIZE);
pfn++;
size -= PAGE_SIZE;
kunmap_atomic(va);
}
}
static inline void __kvm_flush_dcache_pud(pud_t pud)
{
}
#define kvm_virt_to_phys(x) virt_to_idmap((unsigned long)(x))
void stage2_flush_vm(struct kvm *kvm);
void kvm_set_way_flush(struct kvm_vcpu *vcpu);
void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
#endif /* !__ASSEMBLY__ */

View File

@ -253,21 +253,22 @@
.endm
.macro restore_user_regs, fast = 0, offset = 0
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr
ldr lr, [sp, #\offset + S_PC]! @ get pc
mov r2, sp
ldr r1, [r2, #\offset + S_PSR] @ get calling cpsr
ldr lr, [r2, #\offset + S_PC]! @ get pc
msr spsr_cxsf, r1 @ save in spsr_svc
#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
@ We must avoid clrex due to Cortex-A15 erratum #830321
strex r1, r2, [sp] @ clear the exclusive monitor
strex r1, r2, [r2] @ clear the exclusive monitor
#endif
.if \fast
ldmdb sp, {r1 - lr}^ @ get calling r1 - lr
ldmdb r2, {r1 - lr}^ @ get calling r1 - lr
.else
ldmdb sp, {r0 - lr}^ @ get calling r0 - lr
ldmdb r2, {r0 - lr}^ @ get calling r0 - lr
.endif
mov r0, r0 @ ARMv5T and earlier require a nop
@ after ldm {}^
add sp, sp, #S_FRAME_SIZE - S_PC
add sp, sp, #\offset + S_FRAME_SIZE
movs pc, lr @ return & move spsr_svc into cpsr
.endm

View File

@ -116,8 +116,14 @@ int armpmu_event_set_period(struct perf_event *event)
ret = 1;
}
if (left > (s64)armpmu->max_period)
left = armpmu->max_period;
/*
* Limit the maximum period to prevent the counter value
* from overtaking the one we are about to program. In
* effect we are reducing max_period to account for
* interrupt latency (and we are being very conservative).
*/
if (left > (armpmu->max_period >> 1))
left = armpmu->max_period >> 1;
local64_set(&hwc->prev_count, (u64)-left);

View File

@ -657,10 +657,13 @@ int __init arm_add_memory(u64 start, u64 size)
/*
* Ensure that start/size are aligned to a page boundary.
* Size is appropriately rounded down, start is rounded up.
* Size is rounded down, start is rounded up.
*/
size -= start & ~PAGE_MASK;
aligned_start = PAGE_ALIGN(start);
if (aligned_start > start + size)
size = 0;
else
size -= aligned_start - start;
#ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT
if (aligned_start > ULONG_MAX) {
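To spell out the rounding this hunk introduces into arm_add_memory(), a small standalone sketch (not kernel code; 4 KiB pages and made-up addresses assumed):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE     4096ULL
#define PAGE_MASK     (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

/* Round start up to a page boundary and shrink size to match,
 * clamping to zero when the region is smaller than the adjustment. */
static void align_region(uint64_t *start, uint64_t *size)
{
	uint64_t aligned_start = PAGE_ALIGN(*start);

	if (aligned_start > *start + *size)
		*size = 0;
	else
		*size -= aligned_start - *start;
	*start = aligned_start;
}

int main(void)
{
	uint64_t start = 0x80000800ULL, size = 0x800ULL;

	align_region(&start, &size);
	printf("start=%#llx size=%#llx\n",
	       (unsigned long long)start, (unsigned long long)size);
	return 0;
}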

View File

@ -281,15 +281,6 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu->cpu = cpu;
vcpu->arch.host_cpu_context = this_cpu_ptr(kvm_host_cpu_state);
/*
* Check whether this vcpu requires the cache to be flushed on
* this physical CPU. This is a consequence of doing dcache
* operations by set/way on this vcpu. We do it here to be in
* a non-preemptible section.
*/
if (cpumask_test_and_clear_cpu(cpu, &vcpu->arch.require_dcache_flush))
flush_cache_all(); /* We'd really want v7_flush_dcache_all() */
kvm_arm_set_running_vcpu(vcpu);
}
@ -541,7 +532,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
vcpu->mode = OUTSIDE_GUEST_MODE;
vcpu->arch.last_pcpu = smp_processor_id();
kvm_guest_exit();
trace_kvm_exit(*vcpu_pc(vcpu));
/*

View File

@ -189,82 +189,40 @@ static bool access_l2ectlr(struct kvm_vcpu *vcpu,
return true;
}
/* See note at ARM ARM B1.14.4 */
/*
* See note at ARMv7 ARM B1.14.4 (TL;DR: S/W ops are not easily virtualized).
*/
static bool access_dcsw(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
unsigned long val;
int cpu;
if (!p->is_write)
return read_from_write_only(vcpu, p);
cpu = get_cpu();
cpumask_setall(&vcpu->arch.require_dcache_flush);
cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
/* If we were already preempted, take the long way around */
if (cpu != vcpu->arch.last_pcpu) {
flush_cache_all();
goto done;
}
val = *vcpu_reg(vcpu, p->Rt1);
switch (p->CRm) {
case 6: /* Upgrade DCISW to DCCISW, as per HCR.SWIO */
case 14: /* DCCISW */
asm volatile("mcr p15, 0, %0, c7, c14, 2" : : "r" (val));
break;
case 10: /* DCCSW */
asm volatile("mcr p15, 0, %0, c7, c10, 2" : : "r" (val));
break;
}
done:
put_cpu();
kvm_set_way_flush(vcpu);
return true;
}
/*
* Generic accessor for VM registers. Only called as long as HCR_TVM
* is set.
* is set. If the guest enables the MMU, we stop trapping the VM
* sys_regs and leave it in complete control of the caches.
*
* Used by the cpu-specific code.
*/
static bool access_vm_reg(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
bool access_vm_reg(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
bool was_enabled = vcpu_has_cache_enabled(vcpu);
BUG_ON(!p->is_write);
vcpu->arch.cp15[r->reg] = *vcpu_reg(vcpu, p->Rt1);
if (p->is_64bit)
vcpu->arch.cp15[r->reg + 1] = *vcpu_reg(vcpu, p->Rt2);
return true;
}
/*
* SCTLR accessor. Only called as long as HCR_TVM is set. If the
* guest enables the MMU, we stop trapping the VM sys_regs and leave
* it in complete control of the caches.
*
* Used by the cpu-specific code.
*/
bool access_sctlr(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
access_vm_reg(vcpu, p, r);
if (vcpu_has_cache_enabled(vcpu)) { /* MMU+Caches enabled? */
vcpu->arch.hcr &= ~HCR_TVM;
stage2_flush_vm(vcpu->kvm);
}
kvm_toggle_cache(vcpu, was_enabled);
return true;
}

View File

@ -153,8 +153,8 @@ static inline int cmp_reg(const struct coproc_reg *i1,
#define is64 .is_64 = true
#define is32 .is_64 = false
bool access_sctlr(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r);
bool access_vm_reg(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r);
#endif /* __ARM_KVM_COPROC_LOCAL_H__ */

View File

@ -34,7 +34,7 @@
static const struct coproc_reg a15_regs[] = {
/* SCTLR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
access_sctlr, reset_val, c1_SCTLR, 0x00C50078 },
access_vm_reg, reset_val, c1_SCTLR, 0x00C50078 },
};
static struct kvm_coproc_target_table a15_target_table = {

View File

@ -37,7 +37,7 @@
static const struct coproc_reg a7_regs[] = {
/* SCTLR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
access_sctlr, reset_val, c1_SCTLR, 0x00C50878 },
access_vm_reg, reset_val, c1_SCTLR, 0x00C50878 },
};
static struct kvm_coproc_target_table a7_target_table = {

View File

@ -58,6 +58,26 @@ static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
}
/*
* D-Cache management functions. They take the page table entries by
* value, as they are flushing the cache using the kernel mapping (or
* kmap on 32bit).
*/
static void kvm_flush_dcache_pte(pte_t pte)
{
__kvm_flush_dcache_pte(pte);
}
static void kvm_flush_dcache_pmd(pmd_t pmd)
{
__kvm_flush_dcache_pmd(pmd);
}
static void kvm_flush_dcache_pud(pud_t pud)
{
__kvm_flush_dcache_pud(pud);
}
static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
int min, int max)
{
@ -119,6 +139,26 @@ static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
put_page(virt_to_page(pmd));
}
/*
* Unmapping vs dcache management:
*
* If a guest maps certain memory pages as uncached, all writes will
* bypass the data cache and go directly to RAM. However, the CPUs
* can still speculate reads (not writes) and fill cache lines with
* data.
*
* Those cache lines will be *clean* cache lines though, so a
* clean+invalidate operation is equivalent to an invalidate
* operation, because no cache lines are marked dirty.
*
* Those clean cache lines could be filled prior to an uncached write
* by the guest, and the cache coherent IO subsystem would therefore
* end up writing old data to disk.
*
* This is why right after unmapping a page/section and invalidating
* the corresponding TLBs, we call kvm_flush_dcache_p*() to make sure
* the IO subsystem will never hit in the cache.
*/
static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
phys_addr_t addr, phys_addr_t end)
{
@ -128,9 +168,16 @@ static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
start_pte = pte = pte_offset_kernel(pmd, addr);
do {
if (!pte_none(*pte)) {
pte_t old_pte = *pte;
kvm_set_pte(pte, __pte(0));
put_page(virt_to_page(pte));
kvm_tlb_flush_vmid_ipa(kvm, addr);
/* No need to invalidate the cache for device mappings */
if ((pte_val(old_pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
kvm_flush_dcache_pte(old_pte);
put_page(virt_to_page(pte));
}
} while (pte++, addr += PAGE_SIZE, addr != end);
@ -149,8 +196,13 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
next = kvm_pmd_addr_end(addr, end);
if (!pmd_none(*pmd)) {
if (kvm_pmd_huge(*pmd)) {
pmd_t old_pmd = *pmd;
pmd_clear(pmd);
kvm_tlb_flush_vmid_ipa(kvm, addr);
kvm_flush_dcache_pmd(old_pmd);
put_page(virt_to_page(pmd));
} else {
unmap_ptes(kvm, pmd, addr, next);
@ -173,8 +225,13 @@ static void unmap_puds(struct kvm *kvm, pgd_t *pgd,
next = kvm_pud_addr_end(addr, end);
if (!pud_none(*pud)) {
if (pud_huge(*pud)) {
pud_t old_pud = *pud;
pud_clear(pud);
kvm_tlb_flush_vmid_ipa(kvm, addr);
kvm_flush_dcache_pud(old_pud);
put_page(virt_to_page(pud));
} else {
unmap_pmds(kvm, pud, addr, next);
@ -209,10 +266,9 @@ static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
pte = pte_offset_kernel(pmd, addr);
do {
if (!pte_none(*pte)) {
hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
kvm_flush_dcache_to_poc((void*)hva, PAGE_SIZE);
}
if (!pte_none(*pte) &&
(pte_val(*pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
kvm_flush_dcache_pte(*pte);
} while (pte++, addr += PAGE_SIZE, addr != end);
}
@ -226,12 +282,10 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
do {
next = kvm_pmd_addr_end(addr, end);
if (!pmd_none(*pmd)) {
if (kvm_pmd_huge(*pmd)) {
hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
kvm_flush_dcache_to_poc((void*)hva, PMD_SIZE);
} else {
if (kvm_pmd_huge(*pmd))
kvm_flush_dcache_pmd(*pmd);
else
stage2_flush_ptes(kvm, pmd, addr, next);
}
}
} while (pmd++, addr = next, addr != end);
}
@ -246,12 +300,10 @@ static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
do {
next = kvm_pud_addr_end(addr, end);
if (!pud_none(*pud)) {
if (pud_huge(*pud)) {
hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
kvm_flush_dcache_to_poc((void*)hva, PUD_SIZE);
} else {
if (pud_huge(*pud))
kvm_flush_dcache_pud(*pud);
else
stage2_flush_pmds(kvm, pud, addr, next);
}
}
} while (pud++, addr = next, addr != end);
}
@ -278,7 +330,7 @@ static void stage2_flush_memslot(struct kvm *kvm,
* Go through the stage 2 page tables and invalidate any cache lines
* backing memory already mapped to the VM.
*/
void stage2_flush_vm(struct kvm *kvm)
static void stage2_flush_vm(struct kvm *kvm)
{
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
@ -905,6 +957,12 @@ static bool kvm_is_device_pfn(unsigned long pfn)
return !pfn_valid(pfn);
}
static void coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
unsigned long size, bool uncached)
{
__coherent_cache_guest_page(vcpu, pfn, size, uncached);
}
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_memory_slot *memslot, unsigned long hva,
unsigned long fault_status)
@ -994,8 +1052,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_set_s2pmd_writable(&new_pmd);
kvm_set_pfn_dirty(pfn);
}
coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE,
fault_ipa_uncached);
coherent_cache_guest_page(vcpu, pfn, PMD_SIZE, fault_ipa_uncached);
ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
} else {
pte_t new_pte = pfn_pte(pfn, mem_type);
@ -1003,8 +1060,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_set_s2pte_writable(&new_pte);
kvm_set_pfn_dirty(pfn);
}
coherent_cache_guest_page(vcpu, hva, PAGE_SIZE,
fault_ipa_uncached);
coherent_cache_guest_page(vcpu, pfn, PAGE_SIZE, fault_ipa_uncached);
ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
}
@ -1411,3 +1467,71 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
unmap_stage2_range(kvm, gpa, size);
spin_unlock(&kvm->mmu_lock);
}
/*
* See note at ARMv7 ARM B1.14.4 (TL;DR: S/W ops are not easily virtualized).
*
* Main problems:
* - S/W ops are local to a CPU (not broadcast)
* - We have line migration behind our back (speculation)
* - System caches don't support S/W at all (damn!)
*
* In the face of the above, the best we can do is to try and convert
* S/W ops to VA ops. Because the guest is not allowed to infer the
* S/W to PA mapping, it can only use S/W to nuke the whole cache,
* which is a rather good thing for us.
*
* Also, it is only used when turning caches on/off ("The expected
* usage of the cache maintenance instructions that operate by set/way
* is associated with the cache maintenance instructions associated
* with the powerdown and powerup of caches, if this is required by
* the implementation.").
*
* We use the following policy:
*
* - If we trap a S/W operation, we enable VM trapping to detect
* caches being turned on/off, and do a full clean.
*
* - We flush the caches on both caches being turned on and off.
*
* - Once the caches are enabled, we stop trapping VM ops.
*/
void kvm_set_way_flush(struct kvm_vcpu *vcpu)
{
unsigned long hcr = vcpu_get_hcr(vcpu);
/*
* If this is the first time we do a S/W operation
* (i.e. HCR_TVM not set) flush the whole memory, and set the
* VM trapping.
*
* Otherwise, rely on the VM trapping to wait for the MMU +
* Caches to be turned off. At that point, we'll be able to
* clean the caches again.
*/
if (!(hcr & HCR_TVM)) {
trace_kvm_set_way_flush(*vcpu_pc(vcpu),
vcpu_has_cache_enabled(vcpu));
stage2_flush_vm(vcpu->kvm);
vcpu_set_hcr(vcpu, hcr | HCR_TVM);
}
}
void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
{
bool now_enabled = vcpu_has_cache_enabled(vcpu);
/*
* If switching the MMU+caches on, need to invalidate the caches.
* If switching it off, need to clean the caches.
* Clean + invalidate does the trick always.
*/
if (now_enabled != was_enabled)
stage2_flush_vm(vcpu->kvm);
/* Caches are now on, stop trapping VM ops (until a S/W op) */
if (now_enabled)
vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) & ~HCR_TVM);
trace_kvm_toggle_cache(*vcpu_pc(vcpu), was_enabled, now_enabled);
}

View File

@ -223,6 +223,45 @@ TRACE_EVENT(kvm_hvc,
__entry->vcpu_pc, __entry->r0, __entry->imm)
);
TRACE_EVENT(kvm_set_way_flush,
TP_PROTO(unsigned long vcpu_pc, bool cache),
TP_ARGS(vcpu_pc, cache),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( bool, cache )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->cache = cache;
),
TP_printk("S/W flush at 0x%016lx (cache %s)",
__entry->vcpu_pc, __entry->cache ? "on" : "off")
);
TRACE_EVENT(kvm_toggle_cache,
TP_PROTO(unsigned long vcpu_pc, bool was, bool now),
TP_ARGS(vcpu_pc, was, now),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( bool, was )
__field( bool, now )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->was = was;
__entry->now = now;
),
TP_printk("VM op at 0x%016lx (cache was %s, now %s)",
__entry->vcpu_pc, __entry->was ? "on" : "off",
__entry->now ? "on" : "off")
);
#endif /* _TRACE_KVM_H */
#undef TRACE_INCLUDE_PATH

View File

@ -189,6 +189,13 @@ static void __init armada_375_380_coherency_init(struct device_node *np)
coherency_cpu_base = of_iomap(np, 0);
arch_ioremap_caller = armada_pcie_wa_ioremap_caller;
/*
* We should switch the PL310 to I/O coherency mode only if
* I/O coherency is actually enabled.
*/
if (!coherency_available())
return;
/*
* Add the PL310 property "arm,io-coherent". This makes sure the
* outer sync operation is not used, which allows to
@ -246,9 +253,14 @@ static int coherency_type(void)
return type;
}
/*
* As a precaution, we currently completely disable hardware I/O
* coherency, until enough testing is done with automatic I/O
* synchronization barriers to validate that it is a proper solution.
*/
int coherency_available(void)
{
return coherency_type() != COHERENCY_FABRIC_TYPE_NONE;
return false;
}
int __init coherency_init(void)

View File

@ -211,6 +211,7 @@ extern struct device *omap2_get_iva_device(void);
extern struct device *omap2_get_l3_device(void);
extern struct device *omap4_get_dsp_device(void);
unsigned int omap4_xlate_irq(unsigned int hwirq);
void omap_gic_of_init(void);
#ifdef CONFIG_CACHE_L2X0

View File

@ -256,6 +256,38 @@ static int __init omap4_sar_ram_init(void)
}
omap_early_initcall(omap4_sar_ram_init);
static struct of_device_id gic_match[] = {
{ .compatible = "arm,cortex-a9-gic", },
{ .compatible = "arm,cortex-a15-gic", },
{ },
};
static struct device_node *gic_node;
unsigned int omap4_xlate_irq(unsigned int hwirq)
{
struct of_phandle_args irq_data;
unsigned int irq;
if (!gic_node)
gic_node = of_find_matching_node(NULL, gic_match);
if (WARN_ON(!gic_node))
return hwirq;
irq_data.np = gic_node;
irq_data.args_count = 3;
irq_data.args[0] = 0;
irq_data.args[1] = hwirq - OMAP44XX_IRQ_GIC_START;
irq_data.args[2] = IRQ_TYPE_LEVEL_HIGH;
irq = irq_create_of_mapping(&irq_data);
if (WARN_ON(!irq))
irq = hwirq;
return irq;
}
void __init omap_gic_of_init(void)
{
struct device_node *np;

View File

@ -3534,9 +3534,15 @@ int omap_hwmod_fill_resources(struct omap_hwmod *oh, struct resource *res)
mpu_irqs_cnt = _count_mpu_irqs(oh);
for (i = 0; i < mpu_irqs_cnt; i++) {
unsigned int irq;
if (oh->xlate_irq)
irq = oh->xlate_irq((oh->mpu_irqs + i)->irq);
else
irq = (oh->mpu_irqs + i)->irq;
(res + r)->name = (oh->mpu_irqs + i)->name;
(res + r)->start = (oh->mpu_irqs + i)->irq;
(res + r)->end = (oh->mpu_irqs + i)->irq;
(res + r)->start = irq;
(res + r)->end = irq;
(res + r)->flags = IORESOURCE_IRQ;
r++;
}

View File

@ -676,6 +676,7 @@ struct omap_hwmod {
spinlock_t _lock;
struct list_head node;
struct omap_hwmod_ocp_if *_mpu_port;
unsigned int (*xlate_irq)(unsigned int);
u16 flags;
u8 mpu_rt_idx;
u8 response_lat;

View File

@ -479,6 +479,7 @@ static struct omap_hwmod omap44xx_dma_system_hwmod = {
.class = &omap44xx_dma_hwmod_class,
.clkdm_name = "l3_dma_clkdm",
.mpu_irqs = omap44xx_dma_system_irqs,
.xlate_irq = omap4_xlate_irq,
.main_clk = "l3_div_ck",
.prcm = {
.omap4 = {
@ -640,6 +641,7 @@ static struct omap_hwmod omap44xx_dss_dispc_hwmod = {
.class = &omap44xx_dispc_hwmod_class,
.clkdm_name = "l3_dss_clkdm",
.mpu_irqs = omap44xx_dss_dispc_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_dispc_sdma_reqs,
.main_clk = "dss_dss_clk",
.prcm = {
@ -693,6 +695,7 @@ static struct omap_hwmod omap44xx_dss_dsi1_hwmod = {
.class = &omap44xx_dsi_hwmod_class,
.clkdm_name = "l3_dss_clkdm",
.mpu_irqs = omap44xx_dss_dsi1_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_dsi1_sdma_reqs,
.main_clk = "dss_dss_clk",
.prcm = {
@ -726,6 +729,7 @@ static struct omap_hwmod omap44xx_dss_dsi2_hwmod = {
.class = &omap44xx_dsi_hwmod_class,
.clkdm_name = "l3_dss_clkdm",
.mpu_irqs = omap44xx_dss_dsi2_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_dsi2_sdma_reqs,
.main_clk = "dss_dss_clk",
.prcm = {
@ -784,6 +788,7 @@ static struct omap_hwmod omap44xx_dss_hdmi_hwmod = {
*/
.flags = HWMOD_SWSUP_SIDLE,
.mpu_irqs = omap44xx_dss_hdmi_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_hdmi_sdma_reqs,
.main_clk = "dss_48mhz_clk",
.prcm = {

View File

@ -288,6 +288,7 @@ static struct omap_hwmod omap54xx_dma_system_hwmod = {
.class = &omap54xx_dma_hwmod_class,
.clkdm_name = "dma_clkdm",
.mpu_irqs = omap54xx_dma_system_irqs,
.xlate_irq = omap4_xlate_irq,
.main_clk = "l3_iclk_div",
.prcm = {
.omap4 = {

View File

@ -498,6 +498,7 @@ struct omap_prcm_irq_setup {
u8 nr_irqs;
const struct omap_prcm_irq *irqs;
int irq;
unsigned int (*xlate_irq)(unsigned int);
void (*read_pending_irqs)(unsigned long *events);
void (*ocp_barrier)(void);
void (*save_and_clear_irqen)(u32 *saved_mask);

View File

@ -49,6 +49,7 @@ static struct omap_prcm_irq_setup omap4_prcm_irq_setup = {
.irqs = omap4_prcm_irqs,
.nr_irqs = ARRAY_SIZE(omap4_prcm_irqs),
.irq = 11 + OMAP44XX_IRQ_GIC_START,
.xlate_irq = omap4_xlate_irq,
.read_pending_irqs = &omap44xx_prm_read_pending_irqs,
.ocp_barrier = &omap44xx_prm_ocp_barrier,
.save_and_clear_irqen = &omap44xx_prm_save_and_clear_irqen,
@ -751,8 +752,10 @@ static int omap44xx_prm_late_init(void)
}
/* Once OMAP4 DT is filled as well */
if (irq_num >= 0)
if (irq_num >= 0) {
omap4_prcm_irq_setup.irq = irq_num;
omap4_prcm_irq_setup.xlate_irq = NULL;
}
}
omap44xx_prm_enable_io_wakeup();

View File

@ -187,6 +187,7 @@ int omap_prcm_event_to_irq(const char *name)
*/
void omap_prcm_irq_cleanup(void)
{
unsigned int irq;
int i;
if (!prcm_irq_setup) {
@ -211,7 +212,11 @@ void omap_prcm_irq_cleanup(void)
kfree(prcm_irq_setup->priority_mask);
prcm_irq_setup->priority_mask = NULL;
irq_set_chained_handler(prcm_irq_setup->irq, NULL);
if (prcm_irq_setup->xlate_irq)
irq = prcm_irq_setup->xlate_irq(prcm_irq_setup->irq);
else
irq = prcm_irq_setup->irq;
irq_set_chained_handler(irq, NULL);
if (prcm_irq_setup->base_irq > 0)
irq_free_descs(prcm_irq_setup->base_irq,
@ -259,6 +264,7 @@ int omap_prcm_register_chain_handler(struct omap_prcm_irq_setup *irq_setup)
int offset, i;
struct irq_chip_generic *gc;
struct irq_chip_type *ct;
unsigned int irq;
if (!irq_setup)
return -EINVAL;
@ -298,7 +304,11 @@ int omap_prcm_register_chain_handler(struct omap_prcm_irq_setup *irq_setup)
1 << (offset & 0x1f);
}
irq_set_chained_handler(irq_setup->irq, omap_prcm_irq_handler);
if (irq_setup->xlate_irq)
irq = irq_setup->xlate_irq(irq_setup->irq);
else
irq = irq_setup->irq;
irq_set_chained_handler(irq, omap_prcm_irq_handler);
irq_setup->base_irq = irq_alloc_descs(-1, 0, irq_setup->nr_regs * 32,
0);

View File

@ -66,19 +66,24 @@ void __init omap_pmic_init(int bus, u32 clkrate,
omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1);
}
#ifdef CONFIG_ARCH_OMAP4
void __init omap4_pmic_init(const char *pmic_type,
struct twl4030_platform_data *pmic_data,
struct i2c_board_info *devices, int nr_devices)
{
/* PMIC part*/
unsigned int irq;
omap_mux_init_signal("sys_nirq1", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE);
omap_mux_init_signal("fref_clk0_out.sys_drm_msecure", OMAP_PIN_OUTPUT);
omap_pmic_init(1, 400, pmic_type, 7 + OMAP44XX_IRQ_GIC_START, pmic_data);
irq = omap4_xlate_irq(7 + OMAP44XX_IRQ_GIC_START);
omap_pmic_init(1, 400, pmic_type, irq, pmic_data);
/* Register additional devices on i2c1 bus if needed */
if (devices)
i2c_register_board_info(1, devices, nr_devices);
}
#endif
void __init omap_pmic_late_init(void)
{

View File

@ -18,6 +18,8 @@
#include <linux/gpio_keys.h>
#include <linux/input.h>
#include <linux/interrupt.h>
#include <linux/irqchip.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/kernel.h>
#include <linux/mfd/tmio.h>
#include <linux/mmc/host.h>
@ -273,6 +275,22 @@ static void __init ape6evm_add_standard_devices(void)
sizeof(ape6evm_leds_pdata));
}
static void __init ape6evm_legacy_init_time(void)
{
/* Do not invoke DT-based timers via clocksource_of_init() */
}
static void __init ape6evm_legacy_init_irq(void)
{
void __iomem *gic_dist_base = ioremap_nocache(0xf1001000, 0x1000);
void __iomem *gic_cpu_base = ioremap_nocache(0xf1002000, 0x1000);
gic_init(0, 29, gic_dist_base, gic_cpu_base);
/* Do not invoke DT-based interrupt code via irqchip_init() */
}
static const char *ape6evm_boards_compat_dt[] __initdata = {
"renesas,ape6evm",
NULL,
@ -280,7 +298,9 @@ static const char *ape6evm_boards_compat_dt[] __initdata = {
DT_MACHINE_START(APE6EVM_DT, "ape6evm")
.init_early = shmobile_init_delay,
.init_irq = ape6evm_legacy_init_irq,
.init_machine = ape6evm_add_standard_devices,
.init_late = shmobile_init_late,
.dt_compat = ape6evm_boards_compat_dt,
.init_time = ape6evm_legacy_init_time,
MACHINE_END

View File

@ -21,6 +21,8 @@
#include <linux/input.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/kernel.h>
#include <linux/leds.h>
#include <linux/mfd/tmio.h>
@ -811,6 +813,16 @@ static void __init lager_init(void)
lager_ksz8041_fixup);
}
static void __init lager_legacy_init_irq(void)
{
void __iomem *gic_dist_base = ioremap_nocache(0xf1001000, 0x1000);
void __iomem *gic_cpu_base = ioremap_nocache(0xf1002000, 0x1000);
gic_init(0, 29, gic_dist_base, gic_cpu_base);
/* Do not invoke DT-based interrupt code via irqchip_init() */
}
static const char * const lager_boards_compat_dt[] __initconst = {
"renesas,lager",
NULL,
@ -819,6 +831,7 @@ static const char * const lager_boards_compat_dt[] __initconst = {
DT_MACHINE_START(LAGER_DT, "lager")
.smp = smp_ops(r8a7790_smp_ops),
.init_early = shmobile_init_delay,
.init_irq = lager_legacy_init_irq,
.init_time = rcar_gen2_timer_init,
.init_machine = lager_init,
.init_late = shmobile_init_late,

View File

@ -576,11 +576,18 @@ void __init r8a7778_init_irq_extpin(int irlm)
void __init r8a7778_init_irq_dt(void)
{
void __iomem *base = ioremap_nocache(0xfe700000, 0x00100000);
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
void __iomem *gic_dist_base = ioremap_nocache(0xfe438000, 0x1000);
void __iomem *gic_cpu_base = ioremap_nocache(0xfe430000, 0x1000);
#endif
BUG_ON(!base);
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
gic_init(0, 29, gic_dist_base, gic_cpu_base);
#else
irqchip_init();
#endif
/* route all interrupts to ARM */
__raw_writel(0x73ffffff, base + INT2NTSR0);
__raw_writel(0xffffffff, base + INT2NTSR1);

View File

@ -720,10 +720,17 @@ static int r8a7779_set_wake(struct irq_data *data, unsigned int on)
void __init r8a7779_init_irq_dt(void)
{
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
void __iomem *gic_dist_base = ioremap_nocache(0xf0001000, 0x1000);
void __iomem *gic_cpu_base = ioremap_nocache(0xf0000100, 0x1000);
#endif
gic_arch_extn.irq_set_wake = r8a7779_set_wake;
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
gic_init(0, 29, gic_dist_base, gic_cpu_base);
#else
irqchip_init();
#endif
/* route all interrupts to ARM */
__raw_writel(0xffffffff, INT2NTSR0);
__raw_writel(0x3fffffff, INT2NTSR1);

View File

@ -133,7 +133,9 @@ void __init rcar_gen2_timer_init(void)
#ifdef CONFIG_COMMON_CLK
rcar_gen2_clocks_init(mode);
#endif
#ifdef CONFIG_ARCH_SHMOBILE_MULTI
clocksource_of_init();
#endif
}
struct memory_reserve_config {

View File

@ -70,6 +70,18 @@ void __init shmobile_init_delay(void)
if (!max_freq)
return;
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
/* Non-multiplatform r8a73a4 SoC cannot use arch timer due
* to GIC being initialized from C and arch timer via DT */
if (of_machine_is_compatible("renesas,r8a73a4"))
has_arch_timer = false;
/* Non-multiplatform r8a7790 SoC cannot use arch timer due
* to GIC being initialized from C and arch timer via DT */
if (of_machine_is_compatible("renesas,r8a7790"))
has_arch_timer = false;
#endif
if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) {
if (is_a7_a8_a9)
shmobile_setup_delay_hz(max_freq, 1, 3);

View File

@ -1940,18 +1940,8 @@ void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping)
}
EXPORT_SYMBOL_GPL(arm_iommu_release_mapping);
/**
* arm_iommu_attach_device
* @dev: valid struct device pointer
* @mapping: io address space mapping structure (returned from
* arm_iommu_create_mapping)
*
* Attaches specified io address space mapping to the provided device,
* More than one client might be attached to the same io address space
* mapping.
*/
int arm_iommu_attach_device(struct device *dev,
struct dma_iommu_mapping *mapping)
static int __arm_iommu_attach_device(struct device *dev,
struct dma_iommu_mapping *mapping)
{
int err;
@ -1965,15 +1955,35 @@ int arm_iommu_attach_device(struct device *dev,
pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev));
return 0;
}
EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
/**
* arm_iommu_detach_device
* arm_iommu_attach_device
* @dev: valid struct device pointer
* @mapping: io address space mapping structure (returned from
* arm_iommu_create_mapping)
*
* Detaches the provided device from a previously attached map.
* Attaches specified io address space mapping to the provided device.
* This replaces the dma operations (dma_map_ops pointer) with the
* IOMMU aware version.
*
* More than one client might be attached to the same io address space
* mapping.
*/
void arm_iommu_detach_device(struct device *dev)
int arm_iommu_attach_device(struct device *dev,
struct dma_iommu_mapping *mapping)
{
int err;
err = __arm_iommu_attach_device(dev, mapping);
if (err)
return err;
set_dma_ops(dev, &iommu_ops);
return 0;
}
EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
static void __arm_iommu_detach_device(struct device *dev)
{
struct dma_iommu_mapping *mapping;
@ -1989,6 +1999,19 @@ void arm_iommu_detach_device(struct device *dev)
pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
}
/**
* arm_iommu_detach_device
* @dev: valid struct device pointer
*
* Detaches the provided device from a previously attached map.
* This voids the dma operations (dma_map_ops pointer)
*/
void arm_iommu_detach_device(struct device *dev)
{
__arm_iommu_detach_device(dev);
set_dma_ops(dev, NULL);
}
EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
static struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
@ -2011,7 +2034,7 @@ static bool arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
return false;
}
if (arm_iommu_attach_device(dev, mapping)) {
if (__arm_iommu_attach_device(dev, mapping)) {
pr_warn("Failed to attached device %s to IOMMU_mapping\n",
dev_name(dev));
arm_iommu_release_mapping(mapping);
@ -2025,7 +2048,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev)
{
struct dma_iommu_mapping *mapping = dev->archdata.mapping;
arm_iommu_detach_device(dev);
__arm_iommu_detach_device(dev);
arm_iommu_release_mapping(mapping);
}

View File

@ -85,6 +85,7 @@ vdso_install:
# We use MRPROPER_FILES and CLEAN_FILES now
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
$(Q)$(MAKE) $(clean)=$(boot)/dts
define archhelp
echo '* Image.gz - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'

View File

@ -3,6 +3,4 @@ dts-dirs += apm
dts-dirs += arm
dts-dirs += cavium
always := $(dtb-y)
subdir-y := $(dts-dirs)
clean-files := *.dtb

View File

@ -22,7 +22,7 @@
};
chosen {
stdout-path = &soc_uart0;
stdout-path = "serial0:115200n8";
};
psci {

View File

@ -45,6 +45,16 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr_el2 &= ~HCR_RW;
}
static inline unsigned long vcpu_get_hcr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.hcr_el2;
}
static inline void vcpu_set_hcr(struct kvm_vcpu *vcpu, unsigned long hcr)
{
vcpu->arch.hcr_el2 = hcr;
}
static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
{
return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.pc;
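
vcpu_get_hcr()/vcpu_set_hcr() give common KVM code a field-name-independent way to read and update the hypervisor configuration register (the backing field is hcr_el2 here on arm64). A minimal, illustrative use is clearing the HCR_TVM trap bit once trapping is no longer needed; the helper name below is hypothetical:

#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>

static void example_stop_trapping_vm_regs(struct kvm_vcpu *vcpu)
{
        /* Read-modify-write through the accessors instead of poking
         * vcpu->arch.hcr_el2 directly. */
        vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) & ~HCR_TVM);
}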

View File

@ -116,9 +116,6 @@ struct kvm_vcpu_arch {
* Anything that is not used directly from assembly code goes
* here.
*/
/* dcache set/way operation pending */
int last_pcpu;
cpumask_t require_dcache_flush;
/* Don't run the guest */
bool pause;

View File

@ -243,24 +243,46 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
return (vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
}
static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
unsigned long size,
bool ipa_uncached)
static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
unsigned long size,
bool ipa_uncached)
{
void *va = page_address(pfn_to_page(pfn));
if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
kvm_flush_dcache_to_poc((void *)hva, size);
kvm_flush_dcache_to_poc(va, size);
if (!icache_is_aliasing()) { /* PIPT */
flush_icache_range(hva, hva + size);
flush_icache_range((unsigned long)va,
(unsigned long)va + size);
} else if (!icache_is_aivivt()) { /* non ASID-tagged VIVT */
/* any kind of VIPT cache */
__flush_icache_all();
}
}
static inline void __kvm_flush_dcache_pte(pte_t pte)
{
struct page *page = pte_page(pte);
kvm_flush_dcache_to_poc(page_address(page), PAGE_SIZE);
}
static inline void __kvm_flush_dcache_pmd(pmd_t pmd)
{
struct page *page = pmd_page(pmd);
kvm_flush_dcache_to_poc(page_address(page), PMD_SIZE);
}
static inline void __kvm_flush_dcache_pud(pud_t pud)
{
struct page *page = pud_page(pud);
kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
}
#define kvm_virt_to_phys(x) __virt_to_phys((unsigned long)(x))
void stage2_flush_vm(struct kvm *kvm);
void kvm_set_way_flush(struct kvm_vcpu *vcpu);
void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
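
The new __kvm_flush_dcache_pte/pmd/pud() helpers clean guest pages through the kernel linear mapping (page_address()) rather than a userspace HVA, so stage-2 teardown paths can flush the data cache even for mappings that no longer have a valid HVA. A heavily simplified, hypothetical sketch of how such a helper could be used when clearing a stage-2 PTE; real code also handles TLB invalidation, device mappings and page refcounting:

#include <asm/kvm_mmu.h>

static void example_clear_stage2_pte(pte_t *ptep)
{
        pte_t old = *ptep;

        kvm_set_pte(ptep, __pte(0));            /* drop the stage-2 entry */
        if (!pte_none(old))
                __kvm_flush_dcache_pte(old);    /* clean via page_address() */
}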

View File

@ -69,68 +69,31 @@ static u32 get_ccsidr(u32 csselr)
return ccsidr;
}
static void do_dc_cisw(u32 val)
{
asm volatile("dc cisw, %x0" : : "r" (val));
dsb(ish);
}
static void do_dc_csw(u32 val)
{
asm volatile("dc csw, %x0" : : "r" (val));
dsb(ish);
}
/* See note at ARM ARM B1.14.4 */
/*
* See note at ARMv7 ARM B1.14.4 (TL;DR: S/W ops are not easily virtualized).
*/
static bool access_dcsw(struct kvm_vcpu *vcpu,
const struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
unsigned long val;
int cpu;
if (!p->is_write)
return read_from_write_only(vcpu, p);
cpu = get_cpu();
cpumask_setall(&vcpu->arch.require_dcache_flush);
cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
/* If we were already preempted, take the long way around */
if (cpu != vcpu->arch.last_pcpu) {
flush_cache_all();
goto done;
}
val = *vcpu_reg(vcpu, p->Rt);
switch (p->CRm) {
case 6: /* Upgrade DCISW to DCCISW, as per HCR.SWIO */
case 14: /* DCCISW */
do_dc_cisw(val);
break;
case 10: /* DCCSW */
do_dc_csw(val);
break;
}
done:
put_cpu();
kvm_set_way_flush(vcpu);
return true;
}
/*
* Generic accessor for VM registers. Only called as long as HCR_TVM
* is set.
* is set. If the guest enables the MMU, we stop trapping the VM
* sys_regs and leave it in complete control of the caches.
*/
static bool access_vm_reg(struct kvm_vcpu *vcpu,
const struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
unsigned long val;
bool was_enabled = vcpu_has_cache_enabled(vcpu);
BUG_ON(!p->is_write);
@ -143,25 +106,7 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
vcpu_cp15_64_low(vcpu, r->reg) = val & 0xffffffffUL;
}
return true;
}
/*
* SCTLR_EL1 accessor. Only called as long as HCR_TVM is set. If the
* guest enables the MMU, we stop trapping the VM sys_regs and leave
* it in complete control of the caches.
*/
static bool access_sctlr(struct kvm_vcpu *vcpu,
const struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
access_vm_reg(vcpu, p, r);
if (vcpu_has_cache_enabled(vcpu)) { /* MMU+Caches enabled? */
vcpu->arch.hcr_el2 &= ~HCR_TVM;
stage2_flush_vm(vcpu->kvm);
}
kvm_toggle_cache(vcpu, was_enabled);
return true;
}
@ -377,7 +322,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
NULL, reset_mpidr, MPIDR_EL1 },
/* SCTLR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
access_sctlr, reset_val, SCTLR_EL1, 0x00C50078 },
access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
/* CPACR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010),
NULL, reset_val, CPACR_EL1, 0 },
@ -657,7 +602,7 @@ static const struct sys_reg_desc cp14_64_regs[] = {
* register).
*/
static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_sctlr, NULL, c1_SCTLR },
{ Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_vm_reg, NULL, c1_SCTLR },
{ Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
{ Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
{ Op1( 0), CRn( 2), CRm( 0), Op2( 2), access_vm_reg, NULL, c2_TTBCR },
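
access_dcsw() and access_vm_reg() now defer the heavy lifting to kvm_set_way_flush() and kvm_toggle_cache(), declared in the kvm_mmu.h hunk above and implemented in the shared KVM/ARM MMU code. The sketch below is a rough reconstruction of what those helpers are expected to do, pieced together from the comments and declarations in this diff; it is not a verbatim copy of the implementation:

#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_mmu.h>

void kvm_set_way_flush(struct kvm_vcpu *vcpu)
{
        unsigned long hcr = vcpu_get_hcr(vcpu);

        /* First set/way op with TVM clear: flush the whole guest and
         * start trapping VM registers so cache state can be tracked. */
        if (!(hcr & HCR_TVM)) {
                stage2_flush_vm(vcpu->kvm);
                vcpu_set_hcr(vcpu, hcr | HCR_TVM);
        }
}

void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
{
        bool now_enabled = vcpu_has_cache_enabled(vcpu);

        /* Flush when the cache state flips in either direction. */
        if (now_enabled != was_enabled)
                stage2_flush_vm(vcpu->kvm);

        /* Caches are on: stop trapping VM registers until the next
         * set/way operation re-arms the trap. */
        if (now_enabled)
                vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) & ~HCR_TVM);
}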

View File

@ -15,6 +15,7 @@
*/
#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/seq_file.h>

View File

@ -19,12 +19,10 @@
#include <linux/moduleloader.h>
#include <linux/vmalloc.h>
void module_free(struct module *mod, void *module_region)
void module_arch_freeing_init(struct module *mod)
{
vfree(mod->arch.syminfo);
mod->arch.syminfo = NULL;
vfree(module_region);
}
static inline int check_rela(Elf32_Rela *rela, struct module *module,
@ -291,12 +289,3 @@ int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab,
return ret;
}
int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
struct module *module)
{
vfree(module->arch.syminfo);
module->arch.syminfo = NULL;
return 0;
}
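
This architecture (like others further down) no longer frees the module region itself: the generic module loader owns that through module_memfree(), and module_arch_freeing_init() is only an arch hook for per-module metadata. Some hunks in this diff override module_memfree() with a kfree()-based variant, others only implement module_arch_freeing_init(). A simplified sketch of the generic side these hooks plug into; it is not a verbatim copy of kernel/module.c:

#include <linux/module.h>
#include <linux/moduleloader.h>
#include <linux/vmalloc.h>

/* Weak defaults; architectures override either hook as needed. */
void __weak module_memfree(void *module_region)
{
        vfree(module_region);
}

void __weak module_arch_freeing_init(struct module *mod)
{
}

/* Hypothetical free path: the arch hook runs before the init region
 * is handed back. */
static void example_free_module_init(struct module *mod)
{
        module_arch_freeing_init(mod);
        module_memfree(mod->module_init);
}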

View File

@ -142,6 +142,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();
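
The same VM_FAULT_SIGSEGV branch is added to nearly every architecture's page-fault handler in this merge: handle_mm_fault() can now report failures (such as a failed stack expansion) that must be delivered as SIGSEGV rather than SIGBUS. The recurring shape of the fixed error path, shown here as an annotated fragment; the labels vary per architecture (bad_area, map_err, ...):

/* Pattern repeated in the hunks below; the VM_FAULT_SIGSEGV case is
 * the part this merge adds. */
if (unlikely(fault & VM_FAULT_ERROR)) {
        if (fault & VM_FAULT_OOM)
                goto out_of_memory;     /* defer to the OOM handling    */
        else if (fault & VM_FAULT_SIGSEGV)
                goto bad_area;          /* new: raise SIGSEGV           */
        else if (fault & VM_FAULT_SIGBUS)
                goto do_sigbus;         /* e.g. fault beyond file end   */
        BUG();
}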

View File

@ -604,7 +604,7 @@ static ssize_t __sync_serial_read(struct file *file,
struct timespec *ts)
{
unsigned long flags;
int dev = MINOR(file->f_dentry->d_inode->i_rdev);
int dev = MINOR(file_inode(file)->i_rdev);
int avail;
struct sync_port *port;
unsigned char *start;
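
The open-coded f_dentry->d_inode chain is replaced with the file_inode() accessor from <linux/fs.h>, which simply returns the file's cached inode pointer. A small illustrative use of the idiom (the helper name is hypothetical):

#include <linux/fs.h>
#include <linux/kdev_t.h>

/* Illustrative: the minor number of the character device backing an
 * open file, fetched through the file_inode() accessor. */
static unsigned int example_minor(struct file *file)
{
        return MINOR(file_inode(file)->i_rdev);
}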

View File

@ -36,7 +36,7 @@ void *module_alloc(unsigned long size)
}
/* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region)
void module_memfree(void *module_region)
{
kfree(module_region);
}

View File

@ -176,6 +176,8 @@ retry:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -94,7 +94,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
r = &dev->resource[idx];
if (!r->start)
continue;
pci_claim_resource(dev, idx);
pci_claim_bridge_resource(dev, idx);
}
}
pcibios_allocate_bus_resources(&bus->children);
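
Several PCI hunks in this merge stop force-claiming bridge windows with pci_claim_resource() and use the new pci_claim_bridge_resource() helper instead, which can clip a window that does not fit the upstream resource before claiming it; one hunk below only falls back to it after request_resource(), reparent_resources() or pci_claim_resource() fails. A hypothetical caller showing the device/bridge split used by the fixed-up paths (the function name is illustrative):

#include <linux/pci.h>

/* Ordinary BARs are still claimed with pci_claim_resource(); bridge
 * windows go through pci_claim_bridge_resource() so the core may clip
 * them to fit the parent resource. */
static void example_claim_all(struct pci_dev *dev)
{
        int idx;

        for (idx = 0; idx < PCI_NUM_RESOURCES; idx++) {
                struct resource *r = &dev->resource[idx];

                if (!r->flags || r->parent || !r->start)
                        continue;

                if (idx < PCI_BRIDGE_RESOURCES)
                        pci_claim_resource(dev, idx);
                else
                        pci_claim_bridge_resource(dev, idx);
        }
}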

View File

@ -168,6 +168,8 @@ asmlinkage void do_page_fault(int datammu, unsigned long esr0, unsigned long ear
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -305,14 +305,12 @@ plt_target (struct plt_entry *plt)
#endif /* !USE_BRL */
void
module_free (struct module *mod, void *module_region)
module_arch_freeing_init (struct module *mod)
{
if (mod && mod->arch.init_unw_table &&
module_region == mod->module_init) {
if (mod->arch.init_unw_table) {
unw_remove_unwind_table(mod->arch.init_unw_table);
mod->arch.init_unw_table = NULL;
}
vfree(module_region);
}
/* Have we already seen one of these relocations? */

View File

@ -172,6 +172,8 @@ retry:
*/
if (fault & VM_FAULT_OOM) {
goto out_of_memory;
} else if (fault & VM_FAULT_SIGSEGV) {
goto bad_area;
} else if (fault & VM_FAULT_SIGBUS) {
signal = SIGBUS;
goto bad_area;

View File

@ -487,45 +487,39 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
return 0;
}
static int is_valid_resource(struct pci_dev *dev, int idx)
{
unsigned int i, type_mask = IORESOURCE_IO | IORESOURCE_MEM;
struct resource *devr = &dev->resource[idx], *busr;
if (!dev->bus)
return 0;
pci_bus_for_each_resource(dev->bus, busr, i) {
if (!busr || ((busr->flags ^ devr->flags) & type_mask))
continue;
if ((devr->start) && (devr->start >= busr->start) &&
(devr->end <= busr->end))
return 1;
}
return 0;
}
static void pcibios_fixup_resources(struct pci_dev *dev, int start, int limit)
{
int i;
for (i = start; i < limit; i++) {
if (!dev->resource[i].flags)
continue;
if ((is_valid_resource(dev, i)))
pci_claim_resource(dev, i);
}
}
void pcibios_fixup_device_resources(struct pci_dev *dev)
{
pcibios_fixup_resources(dev, 0, PCI_BRIDGE_RESOURCES);
int idx;
if (!dev->bus)
return;
for (idx = 0; idx < PCI_BRIDGE_RESOURCES; idx++) {
struct resource *r = &dev->resource[idx];
if (!r->flags || r->parent || !r->start)
continue;
pci_claim_resource(dev, idx);
}
}
EXPORT_SYMBOL_GPL(pcibios_fixup_device_resources);
static void pcibios_fixup_bridge_resources(struct pci_dev *dev)
{
pcibios_fixup_resources(dev, PCI_BRIDGE_RESOURCES, PCI_NUM_RESOURCES);
int idx;
if (!dev->bus)
return;
for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
struct resource *r = &dev->resource[idx];
if (!r->flags || r->parent || !r->start)
continue;
pci_claim_bridge_resource(dev, idx);
}
}
/*

View File

@ -200,6 +200,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -145,6 +145,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto map_err;
else if (fault & VM_FAULT_SIGBUS)
goto bus_err;
BUG();

View File

@ -141,6 +141,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -224,6 +224,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -1026,6 +1026,8 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus)
pr, (pr && pr->name) ? pr->name : "nil");
if (pr && !(pr->flags & IORESOURCE_UNSET)) {
struct pci_dev *dev = bus->self;
if (request_resource(pr, res) == 0)
continue;
/*
@ -1035,6 +1037,12 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus)
*/
if (reparent_resources(pr, res) == 0)
continue;
if (dev && i < PCI_BRIDGE_RESOURCE_NUM &&
pci_claim_bridge_resource(dev,
i + PCI_BRIDGE_RESOURCES) == 0)
continue;
}
pr_warn("PCI: Cannot allocate resource region ");
pr_cont("%d of PCI bridge %d, will remap\n", i, bus->number);
@ -1227,7 +1235,10 @@ void pcibios_claim_one_bus(struct pci_bus *bus)
(unsigned long long)r->end,
(unsigned int)r->flags);
pci_claim_resource(dev, i);
if (pci_claim_resource(dev, i) == 0)
continue;
pci_claim_bridge_resource(dev, i);
}
}

View File

@ -158,6 +158,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -1388,7 +1388,7 @@ out:
void bpf_jit_free(struct bpf_prog *fp)
{
if (fp->jited)
module_free(NULL, fp->bpf_func);
module_memfree(fp->bpf_func);
bpf_prog_unlock_free(fp);
}

View File

@ -262,6 +262,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -106,7 +106,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
if (!r->flags)
continue;
if (!r->start ||
pci_claim_resource(dev, idx) < 0) {
pci_claim_bridge_resource(dev, idx) < 0) {
printk(KERN_ERR "PCI:"
" Cannot allocate resource"
" region %d of bridge %s\n",

View File

@ -281,42 +281,37 @@ static int __init pci_check_direct(void)
return -ENODEV;
}
static int is_valid_resource(struct pci_dev *dev, int idx)
{
unsigned int i, type_mask = IORESOURCE_IO | IORESOURCE_MEM;
struct resource *devr = &dev->resource[idx], *busr;
if (dev->bus) {
pci_bus_for_each_resource(dev->bus, busr, i) {
if (!busr || (busr->flags ^ devr->flags) & type_mask)
continue;
if (devr->start &&
devr->start >= busr->start &&
devr->end <= busr->end)
return 1;
}
}
return 0;
}
static void pcibios_fixup_device_resources(struct pci_dev *dev)
{
int limit, i;
int idx;
if (dev->bus->number != 0)
if (!dev->bus)
return;
limit = (dev->hdr_type == PCI_HEADER_TYPE_NORMAL) ?
PCI_BRIDGE_RESOURCES : PCI_NUM_RESOURCES;
for (idx = 0; idx < PCI_BRIDGE_RESOURCES; idx++) {
struct resource *r = &dev->resource[idx];
for (i = 0; i < limit; i++) {
if (!dev->resource[i].flags)
if (!r->flags || r->parent || !r->start)
continue;
if (is_valid_resource(dev, i))
pci_claim_resource(dev, i);
pci_claim_resource(dev, idx);
}
}
static void pcibios_fixup_bridge_resources(struct pci_dev *dev)
{
int idx;
if (!dev->bus)
return;
for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
struct resource *r = &dev->resource[idx];
if (!r->flags || r->parent || !r->start)
continue;
pci_claim_bridge_resource(dev, idx);
}
}
@ -330,7 +325,7 @@ void pcibios_fixup_bus(struct pci_bus *bus)
if (bus->self) {
pci_read_bridge_bases(bus);
pcibios_fixup_device_resources(bus->self);
pcibios_fixup_bridge_resources(bus->self);
}
list_for_each_entry(dev, &bus->devices, bus_list)

View File

@ -36,7 +36,7 @@ void *module_alloc(unsigned long size)
}
/* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region)
void module_memfree(void *module_region)
{
kfree(module_region);
}

View File

@ -200,7 +200,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
/* Set up to return from userspace; jump to fixed address sigreturn
trampoline on kuser page. */
regs->ra = (unsigned long) (0x1040);
regs->ra = (unsigned long) (0x1044);
/* Set up registers for signal handler */
regs->sp = (unsigned long) frame;

View File

@ -135,6 +135,8 @@ survive:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -171,6 +171,8 @@ good_area:
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();

View File

@ -298,14 +298,10 @@ static inline unsigned long count_stubs(const Elf_Rela *rela, unsigned long n)
}
#endif
/* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region)
void module_arch_freeing_init(struct module *mod)
{
kfree(mod->arch.section);
mod->arch.section = NULL;
vfree(module_region);
}
/* Additional bytes needed in front of individual sections */

View File

@ -256,6 +256,8 @@ good_area:
*/
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
else if (fault & VM_FAULT_SIGBUS)
goto bad_area;
BUG();

View File

@ -154,4 +154,5 @@ module_exit(sha1_powerpc_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm");
MODULE_ALIAS_CRYPTO("sha1");
MODULE_ALIAS_CRYPTO("sha1-powerpc");

Some files were not shown because too many files have changed in this diff.