Tegra clk updates for 3.18

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.11 (GNU/Linux)
 
 iQIcBAABAgAGBQJUHBvYAAoJEFFb18rurjwTyaoP/37XBxRXfN2mL+WroUC7rb4m
 qb7PKg0RTOjjWV6Dp2EzmXThxsy0sGRpML9ZNp6LYIqPOH4sBHNeRK0+Qf1dB1XZ
 /DySj4oqEgJ+x8GHTqceM5x4C5fwDrF4sl5uacYIZcA42ih3nY331zlJaNPpkSpx
 iP91SJsQghK9OpdnOTl8oOV9p6PIjFfn712GPIRL7PBIBkVswdkJHufozzvcX+mI
 O30/FGEGbQCmCs1pFXufFNwJQcOy0qK5zyZPhGbzyji9gkCXz4OQOqWLdE1kXbWY
 CDfolp0Xdr+hLN/6YkQBEkZSjvKzLZf8OOYmVsXWXplVDjLf9/2p/Kq33p8JGKUa
 R8BC17MtmKjNgJ7g0Zo870meRWDcsLOfr60Wu4MIhzSJKSVDp6oq0ZqYay10/ZuF
 iMv49Mcpbgl4d0D6DgJztMPYwQZrMOkTLByGo+s/UxB+gY2XtvUO/6mPchYfhYUX
 M7y28Im4j5pmTKw5WhulfDWTBUPul8tvqGLdB4i9zKOA0iszby7pnGzjKfZd4RYg
 FD5Byzd225X92R/kKxWPTv4SQBM7flVkfvJLbOV2SiedoCHcyLXJjtSgaFlE3XYY
 gUt9uqM5cwiw6hwPVzP1su02419L6cZYkCWOlVMNlMZ3fjd6js6NUtNjxH30tU7H
 TrwJ4tAYaIkuTw0jBnFE
 =WIPU
 -----END PGP SIGNATURE-----

Merge tag 'tegra-clk-3.18' of git://nv-tegra.nvidia.com/user/pdeschrijver/linux into clk-next

Tegra clk updates for 3.18
Committed by Mike Turquette on 2014-09-26 16:09:39 -07:00
commit b6b2fe5b6e
482 changed files with 5431 additions and 2805 deletions

View File

@ -794,6 +794,7 @@ Greg Kroah-Hartman, "How to piss off a kernel subsystem maintainer".
<http://www.kroah.com/log/linux/maintainer-03.html>
<http://www.kroah.com/log/linux/maintainer-04.html>
<http://www.kroah.com/log/linux/maintainer-05.html>
<http://www.kroah.com/log/linux/maintainer-06.html>
NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!
<https://lkml.org/lkml/2005/7/11/336>

View File

@ -16,9 +16,9 @@ Example:
* DMA client
Required properties:
- dmas: a list of <[DMA multiplexer phandle] [SRS/DRS value]> pairs,
where SRS/DRS values are fixed handles, specified in the SoC
manual as the value that would be written into the PDMACHCR.
- dmas: a list of <[DMA multiplexer phandle] [SRS << 8 | DRS]> pairs.
where SRS/DRS are specified in the SoC manual.
It will be written into PDMACHCR as high 16-bit parts.
- dma-names: a list of DMA channel names, one per "dmas" entry
Example:

View File

@ -15,6 +15,17 @@ Optional properties for main touchpad device:
keycode generated by each GPIO. Linux keycodes are defined in
<dt-bindings/input/input.h>.
- linux,gpio-keymap: When enabled, the SPT_GPIOPWN_T19 object sends messages
on GPIO bit changes. An array of up to 8 entries can be provided
indicating the Linux keycode mapped to each bit of the status byte,
starting at the LSB. Linux keycodes are defined in
<dt-bindings/input/input.h>.
Note: the numbering of the GPIOs and the bit they start at varies between
maXTouch devices. You must either refer to the documentation, or
experiment to determine which bit corresponds to which input. Use
KEY_RESERVED for unused padding values.
Example:
touch@4b {

View File

@ -39,6 +39,10 @@ Optional properties:
further clocks may be specified in derived bindings.
- clock-names: One name for each entry in the clocks property, the
first one should be "stmmaceth".
- clk_ptp_ref: this is the PTP reference clock; in case of the PTP is
available this clock is used for programming the Timestamp Addend Register.
If not passed then the system clock will be used and this is fine on some
platforms.
Examples:

View File

@ -45,8 +45,8 @@ Example:
infet5-supply = <&some_reg>;
infet6-supply = <&some_reg>;
infet7-supply = <&some_reg>;
vsys_l1-supply = <&some_reg>;
vsys_l2-supply = <&some_reg>;
vsys-l1-supply = <&some_reg>;
vsys-l2-supply = <&some_reg>;
regulators {
dcdc1 {

View File

@ -1,7 +1,7 @@
ADI AXI-SPDIF controller
Required properties:
- compatible : Must be "adi,axi-spdif-1.00.a"
- compatible : Must be "adi,axi-spdif-tx-1.00.a"
- reg : Must contain SPDIF core's registers location and length
- clocks : Pairs of phandle and specifier referencing the controller's clocks.
The controller expects two clocks, the clock used for the AXI interface and

View File

@ -5,6 +5,7 @@ Required properties:
* "fsl,imx23-usbphy" for imx23 and imx28
* "fsl,imx6q-usbphy" for imx6dq and imx6dl
* "fsl,imx6sl-usbphy" for imx6sl
* "fsl,imx6sx-usbphy" for imx6sx
"fsl,imx23-usbphy" is still a fallback for other strings
- reg: Should contain registers location and length
- interrupts: Should contain phy interrupt

View File

@ -2,7 +2,7 @@ Analog TV Connector
===================
Required properties:
- compatible: "composite-connector" or "svideo-connector"
- compatible: "composite-video-connector" or "svideo-connector"
Optional properties:
- label: a symbolic name for the connector
@ -14,7 +14,7 @@ Example
-------
tv: connector {
compatible = "composite-connector";
compatible = "composite-video-connector";
label = "tv";
port {

View File

@ -138,9 +138,9 @@ Installation
- Build, install, reboot
The NFS/RDMA code will be enabled automatically if NFS and RDMA
are turned on. The NFS/RDMA client and server are configured via the hidden
SUNRPC_XPRT_RDMA config option that depends on SUNRPC and INFINIBAND. The
value of SUNRPC_XPRT_RDMA will be:
are turned on. The NFS/RDMA client and server are configured via the
SUNRPC_XPRT_RDMA_CLIENT and SUNRPC_XPRT_RDMA_SERVER config options that both
depend on SUNRPC and INFINIBAND. The default value of both options will be:
- N if either SUNRPC or INFINIBAND are N, in this case the NFS/RDMA client
and server will not be built
@ -235,8 +235,9 @@ NFS/RDMA Setup
- Start the NFS server
If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
kernel config), load the RDMA transport module:
If the NFS/RDMA server was built as a module
(CONFIG_SUNRPC_XPRT_RDMA_SERVER=m in kernel config), load the RDMA
transport module:
$ modprobe svcrdma
@ -255,8 +256,9 @@ NFS/RDMA Setup
- On the client system
If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
kernel config), load the RDMA client module:
If the NFS/RDMA client was built as a module
(CONFIG_SUNRPC_XPRT_RDMA_CLIENT=m in kernel config), load the RDMA client
module:
$ modprobe xprtrdma.ko

View File

@ -235,6 +235,39 @@ be used for more than one file, you can store an arbitrary pointer in the
private field of the seq_file structure; that value can then be retrieved
by the iterator functions.
There is also a wrapper function to seq_open() called seq_open_private(). It
kmallocs a zero filled block of memory and stores a pointer to it in the
private field of the seq_file structure, returning 0 on success. The
block size is specified in a third parameter to the function, e.g.:
static int ct_open(struct inode *inode, struct file *file)
{
return seq_open_private(file, &ct_seq_ops,
sizeof(struct mystruct));
}
There is also a variant function, __seq_open_private(), which is functionally
identical except that, if successful, it returns the pointer to the allocated
memory block, allowing further initialisation e.g.:
static int ct_open(struct inode *inode, struct file *file)
{
struct mystruct *p =
__seq_open_private(file, &ct_seq_ops, sizeof(*p));
if (!p)
return -ENOMEM;
p->foo = bar; /* initialize my stuff */
...
p->baz = true;
return 0;
}
A corresponding close function, seq_release_private() is available which
frees the memory allocated in the corresponding open.
The other operations of interest - read(), llseek(), and release() - are
all implemented by the seq_file code itself. So a virtual file's
file_operations structure will look like:
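
As a rough sketch (not taken from this patch), a seq_file user following the
seq_open_private() example above would typically wire in the stock helpers like
this, with seq_release_private() freeing the block allocated in ct_open():

	static const struct file_operations ct_file_ops = {
		.owner   = THIS_MODULE,
		.open    = ct_open,              /* from the example above */
		.read    = seq_read,             /* provided by seq_file */
		.llseek  = seq_lseek,
		.release = seq_release_private,  /* frees the seq_open_private() block */
	};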

View File

@ -53,7 +53,20 @@ with IS_ERR() (they will never return a NULL pointer). -ENOENT will be returned
if and only if no GPIO has been assigned to the device/function/index triplet,
other error codes are used for cases where a GPIO has been assigned but an error
occurred while trying to acquire it. This is useful to discriminate between mere
errors and an absence of GPIO for optional GPIO parameters.
errors and an absence of GPIO for optional GPIO parameters. For the common
pattern where a GPIO is optional, the gpiod_get_optional() and
gpiod_get_index_optional() functions can be used. These functions return NULL
instead of -ENOENT if no GPIO has been assigned to the requested function:
struct gpio_desc *gpiod_get_optional(struct device *dev,
const char *con_id,
enum gpiod_flags flags)
struct gpio_desc *gpiod_get_index_optional(struct device *dev,
const char *con_id,
unsigned int index,
enum gpiod_flags flags)
Device-managed variants of these functions are also defined:
@ -65,6 +78,15 @@ Device-managed variants of these functions are also defined:
unsigned int idx,
enum gpiod_flags flags)
struct gpio_desc *devm_gpiod_get_optional(struct device *dev,
const char *con_id,
enum gpiod_flags flags)
struct gpio_desc * devm_gpiod_get_index_optional(struct device *dev,
const char *con_id,
unsigned int index,
enum gpiod_flags flags)
A GPIO descriptor can be disposed of using the gpiod_put() function:
void gpiod_put(struct gpio_desc *desc)
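
A hedged sketch of the optional-GPIO pattern described above; the "reset"
connection ID and the GPIOD_OUT_HIGH flag are illustrative assumptions, not
part of this patch:

	struct gpio_desc *reset;

	reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
	if (IS_ERR(reset))
		return PTR_ERR(reset);   /* a real error while acquiring the GPIO */
	if (reset)                       /* NULL simply means no GPIO was assigned */
		gpiod_set_value(reset, 0);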

View File

@ -57,12 +57,12 @@ Well, you are all set up now. You can now use SMBus commands or plain
I2C to communicate with your device. SMBus commands are preferred if
the device supports them. Both are illustrated below.
__u8 register = 0x10; /* Device register to access */
__u8 reg = 0x10; /* Device register to access */
__s32 res;
char buf[10];
/* Using SMBus commands */
res = i2c_smbus_read_word_data(file, register);
res = i2c_smbus_read_word_data(file, reg);
if (res < 0) {
/* ERROR HANDLING: i2c transaction failed */
} else {
@ -70,11 +70,11 @@ the device supports them. Both are illustrated below.
}
/* Using I2C Write, equivalent of
i2c_smbus_write_word_data(file, register, 0x6543) */
buf[0] = register;
i2c_smbus_write_word_data(file, reg, 0x6543) */
buf[0] = reg;
buf[1] = 0x43;
buf[2] = 0x65;
if (write(file, buf, 3) ! =3) {
if (write(file, buf, 3) != 3) {
/* ERROR HANDLING: i2c transaction failed */
}

View File

@ -3541,6 +3541,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
bogus residue values);
s = SINGLE_LUN (the device has only one
Logical Unit);
u = IGNORE_UAS (don't bind to the uas driver);
w = NO_WP_DETECT (don't test whether the
medium is write-protected).
Example: quirks=0419:aaf5:rl,0421:0433:rc

View File

@ -59,7 +59,7 @@ acts similar to /dev/rtc and reacts on free-fall interrupts received
from the device. It supports blocking operations, poll/select and
fasync operation modes. You must read 1 bytes from the device. The
result is number of free-fall interrupts since the last successful
read (or 255 if number of interrupts would not fit). See the hpfall.c
read (or 255 if number of interrupts would not fit). See the freefall.c
file for an example on using the device.

View File

@ -143,8 +143,9 @@ This will cause the core to recalculate the total load on the regulator (based
on all its consumers) and change operating mode (if necessary and permitted)
to best match the current operating load.
The load_uA value can be determined from the consumers datasheet. e.g.most
datasheets have tables showing the max current consumed in certain situations.
The load_uA value can be determined from the consumer's datasheet. e.g. most
datasheets have tables showing the maximum current consumed in certain
situations.
Most consumers will use indirect operating mode control since they have no
knowledge of the regulator or whether the regulator is shared with other
@ -173,7 +174,7 @@ Consumers can register interest in regulator events by calling :-
int regulator_register_notifier(struct regulator *regulator,
struct notifier_block *nb);
Consumers can uregister interest by calling :-
Consumers can unregister interest by calling :-
int regulator_unregister_notifier(struct regulator *regulator,
struct notifier_block *nb);
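
A minimal consumer-side sketch of this notifier API (names such as
my_reg_event are illustrative, not from the patch):

	static int my_reg_event(struct notifier_block *nb,
				unsigned long event, void *data)
	{
		if (event & REGULATOR_EVENT_UNDER_VOLTAGE)
			;	/* react, e.g. throttle the consumer */
		return NOTIFY_OK;
	}

	static struct notifier_block my_reg_nb = {
		.notifier_call = my_reg_event,
	};

	/* regulator previously obtained with regulator_get() */
	regulator_register_notifier(regulator, &my_reg_nb);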

View File

@ -9,14 +9,14 @@ Safety
- Errors in regulator configuration can have very serious consequences
for the system, potentially including lasting hardware damage.
- It is not possible to automatically determine the power confugration
- It is not possible to automatically determine the power configuration
of the system - software-equivalent variants of the same chip may
have different power requirments, and not all components with power
have different power requirements, and not all components with power
requirements are visible to software.
=> The API should make no changes to the hardware state unless it has
specific knowledge that these changes are safe to do perform on
this particular system.
specific knowledge that these changes are safe to perform on this
particular system.
Consumer use cases
------------------

View File

@ -11,7 +11,7 @@ Consider the following machine :-
+-> [Consumer B @ 3.3V]
The drivers for consumers A & B must be mapped to the correct regulator in
order to control their power supply. This mapping can be achieved in machine
order to control their power supplies. This mapping can be achieved in machine
initialisation code by creating a struct regulator_consumer_supply for
each regulator.
@ -39,7 +39,7 @@ to the 'Vcc' supply for Consumer A.
Constraints can now be registered by defining a struct regulator_init_data
for each regulator power domain. This structure also maps the consumers
to their supply regulator :-
to their supply regulators :-
static struct regulator_init_data regulator1_data = {
.constraints = {

View File

@ -36,11 +36,11 @@ Some terms used in this document:-
Consumers can be classified into two types:-
Static: consumer does not change its supply voltage or
current limit. It only needs to enable or disable it's
current limit. It only needs to enable or disable its
power supply. Its supply voltage is set by the hardware,
bootloader, firmware or kernel board initialisation code.
Dynamic: consumer needs to change it's supply voltage or
Dynamic: consumer needs to change its supply voltage or
current limit to meet operation demands.
@ -156,7 +156,7 @@ relevant to non SoC devices and is split into the following four interfaces:-
This interface is for machine specific code and allows the creation of
voltage/current domains (with constraints) for each regulator. It can
provide regulator constraints that will prevent device damage through
overvoltage or over current caused by buggy client drivers. It also
overvoltage or overcurrent caused by buggy client drivers. It also
allows the creation of a regulator tree whereby some regulators are
supplied by others (similar to a clock tree).

View File

@ -13,7 +13,7 @@ Drivers can register a regulator by calling :-
struct regulator_dev *regulator_register(struct regulator_desc *regulator_desc,
const struct regulator_config *config);
This will register the regulators capabilities and operations to the regulator
This will register the regulator's capabilities and operations to the regulator
core.
Regulators can be unregistered by calling :-
@ -23,8 +23,8 @@ void regulator_unregister(struct regulator_dev *rdev);
Regulator Events
================
Regulators can send events (e.g. over temp, under voltage, etc) to consumer
drivers by calling :-
Regulators can send events (e.g. overtemperature, undervoltage, etc) to
consumer drivers by calling :-
int regulator_notifier_call_chain(struct regulator_dev *rdev,
unsigned long event, void *data);
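
On the driver side, a hedged sketch of raising such an event from an interrupt
handler (my_reg_irq is an assumed name, not from the patch):

	static irqreturn_t my_reg_irq(int irq, void *data)
	{
		struct regulator_dev *rdev = data;

		/* tell interested consumers that the regulator overheated */
		regulator_notifier_call_chain(rdev, REGULATOR_EVENT_OVER_TEMP, NULL);
		return IRQ_HANDLED;
	}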

View File

@ -6424,7 +6424,8 @@ F: Documentation/scsi/NinjaSCSI.txt
F: drivers/scsi/nsp32*
NTB DRIVER
M: Jon Mason <jon.mason@intel.com>
M: Jon Mason <jdmason@kudzu.us>
M: Dave Jiang <dave.jiang@intel.com>
S: Supported
W: https://github.com/jonmason/ntb/wiki
T: git git://github.com/jonmason/ntb.git
@ -7053,7 +7054,7 @@ S: Maintained
F: drivers/pinctrl/sh-pfc/
PIN CONTROLLER - SAMSUNG
M: Tomasz Figa <t.figa@samsung.com>
M: Tomasz Figa <tomasz.figa@gmail.com>
M: Thomas Abraham <thomas.abraham@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
@ -7899,7 +7900,8 @@ S: Supported
F: drivers/media/i2c/s5k5baf.c
SAMSUNG SOC CLOCK DRIVERS
M: Tomasz Figa <t.figa@samsung.com>
M: Sylwester Nawrocki <s.nawrocki@samsung.com>
M: Tomasz Figa <tomasz.figa@gmail.com>
S: Supported
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
F: drivers/clk/samsung/
@ -7912,6 +7914,19 @@ S: Supported
L: netdev@vger.kernel.org
F: drivers/net/ethernet/samsung/sxgbe/
SAMSUNG USB2 PHY DRIVER
M: Kamil Debski <k.debski@samsung.com>
L: linux-kernel@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/phy/samsung-phy.txt
F: Documentation/phy/samsung-usb2.txt
F: drivers/phy/phy-exynos4210-usb2.c
F: drivers/phy/phy-exynos4x12-usb2.c
F: drivers/phy/phy-exynos5250-usb2.c
F: drivers/phy/phy-s5pv210-usb2.c
F: drivers/phy/phy-samsung-usb2.c
F: drivers/phy/phy-samsung-usb2.h
SERIAL DRIVERS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
@ -10070,9 +10085,9 @@ F: Documentation/x86/
F: arch/x86/
X86 PLATFORM DRIVERS
M: Matthew Garrett <matthew.garrett@nebula.com>
M: Darren Hart <dvhart@infradead.org>
L: platform-driver-x86@vger.kernel.org
T: git git://cavan.codon.org.uk/platform-drivers-x86.git
T: git git://git.infradead.org/users/dvhart/linux-platform-drivers-x86.git
S: Maintained
F: drivers/platform/x86/

View File

@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 17
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc5
NAME = Shuffling Zombie Juror
# *DOCUMENTATION*

View File

@ -427,7 +427,7 @@ struct ic_inv_args {
static void __ic_line_inv_vaddr_helper(void *info)
{
struct ic_inv *ic_inv_args = (struct ic_inv_args *) info;
struct ic_inv_args *ic_inv = info;
__ic_line_inv_vaddr_local(ic_inv->paddr, ic_inv->vaddr, ic_inv->sz);
}

View File

@ -804,7 +804,7 @@
usb1: usb@48390000 {
compatible = "synopsys,dwc3";
reg = <0x48390000 0x17000>;
reg = <0x48390000 0x10000>;
interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
phys = <&usb2_phy1>;
phy-names = "usb2-phy";
@ -826,7 +826,7 @@
usb2: usb@483d0000 {
compatible = "synopsys,dwc3";
reg = <0x483d0000 0x17000>;
reg = <0x483d0000 0x10000>;
interrupts = <GIC_SPI 174 IRQ_TYPE_LEVEL_HIGH>;
phys = <&usb2_phy2>;
phy-names = "usb2-phy";

View File

@ -260,7 +260,7 @@
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&i2c0_pins>;
clock-frequency = <400000>;
clock-frequency = <100000>;
tps65218: tps65218@24 {
reg = <0x24>;
@ -424,7 +424,7 @@
ranges = <0 0 0 0x01000000>; /* minimum GPMC partition = 16MB */
nand@0,0 {
reg = <0 0 4>; /* device IO registers */
ti,nand-ecc-opt = "bch8";
ti,nand-ecc-opt = "bch16";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
gpmc,device-width = <1>;
@ -443,8 +443,6 @@
gpmc,rd-cycle-ns = <40>;
gpmc,wr-cycle-ns = <40>;
gpmc,wait-pin = <0>;
gpmc,wait-on-read;
gpmc,wait-on-write;
gpmc,bus-turnaround-ns = <0>;
gpmc,cycle2cycle-delay-ns = <0>;
gpmc,clk-activation-ns = <0>;

View File

@ -435,13 +435,13 @@
};
&gpmc {
status = "okay";
status = "okay"; /* Disable QSPI when enabling GPMC (NAND) */
pinctrl-names = "default";
pinctrl-0 = <&nand_flash_x8>;
ranges = <0 0 0x08000000 0x10000000>; /* CS0: NAND */
nand@0,0 {
reg = <0 0 0>; /* CS0, offset 0 */
ti,nand-ecc-opt = "bch8";
ti,nand-ecc-opt = "bch16";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
gpmc,device-width = <1>;
@ -459,8 +459,7 @@
gpmc,access-ns = <30>; /* tCEA + 4*/
gpmc,rd-cycle-ns = <40>;
gpmc,wr-cycle-ns = <40>;
gpmc,wait-on-read = "true";
gpmc,wait-on-write = "true";
gpmc,wait-pin = <0>;
gpmc,bus-turnaround-ns = <0>;
gpmc,cycle2cycle-delay-ns = <0>;
gpmc,clk-activation-ns = <0>;
@ -557,7 +556,7 @@
};
&qspi {
status = "okay";
status = "disabled"; /* Disable GPMC (NAND) when enabling QSPI */
pinctrl-names = "default";
pinctrl-0 = <&qspi1_default>;

View File

@ -149,7 +149,7 @@
usb: usbck {
compatible = "atmel,at91rm9200-clk-usb";
#clock-cells = <0>;
atmel,clk-divisors = <1 2>;
atmel,clk-divisors = <1 2 0 0>;
clocks = <&pllb>;
};

View File

@ -40,6 +40,7 @@
};
pllb: pllbck {
compatible = "atmel,at91sam9g20-clk-pllb";
atmel,clk-input-range = <2000000 32000000>;
atmel,pll-clk-output-ranges = <30000000 100000000 0 0>;
};

View File

@ -8,6 +8,7 @@
/dts-v1/;
#include "dra74x.dtsi"
#include <dt-bindings/gpio/gpio.h>
/ {
model = "TI DRA742";
@ -24,9 +25,29 @@
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
vtt_fixed: fixedregulator-vtt {
compatible = "regulator-fixed";
regulator-name = "vtt_fixed";
regulator-min-microvolt = <1350000>;
regulator-max-microvolt = <1350000>;
regulator-always-on;
regulator-boot-on;
enable-active-high;
gpio = <&gpio7 11 GPIO_ACTIVE_HIGH>;
};
};
&dra7_pmx_core {
pinctrl-names = "default";
pinctrl-0 = <&vtt_pin>;
vtt_pin: pinmux_vtt_pin {
pinctrl-single,pins = <
0x3b4 (PIN_OUTPUT | MUX_MODE14) /* spi1_cs1.gpio7_11 */
>;
};
i2c1_pins: pinmux_i2c1_pins {
pinctrl-single,pins = <
0x400 (PIN_INPUT | MUX_MODE0) /* i2c1_sda */
@ -43,20 +64,19 @@
i2c3_pins: pinmux_i2c3_pins {
pinctrl-single,pins = <
0x410 (PIN_INPUT | MUX_MODE0) /* i2c3_sda */
0x414 (PIN_INPUT | MUX_MODE0) /* i2c3_scl */
0x288 (PIN_INPUT | MUX_MODE9) /* gpio6_14.i2c3_sda */
0x28c (PIN_INPUT | MUX_MODE9) /* gpio6_15.i2c3_scl */
>;
};
mcspi1_pins: pinmux_mcspi1_pins {
pinctrl-single,pins = <
0x3a4 (PIN_INPUT | MUX_MODE0) /* spi2_clk */
0x3a8 (PIN_INPUT | MUX_MODE0) /* spi2_d1 */
0x3ac (PIN_INPUT | MUX_MODE0) /* spi2_d0 */
0x3b0 (PIN_INPUT_SLEW | MUX_MODE0) /* spi2_cs0 */
0x3b4 (PIN_INPUT_SLEW | MUX_MODE0) /* spi2_cs1 */
0x3b8 (PIN_INPUT_SLEW | MUX_MODE6) /* spi2_cs2 */
0x3bc (PIN_INPUT_SLEW | MUX_MODE6) /* spi2_cs3 */
0x3a4 (PIN_INPUT | MUX_MODE0) /* spi1_sclk */
0x3a8 (PIN_INPUT | MUX_MODE0) /* spi1_d1 */
0x3ac (PIN_INPUT | MUX_MODE0) /* spi1_d0 */
0x3b0 (PIN_INPUT_SLEW | MUX_MODE0) /* spi1_cs0 */
0x3b8 (PIN_INPUT_SLEW | MUX_MODE6) /* spi1_cs2.hdmi1_hpd */
0x3bc (PIN_INPUT_SLEW | MUX_MODE6) /* spi1_cs3.hdmi1_cec */
>;
};
@ -284,7 +304,7 @@
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&i2c3_pins>;
clock-frequency = <3400000>;
clock-frequency = <400000>;
};
&mcspi1 {
@ -483,7 +503,7 @@
reg = <0x001c0000 0x00020000>;
};
partition@7 {
label = "NAND.u-boot-env";
label = "NAND.u-boot-env.backup1";
reg = <0x001e0000 0x00020000>;
};
partition@8 {
@ -504,3 +524,8 @@
&usb2_phy2 {
phy-supply = <&ldousb_reg>;
};
&gpio7 {
ti,no-reset-on-init;
ti,no-idle-on-init;
};

View File

@ -93,7 +93,7 @@
};
tv: connector {
compatible = "composite-connector";
compatible = "composite-video-connector";
label = "tv";
port {

View File

@ -467,6 +467,7 @@
ti,bit-shift = <0x1e>;
reg = <0x0d00>;
ti,set-bit-to-disable;
ti,set-rate-parent;
};
dpll4_m6_ck: dpll4_m6_ck {

View File

@ -116,7 +116,6 @@
msp2: msp@80117000 {
pinctrl-names = "default";
pinctrl-0 = <&msp2_default_mode>;
status = "okay";
};
msp3: msp@80125000 {

View File

@ -1443,14 +1443,14 @@ void edma_assign_channel_eventq(unsigned channel, enum dma_event_q eventq_no)
EXPORT_SYMBOL(edma_assign_channel_eventq);
static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
struct edma *edma_cc)
struct edma *edma_cc, int cc_id)
{
int i;
u32 value, cccfg;
s8 (*queue_priority_map)[2];
/* Decode the eDMA3 configuration from CCCFG register */
cccfg = edma_read(0, EDMA_CCCFG);
cccfg = edma_read(cc_id, EDMA_CCCFG);
value = GET_NUM_REGN(cccfg);
edma_cc->num_region = BIT(value);
@ -1464,7 +1464,8 @@ static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
value = GET_NUM_EVQUE(cccfg);
edma_cc->num_tc = value + 1;
dev_dbg(dev, "eDMA3 HW configuration (cccfg: 0x%08x):\n", cccfg);
dev_dbg(dev, "eDMA3 CC%d HW configuration (cccfg: 0x%08x):\n", cc_id,
cccfg);
dev_dbg(dev, "num_region: %u\n", edma_cc->num_region);
dev_dbg(dev, "num_channel: %u\n", edma_cc->num_channels);
dev_dbg(dev, "num_slot: %u\n", edma_cc->num_slots);
@ -1684,7 +1685,7 @@ static int edma_probe(struct platform_device *pdev)
return -ENOMEM;
/* Get eDMA3 configuration from IP */
ret = edma_setup_from_hw(dev, info[j], edma_cc[j]);
ret = edma_setup_from_hw(dev, info[j], edma_cc[j], j);
if (ret)
return ret;

View File

@ -26,25 +26,14 @@ static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
}
static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
if (__generic_dma_ops(hwdev)->unmap_page)
__generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
}
struct dma_attrs *attrs);
static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
if (__generic_dma_ops(hwdev)->sync_single_for_cpu)
__generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
}
void xen_dma_sync_single_for_cpu(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir);
void xen_dma_sync_single_for_device(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir);
static inline void xen_dma_sync_single_for_device(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
if (__generic_dma_ops(hwdev)->sync_single_for_device)
__generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
}
#endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */

View File

@ -33,7 +33,6 @@ typedef struct xpaddr {
#define INVALID_P2M_ENTRY (~0UL)
unsigned long __pfn_to_mfn(unsigned long pfn);
unsigned long __mfn_to_pfn(unsigned long mfn);
extern struct rb_root phys_to_mach;
static inline unsigned long pfn_to_mfn(unsigned long pfn)
@ -51,14 +50,6 @@ static inline unsigned long pfn_to_mfn(unsigned long pfn)
static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
unsigned long pfn;
if (phys_to_mach.rb_node != NULL) {
pfn = __mfn_to_pfn(mfn);
if (pfn != INVALID_P2M_ENTRY)
return pfn;
}
return mfn;
}

View File

@ -93,6 +93,8 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
else
kvm_vcpu_block(vcpu);
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 1;
}

View File

@ -99,6 +99,10 @@ __do_hyp_init:
mrc p15, 0, r0, c10, c2, 1
mcr p15, 4, r0, c10, c2, 1
@ Invalidate the stale TLBs from Bootloader
mcr p15, 4, r0, c8, c7, 0 @ TLBIALLH
dsb ish
@ Set the HSCTLR to:
@ - ARM/THUMB exceptions: Kernel config (Thumb-2 kernel)
@ - Endianness: Kernel config

View File

@ -14,6 +14,7 @@
#include <linux/gpio.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/clk-provider.h>
#include <asm/setup.h>
#include <asm/irq.h>
@ -35,13 +36,21 @@ static void __init at91rm9200_dt_init_irq(void)
of_irq_init(irq_of_match);
}
static void __init at91rm9200_dt_timer_init(void)
{
#if defined(CONFIG_COMMON_CLK)
of_clk_init(NULL);
#endif
at91rm9200_timer_init();
}
static const char *at91rm9200_dt_board_compat[] __initdata = {
"atmel,at91rm9200",
NULL
};
DT_MACHINE_START(at91rm9200_dt, "Atmel AT91RM9200 (Device Tree)")
.init_time = at91rm9200_timer_init,
.init_time = at91rm9200_dt_timer_init,
.map_io = at91_map_io,
.handle_irq = at91_aic_handle_irq,
.init_early = at91rm9200_dt_initialize,

View File

@ -1207,8 +1207,7 @@ int gpmc_cs_program_settings(int cs, struct gpmc_settings *p)
}
}
if ((p->wait_on_read || p->wait_on_write) &&
(p->wait_pin > gpmc_nr_waitpins)) {
if (p->wait_pin > gpmc_nr_waitpins) {
pr_err("%s: invalid wait-pin (%d)\n", __func__, p->wait_pin);
return -EINVAL;
}
@ -1288,8 +1287,8 @@ void gpmc_read_settings_dt(struct device_node *np, struct gpmc_settings *p)
p->wait_on_write = of_property_read_bool(np,
"gpmc,wait-on-write");
if (!p->wait_on_read && !p->wait_on_write)
pr_warn("%s: read/write wait monitoring not enabled!\n",
__func__);
pr_debug("%s: rd/wr wait monitoring not enabled!\n",
__func__);
}
}

View File

@ -1 +1 @@
obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o
obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o mm32.o

View File

@ -260,6 +260,12 @@ static int __init xen_guest_init(void)
xen_domain_type = XEN_HVM_DOMAIN;
xen_setup_features();
if (!xen_feature(XENFEAT_grant_map_identity)) {
pr_warn("Please upgrade your Xen.\n"
"If your platform has any non-coherent DMA devices, they won't work properly.\n");
}
if (xen_feature(XENFEAT_dom0))
xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
else

arch/arm/xen/mm32.c (new file, 202 lines)
View File

@ -0,0 +1,202 @@
#include <linux/cpu.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/highmem.h>
#include <xen/features.h>
static DEFINE_PER_CPU(unsigned long, xen_mm32_scratch_virt);
static DEFINE_PER_CPU(pte_t *, xen_mm32_scratch_ptep);
static int alloc_xen_mm32_scratch_page(int cpu)
{
struct page *page;
unsigned long virt;
pmd_t *pmdp;
pte_t *ptep;
if (per_cpu(xen_mm32_scratch_ptep, cpu) != NULL)
return 0;
page = alloc_page(GFP_KERNEL);
if (page == NULL) {
pr_warn("Failed to allocate xen_mm32_scratch_page for cpu %d\n", cpu);
return -ENOMEM;
}
virt = (unsigned long)__va(page_to_phys(page));
pmdp = pmd_offset(pud_offset(pgd_offset_k(virt), virt), virt);
ptep = pte_offset_kernel(pmdp, virt);
per_cpu(xen_mm32_scratch_virt, cpu) = virt;
per_cpu(xen_mm32_scratch_ptep, cpu) = ptep;
return 0;
}
static int xen_mm32_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
{
int cpu = (long)hcpu;
switch (action) {
case CPU_UP_PREPARE:
if (alloc_xen_mm32_scratch_page(cpu))
return NOTIFY_BAD;
break;
default:
break;
}
return NOTIFY_OK;
}
static struct notifier_block xen_mm32_cpu_notifier = {
.notifier_call = xen_mm32_cpu_notify,
};
static void* xen_mm32_remap_page(dma_addr_t handle)
{
unsigned long virt = get_cpu_var(xen_mm32_scratch_virt);
pte_t *ptep = __get_cpu_var(xen_mm32_scratch_ptep);
*ptep = pfn_pte(handle >> PAGE_SHIFT, PAGE_KERNEL);
local_flush_tlb_kernel_page(virt);
return (void*)virt;
}
static void xen_mm32_unmap(void *vaddr)
{
put_cpu_var(xen_mm32_scratch_virt);
}
/* functions called by SWIOTLB */
static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
size_t size, enum dma_data_direction dir,
void (*op)(const void *, size_t, int))
{
unsigned long pfn;
size_t left = size;
pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
offset %= PAGE_SIZE;
do {
size_t len = left;
void *vaddr;
if (!pfn_valid(pfn))
{
/* Cannot map the page, we don't know its physical address.
* Return and hope for the best */
if (!xen_feature(XENFEAT_grant_map_identity))
return;
vaddr = xen_mm32_remap_page(handle) + offset;
op(vaddr, len, dir);
xen_mm32_unmap(vaddr - offset);
} else {
struct page *page = pfn_to_page(pfn);
if (PageHighMem(page)) {
if (len + offset > PAGE_SIZE)
len = PAGE_SIZE - offset;
if (cache_is_vipt_nonaliasing()) {
vaddr = kmap_atomic(page);
op(vaddr + offset, len, dir);
kunmap_atomic(vaddr);
} else {
vaddr = kmap_high_get(page);
if (vaddr) {
op(vaddr + offset, len, dir);
kunmap_high(page);
}
}
} else {
vaddr = page_address(page) + offset;
op(vaddr, len, dir);
}
}
offset = 0;
pfn++;
left -= len;
} while (left);
}
static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
/* Cannot use __dma_page_dev_to_cpu because we don't have a
* struct page for handle */
if (dir != DMA_TO_DEVICE)
outer_inv_range(handle, handle + size);
dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_unmap_area);
}
static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_map_area);
if (dir == DMA_FROM_DEVICE) {
outer_inv_range(handle, handle + size);
} else {
outer_clean_range(handle, handle + size);
}
}
void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
if (!__generic_dma_ops(hwdev)->unmap_page)
return;
if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
return;
__xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
}
void xen_dma_sync_single_for_cpu(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
if (!__generic_dma_ops(hwdev)->sync_single_for_cpu)
return;
__xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
}
void xen_dma_sync_single_for_device(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
if (!__generic_dma_ops(hwdev)->sync_single_for_device)
return;
__xen_dma_page_cpu_to_dev(hwdev, handle, size, dir);
}
int __init xen_mm32_init(void)
{
int cpu;
if (!xen_initial_domain())
return 0;
register_cpu_notifier(&xen_mm32_cpu_notifier);
get_online_cpus();
for_each_online_cpu(cpu) {
if (alloc_xen_mm32_scratch_page(cpu)) {
put_online_cpus();
unregister_cpu_notifier(&xen_mm32_cpu_notifier);
return -ENOMEM;
}
}
put_online_cpus();
return 0;
}
arch_initcall(xen_mm32_init);

View File

@ -21,14 +21,12 @@ struct xen_p2m_entry {
unsigned long pfn;
unsigned long mfn;
unsigned long nr_pages;
struct rb_node rbnode_mach;
struct rb_node rbnode_phys;
};
static rwlock_t p2m_lock;
struct rb_root phys_to_mach = RB_ROOT;
EXPORT_SYMBOL_GPL(phys_to_mach);
static struct rb_root mach_to_phys = RB_ROOT;
static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
{
@ -41,8 +39,6 @@ static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
parent = *link;
entry = rb_entry(parent, struct xen_p2m_entry, rbnode_phys);
if (new->mfn == entry->mfn)
goto err_out;
if (new->pfn == entry->pfn)
goto err_out;
@ -88,64 +84,6 @@ unsigned long __pfn_to_mfn(unsigned long pfn)
}
EXPORT_SYMBOL_GPL(__pfn_to_mfn);
static int xen_add_mach_to_phys_entry(struct xen_p2m_entry *new)
{
struct rb_node **link = &mach_to_phys.rb_node;
struct rb_node *parent = NULL;
struct xen_p2m_entry *entry;
int rc = 0;
while (*link) {
parent = *link;
entry = rb_entry(parent, struct xen_p2m_entry, rbnode_mach);
if (new->mfn == entry->mfn)
goto err_out;
if (new->pfn == entry->pfn)
goto err_out;
if (new->mfn < entry->mfn)
link = &(*link)->rb_left;
else
link = &(*link)->rb_right;
}
rb_link_node(&new->rbnode_mach, parent, link);
rb_insert_color(&new->rbnode_mach, &mach_to_phys);
goto out;
err_out:
rc = -EINVAL;
pr_warn("%s: cannot add pfn=%pa -> mfn=%pa: pfn=%pa -> mfn=%pa already exists\n",
__func__, &new->pfn, &new->mfn, &entry->pfn, &entry->mfn);
out:
return rc;
}
unsigned long __mfn_to_pfn(unsigned long mfn)
{
struct rb_node *n = mach_to_phys.rb_node;
struct xen_p2m_entry *entry;
unsigned long irqflags;
read_lock_irqsave(&p2m_lock, irqflags);
while (n) {
entry = rb_entry(n, struct xen_p2m_entry, rbnode_mach);
if (entry->mfn <= mfn &&
entry->mfn + entry->nr_pages > mfn) {
read_unlock_irqrestore(&p2m_lock, irqflags);
return entry->pfn + (mfn - entry->mfn);
}
if (mfn < entry->mfn)
n = n->rb_left;
else
n = n->rb_right;
}
read_unlock_irqrestore(&p2m_lock, irqflags);
return INVALID_P2M_ENTRY;
}
EXPORT_SYMBOL_GPL(__mfn_to_pfn);
int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
struct gnttab_map_grant_ref *kmap_ops,
struct page **pages, unsigned int count)
@ -192,7 +130,6 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
if (p2m_entry->pfn <= pfn &&
p2m_entry->pfn + p2m_entry->nr_pages > pfn) {
rb_erase(&p2m_entry->rbnode_mach, &mach_to_phys);
rb_erase(&p2m_entry->rbnode_phys, &phys_to_mach);
write_unlock_irqrestore(&p2m_lock, irqflags);
kfree(p2m_entry);
@ -217,8 +154,7 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
p2m_entry->mfn = mfn;
write_lock_irqsave(&p2m_lock, irqflags);
if ((rc = xen_add_phys_to_mach_entry(p2m_entry) < 0) ||
(rc = xen_add_mach_to_phys_entry(p2m_entry) < 0)) {
if ((rc = xen_add_phys_to_mach_entry(p2m_entry)) < 0) {
write_unlock_irqrestore(&p2m_lock, irqflags);
return false;
}

View File

@ -150,7 +150,6 @@ static void sha2_finup(struct shash_desc *desc, const u8 *data,
kernel_neon_begin_partial(28);
sha2_ce_transform(blocks, data, sctx->state, NULL, len);
kernel_neon_end();
data += blocks * SHA256_BLOCK_SIZE;
}
static int sha224_finup(struct shash_desc *desc, const u8 *data,

View File

@ -79,7 +79,6 @@ static inline void decode_ctrl_reg(u32 reg,
*/
#define ARM_MAX_BRP 16
#define ARM_MAX_WRP 16
#define ARM_MAX_HBP_SLOTS (ARM_MAX_BRP + ARM_MAX_WRP)
/* Virtual debug register bases. */
#define AARCH64_DBG_REG_BVR 0

View File

@ -139,7 +139,7 @@ extern struct task_struct *cpu_switch_to(struct task_struct *prev,
((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
#define KSTK_EIP(tsk) ((unsigned long)task_pt_regs(tsk)->pc)
#define KSTK_ESP(tsk) ((unsigned long)task_pt_regs(tsk)->sp)
#define KSTK_ESP(tsk) user_stack_pointer(task_pt_regs(tsk))
/*
* Prefetching support

View File

@ -137,7 +137,7 @@ struct pt_regs {
(!((regs)->pstate & PSR_F_BIT))
#define user_stack_pointer(regs) \
(!compat_user_mode(regs)) ? ((regs)->sp) : ((regs)->compat_sp)
(!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)
static inline unsigned long regs_return_value(struct pt_regs *regs)
{

View File

@ -270,6 +270,7 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
case CPU_PM_ENTER:
if (current->mm && !test_thread_flag(TIF_FOREIGN_FPSTATE))
fpsimd_save_state(&current->thread.fpsimd_state);
this_cpu_write(fpsimd_last_state, NULL);
break;
case CPU_PM_EXIT:
if (current->mm)

View File

@ -373,10 +373,6 @@ ENTRY(__boot_cpu_mode)
.long 0
.popsection
.align 3
2: .quad .
.quad PAGE_OFFSET
#ifdef CONFIG_SMP
.align 3
1: .quad .

View File

@ -97,19 +97,15 @@ static bool migrate_one_irq(struct irq_desc *desc)
if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
return false;
if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids)
if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
affinity = cpu_online_mask;
ret = true;
}
/*
* when using forced irq_set_affinity we must ensure that the cpu
* being offlined is not present in the affinity mask, it may be
* selected as the target CPU otherwise
*/
affinity = cpu_online_mask;
c = irq_data_get_irq_chip(d);
if (!c->irq_set_affinity)
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
cpumask_copy(d->affinity, affinity);
return ret;

View File

@ -24,6 +24,12 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
return regs->compat_lr;
}
if ((u32)idx == PERF_REG_ARM64_SP)
return regs->sp;
if ((u32)idx == PERF_REG_ARM64_PC)
return regs->pc;
return regs->regs[idx];
}

View File

@ -230,9 +230,27 @@ void exit_thread(void)
{
}
static void tls_thread_flush(void)
{
asm ("msr tpidr_el0, xzr");
if (is_compat_task()) {
current->thread.tp_value = 0;
/*
* We need to ensure ordering between the shadow state and the
* hardware state, so that we don't corrupt the hardware state
* with a stale shadow state during context switch.
*/
barrier();
asm ("msr tpidrro_el0, xzr");
}
}
void flush_thread(void)
{
fpsimd_flush_thread();
tls_thread_flush();
flush_ptrace_hw_breakpoint(current);
}

View File

@ -87,7 +87,8 @@ static void ptrace_hbptriggered(struct perf_event *bp,
break;
}
}
for (i = ARM_MAX_BRP; i < ARM_MAX_HBP_SLOTS && !bp; ++i) {
for (i = 0; i < ARM_MAX_WRP; ++i) {
if (current->thread.debug.hbp_watch[i] == bp) {
info.si_errno = -((i << 1) + 1);
break;
@ -662,8 +663,10 @@ static int compat_gpr_get(struct task_struct *target,
kbuf += sizeof(reg);
} else {
ret = copy_to_user(ubuf, &reg, sizeof(reg));
if (ret)
if (ret) {
ret = -EFAULT;
break;
}
ubuf += sizeof(reg);
}
@ -701,8 +704,10 @@ static int compat_gpr_set(struct task_struct *target,
kbuf += sizeof(reg);
} else {
ret = copy_from_user(&reg, ubuf, sizeof(reg));
if (ret)
return ret;
if (ret) {
ret = -EFAULT;
break;
}
ubuf += sizeof(reg);
}

View File

@ -78,6 +78,7 @@ unsigned int compat_elf_hwcap2 __read_mostly;
#endif
static const char *cpu_name;
static const char *machine_name;
phys_addr_t __fdt_pointer __initdata;
/*
@ -309,6 +310,8 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys)
while (true)
cpu_relax();
}
machine_name = of_flat_dt_get_machine_name();
}
/*
@ -447,21 +450,10 @@ static int c_show(struct seq_file *m, void *v)
{
int i;
/*
* Dump out the common processor features in a single line. Userspace
* should read the hwcaps with getauxval(AT_HWCAP) rather than
* attempting to parse this.
*/
seq_puts(m, "features\t:");
for (i = 0; hwcap_str[i]; i++)
if (elf_hwcap & (1 << i))
seq_printf(m, " %s", hwcap_str[i]);
seq_puts(m, "\n\n");
seq_printf(m, "Processor\t: %s rev %d (%s)\n",
cpu_name, read_cpuid_id() & 15, ELF_PLATFORM);
for_each_online_cpu(i) {
struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i);
u32 midr = cpuinfo->reg_midr;
/*
* glibc reads /proc/cpuinfo to determine the number of
* online processors, looking for lines beginning with
@ -470,13 +462,25 @@ static int c_show(struct seq_file *m, void *v)
#ifdef CONFIG_SMP
seq_printf(m, "processor\t: %d\n", i);
#endif
seq_printf(m, "implementer\t: 0x%02x\n",
MIDR_IMPLEMENTOR(midr));
seq_printf(m, "variant\t\t: 0x%x\n", MIDR_VARIANT(midr));
seq_printf(m, "partnum\t\t: 0x%03x\n", MIDR_PARTNUM(midr));
seq_printf(m, "revision\t: 0x%x\n\n", MIDR_REVISION(midr));
}
/* dump out the processor features */
seq_puts(m, "Features\t: ");
for (i = 0; hwcap_str[i]; i++)
if (elf_hwcap & (1 << i))
seq_printf(m, "%s ", hwcap_str[i]);
seq_printf(m, "\nCPU implementer\t: 0x%02x\n", read_cpuid_id() >> 24);
seq_printf(m, "CPU architecture: AArch64\n");
seq_printf(m, "CPU variant\t: 0x%x\n", (read_cpuid_id() >> 20) & 15);
seq_printf(m, "CPU part\t: 0x%03x\n", (read_cpuid_id() >> 4) & 0xfff);
seq_printf(m, "CPU revision\t: %d\n", read_cpuid_id() & 15);
seq_puts(m, "\n");
seq_printf(m, "Hardware\t: %s\n", machine_name);
return 0;
}

View File

@ -79,6 +79,12 @@ long compat_arm_syscall(struct pt_regs *regs)
case __ARM_NR_compat_set_tls:
current->thread.tp_value = regs->regs[0];
/*
* Protect against register corruption from context switch.
* See comment in tls_thread_flush.
*/
barrier();
asm ("msr tpidrro_el0, %0" : : "r" (regs->regs[0]));
return 0;

View File

@ -66,6 +66,8 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
else
kvm_vcpu_block(vcpu);
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 1;
}

View File

@ -80,6 +80,10 @@ __do_hyp_init:
msr mair_el2, x4
isb
/* Invalidate the stale TLBs from Bootloader */
tlbi alle2
dsb sy
mrs x4, sctlr_el2
and x4, x4, #SCTLR_EL2_EE // preserve endianness of EL2
ldr x5, =SCTLR_EL2_FLAGS

View File

@ -4,7 +4,7 @@
#include <uapi/asm/unistd.h>
#define NR_syscalls 352
#define NR_syscalls 354
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_OLD_STAT

View File

@ -357,5 +357,7 @@
#define __NR_sched_setattr 349
#define __NR_sched_getattr 350
#define __NR_renameat2 351
#define __NR_getrandom 352
#define __NR_memfd_create 353
#endif /* _UAPI_ASM_M68K_UNISTD_H_ */

View File

@ -372,4 +372,6 @@ ENTRY(sys_call_table)
.long sys_sched_setattr
.long sys_sched_getattr /* 350 */
.long sys_renameat2
.long sys_getrandom
.long sys_memfd_create

View File

@ -127,7 +127,7 @@ config SECCOMP
endmenu
menu "Advanced setup"
menu "Kernel features"
config ADVANCED_OPTIONS
bool "Prompt for advanced kernel configuration options"
@ -248,10 +248,10 @@ config MICROBLAZE_64K_PAGES
endchoice
endmenu
source "mm/Kconfig"
endmenu
menu "Executable file formats"
source "fs/Kconfig.binfmt"

View File

@ -15,6 +15,7 @@
#include <asm/percpu.h>
#include <asm/ptrace.h>
#include <linux/linkage.h>
/*
* These are per-cpu variables required in entry.S, among other

View File

@ -98,13 +98,13 @@ static inline int access_ok(int type, const void __user *addr,
if ((get_fs().seg < ((unsigned long)addr)) ||
(get_fs().seg < ((unsigned long)addr + size - 1))) {
pr_debug("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
pr_devel("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 0;
}
ok:
pr_debug("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
pr_devel("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 1;

View File

@ -38,6 +38,6 @@
#endif /* __ASSEMBLY__ */
#define __NR_syscalls 381
#define __NR_syscalls 387
#endif /* _ASM_MICROBLAZE_UNISTD_H */

View File

@ -321,6 +321,22 @@ source "fs/Kconfig"
source "arch/parisc/Kconfig.debug"
config SECCOMP
def_bool y
prompt "Enable seccomp to safely compute untrusted bytecode"
---help---
This kernel feature is useful for number crunching applications
that may need to compute untrusted bytecode during their
execution. By using pipes or other transports made available to
the process as file descriptors supporting the read/write
syscalls, it's possible to isolate those applications in
their own address space using seccomp. Once seccomp is
enabled via prctl(PR_SET_SECCOMP), it cannot be disabled
and the task is only allowed to execute a few safe syscalls
defined by each seccomp mode.
If unsure, say Y. Only embedded should say N here.
source "security/Kconfig"
source "crypto/Kconfig"

View File

@ -456,7 +456,7 @@ int hpux_sysfs(int opcode, unsigned long arg1, unsigned long arg2)
}
/* String could be altered by userspace after strlen_user() */
fsname[len] = '\0';
fsname[len - 1] = '\0';
printk(KERN_DEBUG "that is '%s' as (char *)\n", fsname);
if ( !strcmp(fsname, "hfs") ) {

View File

@ -0,0 +1,16 @@
#ifndef _ASM_PARISC_SECCOMP_H
#define _ASM_PARISC_SECCOMP_H
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
#define __NR_seccomp_write __NR_write
#define __NR_seccomp_exit __NR_exit
#define __NR_seccomp_sigreturn __NR_rt_sigreturn
#define __NR_seccomp_read_32 __NR_read
#define __NR_seccomp_write_32 __NR_write
#define __NR_seccomp_exit_32 __NR_exit
#define __NR_seccomp_sigreturn_32 __NR_rt_sigreturn
#endif /* _ASM_PARISC_SECCOMP_H */

View File

@ -60,6 +60,7 @@ struct thread_info {
#define TIF_NOTIFY_RESUME 8 /* callback before returning to user */
#define TIF_SINGLESTEP 9 /* single stepping? */
#define TIF_BLOCKSTEP 10 /* branch stepping? */
#define TIF_SECCOMP 11 /* secure computing */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
@ -70,11 +71,13 @@ struct thread_info {
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
_TIF_NEED_RESCHED)
#define _TIF_SYSCALL_TRACE_MASK (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \
_TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT)
_TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT | \
_TIF_SECCOMP)
#ifdef CONFIG_64BIT
# ifdef CONFIG_COMPAT

View File

@ -830,8 +830,11 @@
#define __NR_sched_getattr (__NR_Linux + 335)
#define __NR_utimes (__NR_Linux + 336)
#define __NR_renameat2 (__NR_Linux + 337)
#define __NR_seccomp (__NR_Linux + 338)
#define __NR_getrandom (__NR_Linux + 339)
#define __NR_memfd_create (__NR_Linux + 340)
#define __NR_Linux_syscalls (__NR_renameat2 + 1)
#define __NR_Linux_syscalls (__NR_memfd_create + 1)
#define __IGNORE_select /* newselect */

View File

@ -270,6 +270,12 @@ long do_syscall_trace_enter(struct pt_regs *regs)
{
long ret = 0;
/* Do the secure computing check first. */
if (secure_computing(regs->gr[20])) {
/* seccomp failures shouldn't expose any additional code. */
return -1;
}
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs))
ret = -1L;

View File

@ -74,7 +74,7 @@ ENTRY(linux_gateway_page)
/* ADDRESS 0xb0 to 0xb8, lws uses two insns for entry */
/* Light-weight-syscall entry must always be located at 0xb0 */
/* WARNING: Keep this number updated with table size changes */
#define __NR_lws_entries (2)
#define __NR_lws_entries (3)
lws_entry:
gate lws_start, %r0 /* increase privilege */
@ -502,7 +502,7 @@ lws_exit:
/***************************************************
Implementing CAS as an atomic operation:
Implementing 32bit CAS as an atomic operation:
%r26 - Address to examine
%r25 - Old value to check (old)
@ -659,6 +659,230 @@ cas_action:
ASM_EXCEPTIONTABLE_ENTRY(2b-linux_gateway_page, 3b-linux_gateway_page)
/***************************************************
New CAS implementation which uses pointers and variable size
information. The value pointed by old and new MUST NOT change
while performing CAS. The lock only protect the value at %r26.
%r26 - Address to examine
%r25 - Pointer to the value to check (old)
%r24 - Pointer to the value to set (new)
%r23 - Size of the variable (0/1/2/3 for 8/16/32/64 bit)
%r28 - Return non-zero on failure
%r21 - Kernel error code
%r21 has the following meanings:
EAGAIN - CAS is busy, ldcw failed, try again.
EFAULT - Read or write failed.
Scratch: r20, r22, r28, r29, r1, fr4 (32bit for 64bit CAS only)
****************************************************/
/* ELF32 Process entry path */
lws_compare_and_swap_2:
#ifdef CONFIG_64BIT
/* Clip the input registers */
depdi 0, 31, 32, %r26
depdi 0, 31, 32, %r25
depdi 0, 31, 32, %r24
depdi 0, 31, 32, %r23
#endif
/* Check the validity of the size pointer */
subi,>>= 4, %r23, %r0
b,n lws_exit_nosys
/* Jump to the functions which will load the old and new values into
registers depending on the their size */
shlw %r23, 2, %r29
blr %r29, %r0
nop
/* 8bit load */
4: ldb 0(%sr3,%r25), %r25
b cas2_lock_start
5: ldb 0(%sr3,%r24), %r24
nop
nop
nop
nop
nop
/* 16bit load */
6: ldh 0(%sr3,%r25), %r25
b cas2_lock_start
7: ldh 0(%sr3,%r24), %r24
nop
nop
nop
nop
nop
/* 32bit load */
8: ldw 0(%sr3,%r25), %r25
b cas2_lock_start
9: ldw 0(%sr3,%r24), %r24
nop
nop
nop
nop
nop
/* 64bit load */
#ifdef CONFIG_64BIT
10: ldd 0(%sr3,%r25), %r25
11: ldd 0(%sr3,%r24), %r24
#else
/* Load new value into r22/r23 - high/low */
10: ldw 0(%sr3,%r25), %r22
11: ldw 4(%sr3,%r25), %r23
/* Load new value into fr4 for atomic store later */
12: flddx 0(%sr3,%r24), %fr4
#endif
cas2_lock_start:
/* Load start of lock table */
ldil L%lws_lock_start, %r20
ldo R%lws_lock_start(%r20), %r28
/* Extract four bits from r26 and hash lock (Bits 4-7) */
extru %r26, 27, 4, %r20
/* Find lock to use, the hash is either one of 0 to
15, multiplied by 16 (keep it 16-byte aligned)
and add to the lock table offset. */
shlw %r20, 4, %r20
add %r20, %r28, %r20
rsm PSW_SM_I, %r0 /* Disable interrupts */
/* COW breaks can cause contention on UP systems */
LDCW 0(%sr2,%r20), %r28 /* Try to acquire the lock */
cmpb,<>,n %r0, %r28, cas2_action /* Did we get it? */
cas2_wouldblock:
ldo 2(%r0), %r28 /* 2nd case */
ssm PSW_SM_I, %r0
b lws_exit /* Contended... */
ldo -EAGAIN(%r0), %r21 /* Spin in userspace */
/*
prev = *addr;
if ( prev == old )
*addr = new;
return prev;
*/
/* NOTES:
This all works becuse intr_do_signal
and schedule both check the return iasq
and see that we are on the kernel page
so this process is never scheduled off
or is ever sent any signal of any sort,
thus it is wholly atomic from usrspaces
perspective
*/
cas2_action:
/* Jump to the correct function */
blr %r29, %r0
/* Set %r28 as non-zero for now */
ldo 1(%r0),%r28
/* 8bit CAS */
13: ldb,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
14: stb,ma %r24, 0(%sr3,%r26)
b cas2_end
copy %r0, %r28
nop
nop
/* 16bit CAS */
15: ldh,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
16: sth,ma %r24, 0(%sr3,%r26)
b cas2_end
copy %r0, %r28
nop
nop
/* 32bit CAS */
17: ldw,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
18: stw,ma %r24, 0(%sr3,%r26)
b cas2_end
copy %r0, %r28
nop
nop
/* 64bit CAS */
#ifdef CONFIG_64BIT
19: ldd,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
20: std,ma %r24, 0(%sr3,%r26)
copy %r0, %r28
#else
/* Compare first word */
19: ldw,ma 0(%sr3,%r26), %r29
sub,= %r29, %r22, %r0
b,n cas2_end
/* Compare second word */
20: ldw,ma 4(%sr3,%r26), %r29
sub,= %r29, %r23, %r0
b,n cas2_end
/* Perform the store */
21: fstdx %fr4, 0(%sr3,%r26)
copy %r0, %r28
#endif
cas2_end:
/* Free lock */
stw,ma %r20, 0(%sr2,%r20)
/* Enable interrupts */
ssm PSW_SM_I, %r0
/* Return to userspace, set no error */
b lws_exit
copy %r0, %r21
22:
/* Error occurred on load or store */
/* Free lock */
stw %r20, 0(%sr2,%r20)
ssm PSW_SM_I, %r0
ldo 1(%r0),%r28
b lws_exit
ldo -EFAULT(%r0),%r21 /* set errno */
nop
nop
nop
/* Exception table entries, for the load and store, return EFAULT.
Each of the entries must be relocated. */
ASM_EXCEPTIONTABLE_ENTRY(4b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(5b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(6b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(7b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(8b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(9b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(10b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(11b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(13b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(14b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(15b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(16b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(17b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(18b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(19b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(20b-linux_gateway_page, 22b-linux_gateway_page)
#ifndef CONFIG_64BIT
ASM_EXCEPTIONTABLE_ENTRY(12b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(21b-linux_gateway_page, 22b-linux_gateway_page)
#endif
/* Make sure nothing else is placed on this page */
.align PAGE_SIZE
END(linux_gateway_page)
@ -675,8 +899,9 @@ ENTRY(end_linux_gateway_page)
/* Light-weight-syscall table */
/* Start of lws table. */
ENTRY(lws_table)
LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic compare and swap */
LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic compare and swap */
LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic 32bit CAS */
LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic 32bit CAS */
LWS_ENTRY(compare_and_swap_2) /* 2 - ELF32 Atomic 64bit CAS */
END(lws_table)
/* End of lws table */

View File

@ -433,6 +433,9 @@
ENTRY_SAME(sched_getattr) /* 335 */
ENTRY_COMP(utimes)
ENTRY_SAME(renameat2)
ENTRY_SAME(seccomp)
ENTRY_SAME(getrandom)
ENTRY_SAME(memfd_create) /* 340 */
/* Nothing yet */

View File

@ -5,6 +5,7 @@ CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15

View File

@ -5,6 +5,7 @@ CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15

View File

@ -4,6 +4,7 @@ CONFIG_ALTIVEC=y
CONFIG_SMP=y
CONFIG_NR_CPUS=24
CONFIG_SYSVIPC=y
CONFIG_FHANDLE=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

View File

@ -5,6 +5,7 @@ CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_BLK_DEV_INITRD=y

View File

@ -4,6 +4,7 @@ CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_COMPAT_BRK is not set

View File

@ -3,6 +3,7 @@ CONFIG_ALTIVEC=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_SYSVIPC=y
CONFIG_FHANDLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y

View File

@ -4,6 +4,7 @@ CONFIG_VSX=y
CONFIG_SMP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

View File

@ -3,6 +3,7 @@ CONFIG_PPC_BOOK3E_64=y
CONFIG_SMP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_TASKSTATS=y

View File

@ -5,6 +5,7 @@ CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_LZMA=y

View File

@ -5,6 +5,7 @@ CONFIG_SMP=y
CONFIG_NR_CPUS=2048
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_IRQ_DOMAIN_DEBUG=y

View File

@ -6,6 +6,7 @@ CONFIG_NR_CPUS=2048
CONFIG_CPU_LITTLE_ENDIAN=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_IRQ_DOMAIN_DEBUG=y

View File

@ -47,6 +47,12 @@
STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE)
#define STACK_FRAME_MARKER 12
#if defined(_CALL_ELF) && _CALL_ELF == 2
#define STACK_FRAME_MIN_SIZE 32
#else
#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD
#endif
/* Size of dummy stack frame allocated when calling signal handler. */
#define __SIGNAL_FRAMESIZE 128
#define __SIGNAL_FRAMESIZE32 64
@ -60,6 +66,7 @@
#define STACK_FRAME_REGS_MARKER ASM_CONST(0x72656773)
#define STACK_INT_FRAME_SIZE (sizeof(struct pt_regs) + STACK_FRAME_OVERHEAD)
#define STACK_FRAME_MARKER 2
#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD
/* Size of stack frame allocated when calling signal handler. */
#define __SIGNAL_FRAMESIZE 64

View File

@ -362,3 +362,6 @@ SYSCALL(ni_syscall) /* sys_kcmp */
SYSCALL_SPU(sched_setattr)
SYSCALL_SPU(sched_getattr)
SYSCALL_SPU(renameat2)
SYSCALL_SPU(seccomp)
SYSCALL_SPU(getrandom)
SYSCALL_SPU(memfd_create)


@ -12,7 +12,7 @@
#include <uapi/asm/unistd.h>
#define __NR_syscalls 358
#define __NR_syscalls 361
#define __NR__exit __NR_exit
#define NR_syscalls __NR_syscalls


@ -380,5 +380,8 @@
#define __NR_sched_setattr 355
#define __NR_sched_getattr 356
#define __NR_renameat2 357
#define __NR_seccomp 358
#define __NR_getrandom 359
#define __NR_memfd_create 360
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
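
For context on how these freshly assigned numbers are consumed, here is a small, hypothetical userspace sketch that invokes the newly wired getrandom(2) through the raw syscall interface. The fallback define mirrors the powerpc value added above; the program itself is only an illustration and is not part of this series.

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_getrandom
#define __NR_getrandom 359    /* powerpc number assigned in this patch */
#endif

int main(void)
{
    unsigned char buf[16];
    /* flags = 0: block until the entropy pool is initialized */
    long n = syscall(__NR_getrandom, buf, sizeof(buf), 0);

    if (n < 0) {
        perror("getrandom");
        return 1;
    }
    printf("read %ld random bytes\n", n);
    return 0;
}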


@ -62,10 +62,10 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
}
kvm->arch.hpt_cma_alloc = 0;
page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
page = kvm_alloc_hpt(1ul << (order - PAGE_SHIFT));
if (page) {
hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
memset((void *)hpt, 0, (1 << order));
memset((void *)hpt, 0, (1ul << order));
kvm->arch.hpt_cma_alloc = 1;
}


@ -35,7 +35,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
return 0; /* must be 16-byte aligned */
if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD))
return 0;
if (sp >= prev_sp + STACK_FRAME_OVERHEAD)
if (sp >= prev_sp + STACK_FRAME_MIN_SIZE)
return 1;
/*
* sp could decrease when we jump off an interrupt stack


@ -28,6 +28,7 @@
#include <asm/opal.h>
#include <asm/cputable.h>
#include <asm/machdep.h>
static int opal_hmi_handler_nb_init;
struct OpalHmiEvtNode {
@ -185,4 +186,4 @@ static int __init opal_hmi_handler_init(void)
}
return 0;
}
subsys_initcall(opal_hmi_handler_init);
machine_subsys_initcall(powernv, opal_hmi_handler_init);


@ -113,7 +113,7 @@ out:
static int pseries_remove_mem_node(struct device_node *np)
{
const char *type;
const unsigned int *regs;
const __be32 *regs;
unsigned long base;
unsigned int lmb_size;
int ret = -EINVAL;
@ -132,8 +132,8 @@ static int pseries_remove_mem_node(struct device_node *np)
if (!regs)
return ret;
base = *(unsigned long *)regs;
lmb_size = regs[3];
base = be64_to_cpu(*(unsigned long *)regs);
lmb_size = be32_to_cpu(regs[3]);
pseries_remove_memblock(base, lmb_size);
return 0;
@ -153,7 +153,7 @@ static inline int pseries_remove_mem_node(struct device_node *np)
static int pseries_add_mem_node(struct device_node *np)
{
const char *type;
const unsigned int *regs;
const __be32 *regs;
unsigned long base;
unsigned int lmb_size;
int ret = -EINVAL;
@ -172,8 +172,8 @@ static int pseries_add_mem_node(struct device_node *np)
if (!regs)
return ret;
base = *(unsigned long *)regs;
lmb_size = regs[3];
base = be64_to_cpu(*(unsigned long *)regs);
lmb_size = be32_to_cpu(regs[3]);
/*
* Update memory region to represent the memory add
@ -187,14 +187,14 @@ static int pseries_update_drconf_memory(struct of_prop_reconfig *pr)
struct of_drconf_cell *new_drmem, *old_drmem;
unsigned long memblock_size;
u32 entries;
u32 *p;
__be32 *p;
int i, rc = -EINVAL;
memblock_size = pseries_memory_block_size();
if (!memblock_size)
return -EINVAL;
p = (u32 *) pr->old_prop->value;
p = (__be32 *) pr->old_prop->value;
if (!p)
return -EINVAL;
@ -203,28 +203,30 @@ static int pseries_update_drconf_memory(struct of_prop_reconfig *pr)
* entries. Get the number of entries and skip to the array of
* of_drconf_cell's.
*/
entries = *p++;
entries = be32_to_cpu(*p++);
old_drmem = (struct of_drconf_cell *)p;
p = (u32 *)pr->prop->value;
p = (__be32 *)pr->prop->value;
p++;
new_drmem = (struct of_drconf_cell *)p;
for (i = 0; i < entries; i++) {
if ((old_drmem[i].flags & DRCONF_MEM_ASSIGNED) &&
(!(new_drmem[i].flags & DRCONF_MEM_ASSIGNED))) {
rc = pseries_remove_memblock(old_drmem[i].base_addr,
if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) &&
(!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) {
rc = pseries_remove_memblock(
be64_to_cpu(old_drmem[i].base_addr),
memblock_size);
break;
} else if ((!(old_drmem[i].flags & DRCONF_MEM_ASSIGNED)) &&
(new_drmem[i].flags & DRCONF_MEM_ASSIGNED)) {
rc = memblock_add(old_drmem[i].base_addr,
} else if ((!(be32_to_cpu(old_drmem[i].flags) &
DRCONF_MEM_ASSIGNED)) &&
(be32_to_cpu(new_drmem[i].flags) &
DRCONF_MEM_ASSIGNED)) {
rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr),
memblock_size);
rc = (rc < 0) ? -EINVAL : 0;
break;
}
}
return rc;
}


@ -17,12 +17,12 @@
#define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_fcp))
#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 8)
#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 16)
#define IPL_PARM_BLK_CCW_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_ccw))
#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 8)
#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 16)
#define IPL_MAX_SUPPORTED_VERSION (0)
@ -38,10 +38,11 @@ struct ipl_list_hdr {
u8 pbt;
u8 flags;
u16 reserved2;
u8 loadparm[8];
} __attribute__((packed));
struct ipl_block_fcp {
u8 reserved1[313-1];
u8 reserved1[305-1];
u8 opt;
u8 reserved2[3];
u16 reserved3;
@ -62,7 +63,6 @@ struct ipl_block_fcp {
offsetof(struct ipl_block_fcp, scp_data)))
struct ipl_block_ccw {
u8 load_parm[8];
u8 reserved1[84];
u8 reserved2[2];
u16 devno;


@ -1127,7 +1127,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
{
pgste_t pgste;
pte_t pte;
pte_t pte, oldpte;
int young;
if (mm_has_pgste(vma->vm_mm)) {
@ -1135,12 +1135,13 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
pgste = pgste_ipte_notify(vma->vm_mm, ptep, pgste);
}
pte = *ptep;
oldpte = pte = *ptep;
ptep_flush_direct(vma->vm_mm, addr, ptep);
young = pte_young(pte);
pte = pte_mkold(pte);
if (mm_has_pgste(vma->vm_mm)) {
pgste = pgste_update_all(&oldpte, pgste, vma->vm_mm);
pgste = pgste_set_pte(ptep, pgste, pte);
pgste_set_unlock(ptep, pgste);
} else
@ -1330,6 +1331,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
ptep_flush_direct(vma->vm_mm, address, ptep);
if (mm_has_pgste(vma->vm_mm)) {
pgste_set_key(ptep, pgste, entry, vma->vm_mm);
pgste = pgste_set_pte(ptep, pgste, entry);
pgste_set_unlock(ptep, pgste);
} else


@ -455,22 +455,6 @@ DEFINE_IPL_ATTR_RO(ipl_fcp, bootprog, "%lld\n", (unsigned long long)
DEFINE_IPL_ATTR_RO(ipl_fcp, br_lba, "%lld\n", (unsigned long long)
IPL_PARMBLOCK_START->ipl_info.fcp.br_lba);
static struct attribute *ipl_fcp_attrs[] = {
&sys_ipl_type_attr.attr,
&sys_ipl_device_attr.attr,
&sys_ipl_fcp_wwpn_attr.attr,
&sys_ipl_fcp_lun_attr.attr,
&sys_ipl_fcp_bootprog_attr.attr,
&sys_ipl_fcp_br_lba_attr.attr,
NULL,
};
static struct attribute_group ipl_fcp_attr_group = {
.attrs = ipl_fcp_attrs,
};
/* CCW ipl device attributes */
static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
@ -487,6 +471,23 @@ static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj,
static struct kobj_attribute sys_ipl_ccw_loadparm_attr =
__ATTR(loadparm, 0444, ipl_ccw_loadparm_show, NULL);
static struct attribute *ipl_fcp_attrs[] = {
&sys_ipl_type_attr.attr,
&sys_ipl_device_attr.attr,
&sys_ipl_fcp_wwpn_attr.attr,
&sys_ipl_fcp_lun_attr.attr,
&sys_ipl_fcp_bootprog_attr.attr,
&sys_ipl_fcp_br_lba_attr.attr,
&sys_ipl_ccw_loadparm_attr.attr,
NULL,
};
static struct attribute_group ipl_fcp_attr_group = {
.attrs = ipl_fcp_attrs,
};
/* CCW ipl device attributes */
static struct attribute *ipl_ccw_attrs_vm[] = {
&sys_ipl_type_attr.attr,
&sys_ipl_device_attr.attr,
@ -765,28 +766,10 @@ DEFINE_IPL_ATTR_RW(reipl_fcp, br_lba, "%lld\n", "%lld\n",
DEFINE_IPL_ATTR_RW(reipl_fcp, device, "0.0.%04llx\n", "0.0.%llx\n",
reipl_block_fcp->ipl_info.fcp.devno);
static struct attribute *reipl_fcp_attrs[] = {
&sys_reipl_fcp_device_attr.attr,
&sys_reipl_fcp_wwpn_attr.attr,
&sys_reipl_fcp_lun_attr.attr,
&sys_reipl_fcp_bootprog_attr.attr,
&sys_reipl_fcp_br_lba_attr.attr,
NULL,
};
static struct attribute_group reipl_fcp_attr_group = {
.attrs = reipl_fcp_attrs,
};
/* CCW reipl device attributes */
DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
reipl_block_ccw->ipl_info.ccw.devno);
static void reipl_get_ascii_loadparm(char *loadparm,
struct ipl_parameter_block *ibp)
{
memcpy(loadparm, ibp->ipl_info.ccw.load_parm, LOADPARM_LEN);
memcpy(loadparm, ibp->hdr.loadparm, LOADPARM_LEN);
EBCASC(loadparm, LOADPARM_LEN);
loadparm[LOADPARM_LEN] = 0;
strim(loadparm);
@ -821,13 +804,50 @@ static ssize_t reipl_generic_loadparm_store(struct ipl_parameter_block *ipb,
return -EINVAL;
}
/* initialize loadparm with blanks */
memset(ipb->ipl_info.ccw.load_parm, ' ', LOADPARM_LEN);
memset(ipb->hdr.loadparm, ' ', LOADPARM_LEN);
/* copy and convert to ebcdic */
memcpy(ipb->ipl_info.ccw.load_parm, buf, lp_len);
ASCEBC(ipb->ipl_info.ccw.load_parm, LOADPARM_LEN);
memcpy(ipb->hdr.loadparm, buf, lp_len);
ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN);
return len;
}
/* FCP wrapper */
static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return reipl_generic_loadparm_show(reipl_block_fcp, page);
}
static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t len)
{
return reipl_generic_loadparm_store(reipl_block_fcp, buf, len);
}
static struct kobj_attribute sys_reipl_fcp_loadparm_attr =
__ATTR(loadparm, S_IRUGO | S_IWUSR, reipl_fcp_loadparm_show,
reipl_fcp_loadparm_store);
static struct attribute *reipl_fcp_attrs[] = {
&sys_reipl_fcp_device_attr.attr,
&sys_reipl_fcp_wwpn_attr.attr,
&sys_reipl_fcp_lun_attr.attr,
&sys_reipl_fcp_bootprog_attr.attr,
&sys_reipl_fcp_br_lba_attr.attr,
&sys_reipl_fcp_loadparm_attr.attr,
NULL,
};
static struct attribute_group reipl_fcp_attr_group = {
.attrs = reipl_fcp_attrs,
};
/* CCW reipl device attributes */
DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
reipl_block_ccw->ipl_info.ccw.devno);
/* NSS wrapper */
static ssize_t reipl_nss_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
@ -1125,11 +1145,10 @@ static void reipl_block_ccw_fill_parms(struct ipl_parameter_block *ipb)
/* LOADPARM */
/* check if read scp info worked and set loadparm */
if (sclp_ipl_info.is_valid)
memcpy(ipb->ipl_info.ccw.load_parm,
&sclp_ipl_info.loadparm, LOADPARM_LEN);
memcpy(ipb->hdr.loadparm, &sclp_ipl_info.loadparm, LOADPARM_LEN);
else
/* read scp info failed: set empty loadparm (EBCDIC blanks) */
memset(ipb->ipl_info.ccw.load_parm, 0x40, LOADPARM_LEN);
memset(ipb->hdr.loadparm, 0x40, LOADPARM_LEN);
ipb->hdr.flags = DIAG308_FLAGS_LP_VALID;
/* VM PARM */
@ -1251,9 +1270,16 @@ static int __init reipl_fcp_init(void)
return rc;
}
if (ipl_info.type == IPL_TYPE_FCP)
if (ipl_info.type == IPL_TYPE_FCP) {
memcpy(reipl_block_fcp, IPL_PARMBLOCK_START, PAGE_SIZE);
else {
/*
* Fix loadparm: There are systems where the (SCSI) LOADPARM
* is invalid in the SCSI IPL parameter block, so take it
* always from sclp_ipl_info.
*/
memcpy(reipl_block_fcp->hdr.loadparm, sclp_ipl_info.loadparm,
LOADPARM_LEN);
} else {
reipl_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN;
reipl_block_fcp->hdr.version = IPL_PARM_BLOCK_VERSION;
reipl_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN;
@ -1864,7 +1890,23 @@ static void __init shutdown_actions_init(void)
static int __init s390_ipl_init(void)
{
char str[8] = {0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40};
sclp_get_ipl_info(&sclp_ipl_info);
/*
* Fix loadparm: There are systems where the (SCSI) LOADPARM
* returned by read SCP info is invalid (contains EBCDIC blanks)
* when the system has been booted via diag308. In that case we use
* the value from diag308, if available.
*
* There are also systems where diag308 store does not work in
* case the system is booted from HMC. Fortunately in this case
* READ SCP info provides the correct value.
*/
if (memcmp(sclp_ipl_info.loadparm, str, sizeof(str)) == 0 &&
diag308_set_works)
memcpy(sclp_ipl_info.loadparm, ipl_block.hdr.loadparm,
LOADPARM_LEN);
shutdown_actions_init();
shutdown_triggers_init();
return 0;
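
The two comment blocks above describe a preference order rather than a new algorithm, so a compact, hedged sketch may help: prefer the LOADPARM from read SCP info unless it came back as all EBCDIC blanks (0x40) and a diag308-stored block is usable. The helper name pick_loadparm and its parameters are illustrative only; the kernel performs the equivalent memcmp/memcpy inline as shown in the hunk.

#include <string.h>

#define LOADPARM_LEN 8

/* Illustrative only: decide which LOADPARM source to trust. */
static const unsigned char *pick_loadparm(const unsigned char *sclp_loadparm,
                                          const unsigned char *diag308_loadparm,
                                          int diag308_set_works)
{
    static const unsigned char ebcdic_blanks[LOADPARM_LEN] = {
        0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40
    };

    /* SCP info returned only EBCDIC blanks: fall back to diag308 data. */
    if (memcmp(sclp_loadparm, ebcdic_blanks, LOADPARM_LEN) == 0 &&
        diag308_set_works)
        return diag308_loadparm;

    return sclp_loadparm;
}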


@ -22,13 +22,11 @@ __kernel_clock_gettime:
basr %r5,0
0: al %r5,21f-0b(%r5) /* get &_vdso_data */
chi %r2,__CLOCK_REALTIME
je 10f
je 11f
chi %r2,__CLOCK_MONOTONIC
jne 19f
/* CLOCK_MONOTONIC */
ltr %r3,%r3
jz 9f /* tp == NULL */
1: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */
tml %r4,0x0001 /* pending update ? loop */
jnz 1b
@ -67,12 +65,10 @@ __kernel_clock_gettime:
j 6b
8: st %r2,0(%r3) /* store tp->tv_sec */
st %r1,4(%r3) /* store tp->tv_nsec */
9: lhi %r2,0
lhi %r2,0
br %r14
/* CLOCK_REALTIME */
10: ltr %r3,%r3 /* tp == NULL */
jz 18f
11: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */
tml %r4,0x0001 /* pending update ? loop */
jnz 11b
@ -111,7 +107,7 @@ __kernel_clock_gettime:
j 15b
17: st %r2,0(%r3) /* store tp->tv_sec */
st %r1,4(%r3) /* store tp->tv_nsec */
18: lhi %r2,0
lhi %r2,0
br %r14
/* Fallback to system call */


@ -21,7 +21,7 @@ __kernel_clock_gettime:
.cfi_startproc
larl %r5,_vdso_data
cghi %r2,__CLOCK_REALTIME
je 4f
je 5f
cghi %r2,__CLOCK_THREAD_CPUTIME_ID
je 9f
cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
@ -30,8 +30,6 @@ __kernel_clock_gettime:
jne 12f
/* CLOCK_MONOTONIC */
ltgr %r3,%r3
jz 3f /* tp == NULL */
0: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */
tmll %r4,0x0001 /* pending update ? loop */
jnz 0b
@ -53,12 +51,10 @@ __kernel_clock_gettime:
j 1b
2: stg %r0,0(%r3) /* store tp->tv_sec */
stg %r1,8(%r3) /* store tp->tv_nsec */
3: lghi %r2,0
lghi %r2,0
br %r14
/* CLOCK_REALTIME */
4: ltr %r3,%r3 /* tp == NULL */
jz 8f
5: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */
tmll %r4,0x0001 /* pending update ? loop */
jnz 5b
@ -80,7 +76,7 @@ __kernel_clock_gettime:
j 6b
7: stg %r0,0(%r3) /* store tp->tv_sec */
stg %r1,8(%r3) /* store tp->tv_nsec */
8: lghi %r2,0
lghi %r2,0
br %r14
/* CLOCK_THREAD_CPUTIME_ID for this thread */


@ -1317,19 +1317,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
return -EINVAL;
}
switch (kvm_run->exit_reason) {
case KVM_EXIT_S390_SIEIC:
case KVM_EXIT_UNKNOWN:
case KVM_EXIT_INTR:
case KVM_EXIT_S390_RESET:
case KVM_EXIT_S390_UCONTROL:
case KVM_EXIT_S390_TSCH:
case KVM_EXIT_DEBUG:
break;
default:
BUG();
}
vcpu->arch.sie_block->gpsw.mask = kvm_run->psw_mask;
vcpu->arch.sie_block->gpsw.addr = kvm_run->psw_addr;
if (kvm_run->kvm_dirty_regs & KVM_SYNC_PREFIX) {


@ -986,11 +986,21 @@ int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
pte_t *ptep;
down_read(&mm->mmap_sem);
retry:
ptep = get_locked_pte(current->mm, addr, &ptl);
if (unlikely(!ptep)) {
up_read(&mm->mmap_sem);
return -EFAULT;
}
if (!(pte_val(*ptep) & _PAGE_INVALID) &&
(pte_val(*ptep) & _PAGE_PROTECT)) {
pte_unmap_unlock(*ptep, ptl);
if (fixup_user_fault(current, mm, addr, FAULT_FLAG_WRITE)) {
up_read(&mm->mmap_sem);
return -EFAULT;
}
goto retry;
}
new = old = pgste_get_lock(ptep);
pgste_val(new) &= ~(PGSTE_GR_BIT | PGSTE_GC_BIT |

View File

@ -105,6 +105,8 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
get_page(page);
__flush_anon_page(page, addr);
flush_dcache_page(page);
pages[*nr] = page;
(*nr)++;


@ -23,6 +23,7 @@ config X86
def_bool y
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
select ARCH_HAS_FAST_MULTIPLIER
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select HAVE_AOUT if X86_32


@ -497,8 +497,6 @@ static __always_inline int fls64(__u64 x)
#include <asm-generic/bitops/sched.h>
#define ARCH_HAS_FAST_MULTIPLIER 1
#include <asm/arch_hweight.h>
#include <asm-generic/bitops/const_hweight.h>
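
ARCH_HAS_FAST_MULTIPLIER moves from a header define to a Kconfig select here; it is used by the generic hweight code to decide whether a population count may be finished with a single multiply. The following standalone sketch of that multiply-based fold is an approximation for illustration, not the kernel's exact implementation; the function name hweight32_sketch is invented.

#include <stdint.h>

/* Count set bits in a 32-bit word, folding the per-byte sums
 * with one multiply -- the step a "fast multiplier" makes cheap. */
static unsigned int hweight32_sketch(uint32_t w)
{
    w -= (w >> 1) & 0x55555555;                      /* 2-bit sums  */
    w  = (w & 0x33333333) + ((w >> 2) & 0x33333333); /* 4-bit sums  */
    w  = (w + (w >> 4)) & 0x0f0f0f0f;                /* 8-bit sums  */
    return (w * 0x01010101) >> 24;                   /* add the bytes */
}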
