Commit Graph

706465 Commits

Sebastian Ott 0dcd91a9e6 s390/debug: only write data once
debug_event_common memsets the active debug entry with zeros to
prevent stale data leakage. This is overwritten with the actual
debug data in the next step. Only write zeros to the part of the
debug entry that is not overwritten by the new debug data.

Micro benchmarks show a 2-10% reduction in CPU cycles with this
approach.
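
A minimal sketch of the idea (illustrative names, not the actual s390
debug code):

	/* Copy the new data first, then zero only the unused tail of the
	 * entry instead of clearing the whole buffer up front. */
	static void fill_debug_entry(char *buf, size_t buf_size,
				     const void *data, size_t len)
	{
		if (len > buf_size)
			len = buf_size;
		memcpy(buf, data, len);
		memset(buf + len, 0, buf_size - len);
	}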

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-18 14:11:23 +02:00
Sebastian Ott 94158e544f s390/debug: improve debug_event
debug_event currently truncates the data if used with a size larger than
the buf_size of the debug feature. For lots of callers of this function,
wrappers have been implemented that loop until all data is handled.

Move that functionality into debug_event_common and get rid of the wrappers.
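
A sketch of the chunking that now lives in debug_event_common (the helper
name and buffer size handling are illustrative, only the loop matters):

	/* Emit the payload in buf_size-sized pieces instead of truncating. */
	static void debug_event_chunked(const void *data, size_t len,
					size_t buf_size)
	{
		const char *p = data;

		while (len > 0) {
			size_t chunk = len < buf_size ? len : buf_size;

			write_one_debug_entry(p, chunk); /* hypothetical helper */
			p += chunk;
			len -= chunk;
		}
	}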

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-18 14:11:19 +02:00
Philipp Rudo 7c3eaaa391 s390/kexec: Fix checksum validation return code for kdump
Before kexec boots to a crash kernel it checks whether the image in memory
changed after load. This is done by the function kdump_csum_valid, which
returns true, i.e. an int != 0, on success and 0 otherwise. In other words,
when kdump_csum_valid returns an error code it means that the validation
succeeded. This is not only counterintuitive but also produces the wrong
result if the kernel was built without CONFIG_CRASH_DUMP. Fix this by
making kdump_csum_valid return a bool.
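
A sketch of the resulting interface (the function name follows the text
above, the bodies are purely illustrative):

	#ifdef CONFIG_CRASH_DUMP
	static bool kdump_csum_valid(struct kimage *image)
	{
		/* ... recompute and compare the image checksum ... */
		return true;	/* true means the image is unchanged */
	}
	#else
	static inline bool kdump_csum_valid(struct kimage *image)
	{
		return false;	/* no kdump support built in */
	}
	#endif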

Signed-off-by: Philipp Rudo <prudo@linux.vnet.ibm.com>
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-18 14:11:16 +02:00
Martin Schwidefsky 7f581d03bd Some improvements in data handling for vfio-ccw.

Merge tag 'vfio-ccw-20171016' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw into features

Pull vfio-ccw update from Cornelia Huck:
"Some improvements in data handling for vfio-ccw."
2017-10-16 12:43:12 +02:00
Dong Jia Shi 4cebc5d6a6 vfio: ccw: validate the count field of a ccw before pinning
If the count field of a ccw is zero, there is no need to
try to pin page(s) for it. Let's check the count value
before starting pinning operations.
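
The added guard is essentially the following sketch (struct ccw1 carries a
16-bit count field):

	if (!ccw->count)
		return 0;	/* no data to transfer, nothing to pin */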

Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Message-Id: <20171011023822.42948-3-bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-16 11:14:48 +02:00
Dong Jia Shi 688c29533f vfio: ccw: bypass bad idaw address when fetching IDAL ccws
We currently return the same error code (-EFAULT) to indicate two
different error cases:
1. a bug in the vfio-ccw implementation has been found.
2. a buggy channel program has been detected.

This makes it difficult for a userland program (specifically QEMU) to
handle.

Let's use -EFAULT to only indicate the first case. For the second
case, we simply hand over the ccws to lower level for further
handling.

Notice:
Once a bad idaw address is detected, the current behavior is to
suppress the ssch. With this fix, the channel program will be
accepted, and part of the channel program (the part ahead of
the bad idaw) could possibly be executed by the device before
I/O conclusion.

Suggested-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Message-Id: <20171011023822.42948-2-bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-16 11:10:13 +02:00
Martin Schwidefsky fe3af62553 s390: update defconfig
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-16 08:19:28 +02:00
Heiko Carstens 496da0d706 s390/debug: adjust coding style
The debug feature code hasn't been touched in ages, and the code shows
its age. Therefore clean up the code so it looks a bit more
like current coding style.

There is no functional change - I also made sure that the code
generated with performance_defconfig is identical:
a diff of old vs new with "objdump -d" is empty.

The code is still not checkpatch clean, but that was not the goal.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-16 08:19:26 +02:00
Vasyl Gomonovych 0bb6bba5fb s390/pkey: fix kzalloc-simple.cocci warnings
drivers/s390/crypto/pkey_api.c:128:11-18: WARNING:
kzalloc should be used for cprbmem, instead of kmalloc/memset

Use kzalloc rather than kmalloc followed by memset with 0
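
The replaced pattern, in sketch form (the size expression is illustrative):

	/* before */
	cprbmem = kmalloc(cprbmem_size, GFP_KERNEL);
	if (!cprbmem)
		return -ENOMEM;
	memset(cprbmem, 0, cprbmem_size);

	/* after */
	cprbmem = kzalloc(cprbmem_size, GFP_KERNEL);
	if (!cprbmem)
		return -ENOMEM;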

Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-16 08:19:23 +02:00
Heiko Carstens df8bbd0c98 s390/kprobes: remove KPROBE_SWAP_INST state
For an unknown reason the s390 kprobes instruction replacement
function modifies the kprobe_status of the current CPU to
KPROBE_SWAP_INST. This was supposed to catch traps that happened
during instruction patching. Such a fault is not supposed to happen,
and silently discarding such a fault is certainly also not what we
want. In fact s390 is the only architecture which has this odd piece
of code.

Just remove this and behave like all other architectures. This was
pointed out by Jens Remus.

Reported-by: Jens Remus <jremus@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-12 07:16:50 +02:00
Heiko Carstens 49913f1fd0 s390: cleanup string ops prototypes
Just some trivial changes like removing the extern keyword from the
header file, renaming arguments to match the man pages, and whitespace
removal.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-09 11:18:08 +02:00
Heiko Carstens 993fef95b9 s390: optimize memset implementation
As with the memset16/32/64 variants, avoid making subsequent mvc
instructions depend on each other, since that might have a negative
performance impact.

This patch is currently of little relevance since at least gcc 7.1
generates only inline memset code and not a single memset call.
However, there is no reason not to provide an optimized version
in case gcc generates memset calls again, as it did in
the past.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-09 11:18:07 +02:00
Heiko Carstens 41879ff65d s390/mm: use memset64 instead of clear_table
Use memset64 instead of the (now) open-coded variant clear_table.
Performance-wise there is no difference.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-09 11:18:06 +02:00
Heiko Carstens 0b77d6701c s390: implement memset16, memset32 & memset64
Provide fast versions of the new memset variants. E.g. the generic
memset64 is ten times slower than the optimized version if used on a
whole page.
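
Usage sketch - note that the count argument of the memset16/32/64 family is
the number of elements, not bytes (the fill pattern is arbitrary):

	u64 *p = page_address(page);

	memset64(p, 0x0123456789abcdefULL, PAGE_SIZE / sizeof(u64));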

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-09 11:18:04 +02:00
Martin Schwidefsky 3bdf5679c9 Merge branch 'sthyi' into features
Add the store-hypervisor-information code into features
using a tip branch for parallel merging into the KVM tree.
2017-10-09 11:16:49 +02:00
QingFeng Hao 3d8757b87d s390/sthyi: add s390_sthyi system call
Add the s390_sthyi system call to implement the STHYI instruction in an
LPAR, reusing the implementation for KVM by Janosch Frank -
commit 95ca2cb579 ("KVM: s390: Add sthyi emulation").

STHYI (Store Hypervisor Information) is an emulated z/VM instruction that
provides a guest with basic information about the layers it is running
on. This includes information about the CPU configuration of both the
machine and the LPAR, as well as their names, machine model and
machine type. This information enables an application to determine the
maximum capacity of CPs and IFLs available to software.

For the arguments of s390_sthyi, code shall be 0, flags is reserved for
future use, and info is the output argument that receives the requested
hypervisor information.
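
A userspace sketch of invoking the new system call via syscall(2); the
__NR_s390_sthyi macro and the exact argument order are assumptions based
on the description above:

	#include <stdint.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	int main(void)
	{
		uint64_t rc = 0;
		void *info;

		if (posix_memalign(&info, 4096, 4096)) /* response buffer */
			return 1;
		/* code 0, flags reserved (0), info receives the data */
		if (syscall(__NR_s390_sthyi, 0UL, info, &rc, 0UL))
			return 1;
		return 0;
	}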

Signed-off-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-09 11:15:36 +02:00
QingFeng Hao 9fb6c9b3fe s390/sthyi: add cache to store hypervisor info
STHYI requires extensive locking in the higher-level hypervisors and is
very expensive in terms of computation and memory. Therefore cache the
retrieved hypervisor info, which stays valid for 1s, and protect it with
a mutex to allow concurrent access. An rw semaphore cannot help here due
to cache line bouncing.
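
A sketch of the caching scheme (names and helpers are illustrative, not the
actual sthyi code; run_sthyi stands in for the expensive retrieval):

	static DEFINE_MUTEX(sthyi_mutex);
	static void *sthyi_cache;		/* preallocated page, assumed */
	static unsigned long sthyi_valid_until;

	static int sthyi_cached_fill(void *dst)
	{
		int rc = 0;

		mutex_lock(&sthyi_mutex);
		if (time_after(jiffies, sthyi_valid_until)) {
			rc = run_sthyi(sthyi_cache);	/* expensive call */
			if (!rc)
				sthyi_valid_until = jiffies + HZ; /* 1s */
		}
		if (!rc)
			memcpy(dst, sthyi_cache, PAGE_SIZE);
		mutex_unlock(&sthyi_mutex);
		return rc;
	}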

Signed-off-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-09 11:15:35 +02:00
QingFeng Hao b7c92f1a4e s390/sthyi: reorganize sthyi implementation
As we need to support the sthyi instruction on LPAR too, move the common
code into the kernel and the KVM-related code to intercept.c for better reuse.

Signed-off-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-09 11:15:33 +02:00
Heiko Carstens 91a1fad759 s390: use generic rwsem implementation
We never optimized our rwsem inline assemblies to make use of the new
atomic instructions. The generic rwsem implementation implicitly makes
use of the new instructions, since it implements the required rwsem
primitives with atomic operations, which we did optimize.

However even when compiling for old architectures the generic variant
still generates better code. So it's time to simply remove our old
code and switch to the generic implementation.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-10-04 10:30:31 +02:00
Heiko Carstens e0d281d067 s390/disassembler: add new z14 instructions
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:52:24 +02:00
Heiko Carstens ea7c360b10 s390/disassembler: add missing z13 instructions
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:52:18 +02:00
Heiko Carstens 630f789e80 s390/disassembler: add sthyi instruction
This instruction came with a z/VM extension and not with a specific
machine generation.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:52:11 +02:00
Heiko Carstens 7e1263b720 s390/disassembler: remove double instructions
Remove a couple of instructions that are listed twice.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:52:06 +02:00
Heiko Carstens caefea1d3a s390/disassembler: fix LRDFU format
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:52:03 +02:00
Heiko Carstens 5c50538752 s390/disassembler: add missing end marker for e7 table
The e7 opcode table does not have an end marker. Hence when trying to
find an unknown e7 instruction the code will access memory behind the
table until it finds something that matches the opcode, or the kernel
crashes, whichever comes first.

This affects not only the in-kernel disassembler but also uprobes and
kprobes which refuse to set a probe on unknown instructions, and
therefore search the opcode tables to figure out if instructions are
known or not.

Cc: <stable@vger.kernel.org> # v3.18+
Fixes: 3585cb0280 ("s390/disassembler: add vector instructions")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:51:59 +02:00
Heiko Carstens d67ce18eb6 s390/virtio: simplify Makefile
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:51:53 +02:00
Thomas Huth 7fb2b2d512 s390/virtio: remove the old KVM virtio transport
There is no recent user space application available anymore which still
supports this old virtio transport. Additionally, commit 3b2fbb3f06
("virtio/s390: deprecate old transport") introduced a deprecation message
in the driver, and so far apparently nobody has complained that it is
still required. So let's simply remove it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:51:35 +02:00
Julian Wiedmann f9a5d70cfa s390/ccwgroup: tie a ccwgroup driver to its ccw driver
When grouping devices, the ccwgroup core only checks whether all of the
devices are bound to the same ccw_driver. It has no means of checking
if the requesting ccwgroup driver actually supports this device type.
qeth implements its own device matching in qeth_core_probe_device(),
while ctcm and lcs currently have no sanity-checking at all.

Enable ccwgroup drivers to optionally defer the device type checking to
the ccwgroup core by specifying their supported ccw_driver.
This allows us to drop the device type matching from qeth, and improves
the robustness of ctcm and lcs.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Acked-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:51:30 +02:00
Harald Freudenberger bf7fa03870 s390/crypto: add s390 platform specific aes gcm support.
This patch introduces gcm(aes) support into the aes_s390 kernel module.

Signed-off-by: Patrick Steuer <patrick.steuer@de.ibm.com>
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:51:25 +02:00
Patrick Steuer eecd49c462 s390/crypto: add inline assembly for KMA instruction to cpacf.h
Signed-off-by: Patrick Steuer <patrick.steuer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-29 15:51:19 +02:00
Martin Schwidefsky eb3b7b848f s390/rwlock: introduce rwlock wait queueing
Like the common queued rwlock code the s390 implementation uses the
queued spinlock code on a spinlock_t embedded in the rwlock_t to achieve
the queueing. The encoding of the rwlock_t differs though: the counter
field in the rwlock_t is split into two parts. The upper two bytes hold
the write bit and the write wait counter, the lower two bytes hold the
read counter.

The arch_read_lock operation works exactly like the common qrwlock but
the enqueue operation for a writer follows a different logic. After the
failed inline try to get the rwlock in write, the writer first increases
the write wait counter, acquires the wait spin_lock for the queueing,
and then loops until there are no readers and the write bit is zero.
Without the write wait counter a CPU that just released the rwlock
could immediately reacquire the lock in the inline code, bypassing all
outstanding read and write waiters. For s390 this would cause massive
imbalances in favour of writers in case of a contended rwlock.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:44 +02:00
Martin Schwidefsky b96f7d881a s390/spinlock: introduce spinlock wait queueing
The queued spinlock code for s390 follows the principles of the common
code qspinlock implementation but with a few notable differences.

The format of the spinlock_t locking word differs: s390 needs to store
the logical CPU number of the lock holder in the spinlock_t to be able
to use the diagnose 9c directed yield hypervisor call.

The inline code sequences for spin_lock and spin_unlock are nice and
short. The inline portion of a spin_lock now typically looks like this:

	lhi	%r0,0			# 0 indicates an empty lock
	l	%r1,0x3a0		# CPU number + 1 from lowcore
	cs	%r0,%r1,<some_lock>	# lock operation
	jnz	call_wait		# on failure call wait function
locked:
	...
call_wait:
	la	%r2,<some_lock>
	brasl	%r14,arch_spin_lock_wait
	j	locked

A spin_unlock is as simple as before:

	lhi	%r0,0
	sth	%r0,2(%r2)		# unlock operation

After a CPU has queued itself it may not enable interrupts again for the
arch_spin_lock_flags() variant. The arch_spin_lock_wait_flags wait function
is removed.

To improve performance the code implements opportunistic lock stealing.
If the wait function finds a spinlock_t that indicates that the lock is
free but there are queued waiters, the CPU may steal the lock up to three
times without queueing itself. Lock stealing updates the steal counter
in the lock word to prevent more than three steals. The counter is reset at
the time the CPU next in the queue successfully takes the lock.

While the queued spinlocks improve performance in a system with dedicated
CPUs, in a virtualized environment with continuously overcommitted CPUs
the queued spinlocks can have a negative effect on performance. This
is due to the fact that a queued CPU that is preempted by the hypervisor
will block the queue at some point even without holding the lock. With
the classic spinlock it does not matter if a CPU is preempted that waits
for the lock. Therefore use the queued spinlock code only if the system
runs with dedicated CPUs and fall back to classic spinlocks when running
with shared CPUs.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:44 +02:00
Martin Schwidefsky 8153380379 s390/spinlock: use the cpu number +1 as spinlock value
The queued spinlock code will come out simpler if the encoding of
the CPU that holds the spinlock is (cpu+1) instead of (~cpu).

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:44 +02:00
Martin Schwidefsky 1887aa07b6 s390/topology: add detection of dedicated vs shared CPUs
The topology information returned by STSI 15.x.x contains a flag
if the CPUs of a topology-list are dedicated or shared. Make this
information available if the machine provides topology information.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:43 +02:00
Himanshu Jha 8179c7ba10 s390/sclp: Use setup_timer and mod_timer
Use the setup_timer and mod_timer APIs instead of open-coded structure
assignments.

This is done using Coccinelle; the semantic patch used
for this is as follows:

@@
expression x,y,z,a,b;
@@

-init_timer (&x);
+setup_timer (&x, y, z);
+mod_timer (&a, b);
-x.function = y;
-x.data = z;
-x.expires = b;
-add_timer(&a);
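
In C, the before/after transformation looks roughly like this (timer,
handler and expiry values are illustrative):

	/* before */
	init_timer(&my_timer);
	my_timer.function = my_timeout_fn;
	my_timer.data = 0UL;
	my_timer.expires = jiffies + HZ;
	add_timer(&my_timer);

	/* after */
	setup_timer(&my_timer, my_timeout_fn, 0UL);
	mod_timer(&my_timer, jiffies + HZ);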

Signed-off-by: Himanshu Jha <himanshujha199640@gmail.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:43 +02:00
Heiko Carstens 1922099979 s390/cpumf: remove superfluous nr_cpumask_bits check
Paul Burton reported that the nr_cpumask_bits check
within cpumsf_pmu_event_init() is not necessary.

Actually there is already a prior check within
perf_event_alloc(). Therefore remove the check.

Reported-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:43 +02:00
Harald Freudenberger 76b3138192 s390/zcrypt: Explicitly check input data length.
The function to prepare MEX type 50 ap messages did
not explicitly check for the data length in case of
data > 512 bytes. Instead the function assumes the
boundary check done in the ioctl function will always
reject requests with invalid data length values.
However, screening just the function code may give the
illusion that there is a gap which could be
exploited by userspace for buffer overwrite attacks.
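
The kind of explicit guard this adds, in sketch form (field name is
illustrative; 512 bytes is the limit mentioned above):

	if (mex->inputdatalength > 512)
		return -EINVAL;	/* reject oversized requests explicitly */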

Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:43 +02:00
Sebastian Ott 08c6df97d6 s390/cmf: use tod_to_ns()
Instead of open-coding TOD clock to ns conversions, use the timex helper.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:43 +02:00
Sebastian Ott cb09b356cd s390/cmf: avg_utilization
All (but one) cmf related sysfs attributes have been converted to read
directly from the measurement block using cmf_read. This is not
possible for the avg_utilization attribute since this is an aggregation
of several values for which cmf_read only returns average values.

Move the computation of the utilization value to the cmf_read interface
such that it can use the raw data.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:42 +02:00
Sebastian Ott d4d287e81f s390/cmf: read from hw buffer
To ensure data consistency when reading from the channel measurement
block we wait for the subchannel to become idle and copy the whole
block. This is unnecessary when all we do is export the individual
values via sysfs. Read the values for sysfs export directly from
the measurement block.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:42 +02:00
Sebastian Ott 81b050b564 s390/cmf: simplify copy_block
cmf_copy_block tries to ensure data consistency by copying the
channel measurement block twice and comparing the data. This was
needed on very old machines only. Nowadays we guarantee
consistency by copying the data when the channel is idle.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:42 +02:00
Sebastian Ott 60f3eac3a1 s390/cmf: simplify cmb_copy_wait
No need for refcounting - the data can be on the stack. Also change
the locking in this function to only use spin_lock_irq (the
function waits, thus it's never called from IRQ context).

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:42 +02:00
Sebastian Ott eeec1e435f s390/cmf: simplify set_schib_wait
No need for refcounting - the data can be on the stack.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:41 +02:00
Sebastian Ott adc69b4d76 s390/cmf: set_schib_wait add timeout
When enabling channel measurement fails with a busy condition we wait
for the next interrupt to arrive before we retry the operation. For
devices which usually don't generate interrupts, we wait forever.

Although the waiting is done interruptibly, that behavior is unexpected
and has confused some users. Abort the operation after a 10s
timeout.
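
A sketch of the bounded wait (waitqueue and condition are illustrative; the
10s value is from the text above):

	long ret;

	ret = wait_event_interruptible_timeout(cmf_waitq, cmf_idle(cdev),
					       10 * HZ);
	if (ret == 0)
		return -EBUSY;	/* timed out instead of waiting forever */
	if (ret < 0)
		return ret;	/* interrupted by a signal */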

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:41 +02:00
Jean Delvare b08e19defc s390/char: fix cdev_add usage
Function cdev_add does set cdev->dev, so there is no point in setting
it prior to calling this function.
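
In sketch form (hypothetical names):

	cdev_init(&my_cdev, &my_fops);
	/* my_cdev.dev = devno;	 <- redundant, cdev_add() sets this */
	rc = cdev_add(&my_cdev, devno, 1);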

Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:41 +02:00
Johannes Thumshirn e16c5dd515 samples/kprobes: Add s390 case in kprobe example module
Add info prints in sample kprobe handlers for S/390

Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:41 +02:00
Alice Frosi 262832bc5a s390/ptrace: add runtime instrumention register get/set
Add runtime instrumentation register get and set, which allow reading
and modifying the runtime instrumentation control block.

Signed-off-by: Alice Frosi <alice@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:41 +02:00
Alice Frosi bb59c2da3f s390/runtime_instrumentation: clean up struct runtime_instr_cb
Update runtime_instr_cb structure to be consistent with the runtime
instrumentation documentation.

Signed-off-by: Alice Frosi <alice@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:40 +02:00
Heiko Carstens 79962038df s390: add support for FORTIFY_SOURCE
This is the quite trivial backend for s390 which is required to enable
FORTIFY_SOURCE support.

See commit 6974f0c455 ("include/linux/string.h: add the option of
fortified string.h functions") for more details.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:40 +02:00
Heiko Carstens 59a19ea9a0 s390: get rid of exit_thread()
exit_thread() is empty now. Therefore remove it and get rid of a
pointless branch.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-09-28 07:29:40 +02:00