-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJbdbIoAAoJENSXKoln91pl9KwIAJQ0CB4V/eRlvPOj828qLJyt
FxCETpCKBD1mSRX80E2HdYdaj8KfhhKrd9R2Wrk2VsiTQhLA0+hYsLxuT8VfPjsX
8WSN7egxzzA8Hzm1+wbwFCDNL13xjuhHWPk9YA1RYS6eQJot/Y27KYfvt/OPTrBO
jGaTmvXOE9q3qXZczP2TwipYkehg1+Ss5ZJl7dUmHc3Iek5t/H+kumkhBPoJqwAQ
wcwQuPm2KRGLuq9aB2IVbtfJA5Rme1GMFDWZicitJX2uK+Mx76fgi9IorrW6XAyM
6LM8DjvrivxW+UgZ+3QdJKkQCbyIrXqDAp1MRd3CWMlB576WWKm6RUTZclSo9gI=
=m1CJ
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-aug-2018' into staging
MIPS queue Aug 16, 2018
# gpg: Signature made Thu 16 Aug 2018 18:19:36 BST
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-aug-2018:
qemu-doc: Amend MIPS-related items
linux-user: Add preprocessor availability control to some syscalls
linux-user: Update MIPS syscall numbers up to kernel 4.18 headers
elf: Add ELF flags for MIPS machine variants
elf: Remove duplicate preprocessor constant definition
target/mips: Check ELPA flag only in some cases of MFHC0 and MTHC0
target/mips: Don't update BadVAddr register in Debug Mode
target/mips: Implement CP0 Config1.WR bit functionality
target/mips: Add CP0 BadInstrX register
target/mips: Update some CP0 registers bit definitions
target/mips: Fix two instances of shadow variables
target/mips: Mark switch fallthroughs with interpretable comments
target/mips: Avoid case statements formulated by ranges - part 2
target/mips: Avoid case statements formulated by ranges - part 1
MAINTAINERS: Update target/mips maintainer's email addresses
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the ability for target platforms to individually include user-mode
support for system calls from the "stat" group of system calls.
This change is related to the new nanoMIPS platform in the sense that
it supports a different set of "stat" system calls than any other
target. nanoMIPS does not support the stat and stat64 structures at
all. Support for a number of other system calls (mostly obsolete ones)
is also dropped in nanoMIPS.
Without this patch, the build for nanoMIPS would fail.
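The mechanism is the usual preprocessor guard in linux-user/syscall.c:
a handler is only compiled when the target defines the corresponding
TARGET_NR_* number. A minimal sketch of the pattern (handler body
elided; not the literal patch):

#ifdef TARGET_NR_stat        /* not defined for nanoMIPS */
    case TARGET_NR_stat:
        /* ... convert arguments and forward to the host stat() ... */
        break;
#endif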
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Synchronize content of linux-user/mips/syscall_nr.h and
linux-user/mips64/syscall_nr.h with Linux kernel 4.18 headers.
This adds 9 new syscall numbers, the last being NR_io_pgetevents.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
This allows the default (and maximum) vector length to be set
from the command line, which is extraordinarily helpful when
debugging problems that depend on vector length, without having to
bake knowledge of PR_SET_SVE_VL into every guest binary.
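For reference, this is roughly what a guest binary would otherwise have
to do for itself (a hedged sketch; the kernel constant is spelled
PR_SVE_SET_VL, and 256 is just an example vector length in bytes):

#include <stdio.h>
#include <sys/prctl.h>
#ifndef PR_SVE_SET_VL
#define PR_SVE_SET_VL 50            /* from linux/prctl.h */
#endif

int main(void)
{
    /* ask for a 256-byte SVE vector length; the kernel (or qemu)
     * clamps this to what is actually supported */
    if (prctl(PR_SVE_SET_VL, 256) < 0) {
        perror("PR_SVE_SET_VL");
        return 1;
    }
    return 0;
}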
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
r11 is a volatile register on PPC, as per the calling conventions.
The safe_syscall code uses it to check whether signal_pending
is set during the safe_syscall. When a syscall is interrupted
on return from signal handling, r11 might be corrupted
before we retry the syscall, leading to a crash. Registers
r0-r13 cannot be used here, as they have
volatile/designated/reserved usages.
Change the code to use r14, which is non-volatile.
Use SP+16, which is a slot for the LR, for the save/restore of the
previous value of r14. SP+16 can be used because the LR is preserved
across the syscall.
Steps to reproduce:
On a PPC host, issue `qemu-x86_64 /usr/bin/cc -E -`
Attempt Ctrl-C; the issue is reproduced.
Reference:
https://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi-1.9.html#REG
https://openpowerfoundation.org/wp-content/uploads/2016/03/ABI64BitOpenPOWERv1.1_16July2015_pub4.pdf
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Tested-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <153301568965.30312.10498134581068746871.stgit@dhcp-9-109-246-16>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
I've slightly re-organised the check to more closely match the
sequence that the kernel uses in do_mmap(). We check for both the zero
case (EINVAL) and the overflow length case (ENOMEM).
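In outline, the reordered check looks something like this (a sketch of
the idea, not the literal hunk):

    if (len == 0) {
        errno = EINVAL;
        goto fail;
    }
    /* aligning the length can wrap it to 0 for absurdly large requests */
    len = TARGET_PAGE_ALIGN(len);
    if (len == 0) {
        errno = ENOMEM;
        goto fail;
    }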
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: umarcor <1783362@bugs.launchpad.net>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180730134321.19898-2-alex.bennee@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This allows the tests generated by debian-powerpc-user-cross
to function properly, especially tests/test-coroutine.
Technically this syscall is available to both ppc32 and ppc64,
but only ppc32 glibc actually uses it. Thus the ppc64 path is
untested.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180718200648.22529-1-richard.henderson@linaro.org>
When we try to use some targets on ppc64, it can happen that the target
doesn't support the host page size for aligning ELF load sections, and
the load fails with:
ELF load command alignment not page-aligned
Since commit a70daba377 ("linux-user: Tell guest about big host
page sizes") the host page size is used to align ELF sections, but
this doesn't work if the alignment required by the load section is
smaller than the host one. For these cases, continue to use
TARGET_PAGE_SIZE instead of the host page size.
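The gist of the fix, as a hedged sketch (variable names are
illustrative, not the exact hunk):

    /* 'seg_align' stands for the alignment required by the ELF load
     * segment (p_align); fall back to the target page size when it is
     * smaller than the host page size */
    abi_ulong align = qemu_host_page_size;
    if (seg_align < qemu_host_page_size) {
        align = TARGET_PAGE_SIZE;
    }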
I have tested this change on ppc64, and it fixes qemu linux-user for:
s390x, m68k, i386, arm, aarch64, hppa
and I have tested it doesn't break the following targets:
x86_64, mips64el, sh4
mips and mipsel abort, but I think for another reason.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[lv: fixed "info->alignment = 0"]
Message-Id: <20180716195349.29959-1-laurent@vivier.eu>
If this is not done, qemu would drop any control message after the first
one.
This is because glibc's `CMSG_NXTHDR` macro accesses the uninitialized
cmsghdr's length field in order to find out if the message fits into the
`msg_control` buffer, wrongly assuming that it doesn't because the
length field contains garbage. Accessing the length field is fine for
completed messages we receive from the kernel, but is - as far as I know
- not needed since the kernel won't return such an invalid cmsghdr in
the first place.
This is tracked as this glibc bug:
https://sourceware.org/bugzilla/show_bug.cgi?id=13500
It's probably also a good idea to bail out with an error if `CMSG_NXTHDR`
returns NULL but `TARGET_CMSG_NXTHDR` doesn't (i.e. we still expect
cmsgs).
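One way to address this, as a minimal sketch of the idea (assuming
msgh points at the host struct msghdr being filled in; not necessarily
the exact patch):

    /* zero the control buffer before building cmsgs into it, so that
     * glibc's CMSG_NXTHDR never reads a garbage cmsg_len field */
    memset(msgh->msg_control, 0, msgh->msg_controllen);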
Signed-off-by: Jonas Schievink <jonasschievink@gmail.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180711221244.31869-1-jonasschievink@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The value given by mmap_find_vma_reserved() is used with mmap(),
so it needs to be aligned to the host page size.
Since commit 18e80c55bb, reserved_va is only aligned to TARGET_PAGE_SIZE,
which works well if this size is greater than or equal to the host page
size. But ppc64 hosts have a 64 KiB page size, and when we start a guest
with a 4 KiB page size (like i386), it fails when it tries to mmap the
stack:
mmap stack: Invalid argument
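The shape of the fix, as a hedged sketch (names illustrative): round
the address handed back for mmap() up to whichever page size is larger.

    /* mmap() needs host-page alignment even when TARGET_PAGE_SIZE is
     * smaller than the host page size */
    addr = ROUND_UP(addr, MAX(TARGET_PAGE_SIZE, qemu_host_page_size));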
Fixes: 18e80c55bb (linux-user: Tidy and enforce reserved_va initialization)
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180714193553.30846-1-laurent@vivier.eu>
Commit 435da5e709 didn't convert a fcntl() call to safe_fcntl() in the
TARGET_NR_fcntl64 case. There is no reason not to use it here.
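The change amounts to routing the call through the restartable wrapper,
along these lines (argument names illustrative):

    /* inside the TARGET_NR_fcntl64 handler */
    ret = get_errno(safe_fcntl(fd, host_cmd, arg));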
Fixes: 435da5e709 linux-user: Use safe_syscall wrapper for fcntl
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180713125805.10749-1-laurent@vivier.eu>
QEMU includes the glibc headers for the host defines, while the target
headers are part of the QEMU sources themselves. glibc defines
F_GETLK64, F_SETLK64 and F_SETLKW64 to 12, 13 and 14 for all archs in
sysdeps/unix/sysv/linux/bits/fcntl-linux.h. The Linux kernel's generic
definitions for F_*LK are 5, 6 and 7, and for F_*LK64* they are 12, 13
and 14, as seen in include/uapi/asm-generic/fcntl.h. On a 64-bit
machine the kernel by default treats all F_*LK commands as 64-bit calls
and does not support the use of F_*LK64*, as can be seen in
include/linux/fcntl.h in the Linux sources.
On an x86_64 host, glibc explicitly sets the values for F_*LK64* to 5,
6 and 7 in /usr/include/x86_64-linux-gnu/bits/fcntl.h. A PPC64 host,
however, has no such definition in
/usr/include/powerpc64le-linux-gnu/bits/fcntl.h, so the sources on a
PPC64 host see the default F_*LK64* values of 12, 13 and 14 (from
fcntl-linux.h).
Since the 64-bit kernel doesn't support 12, 13 and 14, the glibc fcntl
syscall implementation (__libc_fcntl*(), __fcntl64_nocancel) converts
the F_*LK64* values back to the F_*LK* values on PPC64, as seen in
sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h with the
FCNTL_ADJUST_CMD() macro. On an x86_64 host the values for F_*LK64* are
already 5, 6 and 7, so no adjustment is needed.
Since QEMU doesn't use the glibc fcntl but issues the safe_syscall* on
its own, QEMU on PPC64 makes the syscall with 12, 13 and 14 (without
adjustment) and they all fail. The fcntl calls for F_GETLK/F_SETLK|W
therefore fail for every application run under PPC64 host user
emulation.
One possible fix would be to look at why glibc on PPC64 still keeps
F_*LK64* different from F_*LK and adjusts them to 5, 6 and 7 before the
syscall only for PPC: we could make
/usr/include/powerpc64le-linux-gnu/bits/fcntl.h carry the values 5, 6
and 7 just like x86_64 and remove the adjustment code in glibc. That
way, the QEMU sources would see the kernel-supported values in the
glibc headers.
OR
On a PPC64 host, the QEMU sources see both F_*LK and F_*LK64* as the
same, set to 12, 13 and 14, because __USE_FILE_OFFSET64 is defined in
the QEMU sources (also see sysdeps/unix/sysv/linux/bits/fcntl-linux.h).
Do the value adjustment just like the glibc source does, using the
F_GETLK value of 5. That way, we make the syscalls with the values the
kernel actually supports. The patch takes this approach.
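Concretely, the adjustment is in the spirit of glibc's
FCNTL_ADJUST_CMD(): before issuing the syscall, map the *64 lock
commands onto the 5/6/7 values the 64-bit kernel accepts. A hedged
sketch (not the exact hunk):

#if defined(__powerpc64__)
    /* glibc's headers give 12/13/14 here, but the 64-bit kernel only
     * accepts 5/6/7 for the lock commands */
    if (host_cmd >= F_GETLK64 && host_cmd <= F_SETLKW64) {
        host_cmd -= F_GETLK64 - 5;      /* 5 is the kernel's F_GETLK */
    }
#endif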
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <153148521235.87746.14142430397318741182.stgit@lep8c.aus.stglabs.ibm.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This can still be reported using the "-d unimp" command line option.
Code change produced with:
git ls-files linux-user | \
xargs sed -i -E 's/fprintf\(stderr,\s?(".*not implemented\\n")\);/qemu_log_mask(LOG_UNIMP, \1);/g'
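The substitution, shown on one representative (hypothetical) call site;
qemu_log_mask() and LOG_UNIMP come from "qemu/log.h":

    /* before: always spams stderr */
    fprintf(stderr, "setsockopt: optname not implemented\n");
    /* after: only printed when '-d unimp' enables the LOG_UNIMP mask */
    qemu_log_mask(LOG_UNIMP, "setsockopt: optname not implemented\n");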
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180706155127.7483-3-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This can still be reported using the "-d unimp" command line option.
Fixes: https://bugs.launchpad.net/qemu/+bug/1777226
Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180706155127.7483-2-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
As we don't always take the normal exit path when running a guest, we
can skip the normal exit destructors where gcov normally dumps its
info. The GCC manual suggests that long-running programs use
__gcov_dump() to flush out the coverage state periodically, so we use
that here.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
To avoid repeating ourselves, move our pre-exit clean-up code into a
helper function. I figured the continuing effort to split up the
syscalls made it worthwhile to create a new file for it now.
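Put together, the helper ends up looking roughly like this (a sketch,
assuming CONFIG_GCOV is the build-time flag for coverage builds):

#ifdef CONFIG_GCOV
extern void __gcov_dump(void);
#endif

void preexit_cleanup(CPUArchState *cpu_env, int code)
{
#ifdef CONFIG_GCOV
    /* flush coverage counters now; exit-time destructors may not run */
    __gcov_dump();
#endif
    gdb_exit(cpu_env, code);
}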
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Mostly patches from Richard Henderson fixing multiple things:
* Fix singlestepping in GDB.
* Use more TB linking.
* Fixes to exit TB after updating SPRs to enable registering of state
changes.
* Significant optimizations and refactors to the TLB
* Split out disassembly from translation.
* Add qemu-or1k to qemu-binfmt-conf.sh.
* Implement signal handling for linux-user.
Then there are a few fixups from me:
* Fix delay slot detection to match hardware; this was masking a bug
in the Linux kernel.
* Fix stores to the PIC mask register
-----BEGIN PGP SIGNATURE-----
iQIcBAABAgAGBQJbO32qAAoJEMOzHC1eZifkdHwP/2zV/dE3A0XvEynghJU4XeVe
KlNRupCjp2civk9d9E+BJwIOVDMPfBQbBKfGC2fjzBGOuop8ZjUvvUuazNTEQoov
9RfeXPMkP8xJUzGp02Gl87ZcMY9ZXJrqlPb2BaJ//8f/E0CF+91ODnkeLK62UXnb
EbBCf5IlJy/B6Fp9icfdE09/nYx6SmQHPJZo9nC8xiWNZ8LewXn+DWGH81EHd8w1
j99FV5ijImwB/LNOP6aVelyyKV9ZpInI6ZqC1LztWWaZftJ42TuvUq4vboP1P2s9
vC9RV5oWl3/DL9HQxEphrynqKNvrxcceoQhxXirEzbLeYG83Tx1ed2a7J3x83gtY
reChNmnwRuuchCot3cK4xDn2e0dY87dT24wtBM9HNmLsGgEzudZuGPwdlHrBoRFP
o3exRItbfFr6SFdgUZaMjeC0vRSVU/FPqPRswJESWelEEMCi1R/CeQFKaw8BmaOG
rw9Ed1rtX23R1Ce/ggEQgkxh2cWGyV1Tc0q5M09UDDmHKq0pd27d7CLlVMYsqZqk
8guPjPzsYP12vNFTjx6tC8inOwJalK7kDVJv1a+c/bpyqaulcT22o7ck8rqlCZbU
wpQbhAAGbbVrwnjQevO11MZsXk+6FANw6KIXxEDdDVbfqQ+WLtHOfXsUkG/xVUuR
K1hiEeYKl77HCrDFkhqB
=mbu2
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/shorne/tags/pull-or-20180703' into staging
OpenRISC cleanups and Fixes for QEMU 3.0
Mostly patches from Richard Henderson fixing multiple things:
* Fix singlestepping in GDB.
* Use more TB linking.
* Fixes to exit TB after updating SPRs to enable registering of state
changes.
* Significant optimizations and refactors to the TLB
* Split out disassembly from translation.
* Add qemu-or1k to qemu-binfmt-conf.sh.
* Implement signal handling for linux-user.
Then there are a few fixups from me:
* Fix delay slot detection to match hardware; this was masking a bug
in the Linux kernel.
* Fix stores to the PIC mask register
# gpg: Signature made Tue 03 Jul 2018 14:44:10 BST
# gpg: using RSA key C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* remotes/shorne/tags/pull-or-20180703: (25 commits)
target/openrisc: Fix writes to interrupt mask register
target/openrisc: Fix delay slot exception flag to match spec
linux-user: Fix struct sigaltstack for openrisc
linux-user: Implement signals for openrisc
target/openrisc: Add support in scripts/qemu-binfmt-conf.sh
target/openrisc: Reorg tlb lookup
target/openrisc: Increase the TLB size
target/openrisc: Stub out handle_mmu_fault for softmmu
target/openrisc: Use identical sizes for ITLB and DTLB
target/openrisc: Fix cpu_mmu_index
target/openrisc: Fix tlb flushing in mtspr
target/openrisc: Reduce tlb to a single dimension
target/openrisc: Merge mmu_helper.c into mmu.c
target/openrisc: Remove indirect function calls for mmu
target/openrisc: Merge tlb allocation into CPUOpenRISCState
target/openrisc: Form the spr index from tcg
target/openrisc: Exit the TB after l.mtspr
target/openrisc: Split out is_user
target/openrisc: Link more translation blocks
target/openrisc: Fix singlestep_enabled
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Here's a last minute pull request before today's soft freeze. Ideally
I would have sent this earlier, but I was waiting for a couple of
extra fixes I knew were close. And the freeze crept up on me, like
always.
Most of the changes here are bugfixes in any case. There are some
cleanups as well, which have been in my staging tree for a little
while. There are a couple of truly new features (some extensions to
the sam460ex platform), but these are low risk, since they only affect
a new and not really stabilized machine type anyway.
Highlights are:
* Mac platform improvements from Mark Cave-Ayland
* Sam460ex improvements from BALATON Zoltan et al.
* XICS interrupt handler cleanups from Cédric Le Goater
* TCG improvements for atomic loads and stores from Richard
Henderson
* Assorted other bugfixes
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAls7D8oACgkQbDjKyiDZ
s5Lxmg//YzPfC/nKqTTKkyJPzh/NnSC+kRTMAT3mbxdRIc7yfgMqJtWGGbS1iKgK
EeJ9hl5Qm0HfscfDuzf0xasU62ZEv3kNdLnWJEIgkqiXrxoO5KCnC0y4D8NN1W03
mvINNCa8+QDg2OsirGmNUTkriiG3wLIrHTpLZ4+JuC2Bd9H3nTHZgJ0MXON/1VWY
oRgr6kMZ5+IAzPhvYLFR6l3nPI883fgJOFyRo7YqYrkVBKFrFkfK0Xjw6vpsNxcx
2dE/YCHhNIriLuBG5noewL7GuqZRtLnl6rjjee5VAKIe1EmFeR+jsXwNjzGOVOJg
dhjOtsJsQQ3WdEw5uImJzE64kV228WCgmkeXzZd1010JBLr7sUkrd2EuoZ23vvat
uvZAHVSBrJg5WvzMo1VMEoPU3VeeZQ5HL+MI80iKiU6oUgRK11gVJcebtA0sEKt+
zhJC4JiUlHtZLTGIpMBmU8DJZ3Tyk1cBEm+Ky+SaPE+dsz16UHI0fazFQXJnXphE
MLHEGAyQgzWYp7kIcAjUFev0Geq/Uovy4JKIGI6ISop1wRPEQDxkthfkfRyQxQkE
zuse4EBcEH/Undw9KrmEQa0hCe+8BRkxklVbPesFPPdqH3PKNxtHYuWpSShQF0PW
XMjw43O2Rbsl8kBUHCpy4pYSugD1hpfgaw/mVUOU1u/M1O6toTw=
=AHrx
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180703' into staging
ppc patch queue 2018-07-03
Here's a last minute pull request before today's soft freeze. Ideally
I would have sent this earlier, but I was waiting for a couple of
extra fixes I knew were close. And the freeze crept up on me, like
always.
Most of the changes here are bugfixes in any case. There are some
cleanups as well, which have been in my staging tree for a little
while. There are a couple of truly new features (some extensions to
the sam460ex platform), but these are low risk, since they only affect
a new and not really stabilized machine type anyway.
Highlights are:
* Mac platform improvements from Mark Cave-Ayland
* Sam460ex improvements from BALATON Zoltan et al.
* XICS interrupt handler cleanups from Cédric Le Goater
* TCG improvements for atomic loads and stores from Richard
Henderson
* Assorted other bugfixes
# gpg: Signature made Tue 03 Jul 2018 06:55:22 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.0-20180703: (35 commits)
ppc: Include vga cirrus card into the compiling process
target/ppc: Relax reserved bitmask of indexed store instructions
target/ppc: set is_jmp on ppc_tr_breakpoint_check
spapr: compute default value of "hpt-max-page-size" later
target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
ppc440_uc: Basic emulation of PPC440 DMA controller
sam460ex: Add RTC device
hw/timer: Add basic M41T80 emulation
ppc4xx_i2c: Rewrite to model hardware more closely
hw/ppc: Give sam46ex its own config option
fpu_helper.c: fix setting FPSCR[FI] bit
target/ppc: Implement the rest of gen_st_atomic
target/ppc: Implement the rest of gen_ld_atomic
target/ppc: Use atomic min/max helpers
target/ppc: Use MO_ALIGN for EXIWX and ECOWX
target/ppc: Split out gen_st_atomic
target/ppc: Split out gen_ld_atomic
target/ppc: Split out gen_load_locked
target/ppc: Tidy gen_conditional_store
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# hw/ppc/spapr.c
All of the existing code was boilerplate from elsewhere,
and would crash the guest upon the first signal.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
---
v2:
Add a comment to the new definition of target_pt_regs.
Install the signal mask into the ucontext.
v3:
Incorporate feedback from Laurent.
Always use the gen_conditional_store implementation that uses
atomic_cmpxchg. Make sure and clear reserve_addr across most
interrupts crossing the cpu_loop.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add IPV6_MULTICAST_HOPS and IPV6_MULTICAST_LOOP, which need
32-bit value conversion.
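The conversion follows the usual pattern in do_setsockopt(): read a
32-bit value from guest memory and hand it to the host setsockopt()
(a sketch, not the literal hunk; variable names follow the surrounding
code):

    case IPV6_MULTICAST_HOPS:
    case IPV6_MULTICAST_LOOP:
    {
        uint32_t val;

        if (optlen < sizeof(uint32_t)) {
            return -TARGET_EINVAL;
        }
        if (get_user_u32(val, optval_addr)) {
            return -TARGET_EFAULT;
        }
        ret = get_errno(setsockopt(sockfd, level, optname,
                                   &val, sizeof(val)));
        break;
    }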
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180627212152.26525-3-laurent@vivier.eu>
It eases code review; the unit is explicit.
Patch generated using:
$ git grep -n '[<>][<>]= ?[1-5]0'
and modified manually.
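The pattern being replaced, with the self-describing constants from
"qemu/units.h" (an illustrative before/after, not a specific hunk from
the series):

    /* before: the reader has to decode the shift */
    size_mb = ram_size >> 20;
    /* after: the unit is explicit (KiB, MiB, GiB, ... from "qemu/units.h") */
    size_mb = ram_size / MiB;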
Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180625124238.25339-46-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We've already added the helpers with an SVE patch; all that remains
is to wire up the aa64 and aa32 translators. Enable the feature
within -cpu max for CONFIG_USER_ONLY.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enable ARM_FEATURE_SVE for the generic "max" cpu.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Place them in exec.c, exec-all.h and ram_addr.h. This removes
knowledge of translate-all.h (which is an internal header) from
several files outside accel/tcg and removes knowledge of
AddressSpace from translate-all.c (as it only operates on ram_addr_t).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use mmap_lock in user-mode to protect TCG state and the page descriptors.
In !user-mode, each vCPU has its own TCG state, so no locks needed.
Per-page locks are used to protect the page descriptors.
Per-TB locks are used in both modes to protect TB jumps.
Some notes:
- tb_lock is removed from notdirty_mem_write by passing a
locked page_collection to tb_invalidate_phys_page_fast.
- tcg_tb_lookup/remove/insert/etc have their own internal lock(s),
so there is no need to further serialize access to them.
- do_tb_flush is run in a safe async context, meaning no other
vCPU threads are running. Therefore acquiring mmap_lock there
is just to please tools such as thread sanitizer.
- Not visible in the diff, but tb_invalidate_phys_page already
has an assert_memory_lock.
- cpu_io_recompile is !user-only, so no mmap_lock there.
- Added mmap_unlock()'s before all siglongjmp's that could
be called in user-mode while mmap_lock is held.
+ Added an assert for !have_mmap_lock() after returning from
the longjmp in cpu_exec, just like we do in cpu_exec_step_atomic.
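For user-mode, the rule above boils down to bracketing the
code-generation paths with the existing mmap_lock, e.g. (a schematic
sketch, not a hunk from this series):

    mmap_lock();
    tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
    mmap_unlock();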
Performance numbers before/after:
Host: AMD Opteron(tm) Processor 6376
ubuntu 17.04 ppc64 bootup+shutdown time
[ASCII plot elided: bootup+shutdown time vs. number of guest CPUs
(1-64), comparing 'before' against 'tb lock removal'; see the png link
below.]
png: https://imgur.com/HwmBHXe
debian jessie aarch64 bootup+shutdown time
[ASCII plot elided: bootup+shutdown time vs. number of guest CPUs
(1-64), comparing 'before' against 'tb lock removal'; see the png link
below.]
png: https://imgur.com/iGpGFtv
The gains are high for 4-8 CPUs. Beyond that point, however, unrelated
lock contention significantly hurts scalability.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These were named incorrectly, going so far as to invade strace.list.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180607184844.30126-2-richard.henderson@linaro.org>
[lv: replace tabs by spaces]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Remove a "#if defined(XX) || defined(YY) || ..." guard that lists all
available targets.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180529194207.31503-16-laurent@vivier.eu>