Commit Graph

6912 Commits

Richard Henderson
a47dc220e9 target/arm: Implement SVE2 integer halving add/subtract (predicated)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
45d9503d0a target/arm: Implement SVE2 saturating/rounding bitwise shift left (predicated)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
8b3f15b0a3 target/arm: Split out saturating/rounding shifts from neon
Split these operations out into a header that can be shared
between neon and sve.  The "sat" pointer acts both as a boolean
controlling saturating behavior and as the selector for the
difference in behavior between neon and sve -- QC bit or no QC bit.

Widen the shift operand in the new helpers, as the SVE2 insns treat
the whole input element as significant.  For the neon uses, truncate
the shift to int8_t while passing the parameter.

Implement right-shift rounding as

    tmp = src >> (shift - 1);
    dst = (tmp >> 1) + (tmp & 1);

This is the same number of instructions as the current

    tmp = 1 << (shift - 1);
    dst = (src + tmp) >> shift;

without any possibility of intermediate overflow.
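
For reference, a minimal C sketch of the rounding scheme (illustrative only,
not the shared header itself; it assumes the compiler implements signed right
shift arithmetically, as the relevant host compilers do):

    #include <stdint.h>

    /* Rounding arithmetic shift right, 1 <= shift <= 63.  Splitting the
     * rounding bit out of the shifted value avoids computing
     * src + (1 << (shift - 1)), which could overflow near INT64_MAX. */
    static int64_t rounding_sar64(int64_t src, unsigned shift)
    {
        int64_t tmp = src >> (shift - 1);
        return (tmp >> 1) + (tmp & 1);
    }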

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
db366da809 target/arm: Implement SVE2 integer unary operations (predicated)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
d4b1e59d98 target/arm: Implement SVE2 integer pairwise add and accumulate long
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
5dad1ba52f target/arm: Implement SVE2 Integer Multiply - Unpredicated
For MUL, we can rely on generic support.  For SMULH and UMULH,
create some trivial helpers.  For PMUL, back in a21bb78e58,
we organized helper_gvec_pmul_b in preparation for this use.
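
As a point of reference, a trivial high-half multiply helper for 32-bit
elements might look like the sketch below (illustrative only; the actual
helpers handle each SVE element size, and the names are made up here):

    #include <stdint.h>

    /* Return the high 32 bits of the full 64-bit product. */
    static inline int32_t do_smulh_s(int32_t a, int32_t b)
    {
        return (int32_t)(((int64_t)a * b) >> 32);
    }

    static inline uint32_t do_umulh_s(uint32_t a, uint32_t b)
    {
        return (uint32_t)(((uint64_t)a * b) >> 32);
    }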

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
2dc10fa2f9 target/arm: Add ID_AA64ZFR0 fields and isar_feature_aa64_sve2
This will be used for SVE2 ISA subset enablement.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Rebecca Cran
7b9171cc83 target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type
Indicate support for FEAT_TLBIOS and FEAT_TLBIRANGE by setting
ID_AA64ISAR0.TLB to 2 for the max AARCH64 CPU type.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210512182337.18563-4-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Rebecca Cran
7113d61850 target/arm: Add support for FEAT_TLBIOS
ARMv8.4 adds the mandatory FEAT_TLBIOS. It provides TLBI
maintenance instructions that extend to the Outer Shareable domain.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210512182337.18563-3-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Rebecca Cran
84940ed825 target/arm: Add support for FEAT_TLBIRANGE
ARMv8.4 adds the mandatory FEAT_TLBIRANGE. It provides TLBI
maintenance instructions that apply to a range of input addresses.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210512182337.18563-2-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Peter Maydell
659f042ba8 target/arm: Use correct SP in M-profile exception return
When an M-profile CPU is restoring registers from the stack on
exception return, the stack pointer to use is determined based on
bits in the magic exception return type value.  We were not getting
this logic entirely correct.

Whether we use one of the Secure stack pointers or one of the
Non-Secure stack pointers depends on the EXCRET.S bit.  However,
whether we use the MSP or the PSP then depends on the SPSEL bit in
either the CONTROL_S or CONTROL_NS register.  We were incorrectly
selecting MSP vs PSP based on the EXCRET.SPSEL bit.

(In the pseudocode this is in the PopStack() function, which calls
LookUpSp_with_security_mode() which in turn looks at the relevant
CONTROL.SPSEL bit.)

The buggy behaviour wasn't noticeable in most cases, because we write
EXCRET.SPSEL to the CONTROL.SPSEL bit for the S/NS register selected
by EXCRET.ES, so we only do the wrong thing when EXCRET.S and
EXCRET.ES are different.  This will happen when secure code takes a
secure exception, which then tail-chains to a non-secure exception
which finally returns to the original secure code.
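
A schematic model of the corrected selection (illustrative C, not the QEMU
code; the type and function names are made up for the sketch):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t msp, psp;
        bool control_spsel;   /* SPSEL bit of this bank's CONTROL register */
    } SecurityBank;

    /* EXCRET.S picks the Secure or Non-Secure bank; SPSEL then comes from
     * that bank's CONTROL register, not from EXCRET. */
    static uint32_t return_stack_pointer(bool excret_s,
                                         const SecurityBank *s,
                                         const SecurityBank *ns)
    {
        const SecurityBank *bank = excret_s ? s : ns;
        return bank->control_spsel ? bank->psp : bank->msp;
    }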

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520130905.2049-1-peter.maydell@linaro.org
2021-05-25 16:01:43 +01:00
Max Filippov
583e6a5f55 target/xtensa: clean up unaligned access
Xtensa cores may or may not have hardware support for unaligned memory
access. Remove TARGET_ALIGNED_ONLY=y from all xtensa configurations and
pass MO_ALIGN in memory access flags for all operations that would raise
an exception.
Simplify the use of gen_load_store_alignment by passing the access size and
alignment requirements in a single parameter.
Drop the condition from xtensa_cpu_do_unaligned_access and replace it with an
assertion.
Add a test.
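
As an aside, folding size and alignment into one value is what QEMU's MemOp
flags already allow; a hedged fragment (not the actual xtensa helper, and the
variable names are illustrative):

    /* MO_TEUL selects a 32-bit target-endian access; OR-ing in MO_ALIGN
     * requests an alignment check, so one argument carries both. */
    MemOp mop = MO_TEUL | MO_ALIGN;
    tcg_gen_qemu_ld_i32(dest, addr, mem_index, mop);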

Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2021-05-20 13:02:58 -07:00
Max Filippov
735aa900e4 target/xtensa: fix access ring in l32ex
l32ex does its memory access at CRING level, like all regular load/store
operations.  Fix an apparent copy-paste error from l32e that caused it to
use RING instead.

This is a correctness issue, not a security issue, because in the worst
case the privilege level of memory access may be lowered, resulting in
an exception when the correct implementation would've succeeded.
In no case would it allow a memory access that would've raised an
exception in the correct implementation.

Cc: qemu-stable@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2021-05-20 13:02:58 -07:00
Max Filippov
cb2d627a00 target/xtensa: don't generate extra EXCP_DEBUG on exception
target/xtensa used to generate an extra EXCP_DEBUG exception before the
first instruction executed after an interrupt or an exception is taken
to allow single-stepping that instruction in the debugger.
This is no longer needed after the following commits:
a7ba744f40 ("tcg/cpu-exec: precise single-stepping after an exception")
ba3c35d9c4 ("tcg/cpu-exec: precise single-stepping after an interrupt")
Drop the exception state tracking and the extra EXCP_DEBUG generation code.

Cc: qemu-stable@nongnu.org # v5.1, v5.2, v6.0
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2021-05-20 13:02:58 -07:00
Peter Maydell
972e848b53 s390x fixes and cleanups; also related fixes in xtensa,
arm, and x86 code
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEw9DWbcNiT/aowBjO3s9rk8bwL68FAmCmVLMSHGNvaHVja0By
 ZWRoYXQuY29tAAoJEN7Pa5PG8C+vihcP/2yiwThQBll+ZDKYimRu91hMkmty+24c
 F3YNv+6HnKTmnFPoo35O1iH4phd5LVZJTVicOl+XAw75DzFMpwMh8ukfq4hIYvPY
 9QSYdDBj/JX0CHTo0u2Wl92dr87vsVGwMwgqojnNZXUOMYyQGpDT/RgHqTfoCzNH
 Dl6/MqgmTNBSCZGS6GOfkmUC6bT9ZTaiSHpXPJCfvgpANDG6l2Mblz8ihcOjygoP
 e8KVXKERoUGViT+MXTAJLUlMu6valDFY6pZUh6u3EOzqqLSRXrAJACLz+zv77X7P
 Ryn03md1KWj0PRh8eEC/VfadeRbIXHrhw5T8oK8HwHW4VErL5fcAwt1EybRNWe6U
 UEj446qT37hwA9TthqZtZiR+aZHO70JRmf0svnxXaM6WepRVxzwHexDnKNi6gJvd
 cdH+yIcIzu5fEnoHNC0famYdJT4f+hmPj2r+FtbMWZXLRxMT26p4mlE0joY7EjOg
 saGBlGSdHTcSGk2X7RV/iX38s/BYpOuYM6dsi6EKn3Z1/vQbvrJ9ZZWaDDhmykJE
 1n4nOgwj7kOolNw3VlJOEBhJvozh1mf9Sr0SsXEAQQYWLwPFgX4nNnOwkk5jBTY5
 fH5Oy/aUk5tf8mmST8Sw/oSM377YC+ez3o8mtKkXtu3H0W4HTm1mnSIHbWG7xhw2
 WjmfHyRrEWT1
 =secp
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/cohuck-gitlab/tags/s390x-20210520-v2' into staging

s390x fixes and cleanups; also related fixes in xtensa,
arm, and x86 code

# gpg: Signature made Thu 20 May 2021 13:23:15 BST
# gpg:                using RSA key C3D0D66DC3624FF6A8C018CEDECF6B93C6F02FAF
# gpg:                issuer "cohuck@redhat.com"
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>" [unknown]
# gpg:                 aka "Cornelia Huck <huckc@linux.vnet.ibm.com>" [full]
# gpg:                 aka "Cornelia Huck <cornelia.huck@de.ibm.com>" [full]
# gpg:                 aka "Cornelia Huck <cohuck@kernel.org>" [unknown]
# gpg:                 aka "Cornelia Huck <cohuck@redhat.com>" [unknown]
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0  18CE DECF 6B93 C6F0 2FAF

* remotes/cohuck-gitlab/tags/s390x-20210520-v2:
  tests/tcg/x86_64: add vsyscall smoke test
  target/i386: Make sure that vsyscall's tb->size != 0
  vfio-ccw: Attempt to clean up all IRQs on error
  hw/s390x/ccw: Register qbus type in abstract TYPE_CCW_DEVICE parent
  vfio-ccw: Permit missing IRQs
  accel/tcg: Assert that tb->size != 0 after translation
  target/xtensa: Make sure that tb->size != 0
  target/arm: Make sure that commpage's tb->size != 0
  target/s390x: Fix translation exception on illegal instruction

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-20 18:42:00 +01:00
Ilya Leoshkevich
9b21049edd target/i386: Make sure that vsyscall's tb->size != 0
tb_gen_code() assumes that tb->size must never be zero, otherwise it
may produce spurious exceptions. For x86_64 this may happen when
creating a translation block for the vsyscall page.

Fix by pretending that vsyscall translation blocks have at least one
instruction.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210519045738.1335210-2-iii@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-05-20 14:19:30 +02:00
Ilya Leoshkevich
f689befde6 target/xtensa: Make sure that tb->size != 0
tb_gen_code() assumes that tb->size must never be zero, otherwise it
may produce spurious exceptions. For xtensa this may happen when
decoding an unknown instruction, when handling a write into the
CCOUNT or CCOMPARE special register, and when single-stepping the first
instruction of an exception handler.

Fix by pretending that the size of the respective translation block is
1 in all these cases.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20210416154939.32404-4-iii@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-05-20 14:19:30 +02:00
Ilya Leoshkevich
48a130923c target/arm: Make sure that commpage's tb->size != 0
tb_gen_code() assumes that tb->size must never be zero, otherwise it
may produce spurious exceptions. For ARM this may happen when creating
a translation block for the commpage.

Fix by pretending that commpage translation blocks have at least one
instruction.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210416154939.32404-3-iii@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-05-20 14:19:30 +02:00
Ilya Leoshkevich
86131c71b1 target/s390x: Fix translation exception on illegal instruction
Hitting an uretprobe in a s390x TCG guest causes a SIGSEGV. What
happens is:

* uretprobe maps a userspace page containing an invalid instruction.
* uretprobe replaces the target function's return address with the
  address of that page.
* When tb_gen_code() is called on that page, tb->size ends up being 0
  (because the page starts with the invalid instruction), which causes
  virt_page2 to point to the previous page.
* The previous page is not mapped, so this causes a spurious
  translation exception.

tb->size must never be 0: even if there is an illegal instruction, the
instruction bytes that have been looked at must count towards tb->size.
So adjust s390x's translate_one() to act this way for both illegal
instructions and instructions that are known to generate exceptions.
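
A toy model of that rule (illustrative C only, not the QEMU decoder; the
names are made up for the sketch):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { size_t size; } ToyTB;   /* stands in for tb->size */

    /* s390x encodes the instruction length in the two top bits of the first
     * opcode byte (00 -> 2, 01/10 -> 4, 11 -> 6 bytes).  Counting that length
     * even for an invalid instruction keeps the reported size non-zero. */
    static size_t toy_decode_one(ToyTB *tb, const uint8_t *bytes)
    {
        static const size_t ilen[4] = { 2, 4, 4, 6 };
        size_t len = ilen[bytes[0] >> 6];

        tb->size += len;    /* bytes looked at count, valid or not */
        return len;
    }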

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210416154939.32404-2-iii@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2021-05-20 14:19:30 +02:00
Peter Maydell
be05216b01 Eliminate user-only helper stubs for privileged insns.
-----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmClV9sdHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/ZUwf+LbFwBEaTnH4DHstc
 CygLp5zmQr565+HIvQkeAjpjj1wYDvPg1yzvUHk2s0PObsfDRjyYb2G80kyWRSZ3
 w+05Tmt9jXICfgP+xkgPvmigYxxUBcbiSjle4vSjlvqlp8bgonG1BOvf7EpII6R9
 omT55KvOVkfLQz+fAszNsGurFLkIE8ToYfnpo/1j6RaDGwWUyx9ylwPM37YPfcl9
 OwZFoiWjfEc5SG4cRhd8PdxmZrGvVODeadUP+xbn/j6CJw+ReMeTj2lzyUOHwjoC
 uQItSAZPjD6BiFgYcn204yLVXuhp219CzVHVOGOEbehaGC5A7rZjS2L8zAp7u0io
 CK0pOA==
 =UP1f
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-x86-20210519' into staging

Eliminate user-only helper stubs for privileged insns.

# gpg: Signature made Wed 19 May 2021 19:24:27 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-x86-20210519: (50 commits)
  target/i386: Remove user-only i/o stubs
  target/i386: Move helper_check_io to sysemu
  target/i386: Create helper_check_io
  target/i386: Pass in port to gen_check_io
  target/i386: Tidy gen_check_io
  target/i386: Exit tb after wrmsr
  target/i386: Eliminate user stubs for read/write_crN, rd/wrmsr
  target/i386: Inline user cpu_svm_check_intercept_param
  target/i386: Unify invlpg, invlpga
  target/i386: Move invlpg, hlt, monitor, mwait to sysemu
  target/i386: Pass env to do_pause and do_hlt
  target/i386: Cleanup read_crN, write_crN, lmsw
  target/i386: Remove user stub for cpu_vmexit
  target/i386: Remove pc_start argument to gen_svm_check_intercept
  target/i386: Tidy svm_check_intercept from tcg
  target/i386: Simplify gen_debug usage
  target/i386: Mark some helpers as noreturn
  target/i386: Eliminate SVM helpers for user-only
  target/i386: Implement skinit in translate.c
  target/i386: Assert !GUEST for user-only
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-20 13:04:12 +01:00
Richard Henderson
7fb7c42394 target/i386: Remove user-only i/o stubs
With the previous patch for check_io, we now have enough for
the compiler to dead-code eliminate all of the i/o helpers.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-51-richard.henderson@linaro.org>
2021-05-19 12:17:23 -05:00
Richard Henderson
d76b9c6f07 target/i386: Move helper_check_io to sysemu
We never allow i/o from user-only, and the tss check
that helper_check_io does will always fail.  Use an ifdef
within gen_check_io and return false, indicating that an
exception is known to be raised.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-50-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
e497803556 target/i386: Create helper_check_io
Drop helper_check_io[bwl] and expose their common
subroutine to tcg directly.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210514151342.384376-49-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
1bca40fe42 target/i386: Pass in port to gen_check_io
Pass in a pre-truncated TCGv_i32 value.  We were doing the
truncation of EDX in multiple places, now only once per insn.
While all callers use s->tmp2_i32, for cleanliness of the
subroutine, use a parameter anyway.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-48-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
bc2e436d7c target/i386: Tidy gen_check_io
Get cur_eip from DisasContext.  Do not require the caller
to use svm_is_rep; get prefix from DisasContext.  Use the
proper symbolic constants for SVM_IOIO_*.

While we're touching all call sites, return bool in
preparation for gen_check_io raising #GP.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-47-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
244843b757 target/i386: Exit tb after wrmsr
At a minimum, wrmsr can change EFER, which affects HF_LMA.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-46-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
f7803b7759 target/i386: Eliminate user stubs for read/write_crN, rd/wrmsr
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-45-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
27bd3216a7 target/i386: Inline user cpu_svm_check_intercept_param
The user-only version is a no-op.  This lets us completely
remove tcg/user/svm_stubs.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-44-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
35e5a5d5cb target/i386: Unify invlpg, invlpga
Use a single helper, flush_page, to do the work.
Use gen_svm_check_intercept.
Perform the zero-extension for invlpga inline.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-43-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
4ea2449b58 target/i386: Move invlpg, hlt, monitor, mwait to sysemu
These instructions are all privileged.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-42-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
eb26784fe1 target/i386: Pass env to do_pause and do_hlt
Having the callers upcast to X86CPU is a waste, since we
don't need it.  We even have to recover env in do_hlt.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-41-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
7eff2e7c65 target/i386: Cleanup read_crN, write_crN, lmsw
Pull the svm intercept check into the translator.
Pull the entire implementation of lmsw into the translator.
Push the check for CR8LEG into the regno validation switch.
Unify the gen_io_start check between read/write.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-40-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
3d4fce8b8e target/i386: Remove user stub for cpu_vmexit
This function is only called from tcg/sysemu/.
There is no need for a stub in tcg/user/.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-39-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
b53605dbd2 target/i386: Remove pc_start argument to gen_svm_check_intercept
When exiting helper_svm_check_intercept via exception, cpu_vmexit
calls cpu_restore_state, which will recover eip and cc_op via unwind.
Therefore we do not need to store eip or cc_op before the call.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-38-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
d051ea04d0 target/i386: Tidy svm_check_intercept from tcg
The param argument to helper_svm_check_intercept_param is always 0;
eliminate it and rename to helper_svm_check_intercept.  Fold
gen_svm_check_intercept_param into gen_svm_check_intercept.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-37-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
ed3c4739e9 target/i386: Simplify gen_debug usage
Both invocations pass the start of the current instruction,
which is available as s->base.pc_next.  The function sets
is_jmp, so we can eliminate a second setting.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-36-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
b82055aece target/i386: Mark some helpers as noreturn
Any helper that always raises an exception or interrupt,
or simply exits to the main loop, can be so marked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-35-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
8d6806c7dd target/i386: Eliminate SVM helpers for user-only
Use STUB_HELPER to ensure that such calls are always eliminated.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-34-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
e6aeb948bb target/i386: Implement skinit in translate.c
Our sysemu implementation is a stub.  We can already intercept
instructions for vmexit, and raising #UD is trivial.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-33-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
b322b3afc1 target/i386: Assert !GUEST for user-only
For user-only, we do not need to check for VMM intercept.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-32-richard.henderson@linaro.org>
2021-05-19 12:17:11 -05:00
Richard Henderson
5d2238896a target/i386: Assert !SVME for user-only
Most of the VMM instructions are already disabled for user-only,
by being usable only from ring 0.

The spec is intentionally loose for VMMCALL, allowing the VMM to
define syscalls for user-only.  However, we're not emulating any
VMM, so VMMCALL can just raise #UD unconditionally.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-31-richard.henderson@linaro.org>
2021-05-19 12:16:48 -05:00
Richard Henderson
9f55e5a947 target/i386: Add stub generator for helper_set_dr
This removes an ifdef from the middle of disas_insn,
and ensures that the branch is not reachable.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-30-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
a6f62100a8 target/i386: Reorder DisasContext members
Sort all of the single-byte members to the same area
of the structure, eliminating 8 bytes of padding.
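
For illustration (not the real DisasContext layout), grouping byte-sized
members removes the padding that an interleaved layout forces on an LP64
host:

    #include <stdint.h>

    /* 1 + 7 pad + 8 + 1 + 7 pad + 8 + 1 + 7 pad = 40 bytes */
    struct scattered { uint8_t a; uint64_t x; uint8_t b; uint64_t y; uint8_t c; };

    /* 8 + 8 + 1 + 1 + 1 + 5 pad = 24 bytes */
    struct grouped   { uint64_t x; uint64_t y; uint8_t a; uint8_t b; uint8_t c; };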

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-29-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
3236c2ade2 target/i386: Fix the comment for repz_opt
Fix a typo in the comment and adjust it for CODING_STYLE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-28-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
305d08e512 target/i386: Reduce DisasContext jmp_opt, repz_opt to bool
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-27-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
c1de1a1ace target/i386: Leave TF in DisasContext.flags
It's just as easy to clear the flag with AND as with an assignment.
In two cases the test for the bit can be folded together with
the test for HF_INHIBIT_IRQ_MASK.
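
A small sketch of the two points (hypothetical flag values, not the actual
i386 definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define HF_TF_MASK          (1u << 8)   /* illustrative values only */
    #define HF_INHIBIT_IRQ_MASK (1u << 3)

    /* Clearing with AND is as cheap as zeroing a separate dc->tf field. */
    static uint32_t clear_tf(uint32_t flags)
    {
        return flags & ~HF_TF_MASK;
    }

    /* One combined test replaces two separate ones. */
    static bool needs_special_exit(uint32_t flags)
    {
        return (flags & (HF_TF_MASK | HF_INHIBIT_IRQ_MASK)) != 0;
    }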

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-26-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
5862579473 target/i386: Reduce DisasContext popl_esp_hack and rip_offset to uint8_t
Both of these fields store the size of a single memory access,
so the range of values is 0-8.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-25-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
a77ca425d7 target/i386: Reduce DisasContext.vex_[lv] to uint8_t
Currently, vex_l is either {0,1}; if in the future we implement
AVX-512, the max value will be 2.  In vex_v we store a register
number.  This is 0-15 for SSE, and 0-31 for AVX-512.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-24-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
a8b9b657a0 target/i386: Reduce DisasContext.prefix to uint8_t
The highest bit in this set is 0x40 (PREFIX_REX).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-23-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00
Richard Henderson
c651f3a3cb target/i386: Reduce DisasContext.override to int8_t
The range of values is -1 (none) to 5 (R_GS).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-22-richard.henderson@linaro.org>
2021-05-19 12:15:47 -05:00