Commit Graph

3515 Commits

Author SHA1 Message Date
Alistair Francis aad5ac2311
riscv: pmp: Log pmp access errors as guest errors
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-03-19 05:14:38 -07:00
Jim Wilson 5371f5cd71
RISC-V: Add hooks to use the gdb xml files.
The gdb CSR xml file has registers in documentation order, not numerical
order, so we need a table to map the register numbers.  This also adds
fairly standard gdb hooks to access xml specified registers.

Note:
    The FPU xml from gdb 8.3 has an unused register number (65), which makes
    the first CSR register number become 69. We register an extra register
    with gdb to keep the CSR offset calculation correct.
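
As a rough sketch of the mapping idea (identifiers here are illustrative, not
necessarily the ones used in the patch), the gdbstub keeps a table indexed in
the xml's documentation order and resolves each index to a CSR number before
reading it:

    /* illustrative sketch; table contents and helper names are assumed */
    static const int gdb_csr_map[] = {
        0x300, /* mstatus: position follows the xml order, not the CSR number */
        0x305, /* mtvec */
        0x341, /* mepc */
        /* ... one entry per register listed in the CSR xml ... */
    };

    static int riscv_gdb_get_csr_sketch(CPURISCVState *env, uint8_t *buf, int n)
    {
        target_ulong val = 0;

        riscv_csrrw(env, gdb_csr_map[n], &val, 0, 0);  /* read-only access */
        return gdb_get_regl(buf, val);
    }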

Signed-off-by: Jim Wilson <jimw@sifive.com>
Signed-off-by: Chih-Min Chao <chihmin.chao@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-03-19 05:13:24 -07:00
Jim Wilson 753e3fe207
RISC-V: Add debug support for accessing CSRs.
Add a debugger field to CPURISCVState.  Add a riscv_csrrw_debug function
to set it.  Disable mode checks when the debugger field is true.
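
A minimal sketch of such a wrapper, assuming riscv_csrrw() as the underlying
CSR accessor:

    int riscv_csrrw_debug(CPURISCVState *env, int csrno, target_ulong *ret_value,
                          target_ulong new_value, target_ulong write_mask)
    {
        int ret;

        env->debugger = true;   /* bypass the privilege-mode checks */
        ret = riscv_csrrw(env, csrno, ret_value, new_value, write_mask);
        env->debugger = false;
        return ret;
    }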

Signed-off-by: Jim Wilson <jimw@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190212230903.9215-1-jimw@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-03-19 05:13:24 -07:00
Jim Wilson 8e73df6aa3
RISC-V: Fixes to CSR_* register macros.
This adds some missing CSR_* register macros, and documents some as being
priv v1.9.1 specific.

Signed-off-by: Jim Wilson <jimw@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190212230830.9160-1-jimw@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-03-19 05:13:24 -07:00
Singh, Brijesh cedc0ad539 target/i386: sev: Do not pin the ram device memory region
The RAM device presents a memory region that should be handled
as an IO region and should not be pinned.

In the case of vfio-pci, the RAM device represents an MMIO BAR
and the memory region is not backed by pages, hence
KVM_MEMORY_ENCRYPT_REG_REGION fails to lock the memory range.
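
The shape of the fix, sketched with an assumed listener callback name
(memory_region_is_ram_device() is QEMU's existing predicate for such regions):

    static void sev_region_add_sketch(MemoryListener *listener,
                                      MemoryRegionSection *section)
    {
        if (memory_region_is_ram_device(section->mr)) {
            /* MMIO-backed RAM device (e.g. a vfio-pci BAR): skip pinning */
            return;
        }
        /* otherwise register the range via KVM_MEMORY_ENCRYPT_REG_REGION */
    }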

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1667249
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20190204222322.26766-3-brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-18 09:39:57 +01:00
Bastian Koppelmann f330433b36
target/riscv: Fix manually parsed 16 bit insn
During the refactor to decodetree we removed the manual decoding that is
necessary for c.jal/c.addiw and removed the translation of c.flw/c.ld
and c.fsw/c.sd. This reintroduces the manual parsing and the
omitted implementation.

Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Tested-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Tested-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-03-17 22:21:32 -07:00
Richard Henderson c5d0aec25f target/hppa: Avoid squishing DISAS_IAQ_N_STALE_EXIT
Within a delay slot, we were squishing both DISAS_IAQ_N_STALE and
DISAS_IAQ_N_STALE_EXIT to DISAS_IAQ_N_UPDATED.  This lost the
required exit to the main loop, and could result in interrupts
never being delivered.

Tested-by: Sven Schnelle <svens@stackframe.org>
Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-15 10:09:55 -07:00
Amir Charif 5de56742a3 target/arm: Check access permission to ADDVL/ADDPL/RDVL
These instructions do not trap when SVE is disabled in EL0,
causing them to be executed with the wrong size information.
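
The fix follows the usual SVE translation pattern, sketched below with a
simplified argument type; sve_access_check() raises the trap when SVE is
disabled for the current exception level:

    static bool trans_ADDVL_sketch(DisasContext *s, arg_rri *a)
    {
        if (sve_access_check(s)) {
            /* SVE really is enabled: safe to use the true vector length */
            TCGv_i64 rd = cpu_reg_sp(s, a->rd);
            TCGv_i64 rn = cpu_reg_sp(s, a->rn);
            tcg_gen_addi_i64(rd, rn, a->imm * vec_full_reg_size(s));
        }
        return true;
    }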

Signed-off-by: Amir Charif <amir.charif@cea.fr>
Message-id: 1552579248-31025-1-git-send-email-amir.charif@cea.fr
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: added 'target/arm' prefix to subject]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-15 11:12:29 +00:00
Dongjiu Geng daf1dc5f82 target/arm: change arch timer registers access permission
Some generic arch timer registers are Config-RW in EL0, which means
the EL0 exception level can have write permission if it is
appropriately configured.

When a VM accesses these registers, QEMU first checks whether they have
RW permission, then checks whether the access is appropriately
configured. If they are defined as read-only in EL0, they do not get
write permission even when appropriately configured. So we need to add
the write permission according to the ARMv8 spec when defining them.
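
A hedged illustration of what such a register definition looks like: the
static .access mask grants the EL0 write, while the runtime configuration
check stays in the accessfn (field values here are illustrative, not the
exact patch):

    static const ARMCPRegInfo gt_ctl_reginfo_sketch = {
        .name = "CNTP_CTL_EL0",
        .access = PL0_RW,             /* EL0 write permission granted here ... */
        .accessfn = gt_ptimer_access, /* ... but gated on CNTKCTL/CNTHCTL config */
    };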

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Message-id: 1552395177-12608-1-git-send-email-gengdongjiu@huawei.com
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-15 11:12:29 +00:00
Peter Maydell 1fa87eb56e target/riscv: Convert to decodetree

Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.0-sf4' into staging

target/riscv: Convert to decodetree

Bastian: this patchset converts the RISC-V decoder to decodetree in three major steps:

1) Convert 32-bit instructions to decodetree [Patch 1-15]:
    Many of the gen_* functions are called by the decode functions for both
    16-bit and 32-bit instructions. If we move translation code from the gen_*
    functions to the generated trans_* functions of decodetree, we get a lot of
    duplication. Therefore, we mostly generate calls to the old gen_* functions,
    which are properly replaced after step 2).

    The trans_* functions are grouped into files corresponding to their
    ISA extension, e.g. addi, which is in RV32I, is translated in the file
    'trans_rvi.inc.c'.

2) Convert 16-bit instructions to decodetree [Patch 16-18]:
    All 16 bit instructions have a direct mapping to a 32 bit instruction. Thus,
    we convert the arguments in the 16 bit trans_ function to the arguments of
    the corresponding 32 bit instruction and call the 32 bit trans_ function.

3) Remove old manual decoding in gen_* functions [Patch 19-29]:
    This moves all manual translation code into the trans_* functions of
    decodetree, such that we can remove the old decode_* functions.

Palmer: This, with some additional cleanup patches, passed Alistair's
testing on rv32 and rv64 as well as my testing on rv64, so I think it's
good to go.  I've run my standard test against this exact tag.

I still don't have a Mac to try this on, sorry!

* remotes/palmer/tags/riscv-for-master-4.0-sf4: (29 commits)
  target/riscv: Remove decode_RV32_64G()
  target/riscv: Remove gen_system()
  target/riscv: Rename trans_arith to gen_arith
  target/riscv: Remove manual decoding of RV32/64M insn
  target/riscv: Remove shift and slt insn manual decoding
  target/riscv: make ADD/SUB/OR/XOR/AND insn use arg lists
  target/riscv: Move gen_arith_imm() decoding into trans_* functions
  target/riscv: Remove manual decoding from gen_store()
  target/riscv: Remove manual decoding from gen_load()
  target/riscv: Remove manual decoding from gen_branch()
  target/riscv: Remove gen_jalr()
  target/riscv: Convert quadrant 2 of RVXC insns to decodetree
  target/riscv: Convert quadrant 1 of RVXC insns to decodetree
  target/riscv: Convert quadrant 0 of RVXC insns to decodetree
  target/riscv: Convert RV priv insns to decodetree
  target/riscv: Convert RV64D insns to decodetree
  target/riscv: Convert RV32D insns to decodetree
  target/riscv: Convert RV64F insns to decodetree
  target/riscv: Convert RV32F insns to decodetree
  target/riscv: Convert RV64A insns to decodetree
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-13 22:20:27 +00:00
Bastian Koppelmann 25e6ca30c6 target/riscv: Remove decode_RV32_64G()
decodetree handles all instructions now, so the fallback is no longer
necessary.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 8f7bc27386 target/riscv: Remove gen_system()
With all 16-bit insns moved to decodetree, no path falls back to
gen_system(), so we can remove it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 8dc9e8a8b0 target/riscv: Rename trans_arith to gen_arith
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 1288701682 target/riscv: Remove manual decoding of RV32/64M insn
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 34446e8458 target/riscv: Remove shift and slt insn manual decoding
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann f2ab172867 target/riscv: make ADD/SUB/OR/XOR/AND insn use arg lists
Manual decoding in gen_arith() is not necessary with decodetree. For now
the new function is called trans_arith, as the original gen_arith still
exists. The former will be renamed to gen_arith as soon as the old
gen_arith can be removed.
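
For illustration, the pattern described above looks roughly like this
(argument-struct and helper signatures are simplified):

    /* decodetree fills in arg_r from the insn encoding; no manual extraction */
    static bool trans_add(DisasContext *ctx, arg_r *a)
    {
        return trans_arith(ctx, a, &tcg_gen_add_tl);
    }

    static bool trans_sub(DisasContext *ctx, arg_r *a)
    {
        return trans_arith(ctx, a, &tcg_gen_sub_tl);
    }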

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 7a50d3e2ae target/riscv: Move gen_arith_imm() decoding into trans_* functions
gen_arith_imm() does a lot of decoding manually, which was hard to read
in case of the shift instructions and is not necessary anymore with
decodetree.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann bce8a342a1 target/riscv: Remove manual decoding from gen_store()
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_store() did.
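
Sketch of the new shape: each trans_* function passes the memory operation
directly, so gen_store() no longer derives it from funct3 (signatures
simplified):

    static bool trans_sw(DisasContext *ctx, arg_sw *a)
    {
        return gen_store(ctx, a, MO_TEUL);   /* 32-bit store, target-endian */
    }

    static bool trans_sd(DisasContext *ctx, arg_sd *a)
    {
        return gen_store(ctx, a, MO_TEQ);    /* 64-bit store (RV64 only) */
    }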

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 98898b20e9 target/riscv: Remove manual decoding from gen_load()
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_load() did.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 090cc2c898 target/riscv: Remove manual decoding from gen_branch()
We now utilize the argument sets of decodetree, such that no manual
decoding is necessary.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 9e92c57d83 target/riscv: Remove gen_jalr()
trans_jalr() is the only caller, so move the code into trans_jalr().

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:50 +01:00
Bastian Koppelmann 97b0be81f6 target/riscv: Convert quadrant 2 of RVXC insns to decodetree
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:40:46 +01:00
Bastian Koppelmann 07b001c6fc target/riscv: Convert quadrant 1 of RVXC insns to decodetree
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:38:16 +01:00
Bastian Koppelmann e98d9140f2 target/riscv: Convert quadrant 0 of RVXC insns to decodetree
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 4ba79c47a2 target/riscv: Convert RV priv insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 31fe4d35f2 target/riscv: Convert RV64D insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 97f8b49372 target/riscv: Convert RV32D insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 95561ee3b4 target/riscv: Convert RV64F insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 6f0e74ff4b target/riscv: Convert RV32F insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 40b9faecfe target/riscv: Convert RV64A insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 3b77c289ae target/riscv: Convert RV32A insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann d2e2c1e406 target/riscv: Convert RVXM insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 771fbe156a target/riscv: Convert RVXI csr insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 0c865e856a target/riscv: Convert RVXI fence insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann b73a987b09 target/riscv: Convert RVXI arithmetic insns to decodetree
We cannot remove the call to gen_arith() in decode_RV32_64G() since it
is still used to translate multiply instructions.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 7e45a682ed target/riscv: Convert RV64I load/store insns to decodetree
This splits the 64-bit-only instructions into their own decode file, such
that we generate the decoder for these instructions only for the RISC-V
64-bit target.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann c1000d4e1b target/riscv: Convert RV32I load/store insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 3cca75a6fe target/riscv: Convert RVXI branch insns to decodetree
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Bastian Koppelmann 2a53cff418 target/riscv: Activate decodetree and implement LUI & AUIPC
For now only LUI & AUIPC are decoded and translated. If decodetree fails, we
fall back to the old decoder.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
2019-03-13 10:34:06 +01:00
Sven Schnelle 32dc75698c target/hppa: exit TB if either Data or Instruction TLB changes
The current code assumes that we don't need to exit the TB
if a Data Cache Flush or Insert has happened. However, as we
have a shared Data/Instruction TLB, a Data cache flush also
flushes Instruction TLB entries, and a Data cache TLB insert
might also evict an Instruction TLB entry.

So exit the TB in all cases if Instruction translation is enabled.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-11-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle d5de20bd84 target/hppa: add TLB protection id check
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-10-svens@stackframe.org>
[rth: Add required tlb flushing when prot id registers change.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle 38188fd216 target/hppa: allow multiple itlbp without itlba
The ODE software calls itlbp on existing TLB entries without
calling itlba first, so this seems to be valid.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-9-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle 6e5f530025 target/hppa: fix b,gate instruction
b,gate does GR[t] ← cat(GR[t]{0..29},IAOQ_Front{30..31});
instead of saving the link address to register t.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-8-svens@stackframe.org>
[rth: Move link check outside of ifndef CONFIG_USER_ONLY;
 use ctx->privilege; nullify the insn earlier.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle 15da177bb4 target/hppa: ignore DIAG opcode
DIAG is usually only used by diagnostics software as it's CPU
specific. In most of the cases it's better to ignore it and log
a message that it's not implemented.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-7-svens@stackframe.org>
[rth: Free the nullify condition.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle b06c0618c0 target/hppa: remove PSW I/R/Q bit check
HP ODE uses rfi to set the Q bit, and I don't see anything in the
documentation saying that this is forbidden. So remove the check.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-6-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle 23c3d569f4 target/hppa: add TLB trace events
To ease TLB debugging add a few trace events, which are disabled
by default so that there's no performance impact.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-5-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle acd6ba74d6 target/hppa: report ITLB_EXCP_MISS for ITLB misses
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-4-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle 0b49c33988 target/hppa: fix TLB handling for page 0
Assume the following sequence:

pitlbe r0(sr0,r0)
iitlba r4,(sr0,r0)
ldil L%3000000,r5
iitlbp r5,(sr0,r0)

This will purge the whole TLB and add an entry for page 0. However,
the current TLB implementation in helper_iitlba() will store to
the last empty TLB entry, while helper_iitlbp() will write to the
first empty entry. That is because an empty entry will match address
0 in helper_iitlba().

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-3-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Sven Schnelle 43675d2015 target/hppa: fix overwriting source reg in addb
When one of the source registers is the same as the destination register,
the source register gets overwritten with the destination value before
do_add_sv() is called, which leads to unexpected condition matches.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-2-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
Richard Henderson f3b423ec6e target/hppa: Check for page crossings in use_goto_tb
We got away with eliding this check when target/hppa was user-only,
but missed adding this check when adding system support.

Fixes an early crash in the HP-UX 11 installer.

Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-12 09:13:43 -07:00
David Gibson ce2918cbc3 spapr: Use CamelCase properly
The qemu coding standard is to use CamelCase for type and structure names,
and the pseries code follows that... sort of.  There are quite a lot of
places where we bend the rules in order to preserve the capitalization of
internal acronyms like "PHB", "TCE", "DIMM" and most commonly "sPAPR".

That was a bad idea - it frequently leads to names ending up with hard to
read clusters of capital letters, and means they don't catch the eye as
type identifiers, which is kind of the point of the CamelCase convention in
the first place.

In short, keeping type identifiers looking like CamelCase is more important
than preserving standard capitalization of internal "words".  So, this
patch renames a heap of spapr internal type names to a more standard
CamelCase.

In addition to case changes, we also make some other identifier renames:
  VIOsPAPR* -> SpaprVio*
    The reverse word ordering was only ever used to mitigate the capital
    cluster, so revert to the natural ordering.
  VIOsPAPRVTYDevice -> SpaprVioVty
  VIOsPAPRVLANDevice -> SpaprVioVlan
    Brevity, since the "Device" didn't add useful information
  sPAPRDRConnector -> SpaprDrc
  sPAPRDRConnectorClass -> SpaprDrcClass
    Brevity, and makes it clearer this is the same thing as a "DRC"
    mentioned in many other places in the code

This is 100% a mechanical search-and-replace patch.  It will, however,
conflict with essentially any and all outstanding patches touching the
spapr code.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:05 +11:00
Philippe Mathieu-Daudé dd977e4f45 target/ppc: Optimize x[sv]xsigdp using deposit_i64()
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20190309214255.9952-3-f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:05 +11:00
Philippe Mathieu-Daudé cde0a41c12 target/ppc: Optimize xviexpdp() using deposit_i64()
The t0 tcg_temp register is now unused, remove it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20190309214255.9952-2-f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:05 +11:00
Cédric Le Goater da874d90ad target/ppc: add HV support for POWER9
We now have enough support to boot a PowerNV machine with a POWER9
processor. Allow HV mode on POWER9.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190307223548.20516-16-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:05 +11:00
Mark Cave-Ayland d59d1182b1 target/ppc: introduce vsr64_offset() to simplify get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}()
Now that all VSX registers are stored in host endian order, there is no need
to go via different accessors depending upon the register number. Instead we
introduce vsr64_offset() and use it directly from within get_cpu_vsr{l,h}() and
set_cpu_vsr{l,h}().

This also allows us to rewrite avr64_offset() and fpr_offset() in terms of the
new vsr64_offset() function to more clearly express the relationship between the
VSX, FPR and VMX registers, and also remove vsrl_offset() which is no longer
required.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-8-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Mark Cave-Ayland 8a14d31b00 target/ppc: switch fpr/vsrl registers so all VSX registers are in host endian order
When VSX support was initially added, the fpr registers were added at
offset 0 of the VSR register and the vsrl registers were added at offset
1. This is in contrast to the VMX registers (the last 32 VSX registers) which
are stored in host-endian order.

Switch the fpr/vsrl registers so that the lower 32 VSX registers are now also
stored in host endian order to match the VMX registers. This ensures that TCG
vector operations involving mixed VMX and VSX registers will function
correctly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190307180520.13868-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Mark Cave-Ayland 37da91f163 target/ppc: improve avr64_offset() and use it to simplify get_avr64()/set_avr64()
By using the VsrD macro in avr64_offset() the same offset calculation can be
used regardless of the host endian. This allows get_avr64() and set_avr64() to
be simplified accordingly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-6-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Mark Cave-Ayland c82a8a8542 target/ppc: introduce avr_full_offset() function
All TCG vector operations require pointers to the base address of the vector
rather than separate access to the top and bottom 64-bits. Convert the VMX TCG
instructions to use a new avr_full_offset() function instead of avr64_offset()
which can then itself be written as a simple wrapper onto vsr_full_offset().

This same function can also reused in cpu_avr_ptr() to avoid having more than
one copy of the offset calculation logic.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-5-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Mark Cave-Ayland da7815ef31 target/ppc: move Vsr* macros from internal.h to cpu.h
It isn't possible to include internal.h from cpu.h so move the Vsr* macros
into cpu.h alongside the other VMX/VSX register access functions.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-4-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Mark Cave-Ayland 45141dfd23 target/ppc: introduce single vsrl_offset() function
Instead of having multiple copies of the offset calculation logic, move it to a
single vsrl_offset() function.

This commit also renames the existing get_vsr()/set_vsr() functions to
get_vsrl()/set_vsrl() which better describes their purpose.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190307180520.13868-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Mark Cave-Ayland e7d3b272ed target/ppc: introduce single fpr_offset() function
Instead of having multiple copies of the offset calculation logic, move it to a
single fpr_offset() function.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190307180520.13868-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Suraj Jitindar Singh 68f9f70841 target/ppc/spapr: Enable H_PAGE_INIT in-kernel handling
The H_CALL H_PAGE_INIT can be used to zero or copy a page of guest
memory. Enable the in-kernel H_PAGE_INIT handler.

The in-kernel handler takes half the time to complete compared to
handling the H_CALL in userspace.
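
A sketch of how an hcall gets routed in-kernel, assuming the existing
KVM_CAP_PPC_ENABLE_HCALL mechanism (the wrapper name here is illustrative):

    static void kvmppc_enable_h_page_init_sketch(void)
    {
        /* ask KVM to handle H_PAGE_INIT itself instead of exiting to QEMU */
        kvm_vm_enable_cap(kvm_state, KVM_CAP_PPC_ENABLE_HCALL, 0,
                          H_PAGE_INIT, 1);
    }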

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Message-Id: <20190306060608.19935-1-sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Fabiano Rosas 468e3a1a94 target/ppc: Refactor kvm_handle_debug
There are four scenarios being handled in this function:

- single stepping
- hardware breakpoints
- software breakpoints
- fallback (no debug supported)

A future patch will add code to handle specific single step and
software breakpoints cases so let's split each scenario into its own
function now to avoid hurting readability.

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190228225759.21328-5-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Fabiano Rosas 2cbd158131 target/ppc: Move handling of hardware breakpoints to a separate function
This is in preparation for a refactoring of the kvm_handle_debug
function in the next patch.

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20190228225759.21328-4-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Fabiano Rosas 2586a4d7a0 target/ppc: Move exception vector offset computation into a function
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190228225759.21328-2-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 14:33:04 +11:00
Suraj Jitindar Singh 8ff43ee404 target/ppc/spapr: Add SPAPR_CAP_CCF_ASSIST
Introduce a new spapr_cap SPAPR_CAP_CCF_ASSIST to be used to indicate
the requirement for a hw-assisted version of the count cache flush
workaround.

The count cache flush workaround is a software workaround which can be
used to flush the count cache on context switch. Some revisions of
hardware may have a hardware accelerated flush, in which case the
software flush can be shortened. This cap is used to set the
availability of such hardware acceleration for the count cache flush
routine.

The availability of such hardware acceleration is indicated by the
H_CPU_CHAR_BCCTR_FLUSH_ASSIST flag being set in the characteristics
returned from the KVM_PPC_GET_CPU_CHAR ioctl.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Message-Id: <20190301031912.28809-2-sjitindarsingh@gmail.com>
[dwg: Small style fixes]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 12:07:49 +11:00
Suraj Jitindar Singh 399b2896d4 target/ppc/spapr: Add workaround option to SPAPR_CAP_IBS
The spapr_cap SPAPR_CAP_IBS is used to indicate the level of capability
for mitigations for indirect branch speculation. Currently the available
values are broken (default), fixed-ibs (fixed by serialising indirect
branches) and fixed-ccd (fixed by disabling the count cache).

Introduce a new value for this capability denoted workaround, meaning that
software can work around the issue by flushing the count cache on
context switch. This option is available if the hypervisor sets the
H_CPU_BEHAV_FLUSH_COUNT_CACHE flag in the cpu behaviours returned from
the KVM_PPC_GET_CPU_CHAR ioctl.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Message-Id: <20190301031912.28809-1-sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 12:07:49 +11:00
Suraj Jitindar Singh 7d050527e3 target/ppc: Implement large decrementer support for KVM
Implement support to allow KVM guests to take advantage of the large
decrementer introduced on POWER9 cpus.

To determine if the host can support the requested large decrementer
size, we check it matches that specified in the ibm,dec-bits device-tree
property. We also need to enable it in KVM by setting the LPCR_LD bit in
the LPCR. Note that to do this we need to try to set the bit and then read
it back to check whether the host allowed us to set it; if so we can use it,
but if we were unable to set it the host cannot support it and we must not
use the large decrementer.
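
The set-then-verify dance could be sketched like this (register IDs and
helpers are the ones QEMU normally uses for LPCR access, but treat the
details as illustrative):

    static int kvmppc_try_large_decr_sketch(CPUState *cs)
    {
        uint64_t lpcr;

        kvm_get_one_reg(cs, KVM_REG_PPC_LPCR_64, &lpcr);
        lpcr |= LPCR_LD;                                  /* request large decrementer */
        kvm_set_one_reg(cs, KVM_REG_PPC_LPCR_64, &lpcr);
        kvm_get_one_reg(cs, KVM_REG_PPC_LPCR_64, &lpcr);  /* did the host keep it? */
        return (lpcr & LPCR_LD) ? 0 : -1;                 /* -1: must not use it */
    }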

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190301024317.22137-3-sjitindarsingh@gmail.com>
[dwg: Small style fixes]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 12:07:49 +11:00
Suraj Jitindar Singh a8dafa5251 target/ppc: Implement large decrementer support for TCG
Prior to POWER9 the decrementer was a 32-bit register which decremented
with each tick of the timebase. From POWER9 onwards the decrementer can
be set to operate in a mode called large decrementer where it acts as an
n-bit decrementing register which is visible as a 64-bit register, that
is the value of the decrementer is sign extended to 64 bits (where n is
implementation dependent).

The mode in which the decrementer operates is controlled by the LPCR_LD
bit in the logical partition control register (LPCR).

From POWER9 onwards the HDEC (hypervisor decrementer) was enlarged to
h bits, also sign extended to 64 bits (where h is implementation
dependent). Note this isn't configurable and is always enabled.

On POWER9 the large decrementer and hdec are both 56 bits, as
represented by the lrg_decr_bits cpu class property. Since they are the
same size we only add one property for now, which could be extended in
the case they ever differ in the future.

We also add the lrg_decr_bits property for POWER5+/7/8 since it is used
to determine the size of the hdec, which is only generated on the
POWER5+ processor and later. On these processors it is 32 bits.
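
The visibility rule above boils down to a sign extension of the n-bit value
when the register is read, roughly:

    /* sketch: nr_bits is 56 on POWER9, 32 for the classic decrementer */
    static inline int64_t decr_visible_value_sketch(uint64_t decr, int nr_bits)
    {
        return sextract64(decr, 0, nr_bits);
    }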

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190301024317.22137-2-sjitindarsingh@gmail.com>
[dwg: Small style fixes]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12 12:07:49 +11:00
Peter Maydell 377b155bde * allow building QEMU without TCG or KVM support (Anthony)
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* allow building QEMU without TCG or KVM support (Anthony)
* update AMD IOMMU copyright (David)
* compilation fixes for GCC and BSDs (Alexey, David, Paolo, Philippe)
* coalesced I/O bugfix (Jagannathan)
* Processor Tracing cpuid fix (Luwei)
* Kconfig fixes (Paolo, David)
* Cleanups (Paolo, Wei)
* PVH vs. multiboot fix (Stefano)
* LSI bugfixes (Sven)
* elf2dmp Coverity fix (Victor)
* scsi-disk fix (Zhengui)
* authorization support for chardev TLS (Daniel)

* remotes/bonzini/tags/for-upstream: (31 commits)
  qemugdb: fix licensing
  chardev: add support for authorization for TLS clients
  qom: cpu: destroy work_mutex in cpu_common_finalize
  exec.c: refactor function flatview_add_to_dispatch()
  lsi: 810/895A are always little endian
  lsi: return dfifo value
  lsi: use SCSI phase names instead of numbers in trace
  lsi: use enum type for s->msg_action
  lsi: use enum type for s->waiting
  lsi: use ldn_le_p()/stn_le_p()
  scsi-disk: Fix crash if request is invalid or disk has no medium
  configure: Disable W^X on OpenBSD
  oslib-posix: Ignore fcntl("/dev/null", F_SETFL, O_NONBLOCK) failure
  accel: Allow to build QEMU without TCG or KVM support
  build: clean trace/generated-helpers.c
  build: remove unnecessary assignments from Makefile.target
  build: get rid of target-obj-y
  update copyright notice
  lsi: check if SIGP bit is already set in Wait reselect
  lsi: implement basic SBCL functionality
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-11 18:26:37 +00:00
Luwei Kang f24c3a79a4 i386: extended the cpuid_level when Intel PT is enabled
Intel Processor Trace requires CPUID[0x14], but cpuid_level
does not change when creating a KVM guest with
e.g. "-cpu qemu64,+intel-pt".

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <1548805979-12321-1-git-send-email-luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-11 16:33:49 +01:00
Paolo Bonzini 840159e48c target-i386: add kvm stubs to user-mode emulators
The CPUID code will call kvm_arch_get_supported_cpuid() and, even though
it is guarded by kvm_enabled() so it never runs for user-mode emulators,
sometimes clang will not optimize it out at -O0.

That could be considered a compiler bug, however at -O0 we give it
a pass and just add the stubs.

Reported-by: Kamil Rytarowski <n54@gmx.com>
Tested-by: Kamil Rytarowski <n54@gmx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-11 16:33:49 +01:00
David Hildenbrand 2c7590c8ea s390x/tcg: Implement VECTOR UNPACK *
Combine all variants in a single handler. As source and destination
have different element sizes, we can't use gvec expansion. Expand
manually. Also watch out for overlapping source and destination
registers. Use a safe evaluation order depending on the operation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-33-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 0e0a5b49ad s390x/tcg: Implement VECTOR STORE WITH LENGTH
Very similar to VECTOR LOAD WITH LENGTH, just the opposite direction.
Properly probe write access before modifying memory.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-32-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 29b8bcf140 s390x/tcg: Implement VECTOR STORE MULTIPLE
Similar to VECTOR LOAD MULTIPLE, just the opposite direction. Probe
write access first.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-31-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 7b9a236ea7 s390x/tcg: Implement VECTOR STORE ELEMENT
As we only store one element, there is nothing to consider regarding
exceptions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-30-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 15e12add0b s390x/tcg: Implement VECTOR STORE
Properly probe the whole access first.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-29-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand c5a7392cfb s390x/tcg: Provide probe_write_access helper
Instead of checking e.g. the first access on every touched page, we should
check the actual access, otherwise we might get false positives when Low
Address Protection (LAP) is active. As probe_write() can only deal with
accesses to one page, we have to loop.

Use i64 for the length although it is not needed - this makes it easier
to reuse TCG temps we already have in the translation functions where
this will be used. Also allow it to be used from other helpers.
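
The page-by-page probing described above can be sketched as follows
(probe_write() checks at most one page, so the range is split; the chunking
idiom computes the bytes left on the current target page; names are
illustrative):

    static void probe_write_access_sketch(CPUS390XState *env, uint64_t addr,
                                          uint64_t len, uintptr_t ra)
    {
        while (len) {
            /* bytes remaining up to the end of the current target page */
            uint64_t chunk = MIN(len, -(addr | TARGET_PAGE_MASK));

            probe_write(env, addr, chunk, cpu_mmu_index(env, false), ra);
            addr += chunk;
            len -= chunk;
        }
    }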

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-28-david@redhat.com>
[CH: add missing page_check_range()]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand a2338cfb07 s390x/tcg: Implement VECTOR SIGN EXTEND TO DOUBLEWORD
Load both elements signed and store them into the two 64 bit elements.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-27-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand db23070c76 s390x/tcg: Implement VECTOR SELECT
Provide an implementation based on i64 and on real host vectors.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-26-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 7007ec27a9 s390x/tcg: Implement VECTOR SCATTER ELEMENT
Similar to VECTOR GATHER ELEMENT, but the other direction.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-25-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 3a338e29df s390x/tcg: Implement VECTOR REPLICATE IMMEDIATE
Like VECTOR REPLICATE, but the element to be replicated comes from an
immediate.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-24-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 28d08731b1 s390x/tcg: Implement VECTOR REPLICATE
Replicate via the special gvec helper.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-23-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 66bb3333bc s390x/tcg: Implement VECTOR PERMUTE DOUBLEWORD IMMEDIATE
Read the whole input before modifying the destination vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-22-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 7aaf844d46 s390x/tcg: Implement VECTOR PERMUTE
Take care of overlying inputs and outputs by using a temporary vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-21-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 73946f0d55 s390x/tcg: Implement VECTOR PACK *
This is a big one. Luckily we only have a limited set of such nasty
instructions.

We'll implement all variants with helpers, except when sources and
the destination don't overlap for VECTOR PACK. Provide different helpers
when the cc is to be modified. We'll return the cc then via env->cc_op.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-20-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 2ff47e6cce s390x/tcg: Implement VECTOR MERGE (HIGH|LOW)
We cannot use gvec expansion as source and destination elements
have different element numbers. So we'll expand using a fancy loop.
Also, we have to take care of overlapping source and destination
registers, therefore use a safe evaluation order depending on the
operation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-19-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand f6c7ff6757 s390x/tcg: Implement VECTOR LOAD WITH LENGTH
We can reuse the helper introduced along with VECTOR LOAD TO BLOCK
BOUNDARY. We just have to take care of converting the highest index into
a length.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-18-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 76dbd28935 s390x/tcg: Implement VECTOR LOAD VR FROM GRS DISJOINT
Fairly easy, just load the two GPRs into a single vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 90e3af6bb8 s390x/tcg: Implement VECTOR LOAD VR ELEMENT FROM GR
Very similar to VECTOR LOAD GR FROM VR ELEMENT, just the opposite
direction. Also provide a fast path in case we don't care about the
register content.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 60e9e3f1b0 s390x/tcg: Implement VECTOR LOAD TO BLOCK BOUNDARY
Very similar to LOAD COUNT TO BLOCK BOUNDARY, but instead of only
calculating, the actual vector is loaded. Use a temporary vector to
not modify the real vector on exceptions. Initialize that one to zero,
to not leak any data. Provide a fast path if we're loading a full
vector.

As we don't have gvec ool handlers for single vectors, just calculate
the vector address manually.

We can reuse the helper later on for VECTOR LOAD WITH LENGTH. In fact,
we are going to name it "vll" right from the beginning, because that's
a better match.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 0a85f8257a s390x/tcg: Implement VECTOR LOAD MULTIPLE
Try to load the last element first. Access to the first element will
be checked afterwards. This way, we can guarantee that the vector is
not modified before we checked for all possible exceptions. (16 vectors
cannot cross more than two pages)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand f180da83c0 s390x/tcg: Implement VECTOR LOAD LOGICAL ELEMENT AND ZERO
Fairly easy, zero out the vector before we load the desired element.
Load the element before touching the vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 5d203bea59 s390x/tcg: Implement VECTOR LOAD GR FROM VR ELEMENT
To avoid a helper, we have to do the actual calculation of the element
address (offset in cpu_env + cpu_env) manually. Factor that out into
get_vec_element_ptr_i64(). The same logic will be reused for "VECTOR
LOAD VR ELEMENT FROM GR".

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand e6790d3211 s390x/tcg: Implement VECTOR LOAD ELEMENT IMMEDIATE
Take care of properly sign-extending the immediate.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 4b66439463 s390x/tcg: Implement VECTOR LOAD ELEMENT
Fairly easy, load with desired size and store it into the right element.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 79c1620aea s390x/tcg: Implement VECTOR LOAD AND REPLICATE
We can use tcg_gen_gvec_dup_i64() to carry out the duplication.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand b4f5ae47d8 s390x/tcg: Implement VECTOR LOAD
When loading from memory, load both elements into temps first before
modifying the target vector.

Loading with strange alignment from the end of the address space will
not properly wrap; we can ignore that for now.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand eeb11a90a6 s390x/tcg: Implement VECTOR GENERATE MASK
Add gen_gvec_dupi() for handling duplication of immediates, so it can
be reused later.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 64052062a4 s390x/tcg: Implement VECTOR GENERATE BYTE MASK
Let's optimize it for the common cases (setting a vector to zero or all
ones) - courtesy of Richard.
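
The special cases then collapse into a single dup (sketch, hypothetical
shape, not the exact code):

    if (i2 == 0 || i2 == 0xffff) {
        /* all bytes 0x00 or all bytes 0xff */
        tcg_gen_gvec_dup8i(vec_full_reg_offset(v1), 16, 16,
                           i2 == 0 ? 0 : 0xff);
    } else {
        /* general case: expand each of the 16 mask bits into one byte */
    }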

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 6d841663be s390x/tcg: Implement VECTOR GATHER ELEMENT
Let's start with a more involved one, but it is the first in the list
of vector support instructions (introduced with the vector facility).

The good thing is, we need a lot of basic infrastructure for this: reading
and writing vector elements as well as checking element validity.

All vector instruction related translation functions will reside in
translate_vx.inc.c, to be included in translate.c - similar to how
other architectures handle it.

While at it, directly add some documentation (which contains parts about
things added in follow-up patches, but splitting this up does not make
too much sense). Also add ES_* defines heavily used later.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 5b5d2090de s390x/tcg: Utilities for vector instruction helpers
We'll have to read/write vector elements quite frequently from helpers.
The tricky bit is properly taking care of endianness. Handle it similarly
to aarch64.
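
For 64-bit elements this boils down to indexing the two host doublewords
directly; smaller element sizes additionally need an endianness adjustment
on little-endian hosts, similar to aarch64's H() macros. A sketch (assumed
struct layout):

    /* element 0 is the leftmost (most significant) doubleword */
    static inline uint64_t s390_vec_read_element64(const S390Vector *v,
                                                   uint8_t enr)
    {
        return v->doubleword[enr];
    }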

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand b971a2fda3 s390x/tcg: Check vector register instructions at central point
Check them at a central point. We'll use a new instruction flag to
flag all vector instructions (IF_VEC) and handle it very similar to
AFP, whereby we use another unused position in the PSW mask to store
the state of vector register enablement per translation block.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
David Hildenbrand 481accd4f5 s390x/tcg: Define vector instruction formats
These are the new instruction formats related to vector instructions as
up to the z14 (a.k.a. latest PoP).

As v2 appears (like x2 in VRX) with d2/b2 in VRV, we have to assign it a
higher field number to avoid collisions.

Properly take care of the MSB (to be able to address 32 registers) for
each vector register field stored in the RXB field (bits 36 - 39 for all
vector instructions). As we have 32 vector registers and the
"v" fields are only 4 bits in size, the 5th bit is stored in the RXB.
We use a new type to indicate that the MSB has to be fetched from the
RXB.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
Thomas Huth a5f5ca5eaf target/s390x: Remove non-architected entries from struct LowCore
There are some fields in our struct LowCore which apparently have
been copied from a very old version of the Linux kernel. These
fields are not architected in the "Principles of Operation", and
only used on these memory locations in Linux kernels older than
2.6.29. Newer Linux kernels moved the entries to different locations
or are not using them at all anymore. Thus we should never access
these fields from the QEMU side, so they should be removed.

While we're at it, also add a QEMU_BUILD_BUG_ON() statement to
assert that struct LowCore has the right size.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1551775581-27989-1-git-send-email-thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11 09:31:01 +01:00
Peter Maydell 1eb5da3b73 Fixes mixed up operands in CADDN and CADD
-----BEGIN PGP SIGNATURE-----
 
 iQJTBAABCgA9FiEEbmNqfoPy3Qz6bm43CtLGOWtpyhQFAlyCOaEfHGtiYXN0aWFu
 QG1haWwudW5pLXBhZGVyYm9ybi5kZQAKCRAK0sY5a2nKFIBjEACQWORowBxQJbA4
 2//8LOrYL35u/Dejx1AzusWQ23kr6CkRj61i6iQHO6DZw1bHunmcLRcOIqHtqLju
 /WL6qoH2idOQ2skC8EAZBvL0vRnlVvdX7TMUQ8qG6G6OPqt0xD3CH44qRUCMupI4
 tZlMRHT2T6DKUen96hCWxjZlH4w36GM9llLKaKfJKm/Dn51n0JS8HGG8vcDULsCZ
 yYd3AmWvN4UdGcWKsbUB3o9UyfTaUlt6GFgAOaLsrl1SvZYAfx6JH7XOWr/W4KtL
 jGza58kALYghEVbmZYI6YyMPnCYnF6zR5cZx1f9Oqtk5ReEhD1oZ3Q4Vzg45PcMW
 LztTAtcJq2W1Wjg69ceW/hD9KTsIeSLnKAQoyot1lWEfbak4wJjXZ7ThPMuhmNYm
 ySUQiYWNQi6rKH0xU5EdQsZYCI1QSl3JSuILCTq+L6RFbLJE1Vev69ddf/9l6Jji
 2TyMC+zDSvbJ/6BHu6iP6vfYH/5Vj+3tWmzi+61X+jBEdAPDYimhSXNTM8+QbPCw
 XtTLZntFUEa1xnyA5ocCDFXV6O5fJP3NvARktNK+ks7AD3DiKY63PuQa55LiAxiN
 VT2IyEJEUVjTwohQrCWf+RPwOeOZVjgnescNMmu1dyEESEEyF4Ie5lIo2L5hEhrg
 h8iEnEXqE6KGXHvG+hOTMTIJOsWwtg==
 =JqjZ
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bkoppelmann/tags/pull-tricore-2019-03-08' into staging

Fixes mixed up operands in CADDN and CADD

# gpg: Signature made Fri 08 Mar 2019 09:45:05 GMT
# gpg:                using RSA key 6E636A7E83F2DD0CFA6E6E370AD2C6396B69CA14
# gpg:                issuer "kbastian@mail.uni-paderborn.de"
# gpg: Good signature from "Bastian Koppelmann <kbastian@mail.uni-paderborn.de>" [full]
# Primary key fingerprint: 6E63 6A7E 83F2 DD0C FA6E  6E37 0AD2 C639 6B69 CA14

* remotes/bkoppelmann/tags/pull-tricore-2019-03-08:
  tricore: fixed RCR_CADDN instruction
  tricore: fixed RCR_CADD instruction

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-08 18:29:41 +00:00
David Brenken d1c1d88ce6 tricore: fixed RCR_CADDN instruction
Signed-off-by: Christian Richter <christian.richter@efs-auto.de>
Signed-off-by: David Brenken <david.brenken@efs-auto.de>
Signed-off-by: Georg Hofstetter <georg.hofstetter@efs-auto.de>
Signed-off-by: Robert Rasche <robert.rasche@efs-auto.de>
Signed-off-by: Lars Biermanski <lars.biermanski@efs-auto.de>

Message-Id: <20190207073928.4048-3-david.brenken@efs-auto.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
2019-03-08 10:00:59 +01:00
David Brenken a00585eef2 tricore: fixed RCR_CADD instruction
Signed-off-by: Christian Richter <christian.richter@efs-auto.de>
Signed-off-by: David Brenken <david.brenken@efs-auto.de>
Signed-off-by: Georg Hofstetter <georg.hofstetter@efs-auto.de>
Signed-off-by: Robert Rasche <robert.rasche@efs-auto.de>
Signed-off-by: Lars Biermanski <lars.biermanski@efs-auto.de>

Message-Id: <20190207073928.4048-2-david.brenken@efs-auto.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
2019-03-08 10:00:59 +01:00
Richard Henderson b35aec8597 target/hppa: Optimize blr r0,rn
We can eliminate an extra TB in this case, which merely
loads a "return address" into rn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-07 17:43:12 -08:00
Richard Henderson 993119fe58 target/hppa: Do not return freed temporary
For priv levels 1 & 2, we were doing so from do_ibranch_priv.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-07 17:43:12 -08:00
Eric Auger a27382e210 kvm: add kvm_arm_get_max_vm_ipa_size
Add the kvm_arm_get_max_vm_ipa_size() helper that returns the
number of bits in the IPA address space supported by KVM.

This capability needs to be known to create the VM with a
specific IPA max size (the kvm_type passed along with the KVM_CREATE_VM ioctl).
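
A sketch of the shape of such a helper (the 40-bit fallback for kernels
without the capability is an assumption on my side):

    int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
    {
        KVMState *s = KVM_STATE(ms->accelerator);
        int ret = kvm_check_extension(s, KVM_CAP_ARM_VM_IPA_SIZE);

        /* 0 means the capability is absent; fall back to the legacy 40 bits */
        return ret <= 0 ? 40 : ret;
    }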

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20190304101339.25970-6-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:09 +00:00
Richard Henderson 6bea25631a target/arm: Implement ARMv8.5-FRINT
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson 0e4db23d1f target/arm: Restructure handle_fp_1src_{single, double}
This will allow sharing code that adjusts rmode beyond
the existing users.

Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson 5ef84f1114 target/arm: Implement ARMv8.5-CondM
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson b89d9c988a target/arm: Implement ARMv8.4-CondM
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed up block comment style]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson 2fba34f70d target/arm: Rearrange disas_data_proc_reg
This decoding more closely matches the ARMv8.4 Table C4-6,
Encoding table for Data Processing - Register Group.

In particular, op2 == 0 is now more than just Add/sub (with carry).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson 22ac3c4964 target/arm: Add set/clear_pstate_bits, share gen_ss_advance
We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.
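
The inline versions presumably boil down to a load/modify/store of
env->pstate at translate time; a sketch (assumed, not the exact helpers):

    static inline void set_pstate_bits(uint32_t bits)
    {
        TCGv_i32 p = tcg_temp_new_i32();

        tcg_gen_ld_i32(p, cpu_env, offsetof(CPUARMState, pstate));
        tcg_gen_ori_i32(p, p, bits);
        tcg_gen_st_i32(p, cpu_env, offsetof(CPUARMState, pstate));
        tcg_temp_free_i32(p);
    }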

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson ff730e9666 target/arm: Split helper_msr_i_pstate into 3
The EL0+UMA check is unique to DAIF.  While SPSel had avoided the
check by nature of already checking EL >= 1, the other post v8.0
extensions to MSR (imm) allow EL0 and do not require UMA.  Avoid
the unconditional write to pc and use raise_exception_ra to unwind.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson cb570bd318 target/arm: Implement ARMv8.0-PredInv
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:07 +00:00
Richard Henderson 9888bd1e20 target/arm: Implement ARMv8.0-SB
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:07 +00:00
Richard Henderson 64e40755cd target/arm: Split out arm_sctlr
Minimize the number of places that will need updating when
the virtual host extensions are added.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:07 +00:00
Richard Henderson 9d090d1723 target/arm: Fix PC test for LDM (exception return)
Found by inspection: Rn is the base register against which the
load began; I is the register within the mask being processed.
The exception return should of course be processed from the loaded PC.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190301202921.21209-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:07 +00:00
David Hildenbrand df192fbc51 s390x: Add floating-point extension facility to "qemu" cpu model
The floating-point extension facility implemented certain changes to
BFP, HFP and DFP instructions.

As we don't implement HFP/DFP, we can ignore those completely. Related
to BFP, the changes include
- SET BFP ROUNDING MODE (SRNMB) instruction
- BFP-rounding-mode field in the FPC register is changed to 3 bits
- CONVERT FROM LOGICAL instructions
- CONVERT TO LOGICAL instructions
- Changes (rounding mode + XxC) added to
-- CONVERT TO FIXED
-- CONVERT FROM FIXED
-- LOAD FP INTEGER
-- LOAD ROUNDED
-- DIVIDE TO INTEGER

For TCG, we don't implement DIVIDE TO INTEGER, and it is harder to
implement, so skip that. Also, as we don't implement PFPO, we can skip
changes to that as well. The other parts are now implemented, we can
indicate the facility.

z14 PoP mentions that "The floating-point extension facility is installed
in the z/Architecture architectural mode. When bit 37 is one, bit 42 is
also one.", meaning that the DFP (decimal-floating-point) facility also
has to be indicated. We can ignore that for now.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-16-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand b12b103ecd s390x/tcg: Handle all rounding modes overwritten by BFP instructions
"round to nearest with ties away from 0" maps to float_round_ties_away.
"round to prepare for shorter precision" maps to float_round_to_odd.

As all instructions properly check for valid rounding modes in translate.c,
we can add an assert. Fix one missing empty line.

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand bdcfcd445d s390x/tcg: Implement rounding mode and XxC for LOAD ROUNDED
With the floating-point extension facility, LOAD ROUNDED has
a rounding mode specification and the inexact-exception control (XxC).

Handle them just like e.g. LOAD FP INTEGER.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand dce0a58fd6 s390x/tcg: Implement XxC and checks for most FP instructions
With the floating-point extension facility
- CONVERT FROM LOGICAL
- CONVERT TO LOGICAL
- CONVERT TO FIXED
- CONVERT FROM FIXED
- LOAD FP INTEGER
have both a rounding mode specification and the inexact-exception control
(XxC). Other instructions will be handled separately.

Check for valid rounding modes and also forward the XxC (via m4). To avoid
a lot of boilerplate code and changes to the helpers, combine the m3 and
m4 fields into a single 32-bit TCG variable. Perform checks at a central
place, taking into account whether the m3 or m4 field was ignored before
the floating-point extension facility was introduced.
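
The packing could look along these lines (a sketch; the exact layout is an
assumption, not necessarily what the patch does):

    /* pack the rounding mode (m3) and XxC (m4) into one 32-bit value */
    TCGv_i32 m34 = tcg_const_i32(deposit32(m3, 4, 4, m4));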

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-13-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand cf97f9ff94 s390x/tcg: Prepare for IEEE-inexact-exception control (XxC)
Some instructions allow suppressing IEEE-inexact exceptions.

z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions"
    IEEE-inexact-exception control (XxC): Bit 1 of
    the M4 field is the XxC bit. If XxC is zero, recogni-
    tion of IEEE-inexact exception is not suppressed;
    if XxC is one, recognition of IEEE-inexact excep-
    tion is suppressed.

In particular, handling for overflow/underflow remains as is; inexact is
still reported along with them:

z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions"
    For example, the IEEE-inexact-exception control (XxC)
    has no effect on the DXC; that is, the DXC for IEEE-
    overflow or IEEE-underflow exceptions along with the
    detail for exact, inexact and truncated, or inexact and
    incremented, is reported according to the actual con-
    dition.

Follow up patches will wire it correctly up for the applicable
instructions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand c0ee7015fd s390x/tcg: Refactor saving/restoring the bfp rounding mode
We want to reuse this in the context of vector instructions. So use
better matching names and introduce s390_restore_bfp_rounding_mode().

While at it, add proper newlines.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand b9c737f58e s390x/tcg: Check for exceptions in SET BFP ROUNDING MODE
Let's split handling of BFP/DFP rounding mode configuration. Also,
let's not reuse the sfpc handler; use a separate handler so we can
properly check for specification exceptions for SRNMB.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 2aea83c672 s390x/tcg: Handle SET FPC AND LOAD FPC 3-bit BFP rounding modes
We already forward the 3 bits correctly in the translation functions. We
also have to handle them properly and check for specification
exceptions.

Setting an invalid rounding mode (BFP only, all DFP rounding modes)
results in a specification exception. Setting unassigned bits in the
fpc results in a specification exception.

This fixes LOAD FPC (AND SIGNAL) and SET FPC (AND SIGNAL). Also, for
SET BFP ROUNDING MODE, the 3-bit rounding mode is now explicitly checked.

Note: TCG_CALL_NO_WG is required for sfpc handler, as we now inject
exceptions.

We won't be modeling absence of the "floating-point extension facility"
for now; this is not necessary, as most software takes the facility for
granted without checking.

z14 PoP, 9-23, "LOAD FPC"
    When the floating-point extension facility is
    installed, bits 29-31 of the second operand must
    specify a valid BFP rounding mode and bits 6-7,
    14-15, 24, and 28 must be zero; otherwise, a
    specification exception is recognized.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-9-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 8772bbe4e7 s390x/tcg: Fix simulated-IEEE exceptions
The trap is triggered based on priority of the enabled signaling flags.
Only overflow and underflow allow a concurrent inexact exception.

z14 PoP, 9-33, Figure 9-21

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand f66a0ecf23 s390x/tcg: Refactor SET FPC AND SIGNAL handling
We can directly work on the uint64_t value, no need for a temporary
uint32_t value.

Also cleanup and shorten the comments.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 6d6ad1d14e s390x/tcg: Hide IEEE underflows in some scenarios
IEEE underflows are not reported when the mask bit is off and we don't
also have an inexact exception.

z14 PoP, 9-20, "IEEE Underflow":
    An IEEE-underflow exception is recognized for an
    IEEE target when the tininess condition exists and
    either: (1) the IEEE-underflow mask bit in the FPC
    register is zero and the result value is inexact, or (2)
    the IEEE-underflow mask bit in the FPC register is
    one.
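
In other words, roughly (a sketch with made-up names, just to spell out the
condition):

    /* only report underflow if it will be trapped on, or if the result
     * was also inexact */
    bool report_underflow = tininess &&
                            (underflow_mask_bit || result_is_inexact);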

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand fcb9e9f2a1 s390x/tcg: Fix parts of IEEE exception handling
Many things are wrong and some parts cannot be fixed yet. Fix what we
can fix easily and add two FIXMEs:

The fpc flags are not updated in case an exception is actually injected.
Inexact exceptions have to be handled separately, as they are the only
exceptions that can coexist with underflows and overflows.

I reread the horribly complicated chapters in the PoP at least 5 times
and hope I got it right.

For references:
- z14 PoP, 9-18, "IEEE Exceptions"
- z14 PoP, 19-9, Figure 19-8

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 4b70fc5497 s390x/tcg: Factor out conversion of softfloat exceptions
We want to reuse that function in vector instruction context. While at it,
clean up the code, using defines for magic values and avoiding the
handcrafted bit conversion.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 3af471f915 s390x/tcg: Fix rounding from float128 to uint64_t/uint32_t
Let's use the proper conversion functions now that we have them.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand fc7cc951b6 s390x/tcg: Fix TEST DATA CLASS instructions
Let's detect normal and denormal ("subnormal") numbers reliably. Also
test for quiet NaNs. As only one class is possible, test common cases
first.

While at it, use a better check to test for the mask bits in the data
class mask. The data class mask has 12 bits, whereby bit 0 is the
leftmost bit and bit 11 the rightmost bit. In the PoP an easy to read
table with the numbers is provided for the VECTOR FP TEST DATA CLASS
IMMEDIATE instruction, the table for TEST DATA CLASS is more confusing
as it is based on 64 bit values.

Factor the checks out into separate functions, as they will also be
needed for floating-point vector instructions. We can use a macro to
generate the functions.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-2-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 6d9303322e s390x/tcg: Implement LOAD COUNT TO BLOCK BOUNDARY
Use a new CC helper to calculate the CC lazily if needed. While the
PoP mentions that "A 32-bit unsigned binary integer" is placed into the
first operand, there is no word saying that the other 32 bits (the high
part) are left untouched. Maybe the other 32 bits are unpredictable.
So store 64 bits for now.
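
The count itself is just the distance to the boundary capped at 16; a
sketch of the arithmetic (not the TCG implementation):

    /* bs is the block size (a power of two between 64 and 4096) */
    static uint64_t lcbb_count(uint64_t addr, uint64_t bs)
    {
        uint64_t to_boundary = bs - (addr & (bs - 1));

        return to_boundary < 16 ? to_boundary : 16;
    }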

Bit magic courtesy of Richard.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-8-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 86b59624c4 s390x/tcg: Implement LOAD LENGTHENED short HFP to long HFP
Nice trick to load a 32-bit value into vector element 0 (32-bit element
size) from memory, zeroing out element 1. The short HFP to long HFP
conversion really is only a shift.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 2a1cf84452 s390x/tcg: Factor out gen_addi_and_wrap_i64() from get_address()
Also properly wrap in 24-bit mode. While at it, convert the comment (and
drop the comment about fundamental TCG optimizations).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 27197fec12 s390x/tcg: Factor out vec_full_reg_offset()
We'll use that a lot along with gvec helpers, to calculate the start
address of a vector.
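
Presumably little more than (sketch, assuming the vregs array layout):

    static inline int vec_full_reg_offset(uint8_t reg)
    {
        return offsetof(CPUS390XState, vregs[reg][0]);
    }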

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand ffdd8ebb79 s390x/tcg: Clarify terminology in vec_reg_offset()
We will use s390x speak "Element Size" (es) for MO_8 == 0, MO_16 == 1
... Simple rename of variables.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 076081ec8c s390x/tcg: Simplify disassembler operands initialization
Let's simplify initialization to 0.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 5cf9550665 s390x/tcg: RXE has an optional M3 field
Will be needed, so add it to the format description.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 9693379ba2 s390x/tcg: Save vregs to extended mchk save area
If we have vector registers and the designation is not zero, we have
to try to write the vector registers. If the designation is zero or
if storing fails, we must not indicate validity. s390_build_validity_mcic()
automatically already sets validity if the vector instruction facility
is installed.

As long as we don't support the guarded-storage facility, the alignment
and size of the area are always 1024 bytes.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 257619be42 s390x: use a QEMU-style typedef + name for SIGP save area struct
Convert this to QEMU style.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-3-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
David Hildenbrand 2cca53fd5c s390x: Use cpu_to_be64 in SIGP STORE ADDITIONAL STATUS
As we will support vector instructions soon, and vector registers are
stored in 64-bit host chunks, let's use cpu_to_be64. Same applies to the
guarded storage control block.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04 11:49:31 +01:00
Peter Maydell 4179575898 target/xtensa: FLIX support, various fixes and test improvements
- add FLIX (flexible length instructions extension) support;
 - make testsuite runnable on wider range of xtensa cores;
 - add floating point opcode tests;
 - don't add duplicate 'static' in import_core.sh script;
 - fix undefined opcodes detection in test_mmuhifi_c3 overlay.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCAAxFiEEK2eFS5jlMn3N6xfYUfnMkfg/oEQFAlx32cMTHGpjbXZia2Jj
 QGdtYWlsLmNvbQAKCRBR+cyR+D+gRGcJD/4li2uLwhAjuTWYyn4QhNueWXNBLvWl
 0XJ0n6FgBrJ/PrrbfzZHygqHLgz7Mf8LsKxr9UsQLrf2BEv3lbRdMGW3WzXa/d30
 UqdNv1swNFehzLk1YpIcxbVJma2yNx9WQ2XQadgGK83Nqzi5/UqEYL7v5r3y78iY
 fae+Ln5Rmgwv/DfUaI/yE23gotfORh5J9Ez3fNLMA7Y0lsKmDXstHtgZWTATg0mZ
 nrnRmeTB8ZuBvaHDNTgiUrjof94l3O3u2mx66Z78OuA6OQWKY/E4JseM2ds/AfGg
 xZeDz+wWsnpPI94QgVGuMj6W1mD5xToLMRwRIcwKdmcC3YCoVAnO9w+B6eyvaKTA
 Yp9EfegA8LiFQaaaVV7M5nsqn/ghQGZDb4m0O4S+2t5ut6hwMOc24iCrO4VhrbVA
 qXb1aM8q45rK0VY0XdMmQ263Ia2bOtXQeABxqF8alRcx5ArjeUJ0FiKiSNYd7lEM
 DQLtMLWPWnU7Fj9OM2OQP+ZSlDWfc1/Fxld/sOd2xfpefsxk+79qZ6uxoX43zRJ6
 qgPoNt9ghMYlE/oU/ajiCxR3U9tLgoLizhJLCem95uq1ppOSIHes4+D1WQ9eunJR
 d+v30aC9wVC6DahIG1ZZY8Zps2FZw53hRjnbsdwKEnbR+F8uloy7XyBFqTLXxViA
 jBiRS3i0MWz34A==
 =lWCC
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/xtensa/tags/20190228-xtensa' into staging

target/xtensa: FLIX support, various fixes and test improvements

- add FLIX (flexible length instructions extension) support;
- make testsuite runnable on wider range of xtensa cores;
- add floating point opcode tests;
- don't add duplicate 'static' in import_core.sh script;
- fix undefined opcodes detection in test_mmuhifi_c3 overlay.

# gpg: Signature made Thu 28 Feb 2019 12:53:23 GMT
# gpg:                using RSA key 2B67854B98E5327DCDEB17D851F9CC91F83FA044
# gpg:                issuer "jcmvbkbc@gmail.com"
# gpg: Good signature from "Max Filippov <filippov@cadence.com>" [unknown]
# gpg:                 aka "Max Filippov <max.filippov@cogentembedded.com>" [full]
# gpg:                 aka "Max Filippov <jcmvbkbc@gmail.com>" [full]
# Primary key fingerprint: 2B67 854B 98E5 327D CDEB  17D8 51F9 CC91 F83F A044

* remotes/xtensa/tags/20190228-xtensa: (40 commits)
  tests/tcg/xtensa: add FPU2000 coprocessor tests
  tests/tcg/xtensa: add FP1 group tests
  tests/tcg/xtensa: add FP0 group conversion tests
  tests/tcg/xtensa: add FP0 group arithmetic tests
  tests/tcg/xtensa: add LSCI/LSCX group tests
  tests/tcg/xtensa: add test for FLIX
  tests/tcg/xtensa: conditionalize MMU-related tests
  tests/tcg/xtensa: conditionalize windowed register tests
  tests/tcg/xtensa: conditionalize and fix s32c1i tests
  tests/tcg/xtensa: fix SR tests for big endian configs
  tests/tcg/xtensa: conditionalize and expand SR tests
  tests/tcg/xtensa: conditionalize timer/CCOUNT tests
  tests/tcg/xtensa: conditionalize interrupt tests
  tests/tcg/xtensa: add straightforward conditionals
  tests/tcg/xtensa: conditionalize cache option tests
  tests/tcg/xtensa: conditionalize debug option tests
  tests/tcg/xtensa: enable boolean tests
  tests/tcg/xtensa: fix endianness issues in test_b
  tests/tcg/xtensa: don't use optional opcodes in generic code
  tests/tcg/xtensa: support configs with LITBASE
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 19:04:16 +00:00
Peter Maydell 9403bccfe3 target-arm queue:
* add MHU and dual-core support to Musca boards
  * refactor some VFP insns to be gated by ID registers
  * Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
  * Implement ARMv8.2-FHM extension
  * Advertise JSCVT via HWCAP for linux-user
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAlx3wM8ZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3t+yD/4hbg4UCNDNHvnHv5N0dwVo
 xDnEwN8Ath5jhcIlwjB4sPg44wO1dTy9PXK75UskGbUXnJfl4VFQsTVOg6GELVPc
 RJJ7S1hBjaipRxaS7tgBl+sE03JFSFniGaYuU5cpwxh62HWlZRBZ85+Pw3iNb9So
 UgrnQeThPNb9STKt2x0T8TvgjmwuS6fRYqA0DSVqUWT7FRNgIpfJ+dVkGxAhC8Mh
 YJVmLfR1Z/HS3lWRHkZHDBkv036by7XnrRdTEb7yftNflmFHaX0OdSO/4+Uueslf
 Lz9uem7LUOwnz9x0tBDSdaUrfJ4hmJSNXZhoeINR0V4MUKQBVWvRUrlfymRlFL15
 SlI7i19FS0OleFTZs26TflGutgLwvMTRzAvhVR/F+pBqlYs1UxvNk4eMPLZFYPuc
 OlRsgoUUtmF722TjW2l+Uewixo22AMatyv9VsiR6Ut7etmLIj8HHABkDX5kQbqFc
 wz60pkUvPcywGGATaMImQJ+uoHOTXZhegBPyfYZYhbTVXshjvEYxFSLtmfhoyVAo
 SyUUhsQyu4KGRVm4zGXKQuAPALElaDcKJ/T1H11pobMrCgM48C3br3EGsSZyOEFp
 2A7ulT73sYL+7EjQ2fS/4kTXUGOiIWijo1oR9ANvqYbcROQiKDYsl023oEz0dpVY
 n2tWg1Gzt/KjeM0md8B/Lg==
 =7MnB
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190228-1' into staging

target-arm queue:
 * add MHU and dual-core support to Musca boards
 * refactor some VFP insns to be gated by ID registers
 * Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
 * Implement ARMv8.2-FHM extension
 * Advertise JSCVT via HWCAP for linux-user

# gpg: Signature made Thu 28 Feb 2019 11:06:55 GMT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190228-1:
  linux-user: Enable HWCAP_ASIMDFHM, HWCAP_JSCVT
  target/arm: Enable ARMv8.2-FHM for -cpu max
  target/arm: Implement VFMAL and VFMSL for aarch32
  target/arm: Implement FMLAL and FMLSL for aarch64
  target/arm: Add helpers for FMLAL
  Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
  target/arm: Gate "miscellaneous FP" insns by ID register field
  target/arm: Use MVFR1 feature bits to gate A32/T32 FP16 instructions
  hw/arm/armsse: Unify init-svtor and cpuwait handling
  hw/arm/iotkit-sysctl: Implement CPUWAIT and INITSVTOR*
  hw/arm/iotkit-sysctl: Add SSE-200 registers
  hw/misc/iotkit-sysctl: Correct typo in INITSVTOR0 register name
  target/arm/arm-powerctl: Add new arm_set_cpu_on_and_reset()
  target/arm/cpu: Allow init-svtor property to be set after realize
  hw/arm/armsse: Wire up the MHUs
  hw/misc/armsse-mhu.c: Model the SSE-200 Message Handling Unit

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 17:35:42 +00:00
Peter Maydell 711d13d5e2 MIPS queue for February 27th, 2019
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJcdpBIAAoJENSXKoln91plXDcH/377ByoCFuKu0uYN0f8m3eJ5
 wCwOrcwExM36vga/zaMCkkj44TbrzpNtjeo/frBn+8pabFDpfF6NOXlBSC+CE/hg
 i3G4Wm09GeNOyPH9JIdvItE1LvL3EEOf10pbheNdv6PeuFPRnUAV4pyQ/Rcu9USC
 7pAwIJvR3GYXAEhsqa8sKbbuCBq1oiFXWpsEuBNwybWKgdVEpia6IJVYDi+xwnVc
 FcpMF7BAqZDIX13kCSIgOAaa/XCKRFgxUYnZMd3bwD9m+x3iC442eS7Idx/HyXXK
 5HDeyubff8bMKBzTUWFuM1J8t0uuQsRDqR61WptQ0rxf5Qf9Uiv9OcAww+LIENI=
 =9h9l
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-feb-27-2019' into staging

MIPS queue for February 27th, 2019

# gpg: Signature made Wed 27 Feb 2019 13:27:36 GMT
# gpg:                using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01  DD75 D497 2A89 67F7 5A65

* remotes/amarkovic/tags/mips-queue-feb-27-2019:
  target/mips: Preparing for adding MMI instructions
  tests/tcg: target/mips: Add tests for MSA integer max/min instructions
  tests/tcg: target/mips: Add wrappers for MSA integer max/min instructions
  qemu-doc: Add section on MIPS' Boston board
  qemu-doc: Add section on MIPS' Fulong 2E board
  qemu-doc: Move section on MIPS' mipssim pseudo board
  disas: nanoMIPS: Fix a function misnomer
  tests/tcg: target/mips: Add tests for MSA integer compare instructions

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 12:59:49 +00:00
Max Filippov eb3f4298c9 target/xtensa: implement PREFCTL SR
Cache prefetch option adds an unprivileged SR PREFCTL. Add trivial
implementation for this SR.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov 068e538a54 target/xtensa: prioritize load/store in FLIX bundles
Load/store opcodes may raise MMU exceptions. Normally exceptions should
be checked in priority order before any actual operations, but since MMU
exceptions are tightly coupled with actual memory access, there's
currently no way to do it.

Approximate this behavior by executing all load, then all store, and
then all other opcodes in the FLIX bundles. Use opcode dependency
mechanism to express ordering. Mark load/store opcodes with
XTENSA_OP_{LOAD,STORE} flags. Newer libisa has classifier functions that
can tell whether opcode is a load or store, but this information is not
available in the existing overlays.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov 89bec9e911 target/xtensa: break circular register dependencies
Currently topologic opcode sorting stops at the first detected
dependency loop. Introduce struct opcode_arg_copy that describes
temporary register copy. Scan remaining opcodes searching for
dependencies that can be broken, break them by introducing temporary
register copies and record them in an array. In case of success
create local temporaries and initialize them with current register
values. Share single temporary copy between all register users. Delete
temporaries after translation.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov 575e962a01 target/xtensa: reorganize access to boolean registers
libisa represents boolean registers b0..b16 as a BR register file and as
BR4 and BR8 register groups. Add these register files and use
OpcodeArg::{in,out} parameters to access boolean registers in
translators.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov 7aa7834187 target/xtensa: reorganize access to MAC16 registers
libisa represents MAC16 registers m0..m3 as an MR register file. Add
this register file and reference its registers directly from the
translate_mac16. Drop translator parameter that indicates whether opcode
argument is in ar or in mr.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov b0b24bdcd9 target/xtensa: reorganize register handling in translators
To support circular register dependencies in FLIX bundles opcode inputs
and outputs must be separate and adjustable. Circular dependencies can
be broken by making temporary copies of opcode inputs and substituting
them into the arguments array instead of the original registers.

E.g. the circular register dependency in the following bundle:

  { mov a2, a3 ; mov a3, a2 }

can be resolved by making copy a2' = a2 and substituting it as input
argument of the second opcode:

  { mov a2, a3 ; mov a3, a2' }

Change opcode translator prototype to accept OpcodeArg array as
argument. For each register argument initialize OpcodeArg::{in,out} with
TCGv_* of the respective register. Don't explicitly use cpu_R in the
opcode translators, use OpcodeArg::{in,out} instead.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov c949009bc0 target/xtensa: only rotate window in the retw helper
Move return address calculation and WINDOW_START adjustment out of the
retw helper to simplify logic a bit and avoid using registers directly.
Pass a0 as a parameter to the helper.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov 8df3fd3596 target/xtensa: move WINDOW_BASE SR update to postprocessing
Opcodes that modify WINDOW_BASE SR don't have dependency on opcodes that
use windowed registers. If such opcodes are combined in a single
instruction they may not be correctly ordered. Instead of adding said
dependency use temporary register to store changed WINDOW_BASE value and
do actual register window rotation as a postprocessing step.
Not all opcodes that change WINDOW_BASE need this: retw, rfwo and rfwu
are also jump opcodes, so they are guaranteed to be translated last and
thus will not affect other opcodes in the same instruction.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov 45b71a795e target/xtensa: add generic instruction post-processing
Some opcodes may need additional actions at every exit from the
translated instruction or may need to amend TB exit slots available to
jumps generated for the instruction. Add gen_postprocess function and
call it from the gen_jump_slot and from the disas_xtensa_insn.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:22 -08:00
Max Filippov 20e9fd0fc0 target/xtensa: sort FLIX instruction opcodes
Opcodes in different slots may read and write the same resources (registers,
states). In the absence of resource dependency loops, it must be possible
to sort opcodes to avoid interference.

Record resources used by each opcode in the bundle. Build opcode
dependency graph and use topological sort to order its nodes. In case of
success translate opcodes in sort order. In case of failure report and
raise invalid opcode exception.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-28 04:43:15 -08:00
Richard Henderson 991c05995a target/arm: Enable ARMv8.2-FHM for -cpu max
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 11:03:05 +00:00
Richard Henderson 87732318c5 target/arm: Implement VFMAL and VFMSL for aarch32
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 11:03:05 +00:00
Richard Henderson 0caa5af802 target/arm: Implement FMLAL and FMLSL for aarch64
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 11:03:05 +00:00
Richard Henderson a4e943a716 target/arm: Add helpers for FMLAL
Note that float16_to_float32 rightly squashes SNaN to QNaN.
But of course pickNaNMulAdd, for ARM, selects SNaNs first.
So we have to preserve SNaN long enough for the correct NaN
to be selected.  Thus float16_to_float32_by_bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 11:03:05 +00:00
Peter Maydell 942f99c825 Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
This reverts commit 823e1b3818,
which introduces a regression running EDK2 guest firmware
under KVM:

error: kvm run failed Function not implemented
 PC=000000013f5a6208 X00=00000000404003c4 X01=000000000000003a
X02=0000000000000000 X03=00000000404003c4 X04=0000000000000000
X05=0000000096000046 X06=000000013d2ef270 X07=000000013e3d1710
X08=09010755ffaf8ba8 X09=ffaf8b9cfeeb5468 X10=feeb546409010756
X11=09010757ffaf8b90 X12=feeb50680903068b X13=090306a1ffaf8bc0
X14=0000000000000000 X15=0000000000000000 X16=000000013f872da0
X17=00000000ffffa6ab X18=0000000000000000 X19=000000013f5a92d0
X20=000000013f5a7a78 X21=000000000000003a X22=000000013f5a7ab2
X23=000000013f5a92e8 X24=000000013f631090 X25=0000000000000010
X26=0000000000000100 X27=000000013f89501b X28=000000013e3d14e0
X29=000000013e3d12a0 X30=000000013f5a2518  SP=000000013b7be0b0
PSTATE=404003c4 -Z-- EL1t

with
[ 3507.926571] kvm [35042]: load/store instruction decoding not implemented
in the host dmesg.

Revert the change for the moment until we can investigate the
cause of the regression.

Reported-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 11:03:05 +00:00
Peter Maydell c0c760afe8 target/arm: Gate "miscellaneous FP" insns by ID register field
There is a set of VFP instructions which we implement in
disas_vfp_v8_insn() and gate on the ARM_FEATURE_V8 bit.
These were all first introduced in v8 for A-profile, but in
M-profile they appeared in v7M. Gate them on the MVFR2
FPMisc field instead, and rename the function appropriately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-3-peter.maydell@linaro.org
2019-02-28 11:03:04 +00:00
Peter Maydell 602f6e42cf target/arm: Use MVFR1 feature bits to gate A32/T32 FP16 instructions
Instead of gating the A32/T32 FP16 conversion instructions on
the ARM_FEATURE_VFP_FP16 flag, switch to our new approach of
looking at ID register bits. In this case MVFR1 fields FPHP
and SIMDHP indicate the presence of these insns.

This change doesn't alter behaviour for any of our CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-2-peter.maydell@linaro.org
2019-02-28 11:03:04 +00:00
Peter Maydell ea824b9742 target/arm/arm-powerctl: Add new arm_set_cpu_on_and_reset()
Currently the Arm arm-powerctl.h APIs allow:
 * arm_set_cpu_on(), which powers on a CPU and sets its
   initial PC and other startup state
 * arm_reset_cpu(), which resets a CPU which is already on
   (and fails if the CPU is powered off)

but there is no way to say "power on a CPU as if it had
just come out of reset and don't do anything else to it".

Add a new function arm_set_cpu_on_and_reset(), which does this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219125808.25174-5-peter.maydell@linaro.org
2019-02-28 11:03:04 +00:00
Peter Maydell f9f62e4c37 target/arm/cpu: Allow init-svtor property to be set after realize
Make the M-profile "init-svtor" property be settable after realize.
This matches the hardware, where this is a config signal which
is sampled on CPU reset and can thus be changed between one
reset and another. To do this we have to change the API we
use to add the property.

(We will need this capability for the SSE-200.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219125808.25174-4-peter.maydell@linaro.org
2019-02-28 11:03:04 +00:00
Mateja Marjanovic 37b9aae2e6 target/mips: Preparing for adding MMI instructions
Set up MMI code to be compiled only for TARGET_MIPS64. This is
needed so that GPRs are 64 bit, and combined with MMI registers,
they will form full 128 bit registers.

Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1551183797-13570-2-git-send-email-mateja.marjanovic@rt-rk.com>
2019-02-27 14:26:14 +01:00
Benjamin Herrenschmidt 539c6e7358 target/ppc: Basic POWER9 bare-metal radix MMU support
No guest support yet

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-13-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 3367c62f52 target/ppc: Support for POWER9 native hash
(Might need more patch splitting)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-12-clg@kaod.org>
[dwg: Hack to fix compile with some earlier include tweaks of mine]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 79825f4d58 target/ppc: Rename PATB/PATBE -> PATE
That "b" means "base address" and thus shouldn't be in the name
of actual entries and related constants.

This patch keeps the synthetic patb_entry field of the spapr
virtual hypervisor unchanged until I figure out if that has
an impact on the migration stream.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-11-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt c4dae9cd37 target/ppc: Flush the TLB locally when the LPIDR is written
Our TCG TLB only tags whether it's a HV vs a guest access, so it must
be flushed when the LPIDR is changed.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-10-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 74c4912f09 target/ppc: Fix synchronization of mttcg with broadcast TLB flushes
Let's use the generic helper tlb_flush_all_cpus_synced() instead
of iterating the CPUs ourselves.

We do lose the optimization of clearing the "other" CPUs "need flush"
flags but this shouldn't be a problem in practice.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-9-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 34525595fb target/ppc: Add basic support for "new format" HPTE as found on POWER9
POWER9 (arch v3) slightly changes the HPTE format. The B bits move
from the first to the second half of the HPTE, and the AVPN/ARPN
are slightly shorter.

However, under SPAPR, the hypercalls still take the old format
(and probably will for the foreseeable future).

The simplest way to support this is thus to convert the HPTEs from
new to old format when reading them if the MMU model is v3 and there
is no virtual hypervisor, leaving the rest of the code unchanged.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-8-clg@kaod.org>
[dwg: Moved function to .c since there was no real need for it in the .h]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 3054b0ca4b target/ppc: Fix ordering of hash MMU accesses
With mttcg, we can have MMU lookups happening at the same time
as the guest modifying the page tables.

Since the HPTEs of the hash table MMU contain two words (or
doublewords on 64-bit), we need to make sure we read them
in the right order, with the correct memory barrier.
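
A sketch of the read side (hypothetical; the exact accessors differ):

    /* read the two halves of the HPTE in order, so that a concurrent
     * guest update cannot hand us a mismatched pair */
    pte0 = ldq_phys(CPU(cpu)->as, hpte_addr);
    smp_rmb();
    pte1 = ldq_phys(CPU(cpu)->as, hpte_addr + 8);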

Additionally, when using emulated SPAPR mode, the hypercalls
writing to the hash table must also perform the updates in
the right order.

Note: This part is still not entirely correct

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 2819282dae target/ppc: Fix #include guard in mmu-book3s-v3.h
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 2b9e0a6b94 target/ppc: Re-enable RMLS on POWER9 for virtual hypervisors
Historically the 64-bit server MMU supports two ways of configuring the
guest "real mode" mapping:

 - The "RMA" with is a single chunk of physically contiguous
memory remapped as guest real, and controlled by the RMLS
field in the LPCR register and the RMOR register.

 - The "VRMA" which uses special PTEs inserted in the partition
hash table by the hypervisor.

POWER9 deprecates the former, which is reflected by the filtering
done in ppc_store_lpcr() which effectively prevents setting of
the RMLS field.

However, when using fully emulated SPAPR machines, our qemu code
currently only knows how to define the guest real mode memory using
RMLS.

Thus you cannot run a SPAPR machine anymore with a POWER9 CPU
model today.

This works around it with a quirk in ppc_store_lpcr() to continue
allowing the RMLS field to be set when using a virtual hypervisor.

Ultimately we will want to implement configuring a VRMA instead
which will also be necessary if we want to migrate a SPAPR guest
between TCG and KVM but this is a lot more work.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 38c784a1cc target/ppc/mmu: Use LPCR:HR to chose radix vs. hash translation
Now that LPCR:HR is set properly for SPAPR, use it for deciding
the translation type, which also works for bare metal.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 00fd075e18 target/ppc/spapr: Set LPCR:HR when using Radix mode
The HW relies on LPCR:HR along with the PATE to determine whether
to use Radix or Hash mode. In fact it uses LPCR:HR more commonly
than the PATE.

For us, it's also more efficient to do so, especially since unlike
the HW we do not maintain a cache of the current PATE and HV PATE
in a generic place.

Prepare the grounds for that by ensuring that LPCR:HR is set
properly on SPAPR machines.

Another option would have been to use a callback to get the PATE
but this gets messy when implementing bare metal support, it's
much simpler (and faster) to use LPCR.

Since existing migration streams may not have it, fix it up in
spapr_post_load() as well based on the pseudo-PATE entry that
we keep.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 6eebe6dccb target/ppc: Add support for LPCR:HEIC on POWER9
This controls whether the External Interrupt (0x500) can be
delivered to the hypervisor or not.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-11-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:25 +11:00
Benjamin Herrenschmidt 67afe7759d target/ppc: Add POWER9 external interrupt model
Adds support for the Hypervisor directed interrupts in addition to the
OS ones.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: - modified the icp_realize() and xive_tctx_realize() to take
        into account explicitly the POWER9 interrupt model
      - introduced a specific power9_set_irq for POWER9 ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215161648.9600-10-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt d8ce5fd664 target/ppc: Add Hypervisor Virtualization Interrupt on POWER9
This adds support for delivering that exception.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-9-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt f8154fd22b target/ppc: Detect erroneous condition in interrupt delivery
It's very easy for the CPU specific has_work() implementation
and the logic in ppc_hw_interrupt() to be subtly out of sync.

This can occasionally allow a CPU to wake up from a PM state
and resume executing past the PM instruction when it should
resume at the 0x100 vector.

This detects if it happens and aborts, making it a lot easier
to catch such bugs when testing rather than chasing obscure
guest misbehaviour.
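
The shape of such a consistency check, roughly (illustrative code, not
the actual patch):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* If we were woken from a PM state, some interrupt must actually be
     * deliverable; otherwise has_work() and the delivery logic have
     * drifted apart, and it is better to abort loudly than to let the
     * guest silently run past the PM instruction. */
    static void check_wakeup_consistency(bool resume_as_sreset, bool delivered)
    {
        if (resume_as_sreset && !delivered) {
            fprintf(stderr, "PM wakeup with no deliverable interrupt\n");
            abort();
        }
    }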

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-8-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt a790e82b13 target/ppc: Add POWER9 exception model
And use it to get the correct HILE bit in HID0.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt 1e7fd61d97 target/ppc: Rename "in_pm_state" to "resume_as_sreset"
To better reflect what this does, as it's specific to some of the
P7/P8/P9 PM states, not generic.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-6-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt dead760b00 target/ppc: Move "wakeup reset" code to a separate function
This moves the code to handle waking up from the 0x100 vector
from powerpc_excp() to a separate function, as the former is
already way too big as it is.

No functional change.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt 21c0d66a9c target/ppc: Fix support for "STOP light" states on POWER9
STOP must act differently based on PSSCR:EC on POWER9. When set, it
acts like the P7/P8 power management instructions and wakes up at 0x100
based on the wakeup conditions in LPCR.

When PSSCR:EC is clear, however, it will wake up at the next instruction
after STOP (if EE is clear) or take the corresponding interrupts (if
EE is set).
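
Schematically, the two behaviours split on PSSCR:EC; a hand-written
pseudo-helper to illustrate the decision (not QEMU's actual code):

    #include <stdbool.h>

    enum stop_action { STOP_FULL_PM, STOP_RESUME_NEXT_INSN, STOP_TAKE_INTERRUPT };

    /* PSSCR:EC set -> behave like the P7/P8 PM instructions and wake at 0x100;
     * PSSCR:EC clear -> resume right after STOP, or take the interrupt if EE
     * is set. */
    static enum stop_action stop_behaviour(bool psscr_ec, bool msr_ee)
    {
        if (psscr_ec) {
            return STOP_FULL_PM;
        }
        return msr_ee ? STOP_TAKE_INTERRUPT : STOP_RESUME_NEXT_INSN;
    }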

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt 3621e2c960 target/ppc: Don't clobber MSR:EE on PM instructions
When issuing a power management instruction, we set MSR:EE
to force ppc_hw_interrupt() into calling powerpc_excp()
to deal with the fact that on P7 and P8, the system reset
caused by the wakeup needs to be generated regardless of
the MSR:EE value (using LPCR only).

This, however, means that the OS will see a bogus SRR1:EE
value, which is a problem. It also prevents properly
implementing P9 STOP "light".

So fix this by instead putting some logic in ppc_hw_interrupt()
to decide whether to deliver or not by taking into account the
fact that we are waking up from sleep.

The LPCR isn't checked as this is done in the has_work() test.
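
The gist of the delivery decision, sketched with invented names and
under the assumption that LPCR gating already happened in has_work():

    #include <stdbool.h>

    /* Deliver the wakeup system reset even when MSR:EE is clear if we are
     * coming out of a PM state; otherwise honour MSR:EE as usual, so the
     * register never has to be clobbered by the PM instruction itself. */
    static bool should_deliver(bool msr_ee, bool resume_as_sreset)
    {
        return msr_ee || resume_as_sreset;
    }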

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Benjamin Herrenschmidt 154c69f2b8 target/ppc: Fix nip on power management instructions
Those instructions currently raise an exception from within
the helper. This tends to result in a bogus nip value in
the env context (typically the beginning of the TB). Such
a helper needs a gen_update_nip() first.

This fixes it with a different approach, which is to throw the
exception from translate.c instead of the helper, using
gen_exception_nip(), which does the right thing. Exception
EXCP_HLT is also used instead of POWERPC_EXCP_STOP to effectively
exit from the CPU execution loop.
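
A rough, self-contained illustration of the translate-time approach; the
context type, helper and exception value below are stand-ins for the real
translator pieces:

    #include <stdint.h>

    /* Stand-ins for the real translator context and helpers. */
    typedef struct { uint64_t pc_next; } DisasCtx;
    enum { EXCP_HLT_SKETCH = 0x10001 };              /* placeholder value */

    static void gen_exception_nip(DisasCtx *ctx, int excp, uint64_t nip)
    {
        (void)ctx; (void)excp; (void)nip;            /* would emit TCG ops here */
    }

    /* Raise the exception from the translator so the precise nip of the PM
     * instruction is recorded, and use an HLT-style exception to leave the
     * CPU execution loop. */
    static void translate_pm_insn(DisasCtx *ctx)
    {
        gen_exception_nip(ctx, EXCP_HLT_SKETCH, ctx->pc_next);
    }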

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: modified the commit log to explain the use of EXCP_HLT instead
       of POWERPC_EXCP_STOP]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215161648.9600-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26 09:21:24 +11:00
Peter Maydell 98e139bcec MIPS queue for February 21st, 2019, v2
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJcbu/QAAoJENSXKoln91plb0oH/RczDVACfmhnERAru8NhW19/
 6YB5w1FjbH+CkNB4ZBdF5sQNRyAnuHxL6xMKT3LvZUCEy0ADk+D5KJxzg340JABB
 eGc2FxKYe1vbhCAsYhQMOZyGhiye6UZnRjTXirYqMCm74zuFVI954X0V1ytfHARI
 0AIsWcOOVLnJj+itU0Uj+i+dBFFec0TbHWodvB8rt+TVcg5SFsdiwbT7jLxUSCAA
 VwhjmDUlE2+545LgbIrRbhMfnsEDkMgN2C1YGqkdBSM03dYnW0scxudGbxN0QPrV
 l0KAVTvUXcdUj0i1B3E91QiF0s6KU34TpE1vwZFUBdyuqHpIPgNhJkK6Tmt/DqM=
 =PVKW
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-feb-21-2019-v2' into staging

MIPS queue for February 21st, 2019, v2

# gpg: Signature made Thu 21 Feb 2019 18:37:04 GMT
# gpg:                using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01  DD75 D497 2A89 67F7 5A65

* remotes/amarkovic/tags/mips-queue-feb-21-2019-v2:
  target/mips: fulong2e: Dynamically generate SPD EEPROM data
  target/mips: fulong2e: Fix bios flash size
  hw/pci-host/bonito.c: Add PCI mem region mapped at the correct address
  target/mips: implement QMP query-cpu-definitions command
  tests/tcg: target/mips: Add wrappers for MSA integer compare instructions
  tests/tcg: target/mips: Change directory name 'bit-counting' to 'bit-count'
  tests/tcg: target/mips: Correct path to headers in some test source files
  hw/misc: mips_itu: Fix 32/64 bit issue in a line involving shift operator

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-22 11:26:17 +00:00
Pavel Dovgalyuk 535db74413 target/mips: implement QMP query-cpu-definitions command
This patch enables QMP-based querying of the available CPU types for
MIPS and MIPS64 platforms.
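
The per-target hook essentially walks the CPU model table and returns the
names. A simplified stand-alone sketch of that idea, not using QEMU's real
data types (the listed model names are just illustrative entries):

    #include <stdio.h>

    /* Stand-in for the MIPS CPU definition table. */
    static const char *const mips_cpu_names[] = { "24Kc", "74Kf", "I6400" };

    /* query-cpu-definitions boils down to listing every entry of that table. */
    static void list_cpu_definitions(void)
    {
        for (size_t i = 0; i < sizeof(mips_cpu_names) / sizeof(mips_cpu_names[0]); i++) {
            printf("{ \"name\": \"%s\" }\n", mips_cpu_names[i]);
        }
    }

On the wire this is reached through the existing QMP command
{ "execute": "query-cpu-definitions" }.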

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2019-02-21 19:36:47 +01:00
Richard Henderson 6c1f6f2733 target/arm: Implement ARMv8.3-JSConv
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed a couple of comment typos]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21 18:17:46 +00:00
Richard Henderson e80941bd64 target/arm: Rearrange Floating-point data-processing (2 regs)
There are lots of special cases within these insns.  Split the
major argument decode/loading/saving into no_output (compares),
rd_is_dp, and rm_is_dp.

We still need to special case argument load for compare (rd as
input, rm as zero) and vcvt fixed (rd as input+output), but lots
of special cases do disappear.

Now that we have a full switch at the beginning, hoist the ISA
checks from the code generation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21 18:17:45 +00:00
Richard Henderson 37356079fc target/arm: Split out vfp_helper.c
Move all of the fp helpers out of helper.c into a new file.
This is code movement only.  Since helper.c has no copyright
header, take the one from cpu.h for the new file.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21 18:17:45 +00:00
Richard Henderson 3c3ff68492 target/arm: Restructure disas_fp_int_conv
For opcodes 0-5, move some if conditions into the structure
of a switch statement.  For opcodes 6 & 7, decode everything
at once with a second switch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21 18:17:45 +00:00
Aaron Lindsay OS 67da43d668 target/arm: Stop unintentional sign extension in pmu_init
This was introduced by
    commit bf8d09694c
    target/arm: Don't clear supported PMU events when initializing PMCEID1
and identified by Coverity (CID 1398645).
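
The usual shape of this class of fix is to shift an unsigned 64-bit value
instead of a plain int; an illustrative example of the general pattern,
not the actual patch:

    #include <stdint.h>

    static uint64_t event_bit(int number)
    {
        /* Shifting a plain "1" produces a signed 32-bit int, so bit 31
         * sign-extends when widened to 64 bits; shifting an unsigned
         * 64-bit one avoids that. */
        return UINT64_C(1) << number;
    }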

Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190219144621.450-1-aaron@os.amperecomputing.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21 18:17:45 +00:00
Peter Maydell cff21316c6 target/arm: v8M MPU should use background region as default, not always
The "background region" for a v8M MPU is a default which will be used
(if enabled, and if the access is privileged) if the access does
not match any specific MPU region. We were incorrectly using it
always (by putting the condition at the wrong nesting level). This
meant that we would always return the default background permissions
rather than the correct permissions for a specific region, and also
that we would not return the right information in response to a
TT instruction.

Move the check for the background region to the same place in the
logic as the equivalent v8M MPUCheck() pseudocode puts it.
This in turn means we must adjust the condition we use to detect
matches in multiple regions to avoid false-positives.
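
In pseudocode terms the corrected ordering is: try every specific region
first, and only fall back to the background defaults when nothing hit. An
illustrative sketch following the MPUCheck() structure rather than QEMU's
actual function (types and fields invented for the example):

    #include <stdbool.h>

    struct region { bool enabled; bool matches; int perms; };

    /* A specific match always wins; the background defaults apply only when
     * no enabled region matched, the background region is enabled, and the
     * access is privileged. */
    static int mpu_lookup(const struct region *regions, int n,
                          bool bg_enabled, bool privileged, int bg_perms)
    {
        for (int i = 0; i < n; i++) {
            if (regions[i].enabled && regions[i].matches) {
                return regions[i].perms;
            }
        }
        if (bg_enabled && privileged) {
            return bg_perms;
        }
        return 0;   /* no access */
    }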

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190214113408.10214-1-peter.maydell@linaro.org
2019-02-21 18:17:45 +00:00
Max Filippov fa6bc73c8b target/xtensa: implement wide branches and loops
FLIX adds branch and loop instruction variants with 15- and 18-bit wide
target offsets. Implement them as additional names for the ordinary
branch/loop opcodes.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-02-18 21:29:09 -08:00