To make the code slightly cleaner to look at and make the save/restore
code easier to understand, introduce this function to set the priority of
interrupts.
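The shape of the helper is roughly as follows (a sketch based on the GIC model's
priority1/priority2 state fields; treat it as illustrative rather than the exact patch):
    static void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val)
    {
        if (irq < GIC_INTERNAL) {
            /* banked per-CPU priority for SGIs and PPIs */
            s->priority1[irq][cpu] = val;
        } else {
            /* shared priority for SPIs */
            s->priority2[irq - GIC_INTERNAL] = val;
        }
    }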
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1387606179-22709-3-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
TRIGGER can really mean anything (e.g. was it triggered, is it
level-triggered, is it edge-triggered, etc.). Rename to EDGE_TRIGGER to
make the code comprehensible without looking up the data structure.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1387606179-22709-2-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
commit 5ce4f35781
"target-arm: A64: add set_pc cpu method"
introduces an array aarch64_cpus which is zero-sized
if this code is built without CONFIG_USER_ONLY.
In particular an attempt to iterate over this array produces a warning
under gcc 4.8.2:
CC aarch64-softmmu/target-arm/cpu64.o
/scm/qemu/target-arm/cpu64.c: In function ‘aarch64_cpu_register_types’:
/scm/qemu/target-arm/cpu64.c:124:5: error: comparison of unsigned
expression < 0 is always false [-Werror=type-limits]
for (i = 0; i < ARRAY_SIZE(aarch64_cpus); i++) {
^
cc1: all warnings being treated as errors
This is the result of ARRAY_SIZE evaluating to an unsigned type,
causing "i" to be promoted to unsigned as well.
As zero-size arrays are a gcc extension, it seems
cleanest to add a dummy element with NULL name,
and test for it during registration.
We'll be able to drop this when we add more CPUs.
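The resulting pattern looks roughly like this (a sketch; the initfn name is
illustrative):
    static const ARMCPUInfo aarch64_cpus[] = {
    #ifdef CONFIG_USER_ONLY
        { .name = "any", .initfn = aarch64_any_initfn },
    #endif
        { .name = NULL }  /* dummy terminator; keeps the array non-empty */
    };

    static void aarch64_cpu_register_types(void)
    {
        const ARMCPUInfo *info = aarch64_cpus;

        /* walk until the NULL-name sentinel instead of using ARRAY_SIZE() */
        while (info->name) {
            aarch64_cpu_register(info);
            info++;
        }
    }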
Cc: Alexander Graf <agraf@suse.de>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20131223145216.GA22663@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Don't conditionalise GEM instantiation on networking attachments. The
device should always be present even if not attached to a network.
This allows expectant guests (such as OSes) to probe the device.
This is needed because sysbus (or AXI in Xilinx's real hardware case)
is not self-identifying, so the guest has no dynamic way of detecting
device absence.
It also allows testing of the GEM in loopback mode with -net none.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 55649779a68ee3ff54b24c339b6fdbdccd1f0ed7.1388800598.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is an inline duplication of the raw_read and raw_write function
bodies. Fix by just calling raw_read/raw_write instead.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: e69281b7e1462b346cb313cf0b89eedc0568125f.1388649290.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use c13_context field instead of c13_fcse for CONTEXTIDR register
definition.
Signed-off-by: Sergey Fedorov <s.fedorov@samsung.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1387521191-15350-1-git-send-email-s.fedorov@samsung.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If the UART back-end blocks, buffer the data in the Tx FIFO and try again
later. This stops the IO thread from busy-waiting on character back-ends
(which causes all sorts of performance problems).
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 4bea048b3ab38425701d82ccc1ab92545c26b79c.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The can_receive logic only took the RxFIFO occupancy into account.
However, RxFIFO population matters only in the echo and normal modes.
Improve the logic to correctly return the true number of receivable
characters based on the current mode (sketched below):
Normal mode: RxFIFO vacancy.
Remote loopback: TxFIFO vacancy.
Echo mode: the minimum of the TxFIFO and RxFIFO vacancies.
Local loopback: return non-zero (received characters are accepted and dropped).
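A sketch of the resulting logic (the state struct, register and FIFO-size names
are illustrative, not necessarily the exact ones in the model):
    static int uart_can_receive(void *opaque)
    {
        CadenceUARTState *s = opaque;
        int ret = MAX(RX_FIFO_SIZE, TX_FIFO_SIZE);
        uint32_t ch_mode = s->r[R_MR] & UART_MR_CHMODE;

        if (ch_mode == NORMAL_MODE || ch_mode == ECHO_MODE) {
            ret = MIN(ret, RX_FIFO_SIZE - s->rx_count);   /* RxFIFO vacancy */
        }
        if (ch_mode == REMOTE_LOOPBACK || ch_mode == ECHO_MODE) {
            ret = MIN(ret, TX_FIFO_SIZE - s->tx_count);   /* TxFIFO vacancy */
        }
        /* local loopback falls through with a non-zero value, so incoming
         * characters are accepted (and then dropped) */
        return ret;
    }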
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 36a58440c9ca5080151e95765c2c81342de8a8df.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This tx timer implementation is flawed. Although the controller attempts
to time the guest-visible assertion of the TX-empty status bit (and the
corresponding interrupt), it still transmits characters instantaneously.
There is also no notion of a multi-character delay.
The only side effect of this timer is assertion of tx-empty status. So
just remove the timer completely and hold tx-empty as permanently
asserted (its reset status). This matches the actual behaviour of
instantaneous transmission.
While we are bumping the VMSD version, add the tx_fifo as device state to
prepare for the upcoming TxFIFO flow control. Implement the interrupt
generation logic for the TxFIFO occupancy.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 7a208a7eb8d79d6429fe28b1396c3104371807b2.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some (interrupt) status register bits relating to the TxFIFO path were
not defined. Define them. This prepares support for proper Tx data path
flow control.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 2068b963f0af8cc834c353944e9fa816d950b163.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The status register bits are always pure functions of other device
state. Move the generation of these bits to the update_status()
function to simplify the code. This makes development much easier, as there
is now no need to recheck status bits on every change to rx/tx FIFO state.
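The centralised function ends up looking something like this (bit and field
names are illustrative):
    static void uart_update_status(CadenceUARTState *s)
    {
        s->r[R_SR] = 0;

        s->r[R_SR] |= s->rx_count == RX_FIFO_SIZE ? UART_SR_INTR_RFUL : 0;
        s->r[R_SR] |= !s->rx_count ? UART_SR_INTR_REMPTY : 0;
        s->r[R_SR] |= s->rx_count >= s->r[R_RTRIG] ? UART_SR_INTR_RTRIG : 0;

        s->r[R_SR] |= s->tx_count == TX_FIFO_SIZE ? UART_SR_INTR_TFUL : 0;
        s->r[R_SR] |= !s->tx_count ? UART_SR_INTR_TEMPTY : 0;

        uart_update_irq(s);
    }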
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 321994929f789096975104f99c55732774be4cae.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The interrupt state should be rechecked on bus write accesses, as such
accesses may change the underlying state that generates the interrupt.
This is particularly relevant when the guest touches the interrupt status
or mask.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1c250cd61b7b8de492fbc8b79b8370958a56d83b.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When setting rounding modes we currently just hardcode the numeric values
for the rounding modes in a big switch statement.
With AArch64 support coming, we will need to refer to these rounding modes
in different places throughout the code, so give them names to avoid
accidental confusion.
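The named constants end up along these lines (following the ARM ARM rounding
mode names):
    enum arm_fprounding {
        FPROUNDING_TIEEVEN,   /* RN: round to nearest, ties to even */
        FPROUNDING_POSINF,    /* RP: round towards plus infinity */
        FPROUNDING_NEGINF,    /* RM: round towards minus infinity */
        FPROUNDING_ZERO,      /* RZ: round towards zero */
        FPROUNDING_TIEAWAY,   /* round to nearest, ties away from zero */
        FPROUNDING_ODD        /* round to odd (von Neumann rounding) */
    };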
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, use names from ARM ARM.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This adds decoding support for C3.6.24 FP conditional select.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This adds decoding support for C3.6.23 FP Conditional Compare.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add decoding support for C3.6.22 Floating-point compare.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the fmov instruction working on scalars
with an immediate payload.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebase and use new infrastructure.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the "Floating-point data-processing (3 source)"
group of instructions.
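Conceptually, the four fused operations map onto the softfloat muladd primitive
as in the sketch below; this shows the mapping only (the wrapper function and
opcode numbering are illustrative), not the structure of the patch itself, which
works at the TCG level:
    /* float64_muladd(a, b, c, flags, fpst) computes (a * b) + c */
    static float64 fp_3src_double(int op, float64 n, float64 m, float64 a,
                                  float_status *fpst)
    {
        int flags = 0;

        switch (op) {
        case 0:                                  /* FMADD:   a + n * m */
            break;
        case 1:                                  /* FMSUB:   a - n * m */
            flags = float_muladd_negate_product;
            break;
        case 2:                                  /* FNMADD: -a - n * m */
            flags = float_muladd_negate_product | float_muladd_negate_c;
            break;
        case 3:                                  /* FNMSUB: -a + n * m */
            flags = float_muladd_negate_c;
            break;
        }
        return float64_muladd(n, m, a, flags, fpst);
    }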
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merged single and double precision patches.
Implement using muladd as suggested by Richard Henderson.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: pull field decode up a level, use register accessors]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the "Floating-point data-processing (2 source)"
group of instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merge single and double precision patches. Rebase
and update to new infrastructure. Incorporate FMIN/FMAX support patch by
Michael Matz.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM:
* added convenience accessors for FP s and d regs
* pulled the field decode and opcode validity check up a level]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Use the VFP_BINOP macro to provide helpers for min, max, minnum
and maxnum, rather than hand-rolling them. (The float64 max
version is not used by A32 but will be needed for A64.)
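The macro expands to a float32 and a float64 helper per operation, roughly:
    #define VFP_BINOP(name)                                              \
    float32 VFP_HELPER(name, s)(float32 a, float32 b, void *fpstp)       \
    {                                                                    \
        float_status *fpst = fpstp;                                      \
        return float32_ ## name(a, b, fpst);                             \
    }                                                                    \
    float64 VFP_HELPER(name, d)(float64 a, float64 b, void *fpstp)       \
    {                                                                    \
        float_status *fpst = fpstp;                                      \
        return float64_ ## name(a, b, fpst);                             \
    }
    VFP_BINOP(min)
    VFP_BINOP(max)
    VFP_BINOP(minnum)
    VFP_BINOP(maxnum)
    #undef VFP_BINOP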
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The A64 128 bit vector registers are stored as a pair of
uint64_t values in the register array. This means that if
we're directly loading or storing a value of size less than
64 bits we must adjust the offset appropriately to account
for whether the host is bigendian or not. Provide utility
functions to abstract away the offsetof() calculations for
the FP registers.
For do_fp_st() we can sidestep most of the issues for 64 bit
and smaller reg-to-mem transfers by always doing a 64 bit
load from the register and writing just the piece we need
to memory.
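A sketch of the offset helpers (the exact names and argument types may differ):
    /* Offset of the low 64 bits (or less) of FP register Vn, adjusted so
     * that a sub-64-bit access picks up the least significant part on
     * both little- and big-endian hosts. */
    static inline int fp_reg_offset(int regno, TCGMemOp size)
    {
        int offs = offsetof(CPUARMState, vfp.regs[regno * 2]);
    #ifdef HOST_WORDS_BIGENDIAN
        offs += 8 - (1 << size);
    #endif
        return offs;
    }

    /* Offset of the high 64 bits of the 128-bit vector register Vn */
    static inline int fp_reg_hi_offset(int regno)
    {
        return offsetof(CPUARMState, vfp.regs[regno * 2 + 1]);
    }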
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
When dumping the current CPU state, we can also get a request
to dump the FPU state along with the CPU's integer state.
Add support to dump the VFP state when that flag is set, so that
we can properly debug code that modifies floating point registers.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebased. Output all registers, two per-line.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add a config for aarch64-linux-user, thereby enabling it as
a valid target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Now the AArch64 targets are in mainline we can include them in our
Travis test matrix.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the helpers provided for getting the correct FPSR and FPCR
values for the signal context.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The AArch64 linux-user support was written before but merged after
commit 4ce6243dc6 which cleaned up the handling of the clone()
syscall argument order, so we failed to notice that AArch64 also needs
TARGET_CLONE_BACKWARDS to be defined. Add this define so that clone
and fork syscalls work correctly.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This implements exclusive loads/stores for AArch64 along the lines of the
arm32 and ppc implementations. The exclusive load remembers the address
and loaded value. The exclusive store raises an exception which uses
those values to check for equality in a proper exclusive region.
This is not actually the architecture mandated semantics (for either
AArch32 or AArch64) but it is close enough for typical guest code
sequences to work correctly, and saves us from having to monitor all
guest stores. It's fairly easy to come up with test cases where we
don't behave like hardware - we don't for example model cache line
behaviour. However in the common patterns this works, and the existing
32 bit ARM exclusive access implementation has the same limitations.
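Conceptually, the store-exclusive check works roughly as in this sketch
(linux-user flavour; the function and helper names are illustrative):
    static int do_store_exclusive(CPUARMState *env, uint64_t addr,
                                  uint64_t newval)
    {
        uint64_t current;

        if (addr != env->exclusive_addr) {
            return 1;                 /* fail: not the monitored address */
        }
        if (get_user_u64(current, addr)) {
            return 1;                 /* treat a faulting access as failure */
        }
        if (current != env->exclusive_val) {
            return 1;                 /* fail: memory changed underneath us */
        }
        if (put_user_u64(newval, addr)) {
            return 1;
        }
        env->exclusive_addr = -1;     /* clear the monitor */
        return 0;                     /* success */
    }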
AArch64 also implements new acquire/release loads/stores (which may be
either exclusive or non-exclusive). These impose extra ordering
constraints on memory operations (i.e. they act as if they have an implicit
barrier built into them). As TCG is single-threaded all our barriers
are no-ops, so these just behave like normal loads and stores.
Signed-off-by: Michael Matz <matz@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
In preparation for adding support for A64 load/store exclusive instructions,
widen the fields in the CPU state struct that deal with address and data values
for exclusives from 32 to 64 bits. Although in practice AArch64 and AArch32
exclusive accesses will be generally separate, there are some odd theoretical
corner cases (e.g. you should be able to do the exclusive load in AArch32, take
an exception to AArch64 and successfully do the store exclusive there), and it's
also easier to reason about.
The changes in semantics for the variables are:
exclusive_addr -> extended to 64 bits; -1ULL for "monitor lost",
otherwise always < 2^32 for AArch32
exclusive_val -> extended to 64 bits. 64 bit exclusives in AArch32 now
use the high half of exclusive_val instead of a separate exclusive_high
exclusive_high -> is no longer used in AArch32; extended to 64 bits as
it will be needed for AArch64's pair-of-64-bit-values exclusives.
exclusive_test -> extended to 64 bits, as it is an address. Since this is
a linux-user-only field, in arm-linux-user it will always have the top
32 bits zero.
exclusive_info -> stays 32 bits, as it is neither data nor address, but
simply holds register indexes etc. AArch64 will be able to fit all its
information into 32 bits as well.
Note that the refactoring of gen_store_exclusive() coincidentally fixes
a minor bug where ldrexd would incorrectly update the first CPU register
even if the load for the second register faulted.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Adds support for Load Register (literal), both normal
and SIMD/FP forms.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds support for C3.5.4 - C3.5.5
Conditional compare (both immediate and register).
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The common pattern for system registers in a 64-bit capable ARM
CPU is that when in AArch32 the cp15 register is a view of the
bottom 32 bits of the 64-bit AArch64 system register; writes in
AArch32 leave the top half unchanged. The most natural way to
model this is to have the state field in the CPU struct be a
64 bit value, and simply have the AArch32 TCG code operate on
a pointer to its lower half.
For aarch64-linux-user the only registers we need to share like
this are the thread-local-storage ones. Widen their fields to
64 bits and provide the 64 bit reginfo struct to make them
visible in AArch64 state. Note that minor cleanup of the AArch64
system register encoding space means we can share the TPIDR_EL1
reginfo but need split encodings for TPIDR_EL0 and TPIDRRO_EL0.
Since we're touching almost every line in QEMU that uses the
c13_tls* fields in this patch anyway, we take the opportunity
to rename them in line with the standard ARM architectural names
for these registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement an initial minimal set of EL0-visible system registers; a sketch
of one such register definition follows the list:
* NZCV
* FPCR
* FPSR
* CTR_EL0
* DCZID_EL0
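For example, the NZCV entry is shaped roughly like this (encodings per the
ARM ARM; the reginfo field names follow the cpregs API, shown as an
illustration rather than the exact patch):
    { .name = "NZCV", .state = ARM_CP_STATE_AA64,
      .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 0,
      .access = PL0_RW, .type = ARM_CP_NZCV },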
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The AArch64 equivalent of the traditional AArch32
cp15 coprocessor registers is the set of instructions
MRS/MSR/SYS/SYSL, which cover between them both true
system registers and the "operations with side effects"
such as cache maintenance which in AArch32 are mixed
in with other cp15 registers. Implement these instructions
to look in the cpregs hashtable for the register or
operation.
Since we don't yet populate the cpregs hashtable with
any registers with the "AA64" bit set, everything will
still UNDEF at this point.
MSR/MRS is the first user of is_jmp = DISAS_UPDATE, so
fix an infelicity in its handling where the main loop
was requiring the caller to do the update of PC rather
than just doing it itself.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The cpregs APIs used by the decoder (get_arm_cp_reginfo() and
cp_access_ok()) currently take either a CPUARMState* or an ARMCPU*.
This is problematic for the A64 decoder, which doesn't pass the
environment pointer around everywhere the way the 32 bit decoder
does. Adjust the parameters these functions take so that we can
copy only the relevant info from the CPUARMState into the
DisasContext and then use that.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
In preference to the older helpers. Stores only in this patch.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In preference to the older helpers. Loads only in this patch.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Now that we don't combine mem_index with operand size info,
we don't need to encode it. This tidies many places that
access it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than add s->mem_index into a combined size+mem_index
argument, pass the context down. This will allow cleaning
up s->mem_index later.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The previous placement could result in duplicate logging while
still processing interrupts.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
If a user or QMP client enters bad syntax for the migrate
command in QMP/HMP, then the migrate command will never succeed
from that point on.
For example, if you enter:
(qemu) migrate tcp;0:4444
migrate: Parameter 'uri' expects a valid migration protocol
Then the migrate command will always fail from now on:
(qemu) migrate tcp:0:4444
migrate: There's a migration process in progress
The problem is that qmp_migrate() sets the migration status to
MIG_STATE_SETUP and doesn't reset it on syntax error. This bug
was introduced by commit 29ae8a4133.
Reviewed-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
This is no longer needed, and is obsoleted by error_abort. Remove.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
This is a boilerplate _nofail variant of qemu_opts_create. Remove it and
use error_abort at call sites.
NULL/0 arguments need to be added for the id and fail_if_exists parameters
at affected call sites due to the argument inconsistency between the normal
and _nofail variants, as illustrated below.
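The call-site conversion is shaped like this (the opts list name is
illustrative):
    /* before */
    opts = qemu_opts_create_nofail(&some_opts_list);

    /* after: explicit NULL id, 0 for fail_if_exists, abort on error */
    opts = qemu_opts_create(&some_opts_list, NULL, 0, &error_abort);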
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Replace an assert_no_error() usage with the error_abort system.
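The conversion has this general shape (the call site shown is hypothetical):
    /* before */
    Error *local_err = NULL;
    object_property_set_bool(obj, true, "realized", &local_err);
    assert_no_error(local_err);

    /* after */
    object_property_set_bool(obj, true, "realized", &error_abort);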
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>