Add a return value to the event handler. Some I2C devices will
NAK if they have no data, so allow them to do this. This required
the following changes:
- Go through all the event handlers and change them to return int
  and return 0.
- Modify i2c_start_transfer to terminate the transaction on a NAK.
- Modify smbus handling to not assert if a NAK occurs on a second
  operation, and terminate the transaction and return -1 instead.
- Add some information on semantics to I2CSlaveClass.
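As a rough illustration of the new contract (types and names simplified,
not the actual QEMU I2C API), a slave event handler that NAKs when it has
nothing to send might look like this:

    /* Hypothetical slave: NAK a read request when no data is queued,
     * ACK (return 0) otherwise so the transfer continues. */
    typedef struct {
        int pending_bytes;          /* data waiting for the master */
    } MySlaveState;

    int my_slave_event(MySlaveState *s, int is_recv_start)
    {
        if (is_recv_start && s->pending_bytes == 0) {
            return 1;               /* non-zero: NAK, the master terminates
                                       the transfer */
        }
        return 0;                   /* ACK */
    }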
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/gonglei/tags/cryptodev-next-20161224' into staging
cryptodev patches
- add xts mode support
- add 3DES algorithm support
- other trivial fixes
# gpg: Signature made Sat 24 Dec 2016 05:56:44 GMT
# gpg: using RSA key 0x2ED7FDE9063C864D
# gpg: Good signature from "Gonglei <arei.gonglei@huawei.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3EF1 8E53 3459 E6D1 963A 3C05 2ED7 FDE9 063C 864D
* remotes/gonglei/tags/cryptodev-next-20161224:
cryptodev: add 3des-ede support
cryptodev: remove single-DES support in cryptodev
cryptodev: add xts(aes) support
cryptodev: fix the check of aes algorithm
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging
# gpg: Signature made Fri 06 Jan 2017 02:55:49 GMT
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
fsl_etsec: Fix Tx BD ring wrapping handling
rtl8139: correctly handle PHY reset
record/replay: add network support
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Current code that handles Tx buffer descriptor ring scanning employs the
following algorithm:
1. Restore current buffer descriptor pointer from TBPTRn
2. Process current descriptor
3. If current descriptor has BD_WRAP flag set, set current
descriptor pointer to start of the descriptor ring
4. If current descriptor points to start of the ring, exit the
loop, otherwise increment current descriptor pointer and go
to #2
5. Store current descriptor in TBPTRn
The way the code is implemented results in the buffer descriptor ring being
scanned starting at offset/descriptor #0. While covering 99% of the
cases, this algorithm becomes problematic for a number of edge cases.
Consider the following scenario: the guest OS driver initializes the
descriptor ring to N individual descriptors and starts sending data out.
Depending on the volume of traffic and probably the guest OS driver
implementation, it is possible to hit an edge case where a packet, spread
across 2 descriptors, is placed in descriptors N - 1 and 0, in that order
(it is easy to imagine similar examples involving more than 2 descriptors).
What happens then is that the aforementioned algorithm starts at descriptor 0,
sees a descriptor marked as BD_LAST, which it happily sends out as a
separate packet (very much malformed at this point), then the iteration
continues and the first part of the original packet is tacked onto the
next transmission, which ends up being bogus as well.
This behaviour can be pretty reliably observed when scp'ing data from a
guest OS via a TAP interface for files larger than 160K (every time for
700K+).
This patch changes the scanning algorithm to do the following:
1. Restore "current" buffer descriptor pointer from
TBPTRn
2. If "current" descriptor does not have BD_TX_READY set, goto #6
3. Process current descriptor
4. If "current" descriptor has BD_WRAP flag set, set "current"
descriptor pointer to start of the descriptor ring, otherwise
increment "current" by the size of one descriptor
5. Goto #2
6. Save "current" buffer descriptor in TBPTRn
This way we preserve the information about which descriptor was
processed last and always start where we left off avoiding the original
problem. On top of that, judging by the following excerpt from
MPC8548ERM (p. 14-48):
"... When the end of the TxBD ring is reached, eTSEC initializes TBPTRn
to the value in the corresponding TBASEn. The TBPTR register is
internally written by the eTSEC’s DMA controller during
transmission. The pointer increments by eight (bytes) each time a
descriptor is closed successfully by the eTSEC..."
the revised algorithm might also be a more correct way of emulating this
aspect of the eTSEC peripheral.
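A self-contained sketch of the revised scan (field names and flag values
are illustrative, not the actual eTSEC device model):

    #include <stdint.h>
    #include <stdio.h>

    #define BD_TX_READY  0x8000
    #define BD_WRAP      0x2000

    typedef struct { uint16_t flags; } TxBD;

    /* Walk the ring starting at *cur, stop at the first descriptor that
     * is not ready, and leave *cur pointing at it (mirrors TBPTRn). */
    void etsec_walk_tx_ring(TxBD *ring, unsigned *cur)
    {
        while (ring[*cur].flags & BD_TX_READY) {
            ring[*cur].flags &= ~BD_TX_READY;   /* "transmit" the BD */
            if (ring[*cur].flags & BD_WRAP) {
                *cur = 0;                       /* wrap to ring start */
            } else {
                (*cur)++;                       /* next descriptor */
            }
        }
    }

    int main(void)
    {
        /* 4-entry ring; a packet occupies descriptors 3 and 0, and the
         * previous scan left TBPTRn at descriptor 3. */
        TxBD ring[4] = {
            { BD_TX_READY },             /* tail of the packet */
            { 0 },
            { 0 },
            { BD_TX_READY | BD_WRAP },   /* head of the packet, last BD */
        };
        unsigned cur = 3;

        etsec_walk_tx_ring(ring, &cur);
        printf("scan stopped at descriptor %u\n", cur);   /* prints 1 */
        return 0;
    }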
Cc: Alexander Graf <agraf@suse.de>
Cc: Scott Wood <scottwood@freescale.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: qemu-devel@nongnu.org
Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
According to datasheet:
"[Bit 15 of Basic Mode Control Register] sets the status and control registers
of the PHY (register 0062-0074) in a default state. This bit is self-clearing.
1 = software reset; 0 = normal operation."
This fixes the network card detection failure in Minoca OS.
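A minimal sketch of the intended behaviour (illustrative only, not the
rtl8139 device code):

    #include <stdint.h>

    #define BMCR_RESET 0x8000u          /* bit 15: software reset */

    typedef struct {
        uint16_t bmcr;
        /* ... other PHY status/control registers ... */
    } PhyState;

    void phy_reset(PhyState *phy)
    {
        phy->bmcr = 0;                  /* restore defaults (value illustrative) */
    }

    void phy_write_bmcr(PhyState *phy, uint16_t val)
    {
        if (val & BMCR_RESET) {
            phy_reset(phy);             /* self-clearing: bit 15 reads back 0 */
            return;
        }
        phy->bmcr = val;
    }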
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This patch adds support for recording and replaying network packets in
icount rr mode.
Record and replay for network interactions is performed with the network filter.
Each backend must have its own instance of the replay filter as follows:
-netdev user,id=net1 -device rtl8139,netdev=net1
-object filter-replay,id=replay,netdev=net1
The replay network filter is used to record and replay network packets. While
recording the virtual machine, this filter puts all packets coming from
the outside world into the log. In replay mode, packets from the log are
injected into the network device. All interactions with the network backend
in replay mode are disabled.
v5 changes:
- using iov_to_buf function instead of loop
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Merge remote-tracking branch 'remotes/gkurz/tags/for-upstream' into staging
- transport specific callbacks (for Xen)
- fix crash (2.8 regression)
- 9p functional tests
# gpg: Signature made Tue 03 Jan 2017 17:30:58 GMT
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@fr.ibm.com>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "Gregory Kurz (Cimai Technology) <gkurz@cimai.com>"
# gpg: aka "Gregory Kurz (Meiosys Technology) <gkurz@meiosys.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
tests: virtio-9p: ".." cannot be used to walk out of the shared directory
tests: virtio-9p: no slash in path elements during walk
tests: virtio-9p: add walk operation test
tests: virtio-9p: add attach operation test
tests: virtio-9p: add version operation test
9pfs: fix P9_NOTAG and P9_NOFID macros
tests: virtio-9p: code refactoring
tests: virtio-9p: rename PCI configuration test
9pfs: fix crash when fsdev is missing
9pfs: introduce init_out/in_iov_from_pdu
9pfs: call v9fs_init_qiov_from_pdu before v9fs_pack
9pfs: introduce transport specific callbacks
9pfs: move pdus to V9fsState
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These parameters control the poll time self-tuning algorithm. They are
optional and will default to sane values if omitted.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-14-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This patch is based on the algorithm for the kvm.ko halt_poll_ns
parameter in Linux. The initial polling time is zero.
If the event loop is woken up within the maximum polling time it means
polling could be effective, so grow polling time.
If the event loop is woken up beyond the maximum polling time it means
polling is not effective, so shrink polling time.
If the event loop makes progress within the current polling time then
the sweet spot has been reached.
This algorithm adjusts the polling time so it can adapt to variations in
workloads. The goal is to reach the sweet spot while also recognizing
when polling would hurt more than help.
Two new trace events, poll_grow and poll_shrink, are added for observing
polling time adjustment.
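A sketch of the grow/shrink decision (parameter names and the initial grow
value are illustrative, not the exact QEMU implementation):

    #include <stdint.h>

    typedef struct {
        int64_t poll_ns;         /* current polling time */
        int64_t poll_max_ns;     /* upper bound (poll-max-ns) */
        int64_t poll_grow;       /* multiplier when growing */
        int64_t poll_shrink;     /* divisor when shrinking, 0 = reset */
    } PollState;

    void adjust_poll_time(PollState *s, int64_t blocked_ns)
    {
        if (blocked_ns <= s->poll_max_ns) {
            /* woken within the window: polling could have helped, grow */
            s->poll_ns = s->poll_ns ? s->poll_ns * s->poll_grow
                                    : 4000 /* arbitrary starting value */;
            if (s->poll_ns > s->poll_max_ns) {
                s->poll_ns = s->poll_max_ns;
            }
        } else {
            /* woken well beyond the window: polling hurts, shrink/reset */
            s->poll_ns = s->poll_shrink ? s->poll_ns / s->poll_shrink : 0;
        }
    }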
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-13-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This is a performance optimization to eliminate vmexits during polling.
It also avoids spurious ioeventfd processing after polling ends.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-12-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The begin and end callbacks can be used to prepare for the polling loop
and clean up when polling stops. Note that they may only be called once
for multiple aio_poll() calls if polling continues to succeed. Once
polling fails the end callback is invoked before aio_poll() resumes file
descriptor monitoring.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-11-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Polling should disable virtqueue notifications but that requires nested
virtio_queue_set_notification() calls. Turn vq->notification into a
counter so it is possible to do nesting.
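Roughly the idea, in sketch form (not the actual virtio code):

    /* A counter instead of a bool so disable/enable calls can nest. */
    typedef struct {
        unsigned notification_disabled;   /* 0 means notifications on */
    } VQ;

    void vq_set_notification(VQ *vq, int enable)
    {
        if (enable) {
            if (vq->notification_disabled) {
                vq->notification_disabled--;
            }
        } else {
            vq->notification_disabled++;
        }
        /* notifications are actually enabled only once the counter
         * has dropped back to zero */
    }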
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-10-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The guest does not need to kick the virtqueue while we are processing
it. This reduces the number of vmexits during periods of heavy I/O.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-9-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The guest does not need to kick the virtqueue while we are processing
it. This reduces the number of vmexits during periods of heavy I/O.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-8-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Poll mode can be configured with -object iothread,poll-max-ns=NUM.
Polling is disabled with a value of 0 nanoseconds.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The Linux AIO userspace ABI includes a ring that is shared with the
kernel. This allows userspace programs to process completions without
system calls.
Add an AioContext poll handler to check for completions in the ring.
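The check itself is cheap; in sketch form (struct fields follow the
aio_ring ABI only loosely, for illustration):

    #include <stdbool.h>

    struct aio_ring_hdr {
        unsigned id;
        unsigned nr;        /* number of io_event slots */
        unsigned head;      /* consumer index, advanced by userspace */
        unsigned tail;      /* producer index, advanced by the kernel */
        /* ... io_event array follows in the shared mapping ... */
    };

    /* An io_poll handler: returns true when completions are pending,
     * letting the event loop dispatch without a syscall. */
    bool laio_ring_has_completions(const struct aio_ring_hdr *ring)
    {
        return ring->head != ring->tail;
    }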
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add an AioContext poll handler to detect new virtqueue buffers without
waiting for a guest->host notification.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The AioContext event loop uses ppoll(2) or epoll_wait(2) to monitor file
descriptors or until a timer expires. In cases like virtqueues, Linux
AIO, and ThreadPool it is technically possible to wait for events via
polling (i.e. continuously checking for events without blocking).
Polling can be faster than blocking syscalls because file descriptors,
the process scheduler, and system calls are bypassed.
The main disadvantage to polling is that it increases CPU utilization.
In a classic polling configuration, a full host CPU thread might run at
100% to respond to events as quickly as possible. This patch implements
a timeout so we fall back to blocking syscalls if polling detects no
activity. After the timeout no CPU cycles are wasted on polling until
the next event loop iteration.
The run_poll_handlers_begin() and run_poll_handlers_end() trace events
are added to aid performance analysis and troubleshooting. If you need
to know whether polling mode is being used, trace these events to find
out.
Note that the AioContext is now re-acquired before disabling notify_me
in the non-polling case. This makes the code cleaner since notify_me
was enabled outside the non-polling AioContext release region. This
change is correct since it's safe to keep notify_me enabled longer
(disabling is an optimization) but potentially causes unnecessary
event_notifier_set() calls. I think the chance of a performance regression
is small here.
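In outline the fallback works like this (self-contained sketch, not the
real aio_poll()):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    static int64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    /* Stand-in for running the registered io_poll() handlers once. */
    static bool poll_handlers_have_work(void)
    {
        return false;               /* nothing ready in this toy example */
    }

    /* Poll for at most poll_max_ns; on false the caller falls back to
     * the blocking ppoll()/epoll_wait() path. */
    bool try_poll(int64_t poll_max_ns)
    {
        int64_t deadline = now_ns() + poll_max_ns;

        while (now_ns() < deadline) {
            if (poll_handlers_have_work()) {
                return true;        /* progress made without blocking */
            }
        }
        return false;               /* timed out: no more CPU wasted until
                                       the next event loop iteration */
    }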
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The new AioPollFn io_poll() argument to aio_set_fd_handler() and
aio_set_event_notifier() is used in the next patch.
Keep this code change separate due to the number of files it touches.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Polling mode will not call ppoll(2)/epoll_wait(2). Therefore we know
there are no fds ready and should avoid looping over fd handlers in
aio_dispatch().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
It was not obvious to me why "qemu/osdep.h" must be the first #include.
This documents the rationale and the overall #include order.
Cc: Fam Zheng <famz@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1479307161-24658-1-git-send-email-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
According to the 9P spec at http://man.cat-v.org/plan_9/5/intro, the
parent directory of the root directory of a server's tree is itself.
This test hence checks that the qid of the root directory as returned by
attach is the same as the qid of ".." when walking from the root directory.
Signed-off-by: Greg Kurz <groug@kaod.org>
The walk operation is used to traverse the directory tree and to associate
paths to fids. A single walk can be used to traverse up to P9_MAXWELEM path
elements at the same time.
The test creates a path with P9_MAXWELEM elements on the backend (à la
'mkdir -p') and issues a walk operation. The walk is expected to succeed
without error.
Reference:
http://man.cat-v.org/plan_9/5/walk
Signed-off-by: Greg Kurz <groug@kaod.org>
The attach operation is used to establish a connection between the
client and the server. After this, the client is able to access the
underlying filesystem and do I/O.
This test simply ensures the operation succeeds without error.
Reference:
http://man.cat-v.org/plan_9/5/attach
Signed-off-by: Greg Kurz <groug@kaod.org>
This patch lays the foundations to be able to test 9P operations and
provides a test for the version operation as a first example.
A 9P request is composed of a T-message sent by the client (guest) to the
server (QEMU), and a R-message sent by the server back to the client.
The following general calls are available to implement requests for any
9P operations:
v9fs_req_init(): allocates the request structure and the guest memory for
the T-message
v9fs_req_send(): allocates the guest memory for the R-message and sends the
T-message to QEMU
v9fs_req_recv(): waits for QEMU to answer and does some sanity checks on the
returned R-message header
v9fs_req_free(): releases the guest memory and the request structure
Helpers are provided, to be used by each specific 9P operation to copy data
to/from the guest memory.
The version operation is used to negotiate the 9P protocol version to be
used and the maximum buffer size for exchanged data. It is necessarily
the first message of a 9P session. For simplicity, the maximum buffer size
is hardcoded to 4k, which should be enough for functional tests.
The test simply advertises the "9P2000.L" version to QEMU and expects QEMU
to answer it is supported.
References:
http://man.cat-v.org/plan_9/5/intro
http://man.cat-v.org/plan_9/5/version
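For orientation, the Tversion message the test sends has the standard 9P
layout size[4] type[1] tag[2] msize[4] version[s]; a self-contained sketch
of building one (independent of the qtest helpers above):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define P9_TVERSION 100
    #define P9_NOTAG    UINT16_MAX

    static size_t put_le32(uint8_t *p, uint32_t v)
    {
        p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
        return 4;
    }

    static size_t put_le16(uint8_t *p, uint16_t v)
    {
        p[0] = v; p[1] = v >> 8;
        return 2;
    }

    int main(void)
    {
        const char *version = "9P2000.L";
        uint16_t vlen = (uint16_t)strlen(version);
        uint8_t msg[64];
        size_t off = 4;                        /* leave room for size[4] */

        msg[off++] = P9_TVERSION;              /* type[1] */
        off += put_le16(msg + off, P9_NOTAG);  /* tag[2] */
        off += put_le32(msg + off, 4096);      /* msize[4], 4k as in the test */
        off += put_le16(msg + off, vlen);      /* version[s] = len[2] + bytes */
        memcpy(msg + off, version, vlen);
        off += vlen;
        put_le32(msg, (uint32_t)off);          /* total size, including itself */

        printf("Tversion message is %zu bytes\n", off);
        return 0;
    }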
Signed-off-by: Greg Kurz <groug@kaod.org>
The u16 and u32 types don't exist in QEMU common headers. It never broke
the build because these two macros aren't used by the current code, but this
is about to change with the future addition of functional tests for 9P.
Also, these should have enclosing parentheses to be usable in any
syntactical situation.
As suggested by Eric Blake, let's use UINT16_MAX and UINT32_MAX to address
both issues.
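In sketch form (the "before" shown is approximate):

    #include <stdint.h>

    /* before (u16/u32 don't exist in QEMU's common headers and the
     * expansions are not parenthesized):
     *
     *   #define P9_NOTAG (u16)(~0)
     *   #define P9_NOFID (u32)(~0)
     */
    #define P9_NOTAG UINT16_MAX
    #define P9_NOFID UINT32_MAX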
Signed-off-by: Greg Kurz <groug@kaod.org>
This moves the test_share static and the QOSState into the QVirtIO9P
structure, and puts PCI-related code in functions with a _pci_ name.
This will avoid code duplication in future tests, and allow adding
support for non-PCI platforms.
Signed-off-by: Greg Kurz <groug@kaod.org>
If the user passes -device virtio-9p without the corresponding -fsdev, QEMU
dereferences a NULL pointer and crashes.
This is a 2.8 regression introduced by commit 702dbcc274.
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
Not all 9pfs transports share memory between request and response. For
those that don't, it is necessary to know how much memory is required in
the response.
Split the existing init_iov_from_pdu function in two:
init_out_iov_from_pdu (for writes) and init_in_iov_from_pdu (for reads).
init_in_iov_from_pdu takes an additional size parameter to specify the
memory required for the response message.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
v9fs_xattr_read should not access VirtQueueElement elems directly.
Move v9fs_init_qiov_from_pdu up in the file and call
v9fs_init_qiov_from_pdu before v9fs_pack. Use v9fs_pack on the new
iovec.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Don't call virtio functions from 9pfs generic code, use generic function
callbacks instead.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
pdus are initialized and used in 9pfs common code. Move the array from
V9fsVirtioState to V9fsState.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
In the resource attach backing function, every time it will
allocate 'res->iov', which can lead to a memory leak. This
patch avoids this.
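The fix pattern, in illustrative form (simplified, not the virtio-gpu code
itself):

    #include <stdlib.h>

    typedef struct {
        void *iov;                  /* backing iov array, NULL if unset */
    } Resource;

    int attach_backing(Resource *res, size_t nbytes)
    {
        if (res->iov) {
            return -1;              /* already attached: don't leak the
                                       previously allocated array */
        }
        res->iov = malloc(nbytes);
        return res->iov ? 0 : -1;
    }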
Signed-off-by: Li Qiang <liq3ea@gmail.com>
Message-id: 1483003721-65360-1-git-send-email-liq3ea@gmail.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
If the virgl_renderer_resource_attach_iov function fails, the
'res_iovs' will be leaked. Add a check of the return value to
free 'res_iovs' on failure.
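The pattern, sketched (the local fake_attach_iov() stands in for the real
virgl_renderer_resource_attach_iov() call; everything here is simplified):

    #include <stdlib.h>
    #include <sys/uio.h>

    static int fake_attach_iov(struct iovec *iov, int num)
    {
        (void)iov; (void)num;
        return -1;                  /* pretend the attach failed */
    }

    int attach_backing_iov(struct iovec *res_iovs, int num)
    {
        int ret = fake_attach_iov(res_iovs, num);
        if (ret != 0) {
            free(res_iovs);         /* don't leak res_iovs on failure */
        }
        return ret;
    }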
Signed-off-by: Li Qiang <liq3ea@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 1482999086-59795-1-git-send-email-liq3ea@gmail.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
If the guest destroys the resource before detaching the backing, the 'iov'
and 'addrs' fields in the resource are not freed, leading to a memory
leak. This patch avoids this.
Signed-off-by: Li Qiang <liq3ea@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 1480386565-10077-1-git-send-email-liq3ea@gmail.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
This is a cleanup patch. It adds calls to tcg_temp_free()
where they are missing.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Also manage word and byte operands and fix the computation of
overflow in the case of M68000 arithmetic shifts.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-4-git-send-email-rth@twiddle.net>
Report this properly via exception and, importantly, allow
the disassembler the chance to tell us what insn is not handled.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-3-git-send-email-rth@twiddle.net>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
680x0 movem can load/store words and long words and can use more
addressing modes. Coldfire can only use long words with (Ax) and
(d16,Ax) addressing modes.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-2-git-send-email-rth@twiddle.net>
Implement CAS using cmpxchg.
Implement CAS2 using a helper and either cmpxchg when
the 32-bit addresses are consecutive, or
parallel_cpus + cpu_loop_exit_atomic() otherwise.
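For reference, the CAS semantics being emulated, with C11 atomics standing
in for the tcg cmpxchg ops (sketch only, not the translator code):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* CAS Dc,Du,<ea>: if (*ea == Dc) *ea = Du; else Dc = *ea.
     * Returns 1 when the compare succeeded (Z flag set). */
    int m68k_cas32(_Atomic uint32_t *ea, uint32_t *dc, uint32_t du)
    {
        uint32_t expected = *dc;
        if (atomic_compare_exchange_strong(ea, &expected, du)) {
            return 1;
        }
        *dc = expected;             /* on mismatch Dc gets the memory value */
        return 0;
    }

    int main(void)
    {
        _Atomic uint32_t mem = 5;
        uint32_t dc = 5;
        printf("%d mem=%u\n", m68k_cas32(&mem, &dc, 9), (unsigned)mem);
        return 0;                   /* prints: 1 mem=9 */
    }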
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Update the helper to set the throwing location in case of div-by-0.
Clean up divX.w and add quad word variants of divX.l.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[laurent: modified to clear Z on overflow, as found with risu]