Commit Graph

12 Commits

Author SHA1 Message Date
Richard Henderson
15fa08f845 tcg: Dynamically allocate TCGOps
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.

Use QTAILQ to link the ops together.
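
A minimal sketch of the pattern, assuming QEMU's "qemu/queue.h" is
available; the struct and field names are illustrative, not the real
TCG ones (TCGOp lives in QEMU's tcg headers):

    #include "qemu/queue.h"
    #include <stdlib.h>

    /* Illustrative op type standing in for the real TCGOp. */
    typedef struct DemoOp {
        int opcode;
        QTAILQ_ENTRY(DemoOp) link;    /* embedded next/prev links */
    } DemoOp;

    static QTAILQ_HEAD(, DemoOp) demo_ops =
        QTAILQ_HEAD_INITIALIZER(demo_ops);

    /* Each op is allocated on demand, so there is no fixed-size
     * array to overflow as the op count grows (error handling
     * omitted). */
    static DemoOp *demo_op_new(int opcode)
    {
        DemoOp *op = calloc(1, sizeof(*op));
        op->opcode = opcode;
        QTAILQ_INSERT_TAIL(&demo_ops, op, link);
        return op;
    }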

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-29 12:43:39 -08:00
Jianjun Duan
94869d5c52 migration: migrate QTAILQ
Currently we cannot directly transfer a QTAILQ instance because of
limitations in the migration code. Here we introduce an approach to
transfer such structures. We create a VMStateInfo, vmstate_info_qtailq,
for QTAILQ. Similar VMStateInfo instances can be created for other data
structures such as lists.

When a QTAILQ is migrated from source to target, it is appended to the
corresponding QTAILQ structure, which is assumed to have been properly
initialized.

This approach will be used to transfer pending_events and ccs_list in spapr
state.

We also create some macros in qemu/queue.h to access a QTAILQ using pointer
arithmetic. This ensures that we do not depend on the implementation
details about QTAILQ in the migration code.
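
The idea behind the pointer-arithmetic macros, sketched generically
(this illustrates the approach, not the exact macros the patch adds):
given only the byte offset of the link field inside an element, generic
code can walk the queue without knowing the element's type:

    #include <stddef.h>

    /* In the classic QTAILQ layout the entry's first member is the
     * 'next' pointer, so code that knows only the byte offset of the
     * entry field can follow the links with no type information. */
    static void *queue_raw_next(void *elm, size_t entry_offset)
    {
        return *(void **)((char *)elm + entry_offset);
    }

    #define QUEUE_RAW_FOREACH(elm, first, entry_offset)          \
        for ((elm) = (first); (elm) != NULL;                     \
             (elm) = queue_raw_next((elm), (entry_offset)))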

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Signed-off-by: Jianjun Duan <duanj@linux.vnet.ibm.com>
Message-Id: <1484852453-12728-3-git-send-email-duanj@linux.vnet.ibm.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-01-24 17:54:47 +00:00
Igor Mammedov
3b8c1761f0 qtail: clean up direct access to tqe_prev field
Instead of accessing the tqe_prev field directly outside of queue.h,
use macros to check whether an element is in a list, and make sure that
after an element is removed from a list the tqe_prev field can still be
used to do the same check.
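
A sketch of the intended usage, assuming "qemu/queue.h" and a
QTAILQ_ENTRY field named 'link' (QTAILQ_IN_USE is used here as the
membership-check macro):

    #include "qemu/queue.h"

    typedef struct Item {
        QTAILQ_ENTRY(Item) link;
    } Item;

    typedef QTAILQ_HEAD(, Item) ItemList;

    static void drop_if_queued(ItemList *list, Item *it)
    {
        /* Removal clears the element's backlink, so the same check
         * keeps working after QTAILQ_REMOVE. */
        if (QTAILQ_IN_USE(it, link)) {
            QTAILQ_REMOVE(list, it, link);
        }
    }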

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1469450832-84343-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-13 19:08:41 +02:00
Markus Armbruster
2a6a4076e1 Clean up ill-advised or unusual header guards
Cleaned up with scripts/clean-header-guards.pl.
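
For reference, the guard form being normalized to looks like this (the
guard symbol here is illustrative):

    #ifndef QEMU_EXAMPLE_H
    #define QEMU_EXAMPLE_H

    /* declarations ... */

    #endif /* QEMU_EXAMPLE_H */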

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2016-07-12 16:20:46 +02:00
Kevin Wolf
8e419aefa0 block: Remove bdrv_swap()
bdrv_swap() is unused now. Remove it and all functions that have
no other users than bdrv_swap(). In particular, this removes the
.bdrv_rebind callbacks from block drivers.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-10-16 15:34:30 +02:00
Kevin Wolf
ae81693004 queue.h: Add QLIST_FIX_HEAD_PTR()
If the head of a list has been moved to a different memory location, the
le_prev link in the first list entry has to be fixed up. Provide a macro
that implements this fixup.
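
A sketch of the intended use, assuming "qemu/queue.h" and a QLIST_ENTRY
field named 'next' (the mover function is illustrative):

    #include "qemu/queue.h"
    #include <string.h>

    typedef struct Node {
        QLIST_ENTRY(Node) next;
    } Node;

    typedef QLIST_HEAD(, Node) NodeList;

    static void move_list_head(NodeList *dst, const NodeList *src)
    {
        /* Copying the head leaves the first entry's le_prev pointing
         * into the old location; fix it up to point at the new head. */
        memcpy(dst, src, sizeof(*dst));
        QLIST_FIX_HEAD_PTR(dst, next);
    }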

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2015-06-12 17:04:59 +02:00
Paolo Bonzini
2120465fbb queue: fix QSLIST_INSERT_HEAD_ATOMIC race
There is a not-so-subtle race in QSLIST_INSERT_HEAD_ATOMIC.

Because atomic_cmpxchg returns the old value instead of a success flag,
QSLIST_INSERT_HEAD_ATOMIC was checking for success by comparing against
the second argument to atomic_cmpxchg.  Unfortunately, this only works
if the second argument is a local or thread-local variable.

If it is in memory, it can be subject to common subexpression elimination
(and then everything's fine) or reloaded after the atomic_cmpxchg,
depending on the compiler's whims.  In the latter case the race can
occur: a thread can sneak in, doing something on elm->field.sle_next
after the atomic_cmpxchg and before the comparison.  This causes a
spurious failure, and then two threads are using "elm" at the same time.
In the case discovered by Christian, the sequence was likely something
like this:

    thread 1                   | thread 2
    QSLIST_INSERT_HEAD_ATOMIC  |
      atomic_cmpxchg succeeds  |
      elm added to list        |
                               | steal release_pool
                               | QSLIST_REMOVE_HEAD
                               | elm removed from list
                               | ...
                               | QSLIST_INSERT_HEAD_ATOMIC
                               |   (overwrites sle_next)
      spurious failure         |
      atomic_cmpxchg succeeds  |
      elm added to list again  |
                               |
    steal release_pool         |
    QSLIST_REMOVE_HEAD         |
    elm removed again          |

The last three steps could be done by a third thread as well.
A reproducer that failed in a matter of seconds is as follows:

- the guest has 32 VCPUs on a 28 core host (hyperthreading was enabled),
  memory was 16G just to err on the safe side (the host has 64G, but hey
  at least you need no s390)

- the guest has 24 null-aio virtio-blk devices using dataplane
  (-object iothread,id=ioN -drive if=none,id=blkN,driver=null-aio,size=500G
  -device virtio-blk-pci,iothread=ioN,drive=blkN)

- the guest also has a single network interface.  It's only doing loopback
  tests so slirp vs. tap and the model doesn't matter.

- the guest is running fio with the following script:

     [global]
     rw=randread
     blocksize=16k
     ioengine=libaio
     runtime=10m
     buffered=0
     fallocate=none
     time_based
     iodepth=32

     [virtio1a]
     filename=/dev/block/252\:16

     [virtio1b]
     filename=/dev/block/252\:16

     ...

     [virtio24a]
     filename=/dev/block/252\:384

     [virtio24b]
     filename=/dev/block/252\:384

     [listen1]
     protocol=tcp
     ioengine=net
     port=12345
     listen
     rw=read
     bs=4k
     size=1000g

     [connect1]
     protocol=tcp
     hostname=localhost
     ioengine=net
     port=12345
     rw=write
     startdelay=1
     size=1000g

     ...

     [listen8]
     protocol=tcp
     ioengine=net
     port=12352
     listen
     rw=read
     bs=4k
     size=1000g

     [connect8]
     protocol=tcp
     hostname=localhost
     ioengine=net
     port=12352
     rw=write
     startdelay=1
     size=1000g

Moral of the story: I should refrain from writing more clever stuff.
At least it looks like it is not too clever to be undebuggable.
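
The shape of the fix, sketched with C11 atomics rather than QEMU's
atomic_cmpxchg (types and names are illustrative): the expected old
head is kept in a local variable, and success is judged only against
that local.

    #include <stdatomic.h>

    struct node { struct node *sle_next; };

    static void insert_head_atomic(struct node *_Atomic *head,
                                   struct node *elm)
    {
        struct node *old = atomic_load(head);
        /* On failure the CAS reloads 'old' for us; a concurrent
         * writer touching elm->sle_next between the CAS and the
         * success check can no longer fake a spurious result,
         * because the comparison is against the local 'old'. */
        do {
            elm->sle_next = old;
        } while (!atomic_compare_exchange_weak(head, &old, elm));
    }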

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1426002357-6889-1-git-send-email-pbonzini@redhat.com
Fixes: c740ad92d0
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-03-12 17:41:23 +00:00
Mike Day
341774fe6c rcu: introduce RCU-enabled QLIST
Add RCU-enabled variants of the existing BSD DQ facility. Each
operation has the same interface as the existing (non-RCU)
version, and each operation is implemented as a macro.

Using the RCU-enabled QLIST, existing QLIST users will be able to
convert to RCU without using a different list interface.
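
A reader-side sketch using the RCU variants (assuming "qemu/rcu.h" and
"qemu/rcu_queue.h"; note the call shape matches plain QLIST_FOREACH):

    #include "qemu/rcu.h"
    #include "qemu/rcu_queue.h"

    typedef struct Entry {
        int value;
        QLIST_ENTRY(Entry) node;
    } Entry;

    static QLIST_HEAD(, Entry) entries;

    static int sum_entries(void)
    {
        Entry *e;
        int sum = 0;

        /* Readers take no lock; the read-side section only delays
         * reclamation of elements removed concurrently. */
        rcu_read_lock();
        QLIST_FOREACH_RCU(e, &entries, node) {
            sum += e->value;
        }
        rcu_read_unlock();
        return sum;
    }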

Signed-off-by: Mike Day <ncmike@ncultra.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-16 17:30:19 +01:00
Paolo Bonzini
7911747bd4 rcu: add rcu library
This includes a (mangled) copy of the liburcu code.  The main changes
are: 1) removing dependencies on many other header files in liburcu; 2)
removing for simplicity the tentative busy waiting in synchronize_rcu,
which has limited performance effects; 3) replacing futexes in
synchronize_rcu with QemuEvents for Win32 portability.  The API is
the same as liburcu, so it should be possible in the future to require
liburcu on POSIX systems for example and use our copy only on Windows.

Among the various versions available I chose urcu-mb, which is the
least invasive implementation even though it does not have the
fastest rcu_read_{lock,unlock} implementation.  The urcu flavor can
be changed later, after benchmarking.
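
The classic update-side pattern the library provides, sketched under
the liburcu-compatible API described above (Config and the bare publish
step are illustrative; real code uses the library's pointer-publish
primitive):

    #include "qemu/rcu.h"
    #include <stdlib.h>

    typedef struct Config { int level; } Config;

    static Config *current_config;  /* read under rcu_read_lock() */

    static void update_config(int level)
    {
        Config *new_cfg = malloc(sizeof(*new_cfg));
        Config *old_cfg = current_config;

        new_cfg->level = level;
        current_config = new_cfg;  /* publish (barrier omitted here) */

        /* Wait until all pre-existing readers have left their
         * read-side critical sections... */
        synchronize_rcu();
        /* ...after which nobody can still see the old version. */
        free(old_cfg);
    }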

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-02 16:55:10 +01:00
Paolo Bonzini
c740ad92d0 QSLIST: add lock-free operations
These operations are trivial to implement and do not have ABA problems.
They are enough to implement simple multiple-producer, single-consumer
lock-free lists; alternatively, as in the next patch, multiple consumers
can steal a whole batch of elements and process them at their leisure.
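
A sketch of the multiple-producer, single-consumer pattern (assuming
"qemu/queue.h" and a QSLIST_ENTRY field named 'next'):

    #include "qemu/queue.h"

    typedef struct Work {
        QSLIST_ENTRY(Work) next;
    } Work;

    static QSLIST_HEAD(, Work) pending;   /* shared, lock-free */

    /* Any thread may push concurrently. */
    static void push_work(Work *w)
    {
        QSLIST_INSERT_HEAD_ATOMIC(&pending, w, next);
    }

    /* One consumer steals the whole batch atomically, then drains
     * its private copy without further contention. */
    static void drain_work(void (*process)(Work *))
    {
        QSLIST_HEAD(, Work) batch;

        QSLIST_MOVE_ATOMIC(&batch, &pending);
        while (!QSLIST_EMPTY(&batch)) {
            Work *w = QSLIST_FIRST(&batch);
            QSLIST_REMOVE_HEAD(&batch, next);
            process(w);
        }
    }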

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 1417518350-6167-5-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-01-13 13:43:29 +00:00
Paolo Bonzini
82595da8de linux-aio: simplify removal of completed iocbs from the list
There is no need to do another O(n) pass on the list; the iocb to
split the list at is already available through the array we passed to
io_submit.
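
The underlying observation, sketched with illustrative types (not the
actual linux-aio ones): once the caller already holds a pointer to the
element where the split happens, cutting the list there is O(1):

    typedef struct Req {
        struct Req *next;
    } Req;

    /* Detach everything up to and including 'last' from the list.
     * No O(n) walk is needed: the caller already holds 'last',
     * e.g. from the array handed to io_submit. */
    static Req *split_after(Req **head, Req *last)
    {
        Req *completed = *head;

        *head = last->next;
        last->next = NULL;
        return completed;
    }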

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1418305950-30924-6-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-12-12 16:57:55 +00:00
Paolo Bonzini
1de7afc984 misc: move include files to include/qemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:32:39 +01:00