There are several error messages that identify a BlockDriverState by
its device name. However, those errors can also be produced for nodes
that don't have a device name associated with them.
In those cases we should use bdrv_get_device_or_node_name() to fall
back to the node name and produce a more meaningful message. The
messages are also updated to use the more generic term 'node' instead
of 'device'.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 9823a1f0514fdb0692e92868661c38a9e00a12d6.1428485266.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_get_device_or_node_name() gets the device name associated with a
BlockDriverState, or its node name if the device name is empty.
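For illustration, a minimal stand-alone sketch of the fallback behaviour (the struct and field names below are simplified stand-ins, not the real QEMU definitions):

    #include <stdio.h>

    /* Simplified stand-ins for the QEMU structures, for illustration only. */
    typedef struct BlockDriverState {
        const char *device_name;   /* "" when no device name is associated */
        const char *node_name;
    } BlockDriverState;

    /* Prefer the device name; fall back to the node name when it is empty. */
    static const char *bdrv_get_device_or_node_name(const BlockDriverState *bs)
    {
        return bs->device_name[0] ? bs->device_name : bs->node_name;
    }

    int main(void)
    {
        BlockDriverState named = { "virtio0", "node0" };
        BlockDriverState anonymous = { "", "node1" };

        printf("%s\n", bdrv_get_device_or_node_name(&named));      /* virtio0 */
        printf("%s\n", bdrv_get_device_or_node_name(&anonymous));  /* node1 */
        return 0;
    }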
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 4fa30aa8d61d9052ce266fd5429a59a14e941255.1428485266.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The 'block-stream' QMP command is documented in block-core.json but not
qmp-commands.hx. Add a summary of the command to qmp-commands.hx
(similar to the documentation for 'block-commit').
Reported-by: Kashyap Chamarthy <kchamart@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1429094622-26218-1-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Delay the call to blk_blockalign() until s->blk has been assigned.
This never caused a crash because blk_blockalign(NULL, size) defaults to
4096-byte alignment, but it is technically incorrect.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1429091024-25098-1-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Of the block devices that poked into -drive options via drive_get_next(),
m25p80 was the only one that did not also attach itself to the BlockBackend.
Since sd does, and all other devices go through a "drive" property,
with this change all block backends attached to the guest will have a
non-NULL result for blk_get_attached_dev().
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1429025387-11077-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Update the virtio-blk header from the latest Linux version, including comment fixups.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1428854036-12806-1-git-send-email-mst@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_aio_* APIs can use coroutines to achieve asynchronicity. However,
the coroutine may terminate without having yielded back to the caller
(for example because of something that invokes a nested event loop,
or because the coroutine is doing nothing at all). In this case,
the bdrv_aio_* API must delay the completion to the next iteration
of the main loop, because bdrv_aio_* will never invoke the callback
before returning.
This can be done with a bottom half, and indeed bdrv_aio_* always
uses one for simplicity. It is possible to gain some performance
(~3%) by avoiding this in the common case. A new field in the
BlockAIOCBCoroutine struct is set to true until the first time the
coroutine has yielded to its creator, and completion goes through a
new function, bdrv_co_complete. If the flag is false, bdrv_co_complete
invokes the callback immediately. If it is true, the caller will
notice that the coroutine has completed and schedule the bottom
half itself.
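A stand-alone model of the completion logic described above (the names mirror the description, but this is not the actual QEMU code):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool need_bh;        /* true until the coroutine has yielded to its creator */
        bool done;           /* set when the coroutine has finished                 */
        void (*cb)(int ret); /* completion callback                                 */
        int ret;
    } AIOCBCoroutine;

    static void schedule_bh(AIOCBCoroutine *acb)
    {
        /* In QEMU this would schedule a real bottom half; here we just log it. */
        printf("completion deferred to a bottom half (ret=%d)\n", acb->ret);
    }

    static void aio_co_complete(AIOCBCoroutine *acb, int ret)
    {
        acb->ret = ret;
        acb->done = true;
        if (!acb->need_bh) {
            /* Common case: the coroutine already yielded once, so the caller of
             * the bdrv_aio_* function has returned and we may complete directly. */
            acb->cb(acb->ret);
        }
        /* Otherwise do nothing here: the creator notices done == true and
         * schedules the bottom half itself. */
    }

    static void my_cb(int ret)
    {
        printf("callback invoked directly (ret=%d)\n", ret);
    }

    int main(void)
    {
        /* Case 1: the coroutine terminates before ever yielding. */
        AIOCBCoroutine early = { .need_bh = true, .cb = my_cb };
        aio_co_complete(&early, 0);
        if (early.done) {           /* the creator sees this and defers */
            schedule_bh(&early);
        }

        /* Case 2: the coroutine has already yielded back to its creator. */
        AIOCBCoroutine late = { .need_bh = false, .cb = my_cb };
        aio_co_complete(&late, 0);  /* callback fires immediately */
        return 0;
    }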
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427524638-28157-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-5-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-4-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This is necessary to prevent block job coroutines from generating
further I/O requests.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-3-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch changes block_job_pause to increase the pause counter and
block_job_resume to decrease it.
The counter will allow calling block_job_pause/block_job_resume
unconditionally on a job when we need to suspend the IO temporarily.
From now on, each block_job_resume must be paired with a block_job_pause
to keep the counter balanced.
A user pause from QMP or HMP will only trigger block_job_pause once
until it's resumed; this is achieved by adding a user_paused flag to
BlockJob.
One occurrence of block_job_resume in mirror_complete is replaced with
block_job_enter, which does what is necessary.
In block_job_cancel, the cancel flag is enough to instruct coroutines
to quit the loop, so block_job_enter replaces the unpaired
block_job_resume.
Upon a block job I/O error, the user is notified that the job is
entering the paused state, so this pause counts as a user pause: set
the flag accordingly and expect a matching QMP resume.
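A toy model of the counter semantics (not the actual QEMU code; block_job_enter and coroutine re-entry are omitted):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int pause_count;    /* incremented by pause, decremented by resume */
        bool user_paused;   /* set only for the QMP/HMP user pause         */
    } BlockJob;

    static void block_job_pause(BlockJob *job)
    {
        job->pause_count++;
    }

    static void block_job_resume(BlockJob *job)
    {
        job->pause_count--;
        /* In QEMU the job coroutine would be re-entered once the count
         * drops back to zero. */
    }

    static bool block_job_should_pause(BlockJob *job)
    {
        return job->pause_count > 0;
    }

    int main(void)
    {
        BlockJob job = { 0, false };

        block_job_pause(&job);          /* user pause via QMP      */
        job.user_paused = true;
        block_job_pause(&job);          /* nested internal pause   */
        block_job_resume(&job);         /* internal resume         */
        printf("paused: %d\n", block_job_should_pause(&job));   /* 1 */

        block_job_resume(&job);         /* matching QMP resume     */
        job.user_paused = false;
        printf("paused: %d\n", block_job_should_pause(&job));   /* 0 */
        return 0;
    }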
[Extended doc comments as suggested by Paolo Bonzini
<pbonzini@redhat.com>.
--Stefan]
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1428069921-2957-2-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427852740-24315-4-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reopen is used in block-commit. With this always-succeeding operation, it
is now possible to test committing to a null drive, by specifying
"null-aio://" or "null-co://" as the backing image when creating the
qcow2 image.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427852740-24315-3-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
AioContext switching should just work because the requests will be
drained, so the timer(s) scheduled on the old context will be freed.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427852740-24315-2-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The 'qemu coroutine <coroutine-address>' GDB command prints the
backtrace for a CoroutineUContext. This is useful for peeking inside
yielded coroutines that are waiting for file descriptor events, timers,
etc.
For example:
$ gdb tests/test-coroutine
(gdb) b test_yield
(gdb) r
(gdb) b qemu_coroutine_enter
(gdb) c
(gdb) c
Continuing.
Breakpoint 2, qemu_coroutine_enter (co=0x555555c66520, opaque=0x0) at qemu-coroutine.c:103
103 {
(gdb) source scripts/qemu-gdb.py
(gdb) qemu coroutine 0x555555c66520
#0 0x000055555557a740 in qemu_coroutine_switch (from_=<optimized out>, to_=0x7ffff7f90a70, action=COROUTINE_YIELD) at coroutine-ucontext.c:177
#1 0x0000555555566af9 in yield_5_times (opaque=0x7fffffffdbb7) at tests/test-coroutine.c:107
#2 0x000055555557a7aa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at coroutine-ucontext.c:80
#3 0x00007ffff08de000 in __start_context () at /lib64/libc.so.6
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427409754-8556-1-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch simplifies thread_pool_completion_bh().
The function first checks elem->state:
    if (elem->state != THREAD_DONE) {
        continue;
    }
It then goes on to check elem->state == THREAD_DONE although we already
know this must be the case.
The QLIST_REMOVE() is duplicated down both branches of an if-else
statement, so that can be lifted out as well.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1427992762-10126-1-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fix the length of the zero-fill for the back, which was accidentally
using the same value as for the front. This is caught by qemu-iotests
033.
For consistency, change the code for the front as well to use the length
stored in the iov (it is the same value, copied four lines above).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Jeff Cody <jcody@redhat.com>
This is, amongst others, required for qemu-iotests 033 to run as
intended on VHDX, which uses explicit bdrv_truncate() calls to bs->file
when allocating new blocks.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This adds a regression test for some problems that the qemu-img convert
rewrite just fixed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
The implementation of qemu-img convert is (a) messy, (b) buggy, and
(c) less efficient than possible. The changes required to beat some
sense into it are massive enough that incremental changes would only
make my and the reviewers' life harder. So throw it away and reimplement
it from scratch.
Let me give some examples what I mean by messy, buggy and inefficient:
(a) The copying logic of qemu-img convert has two separate branches for
compressed and normal target images, which roughly do the same -
except for a little code that handles actual differences between
compressed and uncompressed images, and much more code that
implements just a different set of optimisations and bugs. This is
unnecessary code duplication, and makes the code for compressed
output (unsurprisingly) suffer from bitrot.
The code for uncompressed output is run twice to count the total
length for the progress bar. In the first run it just takes a
shortcut and runs only half the loop; when it's done, it toggles
a boolean, jumps out of the loop with a backwards goto and starts
over. It works, but it isn't pretty.
(b) Converting while keeping a backing file (-B option) is broken in
several ways. This includes not writing to the image file if the
input has zero clusters or data filled with zeros (ignoring that the
backing file will be visible instead).
It also doesn't correctly limit each iteration of the copy loop to
sectors of the same status, so too many sectors may be copied into
the target image. For -B this gives an unexpected result; for
other images it just does more work than necessary.
Conversion with a compressed target completely ignores any target
backing file.
(c) qemu-img convert skips reading and writing an area if it knows from
metadata that copying isn't needed (except for the bug mentioned
above that ignores a status change in some cases). It does, however,
read from the source even if it knows that it will read zeros, and
then search for non-zero bytes in the read buffer, if it's possible
that a write might be needed.
This reimplementation of the copying core reorganises the code to remove
the duplication and have a much more obvious code flow, by essentially
splitting the copy iteration loop into three parts:
1. Find the number of contiguous sectors of the same status at the
current offset (This can also be called in a separate loop before the
copying loop in order to determine the total sectors for the progress
bar.)
2. Read sectors. If the status implies that there is no data there to
read (zero or unallocated cluster), don't do anything.
3. Write sectors depending on the status. If it's data, write it. If
we want the backing file to be visible (with -B), don't write it. If
it's zeroed, skip it if you can, otherwise use bdrv_write_zeroes() to
optimise the write at least where possible.
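A minimal stand-alone model of the resulting loop structure, using an in-memory status array instead of real block devices (all names here are made up for the example):

    #include <stdio.h>

    enum { SECTOR_DATA, SECTOR_ZERO, SECTOR_UNALLOCATED };

    #define TOTAL_SECTORS 16

    static const int src_status[TOTAL_SECTORS] = {
        SECTOR_DATA, SECTOR_DATA, SECTOR_ZERO, SECTOR_ZERO,
        SECTOR_UNALLOCATED, SECTOR_UNALLOCATED, SECTOR_DATA, SECTOR_ZERO,
        SECTOR_DATA, SECTOR_DATA, SECTOR_DATA, SECTOR_UNALLOCATED,
        SECTOR_ZERO, SECTOR_ZERO, SECTOR_DATA, SECTOR_DATA,
    };

    /* 1. Return the number of contiguous sectors sharing the same status. */
    static int find_contiguous(int sector, int *status)
    {
        int n = 1;
        *status = src_status[sector];
        while (sector + n < TOTAL_SECTORS && src_status[sector + n] == *status) {
            n++;
        }
        return n;
    }

    int main(void)
    {
        int keep_backing = 1;   /* corresponds to converting with -B */
        int sector = 0;

        while (sector < TOTAL_SECTORS) {
            int status;
            int n = find_contiguous(sector, &status);

            /* 2. Read only when the status says there is data to read. */
            if (status == SECTOR_DATA) {
                printf("read  sectors %2d..%2d\n", sector, sector + n - 1);
            }

            /* 3. Write depending on the status. */
            if (status == SECTOR_DATA) {
                printf("write sectors %2d..%2d\n", sector, sector + n - 1);
            } else if (status == SECTOR_ZERO && !keep_backing) {
                printf("zero  sectors %2d..%2d\n", sector, sector + n - 1);
            } else {
                printf("skip  sectors %2d..%2d\n", sector, sector + n - 1);
            }

            sector += n;
        }
        return 0;
    }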
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This is the first step towards having fine-grained critical sections in
dataplane threads, which resolves lock ordering problems between
address_space_* functions (which need the BQL when doing MMIO, even
after we complete RCU-based dispatch) and the AioContext.
Because AioContext does not use contention callbacks anymore, the
unit test has to be changed.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1424449612-18215-4-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This is the first step in pushing down acquire/release, and will let
rfifolock drop the contention callback feature.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1424449612-18215-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
By using thread-local storage, aio_poll can stop using global data during
g_poll_ns. This will make it possible to drop callbacks from rfifolock.
[Moved npfd = 0 assignment to end of walking_handlers region as
suggested by Paolo. This resolves the assert(npfd == 0) assertion
failure in pollfds_cleanup().
--Stefan]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1424449612-18215-2-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently, throttle timers won't make any progress when the VCPU is not
running, which would stall the request queue in the utilities, qtest,
VM suspend, and live migration, without special handling.
Block jobs are confusingly inconsistent with and without throttling:
if the user sets a bps limit, stops the VM and then starts a block
job, the block job will not make any progress; on the contrary, if the
user unsets the bps limit, or if it was never set, the block job runs
normally.
After this patch, with the host clock, the throttle queues will be
processed even if the VCPUs are stopped.
This patch also opens up the possibility of adding throttling to
bdrv_drain_all. Currently all requests are drained immediately; in
other words, whenever it is called, I/O throttling becomes ineffective
(examples: system reset, migration and many block job operations).
This is a loophole that a guest could exploit. If we use the host
clock, we can later just trust the nested poll. This could be done on
top.
Note that for qemu-iotests case 093, which uses qtest, we still keep
the vm clock so that the script can control the clock stepping in
order to be deterministic.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1427268446-6426-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The ffs(3) family of functions is not portable. MinGW doesn't always
provide the function.
Use ctz32() or ctz64() instead.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-10-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The lack of ffs(3) in the MinGW headers is a hint that we shouldn't rely
on it. MinGW 4.9.2 does not make it available for linking when QEMU's
./configure --enable-debug is used (release builds are fine though).
Now that all QEMU code has been switched to ctz32() there is no need for
ffs(3).
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-9-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Rewrite the loop using level &= level - 1 to clear the least significant
bit after each iteration. This simplifies the loop and makes it easy to
replace ffs(3) with ctz32().
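A small stand-alone illustration of the pattern (__builtin_ctz stands in for QEMU's ctz32() here):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t level = 0x00008012;   /* example bit mask */

        /* Visit each set bit, lowest first; level &= level - 1 clears the
         * least significant set bit, so no ffs() call is needed. */
        while (level) {
            int bit = __builtin_ctz(level);   /* stand-in for ctz32() */
            printf("set bit: %d\n", bit);     /* prints 1, 4, 15 */
            level &= level - 1;
        }
        return 0;
    }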
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-8-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
ffs() cannot be replaced with ctz32() when the argument might be zero,
because ffs(0) returns 0 while ctz32(0) returns 32.
The ffs(3) call in sd_normal_command() is a special case though. It can
be converted to ctz32() + 1 because the argument is never zero:
    if (!(req.arg >> 8) || (req.arg >> (ctz32(req.arg & ~0xff) + 1))) {
The short-circuited !(req.arg >> 8) check on the left guarantees that
req.arg & ~0xff, the argument passed to ctz32(), is non-zero whenever
ctz32() is evaluated.
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-7-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There are a number of ffs(3) callers that do roughly:
    bit = ffs(val);
    if (bit) {
        do_something(bit - 1);
    }
This pattern can be converted to ctz32() like this:
    zeroes = ctz32(val);
    if (zeroes != 32) {
        do_something(zeroes);
    }
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-6-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This commit was generated mechanically by coccinelle from the following
semantic patch:
@@
expression val;
@@
- (ffs(val) - 1)
+ ctz32(val)
The call sites have been audited to ensure the ffs(0) - 1 == -1 case
never occurs (due to input validation, asserts, etc). Therefore we
don't need to worry about the fact that ctz32(0) == 32.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-5-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It is not clear from the code how a 0 parameter should be handled by the
hardware. Keep the same behavior as ffs(0) - 1 == -1.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-4-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It is not clear from the code how a 0 parameter should be handled by the
hardware. Keep the same behavior as ffs(0) - 1 == -1.
Cc: Andrzej Zaborowski <balrog@zabor.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-3-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The binary search in sdp_uuid_match() only works when the number of
elements to search is a power of two.
    lo = record->uuid;
    hi = record->uuids;
    while (hi >>= 1)
        if (lo[hi] <= val)
            lo += hi;
    return *lo == val;
I noticed that the record->uuids calculation in
sdp_service_record_build() was suspect:
    record->uuids = 1 << ffs(record->uuids - 1);
Unlike most ffs(val) - 1 users, the expression is ffs(val - 1)!
Actually, ffs() is the wrong function to use for rounding up to a
power of 2; use pow2ceil() to achieve the correct effect. Now the
record->uuid[] array is sized correctly and the binary search in
sdp_uuid_match() should work.
I'm not sure how to run/test this code.
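For illustration, a stand-alone comparison of the two expressions (pow2ceil() is modelled with a simple loop here, since QEMU's headers are not included):

    #include <stdio.h>
    #include <strings.h>   /* ffs() */

    /* Stand-in for QEMU's pow2ceil(): round up to the next power of two. */
    static unsigned pow2ceil_model(unsigned value)
    {
        unsigned n = 1;
        while (n < value) {
            n <<= 1;
        }
        return n;
    }

    int main(void)
    {
        /* For e.g. uuids == 4, 1 << ffs(uuids - 1) yields 2 instead of 4. */
        for (unsigned uuids = 1; uuids <= 8; uuids++) {
            printf("uuids=%u: 1 << ffs(uuids - 1) = %u, pow2ceil(uuids) = %u\n",
                   uuids, 1u << ffs(uuids - 1), pow2ceil_model(uuids));
        }
        return 0;
    }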
Cc: Andrzej Zaborowski <balrog@zabor.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-2-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 1426522925-14444-1-git-send-email-berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The command "virsh create" will fail in such condition: vm has two
disks: vda and vdb. vda has snapshot s1 with id "1", vdb doesn't have
s1 but has snapshot s2 with id "1". When we want to run command "virsh
create s1", del_existing_snapshots() only deletes s1 in vda, and
bdrv_snapshot_create() tries to create vdb's snapshot s1 with id "1",
but id "1" alreay exists in vdb with name "s2"!
The simplest way is call find_new_snapshot_id() unconditionally.
Signed-off-by: Yi Wang <up2wing@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging
X86 queue, 2015-04-27 (v2)
# gpg: Signature made Mon Apr 27 19:42:39 2015 BST using RSA key ID 984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-pull-request:
target-i386: Remove AMD feature flag aliases from CPU model table
target-i386: X86CPU::xlevel2 QOM property
target-i386: Make "level" and "xlevel" properties static
qemu-config: Accept empty option values
MAINTAINERS: Change status of X86 to Maintained
MAINTAINERS: Add myself to X86
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/ehabkost/tags/numa-pull-request' into staging
NUMA queue, 2015-04-27
# gpg: Signature made Mon Apr 27 19:02:19 2015 BST using RSA key ID 984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/numa-pull-request:
MAINTAINERS: Add myself as NUMA code maintainer
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20150427' into staging
target-arm queue:
* memory system updates to support transaction attributes
* set user-mode and secure attributes for accesses made by ARM CPUs
* rename c1_coproc to cpacr_el1
* adjust id_aa64pfr0 when has_el3 CPU property disabled
* allow ARMv8 SCR.SMD updates
# gpg: Signature made Mon Apr 27 16:14:30 2015 BST using RSA key ID 14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
* remotes/pmaydell/tags/pull-target-arm-20150427:
Allow ARMv8 SCR.SMD updates
target-arm: Adjust id_aa64pfr0 when has_el3 CPU property disabled
target-arm: rename c1_coproc to cpacr_el1
target-arm: Check watchpoints against CPU security state
target-arm: Use attribute info to handle user-only watchpoints
target-arm: Add user-mode transaction attribute
target-arm: Use correct memory attributes for page table walks
target-arm: Honour NS bits in page tables
Switch non-CPU callers from ld/st*_phys to address_space_ld/st*
exec.c: Capture the memory attributes for a watchpoint hit
exec.c: Add new address_space_ld*/st* functions
exec.c: Make address_space_rw take transaction attributes
exec.c: Convert subpage memory ops to _with_attrs
Add MemTxAttrs to the IOTLB
Make CPU iotlb a structure rather than a plain hwaddr
memory: Replace io_mem_read/write with memory_region_dispatch_read/write
memory: Define API for MemoryRegionOps to take attrs and return status
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When the CPU vendor is AMD, the AMD feature alias bits on
CPUID[0x80000001].EDX are already automatically copied from CPUID[1].EDX
in x86_cpu_realizefn(). When the CPU vendor is Intel, those bits are
reserved and should be zero. In either case, those bits shouldn't be set
in the CPU model table.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We already have "level" and "xlevel", only "xlevel2" is missing.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Static properties require only one line of code, which is much simpler
than the existing code that requires writing new getters/setters.
As a nice side-effect, this fixes an existing bug where the setters were
incorrectly allowing the properties to be changed after the CPU was
already realized.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently it is impossible to set an option in a config file to an empty
string, because the parser matches only lines containing non-empty
strings between double-quotes.
As sscanf() "[" conversion specifier only matches non-empty strings, add
a special case for empty strings.
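A stand-alone demonstration of the sscanf() behaviour that motivates the special case (the format strings are simplified and not the exact ones in qemu-config.c):

    #include <stdio.h>

    int main(void)
    {
        char arg[64], value[1024];

        /* The "[" conversion matches one or more characters, so it fails on
         * an empty quoted string... */
        int n = sscanf("name = \"\"", " %63s = \"%1023[^\"]\"", arg, value);
        printf("with %%[ only: %d conversion(s)\n", n);   /* 1: value not assigned */

        /* ...which is why an explicit empty-string pattern is needed. */
        if (sscanf("name = \"\"", " %63s = \"\"", arg) == 1) {
            value[0] = '\0';
            printf("empty value accepted for option '%s'\n", arg);
        }
        return 0;
    }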
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
"Odd Fixes" doesn't reflect the current status of target-i386. We have
people looking after it, now.
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The "srat" and "numa" keywords will help get_maintainer.pl catch
NUMA-related code in other files too.
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>