Commit Graph

47343 Commits

Max Reitz
c4b48bfdc5 vvfat: Fix qcow write target driver specification
First, bdrv_open_child() expects all options for the child to be
prefixed by the child's name (and a separating dot). Second,
bdrv_open_child() does not take ownership of the QDict passed to it but
only extracts all options for the child, so if a QDict is created for
the sole purpose of passing it to bdrv_open_child(), it needs to be
freed afterwards.

This patch makes vvfat adhere to both of these rules.
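
A sketch of the resulting pattern (the child name follows the commit subject's qcow write target; the exact helper spellings and surrounding vvfat code may differ):

    QDict *opts = qdict_new();
    /* Rule 1: the child's options carry a "<child-name>." prefix,
     * e.g. "write-target.driver" rather than plain "driver". */
    qdict_put(opts, "write-target.driver", qstring_from_str("qcow"));

    /* ... pass 'opts' as the options QDict to bdrv_open_child(), with
     * "write-target" as the child (bdref) key ... */

    /* Rule 2: bdrv_open_child() only extracts the child's options and does
     * not take ownership, so a QDict created solely for this call must be
     * freed by the caller. */
    QDECREF(opts);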

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160711135452.11304-1-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:39 +02:00
Lin Ma
0c204cc810 hmp: show all of snapshot info on every block dev in output of 'info snapshots'
Currently, the output of 'info snapshots' shows only fully available
snapshots. This is opaque and hides some snapshot information from users,
which is inconvenient when they want to know more about all the snapshot
information on every block device via the monitor.

Following Kevin's and Max's proposals, the patch makes the output more detailed:
(qemu) info snapshots
List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
 --        checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813

List of partial (non-loadable) snapshots on 'drive_image1':
 ID        TAG                 VM SIZE                DATE       VM CLOCK
 1         snap1                     0 2016-05-22 16:57:31   00:01:30.567

Signed-off-by: Lin Ma <lma@suse.com>
Message-id: 1467869164-26688-3-git-send-email-lma@suse.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:39 +02:00
Lin Ma
3a1ee71190 hmp: use snapshot name to determine whether a snapshot is 'fully available'
Currently QEMU uses the snapshot ID to determine whether a snapshot is
fully available, which causes incorrect output in some scenarios.

For instance:
(qemu) info block
drive_image1 (#block113): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk0.qcow2
(qcow2)
    Cache mode:       writeback

drive_image2 (#block349): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk1.qcow2
(qcow2)
    Cache mode:       writeback
(qemu)
(qemu) info snapshots
There is no snapshot available.
(qemu)
(qemu) snapshot_blkdev_internal drive_image1 snap1
(qemu)
(qemu) info snapshots
There is no suitable snapshot available
(qemu)
(qemu) savevm checkpoint-1
(qemu)
(qemu) info snapshots
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
(qemu)

$ qemu-img snapshot -l disk0.qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
2         checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813

$ qemu-img snapshot -l disk1.qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         checkpoint-1              0 2016-05-22 16:58:07   00:02:06.813

The patch uses the snapshot name instead of the snapshot ID to determine
whether a snapshot is fully available, and prints '--' instead of the
snapshot ID in the output because the ID is not guaranteed to be the same
across all images.
For instance:
(qemu) info snapshots
List of snapshots present on all disks:
 ID        TAG                 VM SIZE                DATE       VM CLOCK
 --        checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813

Signed-off-by: Lin Ma <lma@suse.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1467869164-26688-2-git-send-email-lma@suse.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:39 +02:00
Alberto Garcia
435d5ee6cd qemu-iotests: Test naming of throttling groups
Throttling groups are named using the 'group' parameter of the
block_set_io_throttle command and the throttling.group command-line
option. If that parameter is unspecified the groups get the name of
the block device.

This patch adds a new test to check the naming of throttling groups.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: d87d02823a6b91609509d8bb18e2f5dbd9a6102c.1467986342.git.berto@igalia.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:39 +02:00
Alberto Garcia
ff356ee4da blockdev: Fix regression with the default naming of throttling groups
When I/O limits are set for a block device, the name of the throttling
group is taken from the BlockBackend if the user doesn't specify one.

Commit efaa7c4eeb moved the naming of the BlockBackend in
blockdev_init() to the end of the function, after I/O limits are set.
The consequence is that the throttling group gets an empty name.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: qemu-stable@nongnu.org
Message-id: af5cd58bd2c4b9f6c57f260d9cfe586b9fb7d34d.1467986342.git.berto@igalia.com
[mreitz: Use existing "id" variable instead of new "blk_id"]
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:39 +02:00
Reda Sallahi
524089bce4 vmdk: fix metadata write regression
Commit "cdeaf1f vmdk: add bdrv_co_write_zeroes" causes a regression on
writes. It writes metadata after every write instead of doing it only once
for each cluster.

vmdk_pwritev() writes metadata whenever m_data is marked valid, so this
patch marks m_data as valid only when we have a new cluster that has not
been allocated before, or a zero grain.

Signed-off-by: Reda Sallahi <fullmanet@gmail.com>
Message-id: 20160707084249.29084-1-fullmanet@gmail.com
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:39 +02:00
Sascha Silbe
f14a39ccb9 Improve block job rate limiting for small bandwidth values
ratelimit_calculate_delay() previously reset the accounting every time
slice, no matter how much data had been processed before. This had (at
least) two consequences:

1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream.

   Not sure if there are real-world use cases where this would be a
   problem. Mirroring and backup over a slow link (e.g. DSL) would
   come to mind, though.

2. Tests for block job operations (e.g. cancel) were rather racy

   All block jobs currently use a time slice of 100ms. That's a
   reasonable value to get smooth output during regular
operation. However, this also meant that the state of block jobs
   changed every 100ms, no matter how low the configured limit was. On
   busy hosts, qemu often transferred additional chunks until the test
   case had a chance to cancel the job.

Fix the block job rate limit code to delay for more than one time
slice to address the above issues. To make it easier to handle
oversized chunks we switch the semantics from returning a delay
_before_ the current request to a delay _after_ the current
request. If necessary, this delay consists of multiple time slice
units.

Since the mirror job sends multiple chunks in one go even if the rate
limit was exceeded in between, we need to keep track of the start of
the current time slice so we can correctly re-compute the delay for
the updated amount of data.

The minimum bandwidth now is 1 data unit per time slice. The block
jobs are currently passing the amount of data transferred in sectors
and using 100ms time slices, so this translates to 5120
bytes/second. With chunk sizes usually being O(512KiB), tests have
plenty of time (O(100s)) to operate on block jobs. The chance of a
race condition now is fairly remote, except possibly on insanely
loaded systems.
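
The new semantics can be illustrated with a small self-contained sketch (not QEMU's actual ratelimit code; field names and the nanosecond clock are placeholders, and the quota is assumed to be at least 1):

    #include <stdint.h>

    typedef struct RateLimit {
        uint64_t slice_start_ns;  /* start of the current accounting slice  */
        uint64_t slice_end_ns;    /* end of the (possibly extended) slice   */
        uint64_t slice_ns;        /* nominal slice length, e.g. 100 ms      */
        uint64_t quota;           /* units allowed per nominal slice (>= 1) */
        uint64_t dispatched;      /* units accounted since slice_start_ns   */
    } RateLimit;

    /* Return how long (in ns) the caller should wait *after* the current
     * request of 'n' units; the delay may span several nominal slices. */
    uint64_t ratelimit_delay_after(RateLimit *rl, uint64_t now_ns, uint64_t n)
    {
        uint64_t slices;

        if (now_ns >= rl->slice_end_ns) {
            /* The previous (possibly extended) slice is over; reset. */
            rl->slice_start_ns = now_ns;
            rl->slice_end_ns = now_ns + rl->slice_ns;
            rl->dispatched = 0;
        }

        rl->dispatched += n;
        if (rl->dispatched <= rl->quota) {
            return 0;             /* still within budget for this slice */
        }

        /* Budget exceeded: extend the slice by as many units as needed
         * and tell the caller to wait until it ends. */
        slices = (rl->dispatched + rl->quota - 1) / rl->quota;
        rl->slice_end_ns = rl->slice_start_ns + slices * rl->slice_ns;
        return rl->slice_end_ns - now_ns;
    }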

Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-id: 1467127721-9564-2-git-send-email-silbe@linux.vnet.ibm.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:38 +02:00
Max Reitz
c834cba905 qcow2: Fix qcow2_get_cluster_offset()
Recently, qcow2_get_cluster_offset() has been changed to work with bytes
instead of sectors. This invalidated some assertions and introduced a
possible integer multiplication overflow.

This could be reproduced using e.g.

$ qemu-img create -f qcow2 -o cluster_size=1M blub.qcow2 8G
Formatting 'blub.qcow2', fmt=qcow2 size=8589934592 encryption=off
cluster_size=1048576 lazy_refcounts=off refcount_bits=16
$ qemu-io -c map blub.qcow2
qemu-io: qemu/block/qcow2-cluster.c:504: qcow2_get_cluster_offset:
Assertion `bytes_needed <= INT_MAX' failed.
[1]    20775 abort (core dumped)  qemu-io -c map blub.qcow2

This patch removes the now wrong assertion, adding comments and more
assertions to prove its correctness (and fixing the overflow which would
become apparent with the original assertion removed).
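
The underlying integer pitfall is easy to demonstrate in isolation (a standalone illustration with hypothetical variable names, assuming the usual 32-bit unsigned int):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        unsigned int nb_clusters = 4096;      /* e.g. an 8G image ...     */
        unsigned int cluster_size = 1 << 20;  /* ... with 1M clusters     */

        /* 32-bit multiplication wraps: 4096 * 2^20 == 2^32, i.e. 0 here. */
        uint64_t wrong = nb_clusters * cluster_size;

        /* Widening one operand first keeps the full 64-bit result.       */
        uint64_t right = (uint64_t)nb_clusters * cluster_size;

        assert(wrong == 0);
        assert(right == (UINT64_C(4096) << 20));
        return 0;
    }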

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160620142623.24471-3-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:38 +02:00
Max Reitz
a367467995 qemu-io: Use correct range limitations
create_iovec() has a comment lamenting the lack of SIZE_T_MAX. Since
there actually is a SIZE_MAX, use it.

Two places use INT_MAX for checking the upper bound of a sector count
that is used as an argument for a blk_*() function (blk_discard() and
blk_write_compressed(), respectively). BDRV_REQUEST_MAX_SECTORS should
be used instead.

And finally, do_co_pwrite_zeroes() used to similarly check that the
sector count does not exceed INT_MAX. However, this function is now
backed by blk_co_pwrite_zeroes() which takes bytes as an argument
instead of sectors. Therefore, it should be the byte count that does not
exceed INT_MAX, not the sector count.

Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:38 +02:00
Max Reitz
84c26520d3 qcow2: Avoid making the L1 table too big
We refuse to open images whose L1 table we deem "too big". Consequently,
we should not produce such images ourselves.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160615153630.2116-3-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
[mreitz: Added QEMU_BUILD_BUG_ON()]
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:38 +02:00
Max Reitz
bcf23482ae qemu-img: Use strerror() for generic resize error
Emitting the plain error number is not very helpful. Use strerror()
instead.
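
For instance, the reworked message could look roughly like this (a generic sketch, not the literal qemu-img hunk; 'ret' stands for the negative errno returned by the resize call):

    if (ret < 0) {
        error_report("Could not resize image: %s", strerror(-ret));
    }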

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160615153630.2116-2-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:41:38 +02:00
Kevin Wolf
35fedb7b0e block: Remove BB options from blockdev-add
werror/rerror are now available as qdev options. The stats-* options are
removed without an existing replacement; they should probably be
configurable with a separate QMP command like I/O throttling settings.

Removing id is left for another day because this involves updating
qemu-iotests cases to use node-name for everything. Before we can do
that, however, all QMP commands must support node-name.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:32:28 +02:00
Kevin Wolf
62ed9fa991 qemu-iotests: Test setting WCE with qdev
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:32:27 +02:00
Kevin Wolf
8c39825218 block/qdev: Allow configuring rerror/werror with qdev properties
The rerror/werror policies are implemented in the devices, so that's
where they should be configured. In comparison to the old options in
-drive, the qdev properties are only added to those devices that
actually support them.

If the option isn't given (or "auto" is specified), the setting of the
BlockBackend is used for compatibility with the old options. For block
jobs, "auto" is the same as "enospc".
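
For example (mirroring the write-cache example in the WCE patch below; the device and policy values here are only illustrative):

    -drive if=none,file=test.img,node-name=img
    -device ide-hd,drive=img,rerror=report,werror=stop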

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:32:27 +02:00
Kevin Wolf
1e8fb7f1ee commit: Fix use of error handling policy
Commit implemented the 'enospc' policy as 'ignore' if the error was not
ENOSPC. The QAPI documentation promises that it's treated as 'stop'.
Using the common block job error handling function fixes this and also
adds the missing QMP event.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:32:27 +02:00
Kevin Wolf
f6166a06ff block/qdev: Allow configuring WCE with qdev properties
As cache.writeback is a BlockBackend property and as such more related
to the guest device than to the BlockDriverState, we already removed it
from the blockdev-add interface. This patch adds the new way to set it,
as a qdev property of the corresponding guest device.

For example: -drive if=none,file=test.img,node-name=img
             -device ide-hd,drive=img,write-cache=off

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:32:27 +02:00
Xiao Guangrong
2aece63c8a hostmem: detect host backend memory is being used properly
Currently, we use memory_region_is_mapped() to detect whether the host
backend memory is being used. This works if the memory is directly
mapped into the guest's address space; however, it is not true for
nvdimm, which uses an aliased memory region to map the memory. This is
why this bug can happen:
   https://bugzilla.redhat.com/show_bug.cgi?id=1352769

Fix it by introducing a new field, is_mapped, to HostMemoryBackend;
we set/clear this field accordingly when a device links to or unlinks
from the host backend memory.
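
A compact sketch of the approach (abridged; only the field itself comes from the commit text, the surrounding snippets are illustrative):

    struct HostMemoryBackend {
        /* ... existing members ... */
        bool is_mapped;   /* true while a device is using this backend */
    };

    backend->is_mapped = true;    /* when a device links to the backend */
    backend->is_mapped = false;   /* when the device unlinks from it    */

    /* callers then test backend->is_mapped instead of
     * memory_region_is_mapped(), which misses aliased mappings such as
     * nvdimm's */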

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-13 13:30:04 +02:00
Xiao Guangrong
1454d33f05 hostmem: fix QEMU crash by 'info memdev'
'info memdev' crashes QEMU:
   (qemu) info memdev
   Unexpected error in parse_str() at qapi/string-input-visitor.c:111:
   Parameter 'null' expects an int64 value or range
It is caused by a null uint16List being returned when 'host-nodes' has
the default value.

Return MAX_NODES in this case to fix the bug.

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-13 13:30:04 +02:00
Marc-André Lureau
aa5cb7f5e8 char: do not use atexit cleanup handler
It turns out QEMU is calling exit() in various places from various
threads without taking much care of resource state. The atexit()
cleanup handlers cannot easily destroy resources that are still in use
(by the same thread or others).

Since c1111a24a3, TCG ARM guests run into the following abort() when
running tests: the chardev mutex is locked during the write, so
qemu_mutex_destroy() returns an error:

 #0  0x00007fffdbb806f5 in raise () at /lib64/libc.so.6
 #1  0x00007fffdbb822fa in abort () at /lib64/libc.so.6
 #2  0x00005555557616fe in error_exit (err=<optimized out>, msg=msg@entry=0x555555c38c30 <__func__.14622> "qemu_mutex_destroy")
     at /home/drjones/code/qemu/util/qemu-thread-posix.c:39
 #3  0x0000555555b0be20 in qemu_mutex_destroy (mutex=mutex@entry=0x5555566aa0e0) at /home/drjones/code/qemu/util/qemu-thread-posix.c:57
 #4  0x00005555558aab00 in qemu_chr_free_common (chr=0x5555566aa0e0) at /home/drjones/code/qemu/qemu-char.c:4029
 #5  0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4038
 #6  0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4044
 #7  0x00005555558b062c in qemu_chr_cleanup () at /home/drjones/code/qemu/qemu-char.c:4557
 #8  0x00007fffdbb851e8 in __run_exit_handlers () at /lib64/libc.so.6
 #9  0x00007fffdbb85235 in  () at /lib64/libc.so.6
 #10 0x00005555558d1b39 in testdev_write (testdev=0x5555566aa0a0) at /home/drjones/code/qemu/backends/testdev.c:71
 #11 0x00005555558d1b39 in testdev_write (chr=<optimized out>, buf=0x7fffc343fd9a "", len=0) at /home/drjones/code/qemu/backends/testdev.c:95
 #12 0x00005555558adced in qemu_chr_fe_write (s=0x5555566aa0e0, buf=buf@entry=0x7fffc343fd98 "0q", len=len@entry=2) at /home/drjones/code/qemu/qemu-char.c:282

Instead of using an atexit() handler, only run the chardev cleanup, as
initially proposed, at the end of main(), where there are fewer chances
of conflicts or other races.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reported-by: Andrew Jones <drjones@redhat.com>
Message-Id: <20160704153823.16879-1-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-13 13:30:00 +02:00
Paolo Bonzini
8caf911d33 net: do not use atexit for cleanup
This will be necessary in the next patch, which stops using atexit for
character devices; without it, vhost-user and the redirector filter
will cause a use-after-free.  Relying on the ordering of atexit calls
is also brittle, even now that both the network and chardev
subsystems are using atexit.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-13 13:30:00 +02:00
Paolo Bonzini
f6c2e66ae8 slirp: use exit notifier for slirp_smb_cleanup
We would like to move net_cleanup() back to the end of the main function,
like it used to be until f30dbae63a, but a minimum amount of cleanup is
needed regardless at exit() time for slirp's SMB functionality. Use an
exit notifier to call slirp_smb_cleanup(). If net_cleanup() is called
first, remove the exit notifier, as it would otherwise become a dangling
pointer.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-13 13:30:00 +02:00
Marc-André Lureau
9e32ff3299 tap: use an exit notifier to call down_script
We would like to move net_cleanup() back to the end of the main function,
like it used to be until f30dbae63a, but a minimum amount of tap cleanup
is necessary regardless at exit() time. Use an exit notifier to call the
TAP down_script. If net_cleanup() is called first, remove the exit
notifier, as it would otherwise become a dangling pointer.
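
The pattern, roughly (a sketch built on QEMU's generic Notifier/exit-notifier API; the callback and variable names are illustrative, not the patch itself):

    static Notifier tap_exit_notifier;

    static void tap_exit_notify(Notifier *notifier, void *data)
    {
        /* run the TAP down_script here */
    }

    /* at setup time */
    tap_exit_notifier.notify = tap_exit_notify;
    qemu_add_exit_notifier(&tap_exit_notifier);

    /* in net_cleanup(): clean up now and drop the notifier so it cannot
     * dangle when exit() later walks the notifier list */
    qemu_remove_exit_notifier(&tap_exit_notifier);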

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20160711144847.16651-1-marcandre.lureau@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-13 13:29:56 +02:00
Kevin Wolf
8daea51095 block/qdev: Allow node name for drive properties
If a node name instead of a BlockBackend name is specified for a guest
device's drive property, an anonymous BlockBackend is now created.

The order of operations in release_drive() must be reversed in order to
avoid a use-after-free bug because now blk_detach_dev() frees the last
reference if an anonymous BlockBackend is used.

usb-storage uses a hack where it forwards its BlockBackend as a property
to another device that it internally creates. This hack must be updated
so that it doesn't drop its original BB before it can be passed to the
other device. This used to work because we always had the monitor
reference around, but with node-names the device reference is the only
one now.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-07-13 13:28:00 +02:00
Paolo Bonzini
0b8b8753e4 coroutine: move entry argument to qemu_coroutine_create
In practice the entry argument is always known at creation time, and
it is confusing that sometimes qemu_coroutine_enter is used with a
non-NULL argument to re-enter a coroutine (this happens in
block/sheepdog.c and tests/test-coroutine.c).  So pass the opaque value
at creation time, for consistency with e.g. aio_bh_new.

Mostly done with the following semantic patch:

@ entry1 @
expression entry, arg, co;
@@
- co = qemu_coroutine_create(entry);
+ co = qemu_coroutine_create(entry, arg);
  ...
- qemu_coroutine_enter(co, arg);
+ qemu_coroutine_enter(co);

@ entry2 @
expression entry, arg;
identifier co;
@@
- Coroutine *co = qemu_coroutine_create(entry);
+ Coroutine *co = qemu_coroutine_create(entry, arg);
  ...
- qemu_coroutine_enter(co, arg);
+ qemu_coroutine_enter(co);

@ entry3 @
expression entry, arg;
@@
- qemu_coroutine_enter(qemu_coroutine_create(entry), arg);
+ qemu_coroutine_enter(qemu_coroutine_create(entry, arg));

@ reentry @
expression co;
@@
- qemu_coroutine_enter(co, NULL);
+ qemu_coroutine_enter(co);

except for the aforementioned few places where the semantic patch
stumbled (as expected) and for test_co_queue, which would otherwise
produce an uninitialized variable warning.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Paolo Bonzini
7e70cdba9f test-coroutine: prepare for the next patch
The next patch moves the coroutine argument from first-enter to
creation time.  In this case, the coroutine argument has not yet been
initialized when the coroutine is created, so change it to a pointer.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Paolo Bonzini
7d9c858137 coroutine: use QSIMPLEQ instead of QTAILQ
CoQueue does not need to remove any element but the head of the list;
processing is always strictly FIFO.  Therefore, the simpler singly-linked
QSIMPLEQ can be used instead.
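
The point that strict FIFO processing only ever removes the head and appends at the tail, so a singly-linked list with a tail pointer suffices, can be shown with a standalone sketch (this mirrors the idea behind QSIMPLEQ, not QEMU's macros themselves):

    #include <assert.h>
    #include <stddef.h>

    typedef struct Node { struct Node *next; int value; } Node;
    /* 'tail' points at the last node's 'next' field (or at 'head' if empty) */
    typedef struct Fifo { Node *head; Node **tail; } Fifo;

    static void fifo_init(Fifo *q) { q->head = NULL; q->tail = &q->head; }

    static void fifo_push(Fifo *q, Node *n)
    {
        n->next = NULL;
        *q->tail = n;
        q->tail = &n->next;
    }

    static Node *fifo_pop(Fifo *q)
    {
        Node *n = q->head;
        if (n) {
            q->head = n->next;
            if (!q->head) {
                q->tail = &q->head;
            }
        }
        return n;
    }

    int main(void)
    {
        Fifo q;
        Node a = { .value = 1 }, b = { .value = 2 };
        fifo_init(&q);
        fifo_push(&q, &a);
        fifo_push(&q, &b);
        assert(fifo_pop(&q) == &a);
        assert(fifo_pop(&q) == &b);
        assert(fifo_pop(&q) == NULL);
        return 0;
    }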

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Fam Zheng
5af7045bd0 raw-posix: Use qemu_dup
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Fam Zheng
761d1ddf25 osdep: Introduce qemu_dup
And use it in qemu_dup_flags.
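
A self-contained sketch of such a helper (POSIX only; QEMU's real implementation may differ in details such as error handling or a Windows code path):

    #include <fcntl.h>
    #include <unistd.h>

    /* Duplicate a file descriptor with close-on-exec set on the copy. */
    int qemu_dup_sketch(int fd)
    {
    #ifdef F_DUPFD_CLOEXEC
        return fcntl(fd, F_DUPFD_CLOEXEC, 0);
    #else
        int ret = dup(fd);
        if (ret != -1) {
            fcntl(ret, F_SETFD, fcntl(ret, F_GETFD) | FD_CLOEXEC);
        }
        return ret;
    #endif
    }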

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
6aae5be6a7 blockjob: Update description of the 'device' field in the QMP API
The 'device' field in all BLOCK_JOB_* events and 'block-job-*' commands
is no longer the device name, but the ID of the job. This patch
updates the documentation to clarify that.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
a5d5a3bdbd qemu-img: Set the ID of the block job in img_commit()
img_commit() creates a block job without an ID. This is no longer
allowed now that we require it to be unique and well-formed. We were
solving this by having a fallback in block_job_create(), but now that
we extended the API of commit_active_start() we can finally set an
explicit ID and revert that change.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
fd62c609ed commit: Add 'job-id' parameter to 'block-commit'
This patch adds a new optional 'job-id' parameter to 'block-commit',
allowing the user to specify the ID of the block job to be created.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
2323322ed0 stream: Add 'job-id' parameter to 'block-stream'
This patch adds a new optional 'job-id' parameter to 'block-stream',
allowing the user to specify the ID of the block job to be created.

The HMP 'block_stream' command remains unchanged.
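
Over QMP, the new parameter can be used like this (the device and job names are illustrative):

    { "execute": "block-stream",
      "arguments": { "device": "drive0", "job-id": "stream0" } }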

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
70559d499c backup: Add 'job-id' parameter to 'blockdev-backup' and 'drive-backup'
This patch adds a new optional 'job-id' parameter to 'blockdev-backup'
and 'drive-backup', allowing the user to specify the ID of the block
job to be created.

The HMP 'drive_backup' command remains unchanged.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
71aa98678c mirror: Add 'job-id' parameter to 'blockdev-mirror' and 'drive-mirror'
This patch adds a new optional 'job-id' parameter to 'blockdev-mirror'
and 'drive-mirror', allowing the user to specify the ID of the block
job to be created.

The HMP 'drive_mirror' command remains unchanged.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
7f0317cfc8 blockjob: Add 'job_id' parameter to block_job_create()
When a new job is created, the job ID is taken from the device name of
the BDS. This patch adds a new 'job_id' parameter to let the caller
provide one instead.

This patch also verifies that the ID is always unique and well-formed.
This causes problems in a couple of places where no ID is being set,
because the BDS does not have a device name.

In the case of test_block_job_start() (from test-blockjob-txn.c) we
can simply use this new 'job_id' parameter to set the missing ID.

In the case of img_commit() (from qemu-img.c) we still don't have the
API to make commit_active_start() set the job ID, so we solve it by
setting a default value. We'll get rid of this as soon as we extend
the API.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
3ddf3efefa block: Use block_job_get() in find_block_job()
find_block_job() looks for a block backend with a specified name,
checks whether it has a block job and acquires its AioContext.

We want to identify jobs by their ID and not by the block backend
they're attached to, so this patch ignores the backends altogether and
gets the job directly. Apart from making the code simpler, this will
allow us to find block jobs once they start having user-specified IDs.

To ensure backward compatibility we keep ERROR_CLASS_DEVICE_NOT_ACTIVE
as the error class if the job doesn't exist. In subsequent patches
we'll also need to keep the device name as the default job ID if the
user doesn't specify a different one.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
ffb1f10cd1 blockjob: Add block_job_get()
Currently the way to look for a specific block job is to iterate the
list manually using block_job_next().

Since we want to be able to identify a job primarily by its ID it
makes sense to have a function that does just that.
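
Plausibly, the helper amounts to a linear search by ID over the existing iterator (a sketch; the actual function may differ in details such as NULL handling):

    BlockJob *block_job_get(const char *id)
    {
        BlockJob *job;

        for (job = block_job_next(NULL); job; job = block_job_next(job)) {
            if (job->id && !strcmp(id, job->id)) {
                return job;
            }
        }

        return NULL;
    }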

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
9df229c3ca blockjob: Update description of the 'id' field
The 'id' field of the BlockJob structure will be able to hold any ID,
not only a device name. This patch updates the description of that
field and the error messages where it is being used.

Soon we'll add the ability to set an arbitrary ID when creating a
block job.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Alberto Garcia
29338003c9 stream: Fix prototype of stream_start()
'stream-start' has a parameter called 'backing-file', which is the
string to be written to bs->backing when the job finishes.

In the stream_start() implementation it is called 'backing_file_str',
but in the prototype in the header file it is called 'base_id'.

This patch fixes it so the name is the same in both cases and is
consistent with other cases (like commit_start()).

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-07-13 13:26:02 +02:00
Fam Zheng
d27ba624aa util: Fix MIN_NON_ZERO
MIN_NON_ZERO(1, 0) evaluates to 0. Rewrite the macro to fix it.
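
One way to write the macro so that a zero argument means "no limit" (this mirrors the intent described above and is not necessarily the exact osdep.h definition):

    /* Smaller of a and b, except that 0 is treated as "no limit":
     * MIN_NON_ZERO(1, 0) == 1, MIN_NON_ZERO(0, 7) == 7, MIN_NON_ZERO(4, 2) == 2 */
    #define MIN_NON_ZERO(a, b) \
        ((a) == 0 ? (b) : ((b) == 0 ? (a) : ((a) < (b) ? (a) : (b))))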

Reported-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1468306113-847-1-git-send-email-famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-13 12:55:11 +02:00
Peter Maydell
9ec3025660 OpenBIOS: switch over to official OpenBIOS git repo
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.12 (GNU/Linux)
 
 iQEcBAABAgAGBQJXhTJ1AAoJEFvCxW+uDzIfdgkIAK9cJSPegg9FbP/60/9VwvzA
 EEdP+FMQU9G8Wm3sW9l8TpJh44C/bcvyD4K0Ch/THPqXI/LMavuyWvLckOrcCfid
 OTkQmMTaqmHSb2eE4V282K2BzKt5akK9ZQManY4prC1zsCmvi5LCgeKEWNc1zFh0
 gt/yp6GEWfyoPocEGwgGTgNieOlBBxmoaHWFqYCvMswlaGGY++4jtV7qRz8C+JYo
 GH9FeqJICBgtJm/9plgpi2Bf8Ti8hn5mTL7AaZKp71cU6ZOYClOp16StOZjEZdh6
 OWVcmwxxPWyjuhTD1MXMmH2HdYQWdIi9CE3iMvsrm6R/mJLIs0bEg/c4w49cNa0=
 =wupm
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/mcayland/tags/qemu-openbios-signed' into staging

OpenBIOS: switch over to official OpenBIOS git repo

# gpg: Signature made Tue 12 Jul 2016 19:09:57 BST
# gpg:                using RSA key 0x5BC2C56FAE0F321F
# gpg: Good signature from "Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>"
# Primary key fingerprint: CC62 1AB9 8E82 200D 915C  C9C4 5BC2 C56F AE0F 321F

* remotes/mcayland/tags/qemu-openbios-signed:
  OpenBIOS: switch over to official OpenBIOS git repo

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-07-13 11:15:35 +01:00
Mark Cave-Ayland
fa3007e7ec OpenBIOS: switch over to official OpenBIOS git repo
This update should preserve git history and switches git.qemu-project.org
over to be a mirror of the new official git repo hosted at
https://github.com/openbios, created from a git-svn import of the old
coreboot SVN repository. All prior history from the SVN repository should
still be preserved (i.e. commit hashes are the same for historical
commits).

No other source changes are made by this commit, since both the old and
new HEADs contain the same source tree (albeit with different metadata),
whilst the previous git-svn HEAD can still be retrieved via the svn-head
branch.

Proposed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
2016-07-12 19:07:56 +01:00
Richard Henderson
f9c816c00c target-sparc: Elide duplicate updates to fprs
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:03:01 -07:00
Richard Henderson
02c79d7885 target-sparc: Use cpu_loop_exit_restore from helper_check_ieee_exceptions
This avoids needing to save state before every FP operation.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:02:58 -07:00
Richard Henderson
ba2397d1ca target-sparc: Use cpu_fsr in stfsr
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:02:56 -07:00
Richard Henderson
7385aed20d target-sparc: Use explicit writes to cpu_fsr
By arranging for explicit writes to cpu_fsr after floating point
operations, we are able to mark the helpers as not writing to
tcg globals, which means that we don't need to invalidate the
integer register set across said calls.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:02:52 -07:00
Richard Henderson
f2fe396f0f target-sparc: Remove helper_ldf_asi, helper_stf_asi
We've now implemented all fp asis inline, except for the no-fault
memory reads.  The latter can be passed directly to helper_ld_asi.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:02:48 -07:00
Richard Henderson
ca5ce5723f target-sparc: Directly implement block and short ldf/stf asis
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:02:45 -07:00
Richard Henderson
7705091ca4 target-sparc: Directly implement easy ldf/stf asis
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:02:43 -07:00
Richard Henderson
6850811e7c target-sparc: Pass TCGMemOp constants to helper_ld/st_asi
Reduces the argument count for helper_ld_asi; do the same for
helper_st_asi for consistency.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12 11:02:40 -07:00