Add interfaces to open and create files for the proxy file system driver.
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Provide root-privileged access to the QEMU 9p proxy filesystem using socket
communication.
The proxy helper is started by the root user as:
~ # virtfs-proxy-helper -f|--fd <socket descriptor> -p|--path <path-to-share>
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Add a new proxy filesystem driver that gives the qemu process root-privileged
filesystem access. It requires a helper process started by the root user.
The following command line can be used with the proxy filesystem driver:
-virtfs proxy,id=<id>,mount_tag=<tag>,socket_fd=<socket-fd>
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Move the p9 marshaling/unmarshaling code to a separate file so that the
proxy filesystem driver can use these calls. Also make the marshaling
code generic, accepting "struct iovec" instead of V9fsPDU.
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
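A minimal sketch of what an iovec-based marshaling helper could look like; the
name and prototype are illustrative, not the actual QEMU code:

#include <sys/types.h>
#include <sys/uio.h>
#include <string.h>

/* Illustrative sketch: copy 'size' bytes from 'src' into a scatter
 * list at byte offset 'offset'.  Taking struct iovec directly lets
 * both the PDU path and the proxy driver share the routine. */
static ssize_t pack_iov(struct iovec *sg, int sg_count, size_t offset,
                        const void *src, size_t size)
{
    size_t done = 0;

    for (int i = 0; i < sg_count && done < size; i++, sg++) {
        if (offset >= sg->iov_len) {
            offset -= sg->iov_len;      /* skip fully consumed elements */
            continue;
        }
        size_t copy = sg->iov_len - offset;
        if (copy > size - done) {
            copy = size - done;
        }
        memcpy((char *)sg->iov_base + offset, (const char *)src + done, copy);
        done += copy;
        offset = 0;
    }
    return done == size ? (ssize_t)done : -1;
}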
This removes all conditional code from the common code path and
makes option validation an FSDriver callback.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
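A sketch of the shape of the change; the type and field names here are
illustrative:

/* Illustrative sketch: each fs driver supplies its own option
 * validator, so common code dispatches instead of special-casing
 * driver names. */
typedef struct FsDriverOpsSketch {
    const char *name;
    int (*validate_opts)(void *opts);   /* return 0 if options are valid */
} FsDriverOpsSketch;

static int fsdev_validate_opts(const FsDriverOpsSketch *drv, void *opts)
{
    /* no driver-specific conditionals left in the common path */
    return drv->validate_opts ? drv->validate_opts(opts) : 0;
}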
* qemu-kvm/memory/page_desc: (22 commits)
Remove cpu_get_physical_page_desc()
sparc: avoid cpu_get_physical_page_desc()
virtio-balloon: avoid cpu_get_physical_page_desc()
vhost: avoid cpu_get_physical_page_desc()
kvm: avoid cpu_get_physical_page_desc()
memory: remove CPUPhysMemoryClient
xen: convert to MemoryListener API
memory: temporarily add memory_region_get_ram_addr()
xen, vga: add API for registering the framebuffer
vhost: convert to MemoryListener API
kvm: convert to MemoryListener API
kvm: switch kvm slots to use host virtual address instead of ram_addr_t
memory: add API for observing updates to the physical memory map
memory: replace cpu_physical_sync_dirty_bitmap() with a memory API
framebuffer: drop use of cpu_physical_sync_dirty_bitmap()
loader: remove calls to cpu_get_physical_page_desc()
framebuffer: drop use of cpu_get_physical_page_desc()
memory: introduce memory_region_find()
memory: add memory_region_is_logging()
memory: add memory_region_is_rom()
...
Make's multiple output syntax
x.c x.h: x.template
gen < x.template
actually invokes the command once for x.c and once for x.h (with differing $@
in each invocation). During a parallel build, the two commands may be invoked
in parallel; this opens up a race, where the second invocation trashes a file
supposedly produced during the first, and now in use by a dependent command.
The various qapi code generators are susceptible to this; fix by making them
generate just one file per invocation.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
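For illustration, the race-free pattern gives each output its own rule, in the
style of the example above (the -c/-h generator flags are hypothetical):

# each invocation produces exactly one file, so a parallel make can
# run both rules safely
x.c: x.template
gen -c < x.template > $@
x.h: x.template
gen -h < x.template > $@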
* aneesh/for-upstream:
scripts/analyse-9p-simpletrace.py: Add symbolic names for 9p operations.
hw/9pfs: iattr_valid flags are kernel internal flags map them to 9p values.
hw/9pfs: Use the correct signed type for different variables
hw/9pfs: replace iovec manipulation with QEMUIOVector
* bonzini/nbd-for-anthony: (26 commits)
nbd: add myself as maintainer
qemu-nbd: throttle requests
qemu-nbd: asynchronous operation
qemu-nbd: add client pointer to NBDRequest
qemu-nbd: move client handling to nbd.c
qemu-nbd: use common main loop
link the main loop and its dependencies into the tools
qemu-nbd: introduce NBDRequest
qemu-nbd: introduce NBDExport
qemu-nbd: introduce nbd_do_receive_request
qemu-nbd: more robust handling of invalid requests
qemu-nbd: introduce nbd_do_send_reply
qemu-nbd: simplify nbd_trip
move corking functions to osdep.c
qemu-nbd: remove data_size argument to nbd_trip
qemu-nbd: remove offset argument to nbd_trip
Update ioctl order in nbd_init() to detect EBUSY
nbd: add support for NBD_CMD_TRIM
nbd: add support for NBD_CMD_FLUSH
nbd: add support for NBD_CMD_FLAG_FUA
...
qemu-kvm passes NUMA/SRAT topology information for smp_cpus to SeaBIOS. However,
SeaBIOS always expects to set up max_cpus SRAT CPU entries
(the MaxCountCPUs variable in SeaBIOS's build_srat function). When qemu-kvm runs
with smp_cpus != max_cpus (e.g. -smp 2,maxcpus=4), SeaBIOS will mistakenly use
memory SRAT info to set up CPU SRAT entries for the offline CPUs. Wrong
SRAT memory entries are also created. This breaks NUMA in a guest.
Fix by setting up SRAT info for max_cpus in qemu-kvm.
Signed-off-by: Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
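Illustrative sketch of the fix; the fw_cfg layout and names are assumptions,
not the actual interface:

#include <stdint.h>

/* Sketch: emit a node assignment for every possible CPU (max_cpus),
 * not just the booted ones (smp_cpus), so the BIOS can build one SRAT
 * CPU entry per possible CPU and leave the memory entries intact. */
static void numa_fw_cfg_sketch(uint64_t *fw_cfg, int max_cpus,
                               const int *cpu_to_node, int nb_numa_nodes)
{
    fw_cfg[0] = nb_numa_nodes;              /* assumed: node count first */
    for (int i = 0; i < max_cpus; i++) {
        fw_cfg[i + 1] = cpu_to_node[i];     /* node of each possible CPU */
    }
}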
The latter was already commented out; the former is redundant as well.
We always get the latest changes after returning from the guest via
kvm_arch_post_run.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Keep a per-VCPU xsave buffer for kvm_put/get_xsave instead of
continuously allocating and freeing it on state sync.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
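Sketch of the pattern; names are illustrative:

#include <stdlib.h>

/* Sketch: allocate the xsave area once when the VCPU is created and
 * reuse it for every kvm_put_xsave/kvm_get_xsave, instead of a
 * malloc/free pair on each state sync. */
struct VCPUSketch {
    void *xsave_buf;                /* lives for the VCPU's lifetime */
};

static int vcpu_init_xsave_sketch(struct VCPUSketch *cpu, size_t xsave_size)
{
    cpu->xsave_buf = calloc(1, xsave_size);
    return cpu->xsave_buf ? 0 : -1;
}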
Fields 0 (FCW+FSW) and 1 (FTW+FOP) were hard-coded so far.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Limiting the number of in-flight requests is implemented very simply
with a can_read callback. It does not require a semaphore, unlike the
client side in block/nbd.c, because we can throttle the creation of
coroutines directly. The client side can have a coroutine created at
any time an I/O request is made.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
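A minimal sketch of the throttling described above, assuming an illustrative
client structure:

#define MAX_NBD_REQUESTS_SKETCH 16

/* Sketch: while the limit is reached, can_read reports that no more
 * data should be read from the socket, so no new request coroutine is
 * created; no semaphore is needed. */
struct ThrottleClientSketch {
    int nb_requests;                /* requests currently in flight */
};

static int nbd_can_read_sketch(void *opaque)
{
    struct ThrottleClientSketch *client = opaque;
    return client->nb_requests < MAX_NBD_REQUESTS_SKETCH;
}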
Using coroutines enables asynchronous operation on both the network and
the block side. The network can be owned by two coroutines at the same time,
one writing and one reading. On the send side, mutual exclusion is
guaranteed by a CoMutex. On the receive side, mutual exclusion is
guaranteed because new coroutines immediately start receiving data,
and no new coroutines are created as long as the previous one is receiving.
Between receive and send, qemu-nbd can have an arbitrary number of
in-flight block transfers. Throttling is implemented by the next
patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
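Sketch of the send-side locking, using QEMU's coroutine primitives; the
surrounding structure is illustrative:

#include <stddef.h>
#include "qemu-coroutine.h"

/* Sketch: many in-flight requests share one socket, so replies are
 * serialised with a CoMutex; the receive side needs no lock because
 * only one coroutine reads at a time. */
typedef struct SendSideSketch {
    CoMutex send_lock;
    int sock;
} SendSideSketch;

static void coroutine_fn send_reply_sketch(SendSideSketch *s,
                                           const void *buf, size_t len)
{
    qemu_co_mutex_lock(&s->send_lock);
    /* write the reply header and payload to s->sock here */
    qemu_co_mutex_unlock(&s->send_lock);
}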
By attaching a client to an NBDRequest, we can avoid passing around the
socket descriptor and data buffer.
Also, we can now manage the reference count for the client in
nbd_request_get/put instead of having to do it ourselves in
nbd_read. This simplifies things when coroutines are used.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
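Sketch of the ownership scheme; these are illustrative types, not the actual
nbd.c definitions:

#include <stdint.h>
#include <stdlib.h>

/* Sketch: a request pins its client, so the socket and buffers stay
 * valid while the request is in flight; get/put manage the count. */
typedef struct ClientSketch {
    int refcount;
} ClientSketch;

typedef struct RequestSketch {
    ClientSketch *client;           /* instead of passing csock around */
    uint8_t *data;
} RequestSketch;

static RequestSketch *request_get_sketch(ClientSketch *client)
{
    RequestSketch *req = calloc(1, sizeof(*req));
    if (req) {
        req->client = client;
        client->refcount++;         /* ref handled here, not in nbd_read */
    }
    return req;
}

static void request_put_sketch(RequestSketch *req)
{
    if (--req->client->refcount == 0) {
        /* last user gone: the client can be torn down here */
    }
    free(req);
}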
This patch sets up the fd handler in nbd.c instead of qemu-nbd.c. It
introduces NBDClient, which wraps the arguments to nbd_trip in a single
structure, so that we can add a notifier to it. This way, qemu-nbd can
know about disconnections.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
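Illustrative sketch of the wrapper:

/* Sketch: NBDClient bundles everything nbd_trip used to take as
 * separate arguments, plus a hook that fires on disconnect so
 * qemu-nbd can react.  Names are illustrative. */
typedef struct NBDExportSketch NBDExportSketch;

typedef struct NBDClientWrapSketch {
    int sock;
    NBDExportSketch *exp;
    void (*close_notifier)(struct NBDClientWrapSketch *client);
} NBDClientWrapSketch;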
Using a single main loop for sockets makes it easier to yield from the
socket coroutine back to the main loop, and to reenter it later.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using the main loop code from QEMU enables tools to operate fully
asynchronously. Advantages include better Windows portability (for some
definition of portability) over glib's.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the buffer from NBDExport to a new structure, so that it will be
possible to have multiple in-flight requests for the same export
(and for the same client too---we get that for free).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Group the sending of a reply and the associated data into a new function.
Without corking, the caller would be forced to leave 12 free bytes at the
beginning of the data pointer. Not too ugly, but still ugly. :)
Using nbd_do_send_reply everywhere will help later, when the routine
sets up the write handler that re-enters the send coroutine.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use TCP_CORK to remove a violation of encapsulation that would later
require nbd_trip to know too much about an NBD reply.
We could also switch to sendmsg (qemu_co_sendv) later; it is even
easier once coroutines are in.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
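For reference, corking on Linux looks like this (a portable wrapper would have
to guard the non-Linux case):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Hold partial frames in the kernel while corked: the reply header
 * and the payload can be written separately but leave the host as one
 * segment once the cork is released. */
static void set_cork_sketch(int fd, int on)
{
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
}

A reply path can then cork, write the header and the payload with two plain
send() calls, and uncork, without nbd_trip knowing the reply layout.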
Update the ioctls in nbd_init() to detect a busy device early.
Currently nbd_init() issues NBD_CLEAR_SOCK before NBD_SET_SOCK. If
"qemu-nbd -c /dev/nbd0 disk.img" is issued twice, the second invocation
won't detect EBUSY in nbd_init(); instead nbd_client will report EBUSY
and clear the socket (breaking the first invocation too, since its
socket is gone).
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
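Sketch of the reordered initialisation, with error handling trimmed:

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/nbd.h>

/* Sketch: bind the socket first, so a device that is already in use
 * fails with EBUSY immediately instead of clobbering the existing
 * connection's socket via NBD_CLEAR_SOCK. */
static int nbd_init_sketch(int nbd_fd, int sock)
{
    if (ioctl(nbd_fd, NBD_SET_SOCK, sock) < 0) {
        return -errno;              /* EBUSY detected up front */
    }
    /* size/flags setup follows on success */
    return 0;
}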
Allow sending up to 16 requests, and drive the replies to the coroutine
that issued the request. The code is written to be exactly the same as
before this patch when MAX_NBD_REQUESTS == 1 (modulo the extra mutex
and state).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
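Sketch of the dispatch idea; the handle encoding here is an assumption:

#include <stdint.h>

#define MAX_NBD_REQUESTS_SKETCH 16

/* Sketch: one slot per outstanding request; the reader looks at the
 * reply handle and wakes exactly the requester that owns that slot. */
struct ReplyMuxSketch {
    void (*wake)(void *requester);          /* e.g. re-enter its coroutine */
    void *requester[MAX_NBD_REQUESTS_SKETCH];
};

static void dispatch_reply_sketch(struct ReplyMuxSketch *mux, uint64_t handle)
{
    unsigned slot = handle % MAX_NBD_REQUESTS_SKETCH;   /* assumed encoding */
    if (mux->requester[slot]) {
        mux->wake(mux->requester[slot]);
    }
}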
qemu-nbd has a limit of slightly less than 1M per request. Work
around this in the nbd block driver.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
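Illustrative sketch of the workaround; the exact ceiling and the callback are
assumptions:

/* Sketch: split any transfer that exceeds the server's per-request
 * ceiling into chunks the server will accept. */
#define NBD_MAX_SECTORS_SKETCH 2040     /* just under 1M in 512-byte sectors */

static int nbd_rw_split_sketch(void *bs, long sector_num, int nb_sectors,
                               int (*do_rw)(void *bs, long sector, int n))
{
    while (nb_sectors > 0) {
        int n = nb_sectors < NBD_MAX_SECTORS_SKETCH
                ? nb_sectors : NBD_MAX_SECTORS_SKETCH;
        int ret = do_rw(bs, sector_num, n);
        if (ret < 0) {
            return ret;
        }
        sector_num += n;
        nb_sectors -= n;
    }
    return 0;
}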
Outside coroutines, avoid busy waiting on EAGAIN by temporarily
making the socket blocking.
The API of qemu_recvv/qemu_sendv is slightly different from
do_readv/do_writev because they do not handle coroutines: they
return the number of bytes transferred before encountering an
EAGAIN. The yielding on EAGAIN is handled entirely in
qemu-coroutine.c.
Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
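A sketch of the described contract; the real qemu_sendv prototype may differ:

#include <errno.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Sketch: write as much of the iovec as possible; on EAGAIN report
 * the bytes already sent rather than spinning, leaving the retry
 * policy (yield in a coroutine, or block the socket) to the caller. */
static ssize_t sendv_sketch(int fd, struct iovec *iov, int iovcnt)
{
    ssize_t done = 0;

    while (iovcnt > 0) {
        ssize_t n = writev(fd, iov, iovcnt);
        if (n < 0) {
            if (errno == EAGAIN) {
                return done;        /* partial progress; caller retries */
            }
            return -errno;
        }
        done += n;
        while (iovcnt > 0 && (size_t)n >= iov->iov_len) {
            n -= iov->iov_len;      /* skip fully written elements */
            iov++;
            iovcnt--;
        }
        if (iovcnt > 0) {
            iov->iov_base = (char *)iov->iov_base + n;
            iov->iov_len -= n;      /* resume mid-element next round */
        }
    }
    return done;
}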
There's no need to check whether a port can accept incoming data from the
guest each time the guest sends data. Instead, check whether the port
implements such functionality once, during port initialisation.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
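Sketch of the idea; the field names are illustrative, not the virtio-serial
API:

#include <stdbool.h>
#include <stddef.h>

/* Sketch: record once, when the port is initialised, whether it can
 * consume guest data, instead of testing on every transfer. */
typedef struct PortSketch {
    void (*have_data)(struct PortSketch *port, const void *buf, size_t len);
    bool accepts_guest_data;        /* cached at init time */
} PortSketch;

static void port_init_sketch(PortSketch *port)
{
    port->accepts_guest_data = (port->have_data != NULL);
}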