Commit Graph

28 Commits

Peter Maydell
21cbfe5f37 xen: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.
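
As an illustration (not taken from the patch itself; the second include is a
hypothetical example), the resulting convention in an affected .c file is:

    /* Sketch of the include ordering clean-includes enforces. */
    #include "qemu/osdep.h"          /* always first; it pulls in the common
                                        system headers such as stdint.h */
    #include "hw/xen/xen_backend.h"  /* subsystem headers follow, without
                                        re-including what osdep.h implies */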

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-14-git-send-email-peter.maydell@linaro.org
2016-01-29 15:07:23 +00:00
Ian Campbell
e0cb42ae4b xen: Switch uses of xc_map_foreign_{pages,bulk} to use libxenforeignmemory API.
In Xen 4.7 we are refactoring parts of libxenctrl into a number of
separate libraries which will provide backward and forward API and ABI
compatibility.

One such library will be libxenforeignmemory which provides access to
privileged foreign mappings and which will provide an interface
equivalent to xc_map_foreign_{pages,bulk}.

The new xenforeignmemory_map() function behaves like
xc_map_foreign_pages() when the err argument is NULL and like
xc_map_foreign_bulk() when err is non-NULL, so the shim here simply
checks whether err == NULL and calls the appropriate old function.

Note that xenforeignmemory_map() takes the number of pages before the
arrays themselves, in order to support potential future use of
variable-length arrays in the prototype (once Xen's baseline toolchain
requirements are new enough to ensure VLAs are supported).
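
As a hedged sketch (argument names are illustrative and the real compat code
in xen_common.h may differ in detail), the <=4.6 shim described above
amounts to dispatching on err:

    /* Sketch of the compat shim: choose the old libxenctrl call based on
     * whether the caller asked for per-page error reporting. */
    static inline void *xenforeignmemory_map(xc_interface *fmem, uint32_t dom,
                                             int prot, size_t pages,
                                             const xen_pfn_t *arr, int *err)
    {
        if (err) {
            return xc_map_foreign_bulk(fmem, dom, prot, arr, err, pages);
        } else {
            return xc_map_foreign_pages(fmem, dom, prot, arr, pages);
        }
    }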

In preparation for adding support for libxenforeignmemory, add shims to
the <=4.0 and <=4.6 compat code in xen_common.h to allow us to switch
to using the new API. These shims will disappear for versions of Xen
which include libxenforeignmemory.

Since libxenforeignmemory will have its own handle type, but for <=4.6
the functionality is provided through a libxenctrl handle, we introduce
a new global xen_fmem alongside the existing xen_xc. In fact we make
xen_fmem a pointer to the existing xen_xc, which works correctly with
both <=4.0 (where the xc handle is an int) and <=4.6 (where the xc
handle is a pointer). In the latter case xen_fmem is actually a double
indirect pointer, but it all falls out in the wash.

Unlike libxenctrl, libxenforeignmemory has an explicit unmap function
rather than just specifying that munmap should be used, so the unmap
paths are updated to use xenforeignmemory_unmap, which is a shim for
munmap on these versions of Xen. The mappings in xen-hvm.c do not
appear to be unmapped (which makes sense for a qemu-dm process).

In fb_disconnect this results in a change from simply mmap()ing over
the existing mapping (with an implicit munmap) to explicitly unmapping
with xenforeignmemory_unmap and then mapping the required anonymous
memory into the same hole. I don't think this is a problem, since any
other thread which was racily touching this region would already be
running the risk of hitting the mapping halfway through the call. If
this is thought to be a problem then we could consider adding an extra
API to the libxenforeignmemory interface to replace a foreign mapping
with anonymous shared memory, but I'd prefer not to.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2016-01-26 17:19:35 +00:00
Markus Armbruster
012aef0734 maint: avoid useless "if (foo) free(foo)" pattern
My Coccinelle semantic patch finds a few more, because it also fixes up
the equally pointless conditional

    if (foo) {
        free(foo);
        foo = NULL;
    }
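
(Illustrative only, not the squashed result itself: because free(NULL) is
defined as a no-op by the C standard, the block above can simply become the
unconditional form.)

    free(foo);
    foo = NULL;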

Result (feel free to squash it into your patch):

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-09-11 10:21:38 +03:00
Paolo Bonzini
86a6a9bf55 xen: add a lock for the mapcache
Extend the existing dummy mapcache_lock/unlock macros to cover all of
xen-mapcache.c.  This prepares for unlocked memory access, when parts
of exec.c will not be protected by the BQL.
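
A minimal sketch of the shape of the change (assuming a QemuMutex; the
variable name is illustrative, not necessarily the one used):

    /* Sketch: the formerly-dummy macros now take a real mutex, so the
     * mapcache stays consistent when callers no longer hold the BQL. */
    static QemuMutex mapcache_mutex;

    #define mapcache_lock()   qemu_mutex_lock(&mapcache_mutex)
    #define mapcache_unlock() qemu_mutex_unlock(&mapcache_mutex)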

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2015-01-20 14:24:17 +00:00
Paolo Bonzini
9b6d7b365d xen: do not use __-named variables in mapcache
Keep the namespace clean.
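
For example (illustrative; identifiers beginning with a double underscore
are reserved for the implementation in C):

    unsigned long __test_bit_size;   /* before: reserved-namespace name */
    unsigned long test_bit_size;     /* after */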

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2015-01-20 14:24:13 +00:00
Stefano Stabellini
643f593224 xen: build on ARM
Collection of fixes to build QEMU with Xen support on ARM:
- use xenstore_read_fe_uint64 to retrieve the page-ref (xenfb);
- use xen_pfn_t instead of unsigned long in xenfb;
- fix unsigned long/xen_pfn_t usage in xen_remove_from_physmap;
- in xen-mapcache.c use HOST_LONG_BITS to check QEMU's address space
  size (see the sketch below).
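
For the last item, a hedged sketch of the kind of check involved (the
constants are illustrative, not necessarily the values used):

    /* Sketch: size the mapcache according to QEMU's own virtual address
     * space (HOST_LONG_BITS), not the guest's word size. */
    #if HOST_LONG_BITS == 32
    #  define MCACHE_BUCKET_SHIFT 16
    #  define MCACHE_MAX_SIZE     (1UL << 31)   /* cap on a 32-bit host */
    #else
    #  define MCACHE_BUCKET_SHIFT 20
    #endif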

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2014-07-07 10:37:40 +00:00
Paolo Bonzini
0d09e41a51 hw: move headers to include/
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-04-08 18:13:10 +02:00
Hanweidong
044d4e1aae xen-mapcache: pass the right size argument to test_bits
Compute the correct size for test_bits().
qemu_get_ram_ptr() and qemu_safe_ram_ptr() will call xen_map_cache()
with a size of 0 if the requested address is in RAM. xen_map_cache()
then passes size 0 to test_bits() to check whether the corresponding
pfn was mapped in the cache. But test_bits() always returns 1 when size
is 0, without testing any bits. For this case test_bits() should
actually check one bit, so this patch introduces a __test_bit_size
which is greater than 0 and a multiple of XC_PAGE_SIZE; test_bits()
can then work correctly with __test_bit_size >> XC_PAGE_SHIFT as its
size.
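
A hedged sketch of the computation (variable names follow the description
above; the actual code may differ):

    /* Sketch: round the request up so at least one page's bit is tested. */
    if (size) {
        __test_bit_size = size + (phys_addr & (XC_PAGE_SIZE - 1));
        if (__test_bit_size % XC_PAGE_SIZE) {
            __test_bit_size += XC_PAGE_SIZE - (__test_bit_size % XC_PAGE_SIZE);
        }
    } else {
        __test_bit_size = XC_PAGE_SIZE;   /* size == 0: still check one page */
    }

    if (!test_bits(address_index, __test_bit_size >> XC_PAGE_SHIFT,
                   entry->valid_mapping)) {
        /* the requested range is not (fully) mapped: remap the bucket */
    }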

Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
Signed-off-by: Weidong Han <hanweidong@huawei.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2013-04-03 11:51:53 +00:00
Stefano Stabellini
e2deee3ea6 xen-mapcache: replace last_address_index with a last_entry pointer
Replace last_address_index and last_address_vaddr with a single pointer
to the last MapCacheEntry used.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2013-04-03 11:51:52 +00:00
Paolo Bonzini
9c17d615a6 softmmu: move include files to include/sysemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:32:45 +01:00
Paolo Bonzini
1de7afc984 misc: move include files to include/qemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:32:39 +01:00
Avi Kivity
a8170e5e97 Rename target_phys_addr_t to hwaddr
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
reserved) and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific).  Replace it with a finger-friendly,
standards-conformant hwaddr.

Outstanding patchsets can be fixed up with the command

  git rebase -i --exec "find . -name '*.[ch]' \
                        | xargs sed -i 's/target_phys_addr_t/hwaddr/g'" origin

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-10-23 08:58:25 -05:00
Frediano Ziglio
27b7652ef5 Fix invalidate if memory requested was not bucket aligned
When memory is mapped in qemu_map_cache with lock != 0, a reverse
mapping is created pointing to the virtual address of the location
requested. The cached entry is saved in last_address_vaddr using the
base virtual address (without the bucket offset). However, when this
entry is invalidated, the virtual address saved in the reverse mapping
is used, so the mapping is freed but last_address_vaddr is not reset.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2012-08-22 10:17:04 +00:00
Julien Grall
852a7cec90 xen-mapcache: don't unmap locked entry during mapcache invalidation
When an IOREQ_TYPE_INVALIDATE is sent to QEMU, it invalidates every
entry of the map cache, even locked ones.

QEMU has no way to know that an entry was invalidated, so when an I/O
access is requested a segfault occurs.
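
A hedged sketch of the shape of the fix (the real loop in xen-mapcache.c
differs in detail):

    /* Sketch: when invalidating the whole mapcache, leave entries that a
     * caller still holds locked mapped instead of unmapping them. */
    for (unsigned long i = 0; i < mapcache->nr_buckets; i++) {
        MapCacheEntry *entry = &mapcache->entry[i];

        if (entry->lock > 0) {
            continue;                        /* still in use: keep it */
        }
        if (entry->vaddr_base != NULL) {
            munmap(entry->vaddr_base, entry->size);
            entry->vaddr_base = NULL;
        }
    }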

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2012-04-13 17:35:06 +00:00
Anthony PERARD
09ab48ee6c Xen, mapcache: Fix the compute of the size of bucket.
Previously, the size of a mapping was computed incorrectly when there
was an offset and a size >= bucket_size.
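
A hedged sketch of the corrected computation (names illustrative):

    /* Sketch: if the offset within the first bucket pushes the request
     * past a bucket boundary, round the mapping size up so the whole
     * [offset, offset + size) range is covered. */
    if (size) {
        cache_size = size + (phys_addr & (MCACHE_BUCKET_SIZE - 1));
        if (cache_size % MCACHE_BUCKET_SIZE) {
            cache_size += MCACHE_BUCKET_SIZE - (cache_size % MCACHE_BUCKET_SIZE);
        }
    } else {
        cache_size = MCACHE_BUCKET_SIZE;
    }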

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2012-04-13 17:34:50 +00:00
Anthony PERARD
cd1ba7de23 xen mapcache: check if memory region has moved.
This patch changes the xen_map_cache behavior. Before trying to map a
guest address, the mapcache now looks into the list of address ranges
that have been moved (physmap/set_memory). There is currently one
memory region like this, the vram, which is "moved" from where it is
allocated to where the guest expects to find it.

This helps migration succeed.
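
A hedged sketch of the lookup described (structure and field names are
illustrative, not necessarily the exact QEMU ones):

    /* Sketch: translate a guest physical address through the list of
     * ranges that have been moved via the physmap mechanism. */
    static hwaddr phys_offset_to_gaddr(hwaddr phys_addr)
    {
        XenPhysmap *physmap;                 /* illustrative type */

        QLIST_FOREACH(physmap, &physmap_list, list) {
            if (phys_addr >= physmap->phys_offset &&
                phys_addr < physmap->phys_offset + physmap->size) {
                return phys_addr - physmap->phys_offset + physmap->start_addr;
            }
        }
        return phys_addr;                    /* range was never moved */
    }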

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2012-03-19 18:21:12 +00:00
Paolo Bonzini
6b620ca3b0 prepare for future GPLv2+ relicensing
All files under GPLv2 will get GPLv2+ changes starting tomorrow.
event_notifier.c and exec-obsolete.h were only ever touched by Red Hat
employees and can be relicensed now.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-01-13 10:55:56 -06:00
Stefan Hajnoczi
922453bca6 block: convert qemu_aio_flush() calls to bdrv_drain_all()
Many places in QEMU call qemu_aio_flush() to complete all pending
asynchronous I/O.  Most of these places actually want to drain all block
requests but there is no block layer API to do so.

This patch introduces the bdrv_drain_all() API to wait for requests
across all BlockDriverStates to complete.  As a bonus we perform checks
after qemu_aio_wait() to ensure that requests really have finished.
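
The conversion itself is mechanical at each call site, e.g.:

    /* Before: complete all pending asynchronous I/O. */
    qemu_aio_flush();

    /* After: wait for requests across all BlockDriverStates to complete. */
    bdrv_drain_all();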

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2011-12-05 14:56:06 +01:00
Anthony PERARD
56c119e52c xen-mapcache: Fix rlimit set size.
Previously, the address space soft limit was set to mcache_max_size.
So, before mcache_max_size was reached by the mapcache, QEMU was killed
for overuse of the virtual address space.

This patch fixes that by setting the soft limit to the maximum QEMU can
have, so the soft and hard limits are always set to RLIM_INFINITY if
QEMU is privileged.

In case QEMU is not run as root and the limit is too low, the maximum
mapcache size is set to rlim_max - 80MB, because QEMU was observed to
use about 75MB more than the maximum mapcache size in several empirical
tests.
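
A hedged sketch of the shape of the fix (the ~80MB slack comes from the
description above; variable names are illustrative):

    /* Sketch: raise the RLIMIT_AS soft limit as far as allowed, then size
     * the mapcache against the headroom actually available. */
    struct rlimit rlimit_as;
    unsigned long non_mcache_slack = 80ul * 1024 * 1024;    /* ~80MB */

    if (geteuid() == 0) {                                   /* privileged */
        rlimit_as.rlim_cur = RLIM_INFINITY;
        rlimit_as.rlim_max = RLIM_INFINITY;
        max_mcache_size = MCACHE_MAX_SIZE;
    } else {
        getrlimit(RLIMIT_AS, &rlimit_as);
        rlimit_as.rlim_cur = rlimit_as.rlim_max;            /* soft := hard */
        if (rlimit_as.rlim_max != RLIM_INFINITY &&
            rlimit_as.rlim_max < MCACHE_MAX_SIZE + non_mcache_slack) {
            max_mcache_size = rlimit_as.rlim_max - non_mcache_slack;
        } else {
            max_mcache_size = MCACHE_MAX_SIZE;
        }
    }
    setrlimit(RLIMIT_AS, &rlimit_as);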

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2011-09-09 13:13:16 +00:00
Anthony Liguori
7267c0947d Use glib memory allocation and free functions
qemu_malloc/qemu_free no longer exist after this commit.
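
For illustration, call sites change from the old wrappers to their glib
equivalents:

    buf = qemu_malloc(len);    /* before this commit */
    buf = g_malloc(len);       /* after */

    qemu_free(buf);            /* before */
    g_free(buf);               /* after */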

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-20 23:01:08 -05:00
Stefan Berger
ecf169b7fa Fix a compilation error in xen-mapcache.c
This patch fixes a compilation error in xen-mapcache.c.

/home/stefanb/qemu/qemu-git/xen-mapcache.c: In function ‘xen_ram_addr_from_mapcache’:
/home/stefanb/qemu/qemu-git/xen-mapcache.c:240:42: error: variable ‘pentry’ set but not used [-Werror=unused-but-set-variable]
cc1: all warnings being treated as errors

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-07-29 09:33:56 -05:00
Jan Kiszka
e41d7c691a xen: Clean up map cache API naming
The map cache is a Xen thing, so its API should make this clear.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-17 01:54:24 +02:00
Stefano Stabellini
6506e4f995 xen: remove xen_map_block and xen_unmap_block
Replace xen_map_block with qemu_map_cache with the appropriate locking
and size parameters.
Replace xen_unmap_block with qemu_invalidate_entry.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-06-19 04:40:05 +02:00
Stefano Stabellini
cd306087e5 xen: remove qemu_map_cache_unlock
There is no need for qemu_map_cache_unlock, just use
qemu_invalidate_entry instead.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-06-19 04:40:04 +02:00
Stefano Stabellini
c13390cd38 xen: fix qemu_map_cache with size != MCACHE_BUCKET_SIZE
Fix the implementation of qemu_map_cache: correctly support size
arguments different from 0 or MCACHE_BUCKET_SIZE.
The new implementation supports locked mapcache entries with a size
that is a multiple of MCACHE_BUCKET_SIZE. qemu_invalidate_entry can
correctly find and unmap these "large" mapcache entries, given that the
virtual address passed to qemu_invalidate_entry is the same one
returned by qemu_map_cache when the locked mapcache entry was created.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-06-19 04:40:04 +02:00
Anthony PERARD
050a0ddf39 Introduce qemu_put_ram_ptr
This function allows unlocking a ram_ptr given by qemu_get_ram_ptr.
After a call to qemu_put_ram_ptr, the pointer may be unmapped from QEMU
when used with Xen.
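
A hedged usage sketch (the exact signatures of the period may differ):

    /* Sketch: bracket temporary access to guest RAM so that, under Xen,
     * the mapcache may drop the mapping once the caller is done. */
    void *ptr = qemu_get_ram_ptr(ram_addr);   /* pins the mapping */
    memcpy(buf, ptr, len);                    /* use it */
    qemu_put_ram_ptr(ptr);                    /* allow it to be unmapped */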

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-05-08 10:10:01 +02:00
John Baboval
ea6c5f8ffe xen: Adds a cap to the number of map cache entries.
Adds a cap to the number of map cache entries. This prevents the map
cache from overwhelming system memory.

I also removed the bitmap macros and #included bitmap.h instead.

Signed-off-By: John Baboval <john.baboval@virtualcomputer.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-05-08 10:10:01 +02:00
Jun Nakajima
432d268c05 xen: Introduce the Xen mapcache
On an IA32 host or IA32 PAE host, at present, we generally can't create
an HVM guest with more than 2G of memory, because it is almost
impossible for QEMU to find a large enough consecutive virtual address
range to map the HVM guest's whole physical address space.
This patch fixes the issue using dynamic mapping based on small blocks
of memory.

Each call to qemu_get_ram_ptr makes a call to qemu_map_cache with the
lock option, so the mapcache will not unmap these ram_ptrs.

Blocks that do not belong to RAM, but usually to a device ROM or to a
framebuffer, are handled in a separate function, so the whole RAMBlock
can be mapped.
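
A hedged sketch of the core idea (constants and field names are
illustrative):

    /* Sketch: map the guest in small fixed-size buckets on demand rather
     * than mapping its whole physical address space up front. */
    address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;      /* which bucket */
    address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);  /* offset inside */

    entry = &mapcache->entry[address_index % mapcache->nr_buckets];
    if (entry->vaddr_base == NULL || entry->paddr_index != address_index) {
        /* bucket not mapped (or maps another index): (re)map it now,
         * e.g. with xc_map_foreign_bulk() */
        remap_bucket(entry, address_index);
    }
    if (lock) {
        entry->lock++;   /* qemu_get_ram_ptr keeps the bucket pinned */
    }
    return entry->vaddr_base + address_offset;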

Signed-off-by: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-05-08 10:10:01 +02:00