Currently we use a global radix tree to dispatch memory access. This only
works with a single address space; to support multiple address spaces we
make the radix tree a member of AddressSpace (via an intermediate structure
AddressSpaceDispatch to avoid exposing too many internals).
A side effect is that address_space_io also gains a dispatch table. When
we remove all the pre-memory-API I/O registrations, we can use that for
dispatching I/O and get rid of the original I/O dispatch.
Signed-off-by: Avi Kivity <avi@redhat.com>
Using the AddressSpace type reduces confusion, as you can't accidentally
supply the MemoryRegion you're interested in.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
With this change, memory.c no longer knows anything about special address
spaces, so it is prepared for AddressSpace based DMA.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of calling a global function on coalesced mmio changes, which
routes the call to kvm if enabled, add coalesced mmio hooks to
MemoryListener and make kvm use that instead.
The motivation is support for multiple address spaces (which means we
need to filter the call on the right address space), but the result
is cleaner as well.
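As a rough sketch (callback names follow the description above; exact field
names and signatures have varied across QEMU versions), the KVM listener wires
the new hooks along these lines:

    /* Sketch only: shapes approximate this era of the MemoryListener API. */
    static void kvm_coalesce_mmio_region(MemoryListener *listener,
                                         MemoryRegionSection *section,
                                         hwaddr start, hwaddr size)
    {
        /* tell the kernel to start coalescing this MMIO range */
    }

    static void kvm_uncoalesce_mmio_region(MemoryListener *listener,
                                           MemoryRegionSection *section,
                                           hwaddr start, hwaddr size)
    {
        /* tell the kernel to stop coalescing this MMIO range */
    }

    static MemoryListener kvm_memory_listener = {
        /* ... region_add/region_del/log_* hooks ... */
        .coalesced_mmio_add = kvm_coalesce_mmio_region,
        .coalesced_mmio_del = kvm_uncoalesce_mmio_region,
    };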
Signed-off-by: Avi Kivity <avi@redhat.com>
Many listeners don't need to respond to all MemoryListener callbacks;
provide suitable no-op defaults instead.
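A listener that only cares about sections appearing and disappearing can then
be written sparsely; this is a sketch with hypothetical names:

    static void my_region_add(MemoryListener *listener,
                              MemoryRegionSection *section)
    {
        /* react to a section becoming visible */
    }

    static void my_region_del(MemoryListener *listener,
                              MemoryRegionSection *section)
    {
        /* react to a section going away */
    }

    static MemoryListener my_listener = {
        .region_add = my_region_add,
        .region_del = my_region_del,
        /* all other callbacks are left to the no-op defaults */
    };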
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of embedding knowledge of the memory and I/O address spaces in the
memory core, maintain a list of all address spaces. This list will later
be extended dynamically for other bus masters.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
The DMA API will use an AddressSpace to differentiate among different
initiators.
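For example, a sketch of a read issued against a specific AddressSpace (shown
with the modern signature; the original call had no MemTxAttrs argument):

    static void dma_example_read(AddressSpace *as, hwaddr addr)
    {
        uint8_t buf[4];

        /* The initiator is identified by the AddressSpace it passes in. */
        address_space_rw(as, addr, MEMTXATTRS_UNSPECIFIED,
                         buf, sizeof(buf), false /* is_write */);
    }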
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
AddressSpace contains a member, current_map, of type FlatView. Since we
want to limit the leakage of internal types to public headers, switch to
a pointer to a FlatView. There is no performance impact, as this isn't used
during lookups, only during address space reconfigurations.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
exec-obsolete.h used to hold pre-memory-API functions that were used from
device code prior to the transition to the memory API. Now that the
transition is complete, the name no longer describes the file. The
functions still need to be merged better into the memory core, but there's
no danger of anyone using them.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Flush pending coalesced MMIO before performing mapping or state changes
that could affect event ordering or route the buffered requests to
the wrong region.
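A call site that is about to change mappings therefore drains the buffer first
(sketch):

    /* Drain buffered coalesced MMIO before the memory map or device state
     * changes, so buffered requests cannot land in the wrong region. */
    qemu_flush_coalesced_mmio_buffer();
    /* ... now perform the mapping or state change ... */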
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Simplify the code, as we now use only a subset of the original
features of memory_region_update_topology.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Also wrap simple operations consisting of only a single step with
memory_region_transaction_begin/commit. This allows additional steps,
like coalesced MMIO flushing, to be performed from a single place.
This requires dropping some micro-optimizations: the skipping of
topology updates after updating disabled or unregistered regions.
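Illustrative sketch of the resulting shape of such a single-step operation
inside the memory core (not the exact code):

    void memory_region_set_enabled(MemoryRegion *mr, bool enabled)
    {
        memory_region_transaction_begin();
        mr->enabled = enabled;
        memory_region_transaction_commit();  /* topology update, coalesced flush */
    }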
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself, but can also be applied to other regions, e.g.
of the same device, by calling memory_region_set_flush_coalesced.
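For example (sketch with hypothetical device regions):

    /* Writes to the framebuffer region are coalesced ... */
    memory_region_add_coalescing(&s->vram, 0, memory_region_size(&s->vram));

    /* ... and accesses to the related control region now flush the
     * coalesced buffer first, keeping the two streams ordered. */
    memory_region_set_flush_coalesced(&s->ctrl);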
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
The last argument of find_portio is "write", so this must be true here.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Under Win32, EventNotifiers will not have event_notifier_get_fd, so we
cannot call it in common code such as hw/virtio-pci.c. Pass a pointer to
the notifier, and only retrieve the file descriptor in kvm-specific code.
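A sketch of the split (fragment with hypothetical variables):

    /* Common code: hand the EventNotifier object to the memory core. */
    memory_region_add_eventfd(mr, addr, 2 /* size */, true /* match_data */,
                              val, notifier);

    /* KVM-specific code: only here is a host file descriptor extracted. */
    int fd = event_notifier_get_fd(notifier);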
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
This patch resolves a bug in memory listener registration.
"range_add" callback was called on each section of the both
address space (IO and memory space) even if it doesn't match
the address space filter.
Signed-off-by: Julien Grall <julien.grall@citrix.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
The return value of cpu_register_io_memory() is no longer used anywhere, so
we can remove it and all associated data and code.
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of indirecting via io_mem_region, dispatch directly
through the MemoryRegion obtained from the iotlb or phys_page_find().
Signed-off-by: Avi Kivity <avi@redhat.com>
Commit e58ac72b6a0 ("ioport: change portio_list not to use
memory_region_set_offset()") started using aliases of I/O memory
regions. Since the IORange used for the I/O was contained in the
target region, the alias information (specifically, the offset
into the region) was lost. This broke -vga std.
Fix by allocating an independent object to hold the IORange and
also the new offset.
Note that I/O memory regions were conceptually broken wrt aliases
in a different way: an alias can cause the same region to appear
twice in an address space, but we had just one IORange to service it.
This patch fixes that problem as well, since we can now have multiple
IORange/MemoryRegion associations.
Signed-off-by: Avi Kivity <avi@redhat.com>
Current memory listeners are incremental; that is, they are expected to
maintain their own state, and receive callbacks for changes to that state.
This patch adds support for stateless listeners; these work by receiving
a ->begin() callback (which tells them that new state is coming), a
sequence of ->region_add() and ->region_nop() callbacks, and then a
->commit() callback which signifies the end of the new state. They should
ignore ->region_del() callbacks.
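A stateless listener therefore looks roughly like this (sketch with
hypothetical names):

    static void flat_begin(MemoryListener *listener)
    {
        /* discard the old table; a complete new state follows */
    }

    static void flat_region_add(MemoryListener *listener,
                                MemoryRegionSection *section)
    {
        /* append this section to the table being rebuilt */
    }

    static void flat_commit(MemoryListener *listener)
    {
        /* the new state is complete; switch over to it */
    }

    static MemoryListener flat_listener = {
        .begin      = flat_begin,
        .region_add = flat_region_add,
        .region_nop = flat_region_add,  /* unchanged sections are replayed too */
        .commit     = flat_commit,
        /* ->region_del is ignored by a stateless listener */
    };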
Signed-off-by: Avi Kivity <avi@redhat.com>
All functionality has been moved to various MemoryListeners.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This transforms memory.c into a library which can then be unit tested
easily, by feeding it inputs and listening to its outputs.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
It can be derived from the MemoryRegion itself (which is why it is not
used there).
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
.readonly cannot be obtained from the MemoryRegion, since it is
inherited from aliases (so you can have a MemoryRegion mapped RW
at one address and RO at another). Record it in a MemoryRegionSection
for listeners.
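Sketch of the situation and of the consumer side (hypothetical names,
signatures approximate this era of the API):

    static void map_ram_twice(MemoryRegion *sysmem, MemoryRegion *ram,
                              MemoryRegion *ram_ro, uint64_t size)
    {
        memory_region_add_subregion(sysmem, 0xa0000, ram);       /* writable  */
        memory_region_init_alias(ram_ro, "ram-ro", ram, 0, size);
        memory_region_set_readonly(ram_ro, true);
        memory_region_add_subregion(sysmem, 0xc0000, ram_ro);    /* read-only */
    }

    /* A listener consults the section, not the region: */
    static void my_region_add(MemoryListener *l, MemoryRegionSection *section)
    {
        if (section->readonly) {
            /* map this slot read-only, even though the region itself is RAM */
        }
    }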
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This allows reverse iteration, which in turn allows consistent ordering
among multiple listeners:
    l1->add
    l2->add
    l2->del
    l1->del
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
memory_region_set_offset() complicates the API, and has been deprecated
since its introduction. Now that it is no longer used, remove it.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Helpful to understand guest configurations of things like the i440FX's
PAM or the state of ROM devices.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Instead of each device knowing or guessing the guest page size,
just pass the desired size of the dirtied memory area.
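For instance (sketch with a hypothetical device state), a device now marks
exactly the bytes it touched and lets the core worry about page granularity:

    /* Mark only the bytes actually written; no guest-page-size guessing. */
    memory_region_set_dirty(&s->vram, addr, len);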
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Instead of each target knowing or guessing the guest page size,
just pass the desired size of the dirtied memory area.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Introduce a memory region type that can reserve I/O space. Such regions
are useful for modeling I/O that is only handled outside of QEMU, i.e.
in the context of an accelerator like KVM.
Any access to such a region from QEMU is a bug, but could theoretically
be triggered by guest code (DMA to a reserved region). So warn about
such events only once, then ignore them.
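Usage is a one-liner; a sketch with hypothetical names (the exact signature
has changed over time, later gaining an owner argument):

    /* Reserve a page of guest-physical space that is served entirely by the
     * accelerator; any QEMU-side access warns once and is then ignored. */
    memory_region_init_reservation(&s->kernel_only, "kvm-reserved", 0x1000);
    memory_region_add_subregion(sysmem, base, &s->kernel_only);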
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
All files under GPLv2 will get GPLv2+ changes starting tomorrow.
event_notifier.c and exec-obsolete.h were only ever touched by Red Hat
employees and can be relicensed now.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Commit a621f38de8 (Direct dispatch
through MemoryRegion) moved byte swaps to a central function.
Add a missing break, so that long-sized byte swaps don't abort.
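Illustrative sketch of the bug class, not the exact QEMU function: a dropped
break after a valid size sends it into the abort() default:

    /* bswap16()/bswap32() come from QEMU's bswap helpers. */
    static void swap_value(uint64_t *data, unsigned size)
    {
        switch (size) {
        case 1:
            break;
        case 2:
            *data = bswap16(*data);
            break;
        case 4:
            *data = bswap32(*data);
            break;   /* the missing break: without it, 4-byte swaps abort */
        default:
            abort();
        }
    }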
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Since commit be675c9720 (memory: move
endianness compensation to memory core) it was checking for
TARGET_BIG_ENDIAN instead of TARGET_WORDS_BIGENDIAN, thereby not
swapping correctly for Big Endian targets.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Unlike ->readonly, ->readable is not inherited from aliases, so we can simply
query the memory region.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Now that all mmio goes through MemoryRegions, we can convert
io_mem_opaque to be a MemoryRegion pointer, and remove the thunks
that convert from old-style CPU{Read,Write}MemoryFunc to MemoryRegionOps.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED,
and IO_MEM_NOTDIRTY io handlers to MemoryRegions. These aren't real
regions, since they are never added to the memory hierarchy, but they
allow reuse of the dispatch functionality.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The code sometimes uses range comparisons on io indexes (e.g.
index <= IO_MEM_ROM). Avoid these as they make moving to objects harder.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
backend_registered was used to lazify the process of registering an
mmio region, since it is different for the I/O address space and
the memory address space. However, it also makes registration dependent
on the region being visible in the address space. This is not the case
for "fake" regions, like watchpoints or IO_MEM_UNASSIGNED.
Remove backend_registered and always initialize the region. If it turns
out to be part of the I/O address space, we've wasted an I/O slot, but
that's not too bad. In any case this will be optimized later on.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Currently mmio access goes directly to the io_mem_{read,write} arrays.
In preparation for eliminating them, add indirection via a function.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Instead of doing device endianness compensation in cpu_register_io_memory(),
do it in the memory core.
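Device code then simply declares its endianness in its MemoryRegionOps; a
sketch with hypothetical callbacks:

    static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size);
    static void mydev_write(void *opaque, hwaddr addr,
                            uint64_t val, unsigned size);

    static const MemoryRegionOps mydev_ops = {
        .read  = mydev_read,
        .write = mydev_write,
        /* the memory core swaps as needed before/after calling the above */
        .endianness = DEVICE_LITTLE_ENDIAN,
    };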
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The getter is no longer used, so it is completely removed.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Currently creating a memory region automatically registers it for
live migration. This differs from other state (which is enumerated
in a VMStateDescription structure) and ties the live migration code
into the memory core.
Decouple the two by introducing a separate API, vmstate_register_ram(),
for registering a RAM block for migration. Currently the same
implementation is reused, but later it can be moved into a separate list,
and registrations can be moved to VMStateDescription blocks.
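A typical device then does the following (sketch with hypothetical names; the
exact memory_region_init_ram signature has changed over time, with owner and
Error arguments coming and going):

    /* Creating the RAM no longer auto-registers it for migration; the
     * device opts in explicitly with the new API. */
    memory_region_init_ram(&s->vram, "mydev.vram", vram_size);
    vmstate_register_ram(&s->vram, dev);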
Signed-off-by: Avi Kivity <avi@redhat.com>