Use inline functions rather than macros for cpu_ld/st accessors
for the *-user configurations, as we already do for softmmu.
This has two advantages:
* we can actually typecheck our arguments
* we don't need to leak the _raw macros everywhere
Since the _kernel functions were only used by target-i386/seg_helper.c,
put the definitions for them in that file too. (It already has the
similar template include code to define them for the softmmu case,
so it makes sense to have it deal with defining them for user-only.)
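For illustration, a minimal sketch of the conversion for one user-only accessor, assuming it is backed by g2h() and the host ldub_p() helper (the accessor chosen here is just an example):

    /* Before: a macro, so neither argument is typechecked and the _raw
     * macros have to be visible everywhere this is used.
     */
    #define cpu_ldub_data(env, ptr)  ldub_raw(ptr)

    /* After: an inline function with typechecked arguments. */
    static inline uint32_t cpu_ldub_data(CPUArchState *env, target_ulong ptr)
    {
        return ldub_p(g2h(ptr));
    }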
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-12-git-send-email-peter.maydell@linaro.org
The very short ld*/st* defines are now not used anywhere; delete them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-11-git-send-email-peter.maydell@linaro.org
The ld*_kernel and st*_kernel defines are not used anywhere;
delete them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-10-git-send-email-peter.maydell@linaro.org
The five ldul_ macros are not used anywhere and are marked up with an XXX
comment. "ldul" is a non-standard prefix for our family of load instructions:
we don't mark 32-bit accesses for signedness because they return a 32-bit
quantity. So just delete them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1421334118-3287-2-git-send-email-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc: resizeable ROM blocks
This makes ROM blocks resizeable. This infrastructure is required for other
functionality we have queued.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 08 Jan 2015 11:19:24 GMT using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream:
acpi-build: make ROMs RAM blocks resizeable
memory: API to allocate resizeable RAM MR
arch_init: support resizing on incoming migration
exec: qemu_ram_alloc_resizeable, qemu_ram_resize
exec: split length -> used_length/max_length
exec: cpu_physical_memory_set/clear_dirty_range
memory: add memory_region_set_size
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add API to allocate resizeable RAM MR.
This looks just like regular RAM generally, but
has a special property that only a portion of it
(used_length) is actually used, and migrated.
This used_length size can change across reboots.
Follow-up patches will change used_length for such blocks at migration time,
making it easier to extend devices using such RAM (notably ACPI,
but conceivably other ROMs in the future) without breaking migration
compatibility or wasting ROM (guest) memory.
The device is notified on resize, so it can adjust if necessary.
Note: nothing prevents making all RAM resizeable in this way.
However, reviewers felt that only enabling this selectively will
make some classes of error easier to detect.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Add API to allocate "resizeable" RAM.
This looks just like regular RAM generally, but
has a special property that only a portion of it
(used_length) is actually used, and migrated.
This used_length size can change across reboots.
Follow-up patches will change used_length for such blocks at migration time,
making it easier to extend devices using such RAM (notably ACPI,
but conceivably other ROMs in the future) without breaking migration
compatibility or wasting ROM (guest) memory.
The device is notified on resize, so it can adjust if necessary.
qemu_ram_alloc_resizeable allocates this memory, qemu_ram_resize resizes
it.
Note: nothing prevents making all RAM resizeable in this way.
However, reviewers felt that only enabling this selectively will
make some classes of error easier to detect.
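As a rough usage sketch (the prototypes and the resized_cb callback are approximations, not the final API), a device allocates the block once with its maximum size and later grows the used portion:

    Error *err = NULL;

    /* allocate max_size bytes, but only initial_size is used and migrated */
    ram_addr_t addr = qemu_ram_alloc_resizeable(initial_size, max_size,
                                                resized_cb, mr, &err);

    /* later, e.g. when the ROM contents grow */
    qemu_ram_resize(addr, new_size, &err);   /* invokes resized_cb */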
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
This patch allows us to distinguish between two
length values for each block:
max_length - length of memory block that was allocated
used_length - length of block used by QEMU/guest
Currently, we set used_length = max_length, unconditionally.
Follow-up patches allow used_length <= max_length.
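Roughly, the RAMBlock bookkeeping becomes (field placement is illustrative):

    struct RAMBlock {
        /* ... */
        ram_addr_t used_length;   /* bytes in use by QEMU/guest, migrated */
        ram_addr_t max_length;    /* bytes actually allocated */
        /* ... */
    };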
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Make cpu_physical_memory_set/clear_dirty_range
behave symmetrically.
To clear a range for a given client type only, add
cpu_physical_memory_clear_dirty_range_type.
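A sketch of the resulting interface (prototypes approximate):

    void cpu_physical_memory_set_dirty_range(ram_addr_t start, ram_addr_t length);
    void cpu_physical_memory_clear_dirty_range(ram_addr_t start, ram_addr_t length);
    /* clear only one client's bitmap, e.g. DIRTY_MEMORY_MIGRATION */
    void cpu_physical_memory_clear_dirty_range_type(ram_addr_t start,
                                                    ram_addr_t length,
                                                    unsigned client);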
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Add API to change MR size.
Will be used internally for RAM resize.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently 'info jit' outputs half of the information to the monitor and the
rest to the QEMU log. Dumping opcode counts to the monitor as part of the
'info jit' command doesn't sound useful. Add a new monitor command 'info
opcount' that only dumps the opcode counters.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Merge remote-tracking branch 'remotes/amit-migration/tags/for-2.3-2' into staging
Migration pull for 2.3. Mostly moving the code to the migration/
directory, and updating MAINTAINERS.
I've also folded my other MAINTAINERS update patches into this, as
they're small by themselves.
# gpg: Signature made Tue 16 Dec 2014 12:21:24 GMT using RSA key ID 854083B6
# gpg: Good signature from "Amit Shah <amit@amitshah.net>"
# gpg: aka "Amit Shah <amit@kernel.org>"
# gpg: aka "Amit Shah <amitshah@gmx.net>"
* remotes/amit-migration/tags/for-2.3-2:
MAINTAINERS: Update for migrated migration code
Split the QEMU buffered file code out
Split struct QEMUFile out
Remove migration- pre/post fixes off files in migration/ dir
Start migrating migration code into a migration directory
qmp-command.hx: add missing docs for migration capabilites
cpu: verify that block->host is set
cpu: assert host pointer offset within block
exec: add wrapper for host pointer access
MAINTAINERS: add include files to virtio-serial entry
MAINTAINERS: add entry for virtio-rng
MAINTAINERS: migration: add vmstate static checker files
MAINTAINERS: Add myself to migration maintainers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If it isn't, an access at an offset will cause memory corruption.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Make accesses safer in case we missed some
check somewhere.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Host pointer accesses force pointer math; let's
add a wrapper to make them safer.
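A minimal sketch of such a wrapper, assuming the block's host-mapping and length fields look roughly like this (field names are assumptions):

    static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
    {
        assert(block->host);              /* host pointer must be set */
        assert(offset < block->length);   /* offset must be within the block */
        return (char *)block->host + offset;
    }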
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
- Migration and linuxboot fixes for 2.2 regressions
- valgrind/KVM support
- small i386 patches
- PCI SD host controller support
- malloc/free cleanups from Markus (x86/scsi)
- IvyBridge model
- XSAVES support for KVM
- initial patches from record/replay
# gpg: Signature made Mon 15 Dec 2014 16:35:08 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (47 commits)
sdhci: Support SDHCI devices on PCI
sdhci: Define SDHCI PCI ids
sdhci: Add "sysbus" to sdhci QOM types and methods
sdhci: Remove class "virtual" methods
sdhci: Set a default frequency clock
serial: only resample THR interrupt on rising edge of IER.THRI
serial: update LSR on enabling/disabling FIFOs
serial: clean up THRE/TEMT handling
serial: reset thri_pending on IER writes with THRI=0
linuxboot: fix loading old kernels
kvm/apic: fix 2.2->2.1 migration
target-i386: add Ivy Bridge CPU model
target-i386: add f16c and rdrand to Haswell and Broadwell
target-i386: add VME to all CPUs
pc: add 2.3 machine types
i386: do not cross the pages boundaries in replay mode
cpus: make icount warp behave well with respect to stop/cont
timer: introduce new QEMU_CLOCK_VIRTUAL_RT clock
cpu-exec: invalidate nocache translation if they are interrupted
icount: introduce cpu_get_icount_raw
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In this case, QEMU might longjmp out of cpu-exec.c and miss the final
cleanup in cpu_exec_nocache. Do this manually through a new compile
flag.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The usual semihosting behaviour is to process the system calls locally and
return; unfortunately the initial implementation dynamically changed the
target to GDB during debug sessions, which, for the usual arm-none-eabi-gdb,
is not implemented. The result was that during debug sessions the semihosting
calls were discarded.
This patch adds a configuration variable and an option to set it on the
command line:
-semihosting-config [enable=on|off,]target=native|gdb|auto
This option enables semihosting and defines where the semihosting calls will
be addressed, to QEMU ('native') or to GDB ('gdb'). The default is auto, which
means 'gdb' during debug sessions and 'native' otherwise.
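A sketch of how the 'auto' policy might be resolved at the point a semihosting call is handled (identifier names here are assumptions, not necessarily the ones used by the patch):

    if (semihosting_target == SEMIHOSTING_TARGET_GDB ||
        (semihosting_target == SEMIHOSTING_TARGET_AUTO && use_gdb_syscalls())) {
        /* forward the semihosting call to the attached GDB */
    } else {
        /* handle the call natively inside QEMU */
    }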
Signed-off-by: Liviu Ionescu <ilg@livius.net>
Message-id: 1416341957-9796-1-git-send-email-ilg@livius.net
[PMM: moved declaration and definition of semihosting_target to
gdbstub.h and gdbstub.c to fix build failure on linux-user]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce memory_region_get_alignment(), which returns the
underlying memory block's alignment, or 0 if it's not
relevant/implemented for the backend.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The code in invalidate_and_set_dirty() needs to handle addr/length
combinations which cross guest physical page boundaries. This can happen,
for example, when disk I/O reads large blocks into guest RAM which previously
held code that we have cached translations for. Unfortunately we were only
checking the clean/dirty status of the first page in the range, and then
were calling a tb_invalidate function which only handles ranges that don't
cross page boundaries. Fix the function to deal with multipage ranges.
The symptoms of this bug were that guest code would misbehave (e.g. segfault),
in particular after a guest reboot but potentially any time the guest
reused a page of its physical RAM for new code.
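A sketch of the fixed shape (helper names and signatures are approximate): check whether any page in the range is clean, invalidate translations for the whole range, then dirty the whole range:

    static void invalidate_and_set_dirty(hwaddr addr, hwaddr length)
    {
        /* previously only the first page of [addr, addr + length) was checked */
        if (cpu_physical_memory_range_includes_clean(addr, length)) {
            tb_invalidate_phys_range(addr, addr + length, 0);
            cpu_physical_memory_set_dirty_range_nocode(addr, length);
        }
        xen_modified_memory(addr, length);
    }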
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1416167061-13203-1-git-send-email-peter.maydell@linaro.org
New MIPS features depend on the access type, and an enum is more convenient
than using the numbers directly.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
The PCI MMIO might be disabled or the device in the reset state.
Make sure we do not dump these memory regions.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The initial base address is miscalculated in walk_memory_regions().
It has to be shifted by TARGET_PAGE_BITS more. Holder variables are
extended to target_ulong size, otherwise they don't fit for MIPS N32
(a 32-bit ABI with a 64-bit address space) and QEMU won't compile.
The issue led to incorrect debug output of memory maps and a
malformed core dump file.
Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
- Memory: improve error reporting and avoid crashes on hotplug
- Build: fixing block/iscsi.so and ranlib warnings on Mac OS X
- Migration fixes for x86
- The odd KVM patch.
# gpg: Signature made Thu 11 Sep 2014 11:21:10 BST using RSA key ID 9B4D86F2
# gpg: Good signature from "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: aka "Paolo Bonzini <bonzini@gnu.org>"
* remotes/bonzini/tags/for-upstream: (21 commits)
gdbstub: init mon_chr through qemu_chr_alloc
pckbd: adding new fields to vmstate
mc146818rtc: add missed field to vmstate
piix: do not set irq while loading vmstate
serial: fixing vmstate for save/restore
parallel: adding vmstate for save/restore
fdc: adding vmstate for save/restore
cpu: init vmstate for ticks and clock offset
apic_common: vapic_paddr synchronization fix
vl: use QLIST_FOREACH_SAFE to visit change state handlers
exec: add parameter errp to gethugepagesize
exec: report error when memory < hpagesize
hostmem-ram: don't exit qemu if size of memory-backend-ram is way too big
memory: add parameter errp to memory_region_init_rom_device
memory: add parameter errp to memory_region_init_ram
exec: add parameter errp to qemu_ram_alloc and qemu_ram_alloc_from_ptr
rules.mak: Fix DSO build by pulling in archive symbols
util: Don't link host-utils.o if it's empty
util: Move general qemu_getauxval to util/getauxval.c
trace: Only link generated-tracers.o with "simple" backend
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add parameter errp to memory_region_init_rom_device and update all call
sites to propagate the error.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
[Propagate the error out of realize. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add parameter errp to memory_region_init_ram and update all call sites
to pass in &error_abort.
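A converted call site would look roughly like this (region name and size are placeholders); callers that cannot usefully handle the failure pass &error_abort, while others propagate a real error:

    memory_region_init_ram(ram, NULL, "pc.ram", ram_size, &error_abort);

    /* or, where the caller can report the failure: */
    Error *local_err = NULL;
    memory_region_init_ram(ram, NULL, "pc.ram", ram_size, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }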
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add parameter errp to qemu_ram_alloc and qemu_ram_alloc_from_ptr so that
we can handle errors.
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[Assert ptr != NULL in memory_region_init_ram_ptr. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pci, pc fixes, features
A bunch of bugfixes - these will make sense for 2.1.1
Initial Intel IOMMU support.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Tue 02 Sep 2014 16:05:04 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream:
vhost_net: start/stop guest notifiers properly
pci: avoid losing config updates to MSI/MSIX cap regs
virtio-net: don't run bh on vm stopped
ioh3420: remove unused ioh3420_init() declaration
vhost_net: cleanup start/stop condition
intel-iommu: add IOTLB using hash table
intel-iommu: add context-cache to cache context-entry
intel-iommu: add supports for queued invalidation interface
intel-iommu: fix coding style issues around in q35.c and machine.c
intel-iommu: add Intel IOMMU emulation to q35 and add a machine option "iommu" as a switch
intel-iommu: add DMAR table to ACPI tables
intel-iommu: introduce Intel IOMMU (VT-d) emulation
iommu: add is_write as a parameter to the translate function of MemoryRegionIOMMUOps
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU system-mode page table walks are expensive. Measured by running
qemu-system-x86_64 in system mode under Intel PIN, a TLB miss and a walk of
the 4-level page tables in a guest Linux OS take ~450 x86 instructions on
average.
QEMU system mode TLB is implemented using a directly-mapped hashtable.
This structure suffers from conflict misses. Increasing the
associativity of the TLB may not be the solution to conflict misses, as
all the ways may have to be walked serially.
A victim TLB is a TLB used to hold translations evicted from the
primary TLB upon replacement. The victim TLB lies between the main TLB
and its refill path. The victim TLB is of greater associativity (fully
associative in this patch). It takes longer to look up the victim TLB,
but it is likely better than a full page table walk. The memory
translation path is changed as follows:
Before Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. TLB refill.
5. Do the memory access.
6. Return to code cache.
After Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. Victim TLB lookup.
5. If victim TLB misses, TLB refill
6. Do the memory access.
7. Return to code cache
The advantage is that the victim TLB offers more associativity to a
directly mapped TLB, and thus potentially fewer page table walks, while
still keeping the time taken to flush within reasonable limits.
However, placing a victim TLB before the refill path lengthens the
refill path, as the victim TLB is consulted before the TLB refill. The
performance results demonstrate that the pros outweigh the cons.
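A conceptual sketch of the victim TLB probe on a miss in the primary (direct-mapped) TLB; sizes, field names and the access-type handling are simplified assumptions:

    static bool victim_tlb_hit(CPUArchState *env, int mmu_idx, int index,
                               target_ulong page)
    {
        int vidx;

        for (vidx = 0; vidx < CPU_VTLB_SIZE; vidx++) {   /* fully associative scan */
            CPUTLBEntry *vtlb = &env->tlb_v_table[mmu_idx][vidx];

            if (vtlb->addr_read == page) {
                /* hit: swap the victim entry with the conflicting primary entry */
                CPUTLBEntry tmp = env->tlb_table[mmu_idx][index];
                env->tlb_table[mmu_idx][index] = *vtlb;
                *vtlb = tmp;
                return true;                             /* no page table walk */
            }
        }
        return false;                                    /* fall back to tlb_fill() */
    }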
Some performance results, taken on SPECINT2006 train
datasets, a kernel boot and the qemu configure script on an
Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine, are shown in the
Google Doc link below.
https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing
In summary, the victim TLB improves the performance of qemu-system-x86_64 by
11% on average on SPECINT2006, kernel boot and the qemu configure script, with
the highest improvement of 26% in 456.hmmer. The victim TLB does not cause
any performance degradation in any of the measured benchmarks. Furthermore,
the implemented victim TLB is architecture independent and is expected to
benefit other architectures in QEMU as well.
Although there are measurement fluctuations, the performance
improvement is very significant and by no means within the range of
noise.
Signed-off-by: Xin Tong <trent.tong@gmail.com>
Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a bool variable is_write as a parameter to the translate function of
MemoryRegionIOMMUOps to indicate the operation of the access. It can be
used for correct fault reporting from within the callback.
Change the interface of related functions.
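The resulting callback signature looks roughly like this (sketch):

    struct MemoryRegionIOMMUOps {
        /* Return a TLB entry that maps 'addr' within the IOMMU region;
         * is_write says whether the triggering access is a write, so the
         * implementation can report faults with the correct direction.
         */
        IOMMUTLBEntry (*translate)(MemoryRegion *iommu, hwaddr addr,
                                   bool is_write);
    };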
Signed-off-by: Le Tan <tamlokveer@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Build /proc/self/maps by matching against the guest memory translation table.
Output only those map records which are valid for the guest memory layout.
Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
SCSI changes that enable sending vendor-specific commands via virtio-scsi.
Memory changes for QOMification and automatic tracking of MR lifetime.
# gpg: Signature made Mon 18 Aug 2014 13:03:09 BST using RSA key ID 9B4D86F2
# gpg: Good signature from "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: aka "Paolo Bonzini <bonzini@gnu.org>"
* remotes/bonzini/tags/for-upstream:
mtree: remove write-only field
memory: Use canonical path component as the name
memory: Use memory_region_name for name access
memory: constify memory_region_name
exec: Abstract away ref to memory region names
loader: Abstract away ref to memory region names
tpm_tis: remove instance_finalize callback
memory: remove memory_region_destroy
memory: convert memory_region_destroy to object_unparent
ioport: split deletion and destruction
nic: do not destroy memory regions in cleanup functions
vga: do not dynamically allocate chain4_alias
sysbus: remove unused function sysbus_del_io
qom: object: move unparenting to the child property's release callback
qom: object: delete properties before calling instance_finalize
virtio-scsi: implement parse_cdb
scsi-block, scsi-generic: implement parse_cdb
scsi-block: extract scsi_block_is_passthrough
scsi-bus: introduce parse_cdb in SCSIDeviceClass and SCSIBusInfo
scsi-bus: prepare scsi_req_new for introduction of parse_cdb
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than having the name as separate state. This prepares support
for creating a MemoryRegion dynamically (i.e. without
memory_region_init() and friends) and the MemoryRegion still getting
a usable name.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It doesn't change the MR and some prospective call sites will have
const MRs at hand.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The function is empty after the previous patch, so remove it.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Devices that use address_space_rw to write large areas to memory
(as opposed to address_space_map/unmap) were broken with respect
to migration since fe680d0 (exec: Limit translation limiting in
address_space_translate to xen, 2014-05-07). Such devices include
IDE CD-ROMs.
The reason is that invalidate_and_set_dirty (called by address_space_rw
but not address_space_map/unmap) was only setting the dirty bit for
the first page in the translation.
To fix this, introduce cpu_physical_memory_set_dirty_range_nocode that
is the same as cpu_physical_memory_set_dirty_range except it does not
muck with the DIRTY_MEMORY_CODE bitmap. This function can be used if
the caller invalidates translations with tb_invalidate_phys_page_range.
There is another difference between cpu_physical_memory_set_dirty_range
and cpu_physical_memory_set_dirty_flag; the former includes a call
to xen_modified_memory. This is handled separately in
invalidate_and_set_dirty, and is not needed in other callers of
cpu_physical_memory_set_dirty_range_nocode, so leave it alone.
Just one nit: now that invalidate_and_set_dirty takes care of handling
multiple pages, there is no need for address_space_unmap to wrap it
in a loop. In fact that loop would now be O(n^2).
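A sketch of the nocode variant, assuming the dirty state is kept as per-client bitmaps indexed by page number (helper and field names approximate):

    static inline void
    cpu_physical_memory_set_dirty_range_nocode(ram_addr_t start, ram_addr_t length)
    {
        unsigned long page = start >> TARGET_PAGE_BITS;
        unsigned long end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;

        /* mark pages dirty for migration and VGA, but leave DIRTY_MEMORY_CODE
         * alone and do not call xen_modified_memory() */
        bitmap_set(ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION], page, end - page);
        bitmap_set(ram_list.dirty_memory[DIRTY_MEMORY_VGA], page, end - page);
    }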
Reported-by: Dave Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QOM propertyify the .may-overlap and .priority fields. The setters
will re-add the memory as a subregion if needed (i.e. the values change
when the memory region is already contained).
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[Remove setters. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QOMify memory regions as an Object. The former init() and destroy()
routines become instance_init() and instance_finalize() respectively.
memory_region_init() is re-implemented to be:
object_initialize() + set fields
memory_region_destroy() is re-implemented to call unparent().
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[Add newly-created MR as child, unparent on destruction. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Old code was affected by memory gaps which resulted in buffer pointers
pointing to addresses outside of the mapped regions.
Here we are introducing the following changes:
- new function qemu_get_ram_block_host_ptr() returns a host pointer
to the ram block; it is needed to calculate the offset of a specific
region in the host memory
- new field mmap_offset is added to the VhostUserMemoryRegion (see the
struct sketch after this list). It contains the offset at which the
specific region starts in the mapped memory.
As there is still no wider adoption of vhost-user, agreement was made
that we will not bump the version number due to this change
- other fields in the VhostUserMemoryRegion struct are not changed, as
they are all needed for the usermode app implementation
- region data is not taken from ram_list.blocks anymore; instead we
use region data which is already calculated for use in vhost-net
- now multiple regions can have the same FD and the user application can
call mmap() multiple times with the same FD but with different offsets
(the user needs to take care of offset page alignment)
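A sketch of the per-region message layout after this change (field names as described above; exact types and ordering are an assumption):

    typedef struct VhostUserMemoryRegion {
        uint64_t guest_phys_addr;  /* guest physical address of the region */
        uint64_t memory_size;      /* size of the region */
        uint64_t userspace_addr;   /* QEMU virtual address */
        uint64_t mmap_offset;      /* new: offset of the region within the mmapped fd */
    } VhostUserMemoryRegion;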
Signed-off-by: Damjan Marion <damarion@cisco.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Damjan Marion <damarion@cisco.com>
A new "share" property can be used with the "memory-file" backend to
map memory with MAP_SHARED instead of MAP_PRIVATE.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
And allow preallocation of file-based memory even without -mem-prealloc.
Some care is necessary because -mem-prealloc does not allow disabling
preallocation for hostmem-file.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Right now, -mem-path will fall back to RAM-based allocation in some
cases. This should never happen with "-object memory-file", prepare
the code by adding correct error propagation.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
MST: drop \n at end of error messages
Like the previous patch did in exec.c, split memory_region_init_ram and
memory_region_init_ram_from_file, and push mem_path one step further up.
Other RAM regions than system memory will now be backed by regular RAM.
Also, boards that do not use memory_region_allocate_system_memory will
not support -mem-path anymore. This can be changed before the patches
are merged by migrating boards to use the function.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Prepare for adding more flags. The "_MASK" suffix is unique, kill it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Split the internal interface in exec.c to a separate function, and
push the check on mem_path up to memory_region_init_ram.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
MST: comment tweaks
which allows checking whether a MemoryRegion is already mapped.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
See commit fbeadf50 (bitops: unify bitops_ffsl with the one in
host-utils.h, call it bitops_ctzl) on why ctzl should be used instead
of ffsl.
This is also needed for musl libc which does not implement ffsl.
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Unify pieces of cpu-all.h, exec-all.h, softmmu_exec.h and tcg/tcg.h
into a single new header file with all helpers.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will collect all load and store helpers soon. For now
it is just a replacement for softmmu_exec.h, which this patch
stops including directly, but we also include it where this will
be necessary in order to simplify the next patch.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They do not need to be in op_helper.c. Because cputlb.c now includes
softmmu_template.h twice for each size, io_readX must be elided the
second time through.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We will reference it from more files in the next patch. To avoid
ruining the small steps we're making towards multi-target, make
it a method of CPU rather than just a global.
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This preprocessor symbol is already used in softmmu_template.h. We
will use it to distinguish the two "fake" ACCESS_TYPEs
NB_MMU_MODES and NB_MMU_MODES + 1.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tidying the initialization of the args arrays at the same time.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than special casing them, use the standard mechanisms
for tcg helper generation.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Now that the code_gen_buffer is constrained to not cross 256mb
regions, we are assured that we can use J to reach another TB.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-s390-20140515' into staging
tcg/s390 updates
# gpg: Signature made Thu 15 May 2014 17:24:40 BST using RSA key ID 4DD0279B
# gpg: Can't check signature: public key not found
* remotes/rth/tags/pull-tcg-s390-20140515:
tcg-s390: Implement direct chaining of TBs
tcg-s390: Don't force -march=z990
tcg-s390: Improve setcond
tcg-s390: Allow immediate operands to add2 and sub2
tcg-s390: Implement tcg_register_jit
tcg-s390: Use more risbg in the tlb sequence
tcg-s390: Move ldst helpers out of line
tcg-s390: Convert to new ldst opcodes
tcg-s390: Integrate endianness into TCGMemOp
tcg-s390: Convert to TCGMemOp
tcg-s390: Fix off-by-one in wraparound andi
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* remotes/kvm/uq/master:
pc: port 92 reset requires a low->high transition
cpu: make CPU_INTERRUPT_RESET available on all targets
apic: do not accept SIPI on the bootstrap processor
target-i386: preserve FPU and MSR state on INIT
target-i386: fix set of registers zeroed on reset
kvm: forward INIT signals coming from the chipset
kvm: reset state from the CPU's reset method
target-i386: the x86 CPL is stored in CS.selector - auto update hflags accordingly.
target-i386: set eflags prior to calling cpu_x86_load_seg_cache() in seg_helper.c
target-i386: set eflags and cr0 prior to calling cpu_x86_load_seg_cache() in smm_helper.c
target-i386: set eflags prior to calling svm_load_seg_cache() in svm_helper.c
pci-assign: limit # of msix vectors
pci-assign: Fix a bug when map MSI-X table memory failed
kvm: make one_reg helpers available for everyone
target-i386: Remove unused data from local array
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We got the wrong version of stl_p, the one that bswaps as appropriate
for the target. Since x86 is always little-endian, the "_le_" routine
will resolve to what we want.
Signed-off-by: Richard Henderson <rth@twiddle.net>
On the x86, some devices need access to the CPU reset pin (INIT#).
Provide a generic service to do this, using one of the internal
cpu_interrupt targets. Generalize the PPC-specific code for
CPU_INTERRUPT_RESET to other targets.
Since PPC does not support migration across QEMU versions (its
machine types are not versioned yet), I picked the value that
is used on x86, CPU_INTERRUPT_TGT_INT_1. Consequently, TGT_INT_2
and TGT_INT_3 are shifted down by one while keeping their value.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
To be defined by the tcg backend based on the elemental unit of the ISA.
During the transition, allow TCG_TARGET_INSN_UNIT_SIZE to be undefined,
which allows us to default tcg_insn_unit to the current uint8_t.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The code which patches x86 jump instructions assumes it can do an
unaligned write of a uint32_t. This is actually safe on x86, but it's
still undefined behaviour. We have infrastructure for doing efficient
unaligned accesses which doesn't engage in undefined behaviour, so
use it.
This is technically fractionally less efficient, at least with gcc 4.6;
instead of one instruction:
7b2: 89 3e mov %edi,(%rsi)
we get an extra spurious store to the stack slot:
7b2: 89 7c 24 64 mov %edi,0x64(%rsp)
7b6: 89 3e mov %edi,(%rsi)
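Concretely, the patching becomes something like this (a sketch; the surrounding function is abbreviated):

    static inline void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr)
    {
        /* patch the 32-bit branch displacement without a raw unaligned store */
        stl_le_p((void *)jmp_addr, addr - (jmp_addr + 4));
        /* no explicit icache flush is needed on x86 */
    }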
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
.impl.valid should be .impl.unaligned and the description needs some
fixes.
.old_portio is removed since commit b40acf99b (ioport: Switch
dispatching to memory core layer).
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Implement the DC ZVA instruction, which clears a block of memory.
The fast path obtains a pointer to the underlying RAM via the TCG TLB
data structure so we can do a direct memset(), with fallback to a
simple byte-store loop in the slow path.
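A conceptual sketch of the helper; probe_host_addr() is a hypothetical stand-in for the TLB lookup that yields a host pointer, or NULL when the fast path cannot be used:

    void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr)
    {
        ARMCPU *cpu = arm_env_get_cpu(env);
        uint64_t blocklen = 4ULL << cpu->dcz_blocksize;
        uint64_t addr = vaddr & ~(blocklen - 1);
        void *host = probe_host_addr(env, addr, blocklen);  /* hypothetical */
        uint64_t i;

        if (host) {
            memset(host, 0, blocklen);           /* fast path: direct host memset */
        } else {
            for (i = 0; i < blocklen; i++) {
                cpu_stb_data(env, addr + i, 0);  /* slow path: byte stores */
            }
        }
    }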
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The ARM A64 decoder's worst case number of TCG ops per instruction
is 266 (for insn 0x4c800000, a post-indexed ST4 multiple-structures
store). Raise the MAX_OP_PER_INSTR define accordingly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1394822294-14837-17-git-send-email-peter.maydell@linaro.org