Commit Graph

436 Commits

Author SHA1 Message Date
Nikunj A Dadhania 8695350357 cpus: fix wrong define name
While the configure script generates the TARGET_SUPPORTS_MTTCG define, one
of the checks in cpus.c uses the wrong name: TARGET_SUPPORT_MTTCG.
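
A minimal preprocessor sketch of the failure mode (illustrative only, not the
actual cpus.c hunk): a misspelled name in an #ifdef check is never defined, so
the guarded code is silently compiled out.

    /* generated by configure when the target supports MTTCG: */
    #define TARGET_SUPPORTS_MTTCG 1

    #ifdef TARGET_SUPPORT_MTTCG      /* typo: never defined, branch is dead */
    /* ... */
    #endif

    #ifdef TARGET_SUPPORTS_MTTCG     /* fixed: matches what configure generates */
    /* ... */
    #endif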

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2017-04-10 10:14:28 +01:00
Pranith Kumar 8cfef89271 tcg: Add a new line after incompatibility warning
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-28 10:52:50 +01:00
Vincent Palatin b3d3a426da hax: fix breakage in locking
use qemu_mutex_lock_iothread consistently in qemu_hax_cpu_thread_fn(), as
done in the other _thread_fn functions, instead of grabbing the BQL
directly. This way we ensure that iothread_locked is properly set.

On v2.9.0-rc0, QEMU was dying in an assertion in the mutex code when
running with '--enable-hax' on either OSX or Windows. This bug was triggered
because the code modification for multithreading added new usages of
qemu_mutex_iothread_locked.
This fixes the breakage on both platforms; I can now run a full
Chromium OS image with HAX kernel acceleration again.
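
A standalone sketch of the problem, using plain pthreads rather than the QEMU
API (all names here are illustrative): the lock wrapper maintains a per-thread
"locked" flag that assertions elsewhere rely on, so taking the raw mutex
directly leaves that flag stale.

    #include <assert.h>
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
    static __thread bool bql_locked;     /* what iothread_locked tracks */

    static void lock_bql(void)           /* analogue of qemu_mutex_lock_iothread */
    {
        pthread_mutex_lock(&bql);
        bql_locked = true;
    }

    static void unlock_bql(void)
    {
        bql_locked = false;
        pthread_mutex_unlock(&bql);
    }

    static void thread_fn_buggy(void)
    {
        pthread_mutex_lock(&bql);        /* raw lock: bql_locked stays false... */
        assert(bql_locked);              /* ...so this assertion fires */
    }

    static void thread_fn_fixed(void)
    {
        lock_bql();                      /* wrapper keeps the flag in sync */
        assert(bql_locked);
        unlock_bql();
    }

    int main(void)
    {
        thread_fn_fixed();               /* thread_fn_buggy() would abort here */
        return 0;
    }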

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <20170320101549.150076-1-vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-20 12:24:43 +01:00
Paolo Bonzini 6b8f0187a4 icount: process QEMU_CLOCK_VIRTUAL timers in vCPU thread
icount has become much slower after tcg_cpu_exec has stopped
using the BQL.  There is also a latent bug that is masked by
the slowness.

The slowness happens because every occurrence of a QEMU_CLOCK_VIRTUAL
timer now has to wake up the I/O thread and wait for it.  The rendez-vous
is mediated by the BQL QemuMutex:

- handle_icount_deadline wakes up the I/O thread with BQL taken
- the I/O thread wakes up and waits on the BQL
- the VCPU thread releases the BQL a little later
- the I/O thread raises an interrupt, which calls qemu_cpu_kick
- the VCPU thread notices the interrupt, takes the BQL to
  process it and waits on it

All this back and forth is extremely expensive, causing a 6 to 8-fold
slowdown when icount is turned on.

One may think that the issue is that the VCPU thread is too dependent
on the BQL, but then the latent bug comes in.  I first tried removing
the BQL completely from the x86 cpu_exec, only to see everything break.
The only way to fix it (and make everything slow again) was to add a dummy
BQL lock/unlock pair.

This is because in -icount mode you really have to process the events
before the CPU restarts executing the next instruction.  Therefore, this
series moves the processing of QEMU_CLOCK_VIRTUAL timers straight in
the vCPU thread when running in icount mode.

The required changes include:

- make the timer notification callback wake up TCG's single vCPU thread
  when run from another thread.  By using async_run_on_cpu, the callback
  can override all_cpu_threads_idle() when the CPU is halted.

- move handle_icount_deadline after qemu_tcg_wait_io_event, so that
  the timer notification callback is invoked after the dummy work item
  wakes up the vCPU thread

- make handle_icount_deadline run the timers instead of just waking the
  I/O thread.

- stop processing the timers in the main loop
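
A hedged, standalone sketch of the shape of the change (illustrative types,
not QEMU's): instead of waking the I/O thread and waiting for it, the vCPU
loop runs the expired QEMU_CLOCK_VIRTUAL callbacks itself once the icount
deadline is reached.

    typedef struct Timer {
        long long expire_ns;
        void (*cb)(void *opaque);
        void *opaque;
    } Timer;

    /* Before: signal the I/O thread and rendez-vous over the BQL so that it
     * runs the expired timers.  After: run them inline in the vCPU thread. */
    static void run_expired_timers(Timer *timers, int n, long long now_ns)
    {
        for (int i = 0; i < n; i++) {
            if (timers[i].expire_ns <= now_ns) {
                timers[i].cb(timers[i].opaque);
            }
        }
    }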

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-14 13:51:34 +01:00
Paolo Bonzini 3f53bc61a4 cpus: define QEMUTimerListNotifyCB for QEMU system emulation
There is no change for now, because the callback just invokes
qemu_notify_event.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-14 13:28:29 +01:00
Alex Bennée c34c762015 cpus.c: add additional error_report when !TARGET_SUPPORT_MTTCG
While we may fail the memory ordering check later, that can be
confusing. So in cases where TARGET_SUPPORT_MTTCG has yet to be
defined we should say so specifically.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2017-03-09 10:38:16 +00:00
Alex Bennée 83fd9629a3 vl/cpus: be smarter with icount and MTTCG
The sense of the test was inverted. Make it simple: if icount is
enabled then we disable MTTCG by default. If the user tries to force
MTTCG upon us then we tell them "no".

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-03-09 10:38:02 +00:00
Paolo Bonzini 18268b6016 KVM: move SIG_IPI handling to kvm-all.c
This lets us remove a bunch of CONFIG_LINUX defines.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Paolo Bonzini 2ae41db262 KVM: do not use sigtimedwait to catch SIGBUS
Call kvm_on_sigbus_vcpu asynchronously from the VCPU thread.
Information for the SIGBUS can be stored in thread-local variables
and processed later in kvm_cpu_exec.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Paolo Bonzini a16fc07ebd cpus: reorganize signal handling code
Move the KVM "eat signals" code under CONFIG_LINUX, in preparation
for moving it to kvm-all.c; reraise non-MCE SIGBUS immediately,
without passing it to KVM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Paolo Bonzini d98d407234 cpus: remove ugly cast on sigbus_handler
The cast is there because sigbus_handler is invoked via sigfd_handler.
But it feels just wrong to use struct qemu_signalfd_siginfo in the
prototype of a function that is passed to sigaction.

Instead, do a simple-minded conversion of qemu_signalfd_siginfo to
siginfo_t.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Pranith Kumar 08e73c48b0 tcg: handle EXCP_ATOMIC exception for system emulation
The patch enables handling atomic code in the guest. This should
preferably be done in cpu_handle_exception(), but the current assumptions
regarding when we can execute atomic sections cause a deadlock.

The current mechanism discards the flags which were set in atomic
execution. We ensure they are properly saved by calling the
cc->cpu_exec_enter/leave() functions around the loop.

As we are running cpu_exec_step_atomic() from the outermost loop we
need to avoid an abort() when single-stepping over atomic code, since the
debug exception longjmp will point to the setjmp in
cpu_exec(). We do this by setting a new jmp_env so that it jumps back
here on an exception.
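
A standalone sketch of the jmp_env trick (the field name is taken from the
message above; everything else is illustrative): installing a fresh jump
buffer around the atomic step means a debug exception lands back here instead
of in the outer cpu_exec() buffer.

    #include <setjmp.h>

    static sigjmp_buf step_jmp_env;      /* plays the role of cpu->jmp_env here */

    static void exec_step_atomic(void)
    {
        if (sigsetjmp(step_jmp_env, 0) == 0) {
            /* run the single atomic region; a debug exception would
             * siglongjmp(step_jmp_env, 1) and resume in the else branch */
        } else {
            /* exception path: clean up and return to the outer loop */
        }
    }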

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
[AJB: tweak title, merge with new patches, add mmap_lock]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
CC: Paolo Bonzini <pbonzini@redhat.com>
2017-02-24 10:32:45 +00:00
Alex Bennée 372579427a tcg: enable thread-per-vCPU
There are a couple of changes that occur at the same time here:

  - introduce a single vCPU qemu_tcg_cpu_thread_fn

  One of these is spawned per vCPU with its own Thread and Condition
  variables. qemu_tcg_rr_cpu_thread_fn is the new name for the old
  single threaded function.

  - the TLS current_cpu variable is now live for the lifetime of MTTCG
    vCPU threads. This is for future work where async jobs need to know
    the vCPU context they are operating in.

The user can now switch on multi-thread behaviour and spawn a thread
per vCPU. For a simple kvm-unit-test like:

  ./arm/run ./arm/locking-test.flat -smp 4 -accel tcg,thread=multi

this will now use 4 vCPU threads and produce the expected FAIL (instead of the
unexpected PASS), as the default mode of the test has no protection when
incrementing a shared variable.

We enable the parallel_cpus flag to ensure we generate correct barrier
and atomic code if supported by the front and backends. This doesn't
automatically enable MTTCG until default_mttcg_enabled() is updated to
check the configuration is supported.
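
A standalone sketch of the "one thread and one condition variable per vCPU"
structure mentioned above, written with plain pthreads (all names are
illustrative; this is not the QEMU code):

    #include <pthread.h>

    typedef struct VCPU {
        int index;
        pthread_t thread;                /* analogue of cpu->thread */
        pthread_cond_t halt_cond;        /* analogue of cpu->halt_cond */
    } VCPU;

    static void *vcpu_thread_fn(void *arg)
    {
        VCPU *cpu = arg;
        /* per-vCPU execution loop would live here */
        (void)cpu;
        return NULL;
    }

    static void spawn_vcpu_threads(VCPU *cpus, int n)
    {
        for (int i = 0; i < n; i++) {
            cpus[i].index = i;
            pthread_cond_init(&cpus[i].halt_cond, NULL);
            pthread_create(&cpus[i].thread, NULL, vcpu_thread_fn, &cpus[i]);
        }
    }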

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[AJB: Some fixes, conditionally, commit rewording]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-02-24 10:32:45 +00:00
Alex Bennée e5143e30fb tcg: remove global exit_request
There are now only two uses of the global exit_request left.

The first ensures we exit the run_loop when we first start to process
pending work and in the kick handler. This is just as easily done by
setting the first_cpu->exit_request flag.

The second use is in the round robin kick routine. The global
exit_request ensured every vCPU would set its local exit_request and
cause a full exit of the loop. Now that the iothread lock isn't being held
while running, we can just rely on the kick handler to push us out as intended.

We lightly re-factor the main vCPU thread to ensure cpu->exit_request
causes us to exit the main loop and process any IO requests that might
come along. As a cpu->exit_request may legitimately get squashed
while processing the EXCP_INTERRUPT exception we also check
cpu->queued_work_first to ensure queued work is expedited as soon as
possible.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-02-24 10:32:45 +00:00
Jan Kiszka 8d04fb55de tcg: drop global lock during TCG code execution
This finally allows TCG to benefit from the iothread introduction: Drop
the global mutex while running pure TCG CPU code. Reacquire the lock
when entering MMIO or PIO emulation, or when leaving the TCG loop.
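
A standalone sketch of the locking pattern described above (plain pthreads and
stub functions, not the QEMU code): the lock is released while pure guest code
runs and reacquired only for device emulation.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;

    static bool run_translated_code(void) { return true; }   /* stand-in for the TCG loop */
    static void handle_io(void) { }                          /* stand-in for MMIO/PIO work */

    static void vcpu_loop(void)
    {
        for (int i = 0; i < 3; i++) {              /* bounded only for the sketch */
            bool need_io = run_translated_code();  /* guest code runs unlocked */

            if (need_io) {
                pthread_mutex_lock(&bql);          /* reacquire for emulation */
                handle_io();
                pthread_mutex_unlock(&bql);
            }
        }
    }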

We have to revert a few optimizations for the current TCG threading
model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not
kicking it in qemu_cpu_kick. We also need to disable RAM block
reordering until we have a more efficient locking mechanism at hand.

Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here.
These numbers demonstrate where we gain something:

20338 jan       20   0  331m  75m 6904 R   99  0.9   0:50.95 qemu-system-arm
20337 jan       20   0  331m  75m 6904 S   20  0.9   0:26.50 qemu-system-arm

The guest CPU was fully loaded, but the iothread could still run mostly
independently on a second core. Without the patch we don't get beyond

32206 jan       20   0  330m  73m 7036 R   82  0.9   1:06.00 qemu-system-arm
32204 jan       20   0  330m  73m 7036 S   21  0.9   0:17.03 qemu-system-arm

We don't benefit significantly, though, when the guest is not fully
loading a host CPU.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
[FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[EGC: fixed iothread lock for cpu-exec IRQ handling]
Signed-off-by: Emilio G. Cota <cota@braap.org>
[AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
[PM: target-arm changes]
Acked-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:45 +00:00
Alex Bennée 791158d93b tcg: rename tcg_current_cpu to tcg_current_rr_cpu
...and make the definition local to cpus.c. In preparation for MTTCG the
concept of a global tcg_current_cpu will no longer make sense. However
we still need to keep track of it in the single-threaded case to be able
to exit quickly when required.

qemu_cpu_kick_no_halt() moves and becomes qemu_cpu_kick_rr_cpu() to
emphasise its use-case. qemu_cpu_kick now kicks the relevant cpu as
well as qemu_kick_rr_cpu() which will become a no-op in MTTCG.

For the time being the setting of the global exit_request remains.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
2017-02-24 10:32:45 +00:00
Alex Bennée 6546706d28 tcg: add kick timer for single-threaded vCPU emulation
Currently we rely on the side effect of the main loop grabbing the
iothread_mutex to give any long running basic block chains a kick to
ensure the next vCPU is scheduled. As this code is being re-factored and
rationalised we now do it explicitly here.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
2017-02-24 10:32:45 +00:00
KONRAD Frederic 8d4e9146b3 tcg: add options for enabling MTTCG
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will however be useful to
be able to turn it on.

As a result MTTCG will default to off unless the combination is
supported. However the user can turn it on for the sake of testing.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[AJB: move to -accel tcg,thread=multi|single, defaults]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2017-02-24 10:32:45 +00:00
Claudio Imbrenda 2d76e82395 move vm_start to cpus.c
This patch:

* moves vm_start to cpus.c.
* exports qemu_vmstop_requested, since it's needed by vm_start.
* extracts vm_prepare_start from vm_start; it does what vm_start did,
  except restarting the cpus.
* vm_start now calls vm_prepare_start and then restarts the cpus.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Message-Id: <1487092068-16562-2-git-send-email-imbrenda@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-02-16 14:06:55 +01:00
Vincent Palatin b0cb0a66d6 Plumb the HAXM-based hardware acceleration support
Use Intel HAX, a kernel-based hardware acceleration module for
Windows (similar to KVM on Linux).

Based on the "target/i386: Add Intel HAX to android emulator" patch
from David Chou <david.j.chou@intel.com>

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <7b9cae28a0c379ab459c7a8545c9a39762bd394f.1484045952.git.vpalatin@chromium.org>
[Drop hax_populate_ram stub. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-01-19 22:07:46 +01:00
Vincent Palatin b39466269b kvm: move cpu synchronization code
Move the generic cpu_synchronize_ functions to the common hw_accel.h header,
in order to prepare for the addition of a second hardware accelerator.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <f5c3cffe8d520011df1c2e5437bb814989b48332.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-01-19 22:07:46 +01:00
Paolo Bonzini 14e6fe12a7 *_run_on_cpu: introduce run_on_cpu_data type
This changes the *_run_on_cpu APIs (and helpers) to pass data in a
run_on_cpu_data type instead of a plain void *. This is because we
sometimes want to pass a target address (target_ulong) and this fails on
32 bit hosts emulating 64 bit guests.
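
A hedged sketch of why a union helps here (the field names are assumptions,
not necessarily QEMU's exact definition): a plain void * cannot carry a 64-bit
guest address on a 32-bit host, but a union with an explicit 64-bit member can.

    #include <stdint.h>

    typedef union {
        void     *host_ptr;     /* host pointers still fit as before */
        uint64_t  target_addr;  /* wide enough for a 64-bit guest address
                                   even when void * is only 32 bits */
    } run_on_cpu_data_sketch;

    /* e.g. a helper could receive
     * (run_on_cpu_data_sketch){ .target_addr = guest_va } */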

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31 15:00:25 +01:00
Alex Bennée 12e9700d7a cpus: re-factor out handle_icount_deadline
In preparation for adding a MTTCG thread we re-factor out a bit of what
will be common code to handle the QEMU_CLOCK_VIRTUAL expiration.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-18-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31 10:51:17 +01:00
Alex Bennée c93bbbefca tcg: cpus rm tcg_exec_all()
In preparation for multi-threaded TCG we remove tcg_exec_all and move
all the CPU cycling into the main thread function. When MTTCG is enabled
we shall use a separate thread function which only handles one vCPU.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

Message-Id: <20161027151030.20863-13-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31 10:51:17 +01:00
Alex Bennée 1be7fcb8aa tcg: move tcg_exec_all and helpers above thread fn
This is a purely mechanical change in preparation for upcoming
re-factoring. Instead of a forward declaration for tcg_exec_all, it and
the associated helper functions are moved in front of the call from
qemu_tcg_cpu_thread_fn.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-12-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31 10:51:16 +01:00
Alex Bennée e8faee06f3 cpus: make all_vcpus_paused() return bool
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>

Message-Id: <20161027151030.20863-2-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31 10:24:45 +01:00
Richard Henderson fdbc2b5722 tcg: Add EXCP_ATOMIC
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-26 08:29:00 -07:00
John Snow 22af08eacf qemu: use bdrv_flush_all for vm_stop et al
Reimplement bdrv_flush_all for vm_stop. In contrast to blk_flush_all,
bdrv_flush_all does not have device model restrictions. This allows
us to flush and halt unconditionally without error.

This allows us to do things like migrate when we have a device with
an open tray, but has a node that may need to be flushed, or nodes
that aren't currently attached to any device and need to be flushed.

Specifically, this allows us to migrate when we have a CDROM with
an open tray.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-29 14:13:37 +02:00
Pavel Dovgalyuk 6d0ceb80ff replay: allow replay stopping and restarting
This patch fixes a bug with stopping and restarting replay
through the monitor.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20160926080815.6992.71818.stgit@PASHA-ISP>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27 11:57:30 +02:00
Paolo Bonzini ab129972c8 cpus-common: move exclusive work infrastructure from linux-user
This will serve as the base for async_safe_run_on_cpu.  Because
start_exclusive uses CPU_FOREACH, merge exclusive_lock with
qemu_cpu_list_lock: together with a call to exclusive_idle (via
cpu_exec_start/end) in cpu_list_add, this protects exclusive work
against concurrent CPU addition and removal.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27 11:57:30 +02:00
Sergey Fedorov d148d90ee8 cpus-common: move CPU work item management to common code
Make CPU work core functions common between system and user-mode
emulation. User-mode does not use run_on_cpu, so do not implement it.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27 11:57:30 +02:00
Sergey Fedorov a5403c69fc cpus: Rename flush_queued_work()
To avoid possible confusion, rename flush_queued_work() to
process_queued_cpu_work().

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1470158864-17651-6-git-send-email-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27 11:57:29 +02:00
Sergey Fedorov fd38b25103 cpus: Move common code out of {async_, }run_on_cpu()
Move the code common between run_on_cpu() and async_run_on_cpu() into a
new function queue_work_on_cpu().

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1470158864-17651-4-git-send-email-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27 11:57:29 +02:00
Alex Bennée e0eeb4a21a cpus: pass CPUState to run_on_cpu helpers
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.

This adds the typedef run_on_cpu_func for helper functions which has an
explicit CPUState * passed as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[Sergey Fedorov:
 - eliminate more CPUState in user data;
 - remove unnecessary user data passing;
 - fix target-s390x/kvm.c and target-s390x/misc_helper.c]
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts)
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> (s390 parts)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1470158864-17651-3-git-send-email-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27 11:57:29 +02:00
Peter Maydell 8212ff86f4 * minor patches here and there
* MTTCG: lock-free TB lookup
 * SCSI: bugfixes for MPTSAS, MegaSAS, LSI53c, vmw_pvscsi
 * buffer_is_zero rewrite (except for one patch)
 * chardev: qemu_chr_fe_write checks
 * checkpatch improvement for markdown preformatted text
 * default-configs cleanups
 * atomics cleanups
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJX2DP2AAoJEL/70l94x66DIBYH/2pW+/HYexCobNn9eVD0Wm08
 im0mRHIU0vjfTaeZSasJPXvA2FyYQLl9KnSFvUFcRiLILpp+hE3QdZ8o0QGlfAmE
 +5MWsPJDXMbOaCOfMKZpZvPfJ6q/lSTg6eiJTPiRgyU7fQgjMDAot1s44ETYGVRu
 myeheEvjSwm/aT9sRIUK6KC7LWXGHFYRYzYJDnvoN6svHZ10DcEDhve8bdmixFk0
 0zUY4RmPk8n46SntDG65tgAlKlzfSuPOesvbpcQIYe1H+r+uJt9BST7MjKdbdDQv
 b/LDzMx8CTbd2tDPL6JWgjBGBZ6SZ4Q6x0a45kzJRtkS+BPtNeGGzBVwULVN4RY=
 =eAJS
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* minor patches here and there
* MTTCG: lock-free TB lookup
* SCSI: bugfixes for MPTSAS, MegaSAS, LSI53c, vmw_pvscsi
* buffer_is_zero rewrite (except for one patch)
* chardev: qemu_chr_fe_write checks
* checkpatch improvement for markdown preformatted text
* default-configs cleanups
* atomics cleanups

# gpg: Signature made Tue 13 Sep 2016 18:14:30 BST
# gpg:                using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (58 commits)
  cutils: Add generic prefetch
  cutils: Add SSE4 version
  cutils: Add test for buffer_is_zero
  cutils: Remove ppc buffer zero checking
  cutils: Remove aarch64 buffer zero checking
  cutils: Rearrange buffer_is_zero acceleration
  cutils: Export only buffer_is_zero
  cutils: Remove SPLAT macro
  cutils: Move buffer_is_zero and subroutines to a new file
  ppc: do not redefine CPUPPCState
  x86/lapic: Load LAPIC state at post_load
  optionrom: do not rely on compiler's bswap optimization
  checkpatch: Fix whitespace checks for documentation code blocks
  atomics: Use __atomic_*_n() variant primitives
  atomics: Remove redundant barrier()'s
  kvm-all: drop kvm_setup_guest_memory
  i8257: Make device "i8257" unavailable with -device
  Revert "megasas: remove useless check for cmd->frame"
  char: convert qemu_chr_fe_write to qemu_chr_fe_write_all
  hw: replace most use of qemu_chr_fe_write with qemu_chr_fe_write_all
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

 Conflicts:
	cpus.c
	tests/Makefile.include
2016-09-15 10:24:22 +01:00
Cao jin d90f3cca87 cpus: update comments
The value returned by cpu_get_clock() has the offset added to it,
so it is the time elapsed in the virtual machine while the VM is active.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Message-Id: <1469790338-28990-4-git-send-email-caoj.fnst@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-13 19:08:44 +02:00
Cao jin 1d45cea549 cpus: rename local variable to meaningful one
The function actually returns a monotonic time value in nanoseconds,
so the name "ticks" is not suitable.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Message-Id: <1469790338-28990-3-git-send-email-caoj.fnst@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-13 19:08:44 +02:00
Cao jin 3224e8786f timer/cpus: fix some typos and update some comments
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-09-13 18:12:34 +03:00
Emilio G. Cota 03719e44b6 seqlock: rename write_lock/unlock to write_begin/end
It is a more appropriate name, now that the mutex embedded
in the seqlock is gone.

Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-4-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-06-11 22:59:34 +00:00
Emilio G. Cota ccdb3c1fc8 seqlock: remove optional mutex
This option is unused; besides, it bloats the struct when not needed.
Let's just let writers define their own locks elsewhere.

Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-3-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-06-11 22:59:34 +00:00
Bharata B Rao 2c579042e3 cpu: Add a sync version of cpu_remove()
This sync API will be used by the CPU hotplug code to wait for the CPU to
completely get removed before flagging the failure to the device_add
command.

Sync version of this call is needed to correctly recover from CPU
realization failures when ->plug() handler fails.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-05-30 14:17:05 +10:00
Gu Zheng 4c055ab54f cpu: Reclaim vCPU objects
In order to deal well with KVM vCPUs (which cannot be removed without any
protection), we do not close the KVM vCPU fd; instead we record it in a list and
mark it as stopped, so that we can reuse it for a subsequent CPU hot-add request
if possible. This is also the approach that the KVM developers suggested:
https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html

Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
               [- Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu()
                  isn't needed as it is done from cpu_exec_exit()
                - Use iothread mutex instead of global mutex during
                  destroy
                - Don't cleanup vCPU object from vCPU thread context
                  but leave it to the callers (device_add/device_del)]
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-05-30 14:03:59 +10:00
Bandan Das 1453e6627d cpus: call the core nmi injection function
We can call the common function here directly, since x86-specific
actions will be taken care of by the arch-specific NMI handler.

Signed-off-by: Bandan Das <bsd@redhat.com>
Message-Id: <1463761717-26558-4-git-send-email-bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-23 16:53:47 +02:00
Peter Maydell a2d1761da1 cpus.c: Use pthread_sigmask() rather than sigprocmask()
On Linux, sigprocmask() and pthread_sigmask() are in practice the
same thing (they only set the signal mask for the calling thread),
but the documentation states that the behaviour of sigprocmask() in a
multithreaded process is undefined. Use pthread_sigmask() instead
(which is what we do in almost all places in QEMU that alter the
signal mask already).
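
A minimal standalone example of the preferred call; pthread_sigmask has the
same signature as sigprocmask but is defined to act on the calling thread
only (the choice of SIGBUS here is just for illustration):

    #include <pthread.h>
    #include <signal.h>

    static void block_sigbus_in_this_thread(void)
    {
        sigset_t set;

        sigemptyset(&set);
        sigaddset(&set, SIGBUS);
        /* was: sigprocmask(SIG_BLOCK, &set, NULL); undefined in MT processes */
        pthread_sigmask(SIG_BLOCK, &set, NULL);
    }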

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1463420039-29761-1-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-23 16:53:45 +02:00
Paolo Bonzini 63c915526d cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions.  It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19 16:42:29 +02:00
Paolo Bonzini 33c11879fd qemu-common: push cpu.h inclusion out of qemu-common.h
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19 16:42:29 +02:00
Alex Bennée ccffff48c9 cpus: don't use atomic_read for vm_clock_warp_start
As vm_clock_warp_start is a 64 bit value this causes problems for the
compiler trying to come up with a suitable atomic operation on 32 bit
hosts. Because the variable is protected by vm_clock_seqlock, we check its
value inside a seqlock critical section.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1459780549-12942-2-git-send-email-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-04-05 11:46:52 +02:00
Rutuja Shah 73bcb24d93 Replaced get_tick_per_sec() by NANOSECONDS_PER_SECOND
This patch replaces get_ticks_per_sec() calls with the macro
NANOSECONDS_PER_SECOND. Also, as there are no callers, get_ticks_per_sec()
is then removed.  This replacement improves the readability and
understandability of code.

For example,

    timer_mod(fdctrl->result_timer,
	      qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + (get_ticks_per_sec() / 50));

NANOSECONDS_PER_SECOND makes it obvious that qemu_clock_get_ns
matches the unit of the expression on the right side of the plus.
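
For comparison, the same call after the conversion (reconstructed from the
substitution described above, not copied from the patch):

    timer_mod(fdctrl->result_timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + (NANOSECONDS_PER_SECOND / 50));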

Signed-off-by: Rutuja Shah <rutu.shah.26@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-22 22:20:17 +01:00
Max Reitz da31d594cf block: Use blk_{commit,flush}_all() consistently
Replace bdrv_commit_all() and bdrv_flush_all() by their BlockBackend
equivalents.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-03-17 15:47:56 +01:00
Pavel Dovgalyuk e76d1798fa icount: decouple warp calls
The qemu_clock_warp function is called to update the virtual clock when the CPU
is sleeping. This function includes a replay checkpoint to make execution
deterministic in icount mode.
The record/replay module flushes the async event queue at checkpoints.
Some of the events (e.g., block device operations) include interaction
with hardware. E.g., the APIC polled by block devices sets one of the IRQ flags.
The flag to be set depends on the currently executing thread (CPU or iothread).
Therefore in replay mode we have to process the checkpoints in the same thread
as they were recorded.
The qemu_clock_warp function (and its checkpoint) may be called from a different
thread. This patch decouples the two different execution cases of this function:
a call when the CPU is sleeping from the iothread, and a call from the cpu
thread to update the virtual clock.
The first task is performed by the qemu_start_warp_timer function. It sets the
warp timer event to the moment of the nearest pending virtual timer.
The second function (qemu_account_warp_timer) is called from the cpu thread
before execution of the code. It advances the virtual clock by adding the
length of the period while the CPU was sleeping.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20160310115609.4812.44986.stgit@PASHA-ISP>
[Update docs. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-15 18:23:45 +01:00
Pavel Dovgalyuk 281b2201e4 icount: remove obsolete warp call
The qemu_clock_warp call in the qemu_tcg_wait_io_event function is not needed
anymore, because it is already called in every iteration of main_loop_wait.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20160310115603.4812.67559.stgit@PASHA-ISP>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-15 18:23:42 +01:00
Pranith Kumar 778d9f9b25 icount: possible options for sleep are on or off
icount sleep takes on or off as options. A few places mention sleep=no
which is not accepted. This patch corrects them.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <1456499811-16819-1-git-send-email-bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-07 13:26:38 +01:00
Eric Blake 544a373159 qapi: Don't box branches of flat unions
There's no reason to do two malloc's for a flat union; let's just
inline the branch struct directly into the C union branch of the
flat union.

Surprisingly, fewer clients were actually using explicit references
to the branch types in comparison to the number of flat unions
thus modified.

This lets us reduce the hack in qapi-types:gen_variants() added in
the previous patch; we no longer need to distinguish between
alternates and flat unions.

The change to unboxed structs means that u.data (added in commit
cee2dedb) is now coincident with random fields of each branch of
the flat union, whereas beforehand it was only coincident with
pointers (since all branches of a flat union have to be objects).
Note that this was already the case for simple unions - but there
we got lucky.  Remember, visit_start_union() blindly returns true
for all visitors except for the dealloc visitor, where it returns
the value !!obj->u.data, and that this result then controls
whether to proceed with the visit to the variant.  Pre-patch,
this meant that flat unions were testing whether the boxed pointer
was still NULL, and thereby skipping visit_end_implicit_struct()
and avoiding a NULL dereference if the pointer had not been
allocated.  The same was true for simple unions where the current
branch had pointer type, except there we bypassed visit_type_FOO().
But for simple unions where the current branch had scalar type, the
contents of that scalar meant that the decision to call
visit_type_FOO() was data-dependent - the reason we got lucky there
is that visit_type_FOO() for all scalar types in the dealloc visitor
is a no-op (only the pointer variants had anything to free), so it
did not matter whether the dealloc visit was skipped.  But with this
patch, we would risk leaking memory if we could skip a call to
visit_type_FOO_fields() based solely on a data-dependent decision.

But notice: in the dealloc visitor, visit_type_FOO() already handles
a NULL obj - it was only the visit_type_implicit_FOO() that was
failing to check for NULL. And now that we have refactored things to
have the branch be part of the parent struct, we no longer have a
separate pointer that can be NULL in the first place.  So we can just
delete the call to visit_start_union() altogether, and blindly visit
the branch type; there is no change in behavior except to the dealloc
visitor, where we now unconditionally visit the branch, but where that
visit is now always safe (for a flat union, we can no longer
dereference NULL, and for a simple union, visit_type_FOO() was already
safely handling NULL on pointer types).

Unfortunately, simple unions are not as easy to switch to unboxed
layout; because we are special-casing the hidden implicit type with
a single 'data' member, we really DO need to keep calling another
layer of visit_start_struct(), with a second malloc; although there
are some cleanups planned for simple unions in later patches.

visit_start_union() and gen_visit_implicit_struct() are now unused.
Drop them.

Note that after this patch, the only remaining use of
visit_start_implicit_struct() is for alternate types; the next patch
will do further cleanup based on that fact.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1455778109-6278-14-git-send-email-eblake@redhat.com>
[Dead code deletion squashed in, commit message updated accordingly]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2016-02-19 11:08:57 +01:00
Eric Blake 86ae191163 qapi: Fix compilation failure on MIPS and SPARC
Commit 86f4b687 broke compilation on MIPS and SPARC, which have a
preprocessor pollution of '#define mips 1' and '#define sparc 1',
respectively.  Treat it the same way as we do for the pollution with
'unix', so that QMP remains backwards compatible and only the C code
needs to use the alternative 'q_mips', 'q_sparc' spelling.

CC: James Hogan <james.hogan@imgtec.com>
CC: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eric Blake <eblake@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2016-02-08 17:29:57 +01:00
Peter Maydell 7b31bbc2e6 exec: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-4-git-send-email-peter.maydell@linaro.org
2016-01-29 15:07:22 +00:00
Peter Maydell ba3fb2f023 * chardev support for TLS and leak fix
* NBD fix from Denis
 * condvar fix from Dave
 * kvm_stat and dump-guest-memory almost rewrite
 * mem-prealloc fix from Luiz
 * manpage style improvement
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJWp4mKAAoJEL/70l94x66DersH/iUfwRTL7tmGOiUX73Qm32da
 QseRiC5E5OaTLOGm+Q0Aehjq6Q18zgdiz/+/wSTPjnLmOiSDn6Sr6yB/URSMwhOE
 +JVX3+UOpfHpQ1KHlBesIjS/WBSS1691ND1OPcHbHHa6UYbwEUTEc00hus8nVx6J
 wyeteUoBryZA177rjVNb9sH7ncNFuuiQDfkr5pmC5f5JEsDiSK9hDmlg9sFnTWrO
 XIVqQb0PD+EbOuufR4z3PTLIgbZEXegEgWOsE1FLBTVY/CZAkujynccOENIujFVv
 CEhHJrGWo2NU0yeVJ1UlHREQyK+suIHgsiJlQKvAW8ZyFNqpy3+sWSEo7ZBpB6U=
 =bVe7
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* chardev support for TLS and leak fix
* NBD fix from Denis
* condvar fix from Dave
* kvm_stat and dump-guest-memory almost rewrite
* mem-prealloc fix from Luiz
* manpage style improvement

# gpg: Signature made Tue 26 Jan 2016 14:58:18 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"

* remotes/bonzini/tags/for-upstream: (49 commits)
  scripts/dump-guest-memory.py: Fix module docstring
  scripts/dump-guest-memory.py: Introduce multi-arch support
  scripts/dump-guest-memory.py: Cleanup functions
  scripts/dump-guest-memory.py: Improve python 3 compatibility
  scripts/dump-guest-memory.py: Make methods functions
  scripts/dump-guest-memory.py: Move constants to the top
  nbd: add missed aio_context_acquire in nbd_export_new
  memory: exit when hugepage allocation fails if mem-prealloc
  cpus: use broadcast on qemu_pause_cond
  scripts/kvm/kvm_stat: Add optparse description
  scripts/kvm/kvm_stat: Add interactive filtering
  scripts/kvm/kvm_stat: Fixup filtering
  scripts/kvm/kvm_stat: Fix rlimit for unprivileged users
  scripts/kvm/kvm_stat: Read event values as u64
  scripts/kvm/kvm_stat: Cleanup and pre-init perf_event_attr
  scripts/kvm/kvm_stat: Fix output formatting
  scripts/kvm/kvm_stat: Make tui function a class
  scripts/kvm/kvm_stat: Remove unneeded X86_EXIT_REASONS
  scripts/kvm/kvm_stat: Group arch specific data
  scripts/kvm/kvm_stat: Cleanup of Event class
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-01-26 15:09:13 +00:00
Dr. David Alan Gilbert 96bce6831b cpus: use broadcast on qemu_pause_cond
Jiri saw a hang on pause_all_vcpus called from postcopy_start,
where the cpus are all apparently stopped ('stopped' flag set)
but pause_all_vcpus is still stuck in a cond_wait on qemu_pause_cond.
We suspect this is happening if a qmp_stop is called at about the
same time as the postcopy code calls pause_all_vcpus;
although they both should have the main lock held, Paolo spotted
that the cond_wait unlocks the global lock, so perhaps they both
could end up waiting at the same time?
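
A standalone pthread sketch of why broadcast matters here (illustrative names,
not the QEMU code): cond_wait releases the lock, so two callers can end up
waiting on the same condition, and a single signal would wake only one of them.

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t paused_cond = PTHREAD_COND_INITIALIZER;
    static int all_vcpus_stopped;

    /* e.g. both qmp_stop and postcopy_start could block here */
    static void wait_until_paused(void)
    {
        pthread_mutex_lock(&lock);
        while (!all_vcpus_stopped) {
            pthread_cond_wait(&paused_cond, &lock);   /* drops 'lock' while waiting */
        }
        pthread_mutex_unlock(&lock);
    }

    static void note_all_vcpus_stopped(void)
    {
        pthread_mutex_lock(&lock);
        all_vcpus_stopped = 1;
        pthread_cond_broadcast(&paused_cond);  /* signal would wake only one waiter */
        pthread_mutex_unlock(&lock);
    }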

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reported-by: Jiri Denemark <jdenemar@redhat.com>
Message-Id: <1453716498-27238-1-git-send-email-dgilbert@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-01-26 15:58:14 +01:00
Peter Crosthwaite 6731d864f8 qom/cpu: Add MemoryRegion property
Add a MemoryRegion property, which if set is used to construct
the CPU's initial (default) AddressSpace.

Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[PMM: code is moved from qom/cpu.c to exec.c to avoid having to
 make qom/cpu.o be a non-common object file; code to use the
 MemoryRegion and to default it to system_memory added.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-01-21 14:15:06 +00:00
Peter Maydell 12ebc9a76d exec.c: Allow target CPUs to define multiple AddressSpaces
Allow multiple calls to cpu_address_space_init(); each
call adds an entry to the cpu->ases array at the specified
index. It is up to the target-specific CPU code to actually use
these extra address spaces.

Since this multiple AddressSpace support won't work with
KVM, add an assertion to avoid confusing failures.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-01-21 14:15:04 +00:00
Peter Maydell 56943e8cc1 exec.c: Don't set cpu->as until cpu_address_space_init
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.

This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).

For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-01-21 14:15:04 +00:00
Eric Blake 86f4b6871c cpu: Convert CpuInfo into flat union
The CpuInfo struct is used only by the 'query-cpus' output
command, so we are free to modify it by adding fields (clients
are already supposed to ignore unknown output fields), or by
changing optional members to mandatory, while still keeping
QMP wire compatibility with older versions of qemu.

When qapi type CpuInfo was originally created for 0.14, we had
no notion of a flat union, and instead just listed a bunch of
optional fields with documentation about the mutually-exclusive
choice of which instruction pointer field(s) would be provided
for a given architecture.  But now that we have flat unions and
introspection, it is better to segregate off which fields will
be provided according to the actual architecture.  With this in
place, we no longer need the fields to be optional, because the
choice of the new 'arch' discriminator serves that role.

This has an additional benefit: the old all-in-one struct was
the only place in the code base that had a case-sensitive
naming of members 'pc' vs. 'PC'.  Separating these spellings
into different branches of the flat union will allow us to add
restrictions against future case-insensitive collisions, since
that is generally a poor interface practice.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1447836791-369-25-git-send-email-eblake@redhat.com>
[Spelling of CPUInfo{SPARC,PPC,MIPS} fixed]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2015-12-17 08:21:28 +01:00
Wen Congyang b2780d3253 call bdrv_drain_all() even if the vm is stopped
There are still I/O operations when the vm is stopped. For example,
stop the vm, and do block migration. In this case, we don't drain all
I/O operations, and may hit the following problem:

qemu-system-x86_64: migration/block.c:731: block_save_complete: Assertion `block_mig_state.submitted == 0' failed.

Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Message-Id: <564EE92E.4070701@cn.fujitsu.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-26 16:47:44 +01:00
Peter Maydell 9319738080 So here it is, let's see what happens.
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQEcBAABCAAGBQJWPHM6AAoJEL/70l94x66DK5YIAJTNthYWL8eNhQ1iek6CLlV+
 etVXm3JDmkV0zOfYVHLBb44VLZ6I1ocas+57F/kmz7SKpMLiI6bMXRxhTSkiO4D+
 3N36cWQf3fq+P0DmxuikMlYGz8V6QQ5PQE2xJKV0ZIWAkiqInxilkN3qt81sNR+A
 A9Ohom3sc0eGHyYJcVDK4krbnNSAZjIB2yMWperw61x+GYAhxjA02HPUgB32KK6q
 KrdnKmnRu9Cw6y4wTCbbDITJztPexZYsX2DOJh30wC0eNcE+MZ7J2im8Frpxe+Ml
 C8MUuvSqLOyeu9tUfrXGzd6kMtEKrmU+fh2nNbxJbtfowDjkW2jcIEgC0UjkGE4=
 =BF1q
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream-replay' into staging

So here it is, let's see what happens.

# gpg: Signature made Fri 06 Nov 2015 09:30:34 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"

* remotes/bonzini/tags/for-upstream-replay:
  replay: recording of the user input
  replay: command line options
  replay: replay blockers for devices
  replay: initialization and deinitialization
  replay: ptimer
  bottom halves: introduce bh call function
  replay: checkpoints
  icount: improve counting for record/replay
  replay: shutdown event
  replay: recording and replaying clock ticks
  replay: asynchronous events infrastructure
  replay: interrupts and exceptions
  cpu: replay instructions sequence
  cpu-exec: allow temporary disabling icount
  replay: introduce icount event
  replay: introduce mutex to protect the replay log
  replay: internal functions for replay log
  replay: global variables and function stubs

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-11-06 11:31:40 +00:00
Pavel Dovgalyuk 8bd7f71d79 replay: checkpoints
This patch introduces checkpoints that synchronize the cpu thread and the iothread.
When a checkpoint is reached in the code, all asynchronous events from the queue
are executed.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150917162444.8676.52916.stgit@PASHA-ISP.def.inno>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
2015-11-06 10:16:03 +01:00
Pavel Dovgalyuk efab87cf79 icount: improve counting for record/replay
The icount_warp_rt function is called by qemu_clock_warp and as the
callback of the icount_warp timer. This patch adds a call to qemu_clock_warp
into the main_loop_wait function, because an icount warp may be missed
in record/replay mode when the CPU is sleeping.
This patch also disables calling this function from the timer, because
it is not needed after the modifications to main_loop_wait.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150917162439.8676.38290.stgit@PASHA-ISP.def.inno>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
2015-11-06 10:16:02 +01:00
Pavel Dovgalyuk 8eda206e09 replay: recording and replaying clock ticks
Clock ticks are considered sources of non-deterministic data for the
virtual machine. This patch implements saving the clock values when they
are acquired (virtual and host clocks).
When replaying the execution, the corresponding values are read from the log
and transferred to the module that wants to read them.
Such a design requires the clock polling to be synchronized. Sometimes that is
not the case - e.g. when timeouts for timer lists are checked. In this case
we use a cached value of the clock, passing it to the client code.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150917162427.8676.36558.stgit@PASHA-ISP.def.inno>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
2015-11-06 10:16:02 +01:00
Pavel Dovgalyuk 8b42704441 cpu: replay instructions sequence
This patch adds calls to the replay functions into the icount setup block.
In record mode the number of executed instructions is written to the log.
In replay mode the number of instructions to execute is taken from the replay log.
When the replayed instruction counter expires, the qemu_notify_event()
function is called to wake up the iothread.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150917162405.8676.31890.stgit@PASHA-ISP.def.inno>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-05 12:19:09 +01:00
Liang Li 6388acc853 Revert "Introduce cpu_clean_all_dirty"
This reverts commit de9d61e83d.

Now that 'cpu_clean_all_dirty' is useless, we can revert the related code.

Conflicts:
	include/sysemu/kvm.h

Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1446695464-27116-3-git-send-email-liang.z.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-05 11:28:23 +01:00
Christopher Covington 4a7428c5a7 s/cpu_get_real_ticks/cpu_get_host_ticks/
This should help clarify the purpose of the function that returns
the host system's CPU cycle count.

Signed-off-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
ppc portion
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-10-08 19:46:01 +03:00
Jason J. Herne 2adcc85d40 cpu: Provide vcpu throttling interface
Provide a method to throttle guest cpu execution. CPUState is augmented with
timeout controls and throttle start/stop functions. To throttle the guest cpu
the caller simply has to call the throttle set function and provide a percentage
of throttle time.
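
A hedged sketch of the arithmetic implied by "a percentage of throttle time"
(the period length and names are assumptions, not the actual interface):

    /* For a 25% throttle over an assumed 10 ms period, the vCPU would sleep
     * 2.5 ms out of every 10 ms. */
    static long long throttle_sleep_ns(double throttle_pct, long long period_ns)
    {
        return (long long)(period_ns * (throttle_pct / 100.0));
    }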

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
2015-09-30 09:42:04 +02:00
KONRAD Frederic d5f8d61390 cpus: remove tcg_halt_cond and tcg_cpu_thread globals
This hides the tcg_halt_cond and tcg_cpu_thread global variables
inside qemu_tcg_init_vcpu.  Multi-threaded TCG will need one
QemuCond and one QemuThread per virtual cpu, so it's preferable
to use cpu->halt_cond and cpu->thread.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Message-Id: <1439220437-23957-9-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:55 +02:00
Paolo Bonzini 376692b9dc cpus: protect work list with work_mutex
Protect the list of queued work items with something other than
the BQL, as a preparation for running the work items outside it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:55 +02:00
Paolo Bonzini e0c382113f tcg: signal-free qemu_cpu_kick
Signals are slow and do not exist on Win32.  The previous patches
have done most of the legwork to introduce memory barriers (some
of them were even there already for the sake of Windows!) and
we can now set the flags directly in the iothread.

qemu_cpu_kick_thread is not used anymore on TCG, since the TCG thread is
never outside usermode while the CPU is running (not halted).  Instead run
the content of the signal handler (now in qemu_cpu_kick_no_halt) directly.
qemu_cpu_kick_no_halt is also used in qemu_mutex_lock_iothread to avoid
the overhead of qemu_cond_broadcast.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:54 +02:00
Paolo Bonzini 9102dedaa1 use qemu_cpu_kick instead of cpu_exit or qemu_cpu_kick_thread
Use the same API to trigger interruption of a CPU, no matter if
under TCG or KVM.  There is no difference: these calls come from
the CPU thread, so the qemu_cpu_kick calls will send a signal
to the running thread and it will be processed synchronously,
just like a call to cpu_exit.  The only difference is in the
overhead, but neither call to cpu_exit (now qemu_cpu_kick)
is in a hot path.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:54 +02:00
Paolo Bonzini aed807c8e2 tcg: synchronize exit_request and tcg_current_cpu accesses
Synchronize the remaining pair of accesses in cpu_signal.  These should
be necessary on Windows as well, at least in theory.  Probably
SuspendProcess and ResumeProcess introduce some implicit memory
barrier.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:54 +02:00
Paolo Bonzini 9373e63297 tcg: introduce tcg_current_cpu
This is already useful on Windows in order to remove tls.h, because
accesses to current_cpu are done from a different thread on that
platform.  It will be used on POSIX platforms as soon as TCG stops using
signals to interrupt the execution of translated code.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:53 +02:00
Aníbal Limón 46036b2462 cpus.c: qemu_mutex_lock_iothread fix race condition at cpu thread init
When QEMU starts, the RCU thread executes qemu_mutex_lock_iothread,
causing the error "qemu:qemu_cpu_kick_thread: No such process", and exits.

This doesn't occur frequently, but in glibc the thread id can exist while
this does not guarantee that the thread is in the active/running state. If
a sleep(1) is inserted after the newthread assignment [1], the issue appears.

So don't assume the thread exists just because first_cpu->thread is set;
instead change the validation of the cpu to cpu->created, which is set by the
cpu threads (kvm, tcg, dummy).

[1] https://sourceware.org/git/?p=glibc.git;a=blob;f=nptl/pthread_create.c;h=d10f4ea8004e1d8f3a268b95cc0f8d93b8d89867;hb=HEAD#l621
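
A standalone pthread sketch of the safer check (illustrative, not the cpus.c
code): instead of treating a non-NULL thread id as proof that the thread is
running, wait for a 'created' flag that the new thread itself sets.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t created_cond = PTHREAD_COND_INITIALIZER;
    static bool created;                 /* analogue of cpu->created */

    static void *cpu_thread_fn(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        created = true;                  /* set from inside the new thread */
        pthread_cond_broadcast(&created_cond);
        pthread_mutex_unlock(&lock);
        /* ... thread main loop ... */
        return NULL;
    }

    static void start_vcpu_and_wait(pthread_t *tid)
    {
        pthread_create(tid, NULL, cpu_thread_fn, NULL);
        pthread_mutex_lock(&lock);
        while (!created) {               /* don't assume the thread runs yet */
            pthread_cond_wait(&created_cond, &lock);
        }
        pthread_mutex_unlock(&lock);
    }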

Cc: qemu-stable@nongnu.org
Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Message-Id: <1441313313-3040-1-git-send-email-anibal.limon@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-07 18:14:03 +02:00
Paolo Bonzini 414b15c909 exec: drop cpu_can_do_io, just read cpu->can_do_io
After commit 626cf8f (icount: set can_do_io outside TB execution,
2014-12-08), can_do_io is set to 1 if not executing code.  It is
no longer necessary to make this assumption in cpu_can_do_io.

It is also possible to remove the use_icount test, simply by
never setting cpu->can_do_io to 0 unless use_icount is true.

With these changes cpu_can_do_io boils down to a read of
cpu->can_do_io.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-14 23:40:32 +02:00
Paolo Bonzini ab28bd2312 rcu: actually register threads that have RCU read-side critical sections
Otherwise, grace periods are detected too early!

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-07-24 13:57:45 +02:00
Peter Crosthwaite ea3e984740 cpu-exec: Purge all uses of ENV_GET_CPU()
Remove un-needed usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers and retrieving the env_ptr as minimally needed.

Scripted conversion for target-* change:

for I in target-*/cpu.h; do
    sed -i \
    's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
    $I;
done

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2015-07-09 15:20:40 +02:00
Peter Crosthwaite 3d57f7893c cpu: Change tcg_cpu_exec() arg to cpu, not env
The sole caller of this function dereferences cpu->env_ptr only for this
function to convert it straight back into the CPU pointer. Pass in the
CPU pointer instead and grab the env pointer locally in the function.
Removes a core code usage of ENV_GET_CPU().

Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2015-07-09 15:20:40 +02:00
Paolo Bonzini afbe70535f main-loop: introduce qemu_mutex_iothread_locked
This function will be used to avoid recursive locking of the iothread lock
whenever address_space_rw/ld*/st* are called with the BQL held, which is
almost always the case.

Tracking whether the iothread is owned is very cheap (just use a TLS
variable) but requires some care because now the lock must always be
taken with qemu_mutex_lock_iothread().  Previously this wasn't the case.
Outside TCG mode this is not a problem.  In TCG mode, we need to be
careful and avoid the "prod out of compiled code" step if already
in a VCPU thread.  This is easily done with a check on current_cpu,
i.e. qemu_in_vcpu_thread().
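
A minimal sketch of the tracking, assuming the TLS flag is called
iothread_locked and the BQL is qemu_global_mutex (the kick-out-of-TCG
step is elided):

    static __thread bool iothread_locked;

    bool qemu_mutex_iothread_locked(void)
    {
        return iothread_locked;
    }

    void qemu_mutex_lock_iothread(void)
    {
        /* "prod out of compiled code" step omitted in this sketch */
        qemu_mutex_lock(&qemu_global_mutex);
        iothread_locked = true;
    }

    void qemu_mutex_unlock_iothread(void)
    {
        iothread_locked = false;
        qemu_mutex_unlock(&qemu_global_mutex);
    }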

Hopefully, multithreaded TCG will get rid of the whole logic to kick
VCPUs whenever an I/O event occurs!

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-3-git-send-email-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-07-01 15:45:50 +02:00
Paolo Bonzini 2e7f7a3c86 main-loop: use qemu_mutex_lock_iothread consistently
The next patch will require the BQL to be always taken with
qemu_mutex_lock_iothread(), while right now this isn't the case.

Outside TCG mode this is not a problem.  In TCG mode, we need to be
careful and avoid the "prod out of compiled code" step if already
in a VCPU thread.  This is easily done with a check on current_cpu,
i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick
VCPUs whenever an I/O event occurs!

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-2-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-07-01 15:45:50 +02:00
Markus Armbruster d49b683644 qerror: Move #include out of qerror.h
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
2015-06-22 18:20:40 +02:00
Markus Armbruster c6bd8c706a qerror: Clean up QERR_ macros to expand into a single string
These macros expand into error class enumeration constant, comma,
string.  Unclean.  Has been that way since commit 13f59ae.

The error class is always ERROR_CLASS_GENERIC_ERROR since the previous
commit.

Clean up as follows:

* Prepend every use of a QERR_ macro by ERROR_CLASS_GENERIC_ERROR, and
  delete it from the QERR_ macro.  No change after preprocessing.

* Rewrite error_set(ERROR_CLASS_GENERIC_ERROR, ...) into
  error_setg(...).  Again, no change after preprocessing.
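
For illustration, taking QERR_UNSUPPORTED as a representative example
(string quoted from memory, not the exact definition), the transformation
looks roughly like this:

    /* before: the macro bundles the error class and the format string */
    #define QERR_UNSUPPORTED ERROR_CLASS_GENERIC_ERROR, "this feature or command is not currently supported"
    error_set(errp, QERR_UNSUPPORTED);

    /* after: the macro is just the string, and the caller uses error_setg() */
    #define QERR_UNSUPPORTED "this feature or command is not currently supported"
    error_setg(errp, QERR_UNSUPPORTED);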

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
2015-06-22 18:20:40 +02:00
Juan Quintela 5cd8cadae8 migration: Use normal VMStateDescriptions for Subsections
We create optional sections with this patch.  But we already have
optional subsections.  Instead of having two mechanisms that do the
same thing, we can just generalize it.

For subsections we just change:

- Add a needed function to VMStateDescription
- Remove VMStateSubsection (after removal of the needed function
  it is just a VMStateDescription)
- Adjust the whole tree, moving the needed function to the corresponding
  VMStateDescription

Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-06-12 06:53:57 +02:00
Victor CLEMENT d7a0f71d9a icount: print a warning if there is no more deadline in sleep=no mode
While qemu is running in sleep=no mode, a warning will be printed
when no timer deadline is set.
As this mode is intended for getting deterministic virtual time, if no
timer is set on the virtual clock this determinism is broken.

Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-4-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05 17:10:00 +02:00
Victor CLEMENT f1f4b57e88 icount: add sleep parameter to the icount option to set icount_sleep mode
The 'sleep' parameter sets the icount_sleep mode, which is enabled by
default. To disable it, add the 'sleep=no' parameter (or 'nosleep') to the
qemu -icount option.

Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-3-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05 17:10:00 +02:00
Victor CLEMENT 5045e9d912 icount: implement a new icount_sleep mode toggling real-time cpu sleep
When the icount_sleep mode is disabled, QEMU_CLOCK_VIRTUAL runs at the
maximum possible speed by warping the sleep times of the virtual cpu to the
soonest clock deadline. The virtual clock is updated only according to
the instruction counter.

Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-2-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05 17:10:00 +02:00
Eduardo Habkost 58f88d4b7e qmp: Add qom_path field to query-cpus command
This will allow clients to query additional information directly using
qom-get on the CPU objects.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2015-05-14 17:25:46 +02:00
Emilio G. Cota c28e399cad cpus: use first_cpu macro instead of QTAILQ_FIRST(&cpus)
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-04-30 16:05:48 +03:00
Peter Crosthwaite bdd459a00a cpus: Don't kick un-realized cpus.
Following a464982499, it's now possible for
there to be attempts to take the BQL before CPUs have been realized in
cases where a machine model inits peripherals before the first CPU.

BQL lock acquisition kicks the first_cpu, leading to a segfault if this
happens pre-realize. Guard the CPU kick routine to perform no action for
a CPU that doesn't exist or doesn't have a thread yet.

There was a fix to this with commit
6b49809c59, but the check there misses
the case where the CPU has been inited and not realized. Strengthen the
check to make sure that the first_cpu has a thread (i.e. it is
realized) before allowing the kick.
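
A sketch of the strengthened check; the helper name below is hypothetical,
and the real patch folds the test into the existing kick path:

    /* hypothetical helper: only kick once the first CPU is realized */
    static void kick_first_cpu_if_ready(void)
    {
        if (!first_cpu || !first_cpu->thread) {
            return;   /* no vCPU thread yet, nothing to kick */
        }
        qemu_cpu_kick(first_cpu);
    }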

Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-Id: <1427107689-6946-1-git-send-email-peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-25 13:38:07 +01:00
Alexey Kardashevskiy 89d5cbddee profiler: Reenable built-in profiler
2ed1ebcf6 "timer: replace time() with QEMU_CLOCK_HOST" broke compile
when configured with --enable-profiler. It turned out the profiler had been
broken for a while.

This does s/qemu_time/tcg_time/ as the profiler only works in TCG mode.
This also fixes the compile error.

This changes profile_getclock() to return nanoseconds rather than
CPU ticks as the "profile" HMP command prints seconds and there is no
platform-independent way to get ticks-per-second rate.
Since TCG is quite slow and get_clock() returns nanoseconds (fine
enough), this should not affect precision much.

This removes unused qemu_time_start and tlb_flush_time.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <1426478258-29961-1-git-send-email-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-18 12:07:34 +01:00
Borislav Petkov 0dc9daf0be memsave: Improve and disambiguate error message
When requesting a size which cannot be read, the error message shows
a different address which is misleading to the user and it looks like
something's wrong with the address parsing. This is because the input
@addr variable is incremented in the memory dumping loop:

(qemu) memsave 0xffffffff8418069c 0xb00000 mem
Invalid addr 0xffffffff849ffe9c specified

Fix that by saving the original address and size and use them in the
error message:

(qemu) memsave 0xffffffff8418069c 0xb00000 mem
Invalid addr 0xffffffff8418069c/size 11534336 specified

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-03-10 08:15:33 +03:00
Paolo Bonzini 21618b3e55 cpus: be more paranoid in avoiding deadlocks
For good measure, ensure that the following sequence:

   thread 1 calls qemu_mutex_lock_iothread
   thread 2 calls qemu_mutex_lock_iothread
   VCPU threads are created
   VCPU thread enters execution loop

results in the VCPU threads letting the other two threads run
and obeying iothread_requesting_mutex even if the VCPUs are
not halted.  To do this, check iothread_requesting_mutex
before execution starts.

Tested-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-02 10:57:07 +01:00
Paolo Bonzini 6b49809c59 cpus: fix deadlock and segfault in qemu_mutex_lock_iothread
When two threads (other than the low-priority TCG VCPU thread)
are competing for the iothread lock, a deadlock can happen.  This
is because iothread_requesting_mutex is set to false by the first
thread that gets the mutex, and then the VCPU thread might never
yield from the execution loop.  If iothread_requesting_mutex is
changed from a bool to a counter, the deadlock is fixed.

However, there is another bug in qemu_mutex_lock_iothread that
can be triggered by the new call_rcu thread.  The bug happens
if qemu_mutex_lock_iothread is called before the CPUs are
created.  In that case, first_cpu is NULL and the caller
segfaults in qemu_mutex_lock_iothread.  To fix this, just
do not do the kick if first_cpu is NULL.

Reported-by: Leon Alrae <leon.alrae@imgtec.com>
Reported-by: Andreas Gustafsson <gson@gson.org>
Tested-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-02 10:57:07 +01:00
Peter Maydell 73104fd399 - vhost-scsi: add bootindex property
- RCU: fix MemoryRegion lifetime issues in PCI; document the rules;
 conversion of AddressSpaceDispatch and RAMList
 - KVM: add kvm_exit reasons for aarch64
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJU4hugAAoJEL/70l94x66DZXEH/i72tOgvKZfAjfq2xmHXNEsr
 roCfTFIIjKK7feyW6YgwT5pgex6I5umFsO+uIyI/wbu8nDl/3NYEQBT4fR2cGfli
 GKeJOEu8kf+Zt8U+fbxyVQclbuU5S0Ujsg1fX4QXC4swB5fGLT2cRWJ5qd6hKBQs
 GflBuLa7h4eOzcTtOPpqRIwZ8mQE0uxv/hKq9kYLKHXJN2aWsiOls8KQ2CXj2yAl
 p6bMS5f0H0S/1hvQcQV9EazX7owlPIEet3AmSL1TC2sjJ8hrNGMBoFPtUys1uqjc
 B3CwuGi0JtWIduFYV9vZ/Ze4G7Y2iZlqc5vDxIl94d+iFmoHymDOi3mFUZ3H8XQ=
 =Lk9p
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

- vhost-scsi: add bootindex property
- RCU: fix MemoryRegion lifetime issues in PCI; document the rules;
conversion of AddressSpaceDispatch and RAMList
- KVM: add kvm_exit reasons for aarch64

# gpg: Signature made Mon Feb 16 16:32:32 2015 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (21 commits)
  Convert ram_list to RCU
  exec: convert ram_list to QLIST
  cosmetic changes preparing for the following patches
  exec: protect mru_block with RCU
  rcu: add g_free_rcu
  rcu: introduce RCU-enabled QLIST
  exec: RCUify AddressSpaceDispatch
  exec: make iotlb RCU-friendly
  exec: introduce cpu_reload_memory_map
  docs: clarify memory region lifecycle
  pci: split shpc_cleanup and shpc_free
  pcie: remove mmconfig memory leak and wrap mmconfig update with transaction
  memory: keep the owner of the AddressSpace alive until do_address_space_destroy
  rcu: run RCU callbacks under the BQL
  rcu: do not let RCU callbacks pile up indefinitely
  vhost-scsi: set the bootable value of channel/target/lun
  vhost-scsi: add a property for booting
  vhost-scsi: expose the TYPE_FW_PATH_PROVIDER interface
  vhost-scsi: add bootindex property
  qdev: support to get a device firmware path directly
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-02-24 13:58:18 +00:00
Paolo Bonzini 79e2b9aecc exec: RCUify AddressSpaceDispatch
Note that even after this patch, most callers of address_space_*
functions must still be under the big QEMU lock, otherwise the memory
region returned by address_space_translate can disappear as soon as
address_space_translate returns.  This will be fixed in the next part
of this series.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-16 17:30:19 +01:00
Fam Zheng efef88b3d9 qtest: Fix deadloop by running main loop AIO context's timers
qemu_clock_run_timers() only takes care of main_loop_tlg, we shouldn't
forget aio timer list groups.

Currently, qemu_clock_deadline_ns_all (a few lines above) counts all
the timer list groups of this clock type, including the AIO tlg, but we
don't fire them, so they are never cleared, which results in an endless loop.

For example, this function hangs when trying to drive a throttled block
request queue with qtest clock_step.

Signed-off-by: Fam Zheng <famz@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1421661103-29153-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-02-16 15:07:17 +00:00
Paolo Bonzini 2e91cc62f2 cpu-exec: simplify init_delay_params
With the introduction of QEMU_CLOCK_VIRTUAL_RT, the computation of
sc->diff_clk can be simplified nicely:

        qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
        qemu_clock_get_ns(QEMU_CLOCK_REALTIME) +
        cpu_get_clock_offset()

     =  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
        (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - cpu_get_clock_offset())

     =  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
        (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timers_state.cpu_clock_offset)

     =  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
        qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT)

Cc: Sebastian Tanase <sebastian.tanase@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-02 16:55:11 +01:00
Pavel Dovgalyuk 1979b908b6 cpus: consistently use QEMU_CLOCK_VIRTUAL_RT for icount_warp_rt timer
Fix mismatch between timer_new_ms and timer_mod.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-01-14 10:38:58 +01:00
Paolo Bonzini f9d8f66735 cpu: initialize cpu->exception_index on reset
This unbreaks linux-user (broken by e511b4d, cpu-exec: reset exception_index
correctly, 2014-11-26).

Reported-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 1418989994-17244-2-git-send-email-pbonzini@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-12-20 20:38:07 +00:00
Pavel Dovgalyuk bf2a7ddb0a cpus: make icount warp behave well with respect to stop/cont
This patch makes icount warp use the new QEMU_CLOCK_VIRTUAL_RT clock.
This way, icount's QEMU_CLOCK_VIRTUAL will never count time during which
the virtual machine is stopped.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk 2a62914bd8 icount: introduce cpu_get_icount_raw
Separate accessing the instruction counter from the compensation for
speed and halting that are introduced by qemu_icount_bias.  This
introduces new infrastructure used by the record/replay patches.
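
A simplified sketch of the split, assuming the counter and bias live in
timers_state (error handling omitted):

    /* raw executed-instruction count, with no bias or scaling applied */
    int64_t cpu_get_icount_raw(void)
    {
        int64_t icount = timers_state.qemu_icount;
        CPUState *cpu = current_cpu;

        if (cpu) {
            /* subtract instructions budgeted but not yet executed */
            icount -= (cpu->icount_decr.u16.low + cpu->icount_extra);
        }
        return icount;
    }

    /* the existing accessor layers the compensation on top */
    int64_t cpu_get_icount(void)
    {
        return timers_state.qemu_icount_bias +
               cpu_icount_to_ns(cpu_get_icount_raw());
    }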

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk 626cf8f4c6 icount: set can_do_io outside TB execution
This patch sets can_do_io function to allow reading icount
within cpu-exec, but outside TB execution.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk e511b4d783 cpu-exec: reset exception_index correctly
The exception index is reset at every entry into the cpu_exec() function.
This may cause exceptions to be missed while replaying them.
This patch moves the exception_index reset to the locations where the
exceptions are processed.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Marcelo Tosatti de9d61e83d Introduce cpu_clean_all_dirty
Introduce cpu_clean_all_dirty, to force subsequent cpu_synchronize_all_states
to read in-kernel register state.

Cc: qemu-stable@nongnu.org
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-09-16 11:04:09 +02:00
Pavel Dovgalyuk 4603ea0105 cpu: init vmstate for ticks and clock offset
The ticks and clock offset used by CPU timers have to be saved in vmstate.
But the vmstate for these fields was registered only in icount mode.
The missing registration breaks continuity when the vmstate is loaded.
This patch introduces a new initialization function which fixes this.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-09-11 12:20:32 +02:00
Bastian Koppelmann 48e06fe0ed target-tricore: Add target stubs and qom-cpu
Add TriCore target stubs, QOM CPU, and a maintainers entry

Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-id: 1409572800-4116-2-git-send-email-kbastian@mail.uni-paderborn.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-09-01 14:49:20 +01:00
Alexey Kardashevskiy 3dd7852f19 s390x: Migrate to new NMI interface
This implements an NMI interface for s390 and s390-ccw machines.

This removes the #ifdef s390 branch in qmp_inject_nmi so the new s390
nmi_monitor_handler() callback is used for NMI.

Since nmi_monitor_handler()-calling code is platform independent,
CPUState::cpu_index is used instead of S390CPU::env.cpu_num.
There should not be any change in behaviour as both @cpu_index and
@cpu_num are global CPU numbers.

Note that s390_cpu_restart() already takes care of the specified cpu,
so we don't need to schedule via async_run_on_cpu().

Since the only error s390_cpu_restart() can return is ENOSYS, convert
it to QERR_UNSUPPORTED.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-25 13:25:16 +02:00
Alexey Kardashevskiy 9cb805fd26 cpus: Define callback for QEMU "nmi" command
This introduces an NMI (Non Maskable Interrupt) interface with
a single nmi_monitor_handler() method. A machine or a device can
implement it. This searches for a QOM object with this interface
and, if it is implemented, calls it. The callback implements the action
required to cause a debug crash dump on in-kernel debugger invocation.
The callback reports failures through an Error ** parameter.

This adds a nmi_monitor_handle() helper which walks through
all objects to find the interface. The interface method is called
for all found instances.
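
From memory, the shape of the interface is roughly the following (exact
type and field names may differ slightly):

    #define TYPE_NMI "nmi"

    typedef struct NMIState NMIState;

    typedef struct NMIClass {
        InterfaceClass parent_class;

        /* invoked for every object implementing the interface */
        void (*nmi_monitor_handler)(NMIState *n, int cpu_index, Error **errp);
    } NMIClass;

    /* helper used by qmp_inject_nmi(): walks the QOM tree and calls the
     * handler on every implementation found */
    void nmi_monitor_handle(int cpu_index, Error **errp);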

This adds support for it in qmp_inject_nmi(). Since no architecture
supports it at the moment, there is no change in behaviour.

This changes inject-nmi command description for HMP and QMP.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-25 13:25:16 +02:00
Sebastian Tanase 27498bef35 monitor: Add drift info to 'info jit'
Show in 'info jit' the current delay between the host clock
and the guest clock. In addition, print the maximum advance
and delay of the guest compared to the host.

Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-07 15:09:48 +02:00
Sebastian Tanase c2aa5f8199 cpu-exec: Add sleeping algorithm
The goal is to sleep qemu whenever the guest clock
is in advance compared to the host clock (we use
the monotonic clocks). The amount of time to sleep
is calculated in the execution loop in cpu_exec.

At first, we tried to approximate at each for loop the real time elapsed
while searching for a TB (generating or retrieving from cache) and
executing it. We would then approximate the virtual time corresponding
to the number of virtual instructions executed. The difference between
these 2 values would allow us to know if the guest is in advance or delayed.
However, the function used for measuring the real time
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME)) proved to be very expensive.
We had an added overhead of 13% of the total run time.

Therefore, we modified the algorithm and only take into account the
difference between the 2 clocks at the beginning of the cpu_exec function.
During the for loop we try to reduce the advance of the guest only by
computing the virtual time elapsed and sleeping if necessary. The overhead
is thus reduced to 3%. Even though this method still has a noticeable
overhead, it no longer is a bottleneck in trying to achieve a better
guest frequency for which the guest clock is faster than the host one.

As for the alignment of the 2 clocks, with the first algorithm
the guest clock was oscillating between -1 and 1ms compared to the host clock.
Using the second algorithm we notice that the guest is 5ms behind the host, which
is still acceptable for our use case.

The tests were conducted using fio and stress. The host machine is an i5 CPU at
3.10GHz running Debian Jessie (kernel 3.12). The guest machine is an arm versatile-pb
built with buildroot.

Currently, on our test machine, the lowest icount we can achieve that is suitable for
aligning the 2 clocks is 6. However, we observe that the IO tests (using fio) are
slower than the cpu tests (using stress).

Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-06 17:53:07 +02:00
Sebastian Tanase a8bfac3708 icount: Add align option to icount
The align option is used for activating the align algorithm
in order to synchronise the host clock and the guest clock.

Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-06 17:53:07 +02:00
Sebastian Tanase 1ad9580bd7 icount: Add QemuOpts for icount
Make icount parameter use QemuOpts style options in order
to easily add other suboptions.

Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-06 17:53:07 +02:00
Sebastian Tanase 7146839505 icount: Fix virtual clock start value on ARM
When using the icount option on ARM, the virtual
clock starts counting at the realtime clock value but it
should start at 0.

The reason why the virtual clock starts at realtime clock
is because the first time we call qemu_clock_warp (which
calls icount_warp_rt) in tcg_exec_all, qemu_icount_bias
(which is part of the virtual time computation mechanism)
will increment by realtime - vm_clock_warp_start, with
vm_clock_warp_start being 0 (see icount_warp_rt in cpus.c).

By changing the value of vm_clock_warp_start from 0 to -1,
the first time we call qemu_clock_warp which calls
icount_warp_rt, we will return immediately because
icount_warp_rt first checks if vm_clock_warp_start is -1
and if it's the case it returns. Therefore, qemu_icount_bias
will first be incremented by the value of a virtual timer
deadline when the virtual cpu goes from active to inactive.

The virtual time will start at 0 and increment based
on the instruction counter when the vcpu is active or
the qemu_icount_bias value when inactive.

Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-06 17:53:07 +02:00
KONRAD Frederic 3f03131390 timer: add cpu_icount_to_ns function.
This adds cpu_icount_to_ns function which is needed for reverse execution.

It returns the time for a specific instruction.
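
Assuming the icount shift value is kept in icount_time_shift, the
conversion is essentially:

    /* each instruction accounts for 2^icount_time_shift virtual nanoseconds */
    int64_t cpu_icount_to_ns(int64_t icount)
    {
        return icount << icount_time_shift;
    }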

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-06 17:53:07 +02:00
KONRAD Frederic d09eae3726 migration: migrate icount fields.
This fixes a bug where qemu_icount and qemu_icount_bias are not migrated.
It adds a subsection "timer/icount" to vmstate_timers so icount is migrated only
when needed.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-06 17:53:07 +02:00
KONRAD Frederic c96778bb84 icount: put icount variables into TimerState.
This puts qemu_icount and qemu_icount_bias into TimerState structure to allow
them to be migrated.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-06 17:53:07 +02:00
Wenchao Xia a4e15de9a2 qapi event: convert STOP
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
2014-06-23 11:01:25 -04:00
Paolo Bonzini 74892d2468 vl: allow other threads to do qemu_system_vmstop_request
This patch protects vmstop_requested with a lock and introduces
qemu_system_vmstop_request_prepare.

Together with the new call to qemu_vmstop_requested in vm_start,
qemu_system_vmstop_request_prepare avoids a race where the VM could remain
stopped even though the iostatus of a block device has already been set
(for example).

qemu_system_vmstop_request_prepare however also lets the caller thread
delay observation of the state change until it has itself communicated
that change to the user.  This delay avoids any possibility of a wrong
reordering of the BLOCK_IO_ERROR event and the subsequent STOP event.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-06-23 16:36:13 +08:00
Wanlong Gao 96d0e26c23 NUMA: move numa related code to new file numa.c
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>

MST: comment tweaks
2014-06-19 18:44:18 +03:00
Sergey Fedorov c9299e2fe7 qtest: fix qtest_clock_warp() for no deadline case
Use dedicated qemu_soonest_timeout() instead of MIN().
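
The difference matters because a timeout of -1 means "no deadline", which a
plain MIN() would always pick. A sketch consistent with the helper's
behaviour:

    /* the unsigned comparison makes -1 ("no deadline") the largest value */
    static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
    {
        return ((uint64_t)timeout1 < (uint64_t)timeout2) ? timeout1 : timeout2;
    }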

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-17 16:07:37 +02:00
Juan Quintela 35d08458a9 savevm: Remove all the unneeded version_minimum_id_old (rest)
After Peter's previous patch, they are redundant.  This way we don't
assign them except when needed.  Once there, there were lots of cases
where the ".fields" indentation was wrong:

     .fields = (VMStateField []) {
and
     .fields =      (VMStateField []) {

Change all the combinations to:

     .fields = (VMStateField[]){

The biggest problem (apart from aesthetics) was that checkpatch complained
when we copy&pasted the code from one place to another.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-14 15:24:51 +02:00
Stefan Weil eb6282f230 misc: Use cpu_physical_memory_read and cpu_physical_memory_write
These functions don't need type casts (as cpu_physical_memory_rw does)
and also make the code more readable.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-04-27 13:04:18 +04:00
Andreas Färber 28ecfd7a62 cpu: Move icount_decr field from CPU_COMMON to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:46 +01:00
Andreas Färber efee734004 cpu: Move icount_extra field from CPU_COMMON to CPUState
Reset it.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:46 +01:00
Andreas Färber 99df7dce8a cpu: Move can_do_io field from CPU_COMMON to CPUState
Rename can_do_io() to cpu_can_do_io() and change argument to CPUState.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:46 +01:00
Andreas Färber 8c2e1b0093 cpu: Turn cpu_has_work() into a CPUClass hook
Default to false.

Tidy variable naming and inline cast uses while at it.

Tested-by: Jia Liu <proljc@gmail.com> (or32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:01:49 +01:00
Dr. David Alan Gilbert 4900116e6f Add a 'name' parameter to qemu_thread_create
If enabled, set the thread name at creation (on GNU systems with
  pthread_setname_np)
Fix up all the callers with a thread name
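
An illustrative call site after the change (the thread name below is made
up for the example):

    /* the name shows up in tools such as "top -H" or gdb's "info threads" */
    qemu_thread_create(cpu->thread, "CPU/TCG", qemu_tcg_cpu_thread_fn,
                       cpu, QEMU_THREAD_JOINABLE);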

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
2014-03-09 21:09:38 +02:00
Edgar E. Iglesias 09daed848c cpu: Add per-cpu address space
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2014-02-11 22:56:37 +10:00
Chen Fan 02e5148334 target-i386: Move apic_state field from CPUX86State to X86CPU
This motion is preparing for refactoring vCPU APIC subsequently.

Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-12-23 16:30:40 +01:00
Paolo Bonzini 5f3e31012e timers: fix stop/cont with -icount
Stop/cont commands are broken with -icount due to a deadlock.  The
real problem is that the computation of timers_state.cpu_ticks_offset
makes no sense with -icount enabled: we set it to an icount clock value
in cpu_disable_ticks, and subtract a TSC (or similar, whatever
cpu_get_real_ticks happens to return) value in cpu_enable_ticks.

The fix is simple.  timers_state.cpu_ticks_offset is only used
together with cpu_get_real_ticks, so we can use cpu_get_real_ticks
in cpu_disable_ticks.  There is no need to update cpu_ticks_prev
at the time cpu_disable_ticks is called; instead, we can do it
the next time cpu_get_ticks is called.

The change to cpu_disable_ticks is the important part of the patch.
The rest modifies the code to always check timers_state.cpu_ticks_prev,
even when the ticks are not advancing (i.e. the VM is stopped).  It also
makes a similar change to cpu_get_clock_locked, so that the code remains
similar for cpu_get_ticks and cpu_get_clock_locked.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1382977938-13844-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
2013-11-06 21:47:05 -08:00
Aneesh Kumar K.V 2f4d0f5990 target-ppc: Check for error on address translation in memsave command
When we translate the virtual address to a physical one, check for errors.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-10-25 23:25:48 +02:00
Paolo Bonzini 17a15f1b76 icount: make it thread-safe
This lets threads other than the I/O thread use vm_clock even in -icount mode.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-10-17 17:31:00 +02:00
Paolo Bonzini a3270e19cc icount: document (future) locking rules for icount
Reviewed-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-10-17 17:31:00 +02:00
Paolo Bonzini ce78d18ced icount: prepare the code for future races in calling qemu_clock_warp
Computing the deadline of all vm_clocks is somewhat expensive and calls
out to qemu-timer.c; two reasons not to do it in the seqlock's write-side
critical section.  This however opens the door for races in setting and
reading vm_clock_warp_start.

To plug them, we need to cover the case where a new deadline slips in
between the call to qemu_clock_deadline_ns_all and the actual modification
of the icount_warp_timer.  Restrict changes to vm_clock_warp_start and
the icount_warp_timer's expiration time, to only move them back (which
would simply cause an early wakeup).

If a vm_clock timer is cancelled while CPUs are idle, this might cause the
icount_warp_timer to fire unnecessarily.  This is not a problem, after it
fires the timer becomes inactive and the next call to timer_mod_anticipate
will be precise.

In addition to this, we must deactivate the icount_warp_timer _before_
checking whether CPUs are idle.  This way, if the "last" CPU becomes idle
during the call to timer_del we will still set up the icount_warp_timer.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-10-17 17:31:00 +02:00
Paolo Bonzini 8ed961d957 icount: reorganize icount_warp_rt
To prepare for future code changes, move the increment of qemu_icount_bias
outside the "if" statement.

Also, hoist outside the if the check for timers that expired due to the
"warping".  The check is redundant when !runstate_is_running(), but
doing it this way helps because the code that increments qemu_icount_bias
will be a critical section.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-10-17 17:31:00 +02:00
Paolo Bonzini 468cc7cf3b icount: use cpu_get_icount() directly
This will help later when we will have to place these calls in
a critical section, and thus call a version of cpu_get_icount()
that does not take the lock.

Reviewed-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-10-17 17:31:00 +02:00
Liu Ping Fan cb365646a9 timer: protect timers_state's clock with seqlock
QEMU_CLOCK_VIRTUAL may be read outside the BQL. This exposes its
foundation, i.e. cpu_clock_offset, to race conditions.
Use a private lock to protect it.

After this patch, reading QEMU_CLOCK_VIRTUAL is thread safe
unless use_icount is true, in which case the existing callers
still rely on the BQL.

Lock rule: private lock innermost, i.e. BQL -> "this lock"
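
A sketch of the resulting reader pattern, assuming the seqlock field is
named vm_clock_seqlock and the unlocked computation is factored into a
_locked helper:

    int64_t cpu_get_clock(void)
    {
        int64_t ti;
        unsigned start;

        do {
            start = seqlock_read_begin(&timers_state.vm_clock_seqlock);
            ti = cpu_get_clock_locked();
        } while (seqlock_read_retry(&timers_state.vm_clock_seqlock, start));

        return ti;
    }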

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-10-17 17:30:52 +02:00
Andreas Färber 38fcbd3f08 cpu: Replace qemu_for_each_cpu()
It was introduced to loop over CPUs from target-independent code, but
since commit 182735efaf target-independent
CPUState is used.

A loop can be considered more efficient than function calls in a loop,
and CPU_FOREACH() hides implementation details just as well, so use that
instead.

Suggested-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-09-03 12:25:55 +02:00
Andreas Färber bdc44640cb cpu: Use QTAILQ for CPU list
Introduce CPU_FOREACH(), CPU_FOREACH_SAFE() and CPU_NEXT() shorthand
macros.
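
For example, iterating over all CPUs now reads (loop body illustrative):

    CPUState *cpu;

    CPU_FOREACH(cpu) {
        cpu_synchronize_state(cpu);   /* any per-CPU operation */
    }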

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-09-03 12:25:55 +02:00
Eugene (jno) Dvurechenski 7f7f975295 s390: wire up nmi command to raise a RESTART interrupt on S390
There is the 'nmi' command that is used to trigger a guest dump via the kdump feature on x86.
s390 uses the RESTART interrupt to trigger kdump.
So, this patch provides a means to use the 'nmi' command on s390 to raise a RESTART interrupt.

The CPU to receive the RESTART interrupt is the "default" one.

There is an infrastructure to select the "default" CPU using 'cpu' command.
The 'info cpus' command can be used to see which one is the "default".

In order to wire up the RESTART to 'nmi' command we had to:
1. implement the kvm_s390_cpu_restart function by exporting the existing code
2. implement s390_cpu_restart function as kvm-aware wrapper
3. modify the qmp_inject_nmi function to enable (for s390) the scan for
   "default" CPU and call s390_cpu_restart for it;
4. fix some messages.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Alexander Graf <agraf@suse.de>
2013-08-30 14:16:48 +02:00
Alex Bligh 40daca54cd aio / timers: Rearrange timer.h & make legacy functions call non-legacy
Rearrange timer.h so it is in order by function type.

Make legacy functions call non-legacy functions rather than vice-versa.

Convert cpus.c to use new API.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-08-22 19:14:24 +02:00
Alex Bligh ac70aafc28 aio / timers: Use all timerlists in icount warp calculations
Notify all timerlists derived from vm_clock in icount warp
calculations.

When calculating timer delay based on vm_clock deadline, use
all timerlists.

For compatibility, maintain an apparent bug where when using
icount, if no vm_clock timer was set, qemu_clock_deadline
would return INT32_MAX and always set an icount clock expiry
about 2 seconds ahead.

NB: thread safety - when different timerlists sit on different
threads, this will need some locking.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-08-22 19:10:28 +02:00
Tiejun Chen 321bc0b2b2 cpus: Use cpu_is_stopped() efficiently
It makes more sense and will make things simpler later.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-08-16 18:44:33 +02:00
Liu Ping Fan d9cd4007d5 timer: make timers_state static
Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2013-07-27 11:22:53 +04:00
Andreas Färber f17ec444c3 exec: Change cpu_memory_rw_debug() argument to CPUState
Propagate X86CPU in kvmvapic for simplicity.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-23 02:41:33 +02:00
Andreas Färber ed2803da58 cpu: Move singlestep_enabled field from CPU_COMMON to CPUState
Prepares for changing cpu_single_step() argument to CPUState.

Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-23 02:41:32 +02:00
Kevin Wolf 594a45ce64 cpus: Let vm_stop[_force_state]() always flush block devices
Even if the VM is already stopped, we cannot assume that all data has
already been successfully flushed to disk. The flush during the previous
vm_stop() could have failed.

Run bdrv_flush_all() unconditionally so that we get an error each time
if the block device isn't really flushed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2013-07-19 12:29:21 +08:00