Adds an EXTERNAL attribute definition to the QEMU timer subsystem and assigns
it to the virtual clock timers used in slirp (ICMP IPv6) and ui (key queue).
Virtual clock processing in record/replay mode can use this attribute instead
of a separate clock type.
Fixes: 87f4fe7653
Fixes: 775a412bf8
Fixes: 9888091404
Signed-off-by: Artem Pisarenko <artem.k.pisarenko@gmail.com>
Message-Id: <e771f96ab94e86b54b9a783c974f2af3009fe5d1.1539764043.git.artem.k.pisarenko@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Attributes are simple flags associated with individual timers for their
whole lifetime. They are intended to mark individual timers for special
handling when they fire.
The timer new/init function family is updated and refactored (a new
'attribute' argument is added, timer_list is replaced with
timer_list_group+type combinations, and comments are improved to avoid
duplicated information). The existing AIO interface is also extended with
attribute-enabled variants of the functions that create/initialize timers.
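As a sketch of the resulting interface (using the EXTERNAL attribute from the
companion patch above as the example bit; the callback and opaque pointer are
placeholders):

    /* Create a virtual-clock timer carrying an attribute; a NULL timer
     * list group selects the default main_loop_tlg. */
    QEMUTimer *t = timer_new_full(NULL, QEMU_CLOCK_VIRTUAL, SCALE_MS,
                                  QEMU_TIMER_ATTR_EXTERNAL, cb, opaque);
    timer_mod(t, qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + 10);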
Signed-off-by: Artem Pisarenko <artem.k.pisarenko@gmail.com>
Message-Id: <f47b81dbce734e9806f9516eba8ca588e6321c2f.1539764043.git.artem.k.pisarenko@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
That patch series introduced a new virtual clock type for use in external
subsystems. It breaks the desired behavior in non-record/replay usage
scenarios because of a small change to the existing behavior: processing of
virtual timers belonging to the new clock type is deferred to the main
loop, which makes these timers asynchronous with the vCPU thread and,
in icount mode, with the whole guest execution. This breaks the expected
determinism of non-record/replay icount emulation, where these
"external subsystems" are isolated from the host (i.e. they are
external only to the guest core, not to the entire emulation environment).
Example for slirp (the "user" backend for network devices): the user runs
QEMU in icount mode with "-rtc clock=vm", without any external communication
interfaces but with "-netdev user,restrict=on". Deterministic execution is
expected, because the network services are emulated inside QEMU and isolated
from the host; there is no reason for a reply from the DHCP server to arrive
with a different delay from run to run, or anything like that.
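A hypothetical invocation of that scenario (machine type, NIC model and image
name are placeholders):

    qemu-system-i386 -icount shift=7,sleep=off -rtc clock=vm \
        -netdev user,id=net0,restrict=on -device e1000,netdev=net0 \
        -drive file=guest.qcow2,if=ide -nographic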
The next patches reimplement the same changes in a better way.
This reverts commit 87f4fe7653.
This reverts commit 775a412bf8.
This reverts commit 9888091404.
Signed-off-by: Artem Pisarenko <artem.k.pisarenko@gmail.com>
Message-Id: <18b1e7c8f155fe26976f91be06bde98eef6f8751.1539764043.git.artem.k.pisarenko@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When we implemented per-vCPU TCG contexts, we forgot to also
distribute the tcg_time counter, which has remained as a global
accessed without any serialization, leading to potentially missed
counts.
Fix it by distributing the field over the TCG contexts, embedding
it into TCGProfile with a field called "cpu_exec_time", which is more
descriptive than "tcg_time". Add a function to query this value
directly, and for completeness, fill in the field in
tcg_profile_snapshot, even though its callers do not use it.
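The query is a sum over the per-vCPU contexts, along these lines (sketch
modeled on the description above):

    /* Sum the cpu_exec_time counters of all TCG contexts. */
    int64_t tcg_cpu_exec_time(void)
    {
        unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
        unsigned int i;
        int64_t ret = 0;

        for (i = 0; i < n_ctxs; i++) {
            const TCGContext *s = atomic_read(&tcg_ctxs[i]);
            const TCGProfile *prof = &s->prof;

            ret += atomic_read(&prof->cpu_exec_time);
        }
        return ret;
    }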
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181010144853.13005-5-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The slirp and VNC modules use the virtual clock for processing some events
that are related to the guest execution speed.
However, virtual clock-related events are considered deterministic and
are recorded/replayed by the icount mechanism, whereas slirp and VNC lie
outside the recorded guest core (which includes the CPU and peripherals).
Therefore slirp and VNC are external to the guest, but should still run at
guest speed.
This patch introduces a new virtual clock which external subsystems can use
for running timers that are synchronized with the guest.
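As a hedged illustration, assuming the new clock type is exposed as
QEMU_CLOCK_VIRTUAL_EXT (the enumerator name is not spelled out above) and
using a placeholder callback, an external subsystem would simply create its
timers on that clock:

    /* Runs at guest speed, but outside the record/replay stream. */
    QEMUTimer *t = timer_new_ms(QEMU_CLOCK_VIRTUAL_EXT, timeout_cb, opaque);
    timer_mod(t, qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL_EXT) + 1000);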
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <20180912082002.3228.82417.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch adds saving/restoring of the host clock field 'last'.
The field is used in the host clock calculation, and therefore the clock
may become incorrect when a restored vmstate is used.
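A minimal sketch of migrating such a field; the VMStateDescription below
(section name, versions, containing struct) is purely illustrative and not
the exact description added by this patch:

    /* Illustrative only: persist the host clock's 'last' value. */
    static const VMStateDescription vmstate_host_clock_last = {
        .name = "timer/host_clock_last",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_INT64(last, QEMUClock),
            VMSTATE_END_OF_LIST()
        }
    };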
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20180227095226.1060.50975.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
We dropped support for ia64 host CPUs in the 2.11 release (removing
the TCG backend for it, and advertising the support as being
completely removed in the changelog). However there are a few bits
and pieces of code still floating about. Remove those, too.
We can drop the check in configure for "ia64 or hppa host?"
entirely, because we don't support hppa hosts either any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1516897189-11035-1-git-send-email-peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested and confirmed that the Debian stretch i386 qcow2 image works on a
Raspberry Pi 2.
Fixes: LP#: 893208 <https://bugs.launchpad.net/qemu/+bug/893208/>
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20170418191817.10430-1-bobby.prani@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
By holding off updates to timers_state.qemu_icount we can run into
trouble when a non-vCPU thread needs to know the time. This helper
ensures we atomically update timers_state.qemu_icount based on what
has been executed so far.
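The helper is along these lines (sketch; the real code also has a fallback
for hosts without 64-bit atomics):

    /* Fold the instructions executed so far into timers_state.qemu_icount,
     * atomically, so non-vCPU threads see a consistent count. */
    void cpu_update_icount(CPUState *cpu)
    {
        int64_t executed = cpu_get_icount_executed(cpu);

        cpu->icount_budget -= executed;
        atomic_set__nocheck(&timers_state.qemu_icount,
                            atomic_read__nocheck(&timers_state.qemu_icount)
                            + executed);
    }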
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
icount has become much slower after tcg_cpu_exec has stopped
using the BQL. There is also a latent bug that is masked by
the slowness.
The slowness happens because every occurrence of a QEMU_CLOCK_VIRTUAL
timer now has to wake up the I/O thread and wait for it. The rendez-vous
is mediated by the BQL QemuMutex:
- handle_icount_deadline wakes up the I/O thread with BQL taken
- the I/O thread wakes up and waits on the BQL
- the VCPU thread releases the BQL a little later
- the I/O thread raises an interrupt, which calls qemu_cpu_kick
- the VCPU thread notices the interrupt, takes the BQL to
process it and waits on it
All this back and forth is extremely expensive, causing a 6 to 8-fold
slowdown when icount is turned on.
One may think that the issue is that the VCPU thread is too dependent
on the BQL, but then the latent bug comes in. I first tried removing
the BQL completely from the x86 cpu_exec, only to see everything break.
The only way to fix it (and make everything slow again) was to add a dummy
BQL lock/unlock pair.
This is because in -icount mode you really have to process the events
before the CPU restarts executing the next instruction. Therefore, this
series moves the processing of QEMU_CLOCK_VIRTUAL timers straight in
the vCPU thread when running in icount mode.
The required changes include:
- make the timer notification callback wake up TCG's single vCPU thread
when run from another thread. By using async_run_on_cpu, the callback
can override all_cpu_threads_idle() when the CPU is halted (sketched
after this list)
- move handle_icount_deadline after qemu_tcg_wait_io_event, so that
the timer notification callback is invoked after the dummy work item
wakes up the vCPU thread
- make handle_icount_deadline run the timers instead of just waking the
I/O thread.
- stop processing the timers in the main loop
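A sketch of the notification callback consistent with the first item
(simplified; the dummy work item is what defeats all_cpu_threads_idle()):

    static void do_nothing(CPUState *cpu, run_on_cpu_data unused)
    {
    }

    void qemu_timer_notify_cb(void *opaque, QEMUClockType type)
    {
        if (!use_icount || type != QEMU_CLOCK_VIRTUAL) {
            qemu_notify_event();
            return;
        }
        if (qemu_in_vcpu_thread()) {
            /* A vCPU is running: kick it back to the exec loop so it
             * recalculates its icount deadline immediately. */
            qemu_cpu_kick(current_cpu);
        } else if (first_cpu) {
            /* Wake the halted single TCG vCPU thread; the dummy work
             * item overrides all_cpu_threads_idle(). */
            async_run_on_cpu(first_cpu, do_nothing, RUN_ON_CPU_NULL);
        }
    }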
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no change for now, because the callback just invokes
qemu_notify_event.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This dependency is the wrong way, and we will need util/qemu-timer.h from
sysemu/cpus.h in the next patch.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Similarly to allocation, do it from an inline function. This allows tests
to use only the headers for timer allocation/free.
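The resulting inline is essentially:

    /* Freeing a timer needs no timer internals, only g_free(). */
    static inline void timer_free(QEMUTimer *ts)
    {
        g_free(ts);
    }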
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
It is the timer that expires, not the clock.
Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Message-Id: <1480566640-27264-1-git-send-email-baiyaowei@cmss.chinamobile.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The comments are outdated. The patch makes the following changes:
1. tense corrections.
2. all clock values are returned in nanoseconds, so they have the same
precision.
3. the virtual clock doesn't use CPU cycles.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Message-Id: <1469790338-28990-2-git-send-email-caoj.fnst@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This patch replaces get_ticks_per_sec() calls with the macro
NANOSECONDS_PER_SECOND. As this leaves no callers, get_ticks_per_sec()
is then removed. The replacement improves the readability and
understandability of the code.
For example,
    timer_mod(fdctrl->result_timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + (get_ticks_per_sec() / 50));
NANOSECONDS_PER_SECOND makes it obvious that qemu_clock_get_ns
matches the unit of the expression on the right side of the plus.
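After the mechanical substitution the same call reads:

    timer_mod(fdctrl->result_timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
              (NANOSECONDS_PER_SECOND / 50));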
Signed-off-by: Rutuja Shah <rutu.shah.26@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Re-run scripts/clean-includes to apply the previous commit's
corrections and updates. Besides redundant qemu/typedefs.h, this only
finds a redundant config-host.h include in ui/egl-helpers.c. No idea
how that escaped the previous runs.
Some manual whitespace trimming around dropped includes squashed in.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The qemu_clock_warp function is called to update the virtual clock when the
CPU is sleeping. This function includes a replay checkpoint to make execution
deterministic in icount mode.
The record/replay module flushes the async event queue at checkpoints.
Some of the events (e.g., block device operations) include interaction
with hardware. E.g., the APIC polled by block devices sets one of the IRQ
flags, and which flag is set depends on the currently executing thread
(CPU or iothread). Therefore in replay mode we have to process checkpoints
in the same thread in which they were recorded.
The qemu_clock_warp function (and its checkpoint) may be called from
different threads. This patch decouples the two different execution cases of
this function: the call from the iothread when the CPU is sleeping, and the
call from the CPU thread to update the virtual clock.
The first task is performed by the qemu_start_warp_timer function, which
arms the warp timer for the moment of the nearest pending virtual timer.
The second function, qemu_account_warp_timer, is called from the CPU thread
before guest code executes. It advances the virtual clock by the length
of the period during which the CPU was sleeping.
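An outline of the split, assuming the warp timer and the existing
icount_warp_rt() accounting helper stay in place (details such as the
record/replay checkpoint and runstate checks are omitted):

    /* Iothread side: while the vCPUs sleep, arm the warp timer at the
     * nearest pending QEMU_CLOCK_VIRTUAL deadline. */
    void qemu_start_warp_timer(void)
    {
        int64_t deadline;

        if (!use_icount || !all_cpu_threads_idle()) {
            return;
        }
        deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
        if (deadline > 0) {
            timer_mod_anticipate(icount_warp_timer,
                                 qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT)
                                 + deadline);
        }
    }

    /* vCPU thread side: before executing guest code, account the time
     * spent sleeping by advancing the virtual clock. */
    void qemu_account_warp_timer(void)
    {
        if (!use_icount) {
            return;
        }
        timer_del(icount_warp_timer);
        icount_warp_rt();
    }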
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20160310115609.4812.44986.stgit@PASHA-ISP>
[Update docs. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This should help clarify the purpose of the function that returns
the host system's CPU cycle count.
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
ppc portion
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Move the muldiv64() function from qemu-common.h to host-utils.h.
This puts it together with all the other arithmetic functions
where we provide a version with __int128_t and a fallback
without, and allows headers which need muldiv64() to avoid
including qemu-common.h.
We don't include host-utils from qemu-common.h, to avoid dragging
more things into qemu-common.h than it already has; in practice
everywhere that needs muldiv64() can get it via qemu/timer.h.
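For reference, the __int128_t variant is essentially a one-liner (the
fallback splits the multiplication into 32-bit halves):

    /* With a 128-bit intermediate, no manual overflow handling is needed. */
    static inline uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
    {
        return (__int128_t)a * b / c;
    }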
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Commit e0cf11f31c ("timer: Use a single
definition of NSEC_PER_SEC for the whole codebase") renamed
NANOSECONDS_PER_SECOND to NSEC_PER_SEC.
On Mac OS X there is a <dispatch/time.h> system header which also
defines NSEC_PER_SEC. This causes compiler warnings.
Let's use the old name instead. It's longer but it doesn't clash.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1436364609-7929-1-git-send-email-stefanha@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We want to be able to reuse this define by making it common to
multiple QEMU modules.
This also makes it an integer since there's no need for it to be a
float.
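The shared definition is simply an integer constant (sketch):

    #define NANOSECONDS_PER_SECOND 1000000000LL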
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 6375912849da2ab561046dd013684535ccecca44.1434113783.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Commit 691a0c9c introduced a mechanism by which QEMU_CLOCK_HOST can
notify other parts of the emulator when the host clock has jumped
backward. This is used to avoid stalling timers that were scheduled
based on the host clock.
However, if the host clock jumps forward, then timers that were
scheduled based on the host clock may fire rapidly and cause other
problems. For example, the mc146818rtc periodic timer will block
execution of the VM and consume host CPU while firing every interrupt
for the time period that was skipped by the host clock.
To correct that problem, this commit fires the reset notification if the
host clock jumps forward by more than a hard-coded limit. The limit is
currently set to a value of 60 seconds, which should be small enough to
prevent excessive timer loops, but large enough to avoid frequent resets
in idle VMs.
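A sketch of the check described above; the helper and wrapper names below
are illustrative, not necessarily the ones used in the patch:

    /* Small enough to prevent excessive timer loops after a forward jump,
     * large enough to avoid frequent resets in idle VMs. */
    static int64_t get_max_clock_jump(void)
    {
        return 60 * NANOSECONDS_PER_SECOND;
    }

    /* QEMU_CLOCK_HOST read path: notify if the clock went backward or
     * jumped forward by more than the limit. */
    static int64_t host_clock_get_ns(QEMUClock *clock)
    {
        int64_t now = get_clock_realtime();
        int64_t last = clock->last;

        clock->last = now;
        if (now < last || now > last + get_max_clock_jump()) {
            notifier_list_notify(&clock->reset_notifiers, &now);
        }
        return now;
    }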
Signed-off-by: Paul Donohue <qemu-git@PaulSD.com>
Message-Id: <20150612140845.GD2749@TopQuark.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2ed1ebcf6 "timer: replace time() with QEMU_CLOCK_HOST" broke compile
when configured with --enable-profiler. Turned out the profiler has been
broken for a while.
This does s/qemu_time/tcg_time/ as the profiler only works in a TCG mode.
This also fixes the compile error.
This changes profile_getclock() to return nanoseconds rather than
CPU ticks as the "profile" HMP command prints seconds and there is no
platform-independent way to get ticks-per-second rate.
Since TCG is quite slow and get_clock() returns nanoseconds (fine
enough), this should not affect precision much.
This removes unused qemu_time_start and tlb_flush_time.
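With nanoseconds as the unit, profile_getclock() reduces to the generic
clock helper (sketch):

    static inline int64_t profile_getclock(void)
    {
        return get_clock();
    }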
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <1426478258-29961-1-git-send-email-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With the introduction of QEMU_CLOCK_VIRTUAL_RT, the computation of
sc->diff_clk can be simplified nicely:
      qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL)
          - qemu_clock_get_ns(QEMU_CLOCK_REALTIME)
          + cpu_get_clock_offset()

    = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL)
          - (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - cpu_get_clock_offset())

    = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL)
          - (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timers_state.cpu_clock_offset)

    = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL)
          - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT)
Cc: Sebastian Tanase <sebastian.tanase@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In some cases, a timer was set to NULL so that we could check if it is
initialized. Use the timer_list field instead, and add a timer_deinit
function that NULLs it.
It then makes sense that timer_del be a no-op (instead of a crasher) on
such a de-initialized timer. It avoids the need to poke at the timerlist
field to check if the timers are initialized.
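The de-initialization helper is essentially:

    /* Detach the timer from any timer list; timer_del() on such a timer
     * is now a no-op instead of a crash. */
    void timer_deinit(QEMUTimer *ts)
    {
        assert(ts->expire_time == -1);
        ts->timer_list = NULL;
    }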
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch makes icount warp use the new QEMU_CLOCK_VIRTUAL_RT clock.
This way, icount's QEMU_CLOCK_VIRTUAL will never count time during which
the virtual machine is stopped.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch introduces the new QEMU_CLOCK_VIRTUAL_RT clock, which
should be used for icount warping. In the next patch, it
will be used to avoid a huge icount warp when a virtual
machine is stopped for a long time.
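In terms of existing clock sources, the new clock reads the CPU clock, which
stops advancing while the virtual machine is stopped (the wrapper name below
is illustrative):

    static int64_t qemu_clock_virtual_rt_ns(void)
    {
        /* QEMU_CLOCK_VIRTUAL_RT advances with the CPU clock only while
         * the VM is running. */
        return cpu_get_clock();
    }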
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Separate accessing the instruction counter from the compensation for
speed and halting that is introduced by qemu_icount_bias. This
introduces new infrastructure used by the record/replay patches.
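A simplified sketch of the separation (variable placement is simplified;
the real accessor also sanity-checks that the CPU can do I/O):

    /* Raw instruction count, without the speed/halt compensation. */
    int64_t cpu_get_icount_raw(void)
    {
        int64_t icount = qemu_icount;
        CPUState *cpu = current_cpu;

        if (cpu) {
            icount -= (cpu->icount_decr.u16.low + cpu->icount_extra);
        }
        return icount;
    }

    /* Virtual time in ns: the raw count scaled, plus qemu_icount_bias. */
    static int64_t cpu_get_icount_locked(void)
    {
        return qemu_icount_bias + cpu_icount_to_ns(cpu_get_icount_raw());
    }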
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The goal is to make QEMU sleep whenever the guest clock
is ahead of the host clock (we use
the monotonic clocks). The amount of time to sleep
is calculated in the execution loop in cpu_exec.
At first, we tried to approximate, at each iteration of the for loop, the
real time elapsed while searching for a TB (generating or retrieving it from
the cache) and executing it. We would then approximate the virtual time
corresponding to the number of virtual instructions executed. The difference
between these two values would tell us whether the guest is ahead of or
behind the host.
However, the function used for measuring the real time
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME)) proved to be very expensive.
We had an added overhead of 13% of the total run time.
Therefore, we modified the algorithm and only take into account the
difference between the two clocks at the beginning of the cpu_exec function.
During the for loop we try to reduce the advance of the guest only by
computing the virtual time elapsed and sleeping if necessary. The overhead
is thus reduced to 3%. Even though this method still has a noticeable
overhead, it is no longer a bottleneck in trying to achieve a better
guest frequency for which the guest clock is faster than the host one.
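A heavily simplified sketch of that per-iteration step (the SyncClocks
fields are taken as read; the real code uses a small threshold and handles
interrupted sleeps):

    /* Track how far the guest clock has run ahead of the host clock and
     * sleep the difference off. */
    static void align_clocks(SyncClocks *sc, const CPUState *cpu)
    {
        int64_t instr = cpu->icount_extra + cpu->icount_decr.u16.low;

        /* Virtual time of the instructions executed since the last call
         * (the counter counts down, hence last - current). */
        sc->diff_clk += cpu_icount_to_ns(sc->last_cpu_icount - instr);
        sc->last_cpu_icount = instr;

        if (sc->diff_clk > 0) {
            /* Guest ahead of the host: sleep until they line up. */
            struct timespec delay = {
                .tv_sec  = sc->diff_clk / 1000000000LL,
                .tv_nsec = sc->diff_clk % 1000000000LL,
            };
            nanosleep(&delay, NULL);
            sc->diff_clk = 0;
        }
    }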
As for the alignment of the two clocks, with the first algorithm
the guest clock was oscillating between -1 and 1 ms compared to the host
clock. Using the second algorithm we notice that the guest is 5 ms behind
the host, which is still acceptable for our use case.
The tests were conducted using fio and stress. The host machine is an i5 CPU
at 3.10GHz running Debian Jessie (kernel 3.12). The guest machine is an arm
versatile-pb built with buildroot.
Currently, on our test machine, the lowest icount we can achieve that is suitable for
aligning the 2 clocks is 6. However, we observe that the IO tests (using fio) are
slower than the cpu tests (using stress).
Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This adds the cpu_icount_to_ns function, which is needed for reverse
execution. It returns the virtual time corresponding to a specific
instruction count.
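The conversion is just the icount-to-nanoseconds scaling (sketch;
icount_time_shift is the shift already used by the icount machinery):

    /* Convert an instruction count into virtual nanoseconds. */
    int64_t cpu_icount_to_ns(int64_t icount)
    {
        return icount << icount_time_shift;
    }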
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These let a user anticipate the deadline of a timer, atomically with
other sites that call the function. This helps avoid complicated
lock hierarchies.
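Assuming these are the timer_mod_anticipate()/timer_mod_anticipate_ns()
helpers (the message above does not spell the names out), a typical usage
sketch is to only ever move a deadline earlier, without extra locking:

    /* Moves the timer earlier if 'when' precedes its current deadline;
     * otherwise a no-op. Safe against concurrent callers. */
    timer_mod_anticipate(icount_warp_timer, when);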
Reviewed-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
After disabling the QEMUClock, we should make sure that no QEMUTimers
are still in flight. To implement that with light overhead, we resort
to QemuEvent: the caller doing the disabling waits on the QemuEvent of
each timerlist.
Note that qemu_clock_enable(foo, false) can _not_ be called from a timer's
callback.
Also, the callers of qemu_clock_enable() should be protected by the BQL.
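A sketch of the disable path consistent with that description:

    void qemu_clock_enable(QEMUClockType type, bool enabled)
    {
        QEMUClock *clock = qemu_clock_ptr(type);
        QEMUTimerList *tl;
        bool old = clock->enabled;

        clock->enabled = enabled;
        if (enabled && !old) {
            qemu_clock_notify(type);
        } else if (!enabled && old) {
            /* Wait until no callback of this clock is still running. */
            QLIST_FOREACH(tl, &clock->timerlists, list) {
                qemu_event_wait(&tl->timers_done_ev);
            }
        }
    }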
Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QEMU_CLOCK_VIRTUAL may be read outside the BQL. This exposes its
foundation, i.e. cpu_clock_offset, to a race condition, so use a
private lock to protect it.
After this patch, reading QEMU_CLOCK_VIRTUAL is thread-safe
unless use_icount is true, in which case the existing callers
still rely on the BQL.
Lock rule: the private lock is innermost, i.e. BQL -> "this lock".
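A hedged sketch only; the lock and helper names below are illustrative,
not taken from the patch:

    static QemuMutex clock_lock;   /* the private lock, innermost */

    int64_t cpu_get_clock(void)
    {
        int64_t ti;

        qemu_mutex_lock(&clock_lock);
        ti = cpu_get_clock_locked();   /* reads cpu_clock_offset safely */
        qemu_mutex_unlock(&clock_lock);
        return ti;
    }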
Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Introduce QEMUTimerList->active_timers_lock to protect the linked list
of active timers. This allows qemu_timer_mod_ns() to be called from any
thread.
Note that vm_clock is not thread-safe and its use of
qemu_clock_has_timers() works fine today but is also not thread-safe.
The purpose of this patch is to eventually let device models set or
cancel timers from a vcpu thread without holding the global mutex.
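A hedged sketch of the data structure change (fields other than the lock
and the list itself are elided):

    struct QEMUTimerList {
        QEMUClock *clock;
        QemuMutex active_timers_lock;   /* protects active_timers */
        QEMUTimer *active_timers;
        /* ... notify callback, link in the clock's list of timerlists ... */
    };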
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Remove the legacy interface from include/qemu/timer.h.
Ensure struct QEMUClock is not exposed at all.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that we have timerlistgroups implemented and main_loop_tlg, we
no longer need the concept of a default timer list associated
with each clock. Remove it and simplify the initialisation of
clocks and timer lists.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Rearrange timer.h so it is in order by function type.
Make legacy functions call non-legacy functions rather than vice-versa.
Convert cpus.c to use new API.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add utility functions qemu_clock_get_ms and qemu_clock_get_us
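These are thin wrappers over the nanosecond accessor (sketch):

    /* Millisecond and microsecond views of the same clock value. */
    static inline int64_t qemu_clock_get_ms(QEMUClockType type)
    {
        return qemu_clock_get_ns(type) / SCALE_MS;
    }

    static inline int64_t qemu_clock_get_us(QEMUClockType type)
    {
        return qemu_clock_get_ns(type) / SCALE_US;
    }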
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Remove qemu_clock_deadline and qemu_timerlist_deadline now that we are using
the ns functions throughout.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>