Now that all *_fill() and *_poll() functions use GPollFD we no longer
need rfds/wfds/xfds or pollfds_from_select()/pollfds_to_select().
From now on everything uses GPollFD.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1361356113-11049-8-git-send-email-stefanha@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Convert iohandler_select_fill() and iohandler_select_poll() to use
GPollFD instead of rfds/wfds/xfds.
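For illustration, the fill side ends up shaped roughly like this (a sketch
only, with names and details simplified; the poll side then reads back
pfd.revents at pollfds_idx and invokes the read/write handlers):

    static void qemu_iohandler_fill(GArray *pollfds)
    {
        IOHandlerRecord *ioh;

        QLIST_FOREACH(ioh, &io_handlers, next) {
            GPollFD pfd = { .fd = ioh->fd };

            if (ioh->fd_read) {
                pfd.events |= G_IO_IN | G_IO_HUP | G_IO_ERR;
            }
            if (ioh->fd_write) {
                pfd.events |= G_IO_OUT | G_IO_ERR;
            }
            ioh->pollfds_idx = pollfds->len;
            g_array_append_val(pollfds, pfd);
        }
    }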
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1361356113-11049-7-git-send-email-stefanha@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Slirp uses rfds/wfds/xfds more extensively than other QEMU components.
The rarely-used out-of-band TCP data feature is also in use. That means we
need the full table of select(2) to g_poll(3) events:
rfds -> G_IO_IN | G_IO_HUP | G_IO_ERR
wfds -> G_IO_OUT | G_IO_ERR
xfds -> G_IO_PRI
I came up with this table by looking at Linux fs/select.c which maps
select(2) to poll(2) internally.
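For illustration, applying that table when building the GPollFD array looks
roughly like this (a condensed sketch; the per-socket conditions that decide
which sets a socket would have gone into are omitted):

    GPollFD pfd = { .fd = so->s };

    pfd.events |= G_IO_IN | G_IO_HUP | G_IO_ERR;  /* was FD_SET(so->s, rfds) */
    pfd.events |= G_IO_OUT | G_IO_ERR;            /* was FD_SET(so->s, wfds) */
    pfd.events |= G_IO_PRI;                       /* was FD_SET(so->s, xfds) */

    g_array_append_val(pollfds, pfd);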
Another detail to watch out for is the set of global variables that
reference rfds/wfds/xfds during slirp_select_poll(). sofcantrcvmore() and
sofcantsendmore() use these globals to clear fd_set bits. When
sofcantrcvmore() is called, the wfds bit is cleared so that the write
handler will no longer be run for this iteration of the event loop.
This actually seems buggy to me since TCP connections can be half-closed
and we'd still want to handle data in half-duplex fashion. I think the
real intention is to avoid running the read/write handler when the
socket has been fully closed. This is indicated with the SS_NOFDREF
state bit so we now check for it before invoking the TCP write handler.
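Roughly, the write dispatch now looks like this (a sketch; revents comes
from the GPollFD filled in by g_poll()):

    /* skip fully closed sockets instead of relying on cleared fd_set bits */
    if (!(so->so_state & SS_NOFDREF) &&
        (revents & (G_IO_OUT | G_IO_ERR))) {
        /* ... existing TCP write handling ... */
    }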
Note that UDP/ICMP code paths don't care because they are
connectionless.
Note that slirp/ has a lot of tabs and sometimes mixed tabs with spaces.
I followed the style of the surrounding code.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1361356113-11049-6-git-send-email-stefanha@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Convert glib file descriptor polling from rfds/wfds/xfds to GPollFD.
The Windows code still needs poll_fds[] and n_poll_fds but they can now
become local variables.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1361356113-11049-4-git-send-email-stefanha@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Use g_poll(3) instead of select(2). Well, this is kind of a cheat.
It's true that we're now using g_poll(3) on POSIX hosts but the *_fill()
and *_poll() functions are still using rfds/wfds/xfds.
We've set the scene to start converting *_fill() and *_poll() functions
step-by-step until no more rfds/wfds/xfds users remain. Then we'll drop
the temporary gpollfds_from_select() and gpollfds_to_select() functions
and be left with native g_poll(3).
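To give an idea of the temporary glue, gpollfds_from_select() essentially
walks the fd_sets and builds GPollFDs, while the reverse helper applies the
same table to revents. A sketch (rfds/wfds/xfds and nfds are the existing
globals, and gpollfds stands for the global GArray of GPollFDs):

    static void gpollfds_from_select(void)
    {
        int fd;

        for (fd = 0; fd <= nfds; fd++) {
            int events = 0;

            if (FD_ISSET(fd, &rfds)) {
                events |= G_IO_IN | G_IO_HUP | G_IO_ERR;
            }
            if (FD_ISSET(fd, &wfds)) {
                events |= G_IO_OUT | G_IO_ERR;
            }
            if (FD_ISSET(fd, &xfds)) {
                events |= G_IO_PRI;
            }
            if (events) {
                GPollFD pfd = { .fd = fd, .events = events };
                g_array_append_val(gpollfds, pfd);
            }
        }
    }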
On Windows things are a little crazy: convert from rfds/wfds/xfds to
GPollFDs, back to rfds/wfds/xfds, call select(2), rfds/wfds/xfds back to
GPollFDs, and finally back to rfds/wfds/xfds again. This is only
temporary and keeps the Windows build working through the following
patches. We'll drop this excessive conversion later and be left with a
single GPollFDs -> select(2) -> GPollFDs sequence that allows Windows to
use select(2) while the rest of QEMU only knows about GPollFD.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1361356113-11049-3-git-send-email-stefanha@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The current implementation of os_host_main_loop_wait() on Windows
returns 1 only when a g_poll() event occurs because the return value of
select() is overridden. This is wrong as we may skip a socket event, as
shown in this example:
1. select() returns 0
2. g_poll() returns 1 (socket event occurs)
3. os_host_main_loop_wait() returns 1
4. qemu_iohandler_poll() sees no socket event because select() returned
before the event occurred
5. select() returns 1
6. g_poll() returns 0 (g_poll overrides select's return value)
7. os_host_main_loop_wait() returns 0
8. qemu_iohandler_poll() doesn't check for socket events because the
return value of os_host_main_loop_wait() is zero.
9. goto 5
This patch uses one variable for each of these return values, so we don't
miss a select() event anymore.
Also move the call to select() after g_poll(); this will improve latency
as we don't have to go through two os_host_main_loop_wait() calls to
detect a socket event.
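In outline, the fixed function keeps both results and combines them only at
the end (a sketch; the fd set preparation and timeout handling are omitted):

    g_poll_ret = g_poll(poll_fds, n_poll_fds, poll_timeout);

    /* ... convert GPollFD results, then poll the sockets ... */
    select_ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv0);

    /* report an event if either source saw one */
    return select_ret || g_poll_ret;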
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Commit ac4119c (chardev: Use timer instead of bottom-half to postpone
open event, 2012-10-12) moved the alarm timer initialization to an earlier
point but failed to consider that it depends on qemu_init_main_loop.
Later, commit 1c53786 (vl: init main loop earlier, 2012-10-30) fixed
this, but broke -daemonize in two different ways. First, timers need to
be reinitialized after forking. Second, the global mutex was being held
by the parent, and thus dropped after forking.
The first is now fixed using pthread_atfork. For the second part,
make sure that the global mutex is not taken before daemonization,
and similarly delay qemu_thread_self.
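A minimal sketch of the first part (the handler name is illustrative):
register a child-side atfork handler so the forked child gets working timers
again:

    static void reinit_timers_in_child(void)
    {
        /* child side: rearm the alarm timer after fork */
        init_timer_alarm();
    }

    /* registered once during timer setup */
    pthread_atfork(NULL, NULL, reinit_timers_in_child);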
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
init_timer_alarm was being called twice. This is not needed.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This lets us remove the hooks for the main loop in async.c.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Win32 implementation will only accept EventNotifiers, thus a few
drivers are disabled under Windows. EventNotifiers are a good match
for the GSource implementation, too, because the Win32 port of glib
allows their HANDLEs to be placed in a GPollFD.
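For example, an EventNotifier's Win32 HANDLE can be handed to glib roughly
like this (a sketch; handle stands for the notifier's HANDLE, and on w32
GPollFD.fd is wide enough to store it):

    GPollFD pfd = {
        .fd = (gintptr)handle,   /* the EventNotifier's HANDLE */
        .events = G_IO_IN,
    };

    g_source_add_poll(source, &pfd);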
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will be used when polling the GSource attached to an AioContext.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With this patch, I/O handlers (including event notifier handlers) can be
attached to a single AioContext.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Start introducing AioContext, which will let us remove globals from
aio.c/async.c, and introduce multiple I/O threads.
The bottom half functions now take an additional AioContext argument.
A bottom half is created with a specific AioContext that remains the
same throughout its lifetime. qemu_bh_new is just a wrapper that
uses a global context.
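The wrapper is essentially just this (a sketch, assuming the global context
is called qemu_aio_context):

    QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
    {
        return aio_bh_new(qemu_aio_context, cb, opaque);
    }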
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The timeout argument was unused up to now, but it can be used to reduce
poll_timeout when poll_timeout is infinite (negative value) or larger
than timeout.
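The reduction amounts to something like this (a sketch; timeout is in
milliseconds and poll_timeout is glib's gint, where a negative value means
infinite):

    if (poll_timeout < 0 || timeout < (uint32_t)poll_timeout) {
        poll_timeout = timeout;
    }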
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This patch fixes a build regression with MinGW which was introduced by
commit 7c7db75576.
The 3rd argument of g_main_context_query must point to a gint value.
Using a pointer to a uint32_t value is wrong.
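In other words, the caller needs a gint local for the timeout (a sketch;
variable names mirror main-loop.c, and ARRAY_SIZE is QEMU's array-length
macro):

    gint poll_timeout;

    n_poll_fds = g_main_context_query(context, max_priority,
                                      &poll_timeout, poll_fds,
                                      ARRAY_SIZE(poll_fds));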
The timeout argument of function os_host_main_loop_wait was never
used for w32 / w64.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
- remove qemu_calculate_timeout;
- explicitly size timeout to uint32_t;
- introduce slirp_update_timeout;
- pass NULL as timeout argument to select in case timeout is the maximum
value;
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Paul Brook <paul@codesourcery.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Casting a pointer to an integer must use (DWORD_PTR) instead of (DWORD).
This also matches the definition of 'fd' (gint for w32, gint64 for w64).
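For example (a sketch; h stands for the HANDLE being stored in a GPollFD):

    pfd.fd = (DWORD_PTR)h;   /* pointer-sized, fits gint on w32, gint64 on w64 */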
Signed-off-by: Stefan Weil <sw@weilnetz.de>
* stefanha/trivial-patches:
make: fix clean rule by removing build file in qom/
configure: Link qga against UST tracing related libraries
configure: Link QEMU against 'liburcu-bp'
main-loop: make qemu_event_handle static
block/curl: Replace usleep by g_usleep
qtest: Add missing GCC_FMT_ATTR
w32: Undefine error constants before their redefinition
configure: fix mingw32 libs_qga typo
On w32, glib implements g_poll using WaitForMultipleObjects
or MsgWaitForMultipleObjects. This means that we can simplify
our code by switching to g_poll, and at the same time prepare for
adding back glib sources.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Right now, the main loop is not interrupted when data arrives on a
socket. To fix this, register each socket to interrupt the main loop
with WSAEventSelect. This does not replace select, it only communicates
a change in socket state that requires a select call.
Since the interrupt fires only once per recv call, or only once
after a send call returns EWOULDBLOCK, we can activate it on all events
unconditionally. If QEMU is momentarily uninterested in some condition,
the main loop will not busy wait. Instead, it may get one extra wakeup,
but then it will ignore the condition until progress occurs and/or
qemu_set_fd_handler is called to set a callback. At this point the
condition will be tested via select and the callback will be invoked
even if it is still disabled on the event.
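Registration boils down to one call per socket (a sketch; event_handle
stands for the event object that wakes up the main loop):

    WSAEventSelect(s, event_handle,
                   FD_READ | FD_ACCEPT | FD_CLOSE |
                   FD_CONNECT | FD_WRITE | FD_OOB);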
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Using select with glib pollfds is wrong under w32. Restrict
the code to the POSIX case.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The timeval-based timeout is not needed until we actually invoke select,
so compute it only then. Also group the two calls that modify the
timeout, glib_select_fill and os_host_main_loop_wait.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In some cases initializing the alarm timers can lead to non-negligible
overhead from programs that link against qemu-tool.o. At least,
setting a max-resolution WinMM alarm timer via mm_start_timer() (the
current default for Windows) can increase the "tick rate" on Windows
OSs and affect frequency scaling, and in the case of tools that run
in guest OSs such as qemu-ga, the impact can be fairly dramatic
(+20%/20% user/sys time on a core 2 processor was observed from an idle
Windows XP guest).
This patch doesn't address the issue directly (not sure what a good
solution would be for Windows, or in what other situations it might be
noticeable), but it at least limits the scope of the issue to programs
that "opt-in" to using the main-loop.c functions by only enabling alarm
timers when qemu_init_main_loop() is called, which is already required
to make use of those facilities, so existing users shouldn't be
affected.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The __attribute__((constructor)) init_main_loop() automatically gets
called if qemu-tool.o is linked in. On win32, this leads to
a qemu_notify_event() call which attempts to SetEvent() on a HANDLE that
won't be initialized until qemu_init_main_loop() is manually called,
breaking qemu-tool.o programs on Windows at runtime.
This patch checks for an initialized event handle before attempting to
set it, which is analogous to how we deal with an uninitialized
io_thread_fd in the posix implementation.
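The check amounts to this (a sketch, assuming the Win32 event handle is the
qemu_event_handle global used by main-loop.c):

    void qemu_notify_event(void)
    {
        if (!qemu_event_handle) {
            return;   /* qemu_init_main_loop() has not run yet */
        }
        SetEvent(qemu_event_handle);
    }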
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
stdint.h defines the POSIX data types and is needed
for MinGW-w64 (and maybe other hosts).
v2: Instead of adding stdint.h directly, qemu-common.h is now
included and duplicate include statements were removed.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>