Some NICs support loopback mode and implement it by calling
nc->info->receive() directly, which bypasses the reentrancy check
done in qemu_net_queue_send().
Unfortunately we can't use qemu_net_queue_send() here, since for
loopback there is no peer acting as sender, so this patch introduces
qemu_receive_packet(), which can be used to implement loopback mode
for a NIC while keeping the check.
NICs that support loopback mode will be converted to this helper.
This is intended to address CVE-2021-3416.
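As a hedged sketch (the device state variable s and the buffer below
are hypothetical; only the helper itself is added by this patch), a
NIC's loopback path would be converted roughly like this:

    NetClientState *nc = qemu_get_queue(s->nic);

    /* before: calls the receive handler directly, bypassing the check */
    nc->info->receive(nc, buf, size);

    /* after: go through the new helper, which keeps the reentrancy check */
    qemu_receive_packet(nc, buf, size);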
Cc: Prasad J Pandit <ppandit@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
The memory API allows DMA into a NIC's MMIO area. This means the NIC's
RX routine must be reentrant. Instead of auditing every NIC, we can
simply detect the reentrancy and return early. queue->delivering is
set and cleared by qemu_net_queue_deliver() so that the other queue
helpers know whether delivery is ongoing (i.e. the NIC's receive is
being called). We can check it and return early in
qemu_net_queue_flush() to forbid reentrant RX.
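A minimal sketch of the check described above (the function body is
abbreviated; only the early return matters here):

    bool qemu_net_queue_flush(NetQueue *queue)
    {
        if (queue->delivering) {
            return false;   /* RX is already in progress, do not reenter it */
        }
        ...
    }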
Signed-off-by: Jason Wang <jasowang@redhat.com>
Description copied from Linux kernel commit from Gustavo A. R. Silva
(see [3]):
--v-- description start --v--
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to
declare variable-length types such as these ones is a flexible
array member [1], introduced in C99:
    struct foo {
        int stuff;
        struct boo array[];
    };
By making use of the mechanism above, we will get a compiler
warning in case the flexible array does not occur last in the
structure, which will help us prevent some kind of undefined
behavior bugs from being inadvertently introduced [2] to the
Linux codebase from now on.
--^-- description end --^--
Do similar housekeeping in the QEMU codebase (which uses
C99 since commit 7be41675f7).
All these instances of code were found with the help of the
following Coccinelle script:
@@
identifier s, m, a;
type t, T;
@@
struct s {
    ...
    t m;
-   T a[0];
+   T a[];
};

@@
identifier s, m, a;
type t, T;
@@
struct s {
    ...
    t m;
-   T a[0];
+   T a[];
} QEMU_PACKED;
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=76497732932f
[3] https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git/commit/?id=17642a2fbd2c1
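For illustration only (struct foo/boo are the hypothetical types from
the quoted description, not QEMU code), a flexible array member is
used by allocating the base struct plus the trailing array storage:

    struct boo { int val; };           /* hypothetical element type */

    struct foo {
        int stuff;
        struct boo array[];            /* flexible array member, must be last */
    };

    /* room for the header plus n trailing elements */
    struct foo *f = g_malloc(sizeof(*f) + n * sizeof(struct boo));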
Inspired-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Most list head structs need not be given a name. In most cases the
name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV
or reverse iteration, but this does not apply to lists of other kinds,
and even for QTAILQ in practice this is only rarely needed. In addition,
we will soon reimplement those macros completely so that they do not
need a name for the head struct. So clean up everything, not giving a
name except in the rare case where it is necessary.
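A hedged example of the resulting cleanup, assuming the packet list in
net/queue.c as the instance:

    /* before: the head struct is given a name that is never used */
    QTAILQ_HEAD(packets, NetPacket) packets;

    /* after: anonymous head struct */
    QTAILQ_HEAD(, NetPacket) packets;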
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454089805-5470-11-git-send-email-peter.maydell@linaro.org
This will be used later by the buffer filter implementation to
queue packets.
Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
net/queue.c has logic to send/queue/flush packets, but the
qemu_deliver_packet_iov() call is hardcoded. Abstract this
function so that we can use our own deliver function in netfilter.
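A hedged sketch of the resulting interface (the typedef and constructor
shapes below are assumptions about how the abstraction looks, not quoted
from the patch):

    typedef ssize_t (NetQueueDeliverFunc)(NetClientState *sender,
                                          unsigned flags,
                                          const struct iovec *iov,
                                          int iovcnt,
                                          void *opaque);

    /* callers pass their own deliver function instead of relying on a
     * hardcoded qemu_deliver_packet_iov() call */
    NetQueue *qemu_new_net_queue(NetQueueDeliverFunc *deliver, void *opaque);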
Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
qemu_deliver_packet_iov() already has the compat delivery path, so
we can drop qemu_deliver_packet().
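A hedged illustration of why the iov variant subsumes the old helper
(the wrapper below is illustrative, not taken from the patch): a single
linear buffer is just a one-element iovec.

    struct iovec iov = {
        .iov_base = (void *)buf,
        .iov_len  = size,
    };
    qemu_deliver_packet_iov(sender, flags, &iov, 1, opaque);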
Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
This commit only touches allocations with size arguments of the form
sizeof(T).
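A hedged example of the conversion (the NetQueue allocation is an
assumed instance, shown with the zeroing variants):

    /* before */
    NetQueue *queue = g_malloc0(sizeof(NetQueue));

    /* after: type-checked, and element-count overflow is caught */
    NetQueue *queue = g_new0(NetQueue, 1);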
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Devices rely on packet callbacks eventually running, but we violate
this rule whenever we purge the queue.
To fix this, invoke the callbacks on all packets when purging.
Set the length to 0 so that callers can detect that this happened
and re-queue if necessary.
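A minimal sketch of the purge path after this change (assuming the
NetPacket/NetQueue layout in net/queue.c; the sent_cb(sender, 0) call
is the key point):

    QTAILQ_FOREACH_SAFE(packet, &queue->packets, entry, next) {
        if (packet->sender == from) {
            QTAILQ_REMOVE(&queue->packets, packet, entry);
            queue->nq_count--;
            if (packet->sent_cb) {
                packet->sent_cb(packet->sender, 0);   /* length 0: purged */
            }
            g_free(packet);
        }
    }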
Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In the current implementation of QEMU, running without a network
backend will cause the queue to grow without bound while the guest
is transmitting traffic.
This patch fixes the problem by implementing a bounded-size NetQueue,
used with an arbitrary limit of 10000 packets, and by dropping packets
when the queue is full _and_ the sender does not pass a callback.
The second condition makes sure that we never drop packets that
contain a callback (which would be tricky, because the producer
expects the callback to be run when all previous packets have been
consumed; so we cannot run it when the packet is dropped).
If the documentation is correct, producers that submit a callback
should stop sending once their packet is queued, so there is no real
risk that the queue exceeds the maximum size by a large amount.
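A hedged, abbreviated sketch of the drop condition (assuming the queue
tracks a count and a maximum, as in net/queue.c):

    static void qemu_net_queue_append(NetQueue *queue, ...)
    {
        if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
            return;   /* drop: queue full and the sender passed no callback */
        }
        ...
        queue->nq_count++;
        QTAILQ_INSERT_TAIL(&queue->packets, packet, entry);
    }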
Signed-off-by: Luigi Rizzo <rizzo@iet.unipi.it>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Move public headers to include/net, and leave private headers in net/.
Put the virtio headers in include/net/tap.h, removing the multiple copies
that existed. Leave include/net/tap.h as the interface for NICs, and
net/tap_int.h as the interface for OS-specific parts of the tap backend.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Net send functions have a return value where 0 means the packet has not
been sent and will be queued. A non-zero value means the packet was
sent or an error caused the packet to be dropped.
This patch fixes two instances where packets are queued but we return
their size. This causes callers to believe the packets were sent. When
the caller uses the async send interface this creates a real problem
because the callback will be invoked for a packet that the caller
believed to be already sent. This bug can cause double-frees in the
caller.
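A hedged sketch of the corrected behaviour in the queued case
(abbreviated; the early return of 0 is the point):

    if (!qemu_can_send_packet(sender)) {
        qemu_net_queue_append(queue, sender, flags, buf, size, NULL);
        return 0;   /* queued, not sent: do not report the size */
    }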
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
virtio-net has code to flush the queue and notify the iothread
whenever new receive buffers are added by the guest. That is
fine, and indeed we need to do the same in all other drivers.
However, notifying the iothread should be the job of the network
subsystem. And while we are at it we can add a little smartness:
if some of the queued packets could not be delivered, there is no
need to notify the iothread.
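A hedged sketch of the net core side (assuming the per-client queue
field is called incoming_queue and that qemu_net_queue_flush() now
reports whether everything was delivered):

    void qemu_flush_queued_packets(NetClientState *nc)
    {
        nc->receive_disabled = 0;
        if (qemu_net_queue_flush(nc->incoming_queue)) {
            /* the queue was fully drained: tell the iothread to repoll */
            qemu_notify_event();
        }
    }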
Reported-by: Luigi Rizzo <rizzo@iet.unipi.it>
Cc: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Jan Kiszka <jan.kiszka@siemens.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
The vlan feature is no longer part of net core. Rename VLANClientState
to NetClientState because net clients are not explicitly associated with
a vlan at all, instead they have a peer net client to which they are
connected.
This patch is a mechanical search-and-replace except for a few
whitespace fixups where changing VLANClientState to NetClientState
misaligned whitespace.
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Now that we disable any receiver whose queue is full, we do not require
senders to handle a zero return by supplying a sent callback.
This is a second step towards allowing can_receive() handlers to return
true even if no buffer space is available.
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>