Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging

virtio,pc features, fixes

New features:
    vhost-user multiqueue support
    virtio-ccw virtio 1 support

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Fri 25 Sep 2015 07:40:35 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>"

* remotes/mst/tags/for_upstream:
  MAINTAINERS: add more devices to the PCI section
  MAINTAINERS: add more devices to the PC section
  vhost-user: add a new message to disable/enable a specific virt queue.
  vhost-user: add multiple queue support
  vhost: introduce vhost_backend_get_vq_index method
  vhost-user: add VHOST_USER_GET_QUEUE_NUM message
  vhost: rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE
  vhost-user: add protocol feature negotiation
  vhost-user: use VHOST_USER_XXX macro for switch statement
  virtio-ccw: enable virtio-1
  virtio-ccw: feature bits > 31 handling
  virtio-ccw: support ring size changes
  virtio: ring sizes vs. reset
  pc: Introduce pc-*-2.5 machine classes
  q35: Move options common to all classes to pc_i440fx_machine_options()
  q35: Move options common to all classes to pc_q35_machine_options()
  virtio-net: unbreak self announcement and guest offloads after migration
  virtio: right size for virtio_queue_get_avail_size

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit cdf9818242 by Peter Maydell, 2015-09-25 16:40:05 +01:00
23 changed files with 593 additions and 144 deletions


@ -601,6 +601,25 @@ F: hw/acpi/piix4.c
F: hw/acpi/ich9.c
F: include/hw/acpi/ich9.h
F: include/hw/acpi/piix.h
F: hw/misc/sga.c
PC Chipset
M: Michael S. Tsirkin <mst@redhat.com>
M: Paolo Bonzini <pbonzini@redhat.com>
S: Support
F: hw/char/debugcon.c
F: hw/char/parallel.c
F: hw/char/serial*
F: hw/dma/i8257*
F: hw/i2c/pm_smbus.c
F: hw/intc/apic*
F: hw/intc/ioapic*
F: hw/intc/i8259*
F: hw/misc/debugexit.c
F: hw/misc/pc-testdev.c
F: hw/timer/hpet*
F: hw/timer/i8254*
F: hw/timer/mc146818rtc*
Xtensa Machines
@ -653,7 +672,9 @@ PCI
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
F: include/hw/pci/*
F: hw/misc/pci-testdev.c
F: hw/pci/*
F: hw/pci-bridge/*
ACPI/SMBIOS
M: Michael S. Tsirkin <mst@redhat.com>


@ -113,6 +113,7 @@ message replies. Most of the requests don't require replies. Here is a list of
the ones that do:
* VHOST_GET_FEATURES
* VHOST_GET_PROTOCOL_FEATURES
* VHOST_GET_VRING_BASE
There are several messages that the master sends with file descriptors passed
@ -127,6 +128,30 @@ in the ancillary data:
If Master is unable to send the full message or receives a wrong reply it will
close the connection. An optional reconnection mechanism can be implemented.
Any protocol extensions are gated by protocol feature bits,
which allows full backwards compatibility on both master
and slave.
As older slaves don't support negotiating protocol features,
a feature bit was dedicated for this purpose:
#define VHOST_USER_F_PROTOCOL_FEATURES 30
Multiple queue support
----------------------
Multiple queue support is treated as a protocol extension, hence the slave has
to implement protocol features first. The multiple queues feature is supported
only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set:
#define VHOST_USER_PROTOCOL_F_MQ 0
The maximum number of queues the slave supports can be queried with message
VHOST_USER_GET_QUEUE_NUM. The master should stop when the number of
requested queues is bigger than that.
As all queues share one connection, the master uses a unique index for each
queue in the sent message to identify a specific queue. One queue pair
is enabled initially. More queues are enabled dynamically by sending
message VHOST_USER_SET_VRING_ENABLE.
Message types
-------------
@ -138,6 +163,8 @@ Message types
Slave payload: u64
Get from the underlying vhost implementation the features bitmask.
Feature bit VHOST_USER_F_PROTOCOL_FEATURES signals slave support for
VHOST_USER_GET_PROTOCOL_FEATURES and VHOST_USER_SET_PROTOCOL_FEATURES.
* VHOST_USER_SET_FEATURES
@ -146,6 +173,33 @@ Message types
Master payload: u64
Enable features in the underlying vhost implementation using a bitmask.
Feature bit VHOST_USER_F_PROTOCOL_FEATURES signals slave support for
VHOST_USER_GET_PROTOCOL_FEATURES and VHOST_USER_SET_PROTOCOL_FEATURES.
* VHOST_USER_GET_PROTOCOL_FEATURES
Id: 15
Equivalent ioctl: VHOST_GET_FEATURES
Master payload: N/A
Slave payload: u64
Get the protocol feature bitmask from the underlying vhost implementation.
Only legal if feature bit VHOST_USER_F_PROTOCOL_FEATURES is present in
VHOST_USER_GET_FEATURES.
Note: a slave that reported VHOST_USER_F_PROTOCOL_FEATURES must support
this message even before VHOST_USER_SET_FEATURES has been called.
* VHOST_USER_SET_PROTOCOL_FEATURES
Id: 16
Equivalent ioctl: VHOST_SET_FEATURES
Master payload: u64
Enable protocol features in the underlying vhost implementation.
Only legal if feature bit VHOST_USER_F_PROTOCOL_FEATURES is present in
VHOST_USER_GET_FEATURES.
Note: a slave that reported VHOST_USER_F_PROTOCOL_FEATURES must support
this message even before VHOST_USER_SET_FEATURES has been called.
* VHOST_USER_SET_OWNER
@ -157,10 +211,10 @@ Message types
as an owner of the session. This can be used on the Slave as a
"session start" flag.
* VHOST_USER_RESET_OWNER
* VHOST_USER_RESET_DEVICE
Id: 4
Equivalent ioctl: VHOST_RESET_OWNER
Equivalent ioctl: VHOST_RESET_DEVICE
Master payload: N/A
Issued when a new connection is about to be closed. The Master will no
@ -264,3 +318,22 @@ Message types
Bits (0-7) of the payload contain the vring index. Bit 8 is the
invalid FD flag. This flag is set when there is no file descriptor
in the ancillary data.
* VHOST_USER_GET_QUEUE_NUM
Id: 17
Equivalent ioctl: N/A
Master payload: N/A
Slave payload: u64
Query how many queues the backend supports. This request should be
sent only when VHOST_USER_PROTOCOL_F_MQ is set in the protocol
features queried with VHOST_USER_GET_PROTOCOL_FEATURES.
* VHOST_USER_SET_VRING_ENABLE
Id: 18
Equivalent ioctl: N/A
Master payload: vring state description
Signal the slave to enable or disable the corresponding vring.
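
The handshake this section describes (query the virtio features, look for
VHOST_USER_F_PROTOCOL_FEATURES, exchange protocol features, then ask for the
queue count) can be sketched from the master side as follows. This is only an
illustration written against the message layout defined earlier in this spec
(a 12-byte header of request, flags and size, followed by the payload); the
socket path argument and helper names are invented for the sketch, and error
handling is intentionally minimal.

    /* vhost-user master-side negotiation sketch (not QEMU code). */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    enum {                      /* request ids from the list above */
        VU_GET_FEATURES = 1,
        VU_GET_PROTOCOL_FEATURES = 15,
        VU_SET_PROTOCOL_FEATURES = 16,
        VU_GET_QUEUE_NUM = 17,
    };
    #define VHOST_USER_F_PROTOCOL_FEATURES 30
    #define VHOST_USER_PROTOCOL_F_MQ       0
    #define VHOST_USER_VERSION             0x1

    struct vu_msg {             /* 12-byte header + u64 payload */
        uint32_t request, flags, size;
        uint64_t u64;
    } __attribute__((packed));

    static uint64_t vu_query(int fd, uint32_t req)
    {
        struct vu_msg m = { .request = req, .flags = VHOST_USER_VERSION };
        write(fd, &m, 12);              /* GET_* requests carry no payload */
        read(fd, &m, sizeof(m));        /* reply carries a u64 */
        return m.u64;
    }

    static void vu_set_u64(int fd, uint32_t req, uint64_t val)
    {
        struct vu_msg m = { .request = req, .flags = VHOST_USER_VERSION,
                            .size = sizeof(uint64_t), .u64 = val };
        write(fd, &m, 12 + sizeof(uint64_t));
    }

    int main(int argc, char **argv)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        uint64_t features, proto, max_queues = 1;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <socket path>\n", argv[0]);
            return 1;
        }
        strncpy(sa.sun_path, argv[1], sizeof(sa.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            perror("connect");
            return 1;
        }
        features = vu_query(fd, VU_GET_FEATURES);
        if (features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES)) {
            proto = vu_query(fd, VU_GET_PROTOCOL_FEATURES);
            proto &= 1ULL << VHOST_USER_PROTOCOL_F_MQ;  /* keep what we understand */
            vu_set_u64(fd, VU_SET_PROTOCOL_FEATURES, proto);
            if (proto & (1ULL << VHOST_USER_PROTOCOL_F_MQ)) {
                max_queues = vu_query(fd, VU_GET_QUEUE_NUM);
            }
        }
        printf("backend supports up to %llu queues\n",
               (unsigned long long)max_queues);
        close(fd);
        return 0;
    }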


@ -460,17 +460,29 @@ static void pc_i440fx_machine_options(MachineClass *m)
m->family = "pc_piix";
m->desc = "Standard PC (i440FX + PIIX, 1996)";
m->hot_add_cpu = pc_hot_add_cpu;
m->default_machine_opts = "firmware=bios-256k.bin";
m->default_display = "std";
}
static void pc_i440fx_2_5_machine_options(MachineClass *m)
{
pc_i440fx_machine_options(m);
m->alias = "pc";
m->is_default = 1;
}
DEFINE_I440FX_MACHINE(v2_5, "pc-i440fx-2.5", NULL,
pc_i440fx_2_5_machine_options);
static void pc_i440fx_2_4_machine_options(MachineClass *m)
{
PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
pc_i440fx_machine_options(m);
pc_i440fx_2_5_machine_options(m);
m->alias = NULL;
m->is_default = 0;
pcmc->broken_reserved_end = true;
m->default_machine_opts = "firmware=bios-256k.bin";
m->default_display = "std";
m->alias = "pc";
m->is_default = 1;
SET_MACHINE_COMPAT(m, PC_COMPAT_2_4);
}
DEFINE_I440FX_MACHINE(v2_4, "pc-i440fx-2.4", NULL,


@ -364,20 +364,30 @@ static void pc_q35_machine_options(MachineClass *m)
m->desc = "Standard PC (Q35 + ICH9, 2009)";
m->hot_add_cpu = pc_hot_add_cpu;
m->units_per_default_bus = 1;
}
static void pc_q35_2_4_machine_options(MachineClass *m)
{
PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
pc_q35_machine_options(m);
pcmc->broken_reserved_end = true;
m->default_machine_opts = "firmware=bios-256k.bin";
m->default_display = "std";
m->no_floppy = 1;
m->no_tco = 0;
}
static void pc_q35_2_5_machine_options(MachineClass *m)
{
pc_q35_machine_options(m);
m->alias = "q35";
}
DEFINE_Q35_MACHINE(v2_5, "pc-q35-2.5", NULL,
pc_q35_2_5_machine_options);
static void pc_q35_2_4_machine_options(MachineClass *m)
{
PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
pc_q35_2_5_machine_options(m);
m->alias = NULL;
pcmc->broken_reserved_end = true;
SET_MACHINE_COMPAT(m, PC_COMPAT_2_4);
}
DEFINE_Q35_MACHINE(v2_4, "pc-q35-2.4", NULL,
pc_q35_2_4_machine_options);


@ -122,6 +122,11 @@ void vhost_net_ack_features(struct vhost_net *net, uint64_t features)
vhost_ack_features(&net->dev, vhost_net_get_feature_bits(net), features);
}
uint64_t vhost_net_get_max_queues(VHostNetState *net)
{
return net->dev.max_queues;
}
static int vhost_net_get_fd(NetClientState *backend)
{
switch (backend->info->type) {
@ -143,6 +148,11 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
fprintf(stderr, "vhost-net requires net backend to be setup\n");
goto fail;
}
net->nc = options->net_backend;
net->dev.max_queues = 1;
net->dev.nvqs = 2;
net->dev.vqs = net->vqs;
if (backend_kernel) {
r = vhost_net_get_fd(options->net_backend);
@ -152,14 +162,15 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
net->dev.backend_features = qemu_has_vnet_hdr(options->net_backend)
? 0 : (1ULL << VHOST_NET_F_VIRTIO_NET_HDR);
net->backend = r;
net->dev.protocol_features = 0;
} else {
net->dev.backend_features = 0;
net->dev.protocol_features = 0;
net->backend = -1;
}
net->nc = options->net_backend;
net->dev.nvqs = 2;
net->dev.vqs = net->vqs;
/* vhost-user needs vq_index to initiate a specific queue pair */
net->dev.vq_index = net->nc->queue_index * net->dev.nvqs;
}
r = vhost_dev_init(&net->dev, options->opaque,
options->backend_type);
@ -285,7 +296,7 @@ static void vhost_net_stop_one(struct vhost_net *net,
} else if (net->nc->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
for (file.index = 0; file.index < net->dev.nvqs; ++file.index) {
const VhostOps *vhost_ops = net->dev.vhost_ops;
int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_OWNER,
int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_DEVICE,
NULL);
assert(r >= 0);
}
@ -411,7 +422,25 @@ VHostNetState *get_vhost_net(NetClientState *nc)
return vhost_net;
}
int vhost_set_vring_enable(NetClientState *nc, int enable)
{
VHostNetState *net = get_vhost_net(nc);
const VhostOps *vhost_ops = net->dev.vhost_ops;
if (vhost_ops->vhost_backend_set_vring_enable) {
return vhost_ops->vhost_backend_set_vring_enable(&net->dev, enable);
}
return 0;
}
#else
uint64_t vhost_net_get_max_queues(VHostNetState *net)
{
return 1;
}
struct vhost_net *vhost_net_init(VhostNetOptions *options)
{
error_report("vhost-net support is not compiled in");
@ -456,4 +485,9 @@ VHostNetState *get_vhost_net(NetClientState *nc)
{
return 0;
}
int vhost_set_vring_enable(NetClientState *nc, int enable)
{
return 0;
}
#endif


@ -406,6 +406,10 @@ static int peer_attach(VirtIONet *n, int index)
return 0;
}
if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
vhost_set_vring_enable(nc->peer, 1);
}
if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
return 0;
}
@ -421,6 +425,10 @@ static int peer_detach(VirtIONet *n, int index)
return 0;
}
if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
vhost_set_vring_enable(nc->peer, 0);
}
if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
return 0;
}
@ -1458,11 +1466,33 @@ static int virtio_net_load(QEMUFile *f, void *opaque, int version_id)
{
VirtIONet *n = opaque;
VirtIODevice *vdev = VIRTIO_DEVICE(n);
int ret;
if (version_id < 2 || version_id > VIRTIO_NET_VM_VERSION)
return -EINVAL;
return virtio_load(vdev, f, version_id);
ret = virtio_load(vdev, f, version_id);
if (ret) {
return ret;
}
if (virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) {
n->curr_guest_offloads = qemu_get_be64(f);
} else {
n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
}
if (peer_has_vnet_hdr(n)) {
virtio_net_apply_guest_offloads(n);
}
if (virtio_vdev_has_feature(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE) &&
virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
n->announce_counter = SELF_ANNOUNCE_ROUNDS;
timer_mod(n->announce_timer, qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL));
}
return 0;
}
static int virtio_net_load_device(VirtIODevice *vdev, QEMUFile *f,
@ -1559,16 +1589,6 @@ static int virtio_net_load_device(VirtIODevice *vdev, QEMUFile *f,
}
}
if (virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) {
n->curr_guest_offloads = qemu_get_be64(f);
} else {
n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
}
if (peer_has_vnet_hdr(n)) {
virtio_net_apply_guest_offloads(n);
}
virtio_net_set_queues(n);
/* Find the first multicast entry in the saved MAC filter */
@ -1586,12 +1606,6 @@ static int virtio_net_load_device(VirtIODevice *vdev, QEMUFile *f,
qemu_get_subqueue(n->nic, i)->link_down = link_down;
}
if (virtio_vdev_has_feature(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE) &&
virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
n->announce_counter = SELF_ANNOUNCE_ROUNDS;
timer_mod(n->announce_timer, qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL));
}
return 0;
}


@ -242,6 +242,26 @@ static const TypeInfo ccw_machine_info = {
.driver = TYPE_S390_SKEYS,\
.property = "migration-enabled",\
.value = "off",\
},{\
.driver = "virtio-blk-ccw",\
.property = "max_revision",\
.value = "0",\
},{\
.driver = "virtio-balloon-ccw",\
.property = "max_revision",\
.value = "0",\
},{\
.driver = "virtio-serial-ccw",\
.property = "max_revision",\
.value = "0",\
},{\
.driver = "virtio-9p-ccw",\
.property = "max_revision",\
.value = "0",\
},{\
.driver = "virtio-rng-ccw",\
.property = "max_revision",\
.value = "0",\
},
static void ccw_machine_2_4_class_init(ObjectClass *oc, void *data)


@ -307,11 +307,18 @@ static int virtio_ccw_set_vqs(SubchDev *sch, VqInfoBlock *info,
if (!desc) {
virtio_queue_set_vector(vdev, index, VIRTIO_NO_VECTOR);
} else {
/* Fail if we don't have a big enough queue. */
/* TODO: Add interface to handle vring.num changing */
if (virtio_queue_get_num(vdev, index) > num) {
if (info) {
/* virtio-1 allows changing the ring size. */
if (virtio_queue_get_num(vdev, index) < num) {
/* Fail if we exceed the maximum number. */
return -EINVAL;
}
virtio_queue_set_num(vdev, index, num);
} else if (virtio_queue_get_num(vdev, index) > num) {
/* Fail if we don't have a big enough queue. */
return -EINVAL;
}
/* We ignore possible increased num for legacy for compatibility. */
virtio_queue_set_vector(vdev, index, index);
}
/* tell notify handler in case of config change */
@ -460,16 +467,19 @@ static int virtio_ccw_cb(SubchDev *sch, CCW1 ccw)
MEMTXATTRS_UNSPECIFIED,
NULL);
if (features.index == 0) {
features.features = (uint32_t)vdev->host_features;
} else if (features.index == 1) {
features.features = (uint32_t)(vdev->host_features >> 32);
/*
* Don't offer version 1 to the guest if it did not
* negotiate at least revision 1.
*/
if (dev->revision <= 0) {
features.features &= ~(1 << (VIRTIO_F_VERSION_1 - 32));
if (dev->revision >= 1) {
/* Don't offer legacy features for modern devices. */
features.features = (uint32_t)
(vdev->host_features & ~VIRTIO_LEGACY_FEATURES);
} else {
features.features = (uint32_t)vdev->host_features;
}
} else if ((features.index == 1) && (dev->revision >= 1)) {
/*
* Only offer feature bits beyond 31 if the guest has
* negotiated at least revision 1.
*/
features.features = (uint32_t)(vdev->host_features >> 32);
} else {
/* Return zeroes if the guest supports more feature bits. */
features.features = 0;
@ -508,14 +518,12 @@ static int virtio_ccw_cb(SubchDev *sch, CCW1 ccw)
virtio_set_features(vdev,
(vdev->guest_features & 0xffffffff00000000ULL) |
features.features);
} else if (features.index == 1) {
} else if ((features.index == 1) && (dev->revision >= 1)) {
/*
* The guest should not set version 1 if it didn't
* negotiate a revision >= 1.
* If the guest did not negotiate at least revision 1,
* we did not offer it any feature bits beyond 31. Such a
* guest passing us any bit here is therefore buggy.
*/
if (dev->revision <= 0) {
features.features &= ~(1 << (VIRTIO_F_VERSION_1 - 32));
}
virtio_set_features(vdev,
(vdev->guest_features & 0x00000000ffffffffULL) |
((uint64_t)features.features << 32));
@ -766,7 +774,7 @@ static int virtio_ccw_cb(SubchDev *sch, CCW1 ccw)
* need to fetch it here. Nothing to do for now, though.
*/
if (dev->revision >= 0 ||
revinfo.revision > virtio_ccw_rev_max(vdev)) {
revinfo.revision > virtio_ccw_rev_max(dev)) {
ret = -ENOSYS;
break;
}
@ -1539,6 +1547,10 @@ static void virtio_ccw_device_plugged(DeviceState *d, Error **errp)
sch->id.cu_model = virtio_bus_get_vdev_id(&dev->bus);
if (dev->max_rev >= 1) {
virtio_add_feature(&vdev->host_features, VIRTIO_F_VERSION_1);
}
css_generate_sch_crws(sch->cssid, sch->ssid, sch->schid,
d->hotplugged, 1);
}
@ -1555,6 +1567,8 @@ static Property virtio_ccw_net_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};
@ -1582,6 +1596,8 @@ static Property virtio_ccw_blk_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};
@ -1609,6 +1625,8 @@ static Property virtio_ccw_serial_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};
@ -1636,6 +1654,8 @@ static Property virtio_ccw_balloon_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};
@ -1663,6 +1683,8 @@ static Property virtio_ccw_scsi_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};
@ -1689,6 +1711,8 @@ static const TypeInfo virtio_ccw_scsi = {
#ifdef CONFIG_VHOST_SCSI
static Property vhost_ccw_scsi_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};
@ -1727,6 +1751,8 @@ static Property virtio_ccw_rng_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};
@ -1880,6 +1906,8 @@ static Property virtio_ccw_9p_properties[] = {
DEFINE_PROP_STRING("devno", VirtioCcwDevice, bus_id),
DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
VIRTIO_CCW_MAX_REV),
DEFINE_PROP_END_OF_LIST(),
};


@ -88,6 +88,7 @@ struct VirtioCcwDevice {
SubchDev *sch;
char *bus_id;
int revision;
uint32_t max_rev;
VirtioBusState bus;
bool ioeventfd_started;
bool ioeventfd_disabled;
@ -102,9 +103,10 @@ struct VirtioCcwDevice {
};
/* The maximum virtio revision we support. */
static inline int virtio_ccw_rev_max(VirtIODevice *vdev)
#define VIRTIO_CCW_MAX_REV 1
static inline int virtio_ccw_rev_max(VirtioCcwDevice *dev)
{
return 0;
return dev->max_rev;
}
/* virtual css bus type */
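
The max_revision property and VIRTIO_CCW_MAX_REV above are what make the new
virtio 1 support opt-out per device: the 2.4 compat entries in the s390-ccw
machine hunk earlier pin max_revision to 0 so older machine types keep
offering legacy-only devices. As a usage illustration only (machine name,
disk path and device number are placeholders), a single device on a newer
machine type could be pinned to legacy in the same way:

    qemu-system-s390x -M s390-ccw-virtio-2.5 \
        -drive file=guest.qcow2,if=none,id=d0 \
        -device virtio-blk-ccw,drive=d0,devno=fe.0.0001,max_revision=0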


@ -42,11 +42,19 @@ static int vhost_kernel_cleanup(struct vhost_dev *dev)
return close(fd);
}
static int vhost_kernel_get_vq_index(struct vhost_dev *dev, int idx)
{
assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
return idx - dev->vq_index;
}
static const VhostOps kernel_ops = {
.backend_type = VHOST_BACKEND_TYPE_KERNEL,
.vhost_call = vhost_kernel_call,
.vhost_backend_init = vhost_kernel_init,
.vhost_backend_cleanup = vhost_kernel_cleanup
.vhost_backend_cleanup = vhost_kernel_cleanup,
.vhost_backend_get_vq_index = vhost_kernel_get_vq_index,
};
int vhost_set_backend_type(struct vhost_dev *dev, VhostBackendType backend_type)


@ -24,13 +24,17 @@
#include <linux/vhost.h>
#define VHOST_MEMORY_MAX_NREGIONS 8
#define VHOST_USER_F_PROTOCOL_FEATURES 30
#define VHOST_USER_PROTOCOL_FEATURE_MASK 0x1ULL
#define VHOST_USER_PROTOCOL_F_MQ 0
typedef enum VhostUserRequest {
VHOST_USER_NONE = 0,
VHOST_USER_GET_FEATURES = 1,
VHOST_USER_SET_FEATURES = 2,
VHOST_USER_SET_OWNER = 3,
VHOST_USER_RESET_OWNER = 4,
VHOST_USER_RESET_DEVICE = 4,
VHOST_USER_SET_MEM_TABLE = 5,
VHOST_USER_SET_LOG_BASE = 6,
VHOST_USER_SET_LOG_FD = 7,
@ -41,6 +45,10 @@ typedef enum VhostUserRequest {
VHOST_USER_SET_VRING_KICK = 12,
VHOST_USER_SET_VRING_CALL = 13,
VHOST_USER_SET_VRING_ERR = 14,
VHOST_USER_GET_PROTOCOL_FEATURES = 15,
VHOST_USER_SET_PROTOCOL_FEATURES = 16,
VHOST_USER_GET_QUEUE_NUM = 17,
VHOST_USER_SET_VRING_ENABLE = 18,
VHOST_USER_MAX
} VhostUserRequest;
@ -94,7 +102,7 @@ static unsigned long int ioctl_to_vhost_user_request[VHOST_USER_MAX] = {
VHOST_GET_FEATURES, /* VHOST_USER_GET_FEATURES */
VHOST_SET_FEATURES, /* VHOST_USER_SET_FEATURES */
VHOST_SET_OWNER, /* VHOST_USER_SET_OWNER */
VHOST_RESET_OWNER, /* VHOST_USER_RESET_OWNER */
VHOST_RESET_DEVICE, /* VHOST_USER_RESET_DEVICE */
VHOST_SET_MEM_TABLE, /* VHOST_USER_SET_MEM_TABLE */
VHOST_SET_LOG_BASE, /* VHOST_USER_SET_LOG_BASE */
VHOST_SET_LOG_FD, /* VHOST_USER_SET_LOG_FD */
@ -180,6 +188,19 @@ static int vhost_user_write(struct vhost_dev *dev, VhostUserMsg *msg,
0 : -1;
}
static bool vhost_user_one_time_request(VhostUserRequest request)
{
switch (request) {
case VHOST_USER_SET_OWNER:
case VHOST_USER_RESET_DEVICE:
case VHOST_USER_SET_MEM_TABLE:
case VHOST_USER_GET_QUEUE_NUM:
return true;
default:
return false;
}
}
static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
void *arg)
{
@ -193,27 +214,45 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
msg_request = vhost_user_request_translate(request);
/* only translate vhost ioctl requests */
if (request > VHOST_USER_MAX) {
msg_request = vhost_user_request_translate(request);
} else {
msg_request = request;
}
/*
* For non-vring specific requests, like VHOST_USER_SET_MEM_TABLE,
* we only need to send it once, from the first queue pair; later
* requests of the same kind are simply ignored.
*/
if (vhost_user_one_time_request(msg_request) && dev->vq_index != 0) {
return 0;
}
msg.request = msg_request;
msg.flags = VHOST_USER_VERSION;
msg.size = 0;
switch (request) {
case VHOST_GET_FEATURES:
switch (msg_request) {
case VHOST_USER_GET_FEATURES:
case VHOST_USER_GET_PROTOCOL_FEATURES:
case VHOST_USER_GET_QUEUE_NUM:
need_reply = 1;
break;
case VHOST_SET_FEATURES:
case VHOST_SET_LOG_BASE:
case VHOST_USER_SET_FEATURES:
case VHOST_USER_SET_LOG_BASE:
case VHOST_USER_SET_PROTOCOL_FEATURES:
msg.u64 = *((__u64 *) arg);
msg.size = sizeof(m.u64);
break;
case VHOST_SET_OWNER:
case VHOST_RESET_OWNER:
case VHOST_USER_SET_OWNER:
case VHOST_USER_RESET_DEVICE:
break;
case VHOST_SET_MEM_TABLE:
case VHOST_USER_SET_MEM_TABLE:
for (i = 0; i < dev->mem->nregions; ++i) {
struct vhost_memory_region *reg = dev->mem->regions + i;
ram_addr_t ram_addr;
@ -246,30 +285,31 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
break;
case VHOST_SET_LOG_FD:
case VHOST_USER_SET_LOG_FD:
fds[fd_num++] = *((int *) arg);
break;
case VHOST_SET_VRING_NUM:
case VHOST_SET_VRING_BASE:
case VHOST_USER_SET_VRING_NUM:
case VHOST_USER_SET_VRING_BASE:
case VHOST_USER_SET_VRING_ENABLE:
memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
msg.size = sizeof(m.state);
break;
case VHOST_GET_VRING_BASE:
case VHOST_USER_GET_VRING_BASE:
memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
msg.size = sizeof(m.state);
need_reply = 1;
break;
case VHOST_SET_VRING_ADDR:
case VHOST_USER_SET_VRING_ADDR:
memcpy(&msg.addr, arg, sizeof(struct vhost_vring_addr));
msg.size = sizeof(m.addr);
break;
case VHOST_SET_VRING_KICK:
case VHOST_SET_VRING_CALL:
case VHOST_SET_VRING_ERR:
case VHOST_USER_SET_VRING_KICK:
case VHOST_USER_SET_VRING_CALL:
case VHOST_USER_SET_VRING_ERR:
file = arg;
msg.u64 = file->index & VHOST_USER_VRING_IDX_MASK;
msg.size = sizeof(m.u64);
@ -302,6 +342,8 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
switch (msg_request) {
case VHOST_USER_GET_FEATURES:
case VHOST_USER_GET_PROTOCOL_FEATURES:
case VHOST_USER_GET_QUEUE_NUM:
if (msg.size != sizeof(m.u64)) {
error_report("Received bad msg size.");
return -1;
@ -327,13 +369,61 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
static int vhost_user_init(struct vhost_dev *dev, void *opaque)
{
unsigned long long features;
int err;
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
dev->opaque = opaque;
err = vhost_user_call(dev, VHOST_USER_GET_FEATURES, &features);
if (err < 0) {
return err;
}
if (virtio_has_feature(features, VHOST_USER_F_PROTOCOL_FEATURES)) {
dev->backend_features |= 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;
err = vhost_user_call(dev, VHOST_USER_GET_PROTOCOL_FEATURES, &features);
if (err < 0) {
return err;
}
dev->protocol_features = features & VHOST_USER_PROTOCOL_FEATURE_MASK;
err = vhost_user_call(dev, VHOST_USER_SET_PROTOCOL_FEATURES,
&dev->protocol_features);
if (err < 0) {
return err;
}
/* query the max queues we support if backend supports Multiple Queue */
if (dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ)) {
err = vhost_user_call(dev, VHOST_USER_GET_QUEUE_NUM, &dev->max_queues);
if (err < 0) {
return err;
}
}
}
return 0;
}
static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
{
struct vhost_vring_state state = {
.index = dev->vq_index,
.num = enable,
};
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
if (!(dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ))) {
return -1;
}
return vhost_user_call(dev, VHOST_USER_SET_VRING_ENABLE, &state);
}
static int vhost_user_cleanup(struct vhost_dev *dev)
{
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
@ -343,9 +433,18 @@ static int vhost_user_cleanup(struct vhost_dev *dev)
return 0;
}
static int vhost_user_get_vq_index(struct vhost_dev *dev, int idx)
{
assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
return idx;
}
const VhostOps user_ops = {
.backend_type = VHOST_BACKEND_TYPE_USER,
.vhost_call = vhost_user_call,
.vhost_backend_init = vhost_user_init,
.vhost_backend_cleanup = vhost_user_cleanup
};
.vhost_backend_cleanup = vhost_user_cleanup,
.vhost_backend_get_vq_index = vhost_user_get_vq_index,
.vhost_backend_set_vring_enable = vhost_user_set_vring_enable,
};
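
One detail worth calling out from the two backends: vhost_kernel_get_vq_index
returns an index relative to the device, since each kernel vhost device has
its own fd, while vhost_user_get_vq_index keeps the absolute index because
every queue pair shares the one vhost-user socket. A toy model, with invented
names, just to show the resulting numbers:

    #include <assert.h>
    #include <stdio.h>

    /* Pretend device owning queue pair #1 of a multiqueue NIC,
     * i.e. absolute vrings 2 and 3 (vq_index = 2, nvqs = 2). */
    struct toy_dev { int vq_index, nvqs; };

    static int kernel_get_vq_index(struct toy_dev *dev, int idx)
    {
        assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
        return idx - dev->vq_index;     /* per-device numbering: 0..nvqs-1 */
    }

    static int user_get_vq_index(struct toy_dev *dev, int idx)
    {
        assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
        return idx;                     /* absolute numbering on the shared socket */
    }

    int main(void)
    {
        struct toy_dev dev = { .vq_index = 2, .nvqs = 2 };
        printf("kernel backend: %d %d\n",
               kernel_get_vq_index(&dev, 2), kernel_get_vq_index(&dev, 3)); /* 0 1 */
        printf("user backend:   %d %d\n",
               user_get_vq_index(&dev, 2), user_get_vq_index(&dev, 3));     /* 2 3 */
        return 0;
    }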


@ -719,7 +719,7 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
{
hwaddr s, l, a;
int r;
int vhost_vq_index = idx - dev->vq_index;
int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, idx);
struct vhost_vring_file file = {
.index = vhost_vq_index
};
@ -728,7 +728,6 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
};
struct VirtQueue *vvq = virtio_get_queue(vdev, idx);
assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
vq->num = state.num = virtio_queue_get_num(vdev, idx);
r = dev->vhost_ops->vhost_call(dev, VHOST_SET_VRING_NUM, &state);
@ -822,12 +821,12 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev,
struct vhost_virtqueue *vq,
unsigned idx)
{
int vhost_vq_index = idx - dev->vq_index;
int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, idx);
struct vhost_vring_state state = {
.index = vhost_vq_index,
};
int r;
assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
r = dev->vhost_ops->vhost_call(dev, VHOST_GET_VRING_BASE, &state);
if (r < 0) {
fprintf(stderr, "vhost VQ %d ring restore failed: %d\n", idx, r);
@ -875,8 +874,9 @@ static void vhost_eventfd_del(MemoryListener *listener,
static int vhost_virtqueue_init(struct vhost_dev *dev,
struct vhost_virtqueue *vq, int n)
{
int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, n);
struct vhost_vring_file file = {
.index = n,
.index = vhost_vq_index,
};
int r = event_notifier_init(&vq->masked_notifier, 0);
if (r < 0) {
@ -927,7 +927,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
}
for (i = 0; i < hdev->nvqs; ++i) {
r = vhost_virtqueue_init(hdev, hdev->vqs + i, i);
r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
if (r < 0) {
goto fail_vq;
}
@ -1066,17 +1066,15 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
{
struct VirtQueue *vvq = virtio_get_queue(vdev, n);
int r, index = n - hdev->vq_index;
struct vhost_vring_file file;
assert(n >= hdev->vq_index && n < hdev->vq_index + hdev->nvqs);
struct vhost_vring_file file = {
.index = index
};
if (mask) {
file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
} else {
file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
}
file.index = hdev->vhost_ops->vhost_backend_get_vq_index(hdev, n);
r = hdev->vhost_ops->vhost_call(hdev, VHOST_SET_VRING_CALL, &file);
assert(r >= 0);
}


@ -60,6 +60,7 @@ typedef struct VRingUsed
typedef struct VRing
{
unsigned int num;
unsigned int num_default;
unsigned int align;
hwaddr desc;
hwaddr avail;
@ -633,6 +634,7 @@ void virtio_reset(void *opaque)
vdev->vq[i].signalled_used = 0;
vdev->vq[i].signalled_used_valid = false;
vdev->vq[i].notification = true;
vdev->vq[i].vring.num = vdev->vq[i].vring.num_default;
}
}
@ -964,6 +966,7 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
abort();
vdev->vq[i].vring.num = queue_size;
vdev->vq[i].vring.num_default = queue_size;
vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
vdev->vq[i].handle_output = handle_output;
@ -977,6 +980,7 @@ void virtio_del_queue(VirtIODevice *vdev, int n)
}
vdev->vq[n].vring.num = 0;
vdev->vq[n].vring.num_default = 0;
}
void virtio_irq(VirtQueue *vq)
@ -1056,6 +1060,19 @@ static bool virtio_virtqueue_needed(void *opaque)
return virtio_host_has_feature(vdev, VIRTIO_F_VERSION_1);
}
static bool virtio_ringsize_needed(void *opaque)
{
VirtIODevice *vdev = opaque;
int i;
for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
if (vdev->vq[i].vring.num != vdev->vq[i].vring.num_default) {
return true;
}
}
return false;
}
static void put_virtqueue_state(QEMUFile *f, void *pv, size_t size)
{
VirtIODevice *vdev = pv;
@ -1104,6 +1121,52 @@ static const VMStateDescription vmstate_virtio_virtqueues = {
}
};
static void put_ringsize_state(QEMUFile *f, void *pv, size_t size)
{
VirtIODevice *vdev = pv;
int i;
for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
qemu_put_be32(f, vdev->vq[i].vring.num_default);
}
}
static int get_ringsize_state(QEMUFile *f, void *pv, size_t size)
{
VirtIODevice *vdev = pv;
int i;
for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
vdev->vq[i].vring.num_default = qemu_get_be32(f);
}
return 0;
}
static VMStateInfo vmstate_info_ringsize = {
.name = "ringsize_state",
.get = get_ringsize_state,
.put = put_ringsize_state,
};
static const VMStateDescription vmstate_virtio_ringsize = {
.name = "virtio/ringsize",
.version_id = 1,
.minimum_version_id = 1,
.needed = &virtio_ringsize_needed,
.fields = (VMStateField[]) {
{
.name = "ringsize",
.version_id = 0,
.field_exists = NULL,
.size = 0,
.info = &vmstate_info_ringsize,
.flags = VMS_SINGLE,
.offset = 0,
},
VMSTATE_END_OF_LIST()
}
};
static const VMStateDescription vmstate_virtio_device_endian = {
.name = "virtio/device_endian",
.version_id = 1,
@ -1138,6 +1201,7 @@ static const VMStateDescription vmstate_virtio = {
&vmstate_virtio_device_endian,
&vmstate_virtio_64bit_features,
&vmstate_virtio_virtqueues,
&vmstate_virtio_ringsize,
NULL
}
};
@ -1460,7 +1524,7 @@ hwaddr virtio_queue_get_desc_size(VirtIODevice *vdev, int n)
hwaddr virtio_queue_get_avail_size(VirtIODevice *vdev, int n)
{
return offsetof(VRingAvail, ring) +
sizeof(uint64_t) * vdev->vq[n].vring.num;
sizeof(uint16_t) * vdev->vq[n].vring.num;
}
hwaddr virtio_queue_get_used_size(VirtIODevice *vdev, int n)

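The one-line fix above matters because the split-ring available ring stores
16-bit descriptor indexes, not 64-bit values, so the old calculation
over-reported the region size. A minimal sketch of the layout (field names
follow the virtio spec rather than QEMU's headers; the optional used_event
word from VIRTIO_RING_F_EVENT_IDX is not part of this calculation):

    #include <stdint.h>
    #include <stddef.h>

    /* Split-virtqueue available ring as it sits in guest memory:
     * two 16-bit header fields followed by one 16-bit descriptor
     * index per ring entry. */
    struct vring_avail_sketch {
        uint16_t flags;
        uint16_t idx;
        uint16_t ring[];
    };

    /* Mirrors the corrected calculation: header + num 16-bit entries. */
    static inline size_t avail_size(unsigned int num)
    {
        return offsetof(struct vring_avail_sketch, ring) +
               sizeof(uint16_t) * num;
    }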

@ -1,6 +1,9 @@
#ifndef HW_COMPAT_H
#define HW_COMPAT_H
#define HW_COMPAT_2_4 \
/* empty */
#define HW_COMPAT_2_3 \
{\
.driver = "virtio-blk-pci",\


@ -291,7 +291,11 @@ int e820_add_entry(uint64_t, uint64_t, uint32_t);
int e820_get_num_entries(void);
bool e820_get_entry(int, uint32_t, uint64_t *, uint64_t *);
#define PC_COMPAT_2_4 \
HW_COMPAT_2_4
#define PC_COMPAT_2_3 \
PC_COMPAT_2_4 \
HW_COMPAT_2_3 \
{\
.driver = TYPE_X86_CPU,\


@ -24,12 +24,16 @@ typedef int (*vhost_call)(struct vhost_dev *dev, unsigned long int request,
void *arg);
typedef int (*vhost_backend_init)(struct vhost_dev *dev, void *opaque);
typedef int (*vhost_backend_cleanup)(struct vhost_dev *dev);
typedef int (*vhost_backend_get_vq_index)(struct vhost_dev *dev, int idx);
typedef int (*vhost_backend_set_vring_enable)(struct vhost_dev *dev, int enable);
typedef struct VhostOps {
VhostBackendType backend_type;
vhost_call vhost_call;
vhost_backend_init vhost_backend_init;
vhost_backend_cleanup vhost_backend_cleanup;
vhost_backend_get_vq_index vhost_backend_get_vq_index;
vhost_backend_set_vring_enable vhost_backend_set_vring_enable;
} VhostOps;
extern const VhostOps user_ops;


@ -47,6 +47,8 @@ struct vhost_dev {
unsigned long long features;
unsigned long long acked_features;
unsigned long long backend_features;
unsigned long long protocol_features;
unsigned long long max_queues;
bool started;
bool log_enabled;
unsigned long long log_size;


@ -13,6 +13,7 @@ typedef struct VhostNetOptions {
void *opaque;
} VhostNetOptions;
uint64_t vhost_net_get_max_queues(VHostNetState *net);
struct vhost_net *vhost_net_init(VhostNetOptions *options);
int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
@ -27,4 +28,6 @@ bool vhost_net_virtqueue_pending(VHostNetState *net, int n);
void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
int idx, bool mask);
VHostNetState *get_vhost_net(NetClientState *nc);
int vhost_set_vring_enable(NetClientState * nc, int enable);
#endif


@ -78,7 +78,7 @@ struct vhost_memory {
#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)
/* Give up ownership, and reset the device to default values.
* Allows subsequent call to VHOST_OWNER_SET to succeed. */
#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
#define VHOST_RESET_DEVICE _IO(VHOST_VIRTIO, 0x02)
/* Set up/modify memory layout */
#define VHOST_SET_MEM_TABLE _IOW(VHOST_VIRTIO, 0x03, struct vhost_memory)


@ -14,6 +14,7 @@
#include "sysemu/char.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "qmp-commands.h"
typedef struct VhostUserState {
NetClientState nc;
@ -39,37 +40,77 @@ static int vhost_user_running(VhostUserState *s)
return (s->vhost_net) ? 1 : 0;
}
static int vhost_user_start(VhostUserState *s)
static void vhost_user_stop(int queues, NetClientState *ncs[])
{
VhostNetOptions options;
VhostUserState *s;
int i;
if (vhost_user_running(s)) {
return 0;
for (i = 0; i < queues; i++) {
assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
s = DO_UPCAST(VhostUserState, nc, ncs[i]);
if (!vhost_user_running(s)) {
continue;
}
if (s->vhost_net) {
vhost_net_cleanup(s->vhost_net);
s->vhost_net = NULL;
}
}
options.backend_type = VHOST_BACKEND_TYPE_USER;
options.net_backend = &s->nc;
options.opaque = s->chr;
s->vhost_net = vhost_net_init(&options);
return vhost_user_running(s) ? 0 : -1;
}
static void vhost_user_stop(VhostUserState *s)
static int vhost_user_start(int queues, NetClientState *ncs[])
{
if (vhost_user_running(s)) {
vhost_net_cleanup(s->vhost_net);
VhostNetOptions options;
VhostUserState *s;
int max_queues;
int i;
options.backend_type = VHOST_BACKEND_TYPE_USER;
for (i = 0; i < queues; i++) {
assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
s = DO_UPCAST(VhostUserState, nc, ncs[i]);
if (vhost_user_running(s)) {
continue;
}
options.net_backend = ncs[i];
options.opaque = s->chr;
s->vhost_net = vhost_net_init(&options);
if (!s->vhost_net) {
error_report("failed to init vhost_net for queue %d\n", i);
goto err;
}
if (i == 0) {
max_queues = vhost_net_get_max_queues(s->vhost_net);
if (queues > max_queues) {
error_report("you are asking more queues than "
"supported: %d\n", max_queues);
goto err;
}
}
}
s->vhost_net = 0;
return 0;
err:
vhost_user_stop(i + 1, ncs);
return -1;
}
static void vhost_user_cleanup(NetClientState *nc)
{
VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
vhost_user_stop(s);
if (s->vhost_net) {
vhost_net_cleanup(s->vhost_net);
s->vhost_net = NULL;
}
qemu_purge_queued_packets(nc);
}
@ -95,59 +136,61 @@ static NetClientInfo net_vhost_user_info = {
.has_ufo = vhost_user_has_ufo,
};
static void net_vhost_link_down(VhostUserState *s, bool link_down)
{
s->nc.link_down = link_down;
if (s->nc.peer) {
s->nc.peer->link_down = link_down;
}
if (s->nc.info->link_status_changed) {
s->nc.info->link_status_changed(&s->nc);
}
if (s->nc.peer && s->nc.peer->info->link_status_changed) {
s->nc.peer->info->link_status_changed(s->nc.peer);
}
}
static void net_vhost_user_event(void *opaque, int event)
{
VhostUserState *s = opaque;
const char *name = opaque;
NetClientState *ncs[MAX_QUEUE_NUM];
VhostUserState *s;
Error *err = NULL;
int queues;
queues = qemu_find_net_clients_except(name, ncs,
NET_CLIENT_OPTIONS_KIND_NIC,
MAX_QUEUE_NUM);
s = DO_UPCAST(VhostUserState, nc, ncs[0]);
switch (event) {
case CHR_EVENT_OPENED:
vhost_user_start(s);
net_vhost_link_down(s, false);
if (vhost_user_start(queues, ncs) < 0) {
exit(1);
}
qmp_set_link(name, true, &err);
error_report("chardev \"%s\" went up", s->chr->label);
break;
case CHR_EVENT_CLOSED:
net_vhost_link_down(s, true);
vhost_user_stop(s);
qmp_set_link(name, true, &err);
vhost_user_stop(queues, ncs);
error_report("chardev \"%s\" went down", s->chr->label);
break;
}
if (err) {
error_report_err(err);
}
}
static int net_vhost_user_init(NetClientState *peer, const char *device,
const char *name, CharDriverState *chr)
const char *name, CharDriverState *chr,
int queues)
{
NetClientState *nc;
VhostUserState *s;
int i;
nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
for (i = 0; i < queues; i++) {
nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
chr->label);
snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
i, chr->label);
s = DO_UPCAST(VhostUserState, nc, nc);
/* We don't provide a receive callback */
nc->receive_disabled = 1;
nc->queue_index = i;
/* We don't provide a receive callback */
s->nc.receive_disabled = 1;
s->chr = chr;
s = DO_UPCAST(VhostUserState, nc, nc);
s->chr = chr;
}
qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
qemu_chr_add_handlers(chr, NULL, NULL, net_vhost_user_event, (void*)name);
return 0;
}
@ -226,6 +269,7 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
int net_init_vhost_user(const NetClientOptions *opts, const char *name,
NetClientState *peer, Error **errp)
{
int queues;
const NetdevVhostUserOptions *vhost_user_opts;
CharDriverState *chr;
@ -243,6 +287,7 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
return -1;
}
queues = vhost_user_opts->has_queues ? vhost_user_opts->queues : 1;
return net_vhost_user_init(peer, "vhost_user", name, chr);
return net_vhost_user_init(peer, "vhost_user", name, chr, queues);
}


@ -2481,12 +2481,16 @@
#
# @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
#
# @queues: #optional number of queues to be created for multiqueue vhost-user
# (default: 1) (Since 2.5)
#
# Since 2.1
##
{ 'struct': 'NetdevVhostUserOptions',
'data': {
'chardev': 'str',
'*vhostforce': 'bool' } }
'*vhostforce': 'bool',
'*queues': 'int' } }
##
# @NetClientOptions


@ -1990,13 +1990,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
netdev. @code{-net} and @code{-device} with parameter @option{vlan} create the
required hub automatically.
@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]
Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
be a unix domain socket backed one. The vhost-user uses a specifically defined
protocol to pass vhost ioctl replacement messages to an application on the other
end of the socket. On non-MSIX guests, the feature can be forced with
@var{vhostforce}.
@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to
be created for multiqueue vhost-user.
Example:
@example

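As a concrete usage illustration, separate from the patch text, a two-queue
vhost-user NIC might be wired up roughly as below. The socket path, memory
size and ids are placeholders; vhost-user needs the guest RAM to come from a
shareable file-backed memory object, and for multiqueue the virtio-net device
wants mq=on with enough MSI-X vectors (commonly 2*queues+2):

    qemu-system-x86_64 -m 1024 \
        -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem \
        -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
        -netdev vhost-user,id=net0,chardev=chr0,queues=2 \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=6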

@ -58,7 +58,7 @@ typedef enum VhostUserRequest {
VHOST_USER_GET_FEATURES = 1,
VHOST_USER_SET_FEATURES = 2,
VHOST_USER_SET_OWNER = 3,
VHOST_USER_RESET_OWNER = 4,
VHOST_USER_RESET_DEVICE = 4,
VHOST_USER_SET_MEM_TABLE = 5,
VHOST_USER_SET_LOG_BASE = 6,
VHOST_USER_SET_LOG_FD = 7,