Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* switch to C11 atomics (Alex)
* Coverity fixes for IPMI (Corey), i386 (Paolo), qemu-char (Paolo)
* at long last, fail on wrong .pc files if -m32 is in use (Daniel)
* qemu-char regression fix (Daniel)
* SAS1068 device (Paolo)
* memory region docs improvements (Peter)
* target-i386 cleanups (Richard)
* qemu-nbd docs improvements (Sitsofe)
* thread-safe memory hotplug (Stefan)

# gpg: Signature made Tue 09 Feb 2016 16:09:30 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"

* remotes/bonzini/tags/for-upstream: (33 commits)
  qemu-char, io: fix ordering of arguments for UDP socket creation
  MAINTAINERS: add all-match entry for qemu-devel@
  get_maintainer.pl: fall back to git if only lists are found
  target-i386: fix PSE36 mode
  docs/memory.txt: Improve list of different memory regions
  ipmi_bmc_sim: Add break to correct watchdog NMI check
  ipmi_bmc_sim: Fix off by one in check.
  ipmi: do not take/drop iothread lock
  target-i386: Deconstruct the cpu_T array
  target-i386: Tidy gen_add_A0_im
  target-i386: Rewrite leave
  target-i386: Rewrite gen_enter inline
  target-i386: Use gen_lea_v_seg in pusha/popa
  target-i386: Access segs via TCG registers
  target-i386: Use gen_lea_v_seg in stack subroutines
  target-i386: Use gen_lea_v_seg in gen_lea_modrm
  target-i386: Introduce mo_stacksize
  target-i386: Create gen_lea_v_seg
  char: fix repeated registration of tcp chardev I/O handlers
  kvm-all: trace: strerror fixup
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell 2016-02-09 19:34:46 +00:00
commit c9f19dff10
32 changed files with 5221 additions and 1193 deletions

MAINTAINERS
@@ -52,6 +52,11 @@ General Project Administration
------------------------------
M: Peter Maydell <peter.maydell@linaro.org>
All patches CC here
L: qemu-devel@nongnu.org
F: *
F: */
Responsible Disclosure, Reporting Security Issues
------------------------------
W: http://wiki.qemu.org/SecurityProcess

configure (24 changes)

@@ -3063,6 +3063,30 @@ for i in $glib_modules; do
fi
done
# Sanity check that the current size_t matches the
# size that glib thinks it should be. This catches
# problems on multi-arch where people try to build
# 32-bit QEMU while pointing at 64-bit glib headers
cat > $TMPC <<EOF
#include <glib.h>
#include <unistd.h>
#define QEMU_BUILD_BUG_ON(x) \
typedef char qemu_build_bug_on[(x)?-1:1] __attribute__((unused));
int main(void) {
QEMU_BUILD_BUG_ON(sizeof(size_t) != GLIB_SIZEOF_SIZE_T);
return 0;
}
EOF
if ! compile_prog "-Werror $CFLAGS" "$LIBS" ; then
error_exit "sizeof(size_t) doesn't match GLIB_SIZEOF_SIZE_T."\
"You probably need to set PKG_CONFIG_LIBDIR"\
"to point to the right pkg-config files for your"\
"build target"
fi
# g_test_trap_subprocess added in 2.38. Used by some tests.
glib_subprocess=yes
if ! $pkg_config --atleast-version=2.38 glib-2.0; then

default-configs/pci.mak

@@ -15,6 +15,7 @@ CONFIG_ES1370=y
CONFIG_LSI_SCSI_PCI=y
CONFIG_VMW_PVSCSI_SCSI_PCI=y
CONFIG_MEGASAS_SCSI_PCI=y
CONFIG_MPTSAS_SCSI_PCI=y
CONFIG_RTL8139_PCI=y
CONFIG_E1000_PCI=y
CONFIG_VMXNET3_PCI=y

docs/memory.txt

@@ -26,14 +26,28 @@ These represent memory as seen from the CPU or a device's viewpoint.
Types of regions
----------------
There are four types of memory regions (all represented by a single C type
There are multiple types of memory regions (all represented by a single C type
MemoryRegion):
- RAM: a RAM region is simply a range of host memory that can be made available
to the guest.
You typically initialize these with memory_region_init_ram(). Some special
purposes require the variants memory_region_init_resizeable_ram(),
memory_region_init_ram_from_file(), or memory_region_init_ram_ptr().
- MMIO: a range of guest memory that is implemented by host callbacks;
each read or write causes a callback to be called on the host.
You initialize these with memory_region_init_io(), passing it a MemoryRegionOps
structure describing the callbacks.
- ROM: a ROM memory region works like RAM for reads (directly accessing
a region of host memory), but like MMIO for writes (invoking a callback).
You initialize these with memory_region_init_rom_device().
- IOMMU region: an IOMMU region translates addresses of accesses made to it
and forwards them to some other target memory region. As the name suggests,
these are only needed for modelling an IOMMU, not for simple devices.
You initialize these with memory_region_init_iommu().
- container: a container simply includes other memory regions, each at
a different offset. Containers are useful for grouping several regions
@@ -45,12 +59,22 @@ MemoryRegion):
can overlay a subregion of RAM with MMIO or ROM, or a PCI controller
that does not prevent cards from claiming overlapping BARs.
You initialize a pure container with memory_region_init().
- alias: a subsection of another region. Aliases allow a region to be
split apart into discontiguous regions. Examples of uses are memory banks
used when the guest address space is smaller than the amount of RAM
addressed, or a memory controller that splits main memory to expose a "PCI
hole". Aliases may point to any type of region, including other aliases,
but an alias may not point back to itself, directly or indirectly.
You initialize these with memory_region_init_alias().
- reservation region: a reservation region is primarily for debugging.
It claims I/O space that is not supposed to be handled by QEMU itself.
The typical use is to track parts of the address space which will be
handled by the host kernel when KVM is enabled.
You initialize these with memory_region_init_reservation(), or by
passing a NULL callback parameter to memory_region_init_io().
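As a brief illustration of the RAM and MMIO types above, here is a minimal
sketch (the device name, callbacks and sizes here are hypothetical):

    #include "exec/memory.h"
    #include "qapi/error.h"

    static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
    {
        return 0;   /* host callback, invoked on every guest read */
    }

    static void mydev_write(void *opaque, hwaddr addr, uint64_t val,
                            unsigned size)
    {
        /* host callback, invoked on every guest write */
    }

    static const MemoryRegionOps mydev_ops = {
        .read = mydev_read,
        .write = mydev_write,
    };

    static void mydev_init_regions(Object *owner, MemoryRegion *ram,
                                   MemoryRegion *mmio)
    {
        /* RAM: a range of host memory made available to the guest */
        memory_region_init_ram(ram, owner, "mydev.ram", 64 * 1024,
                               &error_fatal);
        /* MMIO: every guest access dispatches to the callbacks above */
        memory_region_init_io(mmio, owner, &mydev_ops, NULL,
                              "mydev.mmio", 0x1000);
    }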
It is valid to add subregions to a region which is not a pure container
(that is, to an MMIO, RAM or ROM region). This means that the region

exec.c (75 changes)

@@ -980,8 +980,9 @@ bool cpu_physical_memory_test_and_clear_dirty(ram_addr_t start,
ram_addr_t length,
unsigned client)
{
DirtyMemoryBlocks *blocks;
unsigned long end, page;
bool dirty;
bool dirty = false;
if (length == 0) {
return false;
@@ -989,8 +990,22 @@ bool cpu_physical_memory_test_and_clear_dirty(ram_addr_t start,
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
dirty = bitmap_test_and_clear_atomic(ram_list.dirty_memory[client],
page, end - page);
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
dirty |= bitmap_test_and_clear_atomic(blocks->blocks[idx],
offset, num);
page += num;
}
rcu_read_unlock();
if (dirty && tcg_enabled()) {
tlb_reset_dirty_range_all(start, length);
@@ -1504,6 +1519,47 @@ int qemu_ram_resize(ram_addr_t base, ram_addr_t newsize, Error **errp)
return 0;
}
/* Called with ram_list.mutex held */
static void dirty_memory_extend(ram_addr_t old_ram_size,
ram_addr_t new_ram_size)
{
ram_addr_t old_num_blocks = DIV_ROUND_UP(old_ram_size,
DIRTY_MEMORY_BLOCK_SIZE);
ram_addr_t new_num_blocks = DIV_ROUND_UP(new_ram_size,
DIRTY_MEMORY_BLOCK_SIZE);
int i;
/* Only need to extend if block count increased */
if (new_num_blocks <= old_num_blocks) {
return;
}
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
DirtyMemoryBlocks *old_blocks;
DirtyMemoryBlocks *new_blocks;
int j;
old_blocks = atomic_rcu_read(&ram_list.dirty_memory[i]);
new_blocks = g_malloc(sizeof(*new_blocks) +
sizeof(new_blocks->blocks[0]) * new_num_blocks);
if (old_num_blocks) {
memcpy(new_blocks->blocks, old_blocks->blocks,
old_num_blocks * sizeof(old_blocks->blocks[0]));
}
for (j = old_num_blocks; j < new_num_blocks; j++) {
new_blocks->blocks[j] = bitmap_new(DIRTY_MEMORY_BLOCK_SIZE);
}
atomic_rcu_set(&ram_list.dirty_memory[i], new_blocks);
if (old_blocks) {
g_free_rcu(old_blocks, rcu);
}
}
}
static ram_addr_t ram_block_add(RAMBlock *new_block, Error **errp)
{
RAMBlock *block;
@@ -1543,6 +1599,7 @@ static ram_addr_t ram_block_add(RAMBlock *new_block, Error **errp)
(new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS);
if (new_ram_size > old_ram_size) {
migration_bitmap_extend(old_ram_size, new_ram_size);
dirty_memory_extend(old_ram_size, new_ram_size);
}
/* Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
* QLIST (which has an RCU-friendly variant) does not have insertion at
@@ -1568,18 +1625,6 @@ static ram_addr_t ram_block_add(RAMBlock *new_block, Error **errp)
ram_list.version++;
qemu_mutex_unlock_ramlist();
new_ram_size = last_ram_offset() >> TARGET_PAGE_BITS;
if (new_ram_size > old_ram_size) {
int i;
/* ram_list.dirty_memory[] is protected by the iothread lock. */
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
ram_list.dirty_memory[i] =
bitmap_zero_extend(ram_list.dirty_memory[i],
old_ram_size, new_ram_size);
}
}
cpu_physical_memory_set_dirty_range(new_block->offset,
new_block->used_length,
DIRTY_CLIENTS_ALL);

hw/ipmi/ipmi.c

@@ -51,9 +51,7 @@ static int ipmi_do_hw_op(IPMIInterface *s, enum ipmi_op op, int checkonly)
if (checkonly) {
return 0;
}
qemu_mutex_lock_iothread();
qmp_inject_nmi(NULL);
qemu_mutex_unlock_iothread();
return 0;
case IPMI_POWERCYCLE_CHASSIS:

hw/ipmi/ipmi_bmc_sim.c

@@ -559,7 +559,7 @@ static void ipmi_init_sensors_from_sdrs(IPMIBmcSim *s)
static int ipmi_register_netfn(IPMIBmcSim *s, unsigned int netfn,
const IPMINetfn *netfnd)
{
if ((netfn & 1) || (netfn > MAX_NETFNS) || (s->netfns[netfn / 2])) {
if ((netfn & 1) || (netfn >= MAX_NETFNS) || (s->netfns[netfn / 2])) {
return -1;
}
s->netfns[netfn / 2] = netfnd;
@@ -1135,6 +1135,8 @@ static void set_watchdog_timer(IPMIBmcSim *ibs,
rsp[2] = IPMI_CC_INVALID_DATA_FIELD;
return;
}
break;
default:
/* We don't support PRE_SMI */
rsp[2] = IPMI_CC_INVALID_DATA_FIELD;

hw/scsi/Makefile.objs

@@ -1,6 +1,7 @@
common-obj-y += scsi-disk.o
common-obj-y += scsi-generic.o scsi-bus.o
common-obj-$(CONFIG_LSI_SCSI_PCI) += lsi53c895a.o
common-obj-$(CONFIG_MPTSAS_SCSI_PCI) += mptsas.o mptconfig.o mptendian.o
common-obj-$(CONFIG_MEGASAS_SCSI_PCI) += megasas.o
common-obj-$(CONFIG_VMW_PVSCSI_SCSI_PCI) += vmw_pvscsi.o
common-obj-$(CONFIG_ESP) += esp.o

hw/scsi/mpi.h (new file, 1153 lines; diff suppressed because it is too large)

hw/scsi/mptconfig.c (new file, 904 lines)

@@ -0,0 +1,904 @@
/*
* QEMU LSI SAS1068 Host Bus Adapter emulation - configuration pages
*
* Copyright (c) 2016 Red Hat, Inc.
*
* Author: Paolo Bonzini
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "hw/pci/pci.h"
#include "hw/scsi/scsi.h"
#include "mptsas.h"
#include "mpi.h"
#include "trace.h"
/* Generic functions for marshaling and unmarshaling. */
#define repl1(x) x
#define repl2(x) x x
#define repl3(x) x x x
#define repl4(x) x x x x
#define repl5(x) x x x x x
#define repl6(x) x x x x x x
#define repl7(x) x x x x x x x
#define repl8(x) x x x x x x x x
#define repl(n, x) glue(repl, n)(x)
typedef union PackValue {
uint64_t ll;
char *str;
} PackValue;
static size_t vfill(uint8_t *data, size_t size, const char *fmt, va_list ap)
{
size_t ofs;
PackValue val;
const char *p;
ofs = 0;
p = fmt;
while (*p) {
memset(&val, 0, sizeof(val));
switch (*p) {
case '*':
p++;
break;
case 'b':
case 'w':
case 'l':
val.ll = va_arg(ap, int);
break;
case 'q':
val.ll = va_arg(ap, int64_t);
break;
case 's':
val.str = va_arg(ap, void *);
break;
}
switch (*p++) {
case 'b':
if (data) {
stb_p(data + ofs, val.ll);
}
ofs++;
break;
case 'w':
if (data) {
stw_le_p(data + ofs, val.ll);
}
ofs += 2;
break;
case 'l':
if (data) {
stl_le_p(data + ofs, val.ll);
}
ofs += 4;
break;
case 'q':
if (data) {
stq_le_p(data + ofs, val.ll);
}
ofs += 8;
break;
case 's':
{
int cnt = atoi(p);
if (data) {
if (val.str) {
strncpy((void *)data + ofs, val.str, cnt);
} else {
memset((void *)data + ofs, 0, cnt);
}
}
ofs += cnt;
break;
}
}
}
return ofs;
}
static size_t vpack(uint8_t **p_data, const char *fmt, va_list ap1)
{
size_t size = 0;
uint8_t *data = NULL;
if (p_data) {
va_list ap2;
va_copy(ap2, ap1);
size = vfill(NULL, 0, fmt, ap2);
*p_data = data = g_malloc(size);
}
return vfill(data, size, fmt, ap1);
}
static size_t fill(uint8_t *data, size_t size, const char *fmt, ...)
{
va_list ap;
size_t ret;
va_start(ap, fmt);
ret = vfill(data, size, fmt, ap);
va_end(ap);
return ret;
}
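/* Illustration (a sketch, not used by the device code): each non-'*'
 * conversion consumes one argument and emits little-endian bytes, while
 * '*' emits a zero-filled field without consuming an argument.  So
 *
 *     uint8_t buf[4];
 *     fill(buf, sizeof(buf), "bw*b", 0x12, 0x3456);
 *
 * produces buf = { 0x12, 0x56, 0x34, 0x00 } and returns 4.
 */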
/* Functions to build the page header and fill in the length, always used
* through the macros.
*/
#define MPTSAS_CONFIG_PACK(number, type, version, fmt, ...) \
mptsas_config_pack(data, "b*bbb" fmt, version, number, type, \
## __VA_ARGS__)
static size_t mptsas_config_pack(uint8_t **data, const char *fmt, ...)
{
va_list ap;
size_t ret;
va_start(ap, fmt);
ret = vpack(data, fmt, ap);
va_end(ap);
if (data) {
assert(ret < 256 && (ret % 4) == 0);
stb_p(*data + 1, ret / 4);
}
return ret;
}
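/* A sketch of the header that the "b*bbb" prefix lays down (the standard
 * MPI config page header): byte 0 = PageVersion, byte 1 = PageLength in
 * 32-bit words (patched in above once the total size is known),
 * byte 2 = PageNumber, byte 3 = PageType.
 */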
#define MPTSAS_CONFIG_PACK_EXT(number, type, version, fmt, ...) \
mptsas_config_pack_ext(data, "b*bbb*wb*b" fmt, version, number, \
MPI_CONFIG_PAGETYPE_EXTENDED, type, ## __VA_ARGS__)
static size_t mptsas_config_pack_ext(uint8_t **data, const char *fmt, ...)
{
va_list ap;
size_t ret;
va_start(ap, fmt);
ret = vpack(data, fmt, ap);
va_end(ap);
if (data) {
assert(ret < 65536 && (ret % 4) == 0);
stw_le_p(*data + 4, ret / 4);
}
return ret;
}
/* Manufacturing pages */
static
size_t mptsas_config_manufacturing_0(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(0, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"s16s8s16s16s16",
"QEMU MPT Fusion",
"2.5",
"QEMU MPT Fusion",
"QEMU",
"0000111122223333");
}
static
size_t mptsas_config_manufacturing_1(MPTSASState *s, uint8_t **data, int address)
{
/* VPD - all zeros */
return MPTSAS_CONFIG_PACK(1, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"s256");
}
static
size_t mptsas_config_manufacturing_2(MPTSASState *s, uint8_t **data, int address)
{
PCIDeviceClass *pcic = PCI_DEVICE_GET_CLASS(s);
return MPTSAS_CONFIG_PACK(2, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"wb*b*l",
pcic->device_id, pcic->revision);
}
static
size_t mptsas_config_manufacturing_3(MPTSASState *s, uint8_t **data, int address)
{
PCIDeviceClass *pcic = PCI_DEVICE_GET_CLASS(s);
return MPTSAS_CONFIG_PACK(3, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"wb*b*l",
pcic->device_id, pcic->revision);
}
static
size_t mptsas_config_manufacturing_4(MPTSASState *s, uint8_t **data, int address)
{
/* All zeros */
return MPTSAS_CONFIG_PACK(4, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x05,
"*l*b*b*b*b*b*b*w*s56*l*l*l*l*l*l"
"*b*b*w*b*b*w*l*l");
}
static
size_t mptsas_config_manufacturing_5(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(5, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x02,
"q*b*b*w*l*l", s->sas_addr);
}
static
size_t mptsas_config_manufacturing_6(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(6, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"*l");
}
static
size_t mptsas_config_manufacturing_7(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(7, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"*l*l*l*s16*b*b*w", MPTSAS_NUM_PORTS);
}
static
size_t mptsas_config_manufacturing_8(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(8, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"*l");
}
static
size_t mptsas_config_manufacturing_9(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(9, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"*l");
}
static
size_t mptsas_config_manufacturing_10(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(10, MPI_CONFIG_PAGETYPE_MANUFACTURING, 0x00,
"*l");
}
/* I/O unit pages */
static
size_t mptsas_config_io_unit_0(MPTSASState *s, uint8_t **data, int address)
{
PCIDevice *pci = PCI_DEVICE(s);
uint64_t unique_value = 0x53504D554D4551LL; /* "QEMUMPSx" */
unique_value |= (uint64_t)pci->devfn << 56;
return MPTSAS_CONFIG_PACK(0, MPI_CONFIG_PAGETYPE_IO_UNIT, 0x00,
"q", unique_value);
}
static
size_t mptsas_config_io_unit_1(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(1, MPI_CONFIG_PAGETYPE_IO_UNIT, 0x02, "l",
0x41 /* single function, RAID disabled */ );
}
static
size_t mptsas_config_io_unit_2(MPTSASState *s, uint8_t **data, int address)
{
PCIDevice *pci = PCI_DEVICE(s);
uint8_t devfn = pci->devfn;
return MPTSAS_CONFIG_PACK(2, MPI_CONFIG_PAGETYPE_IO_UNIT, 0x02,
"llbbw*b*b*w*b*b*w*b*b*w*l",
0, 0x100, 0 /* pci bus? */, devfn, 0);
}
static
size_t mptsas_config_io_unit_3(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(3, MPI_CONFIG_PAGETYPE_IO_UNIT, 0x01,
"*b*b*w*l");
}
static
size_t mptsas_config_io_unit_4(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(4, MPI_CONFIG_PAGETYPE_IO_UNIT, 0x00, "*l*l*q");
}
/* I/O controller pages */
static
size_t mptsas_config_ioc_0(MPTSASState *s, uint8_t **data, int address)
{
PCIDeviceClass *pcic = PCI_DEVICE_GET_CLASS(s);
return MPTSAS_CONFIG_PACK(0, MPI_CONFIG_PAGETYPE_IOC, 0x01,
"*l*lwwb*b*b*blww",
pcic->vendor_id, pcic->device_id, pcic->revision,
pcic->subsystem_vendor_id,
pcic->subsystem_id);
}
static
size_t mptsas_config_ioc_1(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(1, MPI_CONFIG_PAGETYPE_IOC, 0x03,
"*l*l*b*b*b*b");
}
static
size_t mptsas_config_ioc_2(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(2, MPI_CONFIG_PAGETYPE_IOC, 0x04,
"*l*b*b*b*b");
}
static
size_t mptsas_config_ioc_3(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(3, MPI_CONFIG_PAGETYPE_IOC, 0x00,
"*b*b*w");
}
static
size_t mptsas_config_ioc_4(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(4, MPI_CONFIG_PAGETYPE_IOC, 0x00,
"*b*b*w");
}
static
size_t mptsas_config_ioc_5(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(5, MPI_CONFIG_PAGETYPE_IOC, 0x00,
"*l*b*b*w");
}
static
size_t mptsas_config_ioc_6(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK(6, MPI_CONFIG_PAGETYPE_IOC, 0x01,
"*l*b*b*b*b*b*b*b*b*b*b*w*l*l*l*l*b*b*w"
"*w*w*w*w*l*l*l");
}
/* SAS I/O unit pages (extended) */
#define MPTSAS_CONFIG_SAS_IO_UNIT_0_SIZE 16
#define MPI_SAS_IOUNIT0_RATE_FAILED_SPEED_NEGOTIATION 0x02
#define MPI_SAS_IOUNIT0_RATE_1_5 0x08
#define MPI_SAS_IOUNIT0_RATE_3_0 0x09
#define MPI_SAS_DEVICE_INFO_NO_DEVICE 0x00000000
#define MPI_SAS_DEVICE_INFO_END_DEVICE 0x00000001
#define MPI_SAS_DEVICE_INFO_SSP_TARGET 0x00000400
#define MPI_SAS_DEVICE0_ASTATUS_NO_ERRORS 0x00
#define MPI_SAS_DEVICE0_FLAGS_DEVICE_PRESENT 0x0001
#define MPI_SAS_DEVICE0_FLAGS_DEVICE_MAPPED 0x0002
#define MPI_SAS_DEVICE0_FLAGS_MAPPING_PERSISTENT 0x0004
static SCSIDevice *mptsas_phy_get_device(MPTSASState *s, int i,
int *phy_handle, int *dev_handle)
{
SCSIDevice *d = scsi_device_find(&s->bus, 0, i, 0);
if (phy_handle) {
*phy_handle = i + 1;
}
if (dev_handle) {
*dev_handle = d ? i + 1 + MPTSAS_NUM_PORTS : 0;
}
return d;
}
static
size_t mptsas_config_sas_io_unit_0(MPTSASState *s, uint8_t **data, int address)
{
size_t size = MPTSAS_CONFIG_PACK_EXT(0, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT, 0x04,
"*w*wb*b*w"
repl(MPTSAS_NUM_PORTS, "*s16"),
MPTSAS_NUM_PORTS);
if (data) {
size_t ofs = size - MPTSAS_NUM_PORTS * MPTSAS_CONFIG_SAS_IO_UNIT_0_SIZE;
int i;
for (i = 0; i < MPTSAS_NUM_PORTS; i++) {
int phy_handle, dev_handle;
SCSIDevice *dev = mptsas_phy_get_device(s, i, &phy_handle, &dev_handle);
fill(*data + ofs, MPTSAS_CONFIG_SAS_IO_UNIT_0_SIZE,
"bbbblwwl", i, 0, 0,
(dev
? MPI_SAS_IOUNIT0_RATE_3_0
: MPI_SAS_IOUNIT0_RATE_FAILED_SPEED_NEGOTIATION),
(dev
? MPI_SAS_DEVICE_INFO_END_DEVICE | MPI_SAS_DEVICE_INFO_SSP_TARGET
: MPI_SAS_DEVICE_INFO_NO_DEVICE),
dev_handle,
dev_handle,
0);
ofs += MPTSAS_CONFIG_SAS_IO_UNIT_0_SIZE;
}
assert(ofs == size);
}
return size;
}
#define MPTSAS_CONFIG_SAS_IO_UNIT_1_SIZE 12
static
size_t mptsas_config_sas_io_unit_1(MPTSASState *s, uint8_t **data, int address)
{
size_t size = MPTSAS_CONFIG_PACK_EXT(1, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT, 0x07,
"*w*w*w*wb*b*b*b"
repl(MPTSAS_NUM_PORTS, "*s12"),
MPTSAS_NUM_PORTS);
if (data) {
size_t ofs = size - MPTSAS_NUM_PORTS * MPTSAS_CONFIG_SAS_IO_UNIT_1_SIZE;
int i;
for (i = 0; i < MPTSAS_NUM_PORTS; i++) {
SCSIDevice *dev = mptsas_phy_get_device(s, i, NULL, NULL);
fill(*data + ofs, MPTSAS_CONFIG_SAS_IO_UNIT_1_SIZE,
"bbbblww", i, 0, 0,
(MPI_SAS_IOUNIT0_RATE_3_0 << 4) | MPI_SAS_IOUNIT0_RATE_1_5,
(dev
? MPI_SAS_DEVICE_INFO_END_DEVICE | MPI_SAS_DEVICE_INFO_SSP_TARGET
: MPI_SAS_DEVICE_INFO_NO_DEVICE),
0, 0);
ofs += MPTSAS_CONFIG_SAS_IO_UNIT_1_SIZE;
}
assert(ofs == size);
}
return size;
}
static
size_t mptsas_config_sas_io_unit_2(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK_EXT(2, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT, 0x06,
"*b*b*w*w*w*b*b*w");
}
static
size_t mptsas_config_sas_io_unit_3(MPTSASState *s, uint8_t **data, int address)
{
return MPTSAS_CONFIG_PACK_EXT(3, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT, 0x06,
"*l*l*l*l*l*l*l*l*l");
}
/* SAS PHY pages (extended) */
static int mptsas_phy_addr_get(MPTSASState *s, int address)
{
int i;
if ((address >> MPI_SAS_PHY_PGAD_FORM_SHIFT) == 0) {
i = address & 255;
} else if ((address >> MPI_SAS_PHY_PGAD_FORM_SHIFT) == 1) {
i = address & 65535;
} else {
return -EINVAL;
}
if (i >= MPTSAS_NUM_PORTS) {
return -EINVAL;
}
return i;
}
static
size_t mptsas_config_phy_0(MPTSASState *s, uint8_t **data, int address)
{
int phy_handle = -1;
int dev_handle = -1;
int i = mptsas_phy_addr_get(s, address);
SCSIDevice *dev;
if (i < 0) {
trace_mptsas_config_sas_phy(s, address, i, phy_handle, dev_handle, 0);
return i;
}
dev = mptsas_phy_get_device(s, i, &phy_handle, &dev_handle);
trace_mptsas_config_sas_phy(s, address, i, phy_handle, dev_handle, 0);
return MPTSAS_CONFIG_PACK_EXT(0, MPI_CONFIG_EXTPAGETYPE_SAS_PHY, 0x01,
"w*wqwb*blbb*b*b*l",
dev_handle, s->sas_addr, dev_handle, i,
(dev
? MPI_SAS_DEVICE_INFO_END_DEVICE /* | MPI_SAS_DEVICE_INFO_SSP_TARGET?? */
: MPI_SAS_DEVICE_INFO_NO_DEVICE),
(MPI_SAS_IOUNIT0_RATE_3_0 << 4) | MPI_SAS_IOUNIT0_RATE_1_5,
(MPI_SAS_IOUNIT0_RATE_3_0 << 4) | MPI_SAS_IOUNIT0_RATE_1_5);
}
static
size_t mptsas_config_phy_1(MPTSASState *s, uint8_t **data, int address)
{
int phy_handle = -1;
int dev_handle = -1;
int i = mptsas_phy_addr_get(s, address);
if (i < 0) {
trace_mptsas_config_sas_phy(s, address, i, phy_handle, dev_handle, 1);
return i;
}
(void) mptsas_phy_get_device(s, i, &phy_handle, &dev_handle);
trace_mptsas_config_sas_phy(s, address, i, phy_handle, dev_handle, 1);
return MPTSAS_CONFIG_PACK_EXT(1, MPI_CONFIG_EXTPAGETYPE_SAS_PHY, 0x01,
"*l*l*l*l*l");
}
/* SAS device pages (extended) */
static int mptsas_device_addr_get(MPTSASState *s, int address)
{
uint32_t handle, i;
uint32_t form = address >> MPI_SAS_PHY_PGAD_FORM_SHIFT;
if (form == MPI_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE) {
handle = address & MPI_SAS_DEVICE_PGAD_GNH_HANDLE_MASK;
do {
if (handle == 65535) {
handle = MPTSAS_NUM_PORTS + 1;
} else {
++handle;
}
i = handle - 1 - MPTSAS_NUM_PORTS;
} while (i < MPTSAS_NUM_PORTS && !scsi_device_find(&s->bus, 0, i, 0));
} else if (form == MPI_SAS_DEVICE_PGAD_FORM_BUS_TARGET_ID) {
if (address & MPI_SAS_DEVICE_PGAD_BT_BUS_MASK) {
return -EINVAL;
}
i = address & MPI_SAS_DEVICE_PGAD_BT_TID_MASK;
} else if (form == MPI_SAS_DEVICE_PGAD_FORM_HANDLE) {
handle = address & MPI_SAS_DEVICE_PGAD_H_HANDLE_MASK;
i = handle - 1 - MPTSAS_NUM_PORTS;
} else {
return -EINVAL;
}
if (i >= MPTSAS_NUM_PORTS) {
return -EINVAL;
}
return i;
}
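/* Worked example of the handle arithmetic (a sketch): with
 * MPTSAS_NUM_PORTS == 8, mptsas_phy_get_device() above assigns PHY handles
 * 1..8 and attached-device handles 9..16, so a HANDLE-form PageAddress
 * carrying handle 9 resolves to SCSI target i == 0.
 */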
static
size_t mptsas_config_sas_device_0(MPTSASState *s, uint8_t **data, int address)
{
int phy_handle = -1;
int dev_handle = -1;
int i = mptsas_device_addr_get(s, address);
SCSIDevice *dev = mptsas_phy_get_device(s, i, &phy_handle, &dev_handle);
trace_mptsas_config_sas_device(s, address, i, phy_handle, dev_handle, 0);
if (!dev) {
return -ENOENT;
}
return MPTSAS_CONFIG_PACK_EXT(0, MPI_CONFIG_EXTPAGETYPE_SAS_DEVICE, 0x05,
"*w*wqwbbwbblwb*b",
dev->wwn, phy_handle, i,
MPI_SAS_DEVICE0_ASTATUS_NO_ERRORS,
dev_handle, i, 0,
MPI_SAS_DEVICE_INFO_END_DEVICE | MPI_SAS_DEVICE_INFO_SSP_TARGET,
(MPI_SAS_DEVICE0_FLAGS_DEVICE_PRESENT |
MPI_SAS_DEVICE0_FLAGS_DEVICE_MAPPED |
MPI_SAS_DEVICE0_FLAGS_MAPPING_PERSISTENT), i);
}
static
size_t mptsas_config_sas_device_1(MPTSASState *s, uint8_t **data, int address)
{
int phy_handle = -1;
int dev_handle = -1;
int i = mptsas_device_addr_get(s, address);
SCSIDevice *dev = mptsas_phy_get_device(s, i, &phy_handle, &dev_handle);
trace_mptsas_config_sas_device(s, address, i, phy_handle, dev_handle, 1);
if (!dev) {
return -ENOENT;
}
return MPTSAS_CONFIG_PACK_EXT(1, MPI_CONFIG_EXTPAGETYPE_SAS_DEVICE, 0x00,
"*lq*lwbb*s20",
dev->wwn, dev_handle, i, 0);
}
static
size_t mptsas_config_sas_device_2(MPTSASState *s, uint8_t **data, int address)
{
int phy_handle = -1;
int dev_handle = -1;
int i = mptsas_device_addr_get(s, address);
SCSIDevice *dev = mptsas_phy_get_device(s, i, &phy_handle, &dev_handle);
trace_mptsas_config_sas_device(s, address, i, phy_handle, dev_handle, 2);
if (!dev) {
return -ENOENT;
}
return MPTSAS_CONFIG_PACK_EXT(2, MPI_CONFIG_EXTPAGETYPE_SAS_DEVICE, 0x01,
"ql", dev->wwn, 0);
}
typedef struct MPTSASConfigPage {
uint8_t number;
uint8_t type;
size_t (*mpt_config_build)(MPTSASState *s, uint8_t **data, int address);
} MPTSASConfigPage;
static const MPTSASConfigPage mptsas_config_pages[] = {
{
0, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_0,
}, {
1, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_1,
}, {
2, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_2,
}, {
3, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_3,
}, {
4, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_4,
}, {
5, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_5,
}, {
6, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_6,
}, {
7, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_7,
}, {
8, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_8,
}, {
9, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_9,
}, {
10, MPI_CONFIG_PAGETYPE_MANUFACTURING,
mptsas_config_manufacturing_10,
}, {
0, MPI_CONFIG_PAGETYPE_IO_UNIT,
mptsas_config_io_unit_0,
}, {
1, MPI_CONFIG_PAGETYPE_IO_UNIT,
mptsas_config_io_unit_1,
}, {
2, MPI_CONFIG_PAGETYPE_IO_UNIT,
mptsas_config_io_unit_2,
}, {
3, MPI_CONFIG_PAGETYPE_IO_UNIT,
mptsas_config_io_unit_3,
}, {
4, MPI_CONFIG_PAGETYPE_IO_UNIT,
mptsas_config_io_unit_4,
}, {
0, MPI_CONFIG_PAGETYPE_IOC,
mptsas_config_ioc_0,
}, {
1, MPI_CONFIG_PAGETYPE_IOC,
mptsas_config_ioc_1,
}, {
2, MPI_CONFIG_PAGETYPE_IOC,
mptsas_config_ioc_2,
}, {
3, MPI_CONFIG_PAGETYPE_IOC,
mptsas_config_ioc_3,
}, {
4, MPI_CONFIG_PAGETYPE_IOC,
mptsas_config_ioc_4,
}, {
5, MPI_CONFIG_PAGETYPE_IOC,
mptsas_config_ioc_5,
}, {
6, MPI_CONFIG_PAGETYPE_IOC,
mptsas_config_ioc_6,
}, {
0, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT,
mptsas_config_sas_io_unit_0,
}, {
1, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT,
mptsas_config_sas_io_unit_1,
}, {
2, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT,
mptsas_config_sas_io_unit_2,
}, {
3, MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT,
mptsas_config_sas_io_unit_3,
}, {
0, MPI_CONFIG_EXTPAGETYPE_SAS_PHY,
mptsas_config_phy_0,
}, {
1, MPI_CONFIG_EXTPAGETYPE_SAS_PHY,
mptsas_config_phy_1,
}, {
0, MPI_CONFIG_EXTPAGETYPE_SAS_DEVICE,
mptsas_config_sas_device_0,
}, {
1, MPI_CONFIG_EXTPAGETYPE_SAS_DEVICE,
mptsas_config_sas_device_1,
}, {
2, MPI_CONFIG_EXTPAGETYPE_SAS_DEVICE,
mptsas_config_sas_device_2,
}
};
static const MPTSASConfigPage *mptsas_find_config_page(int type, int number)
{
const MPTSASConfigPage *page;
int i;
for (i = 0; i < ARRAY_SIZE(mptsas_config_pages); i++) {
page = &mptsas_config_pages[i];
if (page->type == type && page->number == number) {
return page;
}
}
return NULL;
}
void mptsas_process_config(MPTSASState *s, MPIMsgConfig *req)
{
PCIDevice *pci = PCI_DEVICE(s);
MPIMsgConfigReply reply;
const MPTSASConfigPage *page;
size_t length;
uint8_t type;
uint8_t *data = NULL;
uint32_t flags_and_length;
uint32_t dmalen;
uint64_t pa;
mptsas_fix_config_endianness(req);
QEMU_BUILD_BUG_ON(sizeof(s->doorbell_msg) < sizeof(*req));
QEMU_BUILD_BUG_ON(sizeof(s->doorbell_reply) < sizeof(reply));
/* Copy common bits from the request into the reply. */
memset(&reply, 0, sizeof(reply));
reply.Action = req->Action;
reply.Function = req->Function;
reply.MsgContext = req->MsgContext;
reply.MsgLength = sizeof(reply) / 4;
reply.PageType = req->PageType;
reply.PageNumber = req->PageNumber;
reply.PageLength = req->PageLength;
reply.PageVersion = req->PageVersion;
type = req->PageType & MPI_CONFIG_PAGETYPE_MASK;
if (type == MPI_CONFIG_PAGETYPE_EXTENDED) {
type = req->ExtPageType;
if (type <= MPI_CONFIG_PAGETYPE_MASK) {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_TYPE;
goto out;
}
reply.ExtPageType = req->ExtPageType;
}
page = mptsas_find_config_page(type, req->PageNumber);
switch(req->Action) {
case MPI_CONFIG_ACTION_PAGE_DEFAULT:
case MPI_CONFIG_ACTION_PAGE_HEADER:
case MPI_CONFIG_ACTION_PAGE_READ_NVRAM:
case MPI_CONFIG_ACTION_PAGE_READ_CURRENT:
case MPI_CONFIG_ACTION_PAGE_READ_DEFAULT:
case MPI_CONFIG_ACTION_PAGE_WRITE_CURRENT:
case MPI_CONFIG_ACTION_PAGE_WRITE_NVRAM:
break;
default:
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_ACTION;
goto out;
}
if (!page) {
page = mptsas_find_config_page(type, 1);
if (page) {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_PAGE;
} else {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_TYPE;
}
goto out;
}
if (req->Action == MPI_CONFIG_ACTION_PAGE_DEFAULT ||
req->Action == MPI_CONFIG_ACTION_PAGE_HEADER) {
length = page->mpt_config_build(s, NULL, req->PageAddress);
if ((ssize_t)length < 0) {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_PAGE;
goto out;
} else {
goto done;
}
}
if (req->Action == MPI_CONFIG_ACTION_PAGE_WRITE_CURRENT ||
req->Action == MPI_CONFIG_ACTION_PAGE_WRITE_NVRAM) {
length = page->mpt_config_build(s, NULL, req->PageAddress);
if ((ssize_t)length < 0) {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_PAGE;
} else {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_CANT_COMMIT;
}
goto out;
}
flags_and_length = req->PageBufferSGE.FlagsLength;
dmalen = flags_and_length & MPI_SGE_LENGTH_MASK;
if (dmalen == 0) {
length = page->mpt_config_build(s, NULL, req->PageAddress);
if ((ssize_t)length < 0) {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_PAGE;
goto out;
} else {
goto done;
}
}
if (flags_and_length & MPI_SGE_FLAGS_64_BIT_ADDRESSING) {
pa = req->PageBufferSGE.u.Address64;
} else {
pa = req->PageBufferSGE.u.Address32;
}
/* Only read actions left. */
length = page->mpt_config_build(s, &data, req->PageAddress);
if ((ssize_t)length < 0) {
reply.IOCStatus = MPI_IOCSTATUS_CONFIG_INVALID_PAGE;
goto out;
} else {
assert(data[2] == page->number);
pci_dma_write(pci, pa, data, MIN(length, dmalen));
goto done;
}
abort();
done:
if (type > MPI_CONFIG_PAGETYPE_MASK) {
reply.ExtPageLength = length / 4;
reply.ExtPageType = req->ExtPageType;
} else {
reply.PageLength = length / 4;
}
out:
mptsas_fix_config_reply_endianness(&reply);
mptsas_reply(s, (MPIDefaultReply *)&reply);
g_free(data);
}

hw/scsi/mptendian.c (new file, 204 lines)

@@ -0,0 +1,204 @@
/*
* QEMU LSI SAS1068 Host Bus Adapter emulation
* Endianness conversion for MPI data structures
*
* Copyright (c) 2016 Red Hat, Inc.
*
* Authors: Paolo Bonzini <pbonzini@redhat.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "hw/pci/pci.h"
#include "sysemu/dma.h"
#include "sysemu/block-backend.h"
#include "hw/pci/msi.h"
#include "qemu/iov.h"
#include "hw/scsi/scsi.h"
#include "block/scsi.h"
#include "trace.h"
#include "mptsas.h"
#include "mpi.h"
static void mptsas_fix_sgentry_endianness(MPISGEntry *sge)
{
le32_to_cpus(&sge->FlagsLength);
if (sge->FlagsLength & MPI_SGE_FLAGS_64_BIT_ADDRESSING) {
le64_to_cpus(&sge->u.Address64);
} else {
le32_to_cpus(&sge->u.Address32);
}
}
static void mptsas_fix_sgentry_endianness_reply(MPISGEntry *sge)
{
if (sge->FlagsLength & MPI_SGE_FLAGS_64_BIT_ADDRESSING) {
cpu_to_le64s(&sge->u.Address64);
} else {
cpu_to_le32s(&sge->u.Address32);
}
cpu_to_le32s(&sge->FlagsLength);
}
void mptsas_fix_scsi_io_endianness(MPIMsgSCSIIORequest *req)
{
le32_to_cpus(&req->MsgContext);
le32_to_cpus(&req->Control);
le32_to_cpus(&req->DataLength);
le32_to_cpus(&req->SenseBufferLowAddr);
}
void mptsas_fix_scsi_io_reply_endianness(MPIMsgSCSIIOReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le32s(&reply->TransferCount);
cpu_to_le32s(&reply->SenseCount);
cpu_to_le32s(&reply->ResponseInfo);
cpu_to_le16s(&reply->TaskTag);
}
void mptsas_fix_scsi_task_mgmt_endianness(MPIMsgSCSITaskMgmt *req)
{
le32_to_cpus(&req->MsgContext);
le32_to_cpus(&req->TaskMsgContext);
}
void mptsas_fix_scsi_task_mgmt_reply_endianness(MPIMsgSCSITaskMgmtReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le32s(&reply->TerminationCount);
}
void mptsas_fix_ioc_init_endianness(MPIMsgIOCInit *req)
{
le32_to_cpus(&req->MsgContext);
le16_to_cpus(&req->ReplyFrameSize);
le32_to_cpus(&req->HostMfaHighAddr);
le32_to_cpus(&req->SenseBufferHighAddr);
le32_to_cpus(&req->ReplyFifoHostSignalingAddr);
mptsas_fix_sgentry_endianness(&req->HostPageBufferSGE);
le16_to_cpus(&req->MsgVersion);
le16_to_cpus(&req->HeaderVersion);
}
void mptsas_fix_ioc_init_reply_endianness(MPIMsgIOCInitReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
}
void mptsas_fix_ioc_facts_endianness(MPIMsgIOCFacts *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_ioc_facts_reply_endianness(MPIMsgIOCFactsReply *reply)
{
cpu_to_le16s(&reply->MsgVersion);
cpu_to_le16s(&reply->HeaderVersion);
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCExceptions);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le16s(&reply->ReplyQueueDepth);
cpu_to_le16s(&reply->RequestFrameSize);
cpu_to_le16s(&reply->ProductID);
cpu_to_le32s(&reply->CurrentHostMfaHighAddr);
cpu_to_le16s(&reply->GlobalCredits);
cpu_to_le32s(&reply->CurrentSenseBufferHighAddr);
cpu_to_le16s(&reply->CurReplyFrameSize);
cpu_to_le32s(&reply->FWImageSize);
cpu_to_le32s(&reply->IOCCapabilities);
cpu_to_le16s(&reply->HighPriorityQueueDepth);
mptsas_fix_sgentry_endianness_reply(&reply->HostPageBufferSGE);
cpu_to_le32s(&reply->ReplyFifoHostSignalingAddr);
}
void mptsas_fix_config_endianness(MPIMsgConfig *req)
{
le16_to_cpus(&req->ExtPageLength);
le32_to_cpus(&req->MsgContext);
le32_to_cpus(&req->PageAddress);
mptsas_fix_sgentry_endianness(&req->PageBufferSGE);
}
void mptsas_fix_config_reply_endianness(MPIMsgConfigReply *reply)
{
cpu_to_le16s(&reply->ExtPageLength);
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
}
void mptsas_fix_port_facts_endianness(MPIMsgPortFacts *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_port_facts_reply_endianness(MPIMsgPortFactsReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le16s(&reply->MaxDevices);
cpu_to_le16s(&reply->PortSCSIID);
cpu_to_le16s(&reply->ProtocolFlags);
cpu_to_le16s(&reply->MaxPostedCmdBuffers);
cpu_to_le16s(&reply->MaxPersistentIDs);
cpu_to_le16s(&reply->MaxLanBuckets);
}
void mptsas_fix_port_enable_endianness(MPIMsgPortEnable *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_port_enable_reply_endianness(MPIMsgPortEnableReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
}
void mptsas_fix_event_notification_endianness(MPIMsgEventNotify *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_event_notification_reply_endianness(MPIMsgEventNotifyReply *reply)
{
int length = reply->EventDataLength;
int i;
cpu_to_le16s(&reply->EventDataLength);
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le32s(&reply->Event);
cpu_to_le32s(&reply->EventContext);
/* Really depends on the event kind. This will do for now. */
for (i = 0; i < length; i++) {
cpu_to_le32s(&reply->Data[i]);
}
}

hw/scsi/mptsas.c (new file, 1441 lines; diff suppressed because it is too large)

hw/scsi/mptsas.h (new file, 100 lines)

@@ -0,0 +1,100 @@
#ifndef MPTSAS_H
#define MPTSAS_H
#include "mpi.h"
#define MPTSAS_NUM_PORTS 8
#define MPTSAS_MAX_FRAMES 2048 /* Firmware limit at 65535 */
#define MPTSAS_REQUEST_QUEUE_DEPTH 128
#define MPTSAS_REPLY_QUEUE_DEPTH 128
#define MPTSAS_MAXIMUM_CHAIN_DEPTH 0x22
typedef struct MPTSASState MPTSASState;
typedef struct MPTSASRequest MPTSASRequest;
enum {
DOORBELL_NONE,
DOORBELL_WRITE,
DOORBELL_READ
};
struct MPTSASState {
PCIDevice dev;
MemoryRegion mmio_io;
MemoryRegion port_io;
MemoryRegion diag_io;
QEMUBH *request_bh;
uint32_t msi_available;
uint64_t sas_addr;
bool msi_in_use;
/* Doorbell register */
uint32_t state;
uint8_t who_init;
uint8_t doorbell_state;
/* Buffer for requests that are sent through the doorbell register. */
uint32_t doorbell_msg[256];
int doorbell_idx;
int doorbell_cnt;
uint16_t doorbell_reply[256];
int doorbell_reply_idx;
int doorbell_reply_size;
/* Other registers */
uint8_t diagnostic_idx;
uint32_t diagnostic;
uint32_t intr_mask;
uint32_t intr_status;
/* Request queues */
uint32_t request_post[MPTSAS_REQUEST_QUEUE_DEPTH + 1];
uint16_t request_post_head;
uint16_t request_post_tail;
uint32_t reply_post[MPTSAS_REPLY_QUEUE_DEPTH + 1];
uint16_t reply_post_head;
uint16_t reply_post_tail;
uint32_t reply_free[MPTSAS_REPLY_QUEUE_DEPTH + 1];
uint16_t reply_free_head;
uint16_t reply_free_tail;
/* IOC Facts */
hwaddr host_mfa_high_addr;
hwaddr sense_buffer_high_addr;
uint16_t max_devices;
uint16_t max_buses;
uint16_t reply_frame_size;
SCSIBus bus;
QTAILQ_HEAD(, MPTSASRequest) pending;
};
void mptsas_fix_scsi_io_endianness(MPIMsgSCSIIORequest *req);
void mptsas_fix_scsi_io_reply_endianness(MPIMsgSCSIIOReply *reply);
void mptsas_fix_scsi_task_mgmt_endianness(MPIMsgSCSITaskMgmt *req);
void mptsas_fix_scsi_task_mgmt_reply_endianness(MPIMsgSCSITaskMgmtReply *reply);
void mptsas_fix_ioc_init_endianness(MPIMsgIOCInit *req);
void mptsas_fix_ioc_init_reply_endianness(MPIMsgIOCInitReply *reply);
void mptsas_fix_ioc_facts_endianness(MPIMsgIOCFacts *req);
void mptsas_fix_ioc_facts_reply_endianness(MPIMsgIOCFactsReply *reply);
void mptsas_fix_config_endianness(MPIMsgConfig *req);
void mptsas_fix_config_reply_endianness(MPIMsgConfigReply *reply);
void mptsas_fix_port_facts_endianness(MPIMsgPortFacts *req);
void mptsas_fix_port_facts_reply_endianness(MPIMsgPortFactsReply *reply);
void mptsas_fix_port_enable_endianness(MPIMsgPortEnable *req);
void mptsas_fix_port_enable_reply_endianness(MPIMsgPortEnableReply *reply);
void mptsas_fix_event_notification_endianness(MPIMsgEventNotify *req);
void mptsas_fix_event_notification_reply_endianness(MPIMsgEventNotifyReply *reply);
void mptsas_reply(MPTSASState *s, MPIDefaultReply *reply);
void mptsas_process_config(MPTSASState *s, MPIMsgConfig *req);
#endif /* MPTSAS_H */

hw/scsi/scsi-disk.c

@@ -77,8 +77,6 @@ struct SCSIDiskState
bool media_changed;
bool media_event;
bool eject_request;
uint64_t wwn;
uint64_t port_wwn;
uint16_t port_index;
uint64_t max_unmap_size;
uint64_t max_io_size;
@@ -633,21 +631,21 @@ static int scsi_disk_emulate_inquiry(SCSIRequest *req, uint8_t *outbuf)
memcpy(outbuf+buflen, str, id_len);
buflen += id_len;
if (s->wwn) {
if (s->qdev.wwn) {
outbuf[buflen++] = 0x1; // Binary
outbuf[buflen++] = 0x3; // NAA
outbuf[buflen++] = 0; // reserved
outbuf[buflen++] = 8;
stq_be_p(&outbuf[buflen], s->wwn);
stq_be_p(&outbuf[buflen], s->qdev.wwn);
buflen += 8;
}
if (s->port_wwn) {
if (s->qdev.port_wwn) {
outbuf[buflen++] = 0x61; // SAS / Binary
outbuf[buflen++] = 0x93; // PIV / Target port / NAA
outbuf[buflen++] = 0; // reserved
outbuf[buflen++] = 8;
stq_be_p(&outbuf[buflen], s->port_wwn);
stq_be_p(&outbuf[buflen], s->qdev.port_wwn);
buflen += 8;
}
@@ -2575,6 +2573,7 @@ static void scsi_block_realize(SCSIDevice *dev, Error **errp)
s->features |= (1 << SCSI_DISK_F_NO_REMOVABLE_DEVOPS);
scsi_realize(&s->qdev, errp);
scsi_generic_read_device_identification(&s->qdev);
}
static bool scsi_block_is_passthrough(SCSIDiskState *s, uint8_t *buf)
@@ -2668,8 +2667,8 @@ static Property scsi_hd_properties[] = {
SCSI_DISK_F_REMOVABLE, false),
DEFINE_PROP_BIT("dpofua", SCSIDiskState, features,
SCSI_DISK_F_DPOFUA, false),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, port_wwn, 0),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, qdev.wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, qdev.port_wwn, 0),
DEFINE_PROP_UINT16("port_index", SCSIDiskState, port_index, 0),
DEFINE_PROP_UINT64("max_unmap_size", SCSIDiskState, max_unmap_size,
DEFAULT_MAX_UNMAP_SIZE),
@@ -2718,8 +2717,8 @@ static const TypeInfo scsi_hd_info = {
static Property scsi_cd_properties[] = {
DEFINE_SCSI_DISK_PROPERTIES(),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, port_wwn, 0),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, qdev.wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, qdev.port_wwn, 0),
DEFINE_PROP_UINT16("port_index", SCSIDiskState, port_index, 0),
DEFINE_PROP_UINT64("max_io_size", SCSIDiskState, max_io_size,
DEFAULT_MAX_IO_SIZE),
@@ -2783,8 +2782,8 @@ static Property scsi_disk_properties[] = {
SCSI_DISK_F_REMOVABLE, false),
DEFINE_PROP_BIT("dpofua", SCSIDiskState, features,
SCSI_DISK_F_DPOFUA, false),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, port_wwn, 0),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, qdev.wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, qdev.port_wwn, 0),
DEFINE_PROP_UINT16("port_index", SCSIDiskState, port_index, 0),
DEFINE_PROP_UINT64("max_unmap_size", SCSIDiskState, max_unmap_size,
DEFAULT_MAX_UNMAP_SIZE),

hw/scsi/scsi-generic.c

@@ -355,6 +355,96 @@ static int32_t scsi_send_command(SCSIRequest *req, uint8_t *cmd)
}
}
static int read_naa_id(const uint8_t *p, uint64_t *p_wwn)
{
int i;
if ((p[1] & 0xF) == 3) {
/* NAA designator type */
if (p[3] != 8) {
return -EINVAL;
}
*p_wwn = ldq_be_p(p + 4);
return 0;
}
if ((p[1] & 0xF) == 8) {
/* SCSI name string designator type */
if (p[3] < 20 || memcmp(&p[4], "naa.", 4)) {
return -EINVAL;
}
if (p[3] > 20 && p[24] != ',') {
return -EINVAL;
}
*p_wwn = 0;
for (i = 8; i < 24; i++) {
char c = toupper(p[i]);
c -= (c >= '0' && c <= '9' ? '0' : 'A' - 10);
*p_wwn = (*p_wwn << 4) | c;
}
return 0;
}
return -EINVAL;
}
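/* Example with hypothetical data: the binary NAA designation descriptor
 *
 *     01 03 00 08 50 00 c5 00 12 34 56 78
 *
 * has designator type 3 (NAA) and length 8, so read_naa_id() stores
 * 0x5000c50012345678 in *p_wwn and returns 0.
 */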
void scsi_generic_read_device_identification(SCSIDevice *s)
{
uint8_t cmd[6];
uint8_t buf[250];
uint8_t sensebuf[8];
sg_io_hdr_t io_header;
int ret;
int i, len;
memset(cmd, 0, sizeof(cmd));
memset(buf, 0, sizeof(buf));
cmd[0] = INQUIRY;
cmd[1] = 1;
cmd[2] = 0x83;
cmd[4] = sizeof(buf);
memset(&io_header, 0, sizeof(io_header));
io_header.interface_id = 'S';
io_header.dxfer_direction = SG_DXFER_FROM_DEV;
io_header.dxfer_len = sizeof(buf);
io_header.dxferp = buf;
io_header.cmdp = cmd;
io_header.cmd_len = sizeof(cmd);
io_header.mx_sb_len = sizeof(sensebuf);
io_header.sbp = sensebuf;
io_header.timeout = 6000; /* XXX */
ret = blk_ioctl(s->conf.blk, SG_IO, &io_header);
if (ret < 0 || io_header.driver_status || io_header.host_status) {
return;
}
len = MIN((buf[2] << 8) | buf[3], sizeof(buf) - 4);
for (i = 0; i + 3 <= len; ) {
const uint8_t *p = &buf[i + 4];
uint64_t wwn;
if (i + (p[3] + 4) > len) {
break;
}
if ((p[1] & 0x10) == 0) {
/* Associated with the logical unit */
if (read_naa_id(p, &wwn) == 0) {
s->wwn = wwn;
}
} else if ((p[1] & 0x10) == 0x10) {
/* Associated with the target port */
if (read_naa_id(p, &wwn) == 0) {
s->port_wwn = wwn;
}
}
i += p[3] + 4;
}
}
static int get_stream_blocksize(BlockBackend *blk)
{
uint8_t cmd[6];
@@ -458,6 +548,8 @@ static void scsi_generic_realize(SCSIDevice *s, Error **errp)
}
DPRINTF("block size %d\n", s->blocksize);
scsi_generic_read_device_identification(s);
}
const SCSIReqOps scsi_generic_req_ops = {

include/exec/ram_addr.h

@@ -49,13 +49,43 @@ static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
return (char *)block->host + offset;
}
/* The dirty memory bitmap is split into fixed-size blocks to allow growth
* under RCU. The bitmap for a block can be accessed as follows:
*
* rcu_read_lock();
*
* DirtyMemoryBlocks *blocks =
* atomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
*
* ram_addr_t idx = (addr >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
* unsigned long *block = blocks->blocks[idx];
* ...access block bitmap...
*
* rcu_read_unlock();
*
* Remember to check for the end of the block when accessing a range of
* addresses. Move on to the next block if you reach the end.
*
* Organization into blocks allows dirty memory to grow (but not shrink) under
* RCU. When adding new RAMBlocks requires the dirty memory to grow, a new
* DirtyMemoryBlocks array is allocated with pointers to existing blocks kept
* the same. Other threads can safely access existing blocks while dirty
* memory is being grown. When no threads are using the old DirtyMemoryBlocks
* anymore it is freed by RCU (but the underlying blocks stay because they are
* pointed to from the new DirtyMemoryBlocks).
*/
#define DIRTY_MEMORY_BLOCK_SIZE ((ram_addr_t)256 * 1024 * 8)
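/* Sizing sketch (assuming 4 KiB target pages): each block holds 2^21 page
 * bits, i.e. a 256 KiB bitmap covering 8 GiB of guest RAM.
 */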
typedef struct {
struct rcu_head rcu;
unsigned long *blocks[];
} DirtyMemoryBlocks;
typedef struct RAMList {
QemuMutex mutex;
/* Protected by the iothread lock. */
unsigned long *dirty_memory[DIRTY_MEMORY_NUM];
RAMBlock *mru_block;
/* RCU-enabled, writes protected by the ramlist lock. */
QLIST_HEAD(, RAMBlock) blocks;
DirtyMemoryBlocks *dirty_memory[DIRTY_MEMORY_NUM];
uint32_t version;
} RAMList;
extern RAMList ram_list;
@@ -89,30 +119,70 @@ static inline bool cpu_physical_memory_get_dirty(ram_addr_t start,
ram_addr_t length,
unsigned client)
{
unsigned long end, page, next;
DirtyMemoryBlocks *blocks;
unsigned long end, page;
bool dirty = false;
assert(client < DIRTY_MEMORY_NUM);
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
next = find_next_bit(ram_list.dirty_memory[client], end, page);
return next < end;
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
if (find_next_bit(blocks->blocks[idx], offset, num) < num) {
dirty = true;
break;
}
page += num;
}
rcu_read_unlock();
return dirty;
}
static inline bool cpu_physical_memory_all_dirty(ram_addr_t start,
ram_addr_t length,
unsigned client)
{
unsigned long end, page, next;
DirtyMemoryBlocks *blocks;
unsigned long end, page;
bool dirty = true;
assert(client < DIRTY_MEMORY_NUM);
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
next = find_next_zero_bit(ram_list.dirty_memory[client], end, page);
return next >= end;
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
if (find_next_zero_bit(blocks->blocks[idx], offset, num) < num) {
dirty = false;
break;
}
page += num;
}
rcu_read_unlock();
return dirty;
}
static inline bool cpu_physical_memory_get_dirty_flag(ram_addr_t addr,
@@ -154,28 +224,68 @@ static inline uint8_t cpu_physical_memory_range_includes_clean(ram_addr_t start,
static inline void cpu_physical_memory_set_dirty_flag(ram_addr_t addr,
unsigned client)
{
unsigned long page, idx, offset;
DirtyMemoryBlocks *blocks;
assert(client < DIRTY_MEMORY_NUM);
set_bit_atomic(addr >> TARGET_PAGE_BITS, ram_list.dirty_memory[client]);
page = addr >> TARGET_PAGE_BITS;
idx = page / DIRTY_MEMORY_BLOCK_SIZE;
offset = page % DIRTY_MEMORY_BLOCK_SIZE;
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
set_bit_atomic(offset, blocks->blocks[idx]);
rcu_read_unlock();
}
static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
ram_addr_t length,
uint8_t mask)
{
DirtyMemoryBlocks *blocks[DIRTY_MEMORY_NUM];
unsigned long end, page;
unsigned long **d = ram_list.dirty_memory;
int i;
if (!mask && !xen_enabled()) {
return;
}
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
rcu_read_lock();
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i]);
}
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
bitmap_set_atomic(d[DIRTY_MEMORY_MIGRATION], page, end - page);
bitmap_set_atomic(blocks[DIRTY_MEMORY_MIGRATION]->blocks[idx],
offset, num);
}
if (unlikely(mask & (1 << DIRTY_MEMORY_VGA))) {
bitmap_set_atomic(d[DIRTY_MEMORY_VGA], page, end - page);
bitmap_set_atomic(blocks[DIRTY_MEMORY_VGA]->blocks[idx],
offset, num);
}
if (unlikely(mask & (1 << DIRTY_MEMORY_CODE))) {
bitmap_set_atomic(d[DIRTY_MEMORY_CODE], page, end - page);
bitmap_set_atomic(blocks[DIRTY_MEMORY_CODE]->blocks[idx],
offset, num);
}
page += num;
}
rcu_read_unlock();
xen_modified_memory(start, length);
}
@@ -195,21 +305,41 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
/* start address is aligned at the start of a word? */
if ((((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) &&
(hpratio == 1)) {
unsigned long **blocks[DIRTY_MEMORY_NUM];
unsigned long idx;
unsigned long offset;
long k;
long nr = BITS_TO_LONGS(pages);
idx = (start >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
offset = BIT_WORD((start >> TARGET_PAGE_BITS) %
DIRTY_MEMORY_BLOCK_SIZE);
rcu_read_lock();
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i])->blocks;
}
for (k = 0; k < nr; k++) {
if (bitmap[k]) {
unsigned long temp = leul_to_cpu(bitmap[k]);
unsigned long **d = ram_list.dirty_memory;
atomic_or(&d[DIRTY_MEMORY_MIGRATION][page + k], temp);
atomic_or(&d[DIRTY_MEMORY_VGA][page + k], temp);
atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset], temp);
atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
if (tcg_enabled()) {
atomic_or(&d[DIRTY_MEMORY_CODE][page + k], temp);
atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset], temp);
}
}
if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
offset = 0;
idx++;
}
}
rcu_read_unlock();
xen_modified_memory(start, pages << TARGET_PAGE_BITS);
} else {
uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
@@ -261,18 +391,33 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(unsigned long *dest,
if (((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) {
int k;
int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
unsigned long *src = ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION];
unsigned long * const *src;
unsigned long idx = (page * BITS_PER_LONG) / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = BIT_WORD((page * BITS_PER_LONG) %
DIRTY_MEMORY_BLOCK_SIZE);
rcu_read_lock();
src = atomic_rcu_read(
&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION])->blocks;
for (k = page; k < page + nr; k++) {
if (src[k]) {
unsigned long bits = atomic_xchg(&src[k], 0);
if (src[idx][offset]) {
unsigned long bits = atomic_xchg(&src[idx][offset], 0);
unsigned long new_dirty;
new_dirty = ~dest[k];
dest[k] |= bits;
new_dirty &= bits;
num_dirty += ctpopl(new_dirty);
}
if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
offset = 0;
idx++;
}
}
rcu_read_unlock();
} else {
for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
if (cpu_physical_memory_test_and_clear_dirty(

include/hw/pci/pci_ids.h

@@ -64,6 +64,7 @@
#define PCI_VENDOR_ID_LSI_LOGIC 0x1000
#define PCI_DEVICE_ID_LSI_53C810 0x0001
#define PCI_DEVICE_ID_LSI_53C895A 0x0012
#define PCI_DEVICE_ID_LSI_SAS1068 0x0054
#define PCI_DEVICE_ID_LSI_SAS1078 0x0060
#define PCI_DEVICE_ID_LSI_SAS0079 0x0079

include/hw/scsi/scsi.h

@@ -108,6 +108,8 @@ struct SCSIDevice
int blocksize;
int type;
uint64_t max_lba;
uint64_t wwn;
uint64_t port_wwn;
};
extern const VMStateDescription vmstate_scsi_device;
@@ -271,6 +273,7 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense);
void scsi_device_set_ua(SCSIDevice *sdev, SCSISense sense);
void scsi_device_report_change(SCSIDevice *dev, SCSISense sense);
void scsi_device_unit_attention_reported(SCSIDevice *dev);
void scsi_generic_read_device_identification(SCSIDevice *dev);
int scsi_device_get_sense(SCSIDevice *dev, uint8_t *buf, int len, bool fixed);
SCSIDevice *scsi_device_find(SCSIBus *bus, int channel, int target, int lun);

include/qemu/atomic.h

@@ -8,6 +8,8 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
* See docs/atomics.txt for discussion about the guarantees each
* atomic primitive is meant to provide.
*/
#ifndef __QEMU_ATOMIC_H
@@ -15,12 +17,130 @@
#include "qemu/compiler.h"
/* For C11 atomic ops */
/* Compiler barrier */
#define barrier() ({ asm volatile("" ::: "memory"); (void)0; })
#ifndef __ATOMIC_RELAXED
#ifdef __ATOMIC_RELAXED
/* For C11 atomic ops */
/* Manual memory barriers
*
*__atomic_thread_fence does not include a compiler barrier; instead,
* the barrier is part of __atomic_load/__atomic_store's "volatile-like"
* semantics. If smp_wmb() is a no-op, absence of the barrier means that
* the compiler is free to reorder stores on each side of the barrier.
* Add one here, and similarly in smp_rmb() and smp_read_barrier_depends().
*/
#define smp_mb() ({ barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); barrier(); })
#define smp_wmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
#define smp_rmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); barrier(); })
#define smp_read_barrier_depends() ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); barrier(); })
/* Weak atomic operations prevent the compiler moving other
* loads/stores past the atomic operation load/store. However there is
* no explicit memory barrier for the processor.
*/
#define atomic_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_RELAXED); \
_val; \
})
#define atomic_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_RELAXED); \
} while(0)
/* Atomic RCU operations imply weak memory barriers */
#define atomic_rcu_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_CONSUME); \
_val; \
})
#define atomic_rcu_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_RELEASE); \
} while(0)
/* atomic_mb_read/set semantics map to Java volatile variables. They are
* less expensive on some platforms (notably POWER & ARMv7) than fully
* sequentially consistent operations.
*
* As long as they are used as paired operations they are safe to
* use. See docs/atomics.txt for more discussion.
*/
#if defined(_ARCH_PPC)
#define atomic_mb_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_RELAXED); \
smp_rmb(); \
_val; \
})
#define atomic_mb_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
smp_wmb(); \
__atomic_store(ptr, &_val, __ATOMIC_RELAXED); \
smp_mb(); \
} while(0)
#else
#define atomic_mb_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_SEQ_CST); \
_val; \
})
#define atomic_mb_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_SEQ_CST); \
} while(0)
#endif
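/* An illustrative sketch (not from this commit) of the "paired
 * operations" rule above, in the classic store-buffering shape: with
 * atomic_mb_set()/atomic_mb_read() it is impossible for both threads
 * to read 0. thread_a()/thread_b() are hypothetical names.
 */
static int x, y;

static int thread_a(void)
{
    atomic_mb_set(&x, 1);
    return atomic_mb_read(&y);
}

static int thread_b(void)
{
    atomic_mb_set(&y, 1);
    return atomic_mb_read(&x);
}
/* At least one of the two calls returns 1. */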
/* All the remaining operations are fully sequentially consistent */
#define atomic_xchg(ptr, i) ({ \
typeof(*ptr) _new = (i), _old; \
__atomic_exchange(ptr, &_new, &_old, __ATOMIC_SEQ_CST); \
_old; \
})
/* Returns the eventual value, failed or not */
#define atomic_cmpxchg(ptr, old, new) \
({ \
typeof(*ptr) _old = (old), _new = (new); \
__atomic_compare_exchange(ptr, &_old, &_new, false, \
__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); \
_old; \
})
/* Provide shorter names for GCC atomic builtins, return old value */
#define atomic_fetch_inc(ptr) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST)
#define atomic_fetch_dec(ptr) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST)
#define atomic_fetch_add(ptr, n) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)
#define atomic_fetch_sub(ptr, n) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST)
#define atomic_fetch_and(ptr, n) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST)
#define atomic_fetch_or(ptr, n) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST)
/* And even shorter names that return void. */
#define atomic_inc(ptr) ((void) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST))
#define atomic_dec(ptr) ((void) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST))
#define atomic_add(ptr, n) ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST))
#define atomic_sub(ptr, n) ((void) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST))
#define atomic_and(ptr, n) ((void) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST))
#define atomic_or(ptr, n) ((void) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST))
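/* An illustrative compare-and-swap loop (not from this commit) on top
 * of the sequentially consistent atomic_cmpxchg() above, which returns
 * the value it found: retry until no other thread raced with us.
 * counter/saturating_inc are hypothetical names, not QEMU APIs.
 */
static unsigned int counter;

static void saturating_inc(unsigned int limit)
{
    unsigned int old, new;

    do {
        old = atomic_read(&counter);
        if (old == limit) {
            return;             /* already at the cap, nothing to do */
        }
        new = old + 1;
    } while (atomic_cmpxchg(&counter, old, new) != old);
}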
#else /* __ATOMIC_RELAXED */
/*
* We use GCC builtin if it's available, as that can use mfence on
@ -85,8 +205,6 @@
#endif /* _ARCH_PPC */
#endif /* C11 atomics */
/*
* For (host) platforms we don't have explicit barrier definitions
* for, we use the gcc __sync_synchronize() primitive to generate a
@ -98,42 +216,22 @@
#endif
#ifndef smp_wmb
#ifdef __ATOMIC_RELEASE
/* __atomic_thread_fence does not include a compiler barrier; instead,
* the barrier is part of __atomic_load/__atomic_store's "volatile-like"
* semantics. If smp_wmb() is a no-op, absence of the barrier means that
* the compiler is free to reorder stores on each side of the barrier.
* Add one here, and similarly in smp_rmb() and smp_read_barrier_depends().
*/
#define smp_wmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
#else
#define smp_wmb() __sync_synchronize()
#endif
#endif
#ifndef smp_rmb
#ifdef __ATOMIC_ACQUIRE
#define smp_rmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); barrier(); })
#else
#define smp_rmb() __sync_synchronize()
#endif
#endif
#ifndef smp_read_barrier_depends
#ifdef __ATOMIC_CONSUME
#define smp_read_barrier_depends() ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); barrier(); })
#else
#define smp_read_barrier_depends() barrier()
#endif
#endif
#ifndef atomic_read
/* These will only be atomic if the processor does the fetch or store
* in a single issue memory operation
*/
#define atomic_read(ptr) (*(__typeof__(*ptr) volatile*) (ptr))
#endif
#ifndef atomic_set
#define atomic_set(ptr, i) ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))
#endif
/**
* atomic_rcu_read - reads a RCU-protected pointer to a local variable
@ -146,30 +244,18 @@
* Inserts memory barriers on architectures that require them (currently only
* Alpha) and documents which pointers are protected by RCU.
*
* Unless the __ATOMIC_CONSUME memory order is available, atomic_rcu_read also
* includes a compiler barrier to ensure that value-speculative optimizations
* (e.g. VSS: Value Speculation Scheduling) does not perform the data read
* before the pointer read by speculating the value of the pointer. On new
* enough compilers, atomic_load takes care of such concern about
* dependency-breaking optimizations.
* atomic_rcu_read also includes a compiler barrier to ensure that
* value-speculative optimizations (e.g. VSS: Value Speculation
* Scheduling) do not perform the data read before the pointer read
* by speculating the value of the pointer.
*
* Should match atomic_rcu_set(), atomic_xchg(), atomic_cmpxchg().
*/
#ifndef atomic_rcu_read
#ifdef __ATOMIC_CONSUME
#define atomic_rcu_read(ptr) ({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_CONSUME); \
_val; \
})
#else
#define atomic_rcu_read(ptr) ({ \
typeof(*ptr) _val = atomic_read(ptr); \
smp_read_barrier_depends(); \
_val; \
})
#endif
#endif
/**
* atomic_rcu_set - assigns (publicizes) a pointer to a new data structure
@ -182,19 +268,10 @@
*
* Should match atomic_rcu_read().
*/
#ifndef atomic_rcu_set
#ifdef __ATOMIC_RELEASE
#define atomic_rcu_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_RELEASE); \
} while(0)
#else
#define atomic_rcu_set(ptr, i) do { \
smp_wmb(); \
atomic_set(ptr, i); \
} while (0)
#endif
#endif
/* These have the same semantics as Java volatile variables.
* See http://gee.cs.oswego.edu/dl/jmm/cookbook.html:
@ -218,13 +295,11 @@
* (see docs/atomics.txt), and I'm not sure that __ATOMIC_ACQ_REL is enough.
* Just always use the barriers manually by the rules above.
*/
#ifndef atomic_mb_read
#define atomic_mb_read(ptr) ({ \
typeof(*ptr) _val = atomic_read(ptr); \
smp_rmb(); \
_val; \
})
#endif
#ifndef atomic_mb_set
#define atomic_mb_set(ptr, i) do { \
@ -237,12 +312,6 @@
#ifndef atomic_xchg
#if defined(__clang__)
#define atomic_xchg(ptr, i) __sync_swap(ptr, i)
#elif defined(__ATOMIC_SEQ_CST)
#define atomic_xchg(ptr, i) ({ \
typeof(*ptr) _new = (i), _old; \
__atomic_exchange(ptr, &_new, &_old, __ATOMIC_SEQ_CST); \
_old; \
})
#else
/* __sync_lock_test_and_set() is documented to be an acquire barrier only. */
#define atomic_xchg(ptr, i) (smp_mb(), __sync_lock_test_and_set(ptr, i))
@ -266,4 +335,5 @@
#define atomic_and(ptr, n) ((void) __sync_fetch_and_and(ptr, n))
#define atomic_or(ptr, n) ((void) __sync_fetch_and_or(ptr, n))
#endif
#endif /* __ATOMIC_RELAXED */
#endif /* __QEMU_ATOMIC_H */

View File

@ -258,7 +258,7 @@ int qio_channel_socket_dgram_sync(QIOChannelSocket *ioc,
int fd;
trace_qio_channel_socket_dgram_sync(ioc, localAddr, remoteAddr);
fd = socket_dgram(localAddr, remoteAddr, errp);
fd = socket_dgram(remoteAddr, localAddr, errp);
if (fd < 0) {
trace_qio_channel_socket_dgram_fail(ioc);
return -1;

View File

@ -2361,7 +2361,7 @@ int kvm_set_one_reg(CPUState *cs, uint64_t id, void *source)
reg.addr = (uintptr_t) source;
r = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
if (r) {
trace_kvm_failed_reg_set(id, strerror(r));
trace_kvm_failed_reg_set(id, strerror(-r));
}
return r;
}
@ -2375,7 +2375,7 @@ int kvm_get_one_reg(CPUState *cs, uint64_t id, void *target)
reg.addr = (uintptr_t) target;
r = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
if (r) {
trace_kvm_failed_reg_get(id, strerror(r));
trace_kvm_failed_reg_get(id, strerror(-r));
}
return r;
}
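/* An illustrative sketch (not from this commit) of the sign convention
 * behind the strerror() fix above: QEMU's kvm ioctl wrappers return
 * -errno on failure, so the value must be negated before strerror().
 * fake_vcpu_ioctl() is a hypothetical stand-in.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static int fake_vcpu_ioctl(void)
{
    return -EINVAL;     /* failure: negative errno, kvm_vcpu_ioctl style */
}

int main(void)
{
    int r = fake_vcpu_ioctl();
    if (r) {
        /* strerror(r) on a negative value yields "Unknown error";
         * strerror(-r) recovers "Invalid argument". */
        printf("kvm failed: %s\n", strerror(-r));
    }
    return 0;
}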

View File

@ -609,7 +609,6 @@ static void migration_bitmap_sync_init(void)
iterations_prev = 0;
}
/* Called with iothread lock held, to protect ram_list.dirty_memory[] */
static void migration_bitmap_sync(void)
{
RAMBlock *block;
@ -1921,8 +1920,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
acct_clear();
}
/* iothread lock needed for ram_list.dirty_memory[] */
qemu_mutex_lock_iothread();
qemu_mutex_lock_ramlist();
rcu_read_lock();
bytes_transferred = 0;
@ -1947,7 +1944,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
memory_global_dirty_log_start();
migration_bitmap_sync();
qemu_mutex_unlock_ramlist();
qemu_mutex_unlock_iothread();
qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);

View File

@ -417,12 +417,12 @@ static coroutine_fn int nbd_negotiate(NBDClientNewData *data)
memcpy(buf, "NBDMAGIC", 8);
if (client->exp) {
assert ((client->exp->nbdflags & ~65535) == 0);
cpu_to_be64w((uint64_t*)(buf + 8), NBD_CLIENT_MAGIC);
cpu_to_be64w((uint64_t*)(buf + 16), client->exp->size);
cpu_to_be16w((uint16_t*)(buf + 26), client->exp->nbdflags | myflags);
stq_be_p(buf + 8, NBD_CLIENT_MAGIC);
stq_be_p(buf + 16, client->exp->size);
stw_be_p(buf + 26, client->exp->nbdflags | myflags);
} else {
cpu_to_be64w((uint64_t*)(buf + 8), NBD_OPTS_MAGIC);
cpu_to_be16w((uint16_t *)(buf + 16), NBD_FLAG_FIXED_NEWSTYLE);
stq_be_p(buf + 8, NBD_OPTS_MAGIC);
stw_be_p(buf + 16, NBD_FLAG_FIXED_NEWSTYLE);
}
if (client->exp) {
@ -442,8 +442,8 @@ static coroutine_fn int nbd_negotiate(NBDClientNewData *data)
}
assert ((client->exp->nbdflags & ~65535) == 0);
cpu_to_be64w((uint64_t*)(buf + 18), client->exp->size);
cpu_to_be16w((uint16_t*)(buf + 26), client->exp->nbdflags | myflags);
stq_be_p(buf + 18, client->exp->size);
stw_be_p(buf + 26, client->exp->nbdflags | myflags);
if (nbd_negotiate_write(csock, buf + 18,
sizeof(buf) - 18) != sizeof(buf) - 18) {
LOG("write failed");
@ -528,9 +528,9 @@ static ssize_t nbd_send_reply(int csock, struct nbd_reply *reply)
[ 4 .. 7] error (0 == no error)
[ 7 .. 15] handle
*/
cpu_to_be32w((uint32_t*)buf, NBD_REPLY_MAGIC);
cpu_to_be32w((uint32_t*)(buf + 4), reply->error);
cpu_to_be64w((uint64_t*)(buf + 8), reply->handle);
stl_be_p(buf, NBD_REPLY_MAGIC);
stl_be_p(buf + 4, reply->error);
stq_be_p(buf + 8, reply->handle);
TRACE("Sending response to client");
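/* An illustrative sketch (not from this commit) of what the st*_be_p()
 * helpers do: store a value big-endian one byte at a time, so odd
 * offsets into a char buffer (buf + 26 above) need no aligned-pointer
 * cast. stl_be_sketch() is a hypothetical name.
 */
#include <stdint.h>

static void stl_be_sketch(void *ptr, uint32_t v)
{
    uint8_t *p = ptr;

    p[0] = v >> 24;     /* most significant byte first */
    p[1] = v >> 16;
    p[2] = v >> 8;
    p[3] = v;
}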

View File

@ -1171,6 +1171,7 @@ typedef struct {
int connected;
guint timer_tag;
guint open_tag;
int slave_fd;
} PtyCharDriver;
static void pty_chr_update_read_handler_locked(CharDriverState *chr);
@ -1347,6 +1348,7 @@ static void pty_chr_close(struct CharDriverState *chr)
qemu_mutex_lock(&chr->chr_write_lock);
pty_chr_state(chr, 0);
close(s->slave_fd);
object_unref(OBJECT(s->ioc));
if (s->timer_tag) {
g_source_remove(s->timer_tag);
@ -1374,7 +1376,6 @@ static CharDriverState *qemu_chr_open_pty(const char *id,
return NULL;
}
close(slave_fd);
qemu_set_nonblock(master_fd);
chr = qemu_chr_alloc(common, errp);
@ -1399,6 +1400,7 @@ static CharDriverState *qemu_chr_open_pty(const char *id,
chr->explicit_be_open = true;
s->ioc = QIO_CHANNEL(qio_channel_file_new_fd(master_fd));
s->slave_fd = slave_fd;
s->timer_tag = 0;
return chr;
@ -2856,6 +2858,10 @@ static void tcp_chr_update_read_handler(CharDriverState *chr)
{
TCPCharDriver *s = chr->opaque;
if (!s->connected) {
return;
}
remove_fd_in_watch(chr);
if (s->ioc) {
chr->fd_in_tag = io_add_watch_poll(s->ioc,
@ -4380,7 +4386,7 @@ static CharDriverState *qmp_chardev_open_udp(const char *id,
QIOChannelSocket *sioc = qio_channel_socket_new();
if (qio_channel_socket_dgram_sync(sioc,
udp->remote, udp->local,
udp->local, udp->remote,
errp) < 0) {
object_unref(OBJECT(sioc));
return NULL;

View File

@ -1,68 +1,78 @@
@example
@c man begin SYNOPSIS
usage: qemu-nbd [OPTION]... @var{filename}
@command{qemu-nbd} [OPTION]... @var{filename}
@command{qemu-nbd} @option{-d} @var{dev}
@c man end
@end example
@c man begin DESCRIPTION
Export QEMU disk image using NBD protocol.
Export a QEMU disk image using the NBD protocol.
@c man end
@c man begin OPTIONS
@var{filename} is a disk image filename.
@var{dev} is an NBD device.
@table @option
@item @var{filename}
is a disk image filename
@item -p, --port=@var{port}
port to listen on (default @samp{10809})
The TCP port to listen on (default @samp{10809})
@item -o, --offset=@var{offset}
offset into the image
The offset into the image
@item -b, --bind=@var{iface}
interface to bind to (default @samp{0.0.0.0})
The interface to bind to (default @samp{0.0.0.0})
@item -k, --socket=@var{path}
Use a unix socket with path @var{path}
@item -f, --format=@var{format}
Set image format as @var{format}
@item -f, --format=@var{fmt}
Force the use of the block driver for format @var{fmt} instead of
auto-detecting
@item -r, --read-only
export read-only
Export the disk as read-only
@item -P, --partition=@var{num}
only expose partition @var{num}
Only expose partition @var{num}
@item -s, --snapshot
use @var{filename} as an external snapshot, create a temporary
Use @var{filename} as an external snapshot, create a temporary
file with backing_file=@var{filename}, redirect the write to
the temporary one
@item -l, --load-snapshot=@var{snapshot_param}
load an internal snapshot inside @var{filename} and export it
Load an internal snapshot inside @var{filename} and export it
as a read-only device, @var{snapshot_param} format is
'snapshot.id=[ID],snapshot.name=[NAME]' or '[ID_OR_NAME]'
@item -n, --nocache
@itemx --cache=@var{cache}
set cache mode to be used with the file. See the documentation of
The cache mode to be used with the file. See the documentation of
the emulator's @code{-drive cache=...} option for allowed values.
@item --aio=@var{aio}
choose asynchronous I/O mode between @samp{threads} (the default)
Set the asynchronous I/O mode between @samp{threads} (the default)
and @samp{native} (Linux only).
@item --discard=@var{discard}
toggles whether @dfn{discard} (also known as @dfn{trim} or @dfn{unmap})
requests are ignored or passed to the filesystem. The default is no
(@samp{--discard=ignore}).
Control whether @dfn{discard} (also known as @dfn{trim} or @dfn{unmap})
requests are ignored or passed to the filesystem. @var{discard} is one of
@samp{ignore} (or @samp{off}), @samp{unmap} (or @samp{on}). The default is
@samp{ignore}.
@item --detect-zeroes=@var{detect-zeroes}
Control the automatic conversion of plain zero writes by the OS to
driver-specific optimized zero write commands. @var{detect-zeroes} is one of
@samp{off}, @samp{on} or @samp{unmap}. @samp{unmap}
converts a zero write to an unmap operation and can only be used if
@var{discard} is set to @samp{unmap}. The default is @samp{off}.
@item -c, --connect=@var{dev}
connect @var{filename} to NBD device @var{dev}
Connect @var{filename} to NBD device @var{dev}
@item -d, --disconnect
disconnect the specified device
Disconnect the device @var{dev}
@item -e, --shared=@var{num}
device can be shared by @var{num} clients (default @samp{1})
@item -f, --format=@var{fmt}
force block driver for format @var{fmt} instead of auto-detecting
Allow up to @var{num} clients to share the device (default @samp{1})
@item -t, --persistent
don't exit on the last connection
Don't exit on the last connection
@item -v, --verbose
display extra debugging information
Display extra debugging information
@item -h, --help
display this help and exit
Display this help and exit
@item -V, --version
output version information and exit
Display version information and exit
@end table
@c man end
@ -79,7 +89,7 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
@c man end
@c man begin SEEALSO
qemu-img(1)
qemu(1), qemu-img(1)
@c man end
@end ignore

View File

@ -636,7 +636,7 @@ sub get_maintainers {
if ($email) {
if (! $interactive) {
$email_git_fallback = 0 if @email_to > 0 || @list_to > 0 || $email_git || $email_git_blame;
$email_git_fallback = 0 if @email_to > 0 || $email_git || $email_git_blame;
if ($email_git_fallback) {
print STDERR "get_maintainer.pl: No maintainers found, printing recent contributors.\n";
print STDERR "get_maintainer.pl: Do not blindly cc: them on patches! Use common sense.\n";

View File

@ -22,6 +22,7 @@ import resource
import struct
import re
from collections import defaultdict
from time import sleep
VMX_EXIT_REASONS = {
'EXCEPTION_NMI': 0,
@ -778,7 +779,7 @@ def get_providers(options):
return providers
def check_access():
def check_access(options):
if not os.path.exists('/sys/kernel/debug'):
sys.stderr.write('Please enable CONFIG_DEBUG_FS in your kernel.')
sys.exit(1)
@ -790,14 +791,24 @@ def check_access():
"Also ensure, that the kvm modules are loaded.\n")
sys.exit(1)
if not os.path.exists(PATH_DEBUGFS_TRACING):
sys.stderr.write("Please make {0} readable by the current user.\n"
.format(PATH_DEBUGFS_TRACING))
if not os.path.exists(PATH_DEBUGFS_TRACING) and (options.tracepoints
or not options.debugfs):
sys.stderr.write("Please enable CONFIG_TRACING in your kernel "
"when using the option -t (default).\n"
"If it is enabled, make {0} readable by the "
"current user.\n")
if options.tracepoints:
sys.exit(1)
sys.stderr.write("Falling back to debugfs statistics!\n"
options.debugfs = True
sleep(5)
return options
def main():
check_access()
options = get_options()
options = check_access(options)
providers = get_providers(options)
stats = Stats(providers, fields=options.fields)

View File

@ -861,7 +861,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr,
/* Bits 20-13 provide bits 39-32 of the address, bit 21 is reserved.
* Leave bits 20-13 in place for setting accessed/dirty bits below.
*/
pte = pde | ((pde & 0x1fe000) << (32 - 13));
pte = pde | ((pde & 0x1fe000LL) << (32 - 13));
rsvd_mask = 0x200000;
goto do_check_protect_pse36;
}
@ -1056,7 +1056,7 @@ hwaddr x86_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
if (!(pde & PG_PRESENT_MASK))
return -1;
if ((pde & PG_PSE_MASK) && (env->cr[4] & CR4_PSE_MASK)) {
pte = pde | ((pde & 0x1fe000) << (32 - 13));
pte = pde | ((pde & 0x1fe000LL) << (32 - 13));
page_size = 4096 * 1024;
} else {
/* page directory entry */
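/* An illustrative sketch (not from this commit) of the PSE36 fix
 * above: pde is a 32-bit quantity here, so without the LL suffix the
 * mask-and-shift wraps in 32-bit arithmetic and address bits 39-32
 * are silently dropped.
 */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t pde = 0x1fe000;    /* bits 20-13 carry address bits 39-32 */

    uint64_t broken = pde | ((pde & 0x1fe000)   << (32 - 13)); /* shifts out */
    uint64_t fixed  = pde | ((pde & 0x1fe000LL) << (32 - 13)); /* 64-bit math */

    /* prints: broken 0x1fe000, fixed 0xff001fe000 */
    printf("broken 0x%" PRIx64 ", fixed 0x%" PRIx64 "\n", broken, fixed);
    return 0;
}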

View File

@ -44,10 +44,6 @@ DEF_HELPER_FLAGS_3(set_dr, TCG_CALL_NO_WG, void, env, int, tl)
DEF_HELPER_FLAGS_2(get_dr, TCG_CALL_NO_WG, tl, env, int)
DEF_HELPER_2(invlpg, void, env, tl)
DEF_HELPER_4(enter_level, void, env, int, int, tl)
#ifdef TARGET_X86_64
DEF_HELPER_4(enter64_level, void, env, int, int, tl)
#endif
DEF_HELPER_1(sysenter, void, env)
DEF_HELPER_2(sysexit, void, env, int)
#ifdef TARGET_X86_64

View File

@ -1379,80 +1379,6 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
return ret;
}
void helper_enter_level(CPUX86State *env, int level, int data32,
target_ulong t1)
{
target_ulong ssp;
uint32_t esp_mask, esp, ebp;
esp_mask = get_sp_mask(env->segs[R_SS].flags);
ssp = env->segs[R_SS].base;
ebp = env->regs[R_EBP];
esp = env->regs[R_ESP];
if (data32) {
/* 32 bit */
esp -= 4;
while (--level) {
esp -= 4;
ebp -= 4;
cpu_stl_data_ra(env, ssp + (esp & esp_mask),
cpu_ldl_data_ra(env, ssp + (ebp & esp_mask),
GETPC()),
GETPC());
}
esp -= 4;
cpu_stl_data_ra(env, ssp + (esp & esp_mask), t1, GETPC());
} else {
/* 16 bit */
esp -= 2;
while (--level) {
esp -= 2;
ebp -= 2;
cpu_stw_data_ra(env, ssp + (esp & esp_mask),
cpu_lduw_data_ra(env, ssp + (ebp & esp_mask),
GETPC()),
GETPC());
}
esp -= 2;
cpu_stw_data_ra(env, ssp + (esp & esp_mask), t1, GETPC());
}
}
#ifdef TARGET_X86_64
void helper_enter64_level(CPUX86State *env, int level, int data64,
target_ulong t1)
{
target_ulong esp, ebp;
ebp = env->regs[R_EBP];
esp = env->regs[R_ESP];
if (data64) {
/* 64 bit */
esp -= 8;
while (--level) {
esp -= 8;
ebp -= 8;
cpu_stq_data_ra(env, esp, cpu_ldq_data_ra(env, ebp, GETPC()),
GETPC());
}
esp -= 8;
cpu_stq_data_ra(env, esp, t1, GETPC());
} else {
/* 16 bit */
esp -= 2;
while (--level) {
esp -= 2;
ebp -= 2;
cpu_stw_data_ra(env, esp, cpu_lduw_data_ra(env, ebp, GETPC()),
GETPC());
}
esp -= 2;
cpu_stw_data_ra(env, esp, t1, GETPC());
}
}
#endif
void helper_lldt(CPUX86State *env, int selector)
{
SegmentCache *dt;

File diff suppressed because it is too large

View File

@ -726,6 +726,28 @@ lm32_uart_memory_write(uint32_t addr, uint32_t value) "addr 0x%08x value 0x%08x"
lm32_uart_memory_read(uint32_t addr, uint32_t value) "addr 0x%08x value 0x%08x"
lm32_uart_irq_state(int level) "irq state %d"
# hw/scsi/mptsas.c
mptsas_command_complete(void *dev, uint32_t ctx, uint32_t status, uint32_t resid) "dev %p context 0x%08x status %x resid %d"
mptsas_diag_read(void *dev, uint32_t addr, uint32_t val) "dev %p addr 0x%08x value 0x%08x"
mptsas_diag_write(void *dev, uint32_t addr, uint32_t val) "dev %p addr 0x%08x value 0x%08x"
mptsas_irq_intx(void *dev, int level) "dev %p level %d"
mptsas_irq_msi(void *dev) "dev %p "
mptsas_mmio_read(void *dev, uint32_t addr, uint32_t val) "dev %p addr 0x%08x value 0x%x"
mptsas_mmio_unhandled_read(void *dev, uint32_t addr) "dev %p addr 0x%08x"
mptsas_mmio_unhandled_write(void *dev, uint32_t addr, uint32_t val) "dev %p addr 0x%08x value 0x%x"
mptsas_mmio_write(void *dev, uint32_t addr, uint32_t val) "dev %p addr 0x%08x value 0x%x"
mptsas_process_message(void *dev, int msg, uint32_t ctx) "dev %p cmd %d context 0x%08x\n"
mptsas_process_scsi_io_request(void *dev, int bus, int target, int lun, uint64_t len) "dev %p dev %d:%d:%d length %"PRIu64""
mptsas_reset(void *dev) "dev %p "
mptsas_scsi_overflow(void *dev, uint32_t ctx, uint64_t req, uint64_t found) "dev %p context 0x%08x: %"PRIu64"/%"PRIu64""
mptsas_sgl_overflow(void *dev, uint32_t ctx, uint64_t req, uint64_t found) "dev %p context 0x%08x: %"PRIu64"/%"PRIu64""
mptsas_unhandled_cmd(void *dev, uint32_t ctx, uint8_t msg_cmd) "dev %p context 0x%08x: Unhandled cmd %x"
mptsas_unhandled_doorbell_cmd(void *dev, int cmd) "dev %p value 0x%08x"
# hw/scsi/mptconfig.c
mptsas_config_sas_device(void *dev, int address, int port, int phy_handle, int dev_handle, int page) "dev %p address %d (port %d, handles: phy %d dev %d) page %d"
mptsas_config_sas_phy(void *dev, int address, int port, int phy_handle, int dev_handle, int page) "dev %p address %d (port %d, handles: phy %d dev %d) page %d"
# hw/scsi/megasas.c
megasas_init_firmware(uint64_t pa) "pa %" PRIx64 " "
megasas_init_queue(uint64_t queue_pa, int queue_len, uint64_t head, uint64_t tail, uint32_t flags) "queue at %" PRIx64 " len %d head %" PRIx64 " tail %" PRIx64 " flags %x"