Current QEMU stalls guest VM boot under ACPI mode when the vCPU count
is >= 12. Analyzing the boot log, it turns out that the DSDT table can't
be loaded correctly due to "Invalid character(s) in name (0x62303043),
repaired: [C00*]". This is because existing QEMU uses lower case
characters in the AML IDs for CPU devices (e.g. C000, C001, ..., C00a,
C00b). The ACPI code inside the guest VM detects the lower case character
as invalid (see acpi_ut_valid_acpi_char() in
drivers/acpi/acpica/utstring.c) and converts it to "*". This causes
duplicated IDs (i.e. "C00a" ==> "C00*" and "C00b" ==> "C00*"), so ACPI
refuses to load the table.
This patch fixes the problem by changing the format to use an upper case
character. It matches the CPU ID formats used in other parts of the QEMU
code.
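A minimal, standalone illustration of the case difference (not the actual
QEMU code, which builds these names through its AML helpers):

    #include <stdio.h>

    int main(void)
    {
        char name[8];

        for (int cpu = 10; cpu < 12; cpu++) {
            snprintf(name, sizeof(name), "C%.03x", cpu);   /* old: "C00a", "C00b" */
            printf("lower case: %s   ", name);
            snprintf(name, sizeof(name), "C%.03X", cpu);   /* new: "C00A", "C00B" */
            printf("upper case: %s\n", name);
        }
        return 0;
    }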
Reported-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1472852809-23042-1-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If an alignment fault occurred and the target EL is using AArch32,
then DFSR/IFSR bit LPAE[9] must be set correctly.
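A minimal sketch of the idea, assuming the caller already knows whether the
target translation regime uses the long-descriptor (LPAE) format; the real
fix lives in QEMU's target-arm fault handling and uses its own helpers:

    #include <stdbool.h>
    #include <stdint.h>

    /* Encode an alignment fault in the AArch32 DFSR/IFSR layout selected by
     * the LPAE bit: the long-descriptor format keeps the status in bits [5:0],
     * the short-descriptor format uses FS = {bit[10], bits[3:0]}. */
    static uint32_t alignment_fault_fsr(bool using_lpae_format)
    {
        if (using_lpae_format) {
            return (1u << 9) | 0x21;   /* LPAE set, status 0b100001 */
        }
        return 0x1;                    /* LPAE clear, FS 0b00001 */
    }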
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-id: 1471283293-169850-1-git-send-email-afarallax@yandex.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The U-Boot in the previous release of the SDK was using a hardcoded
value for the memory size. This is not true anymore; the value is now
retrieved from the memory controller.
Below is a model for this device, only supporting unlock and
configuration. Without it, we end up running a guest with 64MB, which
is a bit low nowadays. It uses a 'silicon-rev' property and ram_size to
build a default value. Some bits should be linked to SCU strapping
registers, but that seems a bit complex to add for the current need.
The model is ready for the AST2500 SoC.
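A hedged sketch of the lock-then-configure behaviour described above; the
register offsets and the unlock key below are illustrative placeholders,
not the actual Aspeed SDMC programming model:

    #include <stdbool.h>
    #include <stdint.h>

    #define SDMC_PROT_KEY_REG  0x00        /* assumed: protection key register */
    #define SDMC_CONF_REG      0x04        /* assumed: configuration register  */
    #define SDMC_UNLOCK_MAGIC  0xdeadbeef  /* placeholder, not the real key    */

    typedef struct {
        bool unlocked;
        uint32_t conf;   /* default built from 'silicon-rev' and ram_size */
    } SDMCState;

    static void sdmc_write(SDMCState *s, uint32_t offset, uint32_t value)
    {
        if (offset == SDMC_PROT_KEY_REG) {
            s->unlocked = (value == SDMC_UNLOCK_MAGIC);
            return;
        }
        if (!s->unlocked) {
            return;                     /* writes are ignored while locked */
        }
        if (offset == SDMC_CONF_REG) {
            s->conf = value;
        }
    }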
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches
# gpg: Signature made Tue 06 Sep 2016 11:38:01 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (36 commits)
block: Allow node name for 'qemu-io' HMP command
qemu-iotests: Log QMP traffic in debug mode
block jobs: Improve error message for missing job ID
coroutine: Assert that no locks are held on termination
coroutine: Let CoMutex remember who holds it
qcow2: fix iovec size at qcow2_co_pwritev_compressed
test-coroutine: Fix coroutine pool corruption
qemu-iotests: add vmdk for test backup compression in 055
qemu-iotests: test backup compression in 055
blockdev-backup: added support for data compression
drive-backup: added support for data compression
block: simplify blockdev-backup
block: simplify drive-backup
block/io: turn on dirty_bitmaps for the compressed writes
block: remove BlockDriver.bdrv_write_compressed
qcow: cleanup qcow_co_pwritev_compressed to avoid the recursion
qcow: add qcow_co_pwritev_compressed
vmdk: add vmdk_co_pwritev_compressed
qcow2: cleanup qcow2_co_pwritev_compressed to avoid the recursion
qcow2: add qcow2_co_pwritev_compressed
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20160906-v2' into staging
First (big) chunk of s390x updates:
- cpumodel support for s390x
- various fixes and improvements
# gpg: Signature made Tue 06 Sep 2016 16:09:53 BST
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20160906-v2: (38 commits)
s390x/cpumodel: implement QMP interface "query-cpu-model-baseline"
s390x/cpumodel: implement QMP interface "query-cpu-model-comparison"
s390x/cpumodel: implement QMP interface "query-cpu-model-expansion"
qmp: add QMP interface "query-cpu-model-baseline"
qmp: add QMP interface "query-cpu-model-comparison"
qmp: add QMP interface "query-cpu-model-expansion"
s390x/kvm: don't enable key wrapping if msa3 is disabled
s390x/kvm: let the CPU model control CMM(A)
s390x/kvm: disable host model for problematic compat machines
s390x/kvm: implement CPU model support
s390x/kvm: allow runtime-instrumentation for "none" machine
s390x/sclp: propagate hmfai
s390x/sclp: propagate the mha via sclp
s390x/sclp: propagate the ibc val (lowest and unblocked ibc)
s390x/sclp: indicate sclp features
s390x/sclp: introduce sclp feature blocks
s390x/sclp: factor out preparation of cpu entries
s390x/cpumodel: check and apply the CPU model
s390x/cpumodel: let the CPU model handle feature checks
s390x/cpumodel: expose features and feature groups as properties
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Let's implement that interface by reusing our conversion code and
lookup code for CPU definitions.
In order to find a compatible CPU model, we first detect the maximum
possible CPU generation and then try to find the maximum model that satisfies
all base features (not exceeding the maximum generation).
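A simplified sketch of that lookup order; the type and field names are
illustrative, and a single 64-bit feature word stands in for the real
feature bitmaps:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        const char *name;
        uint16_t gen;          /* hardware generation */
        uint64_t base_feat;    /* base features of this definition */
    } CpuDef;

    static const CpuDef *baseline_lookup(const CpuDef *defs, size_t n,
                                         uint16_t max_gen, uint64_t base_feat)
    {
        const CpuDef *best = NULL;

        for (size_t i = 0; i < n; i++) {
            if (defs[i].gen > max_gen) {
                continue;                        /* exceeds the maximum generation */
            }
            if (defs[i].base_feat & ~base_feat) {
                continue;                        /* base features not satisfied */
            }
            if (!best || defs[i].gen > best->gen) {
                best = &defs[i];                 /* keep the maximum model */
            }
        }
        return best;
    }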
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-31-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's implement that interface by reusing our conversion code implemented
for expansion.
We use CPU generations and CPU features to calculate the result. This
means that a zEC12 cannot simply be converted into a z13 by stripping
off features. This is required, as other magic values (e.g. maximum
address sizes) belong to a CPU generation and cannot simply be
emulated by an older generation.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-30-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
In order to expand CPU models, we create temporary CPUs that handle the
feature/group parsing. Only CPU feature properties are expanded.
When converting the data structure back, we always fall back to the
static base CPU model, which is by definition migration-safe.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-29-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's provide a standardized interface to baseline two CPU models, to
create a third, compatible one. This is especially helpful when two
CPU models are not identical, but a CPU model is required that is
guaranteed to run under both configurations where the original models run.
"query-cpu-model-baseline" takes two CPU models and returns a third,
compatible model. The result will always be a static CPU model.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-28-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's provide a standardized interface to compare two CPU models.
"query-cpu-model-compare" takes two models and returns how they compare
in a specific configuration.
The result will give guarantees about runnability. E.g. if CPU model A
is a subset of CPU model B, model A is guaranteed to run in configurations
where model B runs, but not the other way around (it might or might not run).
Usually, CPU features or CPU generations are used to calculate the result.
If a model is not guaranteed to run in a certain environment (e.g.
incompatible), a compatible one can be created by "baselining" both models
(follow-up patch).
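An illustrative check behind the "subset implies runnable" guarantee
described above, simplified to a single 64-bit feature word rather than the
real feature bitmaps:

    #include <stdbool.h>
    #include <stdint.h>

    /* Model A is guaranteed to run wherever model B runs if A requests no
     * feature that B lacks. */
    static bool runs_wherever(uint64_t feat_a, uint64_t feat_b)
    {
        return (feat_a & ~feat_b) == 0;
    }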
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-27-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's provide a standardized interface to expand CPU models. This interface
can be used by tooling to get details about a specific CPU model in a
certain configuration, e.g. about the "host" model.
To take care of all architectures, two detail levels for an expansion
are introduced. Certain architectures might not support all detail levels.
While "full" will expand and indicate all relevant properties/features
of a CPU model, "static" expands to a static base CPU model, that will
never change between QEMU versions and therefore have the same features
when used under different compatibility machines.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-26-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
As the CPU model now controls msa3, trying to set wrapping keys without
msa3 being around/enabled in the kernel will produce misleading errors.
So let's simply not configure key wrapping if msa3 is not enabled and
make compat machines with a disabled CPU model work correctly.
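A minimal sketch of that guard, assuming the s390_has_feat() helper
introduced by this series; the actual call site in the KVM setup code may
look different:

    static void kvm_s390_setup_wrapping_keys_sketch(void)
    {
        if (!s390_has_feat(S390_FEAT_MSA_EXT_3)) {
            return;    /* no msa3: silently skip key wrapping configuration */
        }
        /* ... configure the AES/DEA wrapping keys via KVM ... */
    }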
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-25-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Starting with recent kernels, if the cmma attributes are available, we
actually have hardware support. Enabling CMMA then means providing the
guest VCPU with CMM, therefore enabling its CMM facility.
Let's not blindly enable CMM anymore but let's control it using CPU models.
For disabled CPU models, CMMA will continue to always get enabled.
Also enable it in the applicable default models.
Please note that CMM doesn't work with hugetlbfs; therefore, we will
warn the user and keep it disabled. Migrating from/to a hugetlbfs
configuration works, as it will be disabled on both sides.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-24-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Compatibility machines that touch runtime-instrumentation should not
be used with the CPU model. Otherwise the host model will look different,
depending on the machine QEMU has been started with.
So let's simply disable the host model for existing compatibility machines
that all disable ri. This, in turn, disables the CPU model for these
compat machines completely.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-23-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's implement our two hooks so we can support CPU models.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-22-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
To be able to query the correct host model for the "none" machine,
let's allow runtime-instrumentation for that machine.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-21-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The mha is provided in the CPU model, so get any CPU and extract the value.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-18-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
If we have a lowest ibc, we can indicate the ibc to the guest.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-17-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We have three different blocks in the SCLP read-SCP information response
that indicate sclp features. Let's prepare propagation.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-16-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The sclp "read cpu info" and "read scp info" commands can include
features for the cpu info and configuration characteristics (extended),
describing some advanced features available in the configuration.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-15-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's factor out the common code of "read cpu info" and "read scp
info". This will make the introduction of new cpu entry fields easier.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-14-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We have to test if a configured CPU model is runnable in the current
configuration, and if not report why that is the case. This is done by
comparing it to the maximum supported model (host for KVM or z900 for TCG).
Also, we want to do some base sanity checking for a configured CPU model.
We'll cache the maximum model and the applied model (for performance
reasons and because KVM can only be configured before any VCPU is created).
For unavailable "host" model, we have to make sure that we inform KVM,
so it can do some compatibility stuff (enable CMMA later on to be precise).
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-13-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
If we have certain features enabled, we have to migrate additional state
(e.g. vector registers or runtime-instrumentation registers). Let the
CPU model control that, unless there is no "host" CPU model in the KVM
case. This will later on be the case for compatibility machines, so
migration from QEMU versions without the CPU model will still work.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-12-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's add all features and feature groups as properties to all CPU models.
If the "host" CPU model is unknown, we can neither query nor change
features. KVM will just continue to work like it did until now.
We will not allow enabling features that were not part of the original
CPU model, because that could collide with the IBC in KVM.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-11-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
A CPU model consists of a CPU definition, to which delta changes are
applied - features added or removed (e.g. z13-base,vx=on). In addition,
certain properties (e.g. cpu id) can later on change during migration
but belong in the CPU model. This data will later be filled from the
host model in the KVM case.
Therefore, store the configured CPU model inside the CPU instance, so
we can later on perform delta changes using properties.
For the "qemu" model, we emulate in TCG a z900. "host" will be
uninitialized (cpu->model == NULL) unless we have CPU model support in KVM
later on. The other models are all initialized from their definitions.
Only the "host" model can have a cpu->model == NULL.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-10-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch adds the CPU model definitions that are known on s390x -
like z900, zBC12 or z13. For each definition, introduce two CPU models:
1. Base model (e.g. z13-base): Minimum feature set we expect to be around
on all z13 systems. These models are migration-safe and will never
change.
2. Flexible models (e.g. z13): Models that can change between QEMU versions
and will be extended over time as we implement further features that
are already part of such a model in real hardware of certain
configurations.
We want to work on features using ordinary bitmap operations; however, we
can't initialize a bitmap statically (unsigned long[] ...). Therefore we
store the generated feature lists in separate arrays and convert them to
proper bitmaps before registering all our CPU model classes.
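A simplified sketch of that array-to-bitmap conversion; the real code uses
QEMU's bitmap helpers and the generated feature lists, so the names below
are illustrative only:

    #include <stddef.h>

    #define MAX_FEATS     256
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    typedef unsigned long FeatBitmap[MAX_FEATS / BITS_PER_LONG];

    /* Turn a statically defined list of feature numbers into a bitmap.
     * The caller is expected to pass a zero-initialized bitmap. */
    static void feat_list_to_bitmap(const int *feats, size_t n, FeatBitmap bitmap)
    {
        for (size_t i = 0; i < n; i++) {
            bitmap[feats[i] / BITS_PER_LONG] |= 1ul << (feats[i] % BITS_PER_LONG);
        }
    }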
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-9-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's use the generated groups to create feature group representations for
the user. These groups can later be used to enable/disable multiple
features in one shot and will be used to reduce the amount of reported
features to the user if all subfeatures are in place.
We want to work on features using ordinary bitmap operations; however,
we can't initialize a bitmap statically (unsigned long[] ...). Therefore
we store the generated feature lists in separate arrays and convert
them to proper bitmaps before they will ever be used.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-8-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Feature groups will be very helpful to reduce the number of features
typically available in sane configurations. E.g. the MSA facilities
introduced loads of subfunctions, which could - in theory - go away
in the future, but we want to avoid reporting hundreds of features to
the user if usually all of them are in place.
Groups only contain features that were introduced in one shot, not just
random features. Therefore, groups can never change. This is an important
property regarding migration.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-7-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch introduces the helper "gen-features", which allows us to generate
feature list definitions at compile time. This is more flexible and less
error-prone than statements added statically at programming time.
The helper includes "target-s390x/cpu_features.h" to be able to use named
facility bits instead of numbers. The generated defines will be used for
the definition of CPU models.
We generate feature lists for each HW generation and GA for EC models. BC
models are always based on an EC version and have no separate definitions.
Base features: Features we expect to be always available in sane setups.
Migration safe - will never change. Can be seen as "minimum features
required for a CPU model".
Default features: Features we expect to be stable and around in latest
setups (e.g. having KVM support) - not migration safe.
Max features: All supported features that are theoretically allowed for a
CPU model. Exceeding these features could otherwise produce problems with
IBC (instruction blocking controls) in KVM.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
[generate base, default and models. renaming and cleanup]
Message-Id: <20160905085244.99980-6-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The patch introduces s390x CPU features (most of them referred to as
facilities) along with their descriptions and some functions that will be
helpful when working with the features later on.
Please note that we don't introduce all known CPU features, only the
ones currently supported by KVM + QEMU. We don't want to blindly enable
any facilities later on for which we don't yet know whether we need QEMU
support to handle them properly (e.g. migrating additional state when
active). We can update QEMU later on.
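An illustrative shape for such a feature table entry; the field names and
the sample entry are assumptions for this sketch, not the actual
target-s390x definitions:

    typedef struct {
        const char *name;   /* property name exposed to the user            */
        const char *desc;   /* human-readable description                   */
        int bit;            /* facility bit, or -1 for non-stfle features   */
    } S390FeatDefSketch;

    static const S390FeatDefSketch example_feats[] = {
        /* entirely made-up example entry */
        { "example-facility", "Example facility description", 42 },
    };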
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
[reworked to include non-stfle features, added definitions]
Message-Id: <20160905085244.99980-5-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's expose the description, the migration safety, and whether a definition
is static as class properties; this can be helpful in the future.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-4-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch introduces two CPU models, "host" and "qemu".
"qemu" is used as default when running under TCG. "host" is used
as default when running under KVM. "host" cannot be used without KVM.
"host" is not migration-safe. They both inherit from the base s390x CPU,
which is turned into an abstract class.
This patch also changes CPU creation to take care of the passed CPU string
and reuses the common parse_features() function for that purpose. Unknown
CPU definitions are now reported. The "-cpu ?" and "query-cpu-definitions"
commands are changed to list all CPU subclasses automatically, including
their migration-safety and whether they are static.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-3-dahi@linux.vnet.ibm.com>
[CH: fix up self-assignments in s390_cpu_list, as spotted by clang]
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging
x86 and memory backends queue, 2016-09-05
This includes a few features that were submitted just after hard
freeze, and a bug fix for memory backend initialization ordering.
# gpg: Signature made Mon 05 Sep 2016 20:50:14 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-pull-request:
vl: Delay initialization of memory backends
vhost-user-test: Use libqos instead of pxe-virtio.rom
target-i386: Add more Intel AVX-512 instructions support
exec: Ensure the only one cpu_index allocation method is used
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Initialization of memory backends may take a while when
prealloc=yes is used, depending on their size. Initializing
memory backends before chardevs may delay the creation of monitor
sockets, and trigger timeouts on management software that waits
until the monitor socket is created by QEMU. See, for example,
the bug report at:
https://bugzilla.redhat.com/show_bug.cgi?id=1371211
In addition to that, allocating memory before calling
configure_accelerator() breaks the tcg_enabled() checks at
memory_region_init_*().
This patch fixes those problems by adding "memory-backend-*"
classes to the delayed-initialization list.
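A hedged sketch of the kind of check described above; the actual helper in
vl.c and the exact set of delayed classes may differ:

    #include <glib.h>

    /* Return true if an -object type must be created late, i.e. after
     * chardevs and the accelerator have been set up. */
    static gboolean object_create_delayed_sketch(const char *type)
    {
        return g_str_has_prefix(type, "memory-backend-");
    }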
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
vhost-user-test relies on iPXE just to initialize the virtio-net
device, and doesn't do any actual packet tx/rx testing.
In addition to that, the test relies on TCG, which is
incompatible with vhost. The test only worked by accident: a bug in
the memory backend initialization made memory regions not have
the DIRTY_MEMORY_CODE bit set in dirty_log_mask.
This changes vhost-user-test to initialize the virtio-net device
using libqos, and to use neither TCG nor pxe-virtio.rom.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Make sure that cpu_index auto allocation isn't used in
combination with manual cpu_index assignment, and
disallow out-of-order cpu removal if auto allocation
is in use.
Targets that wish to support out-of-order unplug should
switch to manual cpu_index assignment. The following patch
could be used as an example:
(pc: init CPUState->cpu_index with index in possible_cpus[])
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
An explicit if/else is clearer than arithmetic assuming #true is 1,
while the compiler should be able to generate equally optimal code.
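An isolated illustration of the general rewrite (not the actual trace code
touched by this patch):

    #include <stdbool.h>

    /* Before: relies on true being exactly 1. */
    static int count_hits_arith(int count, bool hit)
    {
        return count + hit;
    }

    /* After: same result, but the intent is explicit. */
    static int count_hits_explicit(int count, bool hit)
    {
        if (hit) {
            count++;
        }
        return count;
    }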
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-id: 147194273830.26836.5875729707953474838.stgit@fimbulvetr.bsc.es
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Removes the event state array used for early initialization. Since only
events with the "vcpu" property need a late initialization fixup, treat
their initialization specially.
Assumes that the user won't touch the state of "vcpu" events between
early and late initialization (e.g., through QMP).
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-id: 147194273191.26836.14423079546263831356.stgit@fimbulvetr.bsc.es
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This patch adds a tracing backend which sends output using syslog().
The syslog backend is limited to POSIX-compliant systems.
openlog() is called with facility set to LOG_DAEMON, with the LOG_PID
option. Trace events are logged at level LOG_INFO.
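A standalone illustration of the syslog calls described above (the real
backend is emitted by QEMU's tracetool, so this only shows the shape of the
output path):

    #include <syslog.h>

    int main(void)
    {
        /* facility LOG_DAEMON, include the PID with every message */
        openlog("qemu", LOG_PID, LOG_DAEMON);

        /* trace events are logged at level LOG_INFO */
        syslog(LOG_INFO, "example_trace_event value=%d", 42);

        closelog();
        return 0;
    }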
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Message-id: 1470318254-29989-1-git-send-email-paul.durrant@citrix.com
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When using a node name, create a temporary BlockBackend that is used to
run the qemu-io command.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Python tests are already annoying enough to debug. With QMP traffic
available it's a little bit easier at least.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If a block job is started with a node name rather than a device name and
no explicit job ID is passed, the reported error is that '' isn't a
well-formed ID. That is correct, but we can make the message a little
bit nicer.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
A coroutine that takes a lock must also release it again. If the
coroutine terminates without having released all its locks, it's buggy
and we'll probably run into a deadlock sooner or later. Make sure that
we don't get such cases.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
In cases of deadlocks, knowing who holds a given CoMutex is really
helpful for debugging. Keeping the information around doesn't cost much
and allows us to add another assertion to keep the code correct, so
let's just add it.
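A minimal sketch of the bookkeeping described above; the field and function
names are illustrative, not the actual qemu/coroutine.h definitions:

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Coroutine Coroutine;

    typedef struct {
        bool locked;
        Coroutine *holder;   /* who currently holds the mutex, for debugging */
    } CoMutexSketch;

    static void co_mutex_lock(CoMutexSketch *m, Coroutine *self)
    {
        /* ... wait until the mutex is free ... */
        m->locked = true;
        m->holder = self;
    }

    static void co_mutex_unlock(CoMutexSketch *m, Coroutine *self)
    {
        assert(m->holder == self);   /* only the holder may unlock */
        m->holder = NULL;
        m->locked = false;
    }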
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Using bytes as the size is more exact than s->cluster_size, although
qemu_iovec_to_buf() will not allow going beyond the qiov anyway.
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The test case overwrites the Coroutine object with 0xff as a way to
assert that the coroutine isn't used any more. However, this means that
the coroutine pool now contains a corrupted object and later test cases
may get this corrupted object and crash.
This patch saves the real content of the object and restores it after
completing the test. The only use of the coroutine pool between those
two points is the deletion of co2. As this only means an insertion at
the head of an SLIST (release_pool or alloc_pool), it doesn't access the
invalid list pointers that co1 has during this period.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>