Commit Graph

60611 Commits

Author SHA1 Message Date
Aaron Lindsay e4e91a217c target/arm: Make PMOVSCLR and PMUSERENR 64 bits wide
This is a bug fix to ensure 64-bit reads of these registers don't read
adjacent data.
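
A minimal standalone sketch of the hazard being fixed (the struct and
field names below are stand-ins, not QEMU's CPUARMState layout): when a
register's backing store is only 32 bits wide, a 64-bit read of it also
picks up whatever field happens to sit next to it; widening the field
avoids that.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct pmu_state_narrow {
        uint32_t pmovsclr;   /* 32-bit backing store */
        uint32_t pmuserenr;  /* adjacent field */
    };

    struct pmu_state_wide {
        uint64_t pmovsclr;   /* widened: a 64-bit read sees only this register */
        uint64_t pmuserenr;
    };

    /* what a 64-bit register read amounts to: copy 8 bytes from the field */
    static uint64_t read64(const void *p)
    {
        uint64_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    int main(void)
    {
        struct pmu_state_narrow n = { 0x1, 0xffffffff };
        struct pmu_state_wide w = { 0x1, 0xffffffff };

        /* the narrow layout leaks PMUSERENR bits into a 64-bit PMOVSCLR read */
        printf("narrow read: %#llx\n", (unsigned long long)read64(&n.pmovsclr));
        printf("wide read:   %#llx\n", (unsigned long long)read64(&w.pmovsclr));
        return 0;
    }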

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-13-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay ac57fd24cd target/arm: Fix bitmask for PMCCFILTR writes
It was shifted to the left one bit too few.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-10-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay e69ad9df6c target/arm: Allow EL change hooks to do IO
During code generation, surround CPSR writes and exception returns which
call the EL change hooks with gen_io_start/end. The immediate need is
for the PMU to access the clock and icount during EL change to support
mode filtering.
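
A heavily stubbed sketch of the pattern (gen_io_start/gen_io_end are
real TCG front-end calls, but everything below is a hypothetical
stand-in that only shows the ordering: the I/O window is opened around
the helper that runs the EL change hooks, so those hooks may legally
read the clock and icount):

    #include <stdbool.h>
    #include <stdio.h>

    static void gen_io_start(void) { puts("gen_io_start"); }  /* stub */
    static void gen_io_end(void)   { puts("gen_io_end"); }    /* stub */
    static void gen_helper_exception_return(void)
    {
        puts("helper: exception return (runs EL change hooks)");
    }
    static void gen_end_tb(void)   { puts("end TB, look up next"); }

    /* emit an exception-return sequence; 'icount' mirrors whether the
     * guest clock is instruction-counted and needs the I/O bracketing */
    static void emit_exception_return(bool icount)
    {
        if (icount) {
            gen_io_start();              /* hooks may access clock/icount */
        }
        gen_helper_exception_return();
        if (icount) {
            gen_io_end();
        }
        gen_end_tb();
    }

    int main(void)
    {
        emit_exception_return(true);
        return 0;
    }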

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-9-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay b5c53d1b38 target/arm: Add pre-EL change hooks
Because the design of the PMU requires that the counter values be
converted between their delta and guest-visible forms for mode
filtering, an additional hook which occurs before the EL is changed is
necessary.
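
A hedged sketch of the hook bookkeeping (this is not QEMU's actual API;
the names and fixed-size arrays below are illustrative): one list of
hooks runs before the EL/mode change and one after, so the PMU can
convert counter values to their guest-visible form while the old EL is
still in effect and convert back once the new EL is known.

    #include <stdio.h>

    #define MAX_HOOKS 4

    typedef void (*el_change_hook_fn)(void *opaque);

    struct hook { el_change_hook_fn fn; void *opaque; };

    static struct hook pre_hooks[MAX_HOOKS], post_hooks[MAX_HOOKS];
    static int n_pre, n_post;

    static void register_pre_el_change_hook(el_change_hook_fn fn, void *opaque)
    {
        pre_hooks[n_pre++] = (struct hook){ fn, opaque };
    }

    static void register_el_change_hook(el_change_hook_fn fn, void *opaque)
    {
        post_hooks[n_post++] = (struct hook){ fn, opaque };
    }

    static void do_el_change(int new_el)
    {
        for (int i = 0; i < n_pre; i++) {   /* counters -> guest-visible form */
            pre_hooks[i].fn(pre_hooks[i].opaque);
        }
        printf("switch to EL%d\n", new_el); /* the actual mode change */
        for (int i = 0; i < n_post; i++) {  /* re-filter for the new EL */
            post_hooks[i].fn(post_hooks[i].opaque);
        }
    }

    static void pmu_pre(void *opaque)  { (void)opaque; puts("PMU: snapshot counters (old EL)"); }
    static void pmu_post(void *opaque) { (void)opaque; puts("PMU: rebase counters (new EL)"); }

    int main(void)
    {
        register_pre_el_change_hook(pmu_pre, NULL);
        register_el_change_hook(pmu_post, NULL);
        do_el_change(1);
        return 0;
    }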

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-8-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay 08267487c9 target/arm: Support multiple EL change hooks
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-7-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay d5a5e4c93d target/arm: Fetch GICv3 state directly from CPUARMState
This eliminates the need for fetching it from el_change_hook_opaque, and
allows for supporting multiple el_change_hooks without having to hack
something together to find the registered opaque belonging to GICv3.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-6-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay 7ece99b17e target/arm: Mask PMU register writes based on PMCR_EL0.N
This is in preparation for enabling counters other than PMCCNTR.
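
A small standalone sketch of the masking (the PMCR.N field position
follows the ARM ARM; the helper name and the demo values are made up):
writes to counter-enable style registers should only latch bits for
counters that actually exist, i.e. the N event counters plus the fixed
cycle counter in bit 31.

    #include <stdint.h>
    #include <stdio.h>

    #define PMCR_N_SHIFT 11
    #define PMCR_N_MASK  0x1f

    static uint32_t pmu_counter_mask(uint32_t pmcr)
    {
        uint32_t n = (pmcr >> PMCR_N_SHIFT) & PMCR_N_MASK;
        /* bit 31 = cycle counter, bits [n-1:0] = event counters */
        return (1u << 31) | (n ? (1u << n) - 1 : 0);
    }

    int main(void)
    {
        uint32_t pmcr = 4u << PMCR_N_SHIFT;   /* N = 4 event counters */
        uint32_t write = 0xffffffffu;         /* guest writes all ones */
        printf("stored: %#x\n", write & pmu_counter_mask(pmcr)); /* 0x8000000f */
        return 0;
    }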

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-5-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay 169c893874 target/arm: Treat PMCCNTR as alias of PMCCNTR_EL0
They share the same underlying state.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-3-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
Aaron Lindsay ccbc0e3384 target/arm: Check PMCNTEN for whether PMCCNTR is enabled
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-2-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:38 +01:00
Peter Maydell 4818bad98c target/arm: Use v7m_stack_read() for reading the frame signature
In commit 95695effe8 we changed the v7M/v8M stack
pop code to use a new v7m_stack_read() function that checks
whether the read should fail due to an MPU or bus abort.
We missed one call though, the one which reads the signature
word for the callee-saved register part of the frame.

Correct the omission.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180419142106.9694-1-peter.maydell@linaro.org
2018-04-26 11:04:38 +01:00
Peter Maydell 145772707f target/arm: Remove stale TODO comment
Remove a stale TODO comment -- we have now made the arm_ldl_ptw()
and arm_ldq_ptw() functions propagate physical memory read errors
out to their callers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180419142151.9862-1-peter.maydell@linaro.org
2018-04-26 11:04:38 +01:00
Igor Mammedov 75ed2c0248 arm: always start from first_cpu when registering loader cpu reset callback
If arm_load_kernel() were passed a CPU other than first_cpu, QEMU would
end up with the do_cpu_reset() callback only partially registered,
leaving some CPUs without it.

Make sure that do_cpu_reset() is registered for all CPUs by enumerating
CPUs from first_cpu.

(In practice every board that we have was passing us the first CPU
as the boot CPU, either directly or indirectly, so this wasn't
causing incorrect behaviour.)
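
A minimal sketch of the loop (stand-in types, not QEMU's CPUState or
reset API): regardless of which CPU the caller hands in as the boot
CPU, walk the global CPU list from its head so the reset callback is
registered for every CPU.

    #include <stdio.h>

    typedef struct CPUStateStub {
        int index;
        struct CPUStateStub *next;
    } CPUStateStub;

    static CPUStateStub cpus[4] = {
        { 0, &cpus[1] }, { 1, &cpus[2] }, { 2, &cpus[3] }, { 3, NULL }
    };
    static CPUStateStub *first_cpu = &cpus[0];

    static void do_cpu_reset(void *opaque)
    {
        printf("reset hook runs for cpu %d\n", ((CPUStateStub *)opaque)->index);
    }

    /* stand-in for the reset registration API: just invoke immediately */
    static void register_reset(void (*fn)(void *), void *opaque)
    {
        fn(opaque);
    }

    static void register_loader_resets(CPUStateStub *boot_cpu)
    {
        (void)boot_cpu;   /* the boot CPU no longer determines the loop start */
        for (CPUStateStub *cs = first_cpu; cs; cs = cs->next) {
            register_reset(do_cpu_reset, cs);
        }
    }

    int main(void)
    {
        register_loader_resets(&cpus[2]);   /* a non-first boot CPU */
        return 0;
    }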

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: added a note that this isn't a behaviour change]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:38 +01:00
Geert Uytterhoeven 14ec3cbd7c device_tree: Increase FDT_MAX_SIZE to 1 MiB
It is not uncommon for a contemporary FDT to be larger than 64 KiB,
leading to failures loading the device tree from sysfs:

    qemu-system-aarch64: qemu_fdt_setprop: Couldn't set ...: FDT_ERR_NOSPACE

Hence increase the limit to 1 MiB, like on PPC.

For reference, the largest arm64 DTB created from the Linux sources is
about 75 KiB (100 KiB when built with symbols/fixup support).

Cc: qemu-stable@nongnu.org
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Message-id: 1523541337-23919-1-git-send-email-geert+renesas@glider.be
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:38 +01:00
Peter Maydell 8e383d19b4 Migration pull for 2.13
Alexey Perevalov postcopy blocktime statistics
 Xiao Guangrong's compression performance improvements
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJa4NUpAAoJEAUWMx68W/3njR4P/2eGZOBNU8W5e8MLr1hi95yS
 3yQiwNAWQ4rfvroyUAIVwvjbRUgXCMUaqOTu+kbwt/bXFbjrjlBAXBY61yEeWq3M
 gSCCnD/JUpEZfkieqhdulRzJDh/6VeQg4AbdIL8MyfxV0jODeVm2JfyMeHCHOEaP
 LaEiiMhteONu+hsTFms2oeTbyBVgVu/ceq5pFClr8dadYQtOEltSRWPwxHOpnm3P
 athGslx480AXDB3FqVucBpIq16qCqb8/jjXY1W2iJtCfrdtDExmmQA3KKSOe7WJJ
 AewsBPv1aFo9cqL7eefNXtMirulXhr38jMJ658bZyf2756TH4hE9eB6R3oz4FpTT
 j9iiTeNIfaYqgqOisW5fPng4gzxOYECmQZroYP2VSXSLfGeMQ5a5s0cJdNas6dpR
 Us0iedgJyp17p5g/68fZExCVlvEcPNPlfE6pd5T+DSEqdp5Dp0Qv/TBh5p/nG/gm
 /IBoEXcK9zqelHPOywVrTkEvkV+xzP+ORs9Ly1DQKWajzxCaLaClAP1INAj7ayFf
 yqJi8OvW66S487lQwdz3wvUGGTI1MCpYyoOplWYk22ohYft2BzYPtt/lX3pZanud
 YutxMSKfZMyFV81MkyRgoui5tG3jSXnv/IbGHaK42CIMuEcJsSseq2Ea3FmhznQV
 E90XGkay5Wi2alBKNEmX
 =3MTi
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20180425a' into staging

Migration pull for 2.13

Alexey Perevalov postcopy blocktime statistics
Xiao Guangrong's compression performance improvements

# gpg: Signature made Wed 25 Apr 2018 20:21:13 BST
# gpg:                using RSA key 0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A  9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-migration-20180425a:
  migration: remove ram_save_compressed_page()
  migration: introduce save_normal_page()
  migration: move calling save_zero_page to the common place
  migration: move calling control_save_page to the common place
  migration: move some code to ram_save_host_page
  migration: introduce control_save_page()
  migration: detect compression and decompression errors
  migration: stop decompression to allocate and free memory frequently
  migration: stop compression to allocate and free memory frequently
  migration: stop compressing page in migration thread
  migration: add postcopy total blocktime into query-migrate
  migration: add blocktime calculation into migration-test
  migration: postcopy_blocktime documentation
  migration: calculate vCPU blocktime on dst side
  migration: add postcopy blocktime ctx into MigrationIncomingState
  migration: introduce postcopy-blocktime capability

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 09:12:31 +01:00
Xiao Guangrong da3f56cb2e migration: remove ram_save_compressed_page()
Now that we can reuse the path in ram_save_page() to post the page out
as normal, the only thing remaining in ram_save_compressed_page() is
the compression itself, which we can move out to the caller.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-11-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:14 +01:00
Xiao Guangrong 65dacaa04f migration: introduce save_normal_page()
It sends the page directly to the stream, neither checking for zero
pages nor using xbzrle or compression.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-10-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:12 +01:00
Xiao Guangrong d7400a3409 migration: move calling save_zero_page to the common place
save_zero_page() is always the first approach to try, so move it to
the common place before ram_save_compressed_page and ram_save_page are
called.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-9-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:11 +01:00
Xiao Guangrong a8ec91f941 migration: move calling control_save_page to the common place
The function is called by both ram_save_page and ram_save_target_page,
so move it to the common caller to clean up the code.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-8-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:10 +01:00
Xiao Guangrong 1faa5665c0 migration: move some code to ram_save_host_page
Move some code from ram_save_target_page() to ram_save_host_page()
to make it more readable for later patches that dramatically clean up
ram_save_target_page().

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-7-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:09 +01:00
Xiao Guangrong 059ff0fb29 migration: introduce control_save_page()
Abstract the common function control_save_page() to clean up the code;
no logic is changed.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-6-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:09 +01:00
Xiao Guangrong 34ab9e9743 migration: detect compression and decompression errors
Currently the page being compressed can be updated by the VM on the
source QEMU, and correspondingly the destination QEMU just ignores the
decompression error. However, this completely misses the chance to
catch real errors, so the VM is silently corrupted.

To make migration more robust, copy the page to a buffer first so it
cannot be written by the VM, then detect and handle both compression
and decompression errors properly.
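
A sketch of the snapshotting idea, assuming zlib (compress2() is used
here only for brevity; the page size and names are illustrative): copy
the guest page into a private buffer before compressing, so a
concurrent guest write cannot change the bytes mid-compression, and
surface any compression error to the caller instead of silently
shipping a possibly corrupt page.

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096

    static int save_compressed_page(const unsigned char *guest_page,
                                    unsigned char *out, uLongf *out_len)
    {
        static unsigned char stable[PAGE_SIZE];  /* the VM cannot touch this */

        memcpy(stable, guest_page, PAGE_SIZE);   /* snapshot the page first */

        int ret = compress2(out, out_len, stable, PAGE_SIZE, 1);
        if (ret != Z_OK) {
            fprintf(stderr, "compression failed: %d\n", ret);
            return -1;                           /* caller can fall back */
        }
        return 0;
    }

    int main(void)
    {
        unsigned char page[PAGE_SIZE] = { 0 };
        unsigned char out[PAGE_SIZE + 1024];
        uLongf out_len = sizeof(out);
        return save_compressed_page(page, out, &out_len);
    }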

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-5-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:08 +01:00
Xiao Guangrong 797ca154b4 migration: stop decompression to allocate and free memory frequently
The current code uses uncompress() to decompress memory; it manages
its working memory internally, which causes large amounts of memory to
be allocated and freed very frequently. Worse, frequently returning
memory to the kernel flushes TLBs.

So, maintain the memory ourselves and reuse it for each decompression.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-4-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:07 +01:00
Xiao Guangrong dcaf446ebd migration: stop compression to allocate and free memory frequently
The current code uses compress2() to compress memory; it manages its
working memory internally, which causes large amounts of memory to be
allocated and freed very frequently.

Worse, frequently returning memory to the kernel flushes TLBs and
triggers invalidation callbacks on mmu-notifiers which interact with
the KVM MMU, dramatically reducing the performance of the VM.

So, maintain the memory ourselves and reuse it for each compression.
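
A minimal sketch of the reuse pattern, assuming zlib: keep one deflate
stream per compression thread, set it up once, and reset it between
pages, instead of letting compress2() allocate and free its working
memory for every page (the names and single-threaded layout here are
illustrative only).

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096

    static z_stream comp_stream;                 /* long-lived, per thread */

    static int comp_init(int level)
    {
        memset(&comp_stream, 0, sizeof(comp_stream));
        return deflateInit(&comp_stream, level) == Z_OK ? 0 : -1;
    }

    static long comp_page(unsigned char *dst, size_t dst_len,
                          const unsigned char *src)
    {
        comp_stream.next_in = (unsigned char *)src;
        comp_stream.avail_in = PAGE_SIZE;
        comp_stream.next_out = dst;
        comp_stream.avail_out = dst_len;

        int ret = deflate(&comp_stream, Z_FINISH);
        long produced = (long)(dst_len - comp_stream.avail_out);
        deflateReset(&comp_stream);              /* reuse: no malloc/free churn */
        return ret == Z_STREAM_END ? produced : -1;
    }

    int main(void)
    {
        unsigned char page[PAGE_SIZE] = { 0 };
        unsigned char out[PAGE_SIZE + 64];

        if (comp_init(1)) {
            return 1;
        }
        printf("compressed to %ld bytes\n", comp_page(out, sizeof(out), page));
        return 0;
    }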

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-3-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:06 +01:00
Xiao Guangrong 263a289ae6 migration: stop compressing page in migration thread
As compression is heavy work, do not do it in the migration thread;
instead, post the page out as a normal page.

Reviewed-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180330075128.26919-2-xiaoguangrong@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:04:05 +01:00
Alexey Perevalov 65ace06045 migration: add postcopy total blocktime into query-migrate
Postcopy total blocktime is available on the destination side only,
but query-migrate was previously possible only on the source. This
patch adds the ability to call query-migrate on the destination.
To see postcopy blocktime, the postcopy-blocktime capability needs to
be requested.

The query-migrate command will show the following sample result:
{"return": {
    "postcopy-vcpu-blocktime": [115, 100],
    "status": "completed",
    "postcopy-blocktime": 100
}}

postcopy_vcpu_blocktime contains a list, where the first item
corresponds to the first vCPU in QEMU.

This patch has a drawback: it combines the states of incoming and
outgoing migration, so an ongoing migration's state will overwrite the
incoming state. It would be better to separate query-migrate for
incoming and outgoing migration, or to add a parameter indicating the
type of migration.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1521742647-25550-7-git-send-email-a.perevalov@samsung.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:02:17 +01:00
Alexey Perevalov 346f3dab04 migration: add blocktime calculation into migration-test
This patch just requests blocktime calculation, and checks it when the
UFFD_FEATURE_THREAD_ID feature is set on the host.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1521742647-25550-6-git-send-email-a.perevalov@samsung.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:02:16 +01:00
Alexey Perevalov 9ed01779e8 migration: postcopy_blocktime documentation
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1521742647-25550-5-git-send-email-a.perevalov@samsung.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:02:16 +01:00
Alexey Perevalov 575b0b332e migration: calculate vCPU blocktime on dst side
This patch provides blocktime calculation per vCPU, as a summary and
as an overlapped value for all vCPUs.

This approach was suggested by Peter Xu as an improvement over the
previous approach, where QEMU kept a tree with the faulted page address
and a CPU bitmask in it. Now QEMU keeps an array with the faulted page
address as the value and the vCPU as the index, which makes it easy to
find the proper vCPU at UFFD_COPY time. It also keeps a list of
blocktime per vCPU (which can be traced with page_fault_addr).

Blocktime will not be calculated if the postcopy_blocktime field of
MigrationIncomingState wasn't initialized.
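
A hedged sketch of the bookkeeping described above (the array names
echo the commit text, but the code is a stand-in, not QEMU's): one slot
per vCPU records the faulting page address and the fault start time;
at UFFD_COPY time the copied page address is matched against the slots
to close the interval and accumulate per-vCPU blocktime.

    #include <stdint.h>
    #include <stdio.h>

    #define N_VCPUS 2

    static uint64_t vcpu_addr[N_VCPUS];            /* faulted page addr per vCPU */
    static uint64_t page_fault_vcpu_time[N_VCPUS]; /* fault start time per vCPU */
    static uint64_t vcpu_blocktime[N_VCPUS];       /* accumulated blocktime */

    static void mark_postcopy_fault(int cpu, uint64_t addr, uint64_t now)
    {
        vcpu_addr[cpu] = addr;
        page_fault_vcpu_time[cpu] = now;
    }

    static void mark_postcopy_page_copied(uint64_t addr, uint64_t now)
    {
        for (int cpu = 0; cpu < N_VCPUS; cpu++) {
            if (vcpu_addr[cpu] == addr) {          /* this vCPU was blocked here */
                vcpu_blocktime[cpu] += now - page_fault_vcpu_time[cpu];
                vcpu_addr[cpu] = 0;
            }
        }
    }

    int main(void)
    {
        mark_postcopy_fault(0, 0x1000, 100);
        mark_postcopy_fault(1, 0x1000, 110);       /* both blocked on one page */
        mark_postcopy_page_copied(0x1000, 215);
        printf("vcpu0 blocked %lu, vcpu1 blocked %lu\n",
               (unsigned long)vcpu_blocktime[0],
               (unsigned long)vcpu_blocktime[1]);
        return 0;
    }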

Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1521742647-25550-4-git-send-email-a.perevalov@samsung.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:02:13 +01:00
Alexey Perevalov 2a4c42f18c migration: add postcopy blocktime ctx into MigrationIncomingState
This patch adds a request to kernel space for UFFD_FEATURE_THREAD_ID,
in case this feature is provided by the kernel.

PostcopyBlocktimeContext is encapsulated inside postcopy-ram.c,
due to it being a postcopy-only feature. This also defines the lifetime
of the PostcopyBlocktimeContext instance: information from it will be
provided long after the postcopy migration ends, so the instance lives
until QEMU exits, but the parts of it used only during calculation
(vcpu_addr, page_fault_vcpu_time) are released when postcopy ends or
fails.

To enable postcopy blocktime calculation on the destination, the
corresponding capability needs to be requested (the documentation patch
is at the tail of the patch set).

As an example, the following command enables that capability, assuming
QEMU was started with the
-chardev socket,id=charmonitor,path=/var/lib/migrate-vm-monitor.sock
option to control it:

[root@host]#printf "{\"execute\" : \"qmp_capabilities\"}\r\n \
{\"execute\": \"migrate-set-capabilities\" , \"arguments\":   {
\"capabilities\": [ { \"capability\": \"postcopy-blocktime\", \"state\":
true } ] } }" | nc -U /var/lib/migrate-vm-monitor.sock

Or just with HMP
(qemu) migrate_set_capability postcopy-blocktime on

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1521742647-25550-3-git-send-email-a.perevalov@samsung.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:02:12 +01:00
Alexey Perevalov f22f928ec9 migration: introduce postcopy-blocktime capability
Right now it can be used on the destination side to enable vCPU
blocktime calculation for postcopy live migration. vCPU blocktime is
the time from when a vCPU thread is put into interruptible sleep until
the memory page has been copied and the thread woken.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1521742647-25550-2-git-send-email-a.perevalov@samsung.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-04-25 18:02:12 +01:00
Peter Maydell 4743c23509 Update version for v2.12.0 release
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-24 16:44:55 +01:00
Peter Maydell 27e757e29c Update version for v2.12.0-rc4 release
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-17 22:26:44 +01:00
Peter Maydell 6f660996f1 Revert "mux: fix ctrl-a b again"
This reverts commit 1b2503fcf7.

Unfortunately this fix regresses console handling on MIPS Malta;
since the mux ctrl-a b bug is not a regression from 2.11, we take the
conservative approach and just drop the fix from 2.12.

Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-17 21:11:30 +01:00
Richard Henderson ce8d408205 fpu: Bound increment for scalbn
Without bounding the increment, we can overflow exp either here
in scalbn_decomposed or when adding the bias in round_canonical.
This can result in e.g. underflowing to 0 instead of overflowing
to infinity.

The old softfloat code did bound the increment.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-17 14:52:38 +01:00
Marc-André Lureau 1b2503fcf7 mux: fix ctrl-a b again
Commit fb5e19d2e1 originally fixed the regression, but the fix was
inadvertently broken again in merge commit 2d6752d38d.

Fixes:
https://bugs.launchpad.net/qemu/+bug/1654137

Cc: qemu-stable@nongnu.org
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20180416181844.7851-1-marcandre.lureau@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-17 12:52:48 +01:00
Peter Maydell bb3ba35f20 linux-user: check that all of AArch64 SVE extended sigframe is writable
In commit 8c5931de0a we added support for SVE extended
sigframe records.  These mean that the signal frame might now be
larger than the size of the target_rt_sigframe record, so make sure
we call lock_user on the entire frame size when we're creating it.
(The code for restoring the signal frame already correctly handles
the extended records by locking the 'extra' section separately to the
main section.)

In particular, this fixes a bug even for non-SVE signal frames,
because it extends the locked section to cover the
target_rt_frame_record. Previously this was part of 'struct
target_rt_sigframe', but in commit e1eecd1d9d we pulled
it out into its own struct, and so locking the target_rt_sigframe
alone doesn't cover it. This bug would mean that we would fail
to correctly handle the case where a signal was taken with
SP pointing 16 bytes into an unwritable page, with the page
immediately below it in memory being writable.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2018-04-17 12:04:58 +01:00
Peter Maydell 0df2121693 i386: Don't automatically enable FEAT_KVM_HINTS bits
Bug fix for "-cpu host" with newer kernels.
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABCAAGBQJa1NFhAAoJECgHk2+YTcWm1qgP/AuDGp7c/z23xMtuwDB1YoDo
 Jzovze97rP9RRlKt2F1WXyNLezxO9bpomTPozFF9UIcG6cavakKByBFFJyW8gjya
 yWpsq/kkbR14amYKBbBJgi5uTLBPClFqSAHnJfzkozT5oWOhU5R0/n26gv3Lfgh2
 TTJg1+tMG0W3EIQ+MeJ2uXuop910134mtFge7//COIVVzPvC24+mNJqB5nlMUtQA
 3PSWrMBRRXDkI9iTYfn58+ZHojdAGdK+VECpEBNI8bmkJgw4wnBfGEFHZiSZBLD2
 EG4u924HxMhDtQ3Y3LcwgNcEPGBqNOseMQSYiN5sPHQtNCtFo9cZWq/5lKedKXDH
 sFSdaLFJwwCuvEDE/8U3MKtHxzxtP9yGXctu1KqEK+ZkC4nKLoZgbIQHtrCa7nzb
 9PsA08L1XnNSnodfPKwxl8ZfRFRrUeFLbhFML9588jhG2dktWeIMwTOrIvVHail2
 75hr65SPxAzERTX9stEv4tcGgN+z1yDdUrfONK4GSH9/CRbDt73yuH/Gc26t1x/8
 PbleeDltMLKZRoXhR8SzcwTDmq5J7PQAnPTGghPe2tCyjNz1ax+yYkPfIGtR4ovc
 6M+s9STs9Au+k2YSr4ISttps/ofY8D3vKCVYsUrRnle7qBDDoe07GGmI1f3sybVi
 tb9v82dy71QOGWgz7Ols
 =oq3l
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' into staging

i386: Don't automatically enable FEAT_KVM_HINTS bits

Bug fix for "-cpu host" with newer kernels.

# gpg: Signature made Mon 16 Apr 2018 17:37:53 BST
# gpg:                using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/x86-next-pull-request:
  i386: Don't automatically enable FEAT_KVM_HINTS bits

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-17 11:13:47 +01:00
Peter Maydell 32f418c003 vhost: bugfix
This fixes a regression in vhost.
 
 Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJa1MuLAAoJECgfDbjSjVRpYzoH/RqprLADb2Q8jofq46/jw9gn
 +N+mIBDlZL2clobEpTG9QaSzwZr/dnXPa2R9pPSNONaeK3Qvqq7l+KXO2TGS3g6+
 XUckTurNBsjoFZ4Lfgrn9jHm1ZH6YO/vsUuwW4nsx+yQZsm2EryiWK8QiAuXKkDT
 L1wJmkUbZFJ/u7GJorSHA3C9AirvvrTYUXGCw/J8nXZBqblKRB7lELmkk3hT0jjk
 lebzr6v2eORjz+2D6s6SqTFCVNA1nUZourbkiy64GlAj3sctSWfBLO/2DoiZDdEv
 PQhiW1EaWeb2+vl6Eot7nRwYmPZztUbh1rqsk/I9fi8uBvyZaYLKAlIEoRNDf3Y=
 =xj3W
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging

vhost: bugfix

This fixes a regression in vhost.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Mon 16 Apr 2018 17:12:59 BST
# gpg:                using RSA key 281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17  0970 C350 3912 AFBE 8E67
#      Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA  8A0D 281F 0DB8 D28D 5469

* remotes/mst/tags/for_upstream:
  vhost: do not verify ring mappings when IOMMU is enabled

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-17 09:57:52 +01:00
Alex Bennée 9cb4e398c2 fpu/softfloat: check for Inf / x or 0 / x before /0
The re-factoring of div_floats changed the order of checking, meaning
an operation like -inf/0 erroneously raises the divbyzero flag.
IEEE-754 (2008) specifies this should only occur for operations on
finite operands.

We fix this by moving the check on the dividend being Inf/0 to before
the divisor is zero check.
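
A toy sketch of the check ordering (this is not softfloat's FloatParts
code; NaNs and the invalid Inf/Inf and 0/0 cases are assumed to have
been handled earlier, as in the real code): classify the dividend as
Inf or zero before looking at a zero divisor, so -inf/0 never reaches
the divide-by-zero path.

    #include <stdbool.h>
    #include <stdio.h>

    enum cls { CLS_NORMAL, CLS_ZERO, CLS_INF };

    static bool flag_divbyzero;

    static enum cls div_classify(enum cls a, enum cls b)
    {
        /* Inf / x -> Inf,  0 / x -> 0 : handled first, no flag raised */
        if (a == CLS_INF || a == CLS_ZERO) {
            return a;
        }
        /* only a finite non-zero dividend over zero is a division by zero */
        if (b == CLS_ZERO) {
            flag_divbyzero = true;
            return CLS_INF;
        }
        return CLS_NORMAL;
    }

    int main(void)
    {
        div_classify(CLS_INF, CLS_ZERO);                    /* -inf / 0 */
        printf("divbyzero raised: %d\n", flag_divbyzero);   /* expect 0 */
        return 0;
    }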

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180416135442.30606-1-alex.bennee@linaro.org
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Tested-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-16 18:40:48 +01:00
Eduardo Habkost 0d914f39a7 i386: Don't automatically enable FEAT_KVM_HINTS bits
The assumption in the cpu->max_features code is that anything
enabled on GET_SUPPORTED_CPUID should be enabled on "-cpu host".
This shouldn't be the case for FEAT_KVM_HINTS.

This adds a new FeatureWordInfo::no_autoenable_flags field, which can
be used to prevent FEAT_KVM_HINTS bits from being enabled
automatically.
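
A simplified sketch of the mechanism (the structure here is a stand-in,
not QEMU's FeatureWordInfo, and the bit value in the demo is made up):
the per-feature-word no_autoenable_flags mask is removed from whatever
GET_SUPPORTED_CPUID reports before "-cpu host" turns bits on, so hint
bits are only set when requested explicitly.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct FeatureWordInfoView {
        uint32_t no_autoenable_flags;   /* never enabled implicitly */
    } FeatureWordInfoView;

    static uint32_t autoenabled_bits(uint32_t host_supported,
                                     const FeatureWordInfoView *wi)
    {
        return host_supported & ~wi->no_autoenable_flags;
    }

    int main(void)
    {
        FeatureWordInfoView kvm_hints = { .no_autoenable_flags = ~0u };
        /* the host kernel advertises bit 0 of the hints word */
        printf("auto-enabled: %#x\n", autoenabled_bits(0x1, &kvm_hints));
        return 0;
    }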

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180410211534.26079-1-ehabkost@redhat.com>
Tested-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-04-16 13:36:52 -03:00
Jason Wang aebbdbee55 vhost: do not verify ring mappings when IOMMU is enabled
When the IOMMU is enabled, we store virtqueue metadata as iovas (though
the fields may have a _phys suffix) and access them through the DMA
helpers, so any translation failures can be reported by the IOMMU.

In this case, trying to validate an iova against a gpa won't work and
causes false error reports. So this patch bypasses the ring
verification when the IOMMU is enabled, which is similar to the
behaviour before 0ca1fd2d68, which called vhost_memory_map(), a nop
when the IOMMU is enabled.
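
A hedged sketch of the guard (stand-in types; the flag below stands in
for however the real code determines that the device uses an IOMMU):
skip the gpa-based ring verification entirely when the device sits
behind an IOMMU, because the cached addresses are iovas and comparing
them against guest physical addresses is meaningless.

    #include <stdbool.h>
    #include <stdio.h>

    struct vhost_dev_view {
        bool iommu_enabled;          /* stand-in for the real IOMMU check */
    };

    static int verify_ring_mappings(void)
    {
        /* ... compare cached ring addresses against the memory map ... */
        return 0;
    }

    static int vhost_check_rings(const struct vhost_dev_view *dev)
    {
        if (dev->iommu_enabled) {
            /* translation faults are reported by the IOMMU instead */
            return 0;
        }
        return verify_ring_mappings();
    }

    int main(void)
    {
        struct vhost_dev_view dev = { .iommu_enabled = true };
        printf("check result: %d\n", vhost_check_rings(&dev));
        return 0;
    }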

Fixes: 0ca1fd2d68 ("vhost: Simplify ring verification checks")
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-04-16 19:11:38 +03:00
Michael Tokarev 2a6b5372d7 Makefile: install gtk message catalogs if CONFIG_GTK=y too, not only =m
Fixes 722cd74964

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180416093719.2543-1-mjt@msgid.tls.msk.ru
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-16 16:25:24 +01:00
Peter Maydell 042f6a31af A fix for handling dirty bitmaps stored in qcow2 files. This is not
absolutely necessary for 2.12, but if there is an rc4, it should go in.
 -----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJa1Jh8AAoJEPQH2wBh1c9A798H/2qB2/2n+yqhTlfiU5o4sM2L
 PmbmcZLK1zlMyHZm5YOLjLQ4NWRrMTpExRxgKdUwvSvOBygcCw4H/w3CFqbF5Bjw
 enHI8Ku3cepxkYGl6qKkDDYhTODcIZQ3yIvxkSnn/Kkz7zbKLNaU6oFawrkt2lpH
 yk6DWMHl7qOzRaP4WHE041sIzPQLYcLyGcoAhEMyTieyAn5c8utIz8lT4239xoKo
 U/wfC2/fmvVd1bYd1Qiwk//QldHXT1X7TD3usqMfWXumyFwvOM76he9/dzPSlCl1
 wbbLzfa64VYxIXb4hrbVJR48o+rzFbZYj/Ty8YXz6o3cYYWqKc0pIF2WgkHzf9g=
 =YUD+
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2018-04-16' into staging

A fix for handling dirty bitmaps stored in qcow2 files.  This is not
absolutely necessary for 2.12, but if there is an rc4, it should go in.

# gpg: Signature made Mon 16 Apr 2018 13:35:08 BST
# gpg:                using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1  1829 F407 DB00 61D5 CF40

* remotes/maxreitz/tags/pull-block-2018-04-16:
  iotests: fix 169
  qcow2: try load bitmaps only once

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-16 15:30:54 +01:00
Vladimir Sementsov-Ogievskiy 25bf2426f3 iotests: fix 169
Improve and fix test 169:
    - use MIGRATION events instead of RESUME
    - add a TODO: enable the dirty-bitmaps capability for the offline case
    - recreate vm_b without -incoming near the end of the test

This (likely) fixes racy failures of at least the following types:

    - timeout waiting for the RESUME event
    - sha256 mismatch on line 136 (142 after this patch)
    - failure of self.vm_b.launch() on line 135 (141 now after this patch)

And it definitely fixes the leftover cat processes after the test
finishes.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180411122606.367301-3-vsementsov@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-04-16 13:35:32 +02:00
Vladimir Sementsov-Ogievskiy 605bc8be42 qcow2: try load bitmaps only once
Checking for a reopen by the existence of some bitmaps is wrong: they
may be different bitmaps, or on the other hand the user may have
removed bitmaps. This criterion is bad. To simplify things and make the
behaviour more predictable, just add a flag to remember that we've
already tried to load bitmaps on open and do not want to do it again.
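
A minimal sketch of the flag-based approach (the structure and field
name below are stand-ins for the qcow2 driver state): remember that
bitmap loading has already been attempted for this image and make that
the only criterion, instead of inferring a reopen from whether some
bitmaps happen to exist at the moment.

    #include <stdbool.h>
    #include <stdio.h>

    struct qcow2_state_view {
        bool dirty_bitmaps_loaded;   /* set after the first load attempt */
    };

    static void load_dirty_bitmaps(struct qcow2_state_view *s)
    {
        if (s->dirty_bitmaps_loaded) {
            return;                  /* reopen: do not try to load again */
        }
        s->dirty_bitmaps_loaded = true;
        puts("loading persistent dirty bitmaps from the image");
    }

    int main(void)
    {
        struct qcow2_state_view s = { false };
        load_dirty_bitmaps(&s);      /* initial open: loads */
        load_dirty_bitmaps(&s);      /* reopen: skipped */
        return 0;
    }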

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180411122606.367301-2-vsementsov@virtuozzo.com
[mreitz: Changed comment wording according to Eric Blake's suggestion]
Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-04-16 13:35:32 +02:00
Peter Maydell aac8f55633 linux-user/signal.c: Put AArch64 frame record in the right place
AArch64 stack frames include a 'frame record' which holds a pointer
to the next frame record in the chain and the LR on entry to the
function. The procedure calling standard doesn't mandate where
exactly this frame record is in the stack frame, but for signal
frames the kernel puts it right at the top. We used to put it
there too, but in commit 7f0f4208b3 we accidentally put
the "enlarge to the 4K reserved space minimum" check after the
"allow for the frame record" code, rather than before it, with
the effect that the frame record would be inside the reserved
space and immediately after the last used part of it.

Move the frame record back out of the reserved space to where
we used to put it.

This bug shouldn't break any sensible guest code, but test
programs that deliberately look at the internal details
of the signal frame layout will not find what they are
expecting to see.

Fixes: 7f0f4208b3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-id: 20180412140222.2096-1-peter.maydell@linaro.org
2018-04-16 11:52:33 +01:00
Peter Maydell 161dfd1e7f tcg/mips: Handle large offsets from target env to tlb_table
The MIPS TCG target makes the assumption that the offset from the
target env pointer to the tlb_table is less than about 64K. This
used to be true, but gradual addition of features to the Arm
target means that it's no longer true there. This results in
the build-time assertion failing:

In file included from /home/pm215/qemu/include/qemu/osdep.h:36:0,
                 from /home/pm215/qemu/tcg/tcg.c:28:
/home/pm215/qemu/tcg/mips/tcg-target.inc.c: In function ‘tcg_out_tlb_load’:
/home/pm215/qemu/include/qemu/compiler.h:90:36: error: static assertion failed: "not expecting: offsetof(CPUArchState, tlb_table[NB_MMU_MODES - 1][1]) > 0x7ff0 + 0x7fff"
 #define QEMU_BUILD_BUG_MSG(x, msg) _Static_assert(!(x), msg)
                                    ^
/home/pm215/qemu/include/qemu/compiler.h:98:30: note: in expansion of macro ‘QEMU_BUILD_BUG_MSG’
 #define QEMU_BUILD_BUG_ON(x) QEMU_BUILD_BUG_MSG(x, "not expecting: " #x)
                              ^
/home/pm215/qemu/tcg/mips/tcg-target.inc.c:1236:9: note: in expansion of macro ‘QEMU_BUILD_BUG_ON’
         QEMU_BUILD_BUG_ON(offsetof(CPUArchState,
         ^
/home/pm215/qemu/rules.mak:66: recipe for target 'tcg/tcg.o' failed

An ideal long term approach would be to rearrange the CPU state
so that the tlb_table was not so far along it, but this is tricky
because it would move it from the "not cleared on CPU reset" part
of the struct to the "cleared on CPU reset" part. As a simple fix
for the 2.12 release, make the MIPS TCG target handle an arbitrary
offset by emitting more add instructions. This will mean an extra
instruction in the fastpath for TCG loads and stores for the
affected guests (currently just aarch64-softmmu).
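
A hedged sketch of the offset handling (this is not the actual
tcg/mips backend code; the instruction-count model is simplified): when
the env-to-TLB offset no longer fits the signed 16-bit immediate of a
single MIPS add, peel off the largest immediate repeatedly until the
remainder fits.

    #include <stdio.h>

    #define MIPS_IMM_MAX 0x7fff   /* largest positive 16-bit signed immediate */

    /* how many add/load immediates are needed to reach 'offset' from env */
    static int insns_for_offset(long offset)
    {
        int insns = 0;
        while (offset > MIPS_IMM_MAX) {
            offset -= MIPS_IMM_MAX;   /* one extra "addiu base, base, 0x7fff" */
            insns++;
        }
        return insns + 1;             /* final access uses the remainder */
    }

    int main(void)
    {
        printf("offset 0x7ff0  -> %d insn(s)\n", insns_for_offset(0x7ff0));
        printf("offset 0x18000 -> %d insn(s)\n", insns_for_offset(0x18000));
        return 0;
    }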

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20180413142336.32163-1-peter.maydell@linaro.org
2018-04-16 11:51:37 +01:00
Peter Maydell ae2b1b4e1b -----BEGIN PGP SIGNATURE-----
iQIcBAABAgAGBQJa0x9XAAoJEPMMOL0/L748nEgQAKzn+70LB/2QawU3aybW8L+k
 AZO8HpRhOVL5WF6hrybJ8220nJL2tlj+sR4TgJX802FxhoejZslyCf842xVCY26g
 wEmIbz+c7oaNre9htGEIcMOOMF3GtMd0zqVNFSvbhy5iXzQKOW2DYcMVEvQgpRPp
 7mRwtXTls713TCI4KcUSwvswuTFI9WJIFqdTP1E/7yjkIwf4DlkrF9E/v8hnLa9h
 BsmQ2uBtAD7Cu+byJjdOa4v1iapWQoWgEzLRqwOo1X1tEzoInC6+z7iz5neAbF9j
 A0Q/TAbnsB14AXz4bZBmotCjoeWFfyw07GkMVjl230Q4W61nSvjvkLRCSwN2Wety
 U1n1Zz8/IYfsCdjv/pfDeBIWkDyGZlcxspaXhYlTv6DVyV7q3Nw0orjKHSiIJY4c
 Sgcwp02cwReXYSJugQs7V68eRMFAz3kAq+iPLXya5MjasZOwCY1BBIFDUI3ZBRST
 0VZeYZ4VnmY6baaw58gxdiDbTHsqidqkL82dfrmScwpEbR2+5v85spN8oxyNcIp3
 NNSUZmBdsYjb1iAt4ntNZYUulrMm6o5y3227aXAeCu6ojfhUOKFaK07YuI99AFgK
 U+1SBbpqEXP0oaoZLELWcdD+x7zET6RWc1XocFD4+jW27qr5fhRQOpnvo626nKcX
 zKngDQTONXC/e1h2FsNz
 =eEOw
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/vivier/tags/m68k-for-2.12-pull-request' into staging

# gpg: Signature made Sun 15 Apr 2018 10:45:59 BST
# gpg:                using RSA key F30C38BD3F2FBE3C
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>"
# gpg:                 aka "Laurent Vivier <laurent@vivier.eu>"
# gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>"
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C

* remotes/vivier/tags/m68k-for-2.12-pull-request:
  m68k: fix exception stack frame for 68000

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-16 10:11:17 +01:00
Alex Bennée 801bc56336 fpu/softfloat: raise float_invalid for NaN/Inf in round_to_int_and_pack
The re-factoring broke the raising of INVALID when NaN/Inf is passed to
the float_to_int conversion functions. round_to_uint_and_pack got this
right for NaN but missed the Inf handling.

Fixes https://bugs.launchpad.net/qemu/+bug/1759264

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Tested-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180413140334.26622-3-alex.bennee@linaro.org
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-16 10:10:31 +01:00
Pavel Dovgalyuk 000761dc0c m68k: fix exception stack frame for 68000
68000 CPUs do not save a format word in the exception stack frame.
This patch adds a feature check to prevent format saving for the 68000.
m68k_ret() already includes this modification; this patch fixes the
exception processing function too.
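
A hedged sketch only (the exact feature flag QEMU checks is not shown
here, so the code uses a made-up is_68000 predicate): the shape of the
fix is to push the format/vector word only on CPUs that actually have
one, mirroring what m68k_ret() already does on the restore side.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t sp = 0x2000;

    static void push16(uint16_t v) { sp -= 2; printf("push16 %#x at %#x\n", (unsigned)v, sp); }
    static void push32(uint32_t v) { sp -= 4; printf("push32 %#x at %#x\n", v, sp); }

    static void do_stack_frame(bool is_68000, uint16_t format_vector,
                               uint16_t sr, uint32_t retaddr)
    {
        if (!is_68000) {
            push16(format_vector);   /* 68010+ frames carry a format word */
        }
        push32(retaddr);
        push16(sr);
    }

    int main(void)
    {
        do_stack_frame(true, 0x0, 0x2700, 0x1000);   /* plain 68000: no format */
        return 0;
    }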

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180413133041.29509.59064.stgit@pasha-VirtualBox>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2018-04-15 11:37:58 +02:00