Commit Graph

76 Commits

Author SHA1 Message Date
Peter Maydell
04e3aabde3 cputlb: Support generating CPU exceptions on memory transaction failures
Call the new cpu_transaction_failed() hook at the places where
CPU generated code interacts with the memory system:
 io_readx()
 io_writex()
 get_page_addr_code()

Any access from C code (eg via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed().  Handling of transaction failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.

In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.
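
As a rough sketch of the shape of this mechanism (types and the hook's
placement are illustrative stand-ins, not QEMU's actual CPUClass API):

    #include <stdint.h>

    typedef int MemTxResult;
    #define MEMTX_OK 0

    typedef struct CPUSketch CPUSketch;
    struct CPUSketch {
        /* raises a guest-visible exception; may not return */
        void (*transaction_failed)(CPUSketch *cpu, uint64_t physaddr,
                                   MemTxResult response);
    };

    static uint64_t io_readx_sketch(CPUSketch *cpu, uint64_t physaddr)
    {
        uint64_t val = 0;
        /* result of dispatching the access (always OK in this stub) */
        MemTxResult r = MEMTX_OK;
        if (r != MEMTX_OK && cpu->transaction_failed) {
            cpu->transaction_failed(cpu, physaddr, r);
        }
        return val;
    }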

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-09-04 15:21:55 +01:00
Richard Henderson
c86c6e4c80 cputlb: Tidy some macros
TGT_LE and TGT_BE are not size dependent and do not need to be
redefined.  The others are no longer used at all.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-26 08:29:00 -07:00
Richard Henderson
82a45b96a2 cputlb: Move most of iotlb code out of line
Saves 2k code size off of a cold path.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-26 08:29:00 -07:00
Richard Henderson
4097842885 cputlb: Remove includes from softmmu_template.h
We already include exec/address-spaces.h and exec/memory.h in
cputlb.c; the include of qemu/timer.h appears to be a fossil.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-26 08:29:00 -07:00
Richard Henderson
3b08f0a925 cputlb: Move probe_write out of softmmu_template.h
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-26 08:29:00 -07:00
Richard Henderson
dea2198201 cputlb: Replace SHIFT with DATA_SIZE
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-26 08:29:00 -07:00
Richard Henderson
01ecaf438b tcg: Merge GETPC and GETRA
The return address argument to the softmmu template helpers was
confused.  In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL.  However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.

Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.

This makes GETPC and GETRA identical.  Remove GETRA as the lesser
used macro, replacing all uses with GETPC.
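
A minimal sketch of the resulting convention (GETPC_ADJ's value matches
QEMU's small bias; the restore function here is a hypothetical stand-in):

    #include <stdint.h>

    #define GETPC_ADJ 2
    #define GETPC() ((uintptr_t)__builtin_return_address(0))

    /* Sketch: capture the raw return address in the helper and apply
     * the GETPC_ADJ bias only after the NULL check, so a NULL "no
     * return address" stays NULL. */
    static void cpu_restore_state_sketch(uintptr_t retaddr)
    {
        if (retaddr) {
            uintptr_t host_pc = retaddr - GETPC_ADJ;
            (void)host_pc;  /* ... look up the TB containing host_pc ... */
        }
    }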

Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16 08:12:11 -07:00
Richard Henderson
85aa80813d tcg: Support arbitrary size + alignment
Previously we allowed fully unaligned operations, but not operations
that are aligned but with less alignment than the operation size.

In addition, arm32, ia64, mips, and sparc had been omitted from the
previous overalignment patch, which would have led to that alignment
being enforced.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16 08:12:06 -07:00
Samuel Damashek
81daabaf7a cputlb: Fix for self-modifying writes across page boundaries
As it currently stands, QEMU does not properly handle self-modifying code
when the write is unaligned and crosses a page boundary. The procedure
for handling a write to the current translation block is to write-protect
the current translation block, catch the write, split up the translation
block into the current instruction (which remains write-protected so that
the current instruction is not modified) and the remaining instructions
in the translation block, and then restore the CPU state to before the
write occurred so the write will be retried and successfully executed.
However, since unaligned writes across pages are split into one-byte
writes for simplicity, writes to the second page (which is not the
current TB) may succeed before a write to the current TB is attempted.
Because these writes are not invalidated before the state is restored
after splitting the TB, they are performed a second time, thus
corrupting the second page. Credit goes to Patrick Hulin for
discovering this.

In recent 64-bit versions of Windows running in emulated mode, this
results either in severe instability (a BSOD after a couple of minutes
of uptime) or in a complete failure to boot. Windows performs one or more
8-byte unaligned self-modifying writes (xors) which intersect the end
of the current TB and the beginning of the next TB, which runs into the
aforementioned issue. This commit fixes that issue by making the
unaligned write loop perform the writes in forwards order, instead of
reverse order. This way, QEMU immediately tries to write to the current
TB, and splits the TB before any write to the second page is executed.
The write then proceeds as intended. With this patch applied, I am able
to boot and use Windows 7 64-bit and Windows 10 64-bit in QEMU without
KVM.

Per Richard Henderson's input, this patch also ensures the second page
is in the TLB before executing the write loop, to ensure the second
page is mapped.

The original discussion of the issue is located at
http://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg02161.html.
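
A minimal sketch of the fix, assuming a hypothetical store_byte()
helper that performs (and may trap on) a single one-byte guest store:

    #include <stdint.h>
    #include <stddef.h>

    extern void store_byte(uint64_t addr, uint8_t byte); /* assumed */

    /* Sketch: issue the component byte stores in forwards order, so
     * the bytes hitting the write-protected current-TB page fault
     * first, before any byte of the second page changes. */
    static void unaligned_store_sketch(uint64_t addr, uint64_t val,
                                       size_t size)
    {
        for (size_t i = 0; i < size; i++) {
            /* little-endian byte placement, for simplicity */
            store_byte(addr + i, (uint8_t)(val >> (i * 8)));
        }
    }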

Signed-off-by: Samuel Damashek <samuel.damashek@invincea.com>
Message-Id: <20160706182652.16190-1-samuel.damashek@invincea.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-08 13:05:07 -07:00
Samuel Damashek
a390284b80 cputlb: Add address parameter to VICTIM_TLB_HIT
[rth: Split out from the original patch.]

Signed-off-by: Samuel Damashek <samuel.damashek@invincea.com>
Message-Id: <20160706182652.16190-1-samuel.damashek@invincea.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-08 13:04:41 -07:00
Richard Henderson
7e9a7c50d9 cputlb: Move VICTIM_TLB_HIT out of line
There are currently 22 invocations of this function,
and we're about to increase that number.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-08 12:58:55 -07:00
Sergey Sorokin
1f00b27f17 tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) need the address to be aligned to a
size greater than the size of the memory access.
The current zero-cost alignment check implementation in QEMU is
sufficient to support such a check, but we need a way to specify the
alignment size.
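
A minimal sketch of the kind of check this enables, where the alignment
(a_bits, a power-of-two exponent) can exceed the access size; names are
illustrative:

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch: mask test against an alignment specified independently
     * of the access size. */
    static bool aligned_enough(uint64_t addr, unsigned a_bits)
    {
        uint64_t a_mask = (UINT64_C(1) << a_bits) - 1;
        return (addr & a_mask) == 0;
    }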

Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
[rth: Assert in tcg_canonicalize_memop.  Leave get_alignment_bits
available for, though unused by, user-mode.  Retain logging difference
based on ALIGNED_ONLY.]
2016-07-05 20:50:13 -07:00
Peter Maydell
a54c87b68a exec.c: Pass MemTxAttrs to iotlb_to_region so it uses the right AS
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this
allows it to determine the correct AddressSpace to use for the lookup.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-01-21 14:15:05 +00:00
Pavel Dovgalyuk
b8611499b9 softmmu: remove now unused functions
Now that the cpu_ld/st_* functions directly call helper_ret_ld/st, we
can drop the old helper_ld/st functions.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150710095656.13280.7085.stgit@PASHA-ISP>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-09-11 08:16:05 -07:00
Pavel Dovgalyuk
282dffc8a4 softmmu: add helper function to pass through retaddr
This patch introduces several helpers for passing a return address
that points into the TB. A correct return address allows correct
restoring of the guest PC and icount. These functions should be used
when helpers embedded in a TB invoke memory operations.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150710095650.13280.32255.stgit@PASHA-ISP>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-09-11 08:15:32 -07:00
Paolo Bonzini
414b15c909 exec: drop cpu_can_do_io, just read cpu->can_do_io
After commit 626cf8f (icount: set can_do_io outside TB execution,
2014-12-08), can_do_io is set to 1 if not executing code.  It is
no longer necessary to make this assumption in cpu_can_do_io.

It is also possible to remove the use_icount test, simply by
never setting cpu->can_do_io to 0 unless use_icount is true.

With these changes cpu_can_do_io boils down to a read of
cpu->can_do_io.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-14 23:40:32 +02:00
Yongbok Kim
3b4afc9e75 softmmu: Add probe_write()
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.
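
A minimal usage sketch, with a simplified signature (the real helper
takes the CPU environment, mmu index, and return address):

    #include <stdint.h>

    /* assumed, simplified declaration */
    extern void probe_write(void *env, uint64_t addr, int mmu_idx,
                            uintptr_t retaddr);

    static void helper_sketch(void *env, uint64_t addr, int mmu_idx,
                              uintptr_t ra)
    {
        probe_write(env, addr, mmu_idx, ra);  /* may not return */
        /* past this point the write is known to be permitted and the
         * TLB holds a valid entry for the page */
    }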

Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
2015-06-11 10:13:28 +01:00
Richard Henderson
dfb3630562 tcg: Add MO_ALIGN, MO_UNALN
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed.  The default setting
reflects the target's definition of ALIGNED_ONLY.
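
A minimal sketch of how such per-op flags can be folded into a
memory-op descriptor (values illustrative, not QEMU's actual encoding):

    /* Sketch: size bits plus an alignment-required flag. */
    typedef enum {
        MO_8     = 0,
        MO_16    = 1,
        MO_32    = 2,
        MO_64    = 3,
        MO_ALIGN = 1 << 4,  /* fault on unaligned access */
        MO_UNALN = 0,       /* allow unaligned access */
    } MemOpSketch;

    /* e.g. a 32-bit access that must be naturally aligned: */
    static const int memop_example = MO_32 | MO_ALIGN;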

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:15:18 -07:00
Richard Henderson
3972ef6f83 tcg: Push merged memop+mmu_idx parameter to softmmu routines
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:15:14 -07:00
Peter Maydell
fadc1cbe85 Add MemTxAttrs to the IOTLB
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.
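
A minimal sketch of the data flow, with simplified stand-in structures
(tlb_set_page_with_attrs() is the new entry point named above; the
types here are illustrative):

    #include <stdint.h>

    typedef struct {
        unsigned secure : 1;   /* e.g. ARM TrustZone */
        unsigned user   : 1;
    } MemTxAttrsSketch;

    typedef struct {
        uint64_t         addr;
        MemTxAttrsSketch attrs;  /* carried to the device access */
    } IOTLBEntrySketch;

    static void set_page_with_attrs_sketch(IOTLBEntrySketch *e,
                                           uint64_t addr,
                                           MemTxAttrsSketch attrs)
    {
        e->addr  = addr;
        e->attrs = attrs;  /* stored so IO dispatch can pass it on */
    }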

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2015-04-26 16:49:24 +01:00
Peter Maydell
e469b22ffd Make CPU iotlb a structure rather than a plain hwaddr
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2015-04-26 16:49:23 +01:00
Peter Maydell
3b64349539 memory: Replace io_mem_read/write with memory_region_dispatch_read/write
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.

(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2015-04-26 16:49:23 +01:00
Paolo Bonzini
9d82b5a792 exec: make iotlb RCU-friendly
After the previous patch, TLBs will be flushed on every change to
the memory mapping.  This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.

With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-16 17:30:19 +01:00
Leon Alrae
55e9409366 softmmu: provide softmmu access type enum
New MIPS features depend on the access type, and an enum is more
convenient than using the numbers directly.

Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
2014-11-03 11:48:34 +00:00
Xin Tong
88e89a57f9 implementing victim TLB for QEMU system emulated TLB
QEMU system mode page table walks are expensive. Measured by running
qemu-system-x86_64 under Intel PIN, a TLB miss plus a walk of the
4-level page tables in a guest Linux OS takes ~450 x86 instructions
on average.

The QEMU system mode TLB is implemented as a directly-mapped
hashtable. This structure suffers from conflict misses. Increasing the
associativity of the TLB may not be the solution to conflict misses,
as all the ways may have to be searched serially.

A victim TLB is a TLB used to hold translations evicted from the
primary TLB upon replacement. The victim TLB lies between the main TLB
and its refill path. The victim TLB is of greater associativity (fully
associative in this patch). It takes longer to look up the victim TLB,
but it is likely better than a full page table walk. The memory
translation path is changed as follows:

Before Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. TLB refill.
5. Do the memory access.
6. Return to code cache.

After Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. Victim TLB lookup.
5. If victim TLB misses, TLB refill
6. Do the memory access.
7. Return to code cache
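
A minimal sketch of steps 4-5 above, with simplified structures (the
real victim TLB is fully associative and swaps entries with the
primary TLB):

    #include <stdint.h>
    #include <stdbool.h>

    #define VTLB_SIZE 8

    typedef struct { uint64_t page; uintptr_t addend; } TLBEntrySketch;

    static TLBEntrySketch vtlb[VTLB_SIZE];

    /* Sketch: probe the victim TLB on a primary-TLB miss; on a hit,
     * swap the entry back into the primary slot. */
    static bool victim_tlb_hit_sketch(TLBEntrySketch *primary,
                                      uint64_t page)
    {
        for (int i = 0; i < VTLB_SIZE; i++) {
            if (vtlb[i].page == page) {
                TLBEntrySketch tmp = *primary;
                *primary = vtlb[i];
                vtlb[i]  = tmp;
                return true;
            }
        }
        return false;  /* miss: fall through to the full refill */
    }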

The advantage is that a victim TLB can offer more associativity to a
directly mapped TLB, and thus potentially fewer page table walks, while
still keeping the time taken to flush within reasonable limits.
However, placing a victim TLB before the refill path lengthens the
refill path, as the victim TLB is consulted before the TLB refill. The
performance results demonstrate that the pros outweigh the cons.

Some performance results, taken on SPECINT2006 train datasets, a
kernel boot, and the qemu configure script on an
Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine, are shown in the
Google Doc linked below.

https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing

In summary, the victim TLB improves the performance of
qemu-system-x86_64 by 11% on average on SPECINT2006, kernel boot, and
the qemu configure script, with the highest improvement of 26% in
456.hmmer. The victim TLB does not result in any performance
degradation in any of the measured benchmarks. Furthermore, the
implemented victim TLB is architecture independent and is expected to
benefit other architectures in QEMU as well.

Although there are measurement fluctuations, the performance
improvement is significant and by no means within the range of noise.

Signed-off-by: Xin Tong <trent.tong@gmail.com>
Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-09-01 17:43:06 +01:00
Paolo Bonzini
58ed270df9 softmmu: move softmmu_template.h out of include/
It is only included in cputlb.c now.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:33 +02:00
Paolo Bonzini
022c62cbbc exec: move include files to include/exec/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:31:31 +01:00
Yeongkyoon Lee
fdbb84d133 tcg: Add extended GETPC mechanism for MMU helpers with ldst optimization
Add GETPC_EXT, which is used by MMU helpers to selectively calculate
the code address of an access to guest memory when called from
qemu_ld/st optimized code or from a C function. Currently, it supports
only i386 and x86-64 hosts.

Signed-off-by: Yeongkyoon Lee <yeongkyoon.lee@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-03 09:44:20 +00:00
Avi Kivity
a8170e5e97 Rename target_phys_addr_t to hwaddr
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes
are reserved), and its purpose doesn't match the name (most
target_phys_addr_t addresses are not target specific).  Replace it
with a finger-friendly, standards-conformant hwaddr.

Outstanding patchsets can be fixed up with the command

  git rebase -i --exec 'find -name "*.[ch]"
                        | xargs sed -i "s/target_phys_addr_t/hwaddr/g"' origin

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-10-23 08:58:25 -05:00
Blue Swirl
89c33337fd Remove unused CONFIG_TCG_PASS_AREG0 and dead code
Now that CONFIG_TCG_PASS_AREG0 is enabled for all targets,
remove dead code and support for !CONFIG_TCG_PASS_AREG0 case.

Remove dyngen-exec.h and all references to it. Although it is included
by hw/spapr_hcall.c, that file does not seem to use it.

Remove unused HELPER_CFLAGS.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-09-15 17:51:14 +00:00
Stefan Weil
b065927a02 w64: Fix data types in softmmu*.h
w64 requires uintptr_t.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:17 +02:00
Blue Swirl
2050396801 Use uintptr_t for various op related functions
Use uintptr_t instead of void * or unsigned long in
several op-related functions, env->mem_io_pc, and the
GETPC() macro.

Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-14 14:23:37 +00:00
Blue Swirl
e141ab52d2 softmmu templates: optionally pass CPUState to memory access functions
Optionally, make memory access helpers take a parameter for CPUState
instead of relying on global env.

On most targets, perform simple moves to reorder registers. On i386,
switch from regparm(3) calling convention to standard stack-based
version.
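
A minimal sketch of the signature change (names hypothetical):

    #include <stdint.h>

    typedef struct CPUStateSketch CPUStateSketch;

    /* old style: helpers reached the CPU state through a global
     * (often register-pinned) `env`:
     *     uint32_t helper_ldl(uint64_t addr);
     * new style: the CPU state is an explicit parameter: */
    uint32_t helper_ldl_sketch(CPUStateSketch *env, uint64_t addr);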

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-18 12:21:52 +00:00
Blue Swirl
6a18ae2d29 i386: Remove REGPARM
Use stack based calling convention (GCC default) for interfacing with
generated code instead of register based convention (regparm(3)).

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-18 12:21:48 +00:00
Avi Kivity
37ec01d433 memory: dispatch directly via MemoryRegion
Instead of indirecting via io_mem_region, dispatch directly
through the MemoryRegion obtained from the iotlb or phys_page_find().

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 19:06:11 +02:00
Avi Kivity
aa102231f0 memory: store section indices in iotlb instead of io indices
A step towards eliminating io indices.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 17:06:55 +02:00
Avi Kivity
11c7ef0c73 Remove IO_MEM_SHIFT
We no longer use any of the lower bits of a ram_addr, so we might as well
use them for the io table index.  This increases the number of potential
I/O handlers by a factor of 8.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
0e0df1e24d Convert IO_MEM_{RAM,ROM,UNASSIGNED,NOTDIRTY} to MemoryRegions
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED,
and IO_MEM_NOTDIRTY io handlers to MemoryRegions.  These aren't real
regions, since they are never added to the memory hierarchy, but they
allow reuse of the dispatch functionality.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
1d393fa2d1 Avoid range comparisons on io index types
The code sometimes uses range comparisons on io indexes (e.g.
index <= IO_MEM_ROM).  Avoid these, as they make moving to objects harder.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
acbbec5d43 memory: move mmio access to functions
Currently mmio access goes directly to the io_mem_{read,write} arrays.
In preparation for eliminating them, add indirection via a function.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Blue Swirl
bccd9ec5f0 softmmu_header: pass CPUState to tlb_fill
Pass CPUState pointer to tlb_fill() instead of architecture local
cpu_single_env hacks.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-01 09:31:26 +00:00
Blue Swirl
efbf29b681 Document softmmu templates
Add some comments to describe each file.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-01 09:31:08 +00:00
Paul Brook
355b194369 Split TLB addend and target_phys_addr_t
Historically the qemu tlb "addend" field was used for both RAM and IO
accesses, so it needed to be able to hold both host addresses (unsigned
long) and guest physical addresses (target_phys_addr_t).  However,
since the introduction of the iotlb field, it has only been used for
RAM accesses.

This means we can change the type of addend to unsigned long, and remove
associated hacks in the big-endian TCG backends.

We can also remove the host dependence from target_phys_addr_t.
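
A minimal sketch of the resulting split, with illustrative names:

    #include <stdint.h>

    typedef uint64_t hwaddr_sketch;  /* guest physical address */

    typedef struct {
        /* RAM accesses only: host-address offset, so a plain
         * unsigned long now suffices */
        unsigned long addend;
        /* IO accesses go through the separate iotlb field */
        hwaddr_sketch iotlb;
    } TLBEntrySplitSketch;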

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-04-05 00:28:53 +01:00
Blue Swirl
29e922b61f Compile qemu-timer only once
Arrange various declarations so that non-CPU code can also access
them; adjust users.

Move CPU specific code to cpus.c.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-03-29 19:24:00 +00:00
Edgar E. Iglesias
794401ca9c softmmu: Don't clobber retaddr in slow_ldx().
When splitting up unaligned IO accesses, ld calls slow_ld, which was
clobbering retaddr.

AFAIK the problem only shows up when running emulations with -icount
that may abort TB execution on IO accesses.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2010-01-28 22:46:13 +01:00
Anthony Liguori
c227f0995e Revert "Get rid of _t suffix"
In the very least, a change like this requires discussion on the list.

The naming convention is goofy and it causes a massive merge problem.  Something
like this _must_ be presented on the list first so people can provide input
and cope with it.

This reverts commit 99a0949b72.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-01 16:12:16 -05:00
malc
99a0949b72 Get rid of _t suffix
Some not-so-obvious bits, slirp, and Xen were left alone for the time
being.

Signed-off-by: malc <av1474@comtv.ru>
2009-10-01 22:45:02 +04:00
Anthony Liguori
4a1418e07b Unbreak large mem support by removing kqemu
kqemu introduces a number of restrictions on the i386 target.  The worst is that
it prevents large memory from working in the default build.

Furthermore, kqemu is fundamentally flawed in a number of ways.  It relies on
the TSC as a time source which will not be reliable on a multiple processor
system in userspace.  Since most modern processors are multicore, this severely
limits the utility of kqemu.

kvm is a viable alternative for people looking to accelerate qemu and has the
benefit of being supported by the upstream Linux kernel.  If someone can
implement workarounds to remove the restrictions introduced by kqemu, I'm
happy to avoid and/or revert this patch.

N.B. kqemu will still function in the 0.11 series but this patch removes it from
the 0.12 series.

Paul, please Ack or Nack this patch.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-08-24 08:02:55 -05:00
Blue Swirl
8167ee8839 Update to a hopefully more future proof FSF address
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-07-16 20:47:01 +00:00
blueswir1
640f42e4e9 kqemu: merge CONFIG_KQEMU and USE_KQEMU
Basically a recursive ":%s/USE_KQEMU/CONFIG_KQEMU/g".

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7189 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-19 10:18:01 +00:00