As the IDEState lba field is an int32_t, make sure we cast to int64_t before
shifting to calculate the offset. Otherwise we end up with an overflow when
trying to access sectors beyond 2GB, as can occur when using DVD images.
[Maintainer edit: fixed extraneous parentheses. --js]
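A minimal sketch of the pattern (illustrative only, not the exact hunk from
this patch; the helper name is made up): the widening cast has to happen
before the shift, otherwise the shift is still evaluated in 32 bits and wraps.

    #include <stdint.h>

    /* Illustrative only: widen first, then shift.  (lba << 9) alone is a
     * 32-bit shift and wraps once the result exceeds 2GB. */
    static int64_t lba_to_byte_offset(int32_t lba)
    {
        return (int64_t)lba << 9;   /* 512-byte sectors */
    }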
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1451928613-29476-1-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
The "return;" statements at the end of functions do not make
much sense, so let's remove them.
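For illustration, this is the kind of pattern being removed (a hypothetical
function, not taken from the tree):

    static void do_work(void);   /* placeholder */

    static void example_handler(void)
    {
        do_work();
        return;   /* redundant: the function ends here anyway */
    }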
Cc: qemu-block@nongnu.org
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
macio-ide is an IDE controller, so add it
to the storage category.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit bd4214fc dropped TRIM support by mistake. Given it is still
advertised to the host when using a drive with discard=on, this causes
the IDE bus to hang when the host issues a TRIM command.
This patch fixes that by re-adding the TRIM code, ported to the
new DMA implementation.
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: John Snow <jsnow@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-id: 1438198068-32428-1-git-send-email-aurelien@aurel32.net
Signed-off-by: John Snow <jsnow@redhat.com>
prepare_buf should not always grab as many descriptors
as it can, sometimes it should self-limit.
For example, an NCQ transfer of 1 sector with a PRDT that
describes 4GiB of data should not copy 4GiB of data; it
should transfer just the first 512 bytes.
PIO is not affected, because the dma_buf_rw DMA helpers
already have a byte limit built into them, but DMA/NCQ
will exhaust the entire list regardless of the requested size.
AHCI 1.3 specifies in section 6.1.6 Command List Underflow that
NCQ is not required to detect underflow conditions. Non-NCQ
pathways signal underflow via the PRDBC field; since we already write
the actual transferred byte count to the PRDBC, the underflow is
signaled there automatically.
Our NCQ pathways aren't required to detect underflow, but since our DMA
backend uses the size of the PRDT to determine the size of the transfer,
if our PRDT is bigger than the transaction (the underflow condition) it
doesn't cost us anything to detect it and truncate the PRDT.
This is a recoverable error and is not signaled to the guest, in either
NCQ or normal DMA cases.
For BMDMA, the existing pathways should see no guest-visible difference,
but any bytes described in the overage will no longer be transferred
before indicating to the guest that there was an underflow.
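A hedged sketch of the clamping idea (the helper name and parameters are
illustrative, not the actual QEMU code): the bytes described by the PRDT are
truncated to the byte count the command actually requested.

    #include <stdint.h>

    /* Illustrative only: an NCQ command for 1 sector with a PRDT describing
     * 4GiB should prepare at most 512 bytes. */
    static int64_t clamp_prdt_bytes(int64_t prdt_bytes, int64_t requested_bytes)
    {
        return prdt_bytes > requested_bytes ? requested_bytes : prdt_bytes;
    }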
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1435767578-32743-2-git-send-email-jsnow@redhat.com
Since the block alignment code is now effectively independent of the DMA
implementation, this variable is no longer required and can be removed.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-5-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
With the offset/len functions taking care of all of the alignment mapping
in isolation from the DMA transaction, many comments are now unnecessary.
Remove these and tidy up a few constants at the same time.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-4-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
In particular, this fixes a bug whereby overlapping head/tail chains
would incorrectly write over each other's remainder cache. This is the access
pattern used by OS X/Darwin and fixes an issue with a corrupt Darwin
installation in my local tests.
While we are here, rename the DBDMA_io struct property remainder to
head_remainder for clarification.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-3-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
For better handling of unaligned block device accesses.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1433455177-21243-2-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
Similarly switch the macio IDE routines over to use the new function and
tidy-up the remaining code as required.
[Maintainer edit: printf format codes adjusted for 32/64bit. --js]
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: John Snow <jsnow@redhat.com>
Message-id: 1425939893-14404-3-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
This considerably helps simplify the complexity of the macio read routines and
by switching macio CDROM accesses to use the new code, fixes the issue with
the CDROM device being detected intermittently by Darwin/OS X.
[Maintainer edit: printf format codes adjusted for 32/64bit. --js]
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: John Snow <jsnow@redhat.com>
Message-id: 1425939893-14404-2-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
Start moving the initial state of the current request to IDEBus, so that
AHCI can use it. The set_unit callback is not used anymore once this is
done.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 1424708286-16483-9-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
With restarts now handled by ide_restart_cb and
the IDEDMAOps.restart_dma() member, remove the old
restart_cb callback.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 1424708286-16483-8-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This impacts both BMDMA and AHCI HBA interfaces for IDE.
Currently, we confuse the difference between a PRDT having
"0 bytes" and a PRDT having "0 complete sectors."
When we receive an incomplete sector, inconsistent error checking
leads to an infinite loop wherein the call succeeds but doesn't
give us enough bytes, so we re-call the DMA chain over and over
again. In the BMDMA case this leaks memory for short PRDTs; in
the AHCI case it causes infinite loops and unbounded resource usage.
The .prepare_buf() callback is reworked to return the number of
bytes that it successfully prepared. 0 is a valid, non-error
answer that means the table was empty and described no bytes.
-1 indicates an error.
Our current implementation uses the io_buffer in IDEState to
ultimately describe the size of a prepared scatter-gather list.
Even though the AHCI PRDT/SGList can be as large as 256GiB, the
AHCI command header limits transactions to just 4GiB. ATA8-ACS3,
however, defines the largest transaction to be an LBA48 command
that transfers 65,536 sectors. With a 512 byte sector size, this
is just 32MiB.
Since our current state structures use the int type to describe
the size of the buffer, and this state is migrated as int32, we
are limited to describing 2GiB buffer sizes unless we change the
migration protocol.
For this reason, this patch begins to unify the assertions in the
IDE pathways that the scatter-gather list provided by either the
AHCI PRDT or the PCI BMDMA PRDs can only describe, at a maximum,
2GiB. This should be resilient enough unless we need a sector
size that exceeds 32KiB.
Further, the likelihood of any guest operating system actually
attempting to transfer this much data in a single operation is
very slim.
To this end, the IDEState variables have been updated to more
explicitly clarify our maximum supported size. Callers to the
prepare_buf callback have been reworked to understand the new
return code, and all versions of the prepare_buf callback have
been adjusted accordingly.
Lastly, the ahci_populate_sglist helper, relied upon by both the AHCI
implementation of .prepare_buf() and the PCI implementation of the
callback, has had overflow assertions added to help make clear the
reasoning behind the various type changes.
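A hedged sketch of the new return convention only (the real callback
signature and caller names are not reproduced here):

    #include <stdint.h>

    /* Illustrative only: how callers now interpret .prepare_buf()'s result. */
    static const char *describe_prepare_buf(int32_t ret)
    {
        if (ret < 0) {
            return "error while preparing the scatter-gather list";
        } else if (ret == 0) {
            return "valid but empty table: no bytes described";
        }
        return "number of bytes successfully prepared";
    }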
[Added %d -> %"PRId64" fix John sent because off_pos changed from int to
int64_t.
--Stefan]
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1414785819-26209-4-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Device models should access their block backends only through the
block-backend.h API. Convert them, and drop direct includes of
inappropriate headers.
Just four uses of BlockDriverState are left:
* The Xen paravirtual block device backend (xen_disk.c) opens images
itself when set up via xenbus, bypassing blockdev.c. I figure it
should go through qmp_blockdev_add() instead.
* Device model "usb-storage" prompts for keys. No other device model
does, and this one probably shouldn't do it, either.
* ide_issue_trim_cb() uses bdrv_aio_discard() instead of
blk_aio_discard() because it fishes its backend out of a BlockAIOCB,
which has only the BlockDriverState.
* PC87312State has an unused BlockDriverState[] member.
The next two commits take care of the latter two.
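The shape of the conversion, sketched with the argument lists deliberately
omitted so no exact signatures are claimed:

    /* before: the device model pokes at the BlockDriverState directly */
    /*   bdrv_aio_readv(s->bs, ...);                                   */
    /* after: it goes through the block-backend.h API instead          */
    /*   blk_aio_readv(s->blk, ...);                                   */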
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
I'll use it with block backends shortly, and the name is going to fit
badly there. It's a block layer thing anyway, not just a block driver
thing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This is the next step for decoupling block accounting functions from
BlockDriverState.
In a future commit the BlockAcctStats structure will be moved from
BlockDriverState to the device model structures.
Note that bdrv_get_stats was introduced so device models can retrieve the
BlockAcctStats structure of a BlockDriverState without being aware of its
layout.
This function should go away when BlockAcctStats is embedded in the
device model structures.
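A hedged usage sketch (the request structure "req", its accounting cookie and
the byte count are placeholders; only bdrv_get_stats itself is the new
function described above):

    /* Illustrative only: the device model fetches the stats object without
     * knowing how it is laid out inside the BlockDriverState. */
    BlockAcctStats *stats = bdrv_get_stats(bs);
    block_acct_start(stats, &req->acct, bytes, BLOCK_ACCT_READ);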
CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Keith Busch <keith.busch@intel.com>
CC: Anthony Liguori <aliguori@amazon.com>
CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Eric Blake <eblake@redhat.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Michael Tokarev <mjt@tls.msk.ru>
CC: John Snow <jsnow@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Alexander Graf <agraf@suse.de>
CC: Max Reitz <mreitz@redhat.com>
Signed-off-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The medium-term goal is to move the BlockAcctStats structure into the device models.
(Capturing I/O accounting statistics in the device models is good for billing.)
This patch makes a small step in this direction by removing a reference to BDRV.
CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Keith Busch <keith.busch@intel.com>
CC: Anthony Liguori <aliguori@amazon.com>
CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: John Snow <jsnow@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Markus Armbruster <armbru@redhat.com>
CC: Alexander Graf <agraf@suse.de>
Signed-off-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It is now called only after the set_inactive callback. Put the two together.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the unused return value and make the callback optional.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the unused return value and make the callback optional.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the unused return value and make the callback optional.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The macio IDE controller has some pretty nasty magic in its implementation to
allow for unaligned sector accesses. We used to handle these accesses
synchronously inside the IO callback handler.
However, the block infrastructure changed beneath our feet and now it's impossible
to call a synchronous block read/write from the aio callback handler of a
previous block access.
Work around that limitation by making the unaligned handling bits also go
through our asynchronous handler.
This fixes booting Mac OS X for me.
Reported-by: John Arbuckle <programmingkidx@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the macio DMA routines assume that all DMA requests are for read/write
block transfers. This is not always the case for ATAPI, for example when
requesting a TOC where the response is generated directly in the IDE buffer.
Detect these non-block ATAPI DMA transfers (where no lba is specified in the
command) and copy the results directly into RAM as indicated by the DBDMA
descriptor. This fixes CDROM access under MorphOS.
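A hedged sketch of the detection (the field and helper names are illustrative
placeholders, not the committed code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: a DMA reply generated in the IDE buffer (e.g. a TOC)
     * has no LBA, so it is copied straight into the RAM described by the
     * DBDMA descriptor instead of being treated as a block transfer. */
    static bool atapi_dma_is_nonblock(int64_t lba)
    {
        return lba == -1;
    }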
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Alexander Graf <agraf@suse.de>
After Peter's previous patch, they are redundant. This way we don't
assign them except when needed. Once there, there were lots of cases
where the ".fields" indentation was wrong:
        .fields = (VMStateField []) {
and
        .fields      = (VMStateField []) {
Change all the combinations to:
    .fields = (VMStateField[]){
The biggest problem (apart from aesthetics) was that checkpatch complained
when we copy&pasted the code from one place to another.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Fix a number of warnings for 32 bit builds (tested on MinGW and Linux):
CC hw/ide/macio.o
qemu/hw/ide/macio.c: In function 'pmac_ide_atapi_transfer_cb':
qemu/hw/ide/macio.c:134:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'hwaddr' [-Werror=format]
qemu/hw/ide/macio.c: In function 'pmac_ide_transfer_cb':
qemu/hw/ide/macio.c:215:5: error: format '%ld' expects argument of type 'long int', but argument 5 has type 'int64_t' [-Werror=format]
qemu/hw/ide/macio.c:222:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'hwaddr' [-Werror=format]
qemu/hw/ide/macio.c:264:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'hwaddr' [-Werror=format]
cc1: all warnings being treated as errors
make: *** [hw/ide/macio.o] Error 1
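The usual fix pattern for these warnings, shown only as an illustration (the
variables are placeholders, not the exact hunks): use width-correct format
macros rather than %lx/%ld.

    /* HWADDR_PRIx comes from QEMU's hwaddr definitions, PRId64 from
     * <inttypes.h>; both stay correct on 32-bit and 64-bit hosts. */
    printf("addr 0x%" HWADDR_PRIx " len %" PRId64 "\n", addr, len);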
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
A DMA request can happen for data that hasn't been completely
provided by the IDE core yet. For example
- DBDMA request for 0x1000 bytes
- IDE request for 1 sector
- DBDMA wants to read 0x1000 bytes (8 sectors) from bdrv
- breakage
Instead, we should truncate our bdrv request to the maximum number
of sectors we're allowed to read at that given time. Once that transfer
is through, we will fall into our recently introduced waiting logic.
- DBDMA requests for 0x1000 bytes
- IDE request for 1 sector
- DBDMA wants to read MIN(0x1000, 1 * 512) bytes
- DBDMA finishes reading, indicates to IDE core that transfer is complete
- IDE request for 7 sectors
- DBDMA finishes the DMA
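A minimal sketch of the truncation (illustrative names and constants, not the
committed hunk):

    #include <stdint.h>

    /* Illustrative only: never ask the block layer for more than the IDE core
     * has promised for the current transfer, even if the DBDMA descriptor
     * wants more. */
    static uint64_t dbdma_request_len(uint64_t dbdma_len, uint64_t ide_nsector)
    {
        uint64_t ide_len = ide_nsector * 512;               /* bytes the IDE core allows */
        return dbdma_len < ide_len ? dbdma_len : ide_len;   /* e.g. MIN(0x1000, 1 * 512) */
    }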
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Alexander Graf <agraf@suse.de>
The DBDMA engine really just reads bytes from a producing device (IDE
in our case) and shoves these bytes into memory. It doesn't care whether
any alignment takes place or not.
Our code today however assumes that block accesses always happen on
sector (512 byte) boundaries. This is a fair assumption for most cases.
However, Mac OS X really likes to do unaligned, incomplete accesses
that it finishes with the next DMA request.
So we need to read / write the unaligned bits independent of the actual
asynchronous request, because that one can only handle 512-byte-aligned
data. We also need to cache these unaligned sectors until the next DMA
request, at which point the data might be successfully flushed from the
pipe.
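A hedged sketch of the split (the helper is a placeholder): only the
512-byte-aligned middle of a request goes through the normal asynchronous
path, while the unaligned head bytes are handled separately and cached for
the next DMA request.

    #include <stdint.h>

    /* Illustrative only: number of unaligned bytes at the start of a request
     * that must be handled (and possibly cached) outside the aligned path. */
    static uint64_t unaligned_head_len(uint64_t offset)
    {
        return (512 - (offset & 511)) & 511;   /* 0 when offset is already sector-aligned */
    }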
Signed-off-by: Alexander Graf <agraf@suse.de>
We should only start processing DMA requests when we have data to process.
Hold off working through the DMA shuffling until the IDE core has told us
that it's ready.
This is required because the guest can program the DMA engine or the IDE
transfer first. Both are legal.
Signed-off-by: Alexander Graf <agraf@suse.de>
We need to know when the IDE core starts a DMA transfer. Add a notifier
function so we have the chance to start transmitting data.
Signed-off-by: Alexander Graf <agraf@suse.de>
The macio code is basically undebuggable as it stands today, with no
debug prints anywhere whatsoever. DBDMA was better, but I needed a
few more to create reasonable logs that tell me where breakage is.
Add a DPRINTF macro in the macio source file and add a bunch of debug
prints that are all disabled by default of course.
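The macro follows the usual QEMU pattern, roughly like this (a sketch, not
necessarily the exact macro added by this patch):

    #include <stdio.h>

    /* #define DEBUG_MACIO */
    #ifdef DEBUG_MACIO
    #define MACIO_DPRINTF(fmt, ...) fprintf(stderr, "MACIO: " fmt, ## __VA_ARGS__)
    #else
    #define MACIO_DPRINTF(fmt, ...) do { } while (0)
    #endif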
Signed-off-by: Alexander Graf <agraf@suse.de>
On a real G3 Beige the secondary IDE bus lives on the mac-io chip, not
on some random PCI device. Move it there to become more compatible.
While at it, also clean up the IDE channel connection logic.
Signed-off-by: Alexander Graf <agraf@suse.de>
The DMAContext is a simple pointer to an AddressSpace that is now always
already available. Make everyone hold the address space directly,
and clean up the DMA API to use the AddressSpace directly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 215e47b9 enabled TRIM by default, which revealed a bug in TRIM
support for the IDE macio emulation driver, introduced in d353fb72.
The call to dma_bdrv_io() is using a wrong opaque of type IDEState
instead of DBDMA_io. This patch fixes that.
Fixes LP#1179104
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
An IDE bus provided by AHCI can only take a single IDE drive. If you add
a drive as slave, qemu used to accept the command line but the device
wouldn't actually be usable. Catch the situation instead and error out.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It was not qdev'ified before. Turn it into a SysBusDevice.
Embed them into the MacIO devices.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Pass qemu_sglist_init the global dma_context_memory rather than a NULL
pointer; this fixes a segfault in dma_memory_map() when the guest
starts using DMA.
Reported-by: Amadeusz Sławiński <amade@asmblr.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
reserved) and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific). Replace it with a finger-friendly,
standards conformant hwaddr.
Outstanding patchsets can be fixed up with a command along the lines of
git rebase -i --exec "find . -name '*.[ch]' | xargs sed -i 's/target_phys_addr_t/hwaddr/g'" origin
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This patch cleans up redundant return statements at the end of void functions.
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
dma-helpers.c contains a number of helper functions for doing
scatter/gather DMA, and various block device related DMA. Currently,
these directly access guest memory using cpu_physical_memory_*(),
assuming no IOMMU translation.
This patch updates this code to use the new universal DMA helper
functions. qemu_sglist_init() now takes a DMAContext * to describe
the DMA address space in which the scatter/gather will take place.
We minimally update the callers of qemu_sglist_init() to pass NULL
(i.e. no translation, same as current behaviour). Some of those
callers should pass something else in some cases to allow proper IOMMU
translation in future, but that will be fixed in later patches.
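A hedged sketch of what a minimal caller update looks like (the surrounding
request setup is omitted; only the extra DMA-context argument is the point):

    /* before: qemu_sglist_init(&qsg, alloc_hint);       */
    /* after:  the extra argument names the DMA context; */
    /*         NULL keeps the old "no translation" path. */
    qemu_sglist_init(&qsg, alloc_hint, NULL);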
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>