Commit Graph

5 Commits

Author SHA1 Message Date
Phil Reid 29e3e06d89 iio: buffer-dma: Use ARRAY_SIZE in for loop range
Use the ARRAY_SIZE macro in the for loops that access queue->fileio.blocks.
The macro is already used in a couple of places where this access occurs,
but the range was hardcoded in these locations.
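
Roughly, the change looks like the sketch below (loop bodies abbreviated):

  /* before: loop bound hardcoded to the current length of the array */
  for (i = 0; i < 2; i++)
          block = queue->fileio.blocks[i];

  /* after: bound derived from the array itself via ARRAY_SIZE() */
  for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++)
          block = queue->fileio.blocks[i];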

Signed-off-by: Phil Reid <preid@electromag.com.au>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2016-06-27 21:06:40 +01:00
Lars-Peter Clausen 9d452184fc iio: buffer-dmaengine: Use dmaengine_terminate_sync()
The DMAengine framework gained support for synchronized transfer
termination. Use the new dmaengine_terminate_sync() function instead of
dmaengine_terminate_all(); this avoids a potential race condition when
disabling the buffer.
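
For reference, the abort path after this change looks roughly like the
sketch below (unrelated details trimmed):

  static void iio_dmaengine_buffer_abort(struct iio_dma_buffer_queue *queue)
  {
          struct dmaengine_buffer *dmaengine_buffer =
                  iio_buffer_to_dmaengine_buffer(&queue->buffer);

          /*
           * Unlike dmaengine_terminate_all(), this does not return until
           * the terminate has completed and any in-flight completion
           * callbacks have finished running.
           */
          dmaengine_terminate_sync(dmaengine_buffer->chan);
          iio_dma_buffer_block_list_abort(queue, &dmaengine_buffer->active);
  }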

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2016-02-09 21:05:17 +00:00
Lars-Peter Clausen 2d6ca60f32 iio: Add a DMAengine framework based buffer
Add a generic, fully device-independent DMA buffer implementation that uses
the DMAengine framework to perform the DMA transfers. This can be used by
converter drivers that wish to provide a DMA buffer for converters that
are connected to a DMA core implementing the DMAengine API.

Apart from allocating the buffer using iio_dmaengine_buffer_alloc() and
freeing it using iio_dmaengine_buffer_free(), no additional converter-driver
specific code is required when using this DMA buffer implementation.
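
For a converter driver, the hookup is then essentially just the following
hypothetical probe fragment (the "rx" channel name and the error-handling
style are assumptions):

  struct iio_buffer *buffer;

  /* request the DMA channel and set up the dmaengine-backed buffer */
  buffer = iio_dmaengine_buffer_alloc(indio_dev->dev.parent, "rx");
  if (IS_ERR(buffer))
          return PTR_ERR(buffer);

  indio_dev->modes |= INDIO_BUFFER_HARDWARE;
  iio_device_attach_buffer(indio_dev, buffer);

  /* ... and in remove(): iio_dmaengine_buffer_free(buffer); */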

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2015-10-25 13:55:32 +00:00
Lars-Peter Clausen 670b19ae9b iio: Add generic DMA buffer infrastructure
The traditional approach used in IIO to implement buffered capture requires
the generation of at least one interrupt per sample. In the interrupt
handler the driver reads the sample from the device and copies it to a
software buffer. This approach has a rather large per-sample overhead
associated with it. While it works fine for sample rates of up to roughly
1000 samples per second, it starts to consume a rather large share of the
available CPU processing time once we go beyond that; this is especially
true on embedded systems with limited processing power. The regular
interrupt also increases power consumption by not allowing the hardware to
enter deeper sleep states, which becomes more and more important on mobile,
battery-powered devices.

And while the recently added watermark support mitigates some of these
issues by allowing the device to generate interrupts at a rate lower than
the data output rate, it still requires a storage buffer inside the device,
and even where such a buffer exists it is typically only a few hundred
samples deep.

DMA support, on the other hand, makes it possible to capture millions of
samples or more without any CPU interaction. This allows the CPU to either
sleep for longer periods or focus on other tasks, which improves overall
system performance and reduces power consumption. In addition, some devices
might not even offer a way to read the data other than via DMA, which makes
DMA support mandatory for them.

The tasks involved in implementing a DMA buffer can be divided into two
categories. The first category is memory buffer management (allocation,
mapping, etc.) and hooking this up to the IIO buffer callbacks like read(),
enable(), disable(), etc. The second category of tasks is to set up the
DMA hardware and manage the DMA transfers. Tasks from the first category
will be very similar for all IIO drivers supporting DMA buffers, while the
tasks from the second category will be hardware-specific.

This patch implements a generic infrastructure that takes care of the first
category of tasks. It provides a set of functions that implement the
standard IIO buffer iio_buffer_access_funcs callbacks. These can either be
used as is or be overloaded and augmented with driver-specific code where
necessary.
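
For example, a buffer driver built on top of this infrastructure can fill
its iio_buffer_access_funcs mostly with the generic helpers. A sketch (the
exact set of overridden callbacks, and my_buffer_release(), are driver
specific and assumed here):

  static const struct iio_buffer_access_funcs my_dma_buffer_access_funcs = {
          .read_first_n = iio_dma_buffer_read,
          .set_bytes_per_datum = iio_dma_buffer_set_bytes_per_datum,
          .set_length = iio_dma_buffer_set_length,
          .request_update = iio_dma_buffer_request_update,
          .enable = iio_dma_buffer_enable,
          .disable = iio_dma_buffer_disable,
          .data_available = iio_dma_buffer_data_available,
          .release = my_buffer_release,   /* driver specific */

          .modes = INDIO_BUFFER_HARDWARE,
  };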

For the DMA buffer support infrastructure introduced in this series, sample
data is grouped into so-called blocks. A block is the basic unit at which
data is exchanged between the application and the hardware. The application
is responsible for allocating the memory associated with the block and then
passes the block to the hardware. When the hardware has captured a number of
samples equal to the size of a block, it notifies the application, which can
then read the data from the block and process it. The block size can be
freely chosen (within the constraints of the hardware), which allows a
trade-off to be made between latency and management overhead: the larger the
block size, the lower the per-sample overhead, but the longer the latency
between when the data is captured and when the application can access it;
conversely, smaller block sizes have a larger per-sample management overhead
but a lower latency. The ideal block size thus depends on system and
application requirements.
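
As a rough illustration (assuming a single channel of 16-bit samples at
1 Msample/s): a 1 MiB block holds about half a second of data, so per-block
overhead is paid only twice per second but a freshly captured sample can be
up to ~0.5 s old before the application sees it, whereas a 4 KiB block is
handed back roughly every 2 ms at the cost of about 256 times as many block
completions.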

For the time being the infrastructure only implements a simple
double-buffered scheme which allocates two blocks, each half the configured
buffer size. This provides basic support for capturing continuous,
uninterrupted data over the existing file-IO ABI. Future extensions to the
DMA buffer infrastructure will give applications more fine-grained control
over how many blocks are allocated and the size of each block. But this
requires userspace ABI additions, which are intentionally not part of this
patch and will be added separately.

Tasks of the second category need to be implemented by a device-specific
driver. They can be hooked up to the generic infrastructure using two
simple callbacks, submit() and abort().

The submit() callback is used to schedule DMA transfers for blocks. Once a
DMA transfer has completed, the buffer driver is expected to call
iio_dma_buffer_block_done() to notify the core. The abort() callback is
used for stopping all pending and active DMA transfers when the buffer is
disabled.
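
A device-specific buffer driver therefore ends up looking roughly like the
sketch below; the my_hw_* helpers and the completion callback are
hypothetical stand-ins for the actual hardware programming, only
struct iio_dma_buffer_ops and iio_dma_buffer_block_done() come from this
patch:

  /* completion callback invoked by the (hypothetical) hardware layer */
  static void my_transfer_done(void *data)
  {
          struct iio_dma_buffer_block *block = data;

          block->bytes_used = block->size;
          iio_dma_buffer_block_done(block);       /* hand the block back */
  }

  static int my_buffer_submit(struct iio_dma_buffer_queue *queue,
          struct iio_dma_buffer_block *block)
  {
          /* program the controller with the block's DMA address and size */
          return my_hw_start_transfer(block->phys_addr, block->size,
                                      my_transfer_done, block);
  }

  static void my_buffer_abort(struct iio_dma_buffer_queue *queue)
  {
          my_hw_stop_transfers();         /* stop pending and active DMA */
  }

  static const struct iio_dma_buffer_ops my_dma_buffer_ops = {
          .submit = my_buffer_submit,
          .abort = my_buffer_abort,
  };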

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2015-10-25 13:54:34 +00:00
Lars-Peter Clausen 8548a63b37 iio: Move generic buffer implementations to sub-directory
For generic IIO trigger implementations we already have a sub-directory,
but the generic buffer implementations currently reside in the IIO
top-level directory. The main reason is that things have historically grown
into this form.

With more generic buffer implementations on their way, now is the perfect
time to clean this up and introduce a sub-directory for generic buffer
implementations to avoid too much clutter in the top-level directory.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2015-08-16 10:51:21 +01:00