Commit Graph

3608 Commits

Colin Ian King dd54006fed dmaengine: timb_dma: fix spelling mistake: "Couldnt" -> "Couldn't"
Trivial fix to spelling mistake in dev_err error message text.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-11 08:57:38 +05:30
Adam Wallis 6f6a23a213 dmaengine: dmatest: move callback wait queue to thread context
Commit adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
introduced a bug (that is in fact documented by the patch commit text)
that leaves behind a dangling pointer. Since the done_wait structure is
allocated on the stack, future invocations to the DMATEST can produce
undesirable results (e.g., corrupted spinlocks).

Commit a9df21e34b ("dmaengine: dmatest: warn user when dma test times
out") attempted to WARN the user that the stack was likely corrupted but
did not fix the actual issue.

This patch fixes the issue by pushing the wait queue and callback
structs into the thread structure. If a failure occurs due to a timeout,
dmaengine_terminate_all will force the callback to safely call
wake_up_all() without the possibility of using a freed pointer.
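
Illustrative sketch of the idea (struct and field names are placeholders,
not dmatest's actual code): the wait queue and completion flag live in the
long-lived per-thread structure instead of on the stack of the function
that submits the transfer, so a late callback cannot touch freed memory.

  struct test_thread {
          struct task_struct      *task;
          wait_queue_head_t       done_wait;      /* previously a stack variable */
          bool                    done;
  };

  static void test_dma_callback(void *arg)
  {
          struct test_thread *thread = arg;

          thread->done = true;
          /* Safe even after a timeout: done_wait outlives the transfer. */
          wake_up_all(&thread->done_wait);
  }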

Cc: stable@vger.kernel.org
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=197605
Fixes: adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Suggested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-11 08:46:24 +05:30
Paul E. McKenney 98c1ec7cef drivers/dma/ioat: Remove now-redundant smp_read_barrier_depends()
Now that READ_ONCE() implies smp_read_barrier_depends(), the
__cleanup() and ioat_abort_descs() functions no longer need their
smp_read_barrier_depends() calls, which this commit removes.
It is actually not entirely clear why this driver ever included
smp_read_barrier_depends() given that it appears to be x86-only and
given that smp_read_barrier_depends() has no effect whatsoever except
on DEC Alpha.
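
For context, the pattern being simplified looks roughly like this (a generic
sketch, not the ioat code itself):

  /* Before: dependency ordering provided by an explicit barrier. */
  desc = chan->ring[idx];
  smp_read_barrier_depends();
  size = desc->hw->size;

  /* After: READ_ONCE() now implies smp_read_barrier_depends(). */
  desc = READ_ONCE(chan->ring[idx]);
  size = desc->hw->size;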

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: <dmaengine@vger.kernel.org>
2017-12-05 11:57:53 -08:00
Vasyl Gomonovych a8ffa34fa5 dmaengine: mic_x100_dma: Use PTR_ERR_OR_ZERO()
Fix ptr_ret.cocci warnings:
drivers/dma/mic_x100_dma.c:483:1-3: WARNING: PTR_ERR_OR_ZERO can be used

Use PTR_ERR_OR_ZERO rather than if(IS_ERR(...)) + PTR_ERR

Generated by: scripts/coccinelle/api/ptr_ret.cocci
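
The shape of the transform (variable name illustrative):

  /* Before */
  if (IS_ERR(chan))
          return PTR_ERR(chan);
  return 0;

  /* After */
  return PTR_ERR_OR_ZERO(chan);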

Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:40:38 +05:30
Peter Ujfalusi 2c6929d2ea dmaengine: s3c24xx-dma: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi 3ee7e42f3c dmaengine: k3dma: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi 397c59bce6 dmaengine: img-mdc-dma: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi 47d71bc75d dmaengine: amba-pl08x: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi f0dd52c85d dmaengine: dma-jz4780: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi de92436ac4 dmaengine: bcm2835-dma: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi 174334bcd9 dmaengine: edma: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi b1faf0f564 dmaengine: omap-dma: Use vchan_terminate_vdesc() instead of desc_free
To avoid a race with vchan_complete, use the race-free way to terminate the
running transfer.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi 1c7f072d94 dmaengine: virt-dma: Support for race free transfer termination
Even with the introduced vchan_synchronize() we can face a race when
terminating a cyclic transfer.

If terminate_all is called after the interrupt handler has called
vchan_cyclic_callback(), but before the vchan_complete tasklet is called:
vc->cyclic is set to the cyclic descriptor, but the descriptor itself was
freed up in the driver's terminate_all() callback.
When vchan_complete() is executed it will try to fetch the vc->cyclic
vdesc, but the pointer now points to uninitialized memory, leading to a
(hard to reproduce) kernel crash.

In order to fix this, drivers should:
- call vchan_terminate_vdesc() from their terminate_all callback instead
  of calling their free_desc function to free up the descriptor.
- implement the device_synchronize callback and call vchan_synchronize().

This way we can make sure that the descriptor is only going to be freed up
after the vchan_callback was executed in a safe manner.
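
As an illustration, the resulting driver pattern looks roughly like this
(foo_chan/to_foo_chan are placeholder names; vchan_terminate_vdesc(),
vchan_get_all_descriptors(), vchan_dma_desc_free_list() and
vchan_synchronize() are the virt-dma helpers described above):

  static int foo_terminate_all(struct dma_chan *chan)
  {
          struct foo_chan *c = to_foo_chan(chan);
          unsigned long flags;
          LIST_HEAD(head);

          spin_lock_irqsave(&c->vc.lock, flags);
          if (c->desc) {
                  /* Defer freeing: vchan_complete may still reference it. */
                  vchan_terminate_vdesc(&c->desc->vd);
                  c->desc = NULL;
          }
          vchan_get_all_descriptors(&c->vc, &head);
          spin_unlock_irqrestore(&c->vc.lock, flags);
          vchan_dma_desc_free_list(&c->vc, &head);
          return 0;
  }

  static void foo_synchronize(struct dma_chan *chan)
  {
          struct foo_chan *c = to_foo_chan(chan);

          /* Waits for vchan_complete and frees the terminated descriptor. */
          vchan_synchronize(&c->vc);
  }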

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Peter Ujfalusi 6af149d2b1 dmaengine: virt-dma: Add helper to free/reuse a descriptor
The vchan_vdesc_fini() can be used to free or reuse a given descriptor
after it has been marked as completed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-12-04 22:33:51 +05:30
Gustavo A. R. Silva 62a277d43d dmaengine: at_hdmac: fix potential NULL pointer dereference in atc_prep_dma_interleaved
_xt_ is being dereferenced before it is null checked, hence there is a
potential null pointer dereference.

Fix this by moving the pointer dereference after _xt_ has been null
checked.
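
The general shape of the fix (the dereference and condition shown are
illustrative, not the exact driver code):

  /* Before: _xt_ dereferenced before the check. */
  first = xt->sgl;
  if (!xt || !xt->numf)
          return NULL;

  /* After: check first, dereference only once xt is known to be non-NULL. */
  if (!xt || !xt->numf)
          return NULL;
  first = xt->sgl;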

This issue was detected with the help of Coccinelle.

Fixes: 4483320e24 ("dmaengine: Use Pointer xt after NULL check.")
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-29 19:48:17 +05:30
Christophe JAILLET 5c9afbda91 dmaengine: ioat: Fix error handling path
If the last test in 'ioat_dma_self_test()' fails, we must release all
the allocated resources and not just part of them.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-29 19:47:46 +05:30
Kuninori Morimoto 73a47bd0da dmaengine: rcar-dmac: use TCRB instead of TCR for residue
SYS/RT/Audio DMAC includes independent data buffers for reading
and writing. Therefore, the read transfer counter and write transfer
counter have different values.
TCR indicates read counter, and TCRB indicates write counter.
The relationship is like below.

	        TCR       TCRB
	[SOURCE] -> [DMAC] -> [SINK]

In the MEM_TO_DEV direction, what really matters is how much data has
been written to the device. If the DMA is interrupted between read and
write, then, the data doesn't end up in the destination, so shouldn't
be counted. TCRB is thus the register we should use in this case.

In the DEV_TO_MEM direction, the situation is more complex. Both the
read and write side are important. What matters from a data consumer
point of view is how much data has been written to memory.
On the other hand, if the transfer is interrupted between read and
write, we'll end up losing data. It can also be important to report.

In the MEM_TO_MEM direction, what matters is of course how much data
has been written to memory from the data consumer's point of view.
Here, because read and write have independent data buffers, it will
take a while for TCR and TCRB to become equal. Thus we should check
TCRB in this case, too.

Thus, in all cases we should check TCRB instead of TCR.
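
Schematically, the residue calculation changes along these lines (a
simplified sketch; rcar_dmac_chan_read() and the register names follow the
text above, the surrounding bookkeeping is omitted):

  /* Before: residue based on the read-side counter. */
  residue += rcar_dmac_chan_read(chan, RCAR_DMATCR) << desc->xfer_shift;

  /* After: residue based on the write-side counter, i.e. data that has
   * actually reached the sink. */
  residue += rcar_dmac_chan_read(chan, RCAR_DMATCRB) << desc->xfer_shift;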

Without this patch, Sound Capture has noise after PulseAudio support
(= 07b7acb51d ("ASoC: rsnd: update pointer more accurate")), because
the recorder will use the wrong residue counter, which indicates the
amount transferred from the sound device; in reality the data has not
yet been put into memory, yet the recorder will record it.

However, because the DMAC buffers data until it reaches a transferable
size, TCRB might not be updated.
For example, if the consumer doesn't know how much data it will receive,
it requests a large enough size from the DMAC. But in reality, it might
receive very little data. In such a case, the DMAC just buffers it until
it reaches a transferable size, and TCRB is not updated.

In such a case, this buffered data will be transferred when the CHCR::DE
bit is cleared, which happens in rcar_dmac_chan_halt(). In other words, it
happens when the consumer calls dmaengine_terminate_all().

Because of this behavior, the driver needs to flush the buffered data when
it returns the "residue" (= dmaengine_tx_status()).
Otherwise, the consumer might calculate the wrong values if it calls
dmaengine_tx_status() and dmaengine_terminate_all() consecutively.

Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Tested-by: Ryo Kodama <ryo.kodama.vz@renesas.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-29 19:42:57 +05:30
Kuninori Morimoto a8d46a7f5d dmaengine: rcar-dmac: ensure CHCR DE bit is actually 0 after clearing
The DMAC reads data from the source device and buffers it until it reaches
a transferable size for the sink device. Because of this behavior, the DMAC
may be holding buffered data.

Now, the CHCR DE bit controls DMA transfer enable/disable.

If the DE bit is cleared during a data transfer, or during buffering, the
DMAC will flush the buffered data if the source device is a peripheral
device (the buffered data will be removed if the source device is memory).
Because of this behavior, the driver should ensure that the DE bit is
actually 0 after clearing it.

This patch adds a new rcar_dmac_chcr_de_barrier() and calls it after CHCR
register accesses.
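
A hedged sketch of what such a barrier helper can look like (the polling
loop details shown here are illustrative):

  static void rcar_dmac_chcr_de_barrier(struct rcar_dmac_chan *chan)
  {
          unsigned int i;

          /*
           * The DE bit may stay set for a short while after clearing it
           * while the DMAC flushes buffered data; poll until it is 0.
           */
          for (i = 0; i < 1024; i++) {
                  u32 chcr = rcar_dmac_chan_read(chan, RCAR_DMACHCR);

                  if (!(chcr & RCAR_DMACHCR_DE))
                          return;
                  udelay(1);
          }

          dev_err(chan->chan.device->dev, "CHCR DE check error\n");
  }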

Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Tested-by: Ryo Kodama <ryo.kodama.vz@renesas.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-29 19:42:57 +05:30
Dmitry Osipenko f6160f3598 dmaengine: tegra-apb: Support non-flow controlled slave configuration
This allows a DMA client to issue a non-flow-controlled TX. In particular
it is needed for the fuse driver, which reads fuse registers using the
APBDMA to work around a HW bug that results in a hang when the CPU and DMA
perform simultaneous accesses to the fuse peripheral.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-29 19:35:05 +05:30
Linus Torvalds 23c258763b dmaengine updates for 4.15-rc1
Updates for this cycle include:
 - New driver for Spreadtrum dma controller, ST MDMA and DMAMUX controllers
 - PM support for IMG MDC drivers
 - Updates to bcm-sba-raid driver and improvements to sun6i driver
 - Subsystem conversion for:
   - timers to use timer_setup()
   - remove usage of PCI pool API
   - usage of %p format specifier
 - Minor updates to bunch of drivers
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJaCn48AAoJEHwUBw8lI4NHTe8P/RpJH8tDat/joT7Hl71stEod
 vKa0iSkW2fdwd6PeaRfd+UTloska1NE9rdgfh8pCVveoHjCPQBVBOC7V8DbMtlsi
 /IlJjFT74wl2R1aSHcSGoLGsIEyurz+9SK88qCU54OQSjVHSnfmyGI4ycTLQGH9U
 zce5JHWHB5MkdftM4eJaSE/t0Md1DBkxadFSQRkwQqqDqoLE7jgJUK0TADRukQqS
 fsDYPh/OhYAizAHlmEGuLZQheN0ld5W7n1sGsEnBD88wtBMvYHzAwT17B+BobxEp
 jyaoE5nV4AgqWh1mvixrmgKoj2KL3DDC+QeoHYCExdcgIrvc86xN3homx9g9y38a
 b99pgDDvXjw4N7S6AmRyQlm/5D0QyjUaoHgGklsaR3ix81dFwDY15aZa8/uQ4EAT
 iKH8DxAgOq6aG1MkUycQ/7QTenRbN4yWQQa+Mm5ncoNU8bpazyxf2l5L9OJWpFjX
 Q6VagNim+plGeUhpJ4IEfPi7LChXFaYsb1D7A/dqpIRvaYzwsy80b/DNhobGMDF6
 eTpny64AKHnozWw/KP5k3DfcYvoU/ytcSsWf8h+CPN7EdLMBqUXFgkVwtyf6WKNc
 UPl+2in08GLgfGb+n2IAdaQzlJ4dK2P7f7mx0T4OvRymu35HXd8nJjmMJ5ZyBr1t
 Z/0JVfcA66AL+XSt179C
 =t9Ix
 -----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "Updates for this cycle include:

   - new driver for Spreadtrum dma controller, ST MDMA and DMAMUX
     controllers

   - PM support for IMG MDC drivers

   - updates to bcm-sba-raid driver and improvements to sun6i driver

   - subsystem conversion for:
      - timers to use timer_setup()
      - remove usage of PCI pool API
      - usage of %p format specifier

   - minor updates to bunch of drivers"

* tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (49 commits)
  dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
  dmaengine: dmatest: warn user when dma test times out
  dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue"
  dmaengine: stm32_mdma: activate pack/unpack feature
  dmaengine: at_hdmac: Remove unnecessary 0x prefixes before %pad
  dmaengine: coh901318: Remove unnecessary 0x prefixes before %pad
  MAINTAINERS: Step down from a co-maintaner of DW DMAC driver
  dmaengine: pch_dma: Replace PCI pool old API
  dmaengine: Convert timers to use timer_setup()
  dmaengine: sprd: Add Spreadtrum DMA driver
  dt-bindings: dmaengine: Add Spreadtrum SC9860 DMA controller
  dmaengine: sun6i: Retrieve channel count/max request from devicetree
  dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs
  dmaengine: bcm-sba-raid: Use common GPL comment header
  dmaengine: bcm-sba-raid: Use only single mailbox channel
  dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock
  dmaengine: pl330: fix descriptor allocation fail
  dmaengine: rcar-dmac: use TCRB instead of TCR for residue
  dmaengine: sun6i: Add support for Allwinner A64 and compatibles
  arm64: allwinner: a64: Add devicetree binding for DMA controller
  ...
2017-11-14 16:49:31 -08:00
Vinod Koul cecd5fc551 Merge branch 'topic/xilinx' into for-linus 2017-11-14 10:37:28 +05:30
Vinod Koul 40b4ed1a1a Merge branch 'topic/timer_api' into for-linus 2017-11-14 10:37:18 +05:30
Vinod Koul 8e6c1db351 Merge branch 'topic/ti' into for-linus 2017-11-14 10:37:13 +05:30
Vinod Koul d2045ba3a4 Merge branch 'topic/sun' into for-linus 2017-11-14 10:37:07 +05:30
Vinod Koul 135ab7f53c Merge branch 'topic/sprd' into for-linus
Kconfig and Makefile conflicts, so put them in the right order (sprd ones
after stm ones)

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-14 10:36:09 +05:30
Vinod Koul 9c60271336 Merge branch 'topic/stm' into for-linus 2017-11-14 10:34:56 +05:30
Vinod Koul 2c8528592c Merge branch 'topic/sa11x0' into for-linus 2017-11-14 10:33:33 +05:30
Vinod Koul b683fa223b Merge branch 'topic/renasas' into for-linus 2017-11-14 10:33:24 +05:30
Vinod Koul c7960fc5e0 Merge branch 'topic/qcom' into for-linus 2017-11-14 10:33:16 +05:30
Vinod Koul 4cd46d0c5e Merge branch 'topic/pl330' into for-linus 2017-11-14 10:33:04 +05:30
Vinod Koul 9427702dcc Merge branch 'topic/imx' into for-linus 2017-11-14 10:32:52 +05:30
Vinod Koul 340b11b9f1 Merge branch 'topic/img' into for-linus 2017-11-14 10:32:49 +05:30
Vinod Koul 76a0370a46 Merge branch 'topic/dmatest' into for-linus 2017-11-14 10:32:36 +05:30
Vinod Koul 575d34b6de Merge branch 'topic/bcom' into for-linus 2017-11-14 10:32:28 +05:30
Vinod Koul 049d0d3849 Merge branch 'topic/axi' into for-linus 2017-11-14 10:32:20 +05:30
Vinod Koul 5ddab696e7 Merge branch 'topic/print_fixes' into for-linus 2017-11-14 10:31:59 +05:30
Peter Ujfalusi 288e7560e4 dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
The used 0x1f mask is only valid for the am335x family of SoCs; a different
family using this type of crossbar might have a different number of
selectable events. In the case of the am43xx family a 0x3f mask should have
been used, for example.
Instead of trying to handle each family's mask, just use the u8 type to
store the mux value, since the event offsets are aligned to a byte offset.

Fixes: 42dbdcc6bf ("dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-08 19:51:40 +05:30
Adam Wallis a9df21e34b dmaengine: dmatest: warn user when dma test times out
Commit adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
introduced a bug (that is in fact documented by the patch commit text)
that leaves behind a dangling pointer. Since the done_wait structure is
allocated on the stack, future invocations to the DMATEST can produce
undesirable results (e.g., corrupted spinlocks). Ideally, this would be
cleaned up in the thread handler, but at the very least, the kernel
is left in a very precarious scenario that can lead to some long debug
sessions when the crash comes later.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=197605
Signed-off-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-08 11:24:03 +05:30
Vinod Koul 087ffdd288 dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue"
This reverts commit 847449f23dcb: ("dmaengine: rcar-dmac: use TCRB instead
of TCR for residue") as it breaks small serial console.

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-08 11:04:49 +05:30
Pierre-Yves MORDRET d83f4131c2 dmaengine: stm32_mdma: activate pack/unpack feature
If the source and destination bus widths differ, the pack/unpack MDMA
feature has to be activated for alignment.
This pack/unpack feature requires both the source/destination addresses
and the buffer length to be aligned on the bus width.

Fixes: a4ffb13c89 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-08 10:49:53 +05:30
Vinod Koul 77ea824c6d dmaengine: at_hdmac: Remove unnecessary 0x prefixes before %pad
Since commit 3cab1e7112 ("lib/vsprintf: refactor duplicate code
to special_hex_number()") %pad doesn't need a 0x prefix, so drop it.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-08 10:47:04 +05:30
Vinod Koul 6d82e05b3c dmaengine: coh901318: Remove unnecessary 0x prefixes before %pad
Since commit 3cab1e7112 ("lib/vsprintf: refactor duplicate code
to special_hex_number()") %pad doesn't need a 0x prefix, so drop it.

Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-11-08 10:46:46 +05:30
Greg Kroah-Hartman b24413180f License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default
license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier.  The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
 - file had no licensing information in it.
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information,

Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne.  Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.

The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed.  Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging was:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5
   lines of source
 - File already had some variant of a license header in it (even if <5
   lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license
identifiers to apply.

 - when both scanners couldn't find any license traces, file was
   considered to have no license information in it, and the top level
   COPYING file license applied.

   For non */uapi/* files that summary was:

   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0                                              11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH
   Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:

   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0 WITH Linux-syscall-note                        930

   and resulted in the second patch in this series.

 - if a file had some form of licensing information in it, and was one
   of the */uapi/* ones, it was denoted with the Linux-syscall-note if
   any GPL family license was found in the file or had no licensing in
   it (per prior point).  Results summary:

   SPDX license identifier                            # files
   ---------------------------------------------------|------
   GPL-2.0 WITH Linux-syscall-note                       270
   GPL-2.0+ WITH Linux-syscall-note                      169
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
   LGPL-2.1+ WITH Linux-syscall-note                      15
   GPL-1.0+ WITH Linux-syscall-note                       14
   ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
   LGPL-2.0+ WITH Linux-syscall-note                       4
   LGPL-2.1 WITH Linux-syscall-note                        3
   ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
   ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1

   and that resulted in the third patch in this series.

 - when the two scanners agreed on the detected license(s), that became
   the concluded license(s).

 - when there was disagreement between the two scanners (one detected a
   license but the other didn't, or they both detected different
   licenses) a manual inspection of the file occurred.

 - In most cases a manual inspection of the information in the file
   resulted in a clear resolution of the license that should apply (and
   which scanner probably needed to revisit its heuristics).

 - When it was not immediately clear, the license identifier was
   confirmed with lawyers working with the Linux Foundation.

 - If there was any question as to the appropriate license identifier,
   the file was flagged for further research and to be revisited later
   in time.

In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.

Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights.  The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.

Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
 - a full scancode scan run, collecting the matched texts, detected
   license ids and scores
 - reviewing anything where there was a license detected (about 500+
   files) to ensure that the applied SPDX license was correct
 - reviewing anything where there was no detection but the patch license
   was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
   SPDX license was correct

This produced a worksheet with 20 files needing minor correction.  This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.

These .csv files were then reviewed by Greg.  Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected.  This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types.)  Finally Greg ran the script using the .csv files to
generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02 11:10:55 +01:00
Romain Perier 10c191a11c dmaengine: pch_dma: Replace PCI pool old API
The PCI pool API is deprecated. This commit replaces the old PCI pool
API with the appropriate functions from the DMA pool API.
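
The conversion is mechanical; schematically (pool name and struct names
are illustrative, not necessarily the driver's actual ones):

  /* Before: deprecated PCI pool API. */
  pd->pool = pci_pool_create("pch_dma_desc_pool", pdev,
                             sizeof(struct pch_dma_desc), 4, 0);
  desc = pci_pool_alloc(pd->pool, flags, &addr);
  pci_pool_free(pd->pool, desc, addr);
  pci_pool_destroy(pd->pool);

  /* After: the equivalent DMA pool API, keyed on the struct device. */
  pd->pool = dma_pool_create("pch_dma_desc_pool", &pdev->dev,
                             sizeof(struct pch_dma_desc), 4, 0);
  desc = dma_pool_alloc(pd->pool, flags, &addr);
  dma_pool_free(pd->pool, desc, addr);
  dma_pool_destroy(pd->pool);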

Signed-off-by: Romain Perier <romain.perier@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-31 17:01:06 +05:30
Kees Cook bcdc4bd356 dmaengine: Convert timers to use timer_setup()
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
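
The pattern of the conversion, sketched on a hypothetical driver structure
with an embedded timer field:

  /* Before: callback receives an opaque unsigned long. */
  static void foo_timer_fn(unsigned long data)
  {
          struct foo_chan *c = (struct foo_chan *)data;
          /* ... */
  }
  setup_timer(&c->timer, foo_timer_fn, (unsigned long)c);

  /* After: callback receives the timer pointer, container recovered
   * with from_timer(). */
  static void foo_timer_fn(struct timer_list *t)
  {
          struct foo_chan *c = from_timer(c, t, timer);
          /* ... */
  }
  timer_setup(&c->timer, foo_timer_fn, 0);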

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-24 20:11:21 +05:30
Baolin Wang 9b3b8171f7 dmaengine: sprd: Add Spreadtrum DMA driver
This patch adds the DMA controller driver for Spreadtrum SC9860 platform.

Signed-off-by: Baolin Wang <baolin.wang@spreadtrum.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-24 20:07:55 +05:30
Stefan Brüns 464aa6f54b dmaengine: sun6i: Retrieve channel count/max request from devicetree
To avoid introduction of a new compatible for each small SoC/DMA controller
variation, move the definition of the channel count to the devicetree.

The number of vchans is no longer explicit, but limited by the highest
port/DMA request number. The result is a slight overallocation for SoCs
with a sparse port mapping.

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-23 11:44:03 +05:30
Anup Patel 7076a1e4a4 dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs
By default, we build the Broadcom SBA RAID driver as a loadable module for
iProc SoCs so that the kernel image is a little smaller and we load the SBA
RAID driver only when required.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-23 11:35:47 +05:30
Anup Patel d5c334870e dmaengine: bcm-sba-raid: Use common GPL comment header
This patch makes the comment header of Broadcom SBA RAID driver
similar to the GPL comment header used across Broadcom driver
sources.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-23 11:35:47 +05:30
Anup Patel 4e9f8187ae dmaengine: bcm-sba-raid: Use only single mailbox channel
Each mailbox channel used by Broadcom SBA RAID driver is
a separate HW ring.

Currently, the Broadcom SBA RAID driver creates one DMA channel
using one or more mailbox channels. When we are using more
than one mailbox channel for a DMA channel, the sba_requests
are distributed evenly among the multiple mailbox channels, which
results in sba_requests being completed out-of-order.

The above described out-of-order completion of sba_requests
breaks the dma_async_is_complete() API because it assumes
DMA cookies are completed in an orderly fashion.

To ensure correct behaviour of the dma_async_is_complete() API,
this patch updates the Broadcom SBA RAID driver to use only a
single mailbox channel. If additional mailbox channels are
specified in DT then those will be ignored.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-23 11:35:47 +05:30
Anup Patel 5d74aa7f64 dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock
As per the documentation in drivers/dma/dmaengine.h, the
dma_cookie_complete() API should be called with the lock
held.

This patch ensures that Broadcom SBA RAID driver calls
the dma_cookie_complete() API with reqs_lock held.
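
In practice the completion path now takes the request lock around the
cookie update, roughly (sba/reqs_lock follow the text above, the rest is
illustrative):

  unsigned long flags;

  spin_lock_irqsave(&sba->reqs_lock, flags);
  dma_cookie_complete(&req->tx);          /* must be called with the lock held */
  spin_unlock_irqrestore(&sba->reqs_lock, flags);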

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-23 11:35:47 +05:30
Stefan Roese c5709d3769 dmaengine: altera: Use IRQ-safe spinlock calls in the error paths as well
The patch edf10919 [dmaengine: altera: fix spinlock usage] missed changing
2 occurrences of spin_unlock_bh() to spin_unlock_irqrestore().
This patch fixes this by moving to the IRQ-safe calls in the error
paths as well.
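
The error paths now mirror the lock/unlock pairing of the success path,
e.g. (simplified; mdev and the called helper are placeholders):

  unsigned long flags;
  int ret;

  spin_lock_irqsave(&mdev->lock, flags);  /* lock also taken from the IRQ handler */
  ret = msgdma_queue_desc(mdev);          /* hypothetical helper */
  if (ret) {
          /* Was spin_unlock_bh(): mismatched with _irqsave, flags never restored. */
          spin_unlock_irqrestore(&mdev->lock, flags);
          return ret;
  }
  spin_unlock_irqrestore(&mdev->lock, flags);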

Fixes: edf10919 (dmaengine: altera: fix spinlock usage)
Signed-off-by: Stefan Roese <sr@denx.de>
Reviewed-by: Sylvain Lesne <lesne@alse-fr.com>
[add fixes tag and fix typo in log]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-20 11:51:10 +05:30
Alexander Kochetkov e588710311 dmaengine: pl330: fix descriptor allocation fail
If two concurrent threads call pl330_get_desc() when the DMAC descriptor
pool is empty, it is possible that the allocation for one of the threads
will fail with the message:

kernel: dma-pl330 20078000.dma-controller: pl330_get_desc:2469 ALERT!

Here is how that can happen. Thread A calls pl330_get_desc() to get a
descriptor. If the DMAC descriptor pool is empty, pl330_get_desc()
allocates a new descriptor on the shared pool using add_desc() and then
gets the newly allocated descriptor using pluck_desc(). At the same time
thread B calls pluck_desc() and takes the newly allocated descriptor. In
that case the descriptor allocation for thread A will fail.

Using an on-stack pool for the new descriptor avoids the described issue.
The patch modifies pl330_get_desc() to use an on-stack pool for allocating
new descriptors.

Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-20 11:46:45 +05:30
Hiroyuki Yokoyama 847449f23d dmaengine: rcar-dmac: use TCRB instead of TCR for residue
SYS/RT/Audio DMAC includes independent data buffers for reading
and writing. Therefore, the read transfer counter and write transfer
counter have different values.
TCR indicates read counter, and TCRB indicates write counter.
The relationship is like below.

        TCR       TCRB
[SOURCE] -> [DMAC] -> [SINK]

In the MEM_TO_DEV direction, what really matters is how much data has
been written to the device. If the DMA is interrupted between read and
write, then, the data doesn't end up in the destination, so shouldn't
be counted. TCRB is thus the register we should use in this case.

In the DEV_TO_MEM direction, the situation is more complex. Both the
read and write side are important. What matters from a data consumer
point of view is how much data has been written to memory.
On the other hand, if the transfer is interrupted between read and
write, we'll end up losing data. It can also be important to report.

In the MEM_TO_MEM direction, what matters is of course how much data
has been written to memory from the data consumer's point of view.
Here, because read and write have independent data buffers, it will
take a while for TCR and TCRB to become equal. Thus we should check
TCRB in this case, too.

Thus, in all cases we should check TCRB instead of TCR.

Without this patch, Sound Capture has noise after PulseAudio support
(= 07b7acb51d ("ASoC: rsnd: update pointer more accurate")), because
the recorder will use the wrong residue counter, which indicates the
amount transferred from the sound device; in reality the data has not
yet been put into memory, yet the recorder will record it.

Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
[Kuninori: added detail information in log]
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-20 11:43:32 +05:30
Stefan Brüns 12e0177055 dmaengine: sun6i: Add support for Allwinner A64 and compatibles
The A64 SoC has the same dma engine as the H3 (sun8i), with a
reduced number of physical channels. To allow future reuse of the
compatible, leave the channel count etc. in the config data blank
and retrieve it from the devicetree.

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:31:36 +05:30
Stefan Brüns 500fa9e76b dmaengine: sun6i: Move number of pchans/vchans/request to device struct
Preparatory patch: If the same compatible is used for different SoCs which
have a common register layout, but different number of channels, the
channel count can no longer be stored in the config. Store it in the
device structure instead.

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:31:24 +05:30
Stefan Brüns d5f6d8cf31 dmaengine: sun6i: Enable additional burst lengths/widths on H3
The H3 supports burst lengths of 1, 4, 8 and 16 transfers, each with
a width of 1, 2, 4 or 8 bytes.

The register value for the width is log2-encoded; change the
conversion function to provide the correct value for width == 8.

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:31:24 +05:30
Stefan Brüns 88d8622c00 dmaengine: sun6i: Restructure code to allow extension for new SoCs
The current code mixes three distinct operations when transforming
the slave config to register settings:

  1. special handling of DMA_SLAVE_BUSWIDTH_UNDEFINED, maxburst == 0
  2. range checking
  3. conversion of raw to register values

As the range checks depend on the specific SoC, move these out of the
conversion to distinct operations.

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:31:24 +05:30
Stefan Brüns 5a6a6202fa dmaengine: sun6i: Correct burst length field offsets for H3
For the H3, the burst length field offsets in the channel configuration
register differ from earlier SoC generations.

Using the A31 register macros actually configured the H3 controller
to always do bursts of length 1, which although working leads to higher
bus utilisation.

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:31:24 +05:30
Stefan Brüns 50b1249754 dmaengine: sun6i: Correct setting of clock autogating register for A83T/H3
The A83T uses a compatible string different from the A23, but requires
the same clock autogating register setting.

The H3 also requires setting the clock autogating register, but has
the register at a different offset.

Add three suitable callbacks for the existing controller generations
and set it in the controller config structure.

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:31:24 +05:30
Ed Blake 56d355e6f5 dmaengine: img-mdc: Add runtime PM
Add runtime PM support to disable the clock when the h/w is not in use.
The existing clk_prepare_enable() is removed from probe() as the clock
is no longer permanently enabled.

Signed-off-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:15:39 +05:30
Ed Blake fd9f22ae15 dmaengine: img-mdc: Add suspend / resume handling
Add suspend / resume handling using suspend_late and resume_early, and
check that all channels are idle before suspending.

DMA drivers should use suspend_late / resume_early to ensure that all
DMA client devices are suspended before the DMA device itself, and that
client devices are resumed after the DMA device. This avoids suspending
the DMA device while transactions are still active.

It is the responsibility of client drivers to terminate all DMA
transactions in their suspend handlers, so there should be no active
transactions by the time suspend_late is called.

There's no need to save and restore registers for MDC during suspend /
resume, as all transactions will be terminated as a result of the
suspend, and all required registers are programmed anyway at the start
of any new transactions following resume.

Signed-off-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-16 12:15:30 +05:30
Geert Uytterhoeven f47a4133ea dmaengine: nbpfaxi: Use of_device_get_match_data() helper
Use the of_device_get_match_data() helper instead of open coding.
Note that when used with DT, there's always a valid match.
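
The simplification looks like this (the match table and config type names
are illustrative):

  /* Before: open-coded lookup. */
  const struct of_device_id *of_id = of_match_device(nbpf_match, &pdev->dev);
  const struct nbpf_config *cfg;

  if (!of_id || !of_id->data)
          return -ENODEV;
  cfg = of_id->data;

  /* After: the helper fetches the match data directly. */
  cfg = of_device_get_match_data(&pdev->dev);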

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-12 22:20:51 +05:30
Peter Ujfalusi 05ec62a106 dmaengine: omap-dma: Implement protection for invalid max_burst
Set the device's max_burst to 16777215 (EN is a 24-bit unsigned value) so
clients can take this into consideration when setting up the transfer.

During slave transfer preparation check if the requested maxburst is valid.
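
Schematically, the added protection is along these lines (a sketch; the
dma_slave_config fields are the generic dmaengine ones, the od/c names are
placeholders for the driver's device and channel structures):

  /* At probe time: advertise the hardware limit to clients. */
  od->ddev.max_burst = SZ_16M - 1;        /* EN is a 24-bit unsigned value */

  /* In the slave transfer prep callback: reject invalid requests. */
  if (dir == DMA_DEV_TO_MEM)
          burst = c->cfg.src_maxburst;
  else
          burst = c->cfg.dst_maxburst;
  if (burst > od->ddev.max_burst)
          return NULL;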

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-12 22:13:48 +05:30
Peter Ujfalusi ea09ea51dd dmaengine: edma: Implement protection for invalid max_burst
Set the device's max_burst to 32767 (CIDX is a 16-bit signed value) so
clients can take this into consideration when setting up the transfer.

During slave transfer preparation check if the requested maxburst is valid.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-12 22:13:06 +05:30
Colin Ian King f2fd4d9f32 dmaengine: stm32: remove redundant initialization of hwdesc
hwdesc is being initialized to desc->hwdesc but this is never read
as hwdesc is overwritten in a for-loop.  Remove the redundant
initialization and move the declaration of hwdesc into the for-loop.

Cleans up clang warning:
Value stored to 'hwdesc' during its initialization is never read

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-12 19:34:41 +05:30
Arnd Bergmann ea62e2ccbb dmaengine: stm32_mdma: add CONFIG_OF dependency
Without CONFIG_OF we get a build warning:

warning: (STM32_MDMA) selects DMA_OF which has unmet direct dependencies (DMADEVICES && OF)

This adds a dependency on CONFIG_OF. Since this means
we no longer need to select 'DMA_OF', I'm dropping that line
as well.

Fixes: a4ffb13c89 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-12 19:33:55 +05:30
Vinod Koul 38502f232e dmaengine: stm32: use %p format specifier for pointer
The pointer print was using an explicit cast and printing as %x, which
causes the warning below on some arches, so print using the %p format
specifier.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-08 20:28:15 +05:30
Dan Carpenter 4219ff33b2 dmaengine: stm32-dmamux: Fix a NULL vs IS_ERR() check in probe
devm_ioremap_resource() doesn't return NULL, it returns error pointers.
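
The corrected probe-time check, schematically:

  base = devm_ioremap_resource(&pdev->dev, res);
  /* devm_ioremap_resource() returns an ERR_PTR() on failure, never NULL. */
  if (IS_ERR(base))
          return PTR_ERR(base);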

Fixes: df7e762db5 ("dmaengine: Add STM32 DMAMUX driver")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-08 16:17:39 +05:30
Pierre-Yves MORDRET a4ffb13c89 dmaengine: Add STM32 MDMA driver
This patch adds the driver for the STM32 MDMA controller.

Master Direct memory access (MDMA) is used in order to provide high-speed
data transfer between memory and memory or between peripherals and memory.

MDMA controller provides a master AXI interface for main memory and
peripheral registers access (system access port) and a master AHB
interface only for Cortex-M7 TCM memory access (TCM access port).

MDMA works in conjunction with the standard DMA controllers (DMA1 or DMA2).
It offers up to 64 channels, each dedicated to managing memory access
requests from one of the DMA stream memory buffers or other peripherals
(with an integrated FIFO).

Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-08 14:45:34 +05:30
Sylvain Lesne edf10919e5 dmaengine: altera: fix spinlock usage
Since this lock is acquired in both process and IRQ context, failing
to disable IRQs when trying to acquire the lock in process context can
lead to deadlocks.

Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-28 13:11:46 +05:30
Sylvain Lesne d9ec46416d dmaengine: altera: fix response FIFO emptying
Commit 6084fc2ec4 ("dmaengine: altera: Use macros instead of structs
to describe the registers") introduced a minus sign before a register
offset.

This leads to soft-locks of the DMA controller, since reading the last
status byte is required to pop the response from the FIFO. Failing to
do so will lead to a full FIFO, which means that the DMA controller
will stop processing descriptors.

Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-28 13:11:46 +05:30
Russell King 73d2a3cef4 dmaengine: sa11x0: add DMA filters
Add DMA filters for the sa11x0 DMA channels.  This will allow us to
migrate away from directly using the DMA filter function in drivers.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-27 16:11:51 +05:30
Pierre-Yves MORDRET df7e762db5 dmaengine: Add STM32 DMAMUX driver
This patch implements the STM32 DMAMUX driver.

The DMAMUX request multiplexer allows routing a DMA request line between
the peripherals and the DMA controllers of the product. The routing
function is ensured by a programmable multi-channel DMA request line
multiplexer. Each channel selects a unique DMA request line,
unconditionally or synchronously with events from its DMAMUX
synchronization inputs. The DMAMUX may also be used as a DMA request
generator from programmable events on its input trigger signals

Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-27 16:01:35 +05:30
Sricharan R 6b4faeac05 dmaengine: qcom-bam: Process multiple pending descriptors
The bam dmaengine has a circular FIFO to which we
add hw descriptors that describe the transaction.
The FIFO has space for about 4096 hw descriptors.

Currently we add one descriptor and wait for it to
complete with an interrupt and then add the next pending
descriptor. In this way, the FIFO is underutilized
since only one descriptor is processed at a time, although
there is space in the FIFO for the BAM to process more.

Instead, keep adding descriptors to the FIFO till it is full,
which allows the BAM to continue to work on the next descriptor
immediately after signalling the completion interrupt for the
previous descriptor.

Also, when the client has not set DMA_PREP_INTERRUPT for
a descriptor, do not configure the BAM to trigger an interrupt
upon completion of that descriptor. This way we get an interrupt
only for the descriptor for which DMA_PREP_INTERRUPT was
requested and then signal completion of all the previously completed
descriptors. So we still do callbacks for all requested descriptors,
but the number of interrupts is reduced.

CURRENT:

            ------      -------   ---------------
            |DES 0|     |DESC 1|  |DESC 2 + INT |
            ------      -------   ---------------
               |           |            |
               |           |            |
INTERRUPT:   (INT)       (INT)	      (INT)
CALLBACK:     (CB)        (CB)         (CB)

		MTD_SPEEDTEST READ PAGE: 3560 KiB/s
		MTD_SPEEDTEST WRITE PAGE: 2664 KiB/s
		IOZONE READ: 2456 KB/s
		IOZONE WRITE: 1230 KB/s

	bam dma interrupts (after tests): 96508

CHANGE:

        ------  -------    -------------
        |DES 0| |DESC 1   |DESC 2 + INT |
        ------  -------   --------------
				|
				|
          		      (INT)
			      (CB for 0, 1, 2)

		MTD_SPEEDTEST READ PAGE: 3860 KiB/s
		MTD_SPEEDTEST WRITE PAGE: 2837 KiB/s
		IOZONE READ: 2677 KB/s
		IOZONE WRITE: 1308 KB/s

	bam dma interrupts (after tests): 58806

Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-25 11:47:26 +05:30
Peter Ujfalusi 2ccb4837c9 dmaengine: ti-dma-crossbar: Fix possible race condition with dma_inuse
When looking for an unused xbar_out lane we should also protect the
set_bit() call with the same mutex to protect against concurrent threads
picking the same ID.

Fixes: ec9bfa1e1a ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 23:03:42 +05:30
Corentin Labbe 8f3b00347b dmaengine: sun6i: use of_device_get_match_data
The usage of of_device_get_match_data reduces the code size a bit.
Furthermore, it prevents an improbable dereference when
of_match_device() returns NULL.

Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:53:38 +05:30
Peter Ujfalusi 87a2f622cc dmaengine: edma: Align the memcpy acnt array size with the transfer
Memory to memory transfers do not have any special alignment needs
regarding the acnt array size, but if one of the areas is in a memory
mapped region (like PCIe memory), we need to make sure that the acnt
array size is aligned with the memcpy parameters.

Before the "dmaengine: edma: Optimize memcpy operation" change the memcpy
was set up in a different way: acnt == number of bytes in a word based on
__ffs(src | dest | len), bcnt and ccnt for looping the necessary number of
words to complete the transfer.

Instead of reverting the commit we can fix it to make sure that the ACNT
size is aligned to the transfer.

Fixes: df6694f803 (dmaengine: edma: Optimize memcpy operation)
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:51:07 +05:30
Nicolin Chen f9d4a398f1 dmaengine: imx-sdma: Correct src_addr_widths and directions
The driver already supports DMA_DEV_TO_DEV in sdma_config(),
DMA_SLAVE_BUSWIDTH_2_BYTES and DMA_SLAVE_BUSWIDTH_1_BYTE in
sdma_prep_slave_sg(). So this patch adds them to the lists.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:45:42 +05:30
Lars-Peter Clausen f3ae7d9155 dmaengine: xilinx_dma: Move enum xdma_ip_type to driver file
The enum xdma_ip_type is only used inside the Xilinx DMA driver and not
exported to any consumers (nor should it be). So move it from the global
header to driver file itself.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17 18:59:54 +05:30
Lars-Peter Clausen 008913dbeb dmaengine: axi-dmac: Fix software cyclic mode
When running in software cyclic mode the driver currently does not go back
to the first segment once the last segment has been reached, effectively
making the transfer non-cyclic.

Fix this by going back to the first segment once the last segment has been
reached for cyclic transfers.

Special care needs to be taken to avoid a segment being submitted
multiple times concurrently, which could happen for transfers with a number
of segments that is smaller than the DMA controller's internal queue.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17 18:58:18 +05:30
Lars-Peter Clausen 63ab76dbbd dmaengine: axi-dmac: Only use hardware cyclic mode for single segment transfers
In hardware cyclic mode the submitted segment is repeated. This means
hardware cyclic mode can only be used if the transfer has a single segment.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17 18:58:18 +05:30
Linus Torvalds cd7b34fe1c dmaengine updates for 4.14-rc1
- Removal of DMA_SG support as we have no users for this feature
  - New driver for Altera / Intel mSGDMA IP core
  - Support for memset in dmatest and qcom_hidma driver
  - Update for non cyclic mode in k3dma, bunch of update in bam_dma, bcm sba-raid
  - Constify device ids across drivers
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJZsXteAAoJEHwUBw8lI4NHPwcP/iihF1n7jOQVtUm3zxPvUV+n
 GzU7+rqAEDLKBaIttK28LIjgvg0AC/4aiEsosfCzTjpkzMHteRw00YyplwF7/wdM
 O0owKOIub4PriDiL6d/SWFnhcWwv0/KLbyKscQcOwwvkksG/mwMn1VfW7alCrz1w
 81TOQaW9SxLxL7guJU0aQHljkudT53l8Dgsp55iC9Ccz515Iuu7dQm3DnSG3sYjJ
 Ct4u4MWWzDmmKKpbDoYe/Z+fiQT0WKuGfI7QHURVnw5qLo2sDKREWGbThhRG/lZj
 YlnLQnkjWwLU5dyX1MyIWipPxe83sjf/7OwJ7XUlLjD6o+lNEuQxjmNkVAh0hNRc
 dgrXRuqPRJMW40uOvAMDHTkexxikWc5ggt5LN9dIYDOdaS4Ch5ewf19SRi9pSDap
 FZeIWY1FWwQCAU7HQMwSYyRLBjlmEmeSkElkXCd+2wu5aH2oKOMUMbUIYcqL4fjD
 qMAR7kfn6e92fDT1gR1ZKL79Cfe9zsCQA3XmecpC/HwqiE3XtfZuDY/73cXD0MeO
 SbJUCv4ldPGjrTKBHvs0wiWbxi5Mj5sXglmSaD0lEhtMsOfhPHY2BGatTzSmKKwO
 WwmKAvM8qElQZy2Eh25dvlE04yAOofoJb6Pf/AraQOLTUkMyF8wRWEpltjUuttM9
 VzQLvh8s25naKM5mOAM2
 =88SI
 -----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This one features the usual updates to the drivers and one good part
  of removing DMA_SG from core as it has no users.

  Summary:

   - Remove DMA_SG support as we have no users for this feature
   - New driver for Altera / Intel mSGDMA IP core
   - Support for memset in dmatest and qcom_hidma driver
   - Update for non cyclic mode in k3dma, bunch of update in bam_dma,
     bcm sba-raid
   - Constify device ids across drivers"

* tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (52 commits)
  dmaengine: sun6i: support V3s SoC variant
  dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
  dmaengine: rcar-dmac: document R8A77970 bindings
  dmaengine: xilinx_dma: Fix error code format specifier
  dmaengine: altera: Use macros instead of structs to describe the registers
  dmaengine: ti-dma-crossbar: Fix dra7 reserve function
  dmaengine: pl330: constify amba_id
  dmaengine: pl08x: constify amba_id
  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
  dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
  dmaengine: bcm-sba-raid: Add debugfs support
  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED
  dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests()
  dmaengine: bcm-sba-raid: Pre-ack async tx descriptor
  dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
  dmaengine: bcm-sba-raid: Alloc resources before registering DMA device
  dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration
  dmaengine: bcm-sba-raid: Increase number of free sba_request
  dmaengine: bcm-sba-raid: Allow arbitrary number free sba_request
  dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device
  ...
2017-09-07 14:03:05 -07:00
Vinod Koul 41bd0314fa Merge branch 'topic/dmatest' into for-linus 2017-09-06 21:55:10 +05:30
Vinod Koul 346ea25e81 Merge branch 'topic/qcom' into for-linus 2017-09-06 21:54:48 +05:30
Vinod Koul 05890d550c Merge branch 'topic/ppc4xx' into for-linus 2017-09-06 21:54:41 +05:30
Vinod Koul 918a21eeec Merge branch 'topic/of' into for-linus 2017-09-06 21:54:31 +05:30
Vinod Koul dccafbf2d3 Merge branch 'topic/k3dma' into for-linus 2017-09-06 21:54:24 +05:30
Vinod Koul f6cc35eefe Merge branch 'topic/ioat' into for-linus 2017-09-06 21:54:16 +05:30
Vinod Koul 07e24b8559 Merge branch 'topic/bcm' into for-linus 2017-09-06 21:54:09 +05:30
Vinod Koul a431cbafcb Merge branch 'topic/altera' into for-linus 2017-09-06 21:54:01 +05:30
Icenowy Zheng a702e47eab dmaengine: sun6i: support V3s SoC variant
Allwinner V3s has a DMA engine similar to the ones from A31, but with
fewer channels and DRQs.

Add support for it.

Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05 09:07:20 +05:30
Icenowy Zheng 0430a7c753 dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
Originally we enable a special gate bit when the compatible indicates
A23/33.

But according to BSP sources and user manuals, more SoCs will need this
gate bit.

So make it a common quirk configured in the config struct.

Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Reviewed-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05 09:07:20 +05:30
Lars-Peter Clausen 574897dc14 dmaengine: xilinx_dma: Fix error code format specifier
'err' is a signed int and error codes are typically negative numbers, so
use '%d' instead of '%u' to format the error code in the error message.

Fixes: ba16db36b5 ("dmaengine: vdma: Add clock support")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05 09:03:21 +05:30
Stefan Roese 6084fc2ec4 dmaengine: altera: Use macros instead of structs to describe the registers
This patch moves from a struct declaration for the DMA controller
registers to macros with offsets to the base address. This is mainly
done to remove the sparse warnings, since the function parameter of
ioread32/iowrite32 is "void __iomem *" instead of a pointer to struct
members. With this patch applied, no sparse warning is seen anymore.

Please note that the struct for the descriptors is still kept in place,
as the code largely accesses the struct members as internal variables
before the complete struct is copied into the descriptor FIFO of the
DMA controller.

Additionally this patch also removes two warnings "variable xxx set but
not used" seen when compiling with "W=1". The registers need to be read
to flush the response FIFO, but nothing needs to be done with them. So
the code is correct here and the warning is a false one.

Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-29 08:17:43 +05:30
Alexander Smirnov a2f6721b42 dmaengine: ti-dma-crossbar: Fix dra7 reserve function
The DMA crossbar uses the 'xbar->dma_inuse' variable to manage allocated
routes. Each bit represents the respective DMA channel. If the channel is
free, the bit is set to '0'; if the channel is allocated, the bit should be
set to '1'.

In the reserve function, the bits for the requested DMA channels are
cleared, so they are not really reserved, but freed and become ready for
allocation.

Signed-off-by: Alexander Smirnov <asmirnov@ilbers.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 22:02:05 +05:30
Arvind Yadav b753351ec8 dmaengine: pl330: constify amba_id
amba_id structures are not supposed to change at runtime. All functions
work with const amba_id, so mark the non-const structs as const.
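
The change amounts to constifying the ID table, e.g. (ID/mask values shown
for illustration):

  /* Before */
  static struct amba_id pl330_ids[] = {
          { .id = 0x00041330, .mask = 0x000fffff, },
          { 0, 0 },
  };

  /* After: the table is only ever read, so it can live in rodata. */
  static const struct amba_id pl330_ids[] = {
          { .id = 0x00041330, .mask = 0x000fffff, },
          { 0, 0 },
  };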

Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 21:11:08 +05:30
Arvind Yadav b80fa1217c dmaengine: pl08x: constify amba_id
amba_id structures are not supposed to change at runtime. All functions
work with const amba_id, so mark the non-const structs as const.

Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 21:11:08 +05:30
Anup Patel ecbf9ef15a dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
The SBA_REQUEST_STATE_COMPLETED state was added to keep track
of sba_requests which got completed but could not be freed because
the underlying Async Tx descriptor was not ACKed by the DMA client.

Instead of the above, we can free an sba_request with a non-ACKed
Async Tx descriptor, and sba_alloc_request() will ensure that
it always allocates an sba_request with an ACKed Async Tx descriptor.
This alternate approach makes the SBA_REQUEST_STATE_COMPLETED state
redundant, hence this patch removes it.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel 29e0f486d9 dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
We should explicitly ACK the mailbox message because after
sending the message we can learn the send status via the error
attribute of brcm_message.

This will also help SBA-RAID use the "txdone_ack" method
whenever the mailbox controller supports it.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30