Merge tag 'mtd/for-4.16' of git://git.infradead.org/linux-mtd

Pull MTD updates from Boris Brezillon:
 "MTD core changes:
   - Rework core functions to avoid duplicating generic checks in
     NAND/OneNAND sub-layers
   - Update the MAINTAINERS entry to reflect the fact that MTD
     maintainers now use a single git tree

  MTD driver changes:
   - CFI: use macros instead of inline functions to limit stack usage
     and make KASAN happy

  NAND core changes:
   - Fix NAND_CMD_NONE handling in nand_command[_lp]() hooks
   - Introduce the ->exec_op() infrastructure
   - Rework NAND buffers handling
   - Fix ECC requirements for K9F4G08U0D
   - Fix nand_do_read_oob() to return the number of bitflips
   - Mark K9F1G08U0E as not supporting subpage writes

  NAND driver changes:
   - MTK: Rework the driver to support new IP versions
   - OMAP OneNAND: Full rework to use new APIs (libgpio, dmaengine) and
     fix DT support
   - Marvell: Add a new driver to replace the pxa3xx one

  SPI NOR core changes:
   - Add support for new ISSI and Cypress/Spansion memory parts.
   - Fix support for Micron memories by checking error bits in the FSR.
   - Fix the update of block-protection bits by reading back the SR.
   - Restore the internal state of the SPI flash memory when removing
     the device.

  SPI NOR driver changes:
   - Maintenance for Freescale, Intel and Mediatek drivers.
   - Add support for the direct access mode of the Cadence QSPI
     controller"

* tag 'mtd/for-4.16' of git://git.infradead.org/linux-mtd: (93 commits)
  mtd: nand: sunxi: Fix ECC strength choice
  mtd: nand: gpmi: Fix subpage reads
  mtd: nand: Fix build issues due to an anonymous union
  mtd: nand: marvell: Fix missing memory allocation modifier
  mtd: nand: marvell: remove redundant variable 'oob_len'
  mtd: nand: marvell: fix spelling mistake: "suceed"-> "succeed"
  mtd: onenand: omap2: Remove redundant dev_err call in omap2_onenand_probe()
  mtd: Remove duplicate checks on mtd_oob_ops parameter
  mtd: Fallback to ->_read/write_oob() when ->_read/write() is missing
  mtd: mtdpart: Make ECC stat handling consistent
  mtd: onenand: omap2: print resource using %pR format string
  mtd: mtk-nor: modify functions' name more generally
  mtd: onenand: samsung: remove incorrect __iomem annotation
  MAINTAINERS: Add entry for Marvell NAND controller driver
  ARM: OMAP2+: Remove gpmc-onenand
  mtd: onenand: omap2: Configure driver from DT
  mtd: onenand: omap2: Decouple DMA enabling from INT pin availability
  mtd: onenand: omap2: Do not make delay for GPIO OMAP3 specific
  mtd: onenand: omap2: Convert to use dmaengine for memcpy
  mtd: onenand: omap2: Unify OMAP2 and OMAP3 DMA implementation
  ...
Linus Torvalds 2018-01-29 11:11:56 -08:00
commit 0fc7e74663
82 changed files with 6863 additions and 2440 deletions

View File

@ -12,7 +12,7 @@ Required properties:
- reg-names: Should contain the reg names "QuadSPI" and "QuadSPI-memory"
- interrupts : Should contain the interrupt for the device
- clocks : The clocks needed by the QuadSPI controller
- clock-names : the name of the clocks
- clock-names : Should contain the name of the clocks: "qspi_en" and "qspi".
Optional properties:
- fsl,qspi-has-second-chip: The controller has two buses, bus A and bus B.

View File

@ -9,13 +9,14 @@ Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
Required properties:
- compatible: "ti,omap2-onenand"
- reg: The CS line the peripheral is connected to
- gpmc,device-width Width of the ONENAND device connected to the GPMC
- gpmc,device-width: Width of the ONENAND device connected to the GPMC
in bytes. Must be 1 or 2.
Optional properties:
- dma-channel: DMA Channel index
- int-gpios: GPIO specifier for the INT pin.
For inline partition table parsing (optional):
@ -35,6 +36,7 @@ Example for an OMAP3430 board:
#size-cells = <1>;
onenand@0 {
compatible = "ti,omap2-onenand";
reg = <0 0 0>; /* CS0, offset 0 */
gpmc,device-width = <2>;

View File

@ -0,0 +1,123 @@
Marvell NAND Flash Controller (NFC)
Required properties:
- compatible: can be one of the following:
* "marvell,armada-8k-nand-controller"
* "marvell,armada370-nand-controller"
* "marvell,pxa3xx-nand-controller"
* "marvell,armada-8k-nand" (deprecated)
* "marvell,armada370-nand" (deprecated)
* "marvell,pxa3xx-nand" (deprecated)
Compatibles marked deprecated support only the old bindings described
at the bottom.
- reg: NAND flash controller memory area.
- #address-cells: shall be set to 1. Encode the NAND CS.
- #size-cells: shall be set to 0.
- interrupts: shall define the NAND controller interrupt.
- clocks: shall reference the NAND controller clock.
- marvell,system-controller: Set to retrieve the syscon node that handles
NAND controller related registers (only required with the
"marvell,armada-8k-nand[-controller]" compatibles).
Optional properties:
- label: see partition.txt. New platforms shall omit this property.
- dmas: shall reference DMA channel associated to the NAND controller.
This property is only used with "marvell,pxa3xx-nand[-controller]"
compatible strings.
- dma-names: shall be "rxtx".
This property is only used with "marvell,pxa3xx-nand[-controller]"
compatible strings.
Optional children nodes:
Children nodes represent the available NAND chips.
Required properties:
- reg: shall contain the native Chip Select ids (0-3).
- nand-rb: see nand.txt (0-1).
Optional properties:
- marvell,nand-keep-config: orders the driver not to take the timings
from the core and to leave them completely untouched. Bootloader
timings will then be used.
- label: MTD name.
- nand-on-flash-bbt: see nand.txt.
- nand-ecc-mode: see nand.txt. Will use hardware ECC if not specified.
- nand-ecc-algo: see nand.txt. This property is essentially useful when
not using hardware ECC. However, it may be added when using hardware
ECC for clarification but will be ignored by the driver because ECC
mode is chosen depending on the page size and the strength required by
the NAND chip. This value may be overwritten with nand-ecc-strength
property.
- nand-ecc-strength: see nand.txt.
- nand-ecc-step-size: see nand.txt. Marvell's NAND flash controller does
use fixed strength (1-bit for Hamming, 16-bit for BCH), so the actual
step size will shrink or grow in order to fit the required strength.
Step sizes are not completely random for all and follow certain
patterns described in AN-379, "Marvell SoC NFC ECC".
See Documentation/devicetree/bindings/mtd/nand.txt for more details on
generic bindings.
Example:
nand_controller: nand-controller@d0000 {
compatible = "marvell,armada370-nand-controller";
reg = <0xd0000 0x54>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&coredivclk 0>;
nand@0 {
reg = <0>;
label = "main-storage";
nand-rb = <0>;
nand-ecc-mode = "hw";
marvell,nand-keep-config;
nand-on-flash-bbt;
nand-ecc-strength = <4>;
nand-ecc-step-size = <512>;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "Rootfs";
reg = <0x00000000 0x40000000>;
};
};
};
};
Note on legacy bindings: One can find, in not-updated device trees,
bindings slightly different than described above with other properties
described below as well as the partitions node at the root of a so
called "nand" node (without clear controller/chip separation).
Legacy properties:
- marvell,nand-enable-arbiter: To enable the arbiter, all boards blindly
used it, this bit was set by the bootloader for many boards and even if
it is marked reserved in several datasheets, it might be needed to set
it (otherwise it is harmless) so whether or not this property is set,
the bit is selected by the driver.
- num-cs: Number of chip-select lines to use, all boards blindly set 1
to this and for a reason, other values would have failed. The value of
this property is ignored.
Example:
nand0: nand@43100000 {
compatible = "marvell,pxa3xx-nand";
reg = <0x43100000 90>;
interrupts = <45>;
dmas = <&pdma 97 0>;
dma-names = "rxtx";
#address-cells = <1>;
marvell,nand-keep-config;
marvell,nand-enable-arbiter;
num-cs = <1>;
/* Partitions (optional) */
};

View File

@ -12,8 +12,10 @@ tree nodes.
The first part of NFC is NAND Controller Interface (NFI) HW.
Required NFI properties:
- compatible: Should be one of "mediatek,mt2701-nfc",
"mediatek,mt2712-nfc".
- compatible: Should be one of
"mediatek,mt2701-nfc",
"mediatek,mt2712-nfc",
"mediatek,mt7622-nfc".
- reg: Base physical address and size of NFI.
- interrupts: Interrupts of NFI.
- clocks: NFI required clocks.
@ -142,7 +144,10 @@ Example:
==============
Required BCH properties:
- compatible: Should be one of "mediatek,mt2701-ecc", "mediatek,mt2712-ecc".
- compatible: Should be one of
"mediatek,mt2701-ecc",
"mediatek,mt2712-ecc",
"mediatek,mt7622-ecc".
- reg: Base physical address and size of ECC.
- interrupts: Interrupts of ECC.
- clocks: ECC required clocks.

View File

@ -43,6 +43,7 @@ Optional NAND chip properties:
This is particularly useful when only the in-band area is
used by the upper layers, and you want to make your NAND
as reliable as possible.
- nand-rb: shall contain the native Ready/Busy ids.
The ECC strength and ECC step size properties define the correction capability
of a controller. Together, they say a controller can correct "{strength} bit

View File

@ -60,3 +60,6 @@ The main API is spi_nor_scan(). Before you call the hook, a driver should
initialize the necessary fields for spi_nor{}. Please see
drivers/mtd/spi-nor/spi-nor.c for detail. Please also refer to fsl-quadspi.c
when you want to write a new driver for a SPI NOR controller.
Another API is spi_nor_restore(). It is used to restore the state of the SPI
flash chip, such as its addressing mode, and should be called whenever the
driver is detached from the device or the system is rebooted.
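A minimal sketch of how a SPI flash driver might hook this up (it mirrors the
m25p80 change further below; the foo_* names are placeholders, not part of
this commit):

	static int foo_flash_remove(struct spi_device *spi)
	{
		struct foo_flash *flash = spi_get_drvdata(spi);

		/* Undo runtime changes (e.g. 4-byte addressing) so a reset
		 * kernel or bootloader finds the flash in its default state. */
		spi_nor_restore(&flash->spi_nor);

		return mtd_device_unregister(&flash->spi_nor.mtd);
	}

	static void foo_flash_shutdown(struct spi_device *spi)
	{
		struct foo_flash *flash = spi_get_drvdata(spi);

		spi_nor_restore(&flash->spi_nor);
	}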

View File

@ -2392,13 +2392,6 @@ F: Documentation/devicetree/bindings/input/atmel,maxtouch.txt
F: drivers/input/touchscreen/atmel_mxt_ts.c
F: include/linux/platform_data/atmel_mxt_ts.h
ATMEL NAND DRIVER
M: Wenyou Yang <wenyou.yang@atmel.com>
M: Josh Wu <rainyfeeling@outlook.com>
L: linux-mtd@lists.infradead.org
S: Supported
F: drivers/mtd/nand/atmel/*
ATMEL SAMA5D2 ADC DRIVER
M: Ludovic Desroches <ludovic.desroches@microchip.com>
L: linux-iio@vger.kernel.org
@ -8406,6 +8399,13 @@ L: linux-wireless@vger.kernel.org
S: Odd Fixes
F: drivers/net/wireless/marvell/mwl8k.c
MARVELL NAND CONTROLLER DRIVER
M: Miquel Raynal <miquel.raynal@free-electrons.com>
L: linux-mtd@lists.infradead.org
S: Maintained
F: drivers/mtd/nand/marvell_nand.c
F: Documentation/devicetree/bindings/mtd/marvell-nand.txt
MARVELL SOC MMC/SD/SDIO CONTROLLER DRIVER
M: Nicolas Pitre <nico@fluxnic.net>
S: Odd Fixes
@ -8953,7 +8953,7 @@ L: linux-mtd@lists.infradead.org
W: http://www.linux-mtd.infradead.org/
Q: http://patchwork.ozlabs.org/project/linux-mtd/list/
T: git git://git.infradead.org/linux-mtd.git master
T: git git://git.infradead.org/l2-mtd.git master
T: git git://git.infradead.org/linux-mtd.git mtd/next
S: Maintained
F: Documentation/devicetree/bindings/mtd/
F: drivers/mtd/
@ -9042,6 +9042,14 @@ F: drivers/media/platform/atmel/atmel-isc.c
F: drivers/media/platform/atmel/atmel-isc-regs.h
F: devicetree/bindings/media/atmel-isc.txt
MICROCHIP / ATMEL NAND DRIVER
M: Wenyou Yang <wenyou.yang@microchip.com>
M: Josh Wu <rainyfeeling@outlook.com>
L: linux-mtd@lists.infradead.org
S: Supported
F: drivers/mtd/nand/atmel/*
F: Documentation/devicetree/bindings/mtd/atmel-nand.txt
MICROCHIP KSZ SERIES ETHERNET SWITCH DRIVER
M: Woojung Huh <Woojung.Huh@microchip.com>
M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>
@ -9342,7 +9350,7 @@ L: linux-mtd@lists.infradead.org
W: http://www.linux-mtd.infradead.org/
Q: http://patchwork.ozlabs.org/project/linux-mtd/list/
T: git git://git.infradead.org/linux-mtd.git nand/fixes
T: git git://git.infradead.org/l2-mtd.git nand/next
T: git git://git.infradead.org/linux-mtd.git nand/next
S: Maintained
F: drivers/mtd/nand/
F: include/linux/mtd/*nand*.h
@ -12787,7 +12795,7 @@ L: linux-mtd@lists.infradead.org
W: http://www.linux-mtd.infradead.org/
Q: http://patchwork.ozlabs.org/project/linux-mtd/list/
T: git git://git.infradead.org/linux-mtd.git spi-nor/fixes
T: git git://git.infradead.org/l2-mtd.git spi-nor/next
T: git git://git.infradead.org/linux-mtd.git spi-nor/next
S: Maintained
F: drivers/mtd/spi-nor/
F: include/linux/mtd/spi-nor.h

View File

@ -52,6 +52,7 @@
onenand@0,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "ti,omap2-onenand";
reg = <0 0 0x20000>; /* CS0, offset 0, IO size 128K */
gpmc,sync-read;

View File

@ -147,32 +147,32 @@
gpmc,sync-read;
gpmc,sync-write;
gpmc,burst-length = <16>;
gpmc,burst-read;
gpmc,burst-wrap;
gpmc,burst-read;
gpmc,burst-write;
gpmc,device-width = <2>; /* GPMC_DEVWIDTH_16BIT */
gpmc,mux-add-data = <2>; /* GPMC_MUX_AD */
gpmc,cs-on-ns = <0>;
gpmc,cs-rd-off-ns = <87>;
gpmc,cs-wr-off-ns = <87>;
gpmc,cs-rd-off-ns = <96>;
gpmc,cs-wr-off-ns = <96>;
gpmc,adv-on-ns = <0>;
gpmc,adv-rd-off-ns = <10>;
gpmc,adv-wr-off-ns = <10>;
gpmc,oe-on-ns = <15>;
gpmc,oe-off-ns = <87>;
gpmc,adv-rd-off-ns = <12>;
gpmc,adv-wr-off-ns = <12>;
gpmc,oe-on-ns = <18>;
gpmc,oe-off-ns = <96>;
gpmc,we-on-ns = <0>;
gpmc,we-off-ns = <87>;
gpmc,rd-cycle-ns = <112>;
gpmc,wr-cycle-ns = <112>;
gpmc,access-ns = <81>;
gpmc,page-burst-access-ns = <15>;
gpmc,we-off-ns = <96>;
gpmc,rd-cycle-ns = <114>;
gpmc,wr-cycle-ns = <114>;
gpmc,access-ns = <90>;
gpmc,page-burst-access-ns = <12>;
gpmc,bus-turnaround-ns = <0>;
gpmc,cycle2cycle-delay-ns = <0>;
gpmc,wait-monitoring-ns = <0>;
gpmc,clk-activation-ns = <5>;
gpmc,clk-activation-ns = <6>;
gpmc,wr-data-mux-bus-ns = <30>;
gpmc,wr-access-ns = <81>;
gpmc,sync-clk-ps = <15000>;
gpmc,wr-access-ns = <90>;
gpmc,sync-clk-ps = <12000>;
#address-cells = <1>;
#size-cells = <1>;

View File

@ -838,6 +838,7 @@
onenand@0,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "ti,omap2-onenand";
reg = <0 0 0x20000>; /* CS0, offset 0, IO size 128K */
gpmc,sync-read;

View File

@ -367,6 +367,7 @@
onenand@0,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "ti,omap2-onenand";
reg = <0 0 0x20000>; /* CS0, offset 0, IO size 128K */
gpmc,sync-read;

View File

@ -154,6 +154,7 @@
linux,mtd-name= "samsung,kfm2g16q2m-deb8";
#address-cells = <1>;
#size-cells = <1>;
compatible = "ti,omap2-onenand";
reg = <2 0 0x20000>; /* CS2, offset 0, IO size 4 */
gpmc,device-width = <2>;

View File

@ -57,7 +57,7 @@ CONFIG_MTD_CFI_STAA=y
CONFIG_MTD_PHYSMAP_OF=y
CONFIG_MTD_M25P80=y
CONFIG_MTD_NAND=y
CONFIG_MTD_NAND_PXA3xx=y
CONFIG_MTD_NAND_MARVELL=y
CONFIG_MTD_SPI_NOR=y
CONFIG_SRAM=y
CONFIG_MTD_UBI=y

View File

@ -232,6 +232,3 @@ obj-y += $(omap-hsmmc-m) $(omap-hsmmc-y)
obj-y += omap_phy_internal.o
obj-$(CONFIG_MACH_OMAP2_TUSB6010) += usb-tusb6010.o
onenand-$(CONFIG_MTD_ONENAND_OMAP2) := gpmc-onenand.o
obj-y += $(onenand-m) $(onenand-y)

View File

@ -1,409 +0,0 @@
/*
* linux/arch/arm/mach-omap2/gpmc-onenand.c
*
* Copyright (C) 2006 - 2009 Nokia Corporation
* Contacts: Juha Yrjola
* Tony Lindgren
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/mtd/onenand_regs.h>
#include <linux/io.h>
#include <linux/omap-gpmc.h>
#include <linux/platform_data/mtd-onenand-omap2.h>
#include <linux/err.h>
#include <asm/mach/flash.h>
#include "soc.h"
#define ONENAND_IO_SIZE SZ_128K
#define ONENAND_FLAG_SYNCREAD (1 << 0)
#define ONENAND_FLAG_SYNCWRITE (1 << 1)
#define ONENAND_FLAG_HF (1 << 2)
#define ONENAND_FLAG_VHF (1 << 3)
static unsigned onenand_flags;
static unsigned latency;
static struct omap_onenand_platform_data *gpmc_onenand_data;
static struct resource gpmc_onenand_resource = {
.flags = IORESOURCE_MEM,
};
static struct platform_device gpmc_onenand_device = {
.name = "omap2-onenand",
.id = -1,
.num_resources = 1,
.resource = &gpmc_onenand_resource,
};
static struct gpmc_settings onenand_async = {
.device_width = GPMC_DEVWIDTH_16BIT,
.mux_add_data = GPMC_MUX_AD,
};
static struct gpmc_settings onenand_sync = {
.burst_read = true,
.burst_wrap = true,
.burst_len = GPMC_BURST_16,
.device_width = GPMC_DEVWIDTH_16BIT,
.mux_add_data = GPMC_MUX_AD,
.wait_pin = 0,
};
static void omap2_onenand_calc_async_timings(struct gpmc_timings *t)
{
struct gpmc_device_timings dev_t;
const int t_cer = 15;
const int t_avdp = 12;
const int t_aavdh = 7;
const int t_ce = 76;
const int t_aa = 76;
const int t_oe = 20;
const int t_cez = 20; /* max of t_cez, t_oez */
const int t_wpl = 40;
const int t_wph = 30;
memset(&dev_t, 0, sizeof(dev_t));
dev_t.t_avdp_r = max_t(int, t_avdp, t_cer) * 1000;
dev_t.t_avdp_w = dev_t.t_avdp_r;
dev_t.t_aavdh = t_aavdh * 1000;
dev_t.t_aa = t_aa * 1000;
dev_t.t_ce = t_ce * 1000;
dev_t.t_oe = t_oe * 1000;
dev_t.t_cez_r = t_cez * 1000;
dev_t.t_cez_w = dev_t.t_cez_r;
dev_t.t_wpl = t_wpl * 1000;
dev_t.t_wph = t_wph * 1000;
gpmc_calc_timings(t, &onenand_async, &dev_t);
}
static void omap2_onenand_set_async_mode(void __iomem *onenand_base)
{
u32 reg;
/* Ensure sync read and sync write are disabled */
reg = readw(onenand_base + ONENAND_REG_SYS_CFG1);
reg &= ~ONENAND_SYS_CFG1_SYNC_READ & ~ONENAND_SYS_CFG1_SYNC_WRITE;
writew(reg, onenand_base + ONENAND_REG_SYS_CFG1);
}
static void set_onenand_cfg(void __iomem *onenand_base)
{
u32 reg = ONENAND_SYS_CFG1_RDY | ONENAND_SYS_CFG1_INT;
reg |= (latency << ONENAND_SYS_CFG1_BRL_SHIFT) |
ONENAND_SYS_CFG1_BL_16;
if (onenand_flags & ONENAND_FLAG_SYNCREAD)
reg |= ONENAND_SYS_CFG1_SYNC_READ;
else
reg &= ~ONENAND_SYS_CFG1_SYNC_READ;
if (onenand_flags & ONENAND_FLAG_SYNCWRITE)
reg |= ONENAND_SYS_CFG1_SYNC_WRITE;
else
reg &= ~ONENAND_SYS_CFG1_SYNC_WRITE;
if (onenand_flags & ONENAND_FLAG_HF)
reg |= ONENAND_SYS_CFG1_HF;
else
reg &= ~ONENAND_SYS_CFG1_HF;
if (onenand_flags & ONENAND_FLAG_VHF)
reg |= ONENAND_SYS_CFG1_VHF;
else
reg &= ~ONENAND_SYS_CFG1_VHF;
writew(reg, onenand_base + ONENAND_REG_SYS_CFG1);
}
static int omap2_onenand_get_freq(struct omap_onenand_platform_data *cfg,
void __iomem *onenand_base)
{
u16 ver = readw(onenand_base + ONENAND_REG_VERSION_ID);
int freq;
switch ((ver >> 4) & 0xf) {
case 0:
freq = 40;
break;
case 1:
freq = 54;
break;
case 2:
freq = 66;
break;
case 3:
freq = 83;
break;
case 4:
freq = 104;
break;
default:
pr_err("onenand rate not detected, bad GPMC async timings?\n");
freq = 0;
}
return freq;
}
static void omap2_onenand_calc_sync_timings(struct gpmc_timings *t,
unsigned int flags,
int freq)
{
struct gpmc_device_timings dev_t;
const int t_cer = 15;
const int t_avdp = 12;
const int t_cez = 20; /* max of t_cez, t_oez */
const int t_wpl = 40;
const int t_wph = 30;
int min_gpmc_clk_period, t_ces, t_avds, t_avdh, t_ach, t_aavdh, t_rdyo;
int div, gpmc_clk_ns;
if (flags & ONENAND_SYNC_READ)
onenand_flags = ONENAND_FLAG_SYNCREAD;
else if (flags & ONENAND_SYNC_READWRITE)
onenand_flags = ONENAND_FLAG_SYNCREAD | ONENAND_FLAG_SYNCWRITE;
switch (freq) {
case 104:
min_gpmc_clk_period = 9600; /* 104 MHz */
t_ces = 3;
t_avds = 4;
t_avdh = 2;
t_ach = 3;
t_aavdh = 6;
t_rdyo = 6;
break;
case 83:
min_gpmc_clk_period = 12000; /* 83 MHz */
t_ces = 5;
t_avds = 4;
t_avdh = 2;
t_ach = 6;
t_aavdh = 6;
t_rdyo = 9;
break;
case 66:
min_gpmc_clk_period = 15000; /* 66 MHz */
t_ces = 6;
t_avds = 5;
t_avdh = 2;
t_ach = 6;
t_aavdh = 6;
t_rdyo = 11;
break;
default:
min_gpmc_clk_period = 18500; /* 54 MHz */
t_ces = 7;
t_avds = 7;
t_avdh = 7;
t_ach = 9;
t_aavdh = 7;
t_rdyo = 15;
onenand_flags &= ~ONENAND_FLAG_SYNCWRITE;
break;
}
div = gpmc_calc_divider(min_gpmc_clk_period);
gpmc_clk_ns = gpmc_ticks_to_ns(div);
if (gpmc_clk_ns < 15) /* >66MHz */
onenand_flags |= ONENAND_FLAG_HF;
else
onenand_flags &= ~ONENAND_FLAG_HF;
if (gpmc_clk_ns < 12) /* >83MHz */
onenand_flags |= ONENAND_FLAG_VHF;
else
onenand_flags &= ~ONENAND_FLAG_VHF;
if (onenand_flags & ONENAND_FLAG_VHF)
latency = 8;
else if (onenand_flags & ONENAND_FLAG_HF)
latency = 6;
else if (gpmc_clk_ns >= 25) /* 40 MHz*/
latency = 3;
else
latency = 4;
/* Set synchronous read timings */
memset(&dev_t, 0, sizeof(dev_t));
if (onenand_flags & ONENAND_FLAG_SYNCREAD)
onenand_sync.sync_read = true;
if (onenand_flags & ONENAND_FLAG_SYNCWRITE) {
onenand_sync.sync_write = true;
onenand_sync.burst_write = true;
} else {
dev_t.t_avdp_w = max(t_avdp, t_cer) * 1000;
dev_t.t_wpl = t_wpl * 1000;
dev_t.t_wph = t_wph * 1000;
dev_t.t_aavdh = t_aavdh * 1000;
}
dev_t.ce_xdelay = true;
dev_t.avd_xdelay = true;
dev_t.oe_xdelay = true;
dev_t.we_xdelay = true;
dev_t.clk = min_gpmc_clk_period;
dev_t.t_bacc = dev_t.clk;
dev_t.t_ces = t_ces * 1000;
dev_t.t_avds = t_avds * 1000;
dev_t.t_avdh = t_avdh * 1000;
dev_t.t_ach = t_ach * 1000;
dev_t.cyc_iaa = (latency + 1);
dev_t.t_cez_r = t_cez * 1000;
dev_t.t_cez_w = dev_t.t_cez_r;
dev_t.cyc_aavdh_oe = 1;
dev_t.t_rdyo = t_rdyo * 1000 + min_gpmc_clk_period;
gpmc_calc_timings(t, &onenand_sync, &dev_t);
}
static int omap2_onenand_setup_async(void __iomem *onenand_base)
{
struct gpmc_timings t;
int ret;
/*
* Note that we need to keep sync_write set for the call to
* omap2_onenand_set_async_mode() to work to detect the onenand
* supported clock rate for the sync timings.
*/
if (gpmc_onenand_data->of_node) {
gpmc_read_settings_dt(gpmc_onenand_data->of_node,
&onenand_async);
if (onenand_async.sync_read || onenand_async.sync_write) {
if (onenand_async.sync_write)
gpmc_onenand_data->flags |=
ONENAND_SYNC_READWRITE;
else
gpmc_onenand_data->flags |= ONENAND_SYNC_READ;
onenand_async.sync_read = false;
}
}
onenand_async.sync_write = true;
omap2_onenand_calc_async_timings(&t);
ret = gpmc_cs_program_settings(gpmc_onenand_data->cs, &onenand_async);
if (ret < 0)
return ret;
ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_async);
if (ret < 0)
return ret;
omap2_onenand_set_async_mode(onenand_base);
return 0;
}
static int omap2_onenand_setup_sync(void __iomem *onenand_base, int *freq_ptr)
{
int ret, freq = *freq_ptr;
struct gpmc_timings t;
if (!freq) {
/* Very first call freq is not known */
freq = omap2_onenand_get_freq(gpmc_onenand_data, onenand_base);
if (!freq)
return -ENODEV;
set_onenand_cfg(onenand_base);
}
if (gpmc_onenand_data->of_node) {
gpmc_read_settings_dt(gpmc_onenand_data->of_node,
&onenand_sync);
} else {
/*
* FIXME: Appears to be legacy code from initial ONENAND commit.
* Unclear what boards this is for and if this can be removed.
*/
if (!cpu_is_omap34xx())
onenand_sync.wait_on_read = true;
}
omap2_onenand_calc_sync_timings(&t, gpmc_onenand_data->flags, freq);
ret = gpmc_cs_program_settings(gpmc_onenand_data->cs, &onenand_sync);
if (ret < 0)
return ret;
ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_sync);
if (ret < 0)
return ret;
set_onenand_cfg(onenand_base);
*freq_ptr = freq;
return 0;
}
static int gpmc_onenand_setup(void __iomem *onenand_base, int *freq_ptr)
{
struct device *dev = &gpmc_onenand_device.dev;
unsigned l = ONENAND_SYNC_READ | ONENAND_SYNC_READWRITE;
int ret;
ret = omap2_onenand_setup_async(onenand_base);
if (ret) {
dev_err(dev, "unable to set to async mode\n");
return ret;
}
if (!(gpmc_onenand_data->flags & l))
return 0;
ret = omap2_onenand_setup_sync(onenand_base, freq_ptr);
if (ret)
dev_err(dev, "unable to set to sync mode\n");
return ret;
}
int gpmc_onenand_init(struct omap_onenand_platform_data *_onenand_data)
{
int err;
struct device *dev = &gpmc_onenand_device.dev;
gpmc_onenand_data = _onenand_data;
gpmc_onenand_data->onenand_setup = gpmc_onenand_setup;
gpmc_onenand_device.dev.platform_data = gpmc_onenand_data;
if (cpu_is_omap24xx() &&
(gpmc_onenand_data->flags & ONENAND_SYNC_READWRITE)) {
dev_warn(dev, "OneNAND using only SYNC_READ on 24xx\n");
gpmc_onenand_data->flags &= ~ONENAND_SYNC_READWRITE;
gpmc_onenand_data->flags |= ONENAND_SYNC_READ;
}
if (cpu_is_omap34xx())
gpmc_onenand_data->flags |= ONENAND_IN_OMAP34XX;
else
gpmc_onenand_data->flags &= ~ONENAND_IN_OMAP34XX;
err = gpmc_cs_request(gpmc_onenand_data->cs, ONENAND_IO_SIZE,
(unsigned long *)&gpmc_onenand_resource.start);
if (err < 0) {
dev_err(dev, "Cannot request GPMC CS %d, error %d\n",
gpmc_onenand_data->cs, err);
return err;
}
gpmc_onenand_resource.end = gpmc_onenand_resource.start +
ONENAND_IO_SIZE - 1;
err = platform_device_register(&gpmc_onenand_device);
if (err) {
dev_err(dev, "Unable to register OneNAND device\n");
gpmc_cs_free(gpmc_onenand_data->cs);
}
return err;
}

View File

@ -161,7 +161,7 @@ CONFIG_MTD_BLOCK=y
CONFIG_MTD_M25P80=y
CONFIG_MTD_NAND=y
CONFIG_MTD_NAND_DENALI_DT=y
CONFIG_MTD_NAND_PXA3xx=y
CONFIG_MTD_NAND_MARVELL=y
CONFIG_MTD_SPI_NOR=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_NBD=m

View File

@ -32,7 +32,6 @@
#include <linux/pm_runtime.h>
#include <linux/platform_data/mtd-nand-omap2.h>
#include <linux/platform_data/mtd-onenand-omap2.h>
#include <asm/mach-types.h>
@ -1138,6 +1137,112 @@ struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *reg, int cs)
}
EXPORT_SYMBOL_GPL(gpmc_omap_get_nand_ops);
static void gpmc_omap_onenand_calc_sync_timings(struct gpmc_timings *t,
struct gpmc_settings *s,
int freq, int latency)
{
struct gpmc_device_timings dev_t;
const int t_cer = 15;
const int t_avdp = 12;
const int t_cez = 20; /* max of t_cez, t_oez */
const int t_wpl = 40;
const int t_wph = 30;
int min_gpmc_clk_period, t_ces, t_avds, t_avdh, t_ach, t_aavdh, t_rdyo;
switch (freq) {
case 104:
min_gpmc_clk_period = 9600; /* 104 MHz */
t_ces = 3;
t_avds = 4;
t_avdh = 2;
t_ach = 3;
t_aavdh = 6;
t_rdyo = 6;
break;
case 83:
min_gpmc_clk_period = 12000; /* 83 MHz */
t_ces = 5;
t_avds = 4;
t_avdh = 2;
t_ach = 6;
t_aavdh = 6;
t_rdyo = 9;
break;
case 66:
min_gpmc_clk_period = 15000; /* 66 MHz */
t_ces = 6;
t_avds = 5;
t_avdh = 2;
t_ach = 6;
t_aavdh = 6;
t_rdyo = 11;
break;
default:
min_gpmc_clk_period = 18500; /* 54 MHz */
t_ces = 7;
t_avds = 7;
t_avdh = 7;
t_ach = 9;
t_aavdh = 7;
t_rdyo = 15;
break;
}
/* Set synchronous read timings */
memset(&dev_t, 0, sizeof(dev_t));
if (!s->sync_write) {
dev_t.t_avdp_w = max(t_avdp, t_cer) * 1000;
dev_t.t_wpl = t_wpl * 1000;
dev_t.t_wph = t_wph * 1000;
dev_t.t_aavdh = t_aavdh * 1000;
}
dev_t.ce_xdelay = true;
dev_t.avd_xdelay = true;
dev_t.oe_xdelay = true;
dev_t.we_xdelay = true;
dev_t.clk = min_gpmc_clk_period;
dev_t.t_bacc = dev_t.clk;
dev_t.t_ces = t_ces * 1000;
dev_t.t_avds = t_avds * 1000;
dev_t.t_avdh = t_avdh * 1000;
dev_t.t_ach = t_ach * 1000;
dev_t.cyc_iaa = (latency + 1);
dev_t.t_cez_r = t_cez * 1000;
dev_t.t_cez_w = dev_t.t_cez_r;
dev_t.cyc_aavdh_oe = 1;
dev_t.t_rdyo = t_rdyo * 1000 + min_gpmc_clk_period;
gpmc_calc_timings(t, s, &dev_t);
}
int gpmc_omap_onenand_set_timings(struct device *dev, int cs, int freq,
int latency,
struct gpmc_onenand_info *info)
{
int ret;
struct gpmc_timings gpmc_t;
struct gpmc_settings gpmc_s;
gpmc_read_settings_dt(dev->of_node, &gpmc_s);
info->sync_read = gpmc_s.sync_read;
info->sync_write = gpmc_s.sync_write;
info->burst_len = gpmc_s.burst_len;
if (!gpmc_s.sync_read && !gpmc_s.sync_write)
return 0;
gpmc_omap_onenand_calc_sync_timings(&gpmc_t, &gpmc_s, freq, latency);
ret = gpmc_cs_program_settings(cs, &gpmc_s);
if (ret < 0)
return ret;
return gpmc_cs_set_timings(cs, &gpmc_t, &gpmc_s);
}
EXPORT_SYMBOL_GPL(gpmc_omap_onenand_set_timings);
int gpmc_get_client_irq(unsigned irq_config)
{
if (!gpmc_irq_domain) {
@ -1916,41 +2021,6 @@ static void __maybe_unused gpmc_read_timings_dt(struct device_node *np,
of_property_read_bool(np, "gpmc,time-para-granularity");
}
#if IS_ENABLED(CONFIG_MTD_ONENAND)
static int gpmc_probe_onenand_child(struct platform_device *pdev,
struct device_node *child)
{
u32 val;
struct omap_onenand_platform_data *gpmc_onenand_data;
if (of_property_read_u32(child, "reg", &val) < 0) {
dev_err(&pdev->dev, "%pOF has no 'reg' property\n",
child);
return -ENODEV;
}
gpmc_onenand_data = devm_kzalloc(&pdev->dev, sizeof(*gpmc_onenand_data),
GFP_KERNEL);
if (!gpmc_onenand_data)
return -ENOMEM;
gpmc_onenand_data->cs = val;
gpmc_onenand_data->of_node = child;
gpmc_onenand_data->dma_channel = -1;
if (!of_property_read_u32(child, "dma-channel", &val))
gpmc_onenand_data->dma_channel = val;
return gpmc_onenand_init(gpmc_onenand_data);
}
#else
static int gpmc_probe_onenand_child(struct platform_device *pdev,
struct device_node *child)
{
return 0;
}
#endif
/**
* gpmc_probe_generic_child - configures the gpmc for a child device
* @pdev: pointer to gpmc platform device
@ -2053,6 +2123,16 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
}
}
if (of_node_cmp(child->name, "onenand") == 0) {
/* Warn about older DT blobs with no compatible property */
if (!of_property_read_bool(child, "compatible")) {
dev_warn(&pdev->dev,
"Incompatible OneNAND node: missing compatible");
ret = -EINVAL;
goto err;
}
}
if (of_device_is_compatible(child, "ti,omap2-nand")) {
/* NAND specific setup */
val = 8;
@ -2077,8 +2157,9 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
} else {
ret = of_property_read_u32(child, "bank-width",
&gpmc_s.device_width);
if (ret < 0) {
dev_err(&pdev->dev, "%pOF has no 'bank-width' property\n",
if (ret < 0 && !gpmc_s.device_width) {
dev_err(&pdev->dev,
"%pOF has no 'gpmc,device-width' property\n",
child);
goto err;
}
@ -2188,11 +2269,7 @@ static void gpmc_probe_dt_children(struct platform_device *pdev)
if (!child->name)
continue;
if (of_node_cmp(child->name, "onenand") == 0)
ret = gpmc_probe_onenand_child(pdev, child);
else
ret = gpmc_probe_generic_child(pdev, child);
ret = gpmc_probe_generic_child(pdev, child);
if (ret) {
dev_err(&pdev->dev, "failed to probe DT child '%s': %d\n",
child->name, ret);

View File

@ -904,9 +904,6 @@ static int doc_read_oob(struct mtd_info *mtd, loff_t from,
if (ooblen % DOC_LAYOUT_OOB_SIZE)
return -EINVAL;
if (from + len > mtd->size)
return -EINVAL;
ops->oobretlen = 0;
ops->retlen = 0;
ret = 0;
@ -990,36 +987,6 @@ err_in_read:
goto out;
}
/**
* doc_read - Read bytes from flash
* @mtd: the device
* @from: the offset from first block and first page, in bytes, aligned on page
* size
* @len: the number of bytes to read (must be a multiple of 4)
* @retlen: the number of bytes actually read
* @buf: the filled in buffer
*
* Reads flash memory pages. This function does not read the OOB chunk, but only
* the page data.
*
* Returns 0 if read successful, of -EIO, -EINVAL if an error occurred
*/
static int doc_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
{
struct mtd_oob_ops ops;
size_t ret;
memset(&ops, 0, sizeof(ops));
ops.datbuf = buf;
ops.len = len;
ops.mode = MTD_OPS_AUTO_OOB;
ret = doc_read_oob(mtd, from, &ops);
*retlen = ops.retlen;
return ret;
}
static int doc_reload_bbt(struct docg3 *docg3)
{
int block = DOC_LAYOUT_BLOCK_BBT;
@ -1471,8 +1438,6 @@ static int doc_write_oob(struct mtd_info *mtd, loff_t ofs,
if (len && ooblen &&
(len / DOC_LAYOUT_PAGE_SIZE) != (ooblen / oobdelta))
return -EINVAL;
if (ofs + len > mtd->size)
return -EINVAL;
ops->oobretlen = 0;
ops->retlen = 0;
@ -1513,39 +1478,6 @@ static int doc_write_oob(struct mtd_info *mtd, loff_t ofs,
return ret;
}
/**
* doc_write - Write a buffer to the chip
* @mtd: the device
* @to: the offset from first block and first page, in bytes, aligned on page
* size
* @len: the number of bytes to write (must be a full page size, ie. 512)
* @retlen: the number of bytes actually written (0 or 512)
* @buf: the buffer to get bytes from
*
* Writes data to the chip.
*
* Returns 0 if write successful, -EIO if write error
*/
static int doc_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct docg3 *docg3 = mtd->priv;
int ret;
struct mtd_oob_ops ops;
doc_dbg("doc_write(to=%lld, len=%zu)\n", to, len);
ops.datbuf = (char *)buf;
ops.len = len;
ops.mode = MTD_OPS_PLACE_OOB;
ops.oobbuf = NULL;
ops.ooblen = 0;
ops.ooboffs = 0;
ret = doc_write_oob(mtd, to, &ops);
*retlen = ops.retlen;
return ret;
}
static struct docg3 *sysfs_dev2docg3(struct device *dev,
struct device_attribute *attr)
{
@ -1866,8 +1798,6 @@ static int __init doc_set_driver_info(int chip_id, struct mtd_info *mtd)
mtd->writebufsize = mtd->writesize = DOC_LAYOUT_PAGE_SIZE;
mtd->oobsize = DOC_LAYOUT_OOB_SIZE;
mtd->_erase = doc_erase;
mtd->_read = doc_read;
mtd->_write = doc_write;
mtd->_read_oob = doc_read_oob;
mtd->_write_oob = doc_write_oob;
mtd->_block_isbad = doc_block_isbad;

View File

@ -307,10 +307,18 @@ static int m25p_remove(struct spi_device *spi)
{
struct m25p *flash = spi_get_drvdata(spi);
spi_nor_restore(&flash->spi_nor);
/* Clean up MTD stuff. */
return mtd_device_unregister(&flash->spi_nor.mtd);
}
static void m25p_shutdown(struct spi_device *spi)
{
struct m25p *flash = spi_get_drvdata(spi);
spi_nor_restore(&flash->spi_nor);
}
/*
* Do NOT add to this array without reading the following:
*
@ -386,6 +394,7 @@ static struct spi_driver m25p80_driver = {
.id_table = m25p_ids,
.probe = m25p_probe,
.remove = m25p_remove,
.shutdown = m25p_shutdown,
/* REVISIT: many of these chips have deep power-down modes, which
* should clearly be entered on suspend() to minimize power use.

View File

@ -68,6 +68,7 @@ static int mchp23k256_write(struct mtd_info *mtd, loff_t to, size_t len,
struct spi_transfer transfer[2] = {};
struct spi_message message;
unsigned char command[MAX_CMD_SIZE];
int ret;
spi_message_init(&message);
@ -84,12 +85,16 @@ static int mchp23k256_write(struct mtd_info *mtd, loff_t to, size_t len,
mutex_lock(&flash->lock);
spi_sync(flash->spi, &message);
ret = spi_sync(flash->spi, &message);
mutex_unlock(&flash->lock);
if (ret)
return ret;
if (retlen && message.actual_length > sizeof(command))
*retlen += message.actual_length - sizeof(command);
mutex_unlock(&flash->lock);
return 0;
}
@ -100,6 +105,7 @@ static int mchp23k256_read(struct mtd_info *mtd, loff_t from, size_t len,
struct spi_transfer transfer[2] = {};
struct spi_message message;
unsigned char command[MAX_CMD_SIZE];
int ret;
spi_message_init(&message);
@ -117,12 +123,16 @@ static int mchp23k256_read(struct mtd_info *mtd, loff_t from, size_t len,
mutex_lock(&flash->lock);
spi_sync(flash->spi, &message);
ret = spi_sync(flash->spi, &message);
mutex_unlock(&flash->lock);
if (ret)
return ret;
if (retlen && message.actual_length > sizeof(command))
*retlen += message.actual_length - sizeof(command);
mutex_unlock(&flash->lock);
return 0;
}

View File

@ -503,6 +503,11 @@ int add_mtd_device(struct mtd_info *mtd)
return -EEXIST;
BUG_ON(mtd->writesize == 0);
if (WARN_ON((!mtd->erasesize || !mtd->_erase) &&
!(mtd->flags & MTD_NO_ERASE)))
return -EINVAL;
mutex_lock(&mtd_table_mutex);
i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL);
@ -1053,7 +1058,20 @@ int mtd_read(struct mtd_info *mtd, loff_t from, size_t len, size_t *retlen,
* representing the maximum number of bitflips that were corrected on
* any one ecc region (if applicable; zero otherwise).
*/
ret_code = mtd->_read(mtd, from, len, retlen, buf);
if (mtd->_read) {
ret_code = mtd->_read(mtd, from, len, retlen, buf);
} else if (mtd->_read_oob) {
struct mtd_oob_ops ops = {
.len = len,
.datbuf = buf,
};
ret_code = mtd->_read_oob(mtd, from, &ops);
*retlen = ops.retlen;
} else {
return -ENOTSUPP;
}
if (unlikely(ret_code < 0))
return ret_code;
if (mtd->ecc_strength == 0)
@ -1068,11 +1086,25 @@ int mtd_write(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen,
*retlen = 0;
if (to < 0 || to >= mtd->size || len > mtd->size - to)
return -EINVAL;
if (!mtd->_write || !(mtd->flags & MTD_WRITEABLE))
if ((!mtd->_write && !mtd->_write_oob) ||
!(mtd->flags & MTD_WRITEABLE))
return -EROFS;
if (!len)
return 0;
ledtrig_mtd_activity();
if (!mtd->_write) {
struct mtd_oob_ops ops = {
.len = len,
.datbuf = (u8 *)buf,
};
int ret;
ret = mtd->_write_oob(mtd, to, &ops);
*retlen = ops.retlen;
return ret;
}
return mtd->_write(mtd, to, len, retlen, buf);
}
EXPORT_SYMBOL_GPL(mtd_write);

View File

@ -105,34 +105,17 @@ static int part_read_oob(struct mtd_info *mtd, loff_t from,
struct mtd_oob_ops *ops)
{
struct mtd_part *part = mtd_to_part(mtd);
struct mtd_ecc_stats stats;
int res;
if (from >= mtd->size)
return -EINVAL;
if (ops->datbuf && from + ops->len > mtd->size)
return -EINVAL;
/*
* If OOB is also requested, make sure that we do not read past the end
* of this partition.
*/
if (ops->oobbuf) {
size_t len, pages;
len = mtd_oobavail(mtd, ops);
pages = mtd_div_by_ws(mtd->size, mtd);
pages -= mtd_div_by_ws(from, mtd);
if (ops->ooboffs + ops->ooblen > pages * len)
return -EINVAL;
}
stats = part->parent->ecc_stats;
res = part->parent->_read_oob(part->parent, from + part->offset, ops);
if (unlikely(res)) {
if (mtd_is_bitflip(res))
mtd->ecc_stats.corrected++;
if (mtd_is_eccerr(res))
mtd->ecc_stats.failed++;
}
if (unlikely(mtd_is_eccerr(res)))
mtd->ecc_stats.failed +=
part->parent->ecc_stats.failed - stats.failed;
else
mtd->ecc_stats.corrected +=
part->parent->ecc_stats.corrected - stats.corrected;
return res;
}
@ -189,10 +172,6 @@ static int part_write_oob(struct mtd_info *mtd, loff_t to,
{
struct mtd_part *part = mtd_to_part(mtd);
if (to >= mtd->size)
return -EINVAL;
if (ops->datbuf && to + ops->len > mtd->size)
return -EINVAL;
return part->parent->_write_oob(part->parent, to + part->offset, ops);
}
@ -435,8 +414,10 @@ static struct mtd_part *allocate_partition(struct mtd_info *parent,
parent->dev.parent;
slave->mtd.dev.of_node = part->of_node;
slave->mtd._read = part_read;
slave->mtd._write = part_write;
if (parent->_read)
slave->mtd._read = part_read;
if (parent->_write)
slave->mtd._write = part_write;
if (parent->_panic_write)
slave->mtd._panic_write = part_panic_write;

View File

@ -1223,8 +1223,9 @@ static int mtdswap_show(struct seq_file *s, void *data)
unsigned int max[MTDSWAP_TREE_CNT];
unsigned int i, cw = 0, cwp = 0, cwecount = 0, bb_cnt, mapped, pages;
uint64_t use_size;
char *name[] = {"clean", "used", "low", "high", "dirty", "bitflip",
"failing"};
static const char * const name[] = {
"clean", "used", "low", "high", "dirty", "bitflip", "failing"
};
mutex_lock(&d->mbd_dev->lock);

View File

@ -315,6 +315,7 @@ config MTD_NAND_ATMEL
config MTD_NAND_PXA3xx
tristate "NAND support on PXA3xx and Armada 370/XP"
depends on !MTD_NAND_MARVELL
depends on PXA3xx || ARCH_MMP || PLAT_ORION || ARCH_MVEBU
help
@ -323,6 +324,18 @@ config MTD_NAND_PXA3xx
platforms (XP, 370, 375, 38x, 39x) and 64-bit Armada
platforms (7K, 8K) (NFCv2).
config MTD_NAND_MARVELL
tristate "NAND controller support on Marvell boards"
depends on PXA3xx || ARCH_MMP || PLAT_ORION || ARCH_MVEBU || \
COMPILE_TEST
depends on HAS_IOMEM
help
This enables the NAND flash controller driver for Marvell boards,
including:
- PXA3xx processors (NFCv1)
- 32-bit Armada platforms (XP, 37x, 38x, 39x) (NFCv2)
- 64-bit Armada platforms (7K, 8K) (NFCv2)
config MTD_NAND_SLC_LPC32XX
tristate "NXP LPC32xx SLC Controller"
depends on ARCH_LPC32XX
@ -376,9 +389,7 @@ config MTD_NAND_GPMI_NAND
Enables NAND Flash support for IMX23, IMX28 or IMX6.
The GPMI controller is very powerful, with the help of BCH
module, it can do the hardware ECC. The GPMI supports several
NAND flashs at the same time. The GPMI may conflicts with other
block, such as SD card. So pay attention to it when you enable
the GPMI.
NAND flashs at the same time.
config MTD_NAND_BRCMNAND
tristate "Broadcom STB NAND controller"

View File

@ -32,6 +32,7 @@ obj-$(CONFIG_MTD_NAND_OMAP2) += omap2_nand.o
obj-$(CONFIG_MTD_NAND_OMAP_BCH_BUILD) += omap_elm.o
obj-$(CONFIG_MTD_NAND_CM_X270) += cmx270_nand.o
obj-$(CONFIG_MTD_NAND_PXA3xx) += pxa3xx_nand.o
obj-$(CONFIG_MTD_NAND_MARVELL) += marvell_nand.o
obj-$(CONFIG_MTD_NAND_TMIO) += tmio_nand.o
obj-$(CONFIG_MTD_NAND_PLATFORM) += plat_nand.o
obj-$(CONFIG_MTD_NAND_PASEMI) += pasemi_nand.o

View File

@ -841,6 +841,8 @@ static int atmel_nand_pmecc_write_pg(struct nand_chip *chip, const u8 *buf,
struct atmel_nand *nand = to_atmel_nand(chip);
int ret;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
ret = atmel_nand_pmecc_enable(chip, NAND_ECC_WRITE, raw);
if (ret)
return ret;
@ -857,7 +859,7 @@ static int atmel_nand_pmecc_write_pg(struct nand_chip *chip, const u8 *buf,
atmel_nand_write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
static int atmel_nand_pmecc_write_page(struct mtd_info *mtd,
@ -881,6 +883,8 @@ static int atmel_nand_pmecc_read_pg(struct nand_chip *chip, u8 *buf,
struct mtd_info *mtd = nand_to_mtd(chip);
int ret;
nand_read_page_op(chip, page, 0, NULL, 0);
ret = atmel_nand_pmecc_enable(chip, NAND_ECC_READ, raw);
if (ret)
return ret;
@ -1000,7 +1004,7 @@ static int atmel_hsmc_nand_pmecc_read_pg(struct nand_chip *chip, u8 *buf,
* to the non-optimized one.
*/
if (nand->activecs->rb.type != ATMEL_NAND_NATIVE_RB) {
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
nand_read_page_op(chip, page, 0, NULL, 0);
return atmel_nand_pmecc_read_pg(chip, buf, oob_required, page,
raw);
@ -1178,7 +1182,6 @@ static int atmel_hsmc_nand_ecc_init(struct atmel_nand *nand)
chip->ecc.write_page = atmel_hsmc_nand_pmecc_write_page;
chip->ecc.read_page_raw = atmel_hsmc_nand_pmecc_read_page_raw;
chip->ecc.write_page_raw = atmel_hsmc_nand_pmecc_write_page_raw;
chip->ecc.options |= NAND_ECC_CUSTOM_PAGE_ACCESS;
return 0;
}

View File

@ -572,6 +572,8 @@ static void bf5xx_nand_dma_write_buf(struct mtd_info *mtd,
static int bf5xx_nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
nand_read_page_op(chip, page, 0, NULL, 0);
bf5xx_nand_read_buf(mtd, buf, mtd->writesize);
bf5xx_nand_read_buf(mtd, chip->oob_poi, mtd->oobsize);
@ -582,10 +584,10 @@ static int bf5xx_nand_write_page_raw(struct mtd_info *mtd,
struct nand_chip *chip, const uint8_t *buf, int oob_required,
int page)
{
bf5xx_nand_write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
bf5xx_nand_write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
/*

View File

@ -1071,7 +1071,7 @@ static void brcmnand_wp(struct mtd_info *mtd, int wp)
return;
brcmnand_set_wp(ctrl, wp);
chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
nand_status_op(chip, NULL);
/* NAND_STATUS_WP 0x00 = protected, 0x80 = not protected */
ret = bcmnand_ctrl_poll_status(ctrl,
NAND_CTRL_RDY |
@ -1453,7 +1453,7 @@ static uint8_t brcmnand_read_byte(struct mtd_info *mtd)
/* At FC_BYTES boundary, switch to next column */
if (host->last_byte > 0 && offs == 0)
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, addr, -1);
nand_change_read_column_op(chip, addr, NULL, 0, false);
ret = ctrl->flash_cache[offs];
break;
@ -1681,7 +1681,7 @@ static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
int ret;
if (!buf) {
buf = chip->buffers->databuf;
buf = chip->data_buf;
/* Invalidate page cache */
chip->pagebuf = -1;
}
@ -1689,7 +1689,6 @@ static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
sas = mtd->oobsize / chip->ecc.steps;
/* read without ecc for verification */
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
ret = chip->ecc.read_page_raw(mtd, chip, buf, true, page);
if (ret)
return ret;
@ -1793,6 +1792,8 @@ static int brcmnand_read_page(struct mtd_info *mtd, struct nand_chip *chip,
struct brcmnand_host *host = nand_get_controller_data(chip);
u8 *oob = oob_required ? (u8 *)chip->oob_poi : NULL;
nand_read_page_op(chip, page, 0, NULL, 0);
return brcmnand_read(mtd, chip, host->last_addr,
mtd->writesize >> FC_SHIFT, (u32 *)buf, oob);
}
@ -1804,6 +1805,8 @@ static int brcmnand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
u8 *oob = oob_required ? (u8 *)chip->oob_poi : NULL;
int ret;
nand_read_page_op(chip, page, 0, NULL, 0);
brcmnand_set_ecc_enabled(host, 0);
ret = brcmnand_read(mtd, chip, host->last_addr,
mtd->writesize >> FC_SHIFT, (u32 *)buf, oob);
@ -1909,8 +1912,10 @@ static int brcmnand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
struct brcmnand_host *host = nand_get_controller_data(chip);
void *oob = oob_required ? chip->oob_poi : NULL;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
brcmnand_write(mtd, chip, host->last_addr, (const u32 *)buf, oob);
return 0;
return nand_prog_page_end_op(chip);
}
static int brcmnand_write_page_raw(struct mtd_info *mtd,
@ -1920,10 +1925,12 @@ static int brcmnand_write_page_raw(struct mtd_info *mtd,
struct brcmnand_host *host = nand_get_controller_data(chip);
void *oob = oob_required ? chip->oob_poi : NULL;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
brcmnand_set_ecc_enabled(host, 0);
brcmnand_write(mtd, chip, host->last_addr, (const u32 *)buf, oob);
brcmnand_set_ecc_enabled(host, 1);
return 0;
return nand_prog_page_end_op(chip);
}
static int brcmnand_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
@ -2193,16 +2200,9 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
if (ctrl->nand_version >= 0x0702)
tmp |= ACC_CONTROL_RD_ERASED;
tmp &= ~ACC_CONTROL_FAST_PGM_RDIN;
if (ctrl->features & BRCMNAND_HAS_PREFETCH) {
/*
* FIXME: Flash DMA + prefetch may see spurious erased-page ECC
* errors
*/
if (has_flash_dma(ctrl))
tmp &= ~ACC_CONTROL_PREFETCH;
else
tmp |= ACC_CONTROL_PREFETCH;
}
if (ctrl->features & BRCMNAND_HAS_PREFETCH)
tmp &= ~ACC_CONTROL_PREFETCH;
nand_writereg(ctrl, offs, tmp);
return 0;
@ -2230,6 +2230,9 @@ static int brcmnand_init_cs(struct brcmnand_host *host, struct device_node *dn)
nand_set_controller_data(chip, host);
mtd->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "brcmnand.%d",
host->cs);
if (!mtd->name)
return -ENOMEM;
mtd->owner = THIS_MODULE;
mtd->dev.parent = &pdev->dev;
@ -2369,12 +2372,11 @@ static int brcmnand_resume(struct device *dev)
list_for_each_entry(host, &ctrl->host_list, node) {
struct nand_chip *chip = &host->chip;
struct mtd_info *mtd = nand_to_mtd(chip);
brcmnand_save_restore_cs_config(host, 1);
/* Reset the chip, required by some chips after power-up */
chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
nand_reset_op(chip);
}
return 0;

View File

@ -353,23 +353,15 @@ static void cafe_nand_bug(struct mtd_info *mtd)
static int cafe_nand_write_oob(struct mtd_info *mtd,
struct nand_chip *chip, int page)
{
int status = 0;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, page);
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return nand_prog_page_op(chip, page, mtd->writesize, chip->oob_poi,
mtd->oobsize);
}
/* Don't use -- use nand_read_oob_std for now */
static int cafe_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
}
/**
* cafe_nand_read_page_syndrome - [REPLACEABLE] hardware ecc syndrome based page read
@ -391,7 +383,7 @@ static int cafe_nand_read_page(struct mtd_info *mtd, struct nand_chip *chip,
cafe_readl(cafe, NAND_ECC_RESULT),
cafe_readl(cafe, NAND_ECC_SYN01));
chip->read_buf(mtd, buf, mtd->writesize);
nand_read_page_op(chip, page, 0, buf, mtd->writesize);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
if (checkecc && cafe_readl(cafe, NAND_ECC_RESULT) & (1<<18)) {
@ -549,13 +541,13 @@ static int cafe_nand_write_page_lowlevel(struct mtd_info *mtd,
{
struct cafe_priv *cafe = nand_get_controller_data(chip);
chip->write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
/* Set up ECC autogeneration */
cafe->ctl2 |= (1<<30);
return 0;
return nand_prog_page_end_op(chip);
}
static int cafe_nand_block_bad(struct mtd_info *mtd, loff_t ofs)
@ -613,7 +605,6 @@ static int cafe_nand_probe(struct pci_dev *pdev,
uint32_t ctrl;
int err = 0;
int old_dma;
struct nand_buffers *nbuf;
/* Very old versions shared the same PCI ident for all three
functions on the chip. Verify the class too... */
@ -661,7 +652,6 @@ static int cafe_nand_probe(struct pci_dev *pdev,
/* Enable the following for a flash based bad block table */
cafe->nand.bbt_options = NAND_BBT_USE_FLASH;
cafe->nand.options = NAND_OWN_BUFFERS;
if (skipbbt) {
cafe->nand.options |= NAND_SKIP_BBTSCAN;
@ -731,32 +721,20 @@ static int cafe_nand_probe(struct pci_dev *pdev,
if (err)
goto out_irq;
cafe->dmabuf = dma_alloc_coherent(&cafe->pdev->dev,
2112 + sizeof(struct nand_buffers) +
mtd->writesize + mtd->oobsize,
&cafe->dmaaddr, GFP_KERNEL);
cafe->dmabuf = dma_alloc_coherent(&cafe->pdev->dev, 2112,
&cafe->dmaaddr, GFP_KERNEL);
if (!cafe->dmabuf) {
err = -ENOMEM;
goto out_irq;
}
cafe->nand.buffers = nbuf = (void *)cafe->dmabuf + 2112;
/* Set up DMA address */
cafe_writel(cafe, cafe->dmaaddr & 0xffffffff, NAND_DMA_ADDR0);
if (sizeof(cafe->dmaaddr) > 4)
/* Shift in two parts to shut the compiler up */
cafe_writel(cafe, (cafe->dmaaddr >> 16) >> 16, NAND_DMA_ADDR1);
else
cafe_writel(cafe, 0, NAND_DMA_ADDR1);
cafe_writel(cafe, lower_32_bits(cafe->dmaaddr), NAND_DMA_ADDR0);
cafe_writel(cafe, upper_32_bits(cafe->dmaaddr), NAND_DMA_ADDR1);
cafe_dev_dbg(&cafe->pdev->dev, "Set DMA address to %x (virt %p)\n",
cafe_readl(cafe, NAND_DMA_ADDR0), cafe->dmabuf);
/* this driver does not need the @ecccalc and @ecccode */
nbuf->ecccalc = NULL;
nbuf->ecccode = NULL;
nbuf->databuf = (uint8_t *)(nbuf + 1);
/* Restore the DMA flag */
usedma = old_dma;
@ -801,10 +779,7 @@ static int cafe_nand_probe(struct pci_dev *pdev,
goto out;
out_free_dma:
dma_free_coherent(&cafe->pdev->dev,
2112 + sizeof(struct nand_buffers) +
mtd->writesize + mtd->oobsize,
cafe->dmabuf, cafe->dmaaddr);
dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr);
out_irq:
/* Disable NAND IRQ in global IRQ mask register */
cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK);
@ -829,10 +804,7 @@ static void cafe_nand_remove(struct pci_dev *pdev)
nand_release(mtd);
free_rs(cafe->rs);
pci_iounmap(pdev, cafe->mmio);
dma_free_coherent(&cafe->pdev->dev,
2112 + sizeof(struct nand_buffers) +
mtd->writesize + mtd->oobsize,
cafe->dmabuf, cafe->dmaaddr);
dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr);
kfree(cafe);
}

View File

@ -330,16 +330,12 @@ static int denali_check_erased_page(struct mtd_info *mtd,
unsigned long uncor_ecc_flags,
unsigned int max_bitflips)
{
uint8_t *ecc_code = chip->buffers->ecccode;
struct denali_nand_info *denali = mtd_to_denali(mtd);
uint8_t *ecc_code = chip->oob_poi + denali->oob_skip_bytes;
int ecc_steps = chip->ecc.steps;
int ecc_size = chip->ecc.size;
int ecc_bytes = chip->ecc.bytes;
int i, ret, stat;
ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
chip->ecc.total);
if (ret)
return ret;
int i, stat;
for (i = 0; i < ecc_steps; i++) {
if (!(uncor_ecc_flags & BIT(i)))
@ -645,8 +641,6 @@ static void denali_oob_xfer(struct mtd_info *mtd, struct nand_chip *chip,
int page, int write)
{
struct denali_nand_info *denali = mtd_to_denali(mtd);
unsigned int start_cmd = write ? NAND_CMD_SEQIN : NAND_CMD_READ0;
unsigned int rnd_cmd = write ? NAND_CMD_RNDIN : NAND_CMD_RNDOUT;
int writesize = mtd->writesize;
int oobsize = mtd->oobsize;
uint8_t *bufpoi = chip->oob_poi;
@ -658,11 +652,11 @@ static void denali_oob_xfer(struct mtd_info *mtd, struct nand_chip *chip,
int i, pos, len;
/* BBM at the beginning of the OOB area */
chip->cmdfunc(mtd, start_cmd, writesize, page);
if (write)
chip->write_buf(mtd, bufpoi, oob_skip);
nand_prog_page_begin_op(chip, page, writesize, bufpoi,
oob_skip);
else
chip->read_buf(mtd, bufpoi, oob_skip);
nand_read_page_op(chip, page, writesize, bufpoi, oob_skip);
bufpoi += oob_skip;
/* OOB ECC */
@ -675,30 +669,35 @@ static void denali_oob_xfer(struct mtd_info *mtd, struct nand_chip *chip,
else if (pos + len > writesize)
len = writesize - pos;
chip->cmdfunc(mtd, rnd_cmd, pos, -1);
if (write)
chip->write_buf(mtd, bufpoi, len);
nand_change_write_column_op(chip, pos, bufpoi, len,
false);
else
chip->read_buf(mtd, bufpoi, len);
nand_change_read_column_op(chip, pos, bufpoi, len,
false);
bufpoi += len;
if (len < ecc_bytes) {
len = ecc_bytes - len;
chip->cmdfunc(mtd, rnd_cmd, writesize + oob_skip, -1);
if (write)
chip->write_buf(mtd, bufpoi, len);
nand_change_write_column_op(chip, writesize +
oob_skip, bufpoi,
len, false);
else
chip->read_buf(mtd, bufpoi, len);
nand_change_read_column_op(chip, writesize +
oob_skip, bufpoi,
len, false);
bufpoi += len;
}
}
/* OOB free */
len = oobsize - (bufpoi - chip->oob_poi);
chip->cmdfunc(mtd, rnd_cmd, size - len, -1);
if (write)
chip->write_buf(mtd, bufpoi, len);
nand_change_write_column_op(chip, size - len, bufpoi, len,
false);
else
chip->read_buf(mtd, bufpoi, len);
nand_change_read_column_op(chip, size - len, bufpoi, len,
false);
}
static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
@ -710,12 +709,12 @@ static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
int ecc_steps = chip->ecc.steps;
int ecc_size = chip->ecc.size;
int ecc_bytes = chip->ecc.bytes;
void *dma_buf = denali->buf;
void *tmp_buf = denali->buf;
int oob_skip = denali->oob_skip_bytes;
size_t size = writesize + oobsize;
int ret, i, pos, len;
ret = denali_data_xfer(denali, dma_buf, size, page, 1, 0);
ret = denali_data_xfer(denali, tmp_buf, size, page, 1, 0);
if (ret)
return ret;
@ -730,11 +729,11 @@ static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
else if (pos + len > writesize)
len = writesize - pos;
memcpy(buf, dma_buf + pos, len);
memcpy(buf, tmp_buf + pos, len);
buf += len;
if (len < ecc_size) {
len = ecc_size - len;
memcpy(buf, dma_buf + writesize + oob_skip,
memcpy(buf, tmp_buf + writesize + oob_skip,
len);
buf += len;
}
@ -745,7 +744,7 @@ static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *oob = chip->oob_poi;
/* BBM at the beginning of the OOB area */
memcpy(oob, dma_buf + writesize, oob_skip);
memcpy(oob, tmp_buf + writesize, oob_skip);
oob += oob_skip;
/* OOB ECC */
@ -758,11 +757,11 @@ static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
else if (pos + len > writesize)
len = writesize - pos;
memcpy(oob, dma_buf + pos, len);
memcpy(oob, tmp_buf + pos, len);
oob += len;
if (len < ecc_bytes) {
len = ecc_bytes - len;
memcpy(oob, dma_buf + writesize + oob_skip,
memcpy(oob, tmp_buf + writesize + oob_skip,
len);
oob += len;
}
@ -770,7 +769,7 @@ static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
/* OOB free */
len = oobsize - (oob - chip->oob_poi);
memcpy(oob, dma_buf + size - len, len);
memcpy(oob, tmp_buf + size - len, len);
}
return 0;
@ -788,16 +787,12 @@ static int denali_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
struct denali_nand_info *denali = mtd_to_denali(mtd);
int status;
denali_reset_irq(denali);
denali_oob_xfer(mtd, chip, page, 1);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return nand_prog_page_end_op(chip);
}
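This hunk shows the write-path conversion that recurs across most drivers in the series: the open-coded NAND_CMD_SEQIN / NAND_CMD_PAGEPROG / waitfunc() sequence and its manual NAND_STATUS_FAIL check are replaced by nand_prog_page_begin_op() / nand_prog_page_end_op(), which return 0 or -EIO directly. A rough before/after sketch (illustrative only, not lifted from any single driver):

        /* Before: raw opcodes plus manual status decoding. */
        chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
        chip->write_buf(mtd, buf, mtd->writesize);
        chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
        chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
        status = chip->waitfunc(mtd, chip);
        return status & NAND_STATUS_FAIL ? -EIO : 0;

        /* After: the core helpers issue the commands and map the status. */
        nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
        chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
        return nand_prog_page_end_op(chip);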
static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip,
@ -841,7 +836,7 @@ static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
int ecc_steps = chip->ecc.steps;
int ecc_size = chip->ecc.size;
int ecc_bytes = chip->ecc.bytes;
void *dma_buf = denali->buf;
void *tmp_buf = denali->buf;
int oob_skip = denali->oob_skip_bytes;
size_t size = writesize + oobsize;
int i, pos, len;
@ -851,7 +846,7 @@ static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
* This simplifies the logic.
*/
if (!buf || !oob_required)
memset(dma_buf, 0xff, size);
memset(tmp_buf, 0xff, size);
/* Arrange the buffer for syndrome payload/ecc layout */
if (buf) {
@ -864,11 +859,11 @@ static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
else if (pos + len > writesize)
len = writesize - pos;
memcpy(dma_buf + pos, buf, len);
memcpy(tmp_buf + pos, buf, len);
buf += len;
if (len < ecc_size) {
len = ecc_size - len;
memcpy(dma_buf + writesize + oob_skip, buf,
memcpy(tmp_buf + writesize + oob_skip, buf,
len);
buf += len;
}
@ -879,7 +874,7 @@ static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *oob = chip->oob_poi;
/* BBM at the beginning of the OOB area */
memcpy(dma_buf + writesize, oob, oob_skip);
memcpy(tmp_buf + writesize, oob, oob_skip);
oob += oob_skip;
/* OOB ECC */
@ -892,11 +887,11 @@ static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
else if (pos + len > writesize)
len = writesize - pos;
memcpy(dma_buf + pos, oob, len);
memcpy(tmp_buf + pos, oob, len);
oob += len;
if (len < ecc_bytes) {
len = ecc_bytes - len;
memcpy(dma_buf + writesize + oob_skip, oob,
memcpy(tmp_buf + writesize + oob_skip, oob,
len);
oob += len;
}
@ -904,10 +899,10 @@ static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
/* OOB free */
len = oobsize - (oob - chip->oob_poi);
memcpy(dma_buf + size - len, oob, len);
memcpy(tmp_buf + size - len, oob, len);
}
return denali_data_xfer(denali, dma_buf, size, page, 1, 1);
return denali_data_xfer(denali, tmp_buf, size, page, 1, 1);
}
static int denali_write_page(struct mtd_info *mtd, struct nand_chip *chip,
@ -951,7 +946,7 @@ static int denali_erase(struct mtd_info *mtd, int page)
irq_status = denali_wait_for_irq(denali,
INTR__ERASE_COMP | INTR__ERASE_FAIL);
return irq_status & INTR__ERASE_COMP ? 0 : NAND_STATUS_FAIL;
return irq_status & INTR__ERASE_COMP ? 0 : -EIO;
}
static int denali_setup_data_interface(struct mtd_info *mtd, int chipnr,
@ -1359,7 +1354,6 @@ int denali_init(struct denali_nand_info *denali)
chip->read_buf = denali_read_buf;
chip->write_buf = denali_write_buf;
}
chip->ecc.options |= NAND_ECC_CUSTOM_PAGE_ACCESS;
chip->ecc.read_page = denali_read_page;
chip->ecc.read_page_raw = denali_read_page_raw;
chip->ecc.write_page = denali_write_page;



@ -329,7 +329,7 @@ struct denali_nand_info {
#define DENALI_CAP_DMA_64BIT BIT(1)
int denali_calc_ecc_bytes(int step_size, int strength);
extern int denali_init(struct denali_nand_info *denali);
extern void denali_remove(struct denali_nand_info *denali);
int denali_init(struct denali_nand_info *denali);
void denali_remove(struct denali_nand_info *denali);
#endif /* __DENALI_H__ */


@ -125,3 +125,7 @@ static struct pci_driver denali_pci_driver = {
.remove = denali_pci_remove,
};
module_pci_driver(denali_pci_driver);
MODULE_DESCRIPTION("PCI driver for Denali NAND controller");
MODULE_AUTHOR("Intel Corporation and its suppliers");
MODULE_LICENSE("GPL v2");


@ -448,7 +448,7 @@ static int doc200x_wait(struct mtd_info *mtd, struct nand_chip *this)
int status;
DoC_WaitReady(doc);
this->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
nand_status_op(this, NULL);
DoC_WaitReady(doc);
status = (int)this->read_byte(mtd);
@ -595,7 +595,7 @@ static void doc2001plus_select_chip(struct mtd_info *mtd, int chip)
/* Assert ChipEnable and deassert WriteProtect */
WriteDOC((DOC_FLASH_CE), docptr, Mplus_FlashSelect);
this->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
nand_reset_op(this);
doc->curchip = chip;
doc->curfloor = floor;


@ -785,6 +785,8 @@ static int read_page(struct mtd_info *mtd, struct nand_chip *nand,
dev_dbg(doc->dev, "%s: page %08x\n", __func__, page);
nand_read_page_op(nand, page, 0, NULL, 0);
writew(DOC_ECCCONF0_READ_MODE |
DOC_ECCCONF0_ECC_ENABLE |
DOC_ECCCONF0_UNKNOWN |
@ -864,7 +866,7 @@ static int docg4_read_oob(struct mtd_info *mtd, struct nand_chip *nand,
dev_dbg(doc->dev, "%s: page %x\n", __func__, page);
docg4_command(mtd, NAND_CMD_READ0, nand->ecc.size, page);
nand_read_page_op(nand, page, nand->ecc.size, NULL, 0);
writew(DOC_ECCCONF0_READ_MODE | DOCG4_OOB_SIZE, docptr + DOC_ECCCONF0);
write_nop(docptr);
@ -900,6 +902,7 @@ static int docg4_erase_block(struct mtd_info *mtd, int page)
struct docg4_priv *doc = nand_get_controller_data(nand);
void __iomem *docptr = doc->virtadr;
uint16_t g4_page;
int status;
dev_dbg(doc->dev, "%s: page %04x\n", __func__, page);
@ -939,11 +942,15 @@ static int docg4_erase_block(struct mtd_info *mtd, int page)
poll_status(doc);
write_nop(docptr);
return nand->waitfunc(mtd, nand);
status = nand->waitfunc(mtd, nand);
if (status < 0)
return status;
return status & NAND_STATUS_FAIL ? -EIO : 0;
}
static int write_page(struct mtd_info *mtd, struct nand_chip *nand,
const uint8_t *buf, bool use_ecc)
const uint8_t *buf, int page, bool use_ecc)
{
struct docg4_priv *doc = nand_get_controller_data(nand);
void __iomem *docptr = doc->virtadr;
@ -951,6 +958,8 @@ static int write_page(struct mtd_info *mtd, struct nand_chip *nand,
dev_dbg(doc->dev, "%s...\n", __func__);
nand_prog_page_begin_op(nand, page, 0, NULL, 0);
writew(DOC_ECCCONF0_ECC_ENABLE |
DOC_ECCCONF0_UNKNOWN |
DOCG4_BCH_SIZE,
@ -995,19 +1004,19 @@ static int write_page(struct mtd_info *mtd, struct nand_chip *nand,
writew(0, docptr + DOC_DATAEND);
write_nop(docptr);
return 0;
return nand_prog_page_end_op(nand);
}
static int docg4_write_page_raw(struct mtd_info *mtd, struct nand_chip *nand,
const uint8_t *buf, int oob_required, int page)
{
return write_page(mtd, nand, buf, false);
return write_page(mtd, nand, buf, page, false);
}
static int docg4_write_page(struct mtd_info *mtd, struct nand_chip *nand,
const uint8_t *buf, int oob_required, int page)
{
return write_page(mtd, nand, buf, true);
return write_page(mtd, nand, buf, page, true);
}
static int docg4_write_oob(struct mtd_info *mtd, struct nand_chip *nand,


@ -713,7 +713,7 @@ static int fsl_elbc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
struct fsl_lbc_ctrl *ctrl = priv->ctrl;
struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = ctrl->nand;
fsl_elbc_read_buf(mtd, buf, mtd->writesize);
nand_read_page_op(chip, page, 0, buf, mtd->writesize);
if (oob_required)
fsl_elbc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
@ -729,10 +729,10 @@ static int fsl_elbc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
static int fsl_elbc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required, int page)
{
fsl_elbc_write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
/* ECC will be calculated automatically, and errors will be detected in
@ -742,10 +742,10 @@ static int fsl_elbc_write_subpage(struct mtd_info *mtd, struct nand_chip *chip,
uint32_t offset, uint32_t data_len,
const uint8_t *buf, int oob_required, int page)
{
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
fsl_elbc_write_buf(mtd, buf, mtd->writesize);
fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)


@ -688,7 +688,7 @@ static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
struct fsl_ifc_nand_ctrl *nctrl = ifc_nand_ctrl;
fsl_ifc_read_buf(mtd, buf, mtd->writesize);
nand_read_page_op(chip, page, 0, buf, mtd->writesize);
if (oob_required)
fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
@ -711,10 +711,10 @@ static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
static int fsl_ifc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required, int page)
{
fsl_ifc_write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
fsl_ifc_write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
static int fsl_ifc_chip_init_tail(struct mtd_info *mtd)
@ -916,6 +916,13 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
if (ctrl->version >= FSL_IFC_VERSION_1_1_0)
fsl_ifc_sram_init(priv);
/*
* As IFC version 2.0.0 has 16KB of internal SRAM as compared to older
* versions which had 8KB. Hence bufnum mask needs to be updated.
*/
if (ctrl->version >= FSL_IFC_VERSION_2_0_0)
priv->bufnum_mask = (priv->bufnum_mask * 2) + 1;
return 0;
}


@ -684,8 +684,8 @@ static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *p = buf;
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint8_t *ecc_code = chip->buffers->ecccode;
uint8_t *ecc_calc = chip->ecc.calc_buf;
uint8_t *ecc_code = chip->ecc.code_buf;
int off, len, group = 0;
/*
* ecc_oob is intentionally taken as uint16_t. In 16bit devices, we
@ -697,7 +697,7 @@ static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
unsigned int max_bitflips = 0;
for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
chip->cmdfunc(mtd, NAND_CMD_READ0, s * eccsize, page);
nand_read_page_op(chip, page, s * eccsize, NULL, 0);
chip->ecc.hwctl(mtd, NAND_ECC_READ);
chip->read_buf(mtd, p, eccsize);
@ -720,8 +720,7 @@ static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
if (chip->options & NAND_BUSWIDTH_16)
len = roundup(len, 2);
chip->cmdfunc(mtd, NAND_CMD_READOOB, off, page);
chip->read_buf(mtd, oob + j, len);
nand_read_oob_op(chip, page, off, oob + j, len);
j += len;
}


@ -1029,11 +1029,13 @@ static void block_mark_swapping(struct gpmi_nand_data *this,
p[1] = (p[1] & mask) | (from_oob >> (8 - bit));
}
static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
static int gpmi_ecc_read_page_data(struct nand_chip *chip,
uint8_t *buf, int oob_required,
int page)
{
struct gpmi_nand_data *this = nand_get_controller_data(chip);
struct bch_geometry *nfc_geo = &this->bch_geometry;
struct mtd_info *mtd = nand_to_mtd(chip);
void *payload_virt;
dma_addr_t payload_phys;
void *auxiliary_virt;
@ -1094,8 +1096,8 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
eccbytes = DIV_ROUND_UP(offset + eccbits, 8);
offset /= 8;
eccbytes -= offset;
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, offset, -1);
chip->read_buf(mtd, eccbuf, eccbytes);
nand_change_read_column_op(chip, offset, eccbuf,
eccbytes, false);
/*
* ECC data are not byte aligned and we may have
@ -1176,6 +1178,14 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
return max_bitflips;
}
static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
nand_read_page_op(chip, page, 0, NULL, 0);
return gpmi_ecc_read_page_data(chip, buf, oob_required, page);
}
/* Fake a virtual small page for the subpage read */
static int gpmi_ecc_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
uint32_t offs, uint32_t len, uint8_t *buf, int page)
@ -1220,12 +1230,12 @@ static int gpmi_ecc_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
meta = geo->metadata_size;
if (first) {
col = meta + (size + ecc_parity_size) * first;
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, col, -1);
meta = 0;
buf = buf + first * size;
}
nand_read_page_op(chip, page, col, NULL, 0);
/* Save the old environment */
r1_old = r1_new = readl(bch_regs + HW_BCH_FLASH0LAYOUT0);
r2_old = r2_new = readl(bch_regs + HW_BCH_FLASH0LAYOUT1);
@ -1254,7 +1264,7 @@ static int gpmi_ecc_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
/* Read the subpage now */
this->swap_block_mark = false;
max_bitflips = gpmi_ecc_read_page(mtd, chip, buf, 0, page);
max_bitflips = gpmi_ecc_read_page_data(chip, buf, 0, page);
/* Restore */
writel(r1_old, bch_regs + HW_BCH_FLASH0LAYOUT0);
@ -1277,6 +1287,9 @@ static int gpmi_ecc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
int ret;
dev_dbg(this->dev, "ecc write page.\n");
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
if (this->swap_block_mark) {
/*
* If control arrives here, we're doing block mark swapping.
@ -1338,7 +1351,10 @@ exit_auxiliary:
payload_virt, payload_phys);
}
return 0;
if (ret)
return ret;
return nand_prog_page_end_op(chip);
}
/*
@ -1411,7 +1427,7 @@ static int gpmi_ecc_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
memset(chip->oob_poi, ~0, mtd->oobsize);
/* Read out the conventional OOB. */
chip->cmdfunc(mtd, NAND_CMD_READ0, mtd->writesize, page);
nand_read_page_op(chip, page, mtd->writesize, NULL, 0);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
/*
@ -1421,7 +1437,7 @@ static int gpmi_ecc_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
*/
if (GPMI_IS_MX23(this)) {
/* Read the block mark into the first byte of the OOB buffer. */
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
nand_read_page_op(chip, page, 0, NULL, 0);
chip->oob_poi[0] = chip->read_byte(mtd);
}
@ -1432,7 +1448,6 @@ static int
gpmi_ecc_write_oob(struct mtd_info *mtd, struct nand_chip *chip, int page)
{
struct mtd_oob_region of = { };
int status = 0;
/* Do we have available oob area? */
mtd_ooblayout_free(mtd, 0, &of);
@ -1442,12 +1457,8 @@ gpmi_ecc_write_oob(struct mtd_info *mtd, struct nand_chip *chip, int page)
if (!nand_is_slc(chip))
return -EPERM;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize + of.offset, page);
chip->write_buf(mtd, chip->oob_poi + of.offset, of.length);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return nand_prog_page_op(chip, page, mtd->writesize + of.offset,
chip->oob_poi + of.offset, of.length);
}
/*
@ -1477,8 +1488,8 @@ static int gpmi_ecc_read_page_raw(struct mtd_info *mtd,
uint8_t *oob = chip->oob_poi;
int step;
chip->read_buf(mtd, tmp_buf,
mtd->writesize + mtd->oobsize);
nand_read_page_op(chip, page, 0, tmp_buf,
mtd->writesize + mtd->oobsize);
/*
* If required, swap the bad block marker and the data stored in the
@ -1487,12 +1498,8 @@ static int gpmi_ecc_read_page_raw(struct mtd_info *mtd,
* See the layout description for a detailed explanation on why this
* is needed.
*/
if (this->swap_block_mark) {
u8 swap = tmp_buf[0];
tmp_buf[0] = tmp_buf[mtd->writesize];
tmp_buf[mtd->writesize] = swap;
}
if (this->swap_block_mark)
swap(tmp_buf[0], tmp_buf[mtd->writesize]);
/*
* Copy the metadata section into the oob buffer (this section is
@ -1615,31 +1622,22 @@ static int gpmi_ecc_write_page_raw(struct mtd_info *mtd,
* See the layout description for a detailed explanation on why this
* is needed.
*/
if (this->swap_block_mark) {
u8 swap = tmp_buf[0];
if (this->swap_block_mark)
swap(tmp_buf[0], tmp_buf[mtd->writesize]);
tmp_buf[0] = tmp_buf[mtd->writesize];
tmp_buf[mtd->writesize] = swap;
}
chip->write_buf(mtd, tmp_buf, mtd->writesize + mtd->oobsize);
return 0;
return nand_prog_page_op(chip, page, 0, tmp_buf,
mtd->writesize + mtd->oobsize);
}
static int gpmi_ecc_read_oob_raw(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
return gpmi_ecc_read_page_raw(mtd, chip, NULL, 1, page);
}
static int gpmi_ecc_write_oob_raw(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
return gpmi_ecc_write_page_raw(mtd, chip, NULL, 1, page);
}
@ -1649,7 +1647,7 @@ static int gpmi_block_markbad(struct mtd_info *mtd, loff_t ofs)
struct gpmi_nand_data *this = nand_get_controller_data(chip);
int ret = 0;
uint8_t *block_mark;
int column, page, status, chipnr;
int column, page, chipnr;
chipnr = (int)(ofs >> chip->chip_shift);
chip->select_chip(mtd, chipnr);
@ -1663,13 +1661,7 @@ static int gpmi_block_markbad(struct mtd_info *mtd, loff_t ofs)
/* Shift to get page */
page = (int)(ofs >> chip->page_shift);
chip->cmdfunc(mtd, NAND_CMD_SEQIN, column, page);
chip->write_buf(mtd, block_mark, 1);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
if (status & NAND_STATUS_FAIL)
ret = -EIO;
ret = nand_prog_page_op(chip, page, column, block_mark, 1);
chip->select_chip(mtd, -1);
@ -1712,7 +1704,7 @@ static int mx23_check_transcription_stamp(struct gpmi_nand_data *this)
unsigned int search_area_size_in_strides;
unsigned int stride;
unsigned int page;
uint8_t *buffer = chip->buffers->databuf;
uint8_t *buffer = chip->data_buf;
int saved_chip_number;
int found_an_ncb_fingerprint = false;
@ -1737,7 +1729,7 @@ static int mx23_check_transcription_stamp(struct gpmi_nand_data *this)
* Read the NCB fingerprint. The fingerprint is four bytes long
* and starts in the 12th byte of the page.
*/
chip->cmdfunc(mtd, NAND_CMD_READ0, 12, page);
nand_read_page_op(chip, page, 12, NULL, 0);
chip->read_buf(mtd, buffer, strlen(fingerprint));
/* Look for the fingerprint. */
@ -1771,7 +1763,7 @@ static int mx23_write_transcription_stamp(struct gpmi_nand_data *this)
unsigned int block;
unsigned int stride;
unsigned int page;
uint8_t *buffer = chip->buffers->databuf;
uint8_t *buffer = chip->data_buf;
int saved_chip_number;
int status;
@ -1797,17 +1789,10 @@ static int mx23_write_transcription_stamp(struct gpmi_nand_data *this)
dev_dbg(dev, "Erasing the search area...\n");
for (block = 0; block < search_area_size_in_blocks; block++) {
/* Compute the page address. */
page = block * block_size_in_pages;
/* Erase this block. */
dev_dbg(dev, "\tErasing block 0x%x\n", block);
chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page);
chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1);
/* Wait for the erase to finish. */
status = chip->waitfunc(mtd, chip);
if (status & NAND_STATUS_FAIL)
status = nand_erase_op(chip, block);
if (status)
dev_err(dev, "[%s] Erase failed.\n", __func__);
}
@ -1823,13 +1808,9 @@ static int mx23_write_transcription_stamp(struct gpmi_nand_data *this)
/* Write the first page of the current stride. */
dev_dbg(dev, "Writing an NCB fingerprint in page 0x%x\n", page);
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
chip->ecc.write_page_raw(mtd, chip, buffer, 0, page);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
/* Wait for the write to finish. */
status = chip->waitfunc(mtd, chip);
if (status & NAND_STATUS_FAIL)
status = chip->ecc.write_page_raw(mtd, chip, buffer, 0, page);
if (status)
dev_err(dev, "[%s] Write failed.\n", __func__);
}
@ -1884,7 +1865,7 @@ static int mx23_boot_init(struct gpmi_nand_data *this)
/* Send the command to read the conventional block mark. */
chip->select_chip(mtd, chipnr);
chip->cmdfunc(mtd, NAND_CMD_READ0, mtd->writesize, page);
nand_read_page_op(chip, page, mtd->writesize, NULL, 0);
block_mark = chip->read_byte(mtd);
chip->select_chip(mtd, -1);
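The gpmi changes also convert the erase path: the NAND_CMD_ERASE1/NAND_CMD_ERASE2 pair plus waitfunc() becomes nand_erase_op(), which takes the eraseblock number and folds the status word into a 0-or-negative return value. Roughly (a sketch of the pattern, not the exact gpmi code):

        /* Before: two raw opcodes, then decode the status word by hand. */
        chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page);
        chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1);
        status = chip->waitfunc(mtd, chip);
        if (status & NAND_STATUS_FAIL)
                dev_err(dev, "erase failed\n");

        /* After: one helper, 0 on success or a negative error code. */
        status = nand_erase_op(chip, block);
        if (status)
                dev_err(dev, "erase failed\n");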


@ -268,31 +268,31 @@ struct timing_threshold {
};
/* Common Services */
extern int common_nfc_set_geometry(struct gpmi_nand_data *);
extern struct dma_chan *get_dma_chan(struct gpmi_nand_data *);
extern void prepare_data_dma(struct gpmi_nand_data *,
enum dma_data_direction dr);
extern int start_dma_without_bch_irq(struct gpmi_nand_data *,
struct dma_async_tx_descriptor *);
extern int start_dma_with_bch_irq(struct gpmi_nand_data *,
struct dma_async_tx_descriptor *);
int common_nfc_set_geometry(struct gpmi_nand_data *);
struct dma_chan *get_dma_chan(struct gpmi_nand_data *);
void prepare_data_dma(struct gpmi_nand_data *,
enum dma_data_direction dr);
int start_dma_without_bch_irq(struct gpmi_nand_data *,
struct dma_async_tx_descriptor *);
int start_dma_with_bch_irq(struct gpmi_nand_data *,
struct dma_async_tx_descriptor *);
/* GPMI-NAND helper function library */
extern int gpmi_init(struct gpmi_nand_data *);
extern int gpmi_extra_init(struct gpmi_nand_data *);
extern void gpmi_clear_bch(struct gpmi_nand_data *);
extern void gpmi_dump_info(struct gpmi_nand_data *);
extern int bch_set_geometry(struct gpmi_nand_data *);
extern int gpmi_is_ready(struct gpmi_nand_data *, unsigned chip);
extern int gpmi_send_command(struct gpmi_nand_data *);
extern void gpmi_begin(struct gpmi_nand_data *);
extern void gpmi_end(struct gpmi_nand_data *);
extern int gpmi_read_data(struct gpmi_nand_data *);
extern int gpmi_send_data(struct gpmi_nand_data *);
extern int gpmi_send_page(struct gpmi_nand_data *,
dma_addr_t payload, dma_addr_t auxiliary);
extern int gpmi_read_page(struct gpmi_nand_data *,
dma_addr_t payload, dma_addr_t auxiliary);
int gpmi_init(struct gpmi_nand_data *);
int gpmi_extra_init(struct gpmi_nand_data *);
void gpmi_clear_bch(struct gpmi_nand_data *);
void gpmi_dump_info(struct gpmi_nand_data *);
int bch_set_geometry(struct gpmi_nand_data *);
int gpmi_is_ready(struct gpmi_nand_data *, unsigned chip);
int gpmi_send_command(struct gpmi_nand_data *);
void gpmi_begin(struct gpmi_nand_data *);
void gpmi_end(struct gpmi_nand_data *);
int gpmi_read_data(struct gpmi_nand_data *);
int gpmi_send_data(struct gpmi_nand_data *);
int gpmi_send_page(struct gpmi_nand_data *,
dma_addr_t payload, dma_addr_t auxiliary);
int gpmi_read_page(struct gpmi_nand_data *,
dma_addr_t payload, dma_addr_t auxiliary);
void gpmi_copy_bits(u8 *dst, size_t dst_bit_off,
const u8 *src, size_t src_bit_off,


@ -544,7 +544,7 @@ static int hisi_nand_read_page_hwecc(struct mtd_info *mtd,
int max_bitflips = 0, stat = 0, stat_max = 0, status_ecc;
int stat_1, stat_2;
chip->read_buf(mtd, buf, mtd->writesize);
nand_read_page_op(chip, page, 0, buf, mtd->writesize);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
/* errors which can not be corrected by ECC */
@ -574,8 +574,7 @@ static int hisi_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
{
struct hinfc_host *host = nand_get_controller_data(chip);
chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
if (host->irq_status & HINFC504_INTS_UE) {
host->irq_status = 0;
@ -590,11 +589,11 @@ static int hisi_nand_write_page_hwecc(struct mtd_info *mtd,
struct nand_chip *chip, const uint8_t *buf, int oob_required,
int page)
{
chip->write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
if (oob_required)
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
static void hisi_nfc_host_init(struct hinfc_host *host)


@ -313,6 +313,7 @@ static int jz_nand_detect_bank(struct platform_device *pdev,
uint32_t ctrl;
struct nand_chip *chip = &nand->chip;
struct mtd_info *mtd = nand_to_mtd(chip);
u8 id[2];
/* Request I/O resource. */
sprintf(res_name, "bank%d", bank);
@ -335,17 +336,16 @@ static int jz_nand_detect_bank(struct platform_device *pdev,
/* Retrieve the IDs from the first chip. */
chip->select_chip(mtd, 0);
chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1);
*nand_maf_id = chip->read_byte(mtd);
*nand_dev_id = chip->read_byte(mtd);
nand_reset_op(chip);
nand_readid_op(chip, 0, id, sizeof(id));
*nand_maf_id = id[0];
*nand_dev_id = id[1];
} else {
/* Detect additional chip. */
chip->select_chip(mtd, chipnr);
chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1);
if (*nand_maf_id != chip->read_byte(mtd)
|| *nand_dev_id != chip->read_byte(mtd)) {
nand_reset_op(chip);
nand_readid_op(chip, 0, id, sizeof(id));
if (*nand_maf_id != id[0] || *nand_dev_id != id[1]) {
ret = -ENODEV;
goto notfound_id;
}
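The ID-detection rework above is the generic READID pattern: instead of issuing the command and pulling bytes one at a time through ->read_byte(), nand_readid_op() fills a caller-provided buffer. Sketch (error handling omitted, as in the original detect path):

        u8 id[2];

        /* Before: raw READID at address 0x00, then per-byte reads. */
        chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1);
        maf_id = chip->read_byte(mtd);
        dev_id = chip->read_byte(mtd);

        /* After: one helper call, bytes returned in id[]. */
        nand_readid_op(chip, 0, id, sizeof(id));
        maf_id = id[0];
        dev_id = id[1];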


@ -461,7 +461,7 @@ static int lpc32xx_read_page(struct mtd_info *mtd, struct nand_chip *chip,
}
/* Writing Command and Address */
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
nand_read_page_op(chip, page, 0, NULL, 0);
/* For all sub-pages */
for (i = 0; i < host->mlcsubpages; i++) {
@ -522,6 +522,8 @@ static int lpc32xx_write_page_lowlevel(struct mtd_info *mtd,
memcpy(dma_buf, buf, mtd->writesize);
}
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
for (i = 0; i < host->mlcsubpages; i++) {
/* Start Encode */
writeb(0x00, MLC_ECC_ENC_REG(host->io_base));
@ -550,7 +552,8 @@ static int lpc32xx_write_page_lowlevel(struct mtd_info *mtd,
/* Wait for Controller Ready */
lpc32xx_waitfunc_controller(mtd, chip);
}
return 0;
return nand_prog_page_end_op(chip);
}
static int lpc32xx_read_oob(struct mtd_info *mtd, struct nand_chip *chip,


@ -399,10 +399,7 @@ static void lpc32xx_nand_write_buf(struct mtd_info *mtd, const uint8_t *buf, int
static int lpc32xx_nand_read_oob_syndrome(struct mtd_info *mtd,
struct nand_chip *chip, int page)
{
chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
}
/*
@ -411,17 +408,8 @@ static int lpc32xx_nand_read_oob_syndrome(struct mtd_info *mtd,
static int lpc32xx_nand_write_oob_syndrome(struct mtd_info *mtd,
struct nand_chip *chip, int page)
{
int status;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, page);
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
/* Send command to program the OOB data */
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return nand_prog_page_op(chip, page, mtd->writesize, chip->oob_poi,
mtd->oobsize);
}
/*
@ -632,7 +620,7 @@ static int lpc32xx_nand_read_page_syndrome(struct mtd_info *mtd,
uint8_t *oobecc, tmpecc[LPC32XX_ECC_SAVE_SIZE];
/* Issue read command */
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
nand_read_page_op(chip, page, 0, NULL, 0);
/* Read data and oob, calculate ECC */
status = lpc32xx_xfer(mtd, buf, chip->ecc.steps, 1);
@ -675,7 +663,7 @@ static int lpc32xx_nand_read_page_raw_syndrome(struct mtd_info *mtd,
int page)
{
/* Issue read command */
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
nand_read_page_op(chip, page, 0, NULL, 0);
/* Raw reads can just use the FIFO interface */
chip->read_buf(mtd, buf, chip->ecc.size * chip->ecc.steps);
@ -698,6 +686,8 @@ static int lpc32xx_nand_write_page_syndrome(struct mtd_info *mtd,
uint8_t *pb;
int error;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
/* Write data, calculate ECC on outbound data */
error = lpc32xx_xfer(mtd, (uint8_t *)buf, chip->ecc.steps, 0);
if (error)
@ -716,7 +706,8 @@ static int lpc32xx_nand_write_page_syndrome(struct mtd_info *mtd,
/* Write ECC data to device */
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
/*
@ -729,9 +720,11 @@ static int lpc32xx_nand_write_page_raw_syndrome(struct mtd_info *mtd,
int oob_required, int page)
{
/* Raw writes can just use the FIFO interface */
chip->write_buf(mtd, buf, chip->ecc.size * chip->ecc.steps);
nand_prog_page_begin_op(chip, page, 0, buf,
chip->ecc.size * chip->ecc.steps);
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
static int lpc32xx_nand_dma_setup(struct lpc32xx_nand_host *host)

File diff suppressed because it is too large.


@ -34,34 +34,28 @@
#define ECC_ENCCON (0x00)
#define ECC_ENCCNFG (0x04)
#define ECC_MODE_SHIFT (5)
#define ECC_MS_SHIFT (16)
#define ECC_ENCDIADDR (0x08)
#define ECC_ENCIDLE (0x0C)
#define ECC_ENCIRQ_EN (0x80)
#define ECC_ENCIRQ_STA (0x84)
#define ECC_DECCON (0x100)
#define ECC_DECCNFG (0x104)
#define DEC_EMPTY_EN BIT(31)
#define DEC_CNFG_CORRECT (0x3 << 12)
#define ECC_DECIDLE (0x10C)
#define ECC_DECENUM0 (0x114)
#define ECC_DECDONE (0x124)
#define ECC_DECIRQ_EN (0x200)
#define ECC_DECIRQ_STA (0x204)
#define ECC_TIMEOUT (500000)
#define ECC_IDLE_REG(op) ((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE)
#define ECC_CTL_REG(op) ((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON)
#define ECC_IRQ_REG(op) ((op) == ECC_ENCODE ? \
ECC_ENCIRQ_EN : ECC_DECIRQ_EN)
struct mtk_ecc_caps {
u32 err_mask;
const u8 *ecc_strength;
const u32 *ecc_regs;
u8 num_ecc_strength;
u32 encode_parity_reg0;
u8 ecc_mode_shift;
u32 parity_bits;
int pg_irq_sel;
};
@ -89,6 +83,46 @@ static const u8 ecc_strength_mt2712[] = {
40, 44, 48, 52, 56, 60, 68, 72, 80
};
static const u8 ecc_strength_mt7622[] = {
4, 6, 8, 10, 12, 14, 16
};
enum mtk_ecc_regs {
ECC_ENCPAR00,
ECC_ENCIRQ_EN,
ECC_ENCIRQ_STA,
ECC_DECDONE,
ECC_DECIRQ_EN,
ECC_DECIRQ_STA,
};
static int mt2701_ecc_regs[] = {
[ECC_ENCPAR00] = 0x10,
[ECC_ENCIRQ_EN] = 0x80,
[ECC_ENCIRQ_STA] = 0x84,
[ECC_DECDONE] = 0x124,
[ECC_DECIRQ_EN] = 0x200,
[ECC_DECIRQ_STA] = 0x204,
};
static int mt2712_ecc_regs[] = {
[ECC_ENCPAR00] = 0x300,
[ECC_ENCIRQ_EN] = 0x80,
[ECC_ENCIRQ_STA] = 0x84,
[ECC_DECDONE] = 0x124,
[ECC_DECIRQ_EN] = 0x200,
[ECC_DECIRQ_STA] = 0x204,
};
static int mt7622_ecc_regs[] = {
[ECC_ENCPAR00] = 0x10,
[ECC_ENCIRQ_EN] = 0x30,
[ECC_ENCIRQ_STA] = 0x34,
[ECC_DECDONE] = 0x11c,
[ECC_DECIRQ_EN] = 0x140,
[ECC_DECIRQ_STA] = 0x144,
};
static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc,
enum mtk_ecc_operation op)
{
@ -107,32 +141,30 @@ static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc,
static irqreturn_t mtk_ecc_irq(int irq, void *id)
{
struct mtk_ecc *ecc = id;
enum mtk_ecc_operation op;
u32 dec, enc;
dec = readw(ecc->regs + ECC_DECIRQ_STA) & ECC_IRQ_EN;
dec = readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECIRQ_STA])
& ECC_IRQ_EN;
if (dec) {
op = ECC_DECODE;
dec = readw(ecc->regs + ECC_DECDONE);
dec = readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECDONE]);
if (dec & ecc->sectors) {
/*
* Clear decode IRQ status once again to ensure that
* there will be no extra IRQ.
*/
readw(ecc->regs + ECC_DECIRQ_STA);
readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECIRQ_STA]);
ecc->sectors = 0;
complete(&ecc->done);
} else {
return IRQ_HANDLED;
}
} else {
enc = readl(ecc->regs + ECC_ENCIRQ_STA) & ECC_IRQ_EN;
if (enc) {
op = ECC_ENCODE;
enc = readl(ecc->regs + ecc->caps->ecc_regs[ECC_ENCIRQ_STA])
& ECC_IRQ_EN;
if (enc)
complete(&ecc->done);
} else {
else
return IRQ_NONE;
}
}
return IRQ_HANDLED;
@ -160,7 +192,7 @@ static int mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
/* configure ECC encoder (in bits) */
enc_sz = config->len << 3;
reg = ecc_bit | (config->mode << ECC_MODE_SHIFT);
reg = ecc_bit | (config->mode << ecc->caps->ecc_mode_shift);
reg |= (enc_sz << ECC_MS_SHIFT);
writel(reg, ecc->regs + ECC_ENCCNFG);
@ -171,9 +203,9 @@ static int mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
} else {
/* configure ECC decoder (in bits) */
dec_sz = (config->len << 3) +
config->strength * ECC_PARITY_BITS;
config->strength * ecc->caps->parity_bits;
reg = ecc_bit | (config->mode << ECC_MODE_SHIFT);
reg = ecc_bit | (config->mode << ecc->caps->ecc_mode_shift);
reg |= (dec_sz << ECC_MS_SHIFT) | DEC_CNFG_CORRECT;
reg |= DEC_EMPTY_EN;
writel(reg, ecc->regs + ECC_DECCNFG);
@ -291,7 +323,12 @@ int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
*/
if (ecc->caps->pg_irq_sel && config->mode == ECC_NFI_MODE)
reg_val |= ECC_PG_IRQ_SEL;
writew(reg_val, ecc->regs + ECC_IRQ_REG(op));
if (op == ECC_ENCODE)
writew(reg_val, ecc->regs +
ecc->caps->ecc_regs[ECC_ENCIRQ_EN]);
else
writew(reg_val, ecc->regs +
ecc->caps->ecc_regs[ECC_DECIRQ_EN]);
}
writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));
@ -310,13 +347,17 @@ void mtk_ecc_disable(struct mtk_ecc *ecc)
/* disable it */
mtk_ecc_wait_idle(ecc, op);
if (op == ECC_DECODE)
if (op == ECC_DECODE) {
/*
* Clear decode IRQ status in case there is a timeout to wait
* decode IRQ.
*/
readw(ecc->regs + ECC_DECIRQ_STA);
writew(0, ecc->regs + ECC_IRQ_REG(op));
readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECDONE]);
writew(0, ecc->regs + ecc->caps->ecc_regs[ECC_DECIRQ_EN]);
} else {
writew(0, ecc->regs + ecc->caps->ecc_regs[ECC_ENCIRQ_EN]);
}
writew(ECC_OP_DISABLE, ecc->regs + ECC_CTL_REG(op));
mutex_unlock(&ecc->lock);
@ -367,11 +408,11 @@ int mtk_ecc_encode(struct mtk_ecc *ecc, struct mtk_ecc_config *config,
mtk_ecc_wait_idle(ecc, ECC_ENCODE);
/* Program ECC bytes to OOB: per sector oob = FDM + ECC + SPARE */
len = (config->strength * ECC_PARITY_BITS + 7) >> 3;
len = (config->strength * ecc->caps->parity_bits + 7) >> 3;
/* write the parity bytes generated by the ECC back to temp buffer */
__ioread32_copy(ecc->eccdata,
ecc->regs + ecc->caps->encode_parity_reg0,
ecc->regs + ecc->caps->ecc_regs[ECC_ENCPAR00],
round_up(len, 4));
/* copy into possibly unaligned OOB region with actual length */
@ -404,22 +445,42 @@ void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p)
}
EXPORT_SYMBOL(mtk_ecc_adjust_strength);
unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc)
{
return ecc->caps->parity_bits;
}
EXPORT_SYMBOL(mtk_ecc_get_parity_bits);
static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
.err_mask = 0x3f,
.ecc_strength = ecc_strength_mt2701,
.ecc_regs = mt2701_ecc_regs,
.num_ecc_strength = 20,
.encode_parity_reg0 = 0x10,
.ecc_mode_shift = 5,
.parity_bits = 14,
.pg_irq_sel = 0,
};
static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
.err_mask = 0x7f,
.ecc_strength = ecc_strength_mt2712,
.ecc_regs = mt2712_ecc_regs,
.num_ecc_strength = 23,
.encode_parity_reg0 = 0x300,
.ecc_mode_shift = 5,
.parity_bits = 14,
.pg_irq_sel = 1,
};
static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = {
.err_mask = 0x3f,
.ecc_strength = ecc_strength_mt7622,
.ecc_regs = mt7622_ecc_regs,
.num_ecc_strength = 7,
.ecc_mode_shift = 4,
.parity_bits = 13,
.pg_irq_sel = 0,
};
static const struct of_device_id mtk_ecc_dt_match[] = {
{
.compatible = "mediatek,mt2701-ecc",
@ -427,6 +488,9 @@ static const struct of_device_id mtk_ecc_dt_match[] = {
}, {
.compatible = "mediatek,mt2712-ecc",
.data = &mtk_ecc_caps_mt2712,
}, {
.compatible = "mediatek,mt7622-ecc",
.data = &mtk_ecc_caps_mt7622,
},
{},
};
@ -452,7 +516,7 @@ static int mtk_ecc_probe(struct platform_device *pdev)
max_eccdata_size = ecc->caps->num_ecc_strength - 1;
max_eccdata_size = ecc->caps->ecc_strength[max_eccdata_size];
max_eccdata_size = (max_eccdata_size * ECC_PARITY_BITS + 7) >> 3;
max_eccdata_size = (max_eccdata_size * ecc->caps->parity_bits + 7) >> 3;
max_eccdata_size = round_up(max_eccdata_size, 4);
ecc->eccdata = devm_kzalloc(dev, max_eccdata_size, GFP_KERNEL);
if (!ecc->eccdata)


@ -14,8 +14,6 @@
#include <linux/types.h>
#define ECC_PARITY_BITS (14)
enum mtk_ecc_mode {ECC_DMA_MODE = 0, ECC_NFI_MODE = 1};
enum mtk_ecc_operation {ECC_ENCODE, ECC_DECODE};
@ -43,6 +41,7 @@ int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
int mtk_ecc_enable(struct mtk_ecc *, struct mtk_ecc_config *);
void mtk_ecc_disable(struct mtk_ecc *);
void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p);
unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc);
struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
void mtk_ecc_release(struct mtk_ecc *);


@ -97,7 +97,6 @@
#define MTK_TIMEOUT (500000)
#define MTK_RESET_TIMEOUT (1000000)
#define MTK_MAX_SECTOR (16)
#define MTK_NAND_MAX_NSELS (2)
#define MTK_NFC_MIN_SPARE (16)
#define ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt) \
@ -109,6 +108,8 @@ struct mtk_nfc_caps {
u8 num_spare_size;
u8 pageformat_spare_shift;
u8 nfi_clk_div;
u8 max_sector;
u32 max_sector_size;
};
struct mtk_nfc_bad_mark_ctl {
@ -173,6 +174,10 @@ static const u8 spare_size_mt2712[] = {
74
};
static const u8 spare_size_mt7622[] = {
16, 26, 27, 28
};
static inline struct mtk_nfc_nand_chip *to_mtk_nand(struct nand_chip *nand)
{
return container_of(nand, struct mtk_nfc_nand_chip, nand);
@ -450,7 +455,7 @@ static inline u8 mtk_nfc_read_byte(struct mtd_info *mtd)
* set to max sector to allow the HW to continue reading over
* unaligned accesses
*/
reg = (MTK_MAX_SECTOR << CON_SEC_SHIFT) | CON_BRD;
reg = (nfc->caps->max_sector << CON_SEC_SHIFT) | CON_BRD;
nfi_writel(nfc, reg, NFI_CON);
/* trigger to fetch data */
@ -481,7 +486,7 @@ static void mtk_nfc_write_byte(struct mtd_info *mtd, u8 byte)
reg = nfi_readw(nfc, NFI_CNFG) | CNFG_BYTE_RW;
nfi_writew(nfc, reg, NFI_CNFG);
reg = MTK_MAX_SECTOR << CON_SEC_SHIFT | CON_BWR;
reg = nfc->caps->max_sector << CON_SEC_SHIFT | CON_BWR;
nfi_writel(nfc, reg, NFI_CON);
nfi_writew(nfc, STAR_EN, NFI_STRDATA);
@ -761,6 +766,8 @@ static int mtk_nfc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
u32 reg;
int ret;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
if (!raw) {
/* OOB => FDM: from register, ECC: from HW */
reg = nfi_readw(nfc, NFI_CNFG) | CNFG_AUTO_FMT_EN;
@ -794,7 +801,10 @@ static int mtk_nfc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
if (!raw)
mtk_ecc_disable(nfc->ecc);
return ret;
if (ret)
return ret;
return nand_prog_page_end_op(chip);
}
static int mtk_nfc_write_page_hwecc(struct mtd_info *mtd,
@ -832,18 +842,7 @@ static int mtk_nfc_write_subpage_hwecc(struct mtd_info *mtd,
static int mtk_nfc_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
int ret;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
ret = mtk_nfc_write_page_raw(mtd, chip, NULL, 1, page);
if (ret < 0)
return -EIO;
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
ret = chip->waitfunc(mtd, chip);
return ret & NAND_STATUS_FAIL ? -EIO : 0;
return mtk_nfc_write_page_raw(mtd, chip, NULL, 1, page);
}
static int mtk_nfc_update_ecc_stats(struct mtd_info *mtd, u8 *buf, u32 sectors)
@ -892,8 +891,7 @@ static int mtk_nfc_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
len = sectors * chip->ecc.size + (raw ? sectors * spare : 0);
buf = bufpoi + start * chip->ecc.size;
if (column != 0)
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, column, -1);
nand_read_page_op(chip, page, column, NULL, 0);
addr = dma_map_single(nfc->dev, buf, len, DMA_FROM_DEVICE);
rc = dma_mapping_error(nfc->dev, addr);
@ -1016,8 +1014,6 @@ static int mtk_nfc_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
static int mtk_nfc_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
return mtk_nfc_read_page_raw(mtd, chip, NULL, 1, page);
}
@ -1126,9 +1122,11 @@ static void mtk_nfc_set_fdm(struct mtk_nfc_fdm *fdm, struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *chip = to_mtk_nand(nand);
struct mtk_nfc *nfc = nand_get_controller_data(nand);
u32 ecc_bytes;
ecc_bytes = DIV_ROUND_UP(nand->ecc.strength * ECC_PARITY_BITS, 8);
ecc_bytes = DIV_ROUND_UP(nand->ecc.strength *
mtk_ecc_get_parity_bits(nfc->ecc), 8);
fdm->reg_size = chip->spare_per_sector - ecc_bytes;
if (fdm->reg_size > NFI_FDM_MAX_SIZE)
@ -1208,7 +1206,8 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
* this controller only supports 512 and 1024 sizes
*/
if (nand->ecc.size < 1024) {
if (mtd->writesize > 512) {
if (mtd->writesize > 512 &&
nfc->caps->max_sector_size > 512) {
nand->ecc.size = 1024;
nand->ecc.strength <<= 1;
} else {
@ -1223,7 +1222,8 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
return ret;
/* calculate oob bytes except ecc parity data */
free = ((nand->ecc.strength * ECC_PARITY_BITS) + 7) >> 3;
free = (nand->ecc.strength * mtk_ecc_get_parity_bits(nfc->ecc)
+ 7) >> 3;
free = spare - free;
/*
@ -1233,10 +1233,12 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
*/
if (free > NFI_FDM_MAX_SIZE) {
spare -= NFI_FDM_MAX_SIZE;
nand->ecc.strength = (spare << 3) / ECC_PARITY_BITS;
nand->ecc.strength = (spare << 3) /
mtk_ecc_get_parity_bits(nfc->ecc);
} else if (free < 0) {
spare -= NFI_FDM_MIN_SIZE;
nand->ecc.strength = (spare << 3) / ECC_PARITY_BITS;
nand->ecc.strength = (spare << 3) /
mtk_ecc_get_parity_bits(nfc->ecc);
}
}
@ -1389,6 +1391,8 @@ static const struct mtk_nfc_caps mtk_nfc_caps_mt2701 = {
.num_spare_size = 16,
.pageformat_spare_shift = 4,
.nfi_clk_div = 1,
.max_sector = 16,
.max_sector_size = 1024,
};
static const struct mtk_nfc_caps mtk_nfc_caps_mt2712 = {
@ -1396,6 +1400,17 @@ static const struct mtk_nfc_caps mtk_nfc_caps_mt2712 = {
.num_spare_size = 19,
.pageformat_spare_shift = 16,
.nfi_clk_div = 2,
.max_sector = 16,
.max_sector_size = 1024,
};
static const struct mtk_nfc_caps mtk_nfc_caps_mt7622 = {
.spare_size = spare_size_mt7622,
.num_spare_size = 4,
.pageformat_spare_shift = 4,
.nfi_clk_div = 1,
.max_sector = 8,
.max_sector_size = 512,
};
static const struct of_device_id mtk_nfc_id_table[] = {
@ -1405,6 +1420,9 @@ static const struct of_device_id mtk_nfc_id_table[] = {
}, {
.compatible = "mediatek,mt2712-nfc",
.data = &mtk_nfc_caps_mt2712,
}, {
.compatible = "mediatek,mt7622-nfc",
.data = &mtk_nfc_caps_mt7622,
},
{}
};
@ -1540,7 +1558,6 @@ static int mtk_nfc_resume(struct device *dev)
struct mtk_nfc *nfc = dev_get_drvdata(dev);
struct mtk_nfc_nand_chip *chip;
struct nand_chip *nand;
struct mtd_info *mtd;
int ret;
u32 i;
@ -1553,11 +1570,8 @@ static int mtk_nfc_resume(struct device *dev)
/* reset NAND chip if VCC was powered off */
list_for_each_entry(chip, &nfc->chips, node) {
nand = &chip->nand;
mtd = nand_to_mtd(nand);
for (i = 0; i < chip->nsels; i++) {
nand->select_chip(mtd, i);
nand->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
}
for (i = 0; i < chip->nsels; i++)
nand_reset(nand, i);
}
return 0;

File diff suppressed because it is too large.


@ -898,7 +898,7 @@ static inline int nand_memory_bbt(struct mtd_info *mtd, struct nand_bbt_descr *b
{
struct nand_chip *this = mtd_to_nand(mtd);
return create_bbt(mtd, this->buffers->databuf, bd, -1);
return create_bbt(mtd, this->data_buf, bd, -1);
}
/**


@ -66,16 +66,44 @@ struct hynix_read_retry_otp {
};
static bool hynix_nand_has_valid_jedecid(struct nand_chip *chip)
{
u8 jedecid[5] = { };
int ret;
ret = nand_readid_op(chip, 0x40, jedecid, sizeof(jedecid));
if (ret)
return false;
return !strncmp("JEDEC", jedecid, sizeof(jedecid));
}
static int hynix_nand_cmd_op(struct nand_chip *chip, u8 cmd)
{
struct mtd_info *mtd = nand_to_mtd(chip);
u8 jedecid[6] = { };
int i = 0;
chip->cmdfunc(mtd, NAND_CMD_READID, 0x40, -1);
for (i = 0; i < 5; i++)
jedecid[i] = chip->read_byte(mtd);
if (chip->exec_op) {
struct nand_op_instr instrs[] = {
NAND_OP_CMD(cmd, 0),
};
struct nand_operation op = NAND_OPERATION(instrs);
return !strcmp("JEDEC", jedecid);
return nand_exec_op(chip, &op);
}
chip->cmdfunc(mtd, cmd, -1, -1);
return 0;
}
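hynix_nand_cmd_op() above is the canonical shape for vendor code that must run both on ->exec_op() controllers and on legacy ->cmdfunc() drivers: build a small instruction array, wrap it in a nand_operation, and fall back to cmdfunc() otherwise. A slightly larger sketch, assuming the NAND_OP_ADDR() and NAND_OP_DATA_IN() instruction macros from the same ->exec_op() infrastructure (the helper name is made up for illustration; in practice nand_readid_op() already covers this case):

        static int example_read_bytes_op(struct nand_chip *chip, u8 addr,
                                         u8 *buf, unsigned int len)
        {
                if (chip->exec_op) {
                        struct nand_op_instr instrs[] = {
                                NAND_OP_CMD(NAND_CMD_READID, 0),
                                NAND_OP_ADDR(1, &addr, 0),
                                NAND_OP_DATA_IN(len, buf, 0),
                        };
                        struct nand_operation op = NAND_OPERATION(instrs);

                        return nand_exec_op(chip, &op);
                }

                /* Legacy path: fall back to the jumbo ->cmdfunc() hook. */
                chip->cmdfunc(nand_to_mtd(chip), NAND_CMD_READID, addr, -1);
                while (len--)
                        *buf++ = chip->read_byte(nand_to_mtd(chip));

                return 0;
        }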
static int hynix_nand_reg_write_op(struct nand_chip *chip, u8 addr, u8 val)
{
struct mtd_info *mtd = nand_to_mtd(chip);
u16 column = ((u16)addr << 8) | addr;
chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1);
chip->write_byte(mtd, val);
return 0;
}
static int hynix_nand_setup_read_retry(struct mtd_info *mtd, int retry_mode)
@ -83,14 +111,15 @@ static int hynix_nand_setup_read_retry(struct mtd_info *mtd, int retry_mode)
struct nand_chip *chip = mtd_to_nand(mtd);
struct hynix_nand *hynix = nand_get_manufacturer_data(chip);
const u8 *values;
int status;
int i;
int i, ret;
values = hynix->read_retry->values +
(retry_mode * hynix->read_retry->nregs);
/* Enter 'Set Hynix Parameters' mode */
chip->cmdfunc(mtd, NAND_HYNIX_CMD_SET_PARAMS, -1, -1);
ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_SET_PARAMS);
if (ret)
return ret;
/*
* Configure the NAND in the requested read-retry mode.
@ -102,21 +131,14 @@ static int hynix_nand_setup_read_retry(struct mtd_info *mtd, int retry_mode)
* probably tweaked at production in this case).
*/
for (i = 0; i < hynix->read_retry->nregs; i++) {
int column = hynix->read_retry->regs[i];
column |= column << 8;
chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1);
chip->write_byte(mtd, values[i]);
ret = hynix_nand_reg_write_op(chip, hynix->read_retry->regs[i],
values[i]);
if (ret)
return ret;
}
/* Apply the new settings. */
chip->cmdfunc(mtd, NAND_HYNIX_CMD_APPLY_PARAMS, -1, -1);
status = chip->waitfunc(mtd, chip);
if (status & NAND_STATUS_FAIL)
return -EIO;
return 0;
return hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_APPLY_PARAMS);
}
/**
@ -172,40 +194,63 @@ static int hynix_read_rr_otp(struct nand_chip *chip,
const struct hynix_read_retry_otp *info,
void *buf)
{
struct mtd_info *mtd = nand_to_mtd(chip);
int i;
int i, ret;
chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
ret = nand_reset_op(chip);
if (ret)
return ret;
chip->cmdfunc(mtd, NAND_HYNIX_CMD_SET_PARAMS, -1, -1);
ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_SET_PARAMS);
if (ret)
return ret;
for (i = 0; i < info->nregs; i++) {
int column = info->regs[i];
column |= column << 8;
chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1);
chip->write_byte(mtd, info->values[i]);
ret = hynix_nand_reg_write_op(chip, info->regs[i],
info->values[i]);
if (ret)
return ret;
}
chip->cmdfunc(mtd, NAND_HYNIX_CMD_APPLY_PARAMS, -1, -1);
ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_APPLY_PARAMS);
if (ret)
return ret;
/* Sequence to enter OTP mode? */
chip->cmdfunc(mtd, 0x17, -1, -1);
chip->cmdfunc(mtd, 0x04, -1, -1);
chip->cmdfunc(mtd, 0x19, -1, -1);
ret = hynix_nand_cmd_op(chip, 0x17);
if (ret)
return ret;
ret = hynix_nand_cmd_op(chip, 0x4);
if (ret)
return ret;
ret = hynix_nand_cmd_op(chip, 0x19);
if (ret)
return ret;
/* Now read the page */
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x0, info->page);
chip->read_buf(mtd, buf, info->size);
ret = nand_read_page_op(chip, info->page, 0, buf, info->size);
if (ret)
return ret;
/* Put everything back to normal */
chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
chip->cmdfunc(mtd, NAND_HYNIX_CMD_SET_PARAMS, 0x38, -1);
chip->write_byte(mtd, 0x0);
chip->cmdfunc(mtd, NAND_HYNIX_CMD_APPLY_PARAMS, -1, -1);
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x0, -1);
ret = nand_reset_op(chip);
if (ret)
return ret;
return 0;
ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_SET_PARAMS);
if (ret)
return ret;
ret = hynix_nand_reg_write_op(chip, 0x38, 0);
if (ret)
return ret;
ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_APPLY_PARAMS);
if (ret)
return ret;
return nand_read_page_op(chip, 0, 0, NULL, 0);
}
#define NAND_HYNIX_1XNM_RR_COUNT_OFFS 0


@ -117,16 +117,28 @@ micron_nand_read_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required,
int page)
{
int status;
int max_bitflips = 0;
u8 status;
int ret, max_bitflips = 0;
micron_nand_on_die_ecc_setup(chip, true);
ret = micron_nand_on_die_ecc_setup(chip, true);
if (ret)
return ret;
ret = nand_read_page_op(chip, page, 0, NULL, 0);
if (ret)
goto out;
ret = nand_status_op(chip, &status);
if (ret)
goto out;
ret = nand_exit_status_op(chip);
if (ret)
goto out;
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
status = chip->read_byte(mtd);
if (status & NAND_STATUS_FAIL)
mtd->ecc_stats.failed++;
/*
* The internal ECC doesn't tell us the number of bitflips
* that have been corrected, but tells us if it recommends to
@ -137,13 +149,15 @@ micron_nand_read_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
else if (status & NAND_STATUS_WRITE_RECOMMENDED)
max_bitflips = chip->ecc.strength;
chip->cmdfunc(mtd, NAND_CMD_READ0, -1, -1);
nand_read_page_raw(mtd, chip, buf, oob_required, page);
ret = nand_read_data_op(chip, buf, mtd->writesize, false);
if (!ret && oob_required)
ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize,
false);
out:
micron_nand_on_die_ecc_setup(chip, false);
return max_bitflips;
return ret ? ret : max_bitflips;
}
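The read path above also demonstrates the status helpers: nand_status_op() replaces the NAND_CMD_STATUS + read_byte() pair, and nand_exit_status_op() replaces the bare NAND_CMD_READ0 that used to switch the die back to data output before the payload transfer. The generic pattern, sketched (not Micron-specific):

        u8 status;
        int ret;

        /* Before: raw opcodes and a hand-decoded status byte. */
        chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
        status = chip->read_byte(mtd);
        chip->cmdfunc(mtd, NAND_CMD_READ0, -1, -1);

        /* After: helpers that propagate errors properly. */
        ret = nand_status_op(chip, &status);
        if (ret)
                return ret;
        ret = nand_exit_status_op(chip);
        if (ret)
                return ret;
        if (status & NAND_STATUS_FAIL)
                mtd->ecc_stats.failed++;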
static int
@ -151,46 +165,16 @@ micron_nand_write_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required,
int page)
{
int status;
int ret;
micron_nand_on_die_ecc_setup(chip, true);
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
nand_write_page_raw(mtd, chip, buf, oob_required, page);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
ret = micron_nand_on_die_ecc_setup(chip, true);
if (ret)
return ret;
ret = nand_write_page_raw(mtd, chip, buf, oob_required, page);
micron_nand_on_die_ecc_setup(chip, false);
return status & NAND_STATUS_FAIL ? -EIO : 0;
}
static int
micron_nand_read_page_raw_on_die_ecc(struct mtd_info *mtd,
struct nand_chip *chip,
uint8_t *buf, int oob_required,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
nand_read_page_raw(mtd, chip, buf, oob_required, page);
return 0;
}
static int
micron_nand_write_page_raw_on_die_ecc(struct mtd_info *mtd,
struct nand_chip *chip,
const uint8_t *buf, int oob_required,
int page)
{
int status;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
nand_write_page_raw(mtd, chip, buf, oob_required, page);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return ret;
}
enum {
@ -285,17 +269,14 @@ static int micron_nand_init(struct nand_chip *chip)
return -EINVAL;
}
chip->ecc.options = NAND_ECC_CUSTOM_PAGE_ACCESS;
chip->ecc.bytes = 8;
chip->ecc.size = 512;
chip->ecc.strength = 4;
chip->ecc.algo = NAND_ECC_BCH;
chip->ecc.read_page = micron_nand_read_page_on_die_ecc;
chip->ecc.write_page = micron_nand_write_page_on_die_ecc;
chip->ecc.read_page_raw =
micron_nand_read_page_raw_on_die_ecc;
chip->ecc.write_page_raw =
micron_nand_write_page_raw_on_die_ecc;
chip->ecc.read_page_raw = nand_read_page_raw;
chip->ecc.write_page_raw = nand_write_page_raw;
mtd_set_ooblayout(mtd, &micron_nand_on_die_ooblayout_ops);
}


@ -91,6 +91,25 @@ static void samsung_nand_decode_id(struct nand_chip *chip)
}
} else {
nand_decode_ext_id(chip);
if (nand_is_slc(chip)) {
switch (chip->id.data[1]) {
/* K9F4G08U0D-S[I|C]B0(T00) */
case 0xDC:
chip->ecc_step_ds = 512;
chip->ecc_strength_ds = 1;
break;
/* K9F1G08U0E 21nm chips do not support subpage write */
case 0xF1:
if (chip->id.len > 4 &&
(chip->id.data[4] & GENMASK(1, 0)) == 0x1)
chip->options |= NAND_NO_SUBPAGE_WRITE;
break;
default:
break;
}
}
}
}


@ -283,16 +283,16 @@ const struct nand_sdr_timings *onfi_async_timing_mode_to_sdr_timings(int mode)
EXPORT_SYMBOL(onfi_async_timing_mode_to_sdr_timings);
/**
* onfi_init_data_interface - [NAND Interface] Initialize a data interface from
* onfi_fill_data_interface - [NAND Interface] Initialize a data interface from
* given ONFI mode
* @iface: The data interface to be initialized
* @mode: The ONFI timing mode
*/
int onfi_init_data_interface(struct nand_chip *chip,
struct nand_data_interface *iface,
int onfi_fill_data_interface(struct nand_chip *chip,
enum nand_data_interface_type type,
int timing_mode)
{
struct nand_data_interface *iface = &chip->data_interface;
if (type != NAND_SDR_IFACE)
return -EINVAL;
@ -321,15 +321,4 @@ int onfi_init_data_interface(struct nand_chip *chip,
return 0;
}
EXPORT_SYMBOL(onfi_init_data_interface);
/**
* nand_get_default_data_interface - [NAND Interface] Retrieve NAND
* data interface for mode 0. This is used as default timing after
* reset.
*/
const struct nand_data_interface *nand_get_default_data_interface(void)
{
return &onfi_sdr_timings[0];
}
EXPORT_SYMBOL(nand_get_default_data_interface);
EXPORT_SYMBOL(onfi_fill_data_interface);


@ -1530,7 +1530,9 @@ static int omap_write_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required, int page)
{
int ret;
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint8_t *ecc_calc = chip->ecc.calc_buf;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
/* Enable GPMC ecc engine */
chip->ecc.hwctl(mtd, NAND_ECC_WRITE);
@ -1548,7 +1550,8 @@ static int omap_write_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
/* Write ecc vector to OOB area */
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
/**
@ -1568,7 +1571,7 @@ static int omap_write_subpage_bch(struct mtd_info *mtd,
u32 data_len, const u8 *buf,
int oob_required, int page)
{
u8 *ecc_calc = chip->buffers->ecccalc;
u8 *ecc_calc = chip->ecc.calc_buf;
int ecc_size = chip->ecc.size;
int ecc_bytes = chip->ecc.bytes;
int ecc_steps = chip->ecc.steps;
@ -1582,6 +1585,7 @@ static int omap_write_subpage_bch(struct mtd_info *mtd,
* ECC is calculated for all subpages but we choose
* only what we want.
*/
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
/* Enable GPMC ECC engine */
chip->ecc.hwctl(mtd, NAND_ECC_WRITE);
@ -1605,7 +1609,7 @@ static int omap_write_subpage_bch(struct mtd_info *mtd,
/* copy calculated ECC for whole page to chip->buffer->oob */
/* this include masked-value(0xFF) for unwritten subpages */
ecc_calc = chip->buffers->ecccalc;
ecc_calc = chip->ecc.calc_buf;
ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0,
chip->ecc.total);
if (ret)
@ -1614,7 +1618,7 @@ static int omap_write_subpage_bch(struct mtd_info *mtd,
/* write OOB buffer to NAND device */
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
/**
@ -1635,11 +1639,13 @@ static int omap_write_subpage_bch(struct mtd_info *mtd,
static int omap_read_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint8_t *ecc_code = chip->buffers->ecccode;
uint8_t *ecc_calc = chip->ecc.calc_buf;
uint8_t *ecc_code = chip->ecc.code_buf;
int stat, ret;
unsigned int max_bitflips = 0;
nand_read_page_op(chip, page, 0, NULL, 0);
/* Enable GPMC ecc engine */
chip->ecc.hwctl(mtd, NAND_ECC_READ);
@ -1647,10 +1653,10 @@ static int omap_read_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
chip->read_buf(mtd, buf, mtd->writesize);
/* Read oob bytes */
chip->cmdfunc(mtd, NAND_CMD_RNDOUT,
mtd->writesize + BADBLOCK_MARKER_LENGTH, -1);
chip->read_buf(mtd, chip->oob_poi + BADBLOCK_MARKER_LENGTH,
chip->ecc.total);
nand_change_read_column_op(chip,
mtd->writesize + BADBLOCK_MARKER_LENGTH,
chip->oob_poi + BADBLOCK_MARKER_LENGTH,
chip->ecc.total, false);
/* Calculate ecc bytes */
omap_calculate_ecc_bch_multi(mtd, buf, ecc_calc);


@ -520,15 +520,13 @@ static int pxa3xx_nand_init_timings_compat(struct pxa3xx_nand_host *host,
struct nand_chip *chip = &host->chip;
struct pxa3xx_nand_info *info = host->info_data;
const struct pxa3xx_nand_flash *f = NULL;
struct mtd_info *mtd = nand_to_mtd(&host->chip);
int i, id, ntypes;
u8 idbuf[2];
ntypes = ARRAY_SIZE(builtin_flash_types);
chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1);
id = chip->read_byte(mtd);
id |= chip->read_byte(mtd) << 0x8;
nand_readid_op(chip, 0, idbuf, sizeof(idbuf));
id = idbuf[0] | (idbuf[1] << 8);
for (i = 0; i < ntypes; i++) {
f = &builtin_flash_types[i];
@ -1351,10 +1349,10 @@ static int pxa3xx_nand_write_page_hwecc(struct mtd_info *mtd,
struct nand_chip *chip, const uint8_t *buf, int oob_required,
int page)
{
chip->write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd,
@ -1364,7 +1362,7 @@ static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd,
struct pxa3xx_nand_host *host = nand_get_controller_data(chip);
struct pxa3xx_nand_info *info = host->info_data;
chip->read_buf(mtd, buf, mtd->writesize);
nand_read_page_op(chip, page, 0, buf, mtd->writesize);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
if (info->retcode == ERR_CORERR && info->use_ecc) {


@ -1725,6 +1725,7 @@ static int qcom_nandc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
u8 *data_buf, *oob_buf = NULL;
int ret;
nand_read_page_op(chip, page, 0, NULL, 0);
data_buf = buf;
oob_buf = oob_required ? chip->oob_poi : NULL;
@ -1750,6 +1751,7 @@ static int qcom_nandc_read_page_raw(struct mtd_info *mtd,
int i, ret;
int read_loc;
nand_read_page_op(chip, page, 0, NULL, 0);
data_buf = buf;
oob_buf = chip->oob_poi;
@ -1850,6 +1852,8 @@ static int qcom_nandc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
u8 *data_buf, *oob_buf;
int i, ret;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
clear_read_regs(nandc);
clear_bam_transaction(nandc);
@ -1902,6 +1906,9 @@ static int qcom_nandc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
free_descs(nandc);
if (!ret)
ret = nand_prog_page_end_op(chip);
return ret;
}
@ -1916,6 +1923,7 @@ static int qcom_nandc_write_page_raw(struct mtd_info *mtd,
u8 *data_buf, *oob_buf;
int i, ret;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
clear_read_regs(nandc);
clear_bam_transaction(nandc);
@ -1970,6 +1978,9 @@ static int qcom_nandc_write_page_raw(struct mtd_info *mtd,
free_descs(nandc);
if (!ret)
ret = nand_prog_page_end_op(chip);
return ret;
}
@ -1990,7 +2001,7 @@ static int qcom_nandc_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
struct nand_ecc_ctrl *ecc = &chip->ecc;
u8 *oob = chip->oob_poi;
int data_size, oob_size;
int ret, status = 0;
int ret;
host->use_ecc = true;
@ -2027,11 +2038,7 @@ static int qcom_nandc_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
return -EIO;
}
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return nand_prog_page_end_op(chip);
}
static int qcom_nandc_block_bad(struct mtd_info *mtd, loff_t ofs)
@ -2081,7 +2088,7 @@ static int qcom_nandc_block_markbad(struct mtd_info *mtd, loff_t ofs)
struct qcom_nand_host *host = to_qcom_nand_host(chip);
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
struct nand_ecc_ctrl *ecc = &chip->ecc;
int page, ret, status = 0;
int page, ret;
clear_read_regs(nandc);
clear_bam_transaction(nandc);
@ -2114,11 +2121,7 @@ static int qcom_nandc_block_markbad(struct mtd_info *mtd, loff_t ofs)
return -EIO;
}
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return nand_prog_page_end_op(chip);
}
/*
@ -2636,6 +2639,9 @@ static int qcom_nand_host_init(struct qcom_nand_controller *nandc,
nand_set_flash_node(chip, dn);
mtd->name = devm_kasprintf(dev, GFP_KERNEL, "qcom_nand.%d", host->cs);
if (!mtd->name)
return -ENOMEM;
mtd->owner = THIS_MODULE;
mtd->dev.parent = dev;

@ -364,7 +364,7 @@ static int r852_wait(struct mtd_info *mtd, struct nand_chip *chip)
struct r852_device *dev = nand_get_controller_data(chip);
unsigned long timeout;
int status;
u8 status;
timeout = jiffies + (chip->state == FL_ERASING ?
msecs_to_jiffies(400) : msecs_to_jiffies(20));
@ -373,8 +373,7 @@ static int r852_wait(struct mtd_info *mtd, struct nand_chip *chip)
if (chip->dev_ready(mtd))
break;
chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
status = (int)chip->read_byte(mtd);
nand_status_op(chip, &status);
/* Unfortunelly, no way to send detailed error status... */
if (dev->dma_error) {
@ -522,9 +521,7 @@ exit:
static int r852_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
}
/*
@ -1046,7 +1043,7 @@ static int r852_resume(struct device *device)
if (dev->card_registred) {
r852_engine_enable(dev);
dev->chip->select_chip(mtd, 0);
dev->chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
nand_reset_op(dev->chip);
dev->chip->select_chip(mtd, -1);
}

@ -614,7 +614,7 @@ static void set_cmd_regs(struct mtd_info *mtd, uint32_t cmd, uint32_t flcmcdr_va
static int flctl_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
chip->read_buf(mtd, buf, mtd->writesize);
nand_read_page_op(chip, page, 0, buf, mtd->writesize);
if (oob_required)
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
@ -624,9 +624,9 @@ static int flctl_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required,
int page)
{
chip->write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
return nand_prog_page_end_op(chip);
}
static void execmd_read_page_sector(struct mtd_info *mtd, int page_addr)

@ -36,7 +36,7 @@ struct sm_oob {
#define SM_SMALL_OOB_SIZE 8
extern int sm_register_device(struct mtd_info *mtd, int smartmedia);
int sm_register_device(struct mtd_info *mtd, int smartmedia);
static inline int sm_sector_valid(struct sm_oob *oob)

@ -958,12 +958,12 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
int ret;
if (*cur_off != data_off)
nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
nand_change_read_column_op(nand, data_off, NULL, 0, false);
sunxi_nfc_randomizer_read_buf(mtd, NULL, ecc->size, false, page);
if (data_off + ecc->size != oob_off)
nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
nand_change_read_column_op(nand, oob_off, NULL, 0, false);
ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
if (ret)
@ -991,16 +991,15 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
* Re-read the data with the randomizer disabled to identify
* bitflips in erased pages.
*/
if (nand->options & NAND_NEED_SCRAMBLING) {
nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
nand->read_buf(mtd, data, ecc->size);
} else {
if (nand->options & NAND_NEED_SCRAMBLING)
nand_change_read_column_op(nand, data_off, data,
ecc->size, false);
else
memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE,
ecc->size);
}
nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
nand->read_buf(mtd, oob, ecc->bytes + 4);
nand_change_read_column_op(nand, oob_off, oob, ecc->bytes + 4,
false);
ret = nand_check_erased_ecc_chunk(data, ecc->size,
oob, ecc->bytes + 4,
@ -1011,7 +1010,8 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
if (oob_required) {
nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
nand_change_read_column_op(nand, oob_off, NULL, 0,
false);
sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
true, page);
@ -1038,8 +1038,8 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
return;
if (!cur_off || *cur_off != offset)
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
offset + mtd->writesize, -1);
nand_change_read_column_op(nand, mtd->writesize, NULL, 0,
false);
if (!randomize)
sunxi_nfc_read_buf(mtd, oob + offset, len);
@ -1116,9 +1116,9 @@ static int sunxi_nfc_hw_ecc_read_chunks_dma(struct mtd_info *mtd, uint8_t *buf,
if (oob_required && !erased) {
/* TODO: use DMA to retrieve OOB */
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
mtd->writesize + oob_off, -1);
nand->read_buf(mtd, oob, ecc->bytes + 4);
nand_change_read_column_op(nand,
mtd->writesize + oob_off,
oob, ecc->bytes + 4, false);
sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, i,
!i, page);
@ -1143,18 +1143,17 @@ static int sunxi_nfc_hw_ecc_read_chunks_dma(struct mtd_info *mtd, uint8_t *buf,
/*
* Re-read the data with the randomizer disabled to
* identify bitflips in erased pages.
* TODO: use DMA to read page in raw mode
*/
if (randomized) {
/* TODO: use DMA to read page in raw mode */
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
data_off, -1);
nand->read_buf(mtd, data, ecc->size);
}
if (randomized)
nand_change_read_column_op(nand, data_off,
data, ecc->size,
false);
/* TODO: use DMA to retrieve OOB */
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
mtd->writesize + oob_off, -1);
nand->read_buf(mtd, oob, ecc->bytes + 4);
nand_change_read_column_op(nand,
mtd->writesize + oob_off,
oob, ecc->bytes + 4, false);
ret = nand_check_erased_ecc_chunk(data, ecc->size,
oob, ecc->bytes + 4,
@ -1187,12 +1186,12 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
int ret;
if (data_off != *cur_off)
nand->cmdfunc(mtd, NAND_CMD_RNDIN, data_off, -1);
nand_change_write_column_op(nand, data_off, NULL, 0, false);
sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page);
if (data_off + ecc->size != oob_off)
nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1);
nand_change_write_column_op(nand, oob_off, NULL, 0, false);
ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
if (ret)
@ -1228,8 +1227,8 @@ static void sunxi_nfc_hw_ecc_write_extra_oob(struct mtd_info *mtd,
return;
if (!cur_off || *cur_off != offset)
nand->cmdfunc(mtd, NAND_CMD_RNDIN,
offset + mtd->writesize, -1);
nand_change_write_column_op(nand, offset + mtd->writesize,
NULL, 0, false);
sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page);
@ -1246,6 +1245,8 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
int ret, i, cur_off = 0;
bool raw_mode = false;
nand_read_page_op(chip, page, 0, NULL, 0);
sunxi_nfc_hw_ecc_enable(mtd);
for (i = 0; i < ecc->steps; i++) {
@ -1279,14 +1280,14 @@ static int sunxi_nfc_hw_ecc_read_page_dma(struct mtd_info *mtd,
{
int ret;
nand_read_page_op(chip, page, 0, NULL, 0);
ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, oob_required, page,
chip->ecc.steps);
if (ret >= 0)
return ret;
/* Fallback to PIO mode */
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
return sunxi_nfc_hw_ecc_read_page(mtd, chip, buf, oob_required, page);
}
@ -1299,6 +1300,8 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
int ret, i, cur_off = 0;
unsigned int max_bitflips = 0;
nand_read_page_op(chip, page, 0, NULL, 0);
sunxi_nfc_hw_ecc_enable(mtd);
for (i = data_offs / ecc->size;
@ -1330,13 +1333,13 @@ static int sunxi_nfc_hw_ecc_read_subpage_dma(struct mtd_info *mtd,
int nchunks = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
int ret;
nand_read_page_op(chip, page, 0, NULL, 0);
ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, false, page, nchunks);
if (ret >= 0)
return ret;
/* Fallback to PIO mode */
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
return sunxi_nfc_hw_ecc_read_subpage(mtd, chip, data_offs, readlen,
buf, page);
}
@ -1349,6 +1352,8 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc = &chip->ecc;
int ret, i, cur_off = 0;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
sunxi_nfc_hw_ecc_enable(mtd);
for (i = 0; i < ecc->steps; i++) {
@ -1370,7 +1375,7 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
sunxi_nfc_hw_ecc_disable(mtd);
return 0;
return nand_prog_page_end_op(chip);
}
static int sunxi_nfc_hw_ecc_write_subpage(struct mtd_info *mtd,
@ -1382,6 +1387,8 @@ static int sunxi_nfc_hw_ecc_write_subpage(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc = &chip->ecc;
int ret, i, cur_off = 0;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
sunxi_nfc_hw_ecc_enable(mtd);
for (i = data_offs / ecc->size;
@ -1400,7 +1407,7 @@ static int sunxi_nfc_hw_ecc_write_subpage(struct mtd_info *mtd,
sunxi_nfc_hw_ecc_disable(mtd);
return 0;
return nand_prog_page_end_op(chip);
}
static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
@ -1430,6 +1437,8 @@ static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
}
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
sunxi_nfc_hw_ecc_enable(mtd);
sunxi_nfc_randomizer_config(mtd, page, false);
sunxi_nfc_randomizer_enable(mtd);
@ -1460,7 +1469,7 @@ static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
NULL, page);
return 0;
return nand_prog_page_end_op(chip);
pio_fallback:
return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
@ -1476,6 +1485,8 @@ static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
int ret, i, cur_off = 0;
bool raw_mode = false;
nand_read_page_op(chip, page, 0, NULL, 0);
sunxi_nfc_hw_ecc_enable(mtd);
for (i = 0; i < ecc->steps; i++) {
@ -1512,6 +1523,8 @@ static int sunxi_nfc_hw_syndrome_ecc_write_page(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc = &chip->ecc;
int ret, i, cur_off = 0;
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
sunxi_nfc_hw_ecc_enable(mtd);
for (i = 0; i < ecc->steps; i++) {
@ -1533,41 +1546,33 @@ static int sunxi_nfc_hw_syndrome_ecc_write_page(struct mtd_info *mtd,
sunxi_nfc_hw_ecc_disable(mtd);
return 0;
return nand_prog_page_end_op(chip);
}
static int sunxi_nfc_hw_common_ecc_read_oob(struct mtd_info *mtd,
struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
chip->pagebuf = -1;
return chip->ecc.read_page(mtd, chip, chip->buffers->databuf, 1, page);
return chip->ecc.read_page(mtd, chip, chip->data_buf, 1, page);
}
static int sunxi_nfc_hw_common_ecc_write_oob(struct mtd_info *mtd,
struct nand_chip *chip,
int page)
{
int ret, status;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
int ret;
chip->pagebuf = -1;
memset(chip->buffers->databuf, 0xff, mtd->writesize);
ret = chip->ecc.write_page(mtd, chip, chip->buffers->databuf, 1, page);
memset(chip->data_buf, 0xff, mtd->writesize);
ret = chip->ecc.write_page(mtd, chip, chip->data_buf, 1, page);
if (ret)
return ret;
/* Send command to program the OOB data */
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
return nand_prog_page_end_op(chip);
}
static const s32 tWB_lut[] = {6, 12, 16, 20};
@ -1853,8 +1858,14 @@ static int sunxi_nand_hw_common_ecc_ctrl_init(struct mtd_info *mtd,
/* Add ECC info retrieval from DT */
for (i = 0; i < ARRAY_SIZE(strengths); i++) {
if (ecc->strength <= strengths[i])
if (ecc->strength <= strengths[i]) {
/*
* Update ecc->strength value with the actual strength
* that will be used by the ECC engine.
*/
ecc->strength = strengths[i];
break;
}
}
if (i >= ARRAY_SIZE(strengths)) {

@ -329,7 +329,7 @@ static void aux_read(struct nand_chip *chip, u8 **buf, int len, int *pos)
if (!*buf) {
/* skip over "len" bytes */
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, *pos, -1);
nand_change_read_column_op(chip, *pos, NULL, 0, false);
} else {
tango_read_buf(mtd, *buf, len);
*buf += len;
@ -344,7 +344,7 @@ static void aux_write(struct nand_chip *chip, const u8 **buf, int len, int *pos)
if (!*buf) {
/* skip over "len" bytes */
chip->cmdfunc(mtd, NAND_CMD_RNDIN, *pos, -1);
nand_change_write_column_op(chip, *pos, NULL, 0, false);
} else {
tango_write_buf(mtd, *buf, len);
*buf += len;
@ -427,7 +427,7 @@ static void raw_write(struct nand_chip *chip, const u8 *buf, const u8 *oob)
static int tango_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
u8 *buf, int oob_required, int page)
{
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
nand_read_page_op(chip, page, 0, NULL, 0);
raw_read(chip, buf, chip->oob_poi);
return 0;
}
@ -435,23 +435,15 @@ static int tango_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
static int tango_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
const u8 *buf, int oob_required, int page)
{
int status;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
raw_write(chip, buf, chip->oob_poi);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
if (status & NAND_STATUS_FAIL)
return -EIO;
return 0;
return nand_prog_page_end_op(chip);
}
static int tango_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
nand_read_page_op(chip, page, 0, NULL, 0);
raw_read(chip, NULL, chip->oob_poi);
return 0;
}
@ -459,11 +451,9 @@ static int tango_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
static int tango_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
nand_prog_page_begin_op(chip, page, 0, NULL, 0);
raw_write(chip, NULL, chip->oob_poi);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
chip->waitfunc(mtd, chip);
return 0;
return nand_prog_page_end_op(chip);
}
static int oob_ecc(struct mtd_info *mtd, int idx, struct mtd_oob_region *res)
@ -590,7 +580,6 @@ static int chip_init(struct device *dev, struct device_node *np)
ecc->write_page = tango_write_page;
ecc->read_oob = tango_read_oob;
ecc->write_oob = tango_write_oob;
ecc->options = NAND_ECC_CUSTOM_PAGE_ACCESS;
err = nand_scan_tail(mtd);
if (err)

@ -192,6 +192,7 @@ tmio_nand_wait(struct mtd_info *mtd, struct nand_chip *nand_chip)
{
struct tmio_nand *tmio = mtd_to_tmio(mtd);
long timeout;
u8 status;
/* enable RDYREQ interrupt */
tmio_iowrite8(0x0f, tmio->fcr + FCR_ISR);
@ -212,8 +213,8 @@ tmio_nand_wait(struct mtd_info *mtd, struct nand_chip *nand_chip)
dev_warn(&tmio->dev->dev, "timeout waiting for interrupt\n");
}
nand_chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
return nand_chip->read_byte(mtd);
nand_status_op(nand_chip, &status);
return status;
}
/*

@ -560,7 +560,7 @@ static int vf610_nfc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
int eccsize = chip->ecc.size;
int stat;
vf610_nfc_read_buf(mtd, buf, eccsize);
nand_read_page_op(chip, page, 0, buf, eccsize);
if (oob_required)
vf610_nfc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
@ -580,7 +580,7 @@ static int vf610_nfc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
{
struct vf610_nfc *nfc = mtd_to_nfc(mtd);
vf610_nfc_write_buf(mtd, buf, mtd->writesize);
nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
if (oob_required)
vf610_nfc_write_buf(mtd, chip->oob_poi, mtd->oobsize);
@ -588,7 +588,7 @@ static int vf610_nfc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
nfc->use_hw_ecc = true;
nfc->write_sz = mtd->writesize + mtd->oobsize;
return 0;
return nand_prog_page_end_op(chip);
}
static const struct of_device_id vf610_nfc_dt_ids[] = {

@ -4,8 +4,7 @@ menuconfig MTD_ONENAND
depends on HAS_IOMEM
help
This enables support for accessing all type of OneNAND flash
devices. For further information see
<http://www.samsung.com/Products/Semiconductor/OneNAND/index.htm>
devices.
if MTD_ONENAND
@ -26,9 +25,11 @@ config MTD_ONENAND_GENERIC
config MTD_ONENAND_OMAP2
tristate "OneNAND on OMAP2/OMAP3 support"
depends on ARCH_OMAP2 || ARCH_OMAP3
depends on OF || COMPILE_TEST
help
Support for a OneNAND flash device connected to an OMAP2/OMAP3 CPU
Support for a OneNAND flash device connected to an OMAP2/OMAP3 SoC
via the GPMC memory controller.
Enable dmaengine and gpiolib for better performance.
config MTD_ONENAND_SAMSUNG
tristate "OneNAND on Samsung SOC controller support"

@ -28,19 +28,18 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/onenand.h>
#include <linux/mtd/partitions.h>
#include <linux/of_device.h>
#include <linux/omap-gpmc.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/regulator/consumer.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <asm/mach/flash.h>
#include <linux/platform_data/mtd-onenand-omap2.h>
#include <linux/omap-dma.h>
#define DRIVER_NAME "omap2-onenand"
@ -50,24 +49,17 @@ struct omap2_onenand {
struct platform_device *pdev;
int gpmc_cs;
unsigned long phys_base;
unsigned int mem_size;
int gpio_irq;
struct gpio_desc *int_gpiod;
struct mtd_info mtd;
struct onenand_chip onenand;
struct completion irq_done;
struct completion dma_done;
int dma_channel;
int freq;
int (*setup)(void __iomem *base, int *freq_ptr);
struct regulator *regulator;
u8 flags;
struct dma_chan *dma_chan;
};
static void omap2_onenand_dma_cb(int lch, u16 ch_status, void *data)
static void omap2_onenand_dma_complete_func(void *completion)
{
struct omap2_onenand *c = data;
complete(&c->dma_done);
complete(completion);
}
static irqreturn_t omap2_onenand_interrupt(int irq, void *dev_id)
@ -90,6 +82,65 @@ static inline void write_reg(struct omap2_onenand *c, unsigned short value,
writew(value, c->onenand.base + reg);
}
static int omap2_onenand_set_cfg(struct omap2_onenand *c,
bool sr, bool sw,
int latency, int burst_len)
{
unsigned short reg = ONENAND_SYS_CFG1_RDY | ONENAND_SYS_CFG1_INT;
reg |= latency << ONENAND_SYS_CFG1_BRL_SHIFT;
switch (burst_len) {
case 0: /* continuous */
break;
case 4:
reg |= ONENAND_SYS_CFG1_BL_4;
break;
case 8:
reg |= ONENAND_SYS_CFG1_BL_8;
break;
case 16:
reg |= ONENAND_SYS_CFG1_BL_16;
break;
case 32:
reg |= ONENAND_SYS_CFG1_BL_32;
break;
default:
return -EINVAL;
}
if (latency > 5)
reg |= ONENAND_SYS_CFG1_HF;
if (latency > 7)
reg |= ONENAND_SYS_CFG1_VHF;
if (sr)
reg |= ONENAND_SYS_CFG1_SYNC_READ;
if (sw)
reg |= ONENAND_SYS_CFG1_SYNC_WRITE;
write_reg(c, reg, ONENAND_REG_SYS_CFG1);
return 0;
}
static int omap2_onenand_get_freq(int ver)
{
switch ((ver >> 4) & 0xf) {
case 0:
return 40;
case 1:
return 54;
case 2:
return 66;
case 3:
return 83;
case 4:
return 104;
}
return -EINVAL;
}
static void wait_err(char *msg, int state, unsigned int ctrl, unsigned int intr)
{
printk(KERN_ERR "onenand_wait: %s! state %d ctrl 0x%04x intr 0x%04x\n",
@ -153,28 +204,22 @@ static int omap2_onenand_wait(struct mtd_info *mtd, int state)
if (!(syscfg & ONENAND_SYS_CFG1_IOBE)) {
syscfg |= ONENAND_SYS_CFG1_IOBE;
write_reg(c, syscfg, ONENAND_REG_SYS_CFG1);
if (c->flags & ONENAND_IN_OMAP34XX)
/* Add a delay to let GPIO settle */
syscfg = read_reg(c, ONENAND_REG_SYS_CFG1);
/* Add a delay to let GPIO settle */
syscfg = read_reg(c, ONENAND_REG_SYS_CFG1);
}
reinit_completion(&c->irq_done);
if (c->gpio_irq) {
result = gpio_get_value(c->gpio_irq);
if (result == -1) {
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
intr = read_reg(c, ONENAND_REG_INTERRUPT);
wait_err("gpio error", state, ctrl, intr);
return -EIO;
}
} else
result = 0;
if (result == 0) {
result = gpiod_get_value(c->int_gpiod);
if (result < 0) {
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
intr = read_reg(c, ONENAND_REG_INTERRUPT);
wait_err("gpio error", state, ctrl, intr);
return result;
} else if (result == 0) {
int retry_cnt = 0;
retry:
result = wait_for_completion_timeout(&c->irq_done,
msecs_to_jiffies(20));
if (result == 0) {
if (!wait_for_completion_io_timeout(&c->irq_done,
msecs_to_jiffies(20))) {
/* Timeout after 20ms */
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
if (ctrl & ONENAND_CTRL_ONGO &&
@ -291,9 +336,42 @@ static inline int omap2_onenand_bufferram_offset(struct mtd_info *mtd, int area)
return 0;
}
#if defined(CONFIG_ARCH_OMAP3) || defined(MULTI_OMAP2)
static inline int omap2_onenand_dma_transfer(struct omap2_onenand *c,
dma_addr_t src, dma_addr_t dst,
size_t count)
{
struct dma_async_tx_descriptor *tx;
dma_cookie_t cookie;
static int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
tx = dmaengine_prep_dma_memcpy(c->dma_chan, dst, src, count, 0);
if (!tx) {
dev_err(&c->pdev->dev, "Failed to prepare DMA memcpy\n");
return -EIO;
}
reinit_completion(&c->dma_done);
tx->callback = omap2_onenand_dma_complete_func;
tx->callback_param = &c->dma_done;
cookie = tx->tx_submit(tx);
if (dma_submit_error(cookie)) {
dev_err(&c->pdev->dev, "Failed to do DMA tx_submit\n");
return -EIO;
}
dma_async_issue_pending(c->dma_chan);
if (!wait_for_completion_io_timeout(&c->dma_done,
msecs_to_jiffies(20))) {
dmaengine_terminate_sync(c->dma_chan);
return -ETIMEDOUT;
}
return 0;
}
static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count)
{
@ -301,10 +379,9 @@ static int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
unsigned long timeout;
void *buf = (void *)buffer;
size_t xtra;
volatile unsigned *done;
int ret;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
@ -341,25 +418,10 @@ static int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
goto out_copy;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
count >> 2, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
reinit_completion(&c->dma_done);
omap_start_dma(c->dma_channel);
timeout = jiffies + msecs_to_jiffies(20);
done = &c->dma_done.done;
while (time_before(jiffies, timeout))
if (*done)
break;
ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);
if (!*done) {
if (ret) {
dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
goto out_copy;
}
@ -371,7 +433,7 @@ out_copy:
return 0;
}
static int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count)
{
@ -379,9 +441,8 @@ static int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
unsigned long timeout;
void *buf = (void *)buffer;
volatile unsigned *done;
int ret;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
@ -412,25 +473,10 @@ static int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
return -1;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
count >> 2, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
reinit_completion(&c->dma_done);
omap_start_dma(c->dma_channel);
timeout = jiffies + msecs_to_jiffies(20);
done = &c->dma_done.done;
while (time_before(jiffies, timeout))
if (*done)
break;
ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
dma_unmap_single(&c->pdev->dev, dma_src, count, DMA_TO_DEVICE);
if (!*done) {
if (ret) {
dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
goto out_copy;
}
@ -442,136 +488,6 @@ out_copy:
return 0;
}
#else
static int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count)
{
return -ENOSYS;
}
static int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count)
{
return -ENOSYS;
}
#endif
#if defined(CONFIG_ARCH_OMAP2) || defined(MULTI_OMAP2)
static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count)
{
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
/* DMA is not used. Revisit PM requirements before enabling it. */
if (1 || (c->dma_channel < 0) ||
((void *) buffer >= (void *) high_memory) || (bram_offset & 3) ||
(((unsigned int) buffer) & 3) || (count < 1024) || (count & 3)) {
memcpy(buffer, (__force void *)(this->base + bram_offset),
count);
return 0;
}
dma_src = c->phys_base + bram_offset;
dma_dst = dma_map_single(&c->pdev->dev, buffer, count,
DMA_FROM_DEVICE);
if (dma_mapping_error(&c->pdev->dev, dma_dst)) {
dev_err(&c->pdev->dev,
"Couldn't DMA map a %d byte buffer\n",
count);
return -1;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
count / 4, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
reinit_completion(&c->dma_done);
omap_start_dma(c->dma_channel);
wait_for_completion(&c->dma_done);
dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);
return 0;
}
static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count)
{
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
/* DMA is not used. Revisit PM requirements before enabling it. */
if (1 || (c->dma_channel < 0) ||
((void *) buffer >= (void *) high_memory) || (bram_offset & 3) ||
(((unsigned int) buffer) & 3) || (count < 1024) || (count & 3)) {
memcpy((__force void *)(this->base + bram_offset), buffer,
count);
return 0;
}
dma_src = dma_map_single(&c->pdev->dev, (void *) buffer, count,
DMA_TO_DEVICE);
dma_dst = c->phys_base + bram_offset;
if (dma_mapping_error(&c->pdev->dev, dma_src)) {
dev_err(&c->pdev->dev,
"Couldn't DMA map a %d byte buffer\n",
count);
return -1;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S16,
count / 2, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
reinit_completion(&c->dma_done);
omap_start_dma(c->dma_channel);
wait_for_completion(&c->dma_done);
dma_unmap_single(&c->pdev->dev, dma_src, count, DMA_TO_DEVICE);
return 0;
}
#else
static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count)
{
return -ENOSYS;
}
static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count)
{
return -ENOSYS;
}
#endif
static struct platform_driver omap2_onenand_driver;
static void omap2_onenand_shutdown(struct platform_device *pdev)
{
struct omap2_onenand *c = dev_get_drvdata(&pdev->dev);
@ -583,168 +499,117 @@ static void omap2_onenand_shutdown(struct platform_device *pdev)
memset((__force void *)c->onenand.base, 0, ONENAND_BUFRAM_SIZE);
}
static int omap2_onenand_enable(struct mtd_info *mtd)
{
int ret;
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
ret = regulator_enable(c->regulator);
if (ret != 0)
dev_err(&c->pdev->dev, "can't enable regulator\n");
return ret;
}
static int omap2_onenand_disable(struct mtd_info *mtd)
{
int ret;
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
ret = regulator_disable(c->regulator);
if (ret != 0)
dev_err(&c->pdev->dev, "can't disable regulator\n");
return ret;
}
static int omap2_onenand_probe(struct platform_device *pdev)
{
struct omap_onenand_platform_data *pdata;
struct omap2_onenand *c;
struct onenand_chip *this;
int r;
u32 val;
dma_cap_mask_t mask;
int freq, latency, r;
struct resource *res;
struct omap2_onenand *c;
struct gpmc_onenand_info info;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
pdata = dev_get_platdata(&pdev->dev);
if (pdata == NULL) {
dev_err(&pdev->dev, "platform data missing\n");
return -ENODEV;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "error getting memory resource\n");
return -EINVAL;
}
c = kzalloc(sizeof(struct omap2_onenand), GFP_KERNEL);
r = of_property_read_u32(np, "reg", &val);
if (r) {
dev_err(dev, "reg not found in DT\n");
return r;
}
c = devm_kzalloc(dev, sizeof(struct omap2_onenand), GFP_KERNEL);
if (!c)
return -ENOMEM;
init_completion(&c->irq_done);
init_completion(&c->dma_done);
c->flags = pdata->flags;
c->gpmc_cs = pdata->cs;
c->gpio_irq = pdata->gpio_irq;
c->dma_channel = pdata->dma_channel;
if (c->dma_channel < 0) {
/* if -1, don't use DMA */
c->gpio_irq = 0;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (res == NULL) {
r = -EINVAL;
dev_err(&pdev->dev, "error getting memory resource\n");
goto err_kfree;
}
c->gpmc_cs = val;
c->phys_base = res->start;
c->mem_size = resource_size(res);
if (request_mem_region(c->phys_base, c->mem_size,
pdev->dev.driver->name) == NULL) {
dev_err(&pdev->dev, "Cannot reserve memory region at 0x%08lx, size: 0x%x\n",
c->phys_base, c->mem_size);
r = -EBUSY;
goto err_kfree;
}
c->onenand.base = ioremap(c->phys_base, c->mem_size);
if (c->onenand.base == NULL) {
r = -ENOMEM;
goto err_release_mem_region;
c->onenand.base = devm_ioremap_resource(dev, res);
if (IS_ERR(c->onenand.base))
return PTR_ERR(c->onenand.base);
c->int_gpiod = devm_gpiod_get_optional(dev, "int", GPIOD_IN);
if (IS_ERR(c->int_gpiod)) {
r = PTR_ERR(c->int_gpiod);
/* Just try again if this happens */
if (r != -EPROBE_DEFER)
dev_err(dev, "error getting gpio: %d\n", r);
return r;
}
if (pdata->onenand_setup != NULL) {
r = pdata->onenand_setup(c->onenand.base, &c->freq);
if (r < 0) {
dev_err(&pdev->dev, "Onenand platform setup failed: "
"%d\n", r);
goto err_iounmap;
}
c->setup = pdata->onenand_setup;
if (c->int_gpiod) {
r = devm_request_irq(dev, gpiod_to_irq(c->int_gpiod),
omap2_onenand_interrupt,
IRQF_TRIGGER_RISING, "onenand", c);
if (r)
return r;
c->onenand.wait = omap2_onenand_wait;
}
if (c->gpio_irq) {
if ((r = gpio_request(c->gpio_irq, "OneNAND irq")) < 0) {
dev_err(&pdev->dev, "Failed to request GPIO%d for "
"OneNAND\n", c->gpio_irq);
goto err_iounmap;
}
gpio_direction_input(c->gpio_irq);
dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
if ((r = request_irq(gpio_to_irq(c->gpio_irq),
omap2_onenand_interrupt, IRQF_TRIGGER_RISING,
pdev->dev.driver->name, c)) < 0)
goto err_release_gpio;
c->dma_chan = dma_request_channel(mask, NULL, NULL);
if (c->dma_chan) {
c->onenand.read_bufferram = omap2_onenand_read_bufferram;
c->onenand.write_bufferram = omap2_onenand_write_bufferram;
}
if (c->dma_channel >= 0) {
r = omap_request_dma(0, pdev->dev.driver->name,
omap2_onenand_dma_cb, (void *) c,
&c->dma_channel);
if (r == 0) {
omap_set_dma_write_mode(c->dma_channel,
OMAP_DMA_WRITE_NON_POSTED);
omap_set_dma_src_data_pack(c->dma_channel, 1);
omap_set_dma_src_burst_mode(c->dma_channel,
OMAP_DMA_DATA_BURST_8);
omap_set_dma_dest_data_pack(c->dma_channel, 1);
omap_set_dma_dest_burst_mode(c->dma_channel,
OMAP_DMA_DATA_BURST_8);
} else {
dev_info(&pdev->dev,
"failed to allocate DMA for OneNAND, "
"using PIO instead\n");
c->dma_channel = -1;
}
}
dev_info(&pdev->dev, "initializing on CS%d, phys base 0x%08lx, virtual "
"base %p, freq %d MHz\n", c->gpmc_cs, c->phys_base,
c->onenand.base, c->freq);
c->pdev = pdev;
c->mtd.priv = &c->onenand;
c->mtd.dev.parent = dev;
mtd_set_of_node(&c->mtd, dev->of_node);
c->mtd.dev.parent = &pdev->dev;
mtd_set_of_node(&c->mtd, pdata->of_node);
this = &c->onenand;
if (c->dma_channel >= 0) {
this->wait = omap2_onenand_wait;
if (c->flags & ONENAND_IN_OMAP34XX) {
this->read_bufferram = omap3_onenand_read_bufferram;
this->write_bufferram = omap3_onenand_write_bufferram;
} else {
this->read_bufferram = omap2_onenand_read_bufferram;
this->write_bufferram = omap2_onenand_write_bufferram;
}
}
if (pdata->regulator_can_sleep) {
c->regulator = regulator_get(&pdev->dev, "vonenand");
if (IS_ERR(c->regulator)) {
dev_err(&pdev->dev, "Failed to get regulator\n");
r = PTR_ERR(c->regulator);
goto err_release_dma;
}
c->onenand.enable = omap2_onenand_enable;
c->onenand.disable = omap2_onenand_disable;
}
if (pdata->skip_initial_unlocking)
this->options |= ONENAND_SKIP_INITIAL_UNLOCKING;
dev_info(dev, "initializing on CS%d (0x%08lx), va %p, %s mode\n",
c->gpmc_cs, c->phys_base, c->onenand.base,
c->dma_chan ? "DMA" : "PIO");
if ((r = onenand_scan(&c->mtd, 1)) < 0)
goto err_release_regulator;
goto err_release_dma;
r = mtd_device_register(&c->mtd, pdata ? pdata->parts : NULL,
pdata ? pdata->nr_parts : 0);
freq = omap2_onenand_get_freq(c->onenand.version_id);
if (freq > 0) {
switch (freq) {
case 104:
latency = 7;
break;
case 83:
latency = 6;
break;
case 66:
latency = 5;
break;
case 56:
latency = 4;
break;
default: /* 40 MHz or lower */
latency = 3;
break;
}
r = gpmc_omap_onenand_set_timings(dev, c->gpmc_cs,
freq, latency, &info);
if (r)
goto err_release_onenand;
r = omap2_onenand_set_cfg(c, info.sync_read, info.sync_write,
latency, info.burst_len);
if (r)
goto err_release_onenand;
if (info.sync_read || info.sync_write)
dev_info(dev, "optimized timings for %d MHz\n", freq);
}
r = mtd_device_register(&c->mtd, NULL, 0);
if (r)
goto err_release_onenand;
@ -754,22 +619,9 @@ static int omap2_onenand_probe(struct platform_device *pdev)
err_release_onenand:
onenand_release(&c->mtd);
err_release_regulator:
regulator_put(c->regulator);
err_release_dma:
if (c->dma_channel != -1)
omap_free_dma(c->dma_channel);
if (c->gpio_irq)
free_irq(gpio_to_irq(c->gpio_irq), c);
err_release_gpio:
if (c->gpio_irq)
gpio_free(c->gpio_irq);
err_iounmap:
iounmap(c->onenand.base);
err_release_mem_region:
release_mem_region(c->phys_base, c->mem_size);
err_kfree:
kfree(c);
if (c->dma_chan)
dma_release_channel(c->dma_chan);
return r;
}
@ -779,27 +631,26 @@ static int omap2_onenand_remove(struct platform_device *pdev)
struct omap2_onenand *c = dev_get_drvdata(&pdev->dev);
onenand_release(&c->mtd);
regulator_put(c->regulator);
if (c->dma_channel != -1)
omap_free_dma(c->dma_channel);
if (c->dma_chan)
dma_release_channel(c->dma_chan);
omap2_onenand_shutdown(pdev);
if (c->gpio_irq) {
free_irq(gpio_to_irq(c->gpio_irq), c);
gpio_free(c->gpio_irq);
}
iounmap(c->onenand.base);
release_mem_region(c->phys_base, c->mem_size);
kfree(c);
return 0;
}
static const struct of_device_id omap2_onenand_id_table[] = {
{ .compatible = "ti,omap2-onenand", },
{},
};
MODULE_DEVICE_TABLE(of, omap2_onenand_id_table);
static struct platform_driver omap2_onenand_driver = {
.probe = omap2_onenand_probe,
.remove = omap2_onenand_remove,
.shutdown = omap2_onenand_shutdown,
.driver = {
.name = DRIVER_NAME,
.of_match_table = omap2_onenand_id_table,
},
};

@ -1383,15 +1383,6 @@ static int onenand_read_oob_nolock(struct mtd_info *mtd, loff_t from,
return -EINVAL;
}
/* Do not allow reads past end of device */
if (unlikely(from >= mtd->size ||
column + len > ((mtd->size >> this->page_shift) -
(from >> this->page_shift)) * oobsize)) {
printk(KERN_ERR "%s: Attempted to read beyond end of device\n",
__func__);
return -EINVAL;
}
stats = mtd->ecc_stats;
readcmd = ONENAND_IS_4KB_PAGE(this) ? ONENAND_CMD_READ : ONENAND_CMD_READOOB;
@ -1447,38 +1438,6 @@ static int onenand_read_oob_nolock(struct mtd_info *mtd, loff_t from,
return 0;
}
/**
* onenand_read - [MTD Interface] Read data from flash
* @param mtd MTD device structure
* @param from offset to read from
* @param len number of bytes to read
* @param retlen pointer to variable to store the number of read bytes
* @param buf the databuffer to put data
*
* Read with ecc
*/
static int onenand_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
{
struct onenand_chip *this = mtd->priv;
struct mtd_oob_ops ops = {
.len = len,
.ooblen = 0,
.datbuf = buf,
.oobbuf = NULL,
};
int ret;
onenand_get_device(mtd, FL_READING);
ret = ONENAND_IS_4KB_PAGE(this) ?
onenand_mlc_read_ops_nolock(mtd, from, &ops) :
onenand_read_ops_nolock(mtd, from, &ops);
onenand_release_device(mtd);
*retlen = ops.retlen;
return ret;
}
/**
* onenand_read_oob - [MTD Interface] Read main and/or out-of-band
* @param mtd: MTD device structure
@ -2056,15 +2015,6 @@ static int onenand_write_oob_nolock(struct mtd_info *mtd, loff_t to,
return -EINVAL;
}
/* Do not allow reads past end of device */
if (unlikely(to >= mtd->size ||
column + len > ((mtd->size >> this->page_shift) -
(to >> this->page_shift)) * oobsize)) {
printk(KERN_ERR "%s: Attempted to write past end of device\n",
__func__);
return -EINVAL;
}
oobbuf = this->oob_buf;
oobcmd = ONENAND_IS_4KB_PAGE(this) ? ONENAND_CMD_PROG : ONENAND_CMD_PROGOOB;
@ -2128,35 +2078,6 @@ static int onenand_write_oob_nolock(struct mtd_info *mtd, loff_t to,
return ret;
}
/**
* onenand_write - [MTD Interface] write buffer to FLASH
* @param mtd MTD device structure
* @param to offset to write to
* @param len number of bytes to write
* @param retlen pointer to variable to store the number of written bytes
* @param buf the data to write
*
* Write with ECC
*/
static int onenand_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct mtd_oob_ops ops = {
.len = len,
.ooblen = 0,
.datbuf = (u_char *) buf,
.oobbuf = NULL,
};
int ret;
onenand_get_device(mtd, FL_WRITING);
ret = onenand_write_ops_nolock(mtd, to, &ops);
onenand_release_device(mtd);
*retlen = ops.retlen;
return ret;
}
/**
* onenand_write_oob - [MTD Interface] NAND write data and/or out-of-band
* @param mtd: MTD device structure
@ -4038,8 +3959,6 @@ int onenand_scan(struct mtd_info *mtd, int maxchips)
mtd->_erase = onenand_erase;
mtd->_point = NULL;
mtd->_unpoint = NULL;
mtd->_read = onenand_read;
mtd->_write = onenand_write;
mtd->_read_oob = onenand_read_oob;
mtd->_write_oob = onenand_write_oob;
mtd->_panic_write = onenand_panic_write;

@ -25,8 +25,6 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <asm/mach/flash.h>
#include "samsung.h"
enum soc_type {
@ -129,16 +127,13 @@ struct s3c_onenand {
struct platform_device *pdev;
enum soc_type type;
void __iomem *base;
struct resource *base_res;
void __iomem *ahb_addr;
struct resource *ahb_res;
int bootram_command;
void __iomem *page_buf;
void __iomem *oob_buf;
void *page_buf;
void *oob_buf;
unsigned int (*mem_addr)(int fba, int fpa, int fsa);
unsigned int (*cmd_map)(unsigned int type, unsigned int val);
void __iomem *dma_addr;
struct resource *dma_res;
unsigned long phys_base;
struct completion complete;
};
@ -413,8 +408,8 @@ static int s3c_onenand_command(struct mtd_info *mtd, int cmd, loff_t addr,
/*
* Emulate Two BufferRAMs and access with 4 bytes pointer
*/
m = (unsigned int *) onenand->page_buf;
s = (unsigned int *) onenand->oob_buf;
m = onenand->page_buf;
s = onenand->oob_buf;
if (index) {
m += (this->writesize >> 2);
@ -486,11 +481,11 @@ static unsigned char *s3c_get_bufferram(struct mtd_info *mtd, int area)
unsigned char *p;
if (area == ONENAND_DATARAM) {
p = (unsigned char *) onenand->page_buf;
p = onenand->page_buf;
if (index == 1)
p += this->writesize;
} else {
p = (unsigned char *) onenand->oob_buf;
p = onenand->oob_buf;
if (index == 1)
p += mtd->oobsize;
}
@ -851,15 +846,14 @@ static int s3c_onenand_probe(struct platform_device *pdev)
/* No need to check pdata. the platform data is optional */
size = sizeof(struct mtd_info) + sizeof(struct onenand_chip);
mtd = kzalloc(size, GFP_KERNEL);
mtd = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
if (!mtd)
return -ENOMEM;
onenand = kzalloc(sizeof(struct s3c_onenand), GFP_KERNEL);
if (!onenand) {
err = -ENOMEM;
goto onenand_fail;
}
onenand = devm_kzalloc(&pdev->dev, sizeof(struct s3c_onenand),
GFP_KERNEL);
if (!onenand)
return -ENOMEM;
this = (struct onenand_chip *) &mtd[1];
mtd->priv = this;
@ -870,26 +864,12 @@ static int s3c_onenand_probe(struct platform_device *pdev)
s3c_onenand_setup(mtd);
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!r) {
dev_err(&pdev->dev, "no memory resource defined\n");
return -ENOENT;
goto ahb_resource_failed;
}
onenand->base = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(onenand->base))
return PTR_ERR(onenand->base);
onenand->base_res = request_mem_region(r->start, resource_size(r),
pdev->name);
if (!onenand->base_res) {
dev_err(&pdev->dev, "failed to request memory resource\n");
err = -EBUSY;
goto resource_failed;
}
onenand->phys_base = r->start;
onenand->base = ioremap(r->start, resource_size(r));
if (!onenand->base) {
dev_err(&pdev->dev, "failed to map memory resource\n");
err = -EFAULT;
goto ioremap_failed;
}
/* Set onenand_chip also */
this->base = onenand->base;
@ -898,40 +878,20 @@ static int s3c_onenand_probe(struct platform_device *pdev)
if (onenand->type != TYPE_S5PC110) {
r = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!r) {
dev_err(&pdev->dev, "no buffer memory resource defined\n");
err = -ENOENT;
goto ahb_resource_failed;
}
onenand->ahb_res = request_mem_region(r->start, resource_size(r),
pdev->name);
if (!onenand->ahb_res) {
dev_err(&pdev->dev, "failed to request buffer memory resource\n");
err = -EBUSY;
goto ahb_resource_failed;
}
onenand->ahb_addr = ioremap(r->start, resource_size(r));
if (!onenand->ahb_addr) {
dev_err(&pdev->dev, "failed to map buffer memory resource\n");
err = -EINVAL;
goto ahb_ioremap_failed;
}
onenand->ahb_addr = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(onenand->ahb_addr))
return PTR_ERR(onenand->ahb_addr);
/* Allocate 4KiB BufferRAM */
onenand->page_buf = kzalloc(SZ_4K, GFP_KERNEL);
if (!onenand->page_buf) {
err = -ENOMEM;
goto page_buf_fail;
}
onenand->page_buf = devm_kzalloc(&pdev->dev, SZ_4K,
GFP_KERNEL);
if (!onenand->page_buf)
return -ENOMEM;
/* Allocate 128 SpareRAM */
onenand->oob_buf = kzalloc(128, GFP_KERNEL);
if (!onenand->oob_buf) {
err = -ENOMEM;
goto oob_buf_fail;
}
onenand->oob_buf = devm_kzalloc(&pdev->dev, 128, GFP_KERNEL);
if (!onenand->oob_buf)
return -ENOMEM;
/* S3C doesn't handle subpage write */
mtd->subpage_sft = 0;
@ -939,28 +899,9 @@ static int s3c_onenand_probe(struct platform_device *pdev)
} else { /* S5PC110 */
r = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!r) {
dev_err(&pdev->dev, "no dma memory resource defined\n");
err = -ENOENT;
goto dma_resource_failed;
}
onenand->dma_res = request_mem_region(r->start, resource_size(r),
pdev->name);
if (!onenand->dma_res) {
dev_err(&pdev->dev, "failed to request dma memory resource\n");
err = -EBUSY;
goto dma_resource_failed;
}
onenand->dma_addr = ioremap(r->start, resource_size(r));
if (!onenand->dma_addr) {
dev_err(&pdev->dev, "failed to map dma memory resource\n");
err = -EINVAL;
goto dma_ioremap_failed;
}
onenand->phys_base = onenand->base_res->start;
onenand->dma_addr = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(onenand->dma_addr))
return PTR_ERR(onenand->dma_addr);
s5pc110_dma_ops = s5pc110_dma_poll;
/* Interrupt support */
@ -968,19 +909,20 @@ static int s3c_onenand_probe(struct platform_device *pdev)
if (r) {
init_completion(&onenand->complete);
s5pc110_dma_ops = s5pc110_dma_irq;
err = request_irq(r->start, s5pc110_onenand_irq,
IRQF_SHARED, "onenand", &onenand);
err = devm_request_irq(&pdev->dev, r->start,
s5pc110_onenand_irq,
IRQF_SHARED, "onenand",
&onenand);
if (err) {
dev_err(&pdev->dev, "failed to get irq\n");
goto scan_failed;
return err;
}
}
}
if (onenand_scan(mtd, 1)) {
err = -EFAULT;
goto scan_failed;
}
err = onenand_scan(mtd, 1);
if (err)
return err;
if (onenand->type != TYPE_S5PC110) {
/* S3C doesn't handle subpage write */
@ -994,40 +936,15 @@ static int s3c_onenand_probe(struct platform_device *pdev)
err = mtd_device_parse_register(mtd, NULL, NULL,
pdata ? pdata->parts : NULL,
pdata ? pdata->nr_parts : 0);
if (err) {
dev_err(&pdev->dev, "failed to parse partitions and register the MTD device\n");
onenand_release(mtd);
return err;
}
platform_set_drvdata(pdev, mtd);
return 0;
scan_failed:
if (onenand->dma_addr)
iounmap(onenand->dma_addr);
dma_ioremap_failed:
if (onenand->dma_res)
release_mem_region(onenand->dma_res->start,
resource_size(onenand->dma_res));
kfree(onenand->oob_buf);
oob_buf_fail:
kfree(onenand->page_buf);
page_buf_fail:
if (onenand->ahb_addr)
iounmap(onenand->ahb_addr);
ahb_ioremap_failed:
if (onenand->ahb_res)
release_mem_region(onenand->ahb_res->start,
resource_size(onenand->ahb_res));
dma_resource_failed:
ahb_resource_failed:
iounmap(onenand->base);
ioremap_failed:
if (onenand->base_res)
release_mem_region(onenand->base_res->start,
resource_size(onenand->base_res));
resource_failed:
kfree(onenand);
onenand_fail:
kfree(mtd);
return err;
}
static int s3c_onenand_remove(struct platform_device *pdev)
@ -1035,25 +952,7 @@ static int s3c_onenand_remove(struct platform_device *pdev)
struct mtd_info *mtd = platform_get_drvdata(pdev);
onenand_release(mtd);
if (onenand->ahb_addr)
iounmap(onenand->ahb_addr);
if (onenand->ahb_res)
release_mem_region(onenand->ahb_res->start,
resource_size(onenand->ahb_res));
if (onenand->dma_addr)
iounmap(onenand->dma_addr);
if (onenand->dma_res)
release_mem_region(onenand->dma_res->start,
resource_size(onenand->dma_res));
iounmap(onenand->base);
release_mem_region(onenand->base_res->start,
resource_size(onenand->base_res));
kfree(onenand->oob_buf);
kfree(onenand->page_buf);
kfree(onenand);
kfree(mtd);
return 0;
}

@ -192,7 +192,7 @@ static int sharpsl_nand_init_ftl(struct mtd_info *mtd, struct sharpsl_ftl *ftl)
/* create physical-logical table */
for (block_num = 0; block_num < phymax; block_num++) {
block_adr = block_num * mtd->erasesize;
block_adr = (loff_t)block_num * mtd->erasesize;
if (mtd_block_isbad(mtd, block_adr))
continue;
@ -219,7 +219,7 @@ exit:
return ret;
}
void sharpsl_nand_cleanup_ftl(struct sharpsl_ftl *ftl)
static void sharpsl_nand_cleanup_ftl(struct sharpsl_ftl *ftl)
{
kfree(ftl->log2phy);
}
@ -244,7 +244,7 @@ static int sharpsl_nand_read_laddr(struct mtd_info *mtd,
return -EINVAL;
block_num = ftl->log2phy[log_num];
block_adr = block_num * mtd->erasesize;
block_adr = (loff_t)block_num * mtd->erasesize;
block_ofs = mtd_mod_by_eb((u32)from, mtd);
err = mtd_read(mtd, block_adr + block_ofs, len, &retlen, buf);

@ -58,6 +58,7 @@ struct cqspi_flash_pdata {
u8 data_width;
u8 cs;
bool registered;
bool use_direct_mode;
};
struct cqspi_st {
@ -68,6 +69,7 @@ struct cqspi_st {
void __iomem *iobase;
void __iomem *ahb_base;
resource_size_t ahb_size;
struct completion transfer_complete;
struct mutex bus_mutex;
@ -103,6 +105,7 @@ struct cqspi_st {
/* Register map */
#define CQSPI_REG_CONFIG 0x00
#define CQSPI_REG_CONFIG_ENABLE_MASK BIT(0)
#define CQSPI_REG_CONFIG_ENB_DIR_ACC_CTRL BIT(7)
#define CQSPI_REG_CONFIG_DECODE_MASK BIT(9)
#define CQSPI_REG_CONFIG_CHIPSELECT_LSB 10
#define CQSPI_REG_CONFIG_DMA_MASK BIT(15)
@ -450,8 +453,7 @@ static int cqspi_command_write_addr(struct spi_nor *nor,
return cqspi_exec_flash_cmd(cqspi, reg);
}
static int cqspi_indirect_read_setup(struct spi_nor *nor,
const unsigned int from_addr)
static int cqspi_read_setup(struct spi_nor *nor)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
@ -459,8 +461,6 @@ static int cqspi_indirect_read_setup(struct spi_nor *nor,
unsigned int dummy_clk = 0;
unsigned int reg;
writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
reg = nor->read_opcode << CQSPI_REG_RD_INSTR_OPCODE_LSB;
reg |= cqspi_calc_rdreg(nor, nor->read_opcode);
@ -493,8 +493,8 @@ static int cqspi_indirect_read_setup(struct spi_nor *nor,
return 0;
}
static int cqspi_indirect_read_execute(struct spi_nor *nor,
u8 *rxbuf, const unsigned n_rx)
static int cqspi_indirect_read_execute(struct spi_nor *nor, u8 *rxbuf,
loff_t from_addr, const size_t n_rx)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
@ -504,6 +504,7 @@ static int cqspi_indirect_read_execute(struct spi_nor *nor,
unsigned int bytes_to_read = 0;
int ret = 0;
writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES);
/* Clear all interrupts. */
@ -570,8 +571,7 @@ failrd:
return ret;
}
static int cqspi_indirect_write_setup(struct spi_nor *nor,
const unsigned int to_addr)
static int cqspi_write_setup(struct spi_nor *nor)
{
unsigned int reg;
struct cqspi_flash_pdata *f_pdata = nor->priv;
@ -584,8 +584,6 @@ static int cqspi_indirect_write_setup(struct spi_nor *nor,
reg = cqspi_calc_rdreg(nor, nor->program_opcode);
writel(reg, reg_base + CQSPI_REG_RD_INSTR);
writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
reg = readl(reg_base + CQSPI_REG_SIZE);
reg &= ~CQSPI_REG_SIZE_ADDRESS_MASK;
reg |= (nor->addr_width - 1);
@ -593,8 +591,8 @@ static int cqspi_indirect_write_setup(struct spi_nor *nor,
return 0;
}
static int cqspi_indirect_write_execute(struct spi_nor *nor,
const u8 *txbuf, const unsigned n_tx)
static int cqspi_indirect_write_execute(struct spi_nor *nor, loff_t to_addr,
const u8 *txbuf, const size_t n_tx)
{
const unsigned int page_size = nor->page_size;
struct cqspi_flash_pdata *f_pdata = nor->priv;
@ -604,6 +602,7 @@ static int cqspi_indirect_write_execute(struct spi_nor *nor,
unsigned int write_bytes;
int ret;
writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES);
/* Clear all interrupts. */
@ -894,17 +893,22 @@ static int cqspi_set_protocol(struct spi_nor *nor, const int read)
static ssize_t cqspi_write(struct spi_nor *nor, loff_t to,
size_t len, const u_char *buf)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
int ret;
ret = cqspi_set_protocol(nor, 0);
if (ret)
return ret;
ret = cqspi_indirect_write_setup(nor, to);
ret = cqspi_write_setup(nor);
if (ret)
return ret;
ret = cqspi_indirect_write_execute(nor, buf, len);
if (f_pdata->use_direct_mode)
memcpy_toio(cqspi->ahb_base + to, buf, len);
else
ret = cqspi_indirect_write_execute(nor, to, buf, len);
if (ret)
return ret;
@ -914,17 +918,22 @@ static ssize_t cqspi_write(struct spi_nor *nor, loff_t to,
static ssize_t cqspi_read(struct spi_nor *nor, loff_t from,
size_t len, u_char *buf)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
int ret;
ret = cqspi_set_protocol(nor, 1);
if (ret)
return ret;
ret = cqspi_indirect_read_setup(nor, from);
ret = cqspi_read_setup(nor);
if (ret)
return ret;
ret = cqspi_indirect_read_execute(nor, buf, len);
if (f_pdata->use_direct_mode)
memcpy_fromio(buf, cqspi->ahb_base + from, len);
else
ret = cqspi_indirect_read_execute(nor, buf, from, len);
if (ret)
return ret;
@ -1059,6 +1068,8 @@ static int cqspi_of_get_pdata(struct platform_device *pdev)
static void cqspi_controller_init(struct cqspi_st *cqspi)
{
u32 reg;
cqspi_controller_enable(cqspi, 0);
/* Configure the remap address register, no remap */
@ -1081,6 +1092,11 @@ static void cqspi_controller_init(struct cqspi_st *cqspi)
writel(cqspi->fifo_depth * cqspi->fifo_width / 8,
cqspi->iobase + CQSPI_REG_INDIRECTWRWATERMARK);
/* Enable Direct Access Controller */
reg = readl(cqspi->iobase + CQSPI_REG_CONFIG);
reg |= CQSPI_REG_CONFIG_ENB_DIR_ACC_CTRL;
writel(reg, cqspi->iobase + CQSPI_REG_CONFIG);
cqspi_controller_enable(cqspi, 1);
}
@ -1156,6 +1172,12 @@ static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
goto err;
f_pdata->registered = true;
if (mtd->size <= cqspi->ahb_size) {
f_pdata->use_direct_mode = true;
dev_dbg(nor->dev, "using direct mode for %s\n",
mtd->name);
}
}
return 0;
@ -1215,6 +1237,7 @@ static int cqspi_probe(struct platform_device *pdev)
dev_err(dev, "Cannot remap AHB address.\n");
return PTR_ERR(cqspi->ahb_base);
}
cqspi->ahb_size = resource_size(res_ahb);
init_completion(&cqspi->transfer_complete);

@ -801,10 +801,10 @@ static int fsl_qspi_nor_setup_last(struct fsl_qspi *q)
}
static const struct of_device_id fsl_qspi_dt_ids[] = {
{ .compatible = "fsl,vf610-qspi", .data = (void *)&vybrid_data, },
{ .compatible = "fsl,imx6sx-qspi", .data = (void *)&imx6sx_data, },
{ .compatible = "fsl,imx7d-qspi", .data = (void *)&imx7d_data, },
{ .compatible = "fsl,imx6ul-qspi", .data = (void *)&imx6ul_data, },
{ .compatible = "fsl,vf610-qspi", .data = &vybrid_data, },
{ .compatible = "fsl,imx6sx-qspi", .data = &imx6sx_data, },
{ .compatible = "fsl,imx7d-qspi", .data = &imx7d_data, },
{ .compatible = "fsl,imx6ul-qspi", .data = &imx6ul_data, },
{ .compatible = "fsl,ls1021a-qspi", .data = (void *)&ls1021a_data, },
{ /* sentinel */ }
};

@ -138,7 +138,6 @@
* @erase_64k: 64k erase supported
* @opcodes: Opcodes which are supported. This are programmed by BIOS
* before it locks down the controller.
* @preopcodes: Preopcodes which are supported.
*/
struct intel_spi {
struct device *dev;
@ -155,7 +154,6 @@ struct intel_spi {
bool swseq_erase;
bool erase_64k;
u8 opcodes[8];
u8 preopcodes[2];
};
static bool writeable;
@ -400,10 +398,6 @@ static int intel_spi_init(struct intel_spi *ispi)
ispi->opcodes[i] = opmenu0 >> i * 8;
ispi->opcodes[i + 4] = opmenu1 >> i * 8;
}
val = readl(ispi->sregs + PREOP_OPTYPE);
ispi->preopcodes[0] = val;
ispi->preopcodes[1] = val >> 8;
}
}

@ -110,7 +110,7 @@
#define MTK_NOR_PRG_REG(n) (MTK_NOR_PRGDATA0_REG + 4 * (n))
#define MTK_NOR_SHREG(n) (MTK_NOR_SHREG0_REG + 4 * (n))
struct mt8173_nor {
struct mtk_nor {
struct spi_nor nor;
struct device *dev;
void __iomem *base; /* nor flash base address */
@ -118,48 +118,48 @@ struct mt8173_nor {
struct clk *nor_clk;
};
static void mt8173_nor_set_read_mode(struct mt8173_nor *mt8173_nor)
static void mtk_nor_set_read_mode(struct mtk_nor *mtk_nor)
{
struct spi_nor *nor = &mt8173_nor->nor;
struct spi_nor *nor = &mtk_nor->nor;
switch (nor->read_proto) {
case SNOR_PROTO_1_1_1:
writeb(nor->read_opcode, mt8173_nor->base +
writeb(nor->read_opcode, mtk_nor->base +
MTK_NOR_PRGDATA3_REG);
writeb(MTK_NOR_FAST_READ, mt8173_nor->base +
writeb(MTK_NOR_FAST_READ, mtk_nor->base +
MTK_NOR_CFG1_REG);
break;
case SNOR_PROTO_1_1_2:
writeb(nor->read_opcode, mt8173_nor->base +
writeb(nor->read_opcode, mtk_nor->base +
MTK_NOR_PRGDATA3_REG);
writeb(MTK_NOR_DUAL_READ_EN, mt8173_nor->base +
writeb(MTK_NOR_DUAL_READ_EN, mtk_nor->base +
MTK_NOR_DUAL_REG);
break;
case SNOR_PROTO_1_1_4:
writeb(nor->read_opcode, mt8173_nor->base +
writeb(nor->read_opcode, mtk_nor->base +
MTK_NOR_PRGDATA4_REG);
writeb(MTK_NOR_QUAD_READ_EN, mt8173_nor->base +
writeb(MTK_NOR_QUAD_READ_EN, mtk_nor->base +
MTK_NOR_DUAL_REG);
break;
default:
writeb(MTK_NOR_DUAL_DISABLE, mt8173_nor->base +
writeb(MTK_NOR_DUAL_DISABLE, mtk_nor->base +
MTK_NOR_DUAL_REG);
break;
}
}
static int mt8173_nor_execute_cmd(struct mt8173_nor *mt8173_nor, u8 cmdval)
static int mtk_nor_execute_cmd(struct mtk_nor *mtk_nor, u8 cmdval)
{
int reg;
u8 val = cmdval & 0x1f;
writeb(cmdval, mt8173_nor->base + MTK_NOR_CMD_REG);
return readl_poll_timeout(mt8173_nor->base + MTK_NOR_CMD_REG, reg,
writeb(cmdval, mtk_nor->base + MTK_NOR_CMD_REG);
return readl_poll_timeout(mtk_nor->base + MTK_NOR_CMD_REG, reg,
!(reg & val), 100, 10000);
}
static int mt8173_nor_do_tx_rx(struct mt8173_nor *mt8173_nor, u8 op,
u8 *tx, int txlen, u8 *rx, int rxlen)
static int mtk_nor_do_tx_rx(struct mtk_nor *mtk_nor, u8 op,
u8 *tx, int txlen, u8 *rx, int rxlen)
{
int len = 1 + txlen + rxlen;
int i, ret, idx;
@ -167,26 +167,26 @@ static int mt8173_nor_do_tx_rx(struct mt8173_nor *mt8173_nor, u8 op,
if (len > MTK_NOR_MAX_SHIFT)
return -EINVAL;
writeb(len * 8, mt8173_nor->base + MTK_NOR_CNT_REG);
writeb(len * 8, mtk_nor->base + MTK_NOR_CNT_REG);
/* start at PRGDATA5, go down to PRGDATA0 */
idx = MTK_NOR_MAX_RX_TX_SHIFT - 1;
/* opcode */
writeb(op, mt8173_nor->base + MTK_NOR_PRG_REG(idx));
writeb(op, mtk_nor->base + MTK_NOR_PRG_REG(idx));
idx--;
/* program TX data */
for (i = 0; i < txlen; i++, idx--)
writeb(tx[i], mt8173_nor->base + MTK_NOR_PRG_REG(idx));
writeb(tx[i], mtk_nor->base + MTK_NOR_PRG_REG(idx));
/* clear out rest of TX registers */
while (idx >= 0) {
writeb(0, mt8173_nor->base + MTK_NOR_PRG_REG(idx));
writeb(0, mtk_nor->base + MTK_NOR_PRG_REG(idx));
idx--;
}
ret = mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_PRG_CMD);
ret = mtk_nor_execute_cmd(mtk_nor, MTK_NOR_PRG_CMD);
if (ret)
return ret;
@ -195,20 +195,20 @@ static int mt8173_nor_do_tx_rx(struct mt8173_nor *mt8173_nor, u8 op,
/* read out RX data */
for (i = 0; i < rxlen; i++, idx--)
rx[i] = readb(mt8173_nor->base + MTK_NOR_SHREG(idx));
rx[i] = readb(mtk_nor->base + MTK_NOR_SHREG(idx));
return 0;
}
/* Do a WRSR (Write Status Register) command */
static int mt8173_nor_wr_sr(struct mt8173_nor *mt8173_nor, u8 sr)
static int mtk_nor_wr_sr(struct mtk_nor *mtk_nor, u8 sr)
{
writeb(sr, mt8173_nor->base + MTK_NOR_PRGDATA5_REG);
writeb(8, mt8173_nor->base + MTK_NOR_CNT_REG);
return mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_WRSR_CMD);
writeb(sr, mtk_nor->base + MTK_NOR_PRGDATA5_REG);
writeb(8, mtk_nor->base + MTK_NOR_CNT_REG);
return mtk_nor_execute_cmd(mtk_nor, MTK_NOR_WRSR_CMD);
}
static int mt8173_nor_write_buffer_enable(struct mt8173_nor *mt8173_nor)
static int mtk_nor_write_buffer_enable(struct mtk_nor *mtk_nor)
{
u8 reg;
@ -216,27 +216,27 @@ static int mt8173_nor_write_buffer_enable(struct mt8173_nor *mt8173_nor)
* 0: pre-fetch buffer used for read
* 1: pre-fetch buffer used for page program
*/
writel(MTK_NOR_WR_BUF_ENABLE, mt8173_nor->base + MTK_NOR_CFG2_REG);
return readb_poll_timeout(mt8173_nor->base + MTK_NOR_CFG2_REG, reg,
writel(MTK_NOR_WR_BUF_ENABLE, mtk_nor->base + MTK_NOR_CFG2_REG);
return readb_poll_timeout(mtk_nor->base + MTK_NOR_CFG2_REG, reg,
0x01 == (reg & 0x01), 100, 10000);
}
static int mt8173_nor_write_buffer_disable(struct mt8173_nor *mt8173_nor)
static int mtk_nor_write_buffer_disable(struct mtk_nor *mtk_nor)
{
u8 reg;
writel(MTK_NOR_WR_BUF_DISABLE, mt8173_nor->base + MTK_NOR_CFG2_REG);
return readb_poll_timeout(mt8173_nor->base + MTK_NOR_CFG2_REG, reg,
writel(MTK_NOR_WR_BUF_DISABLE, mtk_nor->base + MTK_NOR_CFG2_REG);
return readb_poll_timeout(mtk_nor->base + MTK_NOR_CFG2_REG, reg,
MTK_NOR_WR_BUF_DISABLE == (reg & 0x1), 100,
10000);
}
static void mt8173_nor_set_addr_width(struct mt8173_nor *mt8173_nor)
static void mtk_nor_set_addr_width(struct mtk_nor *mtk_nor)
{
u8 val;
struct spi_nor *nor = &mt8173_nor->nor;
struct spi_nor *nor = &mtk_nor->nor;
val = readb(mt8173_nor->base + MTK_NOR_DUAL_REG);
val = readb(mtk_nor->base + MTK_NOR_DUAL_REG);
switch (nor->addr_width) {
case 3:
@ -246,115 +246,115 @@ static void mt8173_nor_set_addr_width(struct mt8173_nor *mt8173_nor)
val |= MTK_NOR_4B_ADDR_EN;
break;
default:
dev_warn(mt8173_nor->dev, "Unexpected address width %u.\n",
dev_warn(mtk_nor->dev, "Unexpected address width %u.\n",
nor->addr_width);
break;
}
writeb(val, mt8173_nor->base + MTK_NOR_DUAL_REG);
writeb(val, mtk_nor->base + MTK_NOR_DUAL_REG);
}
static void mt8173_nor_set_addr(struct mt8173_nor *mt8173_nor, u32 addr)
static void mtk_nor_set_addr(struct mtk_nor *mtk_nor, u32 addr)
{
int i;
mt8173_nor_set_addr_width(mt8173_nor);
mtk_nor_set_addr_width(mtk_nor);
for (i = 0; i < 3; i++) {
writeb(addr & 0xff, mt8173_nor->base + MTK_NOR_RADR0_REG + i * 4);
writeb(addr & 0xff, mtk_nor->base + MTK_NOR_RADR0_REG + i * 4);
addr >>= 8;
}
/* Last register is non-contiguous */
writeb(addr & 0xff, mt8173_nor->base + MTK_NOR_RADR3_REG);
writeb(addr & 0xff, mtk_nor->base + MTK_NOR_RADR3_REG);
}
static ssize_t mt8173_nor_read(struct spi_nor *nor, loff_t from, size_t length,
u_char *buffer)
static ssize_t mtk_nor_read(struct spi_nor *nor, loff_t from, size_t length,
u_char *buffer)
{
int i, ret;
int addr = (int)from;
u8 *buf = (u8 *)buffer;
struct mt8173_nor *mt8173_nor = nor->priv;
struct mtk_nor *mtk_nor = nor->priv;
/* set mode for fast read, dual read or quad read */
mt8173_nor_set_read_mode(mt8173_nor);
mt8173_nor_set_addr(mt8173_nor, addr);
mtk_nor_set_read_mode(mtk_nor);
mtk_nor_set_addr(mtk_nor, addr);
for (i = 0; i < length; i++) {
ret = mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_PIO_READ_CMD);
ret = mtk_nor_execute_cmd(mtk_nor, MTK_NOR_PIO_READ_CMD);
if (ret < 0)
return ret;
buf[i] = readb(mt8173_nor->base + MTK_NOR_RDATA_REG);
buf[i] = readb(mtk_nor->base + MTK_NOR_RDATA_REG);
}
return length;
}
static int mt8173_nor_write_single_byte(struct mt8173_nor *mt8173_nor,
int addr, int length, u8 *data)
static int mtk_nor_write_single_byte(struct mtk_nor *mtk_nor,
int addr, int length, u8 *data)
{
int i, ret;
mt8173_nor_set_addr(mt8173_nor, addr);
mtk_nor_set_addr(mtk_nor, addr);
for (i = 0; i < length; i++) {
writeb(*data++, mt8173_nor->base + MTK_NOR_WDATA_REG);
ret = mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_PIO_WR_CMD);
writeb(*data++, mtk_nor->base + MTK_NOR_WDATA_REG);
ret = mtk_nor_execute_cmd(mtk_nor, MTK_NOR_PIO_WR_CMD);
if (ret < 0)
return ret;
}
return 0;
}
static int mt8173_nor_write_buffer(struct mt8173_nor *mt8173_nor, int addr,
const u8 *buf)
static int mtk_nor_write_buffer(struct mtk_nor *mtk_nor, int addr,
const u8 *buf)
{
int i, bufidx, data;
mt8173_nor_set_addr(mt8173_nor, addr);
mtk_nor_set_addr(mtk_nor, addr);
bufidx = 0;
for (i = 0; i < SFLASH_WRBUF_SIZE; i += 4) {
data = buf[bufidx + 3]<<24 | buf[bufidx + 2]<<16 |
buf[bufidx + 1]<<8 | buf[bufidx];
bufidx += 4;
writel(data, mt8173_nor->base + MTK_NOR_PP_DATA_REG);
writel(data, mtk_nor->base + MTK_NOR_PP_DATA_REG);
}
return mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_WR_CMD);
return mtk_nor_execute_cmd(mtk_nor, MTK_NOR_WR_CMD);
}
static ssize_t mt8173_nor_write(struct spi_nor *nor, loff_t to, size_t len,
const u_char *buf)
static ssize_t mtk_nor_write(struct spi_nor *nor, loff_t to, size_t len,
const u_char *buf)
{
int ret;
struct mt8173_nor *mt8173_nor = nor->priv;
struct mtk_nor *mtk_nor = nor->priv;
size_t i;
ret = mt8173_nor_write_buffer_enable(mt8173_nor);
ret = mtk_nor_write_buffer_enable(mtk_nor);
if (ret < 0) {
dev_warn(mt8173_nor->dev, "write buffer enable failed!\n");
dev_warn(mtk_nor->dev, "write buffer enable failed!\n");
return ret;
}
for (i = 0; i + SFLASH_WRBUF_SIZE <= len; i += SFLASH_WRBUF_SIZE) {
ret = mt8173_nor_write_buffer(mt8173_nor, to, buf);
ret = mtk_nor_write_buffer(mtk_nor, to, buf);
if (ret < 0) {
dev_err(mt8173_nor->dev, "write buffer failed!\n");
dev_err(mtk_nor->dev, "write buffer failed!\n");
return ret;
}
to += SFLASH_WRBUF_SIZE;
buf += SFLASH_WRBUF_SIZE;
}
ret = mt8173_nor_write_buffer_disable(mt8173_nor);
ret = mtk_nor_write_buffer_disable(mtk_nor);
if (ret < 0) {
dev_warn(mt8173_nor->dev, "write buffer disable failed!\n");
dev_warn(mtk_nor->dev, "write buffer disable failed!\n");
return ret;
}
if (i < len) {
ret = mt8173_nor_write_single_byte(mt8173_nor, to,
(int)(len - i), (u8 *)buf);
ret = mtk_nor_write_single_byte(mtk_nor, to,
(int)(len - i), (u8 *)buf);
if (ret < 0) {
dev_err(mt8173_nor->dev, "write single byte failed!\n");
dev_err(mtk_nor->dev, "write single byte failed!\n");
return ret;
}
}
@ -362,72 +362,72 @@ static ssize_t mt8173_nor_write(struct spi_nor *nor, loff_t to, size_t len,
return len;
}
static int mt8173_nor_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
static int mtk_nor_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
{
int ret;
struct mt8173_nor *mt8173_nor = nor->priv;
struct mtk_nor *mtk_nor = nor->priv;
switch (opcode) {
case SPINOR_OP_RDSR:
ret = mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_RDSR_CMD);
ret = mtk_nor_execute_cmd(mtk_nor, MTK_NOR_RDSR_CMD);
if (ret < 0)
return ret;
if (len == 1)
*buf = readb(mt8173_nor->base + MTK_NOR_RDSR_REG);
*buf = readb(mtk_nor->base + MTK_NOR_RDSR_REG);
else
dev_err(mt8173_nor->dev, "len should be 1 for read status!\n");
dev_err(mtk_nor->dev, "len should be 1 for read status!\n");
break;
default:
ret = mt8173_nor_do_tx_rx(mt8173_nor, opcode, NULL, 0, buf, len);
ret = mtk_nor_do_tx_rx(mtk_nor, opcode, NULL, 0, buf, len);
break;
}
return ret;
}
static int mt8173_nor_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf,
int len)
static int mtk_nor_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf,
int len)
{
int ret;
struct mt8173_nor *mt8173_nor = nor->priv;
struct mtk_nor *mtk_nor = nor->priv;
switch (opcode) {
case SPINOR_OP_WRSR:
/* We only handle 1 byte */
ret = mt8173_nor_wr_sr(mt8173_nor, *buf);
ret = mtk_nor_wr_sr(mtk_nor, *buf);
break;
default:
ret = mt8173_nor_do_tx_rx(mt8173_nor, opcode, buf, len, NULL, 0);
ret = mtk_nor_do_tx_rx(mtk_nor, opcode, buf, len, NULL, 0);
if (ret)
dev_warn(mt8173_nor->dev, "write reg failure!\n");
dev_warn(mtk_nor->dev, "write reg failure!\n");
break;
}
return ret;
}
static void mt8173_nor_disable_clk(struct mt8173_nor *mt8173_nor)
static void mtk_nor_disable_clk(struct mtk_nor *mtk_nor)
{
clk_disable_unprepare(mt8173_nor->spi_clk);
clk_disable_unprepare(mt8173_nor->nor_clk);
clk_disable_unprepare(mtk_nor->spi_clk);
clk_disable_unprepare(mtk_nor->nor_clk);
}
static int mt8173_nor_enable_clk(struct mt8173_nor *mt8173_nor)
static int mtk_nor_enable_clk(struct mtk_nor *mtk_nor)
{
int ret;
ret = clk_prepare_enable(mt8173_nor->spi_clk);
ret = clk_prepare_enable(mtk_nor->spi_clk);
if (ret)
return ret;
ret = clk_prepare_enable(mt8173_nor->nor_clk);
ret = clk_prepare_enable(mtk_nor->nor_clk);
if (ret) {
clk_disable_unprepare(mt8173_nor->spi_clk);
clk_disable_unprepare(mtk_nor->spi_clk);
return ret;
}
return 0;
}
static int mtk_nor_init(struct mt8173_nor *mt8173_nor,
static int mtk_nor_init(struct mtk_nor *mtk_nor,
struct device_node *flash_node)
{
const struct spi_nor_hwcaps hwcaps = {
@ -439,18 +439,18 @@ static int mtk_nor_init(struct mt8173_nor *mt8173_nor,
struct spi_nor *nor;
/* initialize controller to accept commands */
writel(MTK_NOR_ENABLE_SF_CMD, mt8173_nor->base + MTK_NOR_WRPROT_REG);
writel(MTK_NOR_ENABLE_SF_CMD, mtk_nor->base + MTK_NOR_WRPROT_REG);
nor = &mt8173_nor->nor;
nor->dev = mt8173_nor->dev;
nor->priv = mt8173_nor;
nor = &mtk_nor->nor;
nor->dev = mtk_nor->dev;
nor->priv = mtk_nor;
spi_nor_set_flash_node(nor, flash_node);
/* fill the hooks to spi nor */
nor->read = mt8173_nor_read;
nor->read_reg = mt8173_nor_read_reg;
nor->write = mt8173_nor_write;
nor->write_reg = mt8173_nor_write_reg;
nor->read = mtk_nor_read;
nor->read_reg = mtk_nor_read_reg;
nor->write = mtk_nor_write;
nor->write_reg = mtk_nor_write_reg;
nor->mtd.name = "mtk_nor";
/* initialized with NULL */
ret = spi_nor_scan(nor, NULL, &hwcaps);
@ -465,34 +465,34 @@ static int mtk_nor_drv_probe(struct platform_device *pdev)
struct device_node *flash_np;
struct resource *res;
int ret;
struct mt8173_nor *mt8173_nor;
struct mtk_nor *mtk_nor;
if (!pdev->dev.of_node) {
dev_err(&pdev->dev, "No DT found\n");
return -EINVAL;
}
mt8173_nor = devm_kzalloc(&pdev->dev, sizeof(*mt8173_nor), GFP_KERNEL);
if (!mt8173_nor)
mtk_nor = devm_kzalloc(&pdev->dev, sizeof(*mtk_nor), GFP_KERNEL);
if (!mtk_nor)
return -ENOMEM;
platform_set_drvdata(pdev, mt8173_nor);
platform_set_drvdata(pdev, mtk_nor);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mt8173_nor->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(mt8173_nor->base))
return PTR_ERR(mt8173_nor->base);
mtk_nor->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(mtk_nor->base))
return PTR_ERR(mtk_nor->base);
mt8173_nor->spi_clk = devm_clk_get(&pdev->dev, "spi");
if (IS_ERR(mt8173_nor->spi_clk))
return PTR_ERR(mt8173_nor->spi_clk);
mtk_nor->spi_clk = devm_clk_get(&pdev->dev, "spi");
if (IS_ERR(mtk_nor->spi_clk))
return PTR_ERR(mtk_nor->spi_clk);
mt8173_nor->nor_clk = devm_clk_get(&pdev->dev, "sf");
if (IS_ERR(mt8173_nor->nor_clk))
return PTR_ERR(mt8173_nor->nor_clk);
mtk_nor->nor_clk = devm_clk_get(&pdev->dev, "sf");
if (IS_ERR(mtk_nor->nor_clk))
return PTR_ERR(mtk_nor->nor_clk);
mt8173_nor->dev = &pdev->dev;
mtk_nor->dev = &pdev->dev;
ret = mt8173_nor_enable_clk(mt8173_nor);
ret = mtk_nor_enable_clk(mtk_nor);
if (ret)
return ret;
@ -503,20 +503,20 @@ static int mtk_nor_drv_probe(struct platform_device *pdev)
ret = -ENODEV;
goto nor_free;
}
ret = mtk_nor_init(mt8173_nor, flash_np);
ret = mtk_nor_init(mtk_nor, flash_np);
nor_free:
if (ret)
mt8173_nor_disable_clk(mt8173_nor);
mtk_nor_disable_clk(mtk_nor);
return ret;
}
static int mtk_nor_drv_remove(struct platform_device *pdev)
{
struct mt8173_nor *mt8173_nor = platform_get_drvdata(pdev);
struct mtk_nor *mtk_nor = platform_get_drvdata(pdev);
mt8173_nor_disable_clk(mt8173_nor);
mtk_nor_disable_clk(mtk_nor);
return 0;
}
@ -524,18 +524,18 @@ static int mtk_nor_drv_remove(struct platform_device *pdev)
#ifdef CONFIG_PM_SLEEP
static int mtk_nor_suspend(struct device *dev)
{
struct mt8173_nor *mt8173_nor = dev_get_drvdata(dev);
struct mtk_nor *mtk_nor = dev_get_drvdata(dev);
mt8173_nor_disable_clk(mt8173_nor);
mtk_nor_disable_clk(mtk_nor);
return 0;
}
static int mtk_nor_resume(struct device *dev)
{
struct mt8173_nor *mt8173_nor = dev_get_drvdata(dev);
struct mtk_nor *mtk_nor = dev_get_drvdata(dev);
return mt8173_nor_enable_clk(mt8173_nor);
return mtk_nor_enable_clk(mtk_nor);
}
static const struct dev_pm_ops mtk_nor_dev_pm_ops = {


@ -330,8 +330,22 @@ static inline int spi_nor_fsr_ready(struct spi_nor *nor)
int fsr = read_fsr(nor);
if (fsr < 0)
return fsr;
else
return fsr & FSR_READY;
if (fsr & (FSR_E_ERR | FSR_P_ERR)) {
if (fsr & FSR_E_ERR)
dev_err(nor->dev, "Erase operation failed.\n");
else
dev_err(nor->dev, "Program operation failed.\n");
if (fsr & FSR_PT_ERR)
dev_err(nor->dev,
"Attempted to modify a protected sector.\n");
nor->write_reg(nor, SPINOR_OP_CLFSR, NULL, 0);
return -EIO;
}
return fsr & FSR_READY;
}
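With the error bits checked here, an erase or program failure now surfaces as -EIO from the ready path instead of masquerading as an endlessly busy device. A minimal sketch of how a ready-polling loop consumes that value (illustrative only; wait_till_ready_sketch() and its timeout handling are simplified stand-ins for the real spi_nor_wait_till_ready() logic):

/* Simplified polling sketch: propagate -EIO reported via the FSR. */
static int wait_till_ready_sketch(struct spi_nor *nor, unsigned long timeout_jiffies)
{
	unsigned long deadline = jiffies + timeout_jiffies;

	while (time_before(jiffies, deadline)) {
		int ret = spi_nor_ready(nor); /* reaches spi_nor_fsr_ready() on FSR-capable parts */

		if (ret < 0)
			return ret;	/* FSR flagged an erase/program failure */
		if (ret)
			return 0;	/* device ready */
		cond_resched();
	}

	return -ETIMEDOUT;
}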
static int spi_nor_ready(struct spi_nor *nor)
@ -552,6 +566,27 @@ erase_err:
return ret;
}
/* Write status register and ensure bits in mask match written values */
static int write_sr_and_check(struct spi_nor *nor, u8 status_new, u8 mask)
{
int ret;
write_enable(nor);
ret = write_sr(nor, status_new);
if (ret)
return ret;
ret = spi_nor_wait_till_ready(nor);
if (ret)
return ret;
ret = read_sr(nor);
if (ret < 0)
return ret;
return ((ret & mask) != (status_new & mask)) ? -EIO : 0;
}
static void stm_get_locked_range(struct spi_nor *nor, u8 sr, loff_t *ofs,
uint64_t *len)
{
@ -650,7 +685,6 @@ static int stm_lock(struct spi_nor *nor, loff_t ofs, uint64_t len)
loff_t lock_len;
bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB;
bool use_top;
int ret;
status_old = read_sr(nor);
if (status_old < 0)
@ -714,11 +748,7 @@ static int stm_lock(struct spi_nor *nor, loff_t ofs, uint64_t len)
if ((status_new & mask) < (status_old & mask))
return -EINVAL;
write_enable(nor);
ret = write_sr(nor, status_new);
if (ret)
return ret;
return spi_nor_wait_till_ready(nor);
return write_sr_and_check(nor, status_new, mask);
}
/*
@ -735,7 +765,6 @@ static int stm_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
loff_t lock_len;
bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB;
bool use_top;
int ret;
status_old = read_sr(nor);
if (status_old < 0)
@ -802,11 +831,7 @@ static int stm_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
if ((status_new & mask) > (status_old & mask))
return -EINVAL;
write_enable(nor);
ret = write_sr(nor, status_new);
if (ret)
return ret;
return spi_nor_wait_till_ready(nor);
return write_sr_and_check(nor, status_new, mask);
}
/*
@ -1020,7 +1045,13 @@ static const struct flash_info spi_nor_ids[] = {
{ "640s33b", INFO(0x898913, 0, 64 * 1024, 128, 0) },
/* ISSI */
{ "is25cd512", INFO(0x7f9d20, 0, 32 * 1024, 2, SECT_4K) },
{ "is25cd512", INFO(0x7f9d20, 0, 32 * 1024, 2, SECT_4K) },
{ "is25lq040b", INFO(0x9d4013, 0, 64 * 1024, 8,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "is25lp080d", INFO(0x9d6014, 0, 64 * 1024, 16,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "is25lp128", INFO(0x9d6018, 0, 64 * 1024, 256,
SECT_4K | SPI_NOR_DUAL_READ) },
/* Macronix */
{ "mx25l512e", INFO(0xc22010, 0, 64 * 1024, 1, SECT_4K) },
@ -1065,7 +1096,7 @@ static const struct flash_info spi_nor_ids[] = {
{ "pm25lv010", INFO(0, 0, 32 * 1024, 4, SECT_4K_PMC) },
{ "pm25lq032", INFO(0x7f9d46, 0, 64 * 1024, 64, SECT_4K) },
/* Spansion -- single (large) sector size only, at least
/* Spansion/Cypress -- single (large) sector size only, at least
* for the chips listed here (without boot sectors).
*/
{ "s25sl032p", INFO(0x010215, 0x4d00, 64 * 1024, 64, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
@ -1094,6 +1125,8 @@ static const struct flash_info spi_nor_ids[] = {
{ "s25fl204k", INFO(0x014013, 0, 64 * 1024, 8, SECT_4K | SPI_NOR_DUAL_READ) },
{ "s25fl208k", INFO(0x014014, 0, 64 * 1024, 16, SECT_4K | SPI_NOR_DUAL_READ) },
{ "s25fl064l", INFO(0x016017, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },
{ "s25fl128l", INFO(0x016018, 0, 64 * 1024, 256, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },
{ "s25fl256l", INFO(0x016019, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },
/* SST -- large erase sizes are "overlays", "sectors" are 4K */
{ "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) },
@ -2713,6 +2746,16 @@ static void spi_nor_resume(struct mtd_info *mtd)
dev_err(dev, "resume() failed\n");
}
void spi_nor_restore(struct spi_nor *nor)
{
/* restore the addressing mode */
if ((nor->addr_width == 4) &&
(JEDEC_MFR(nor->info) != SNOR_MFR_SPANSION) &&
!(nor->info->flags & SPI_NOR_4B_OPCODES))
set_4byte(nor, nor->info, 0);
}
EXPORT_SYMBOL_GPL(spi_nor_restore);
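spi_nor_restore() is meant to be called by SPI NOR controller drivers on the way out, so that a flash switched to 4-byte addressing is handed back to the BootROM in its default 3-byte mode. A hypothetical remove path, assuming an imaginary controller driver (the example_qspi names are illustrative, not an existing driver):

#include <linux/clk.h>
#include <linux/mtd/spi-nor.h>
#include <linux/platform_device.h>

struct example_qspi {		/* illustrative controller state */
	struct spi_nor nor;
	struct clk *clk;
};

static int example_qspi_remove(struct platform_device *pdev)
{
	struct example_qspi *q = platform_get_drvdata(pdev);

	/* put the flash back in 3-byte mode before reboot/kexec */
	spi_nor_restore(&q->nor);

	mtd_device_unregister(&q->nor.mtd);
	clk_disable_unprepare(q->clk);
	return 0;
}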
int spi_nor_scan(struct spi_nor *nor, const char *name,
const struct spi_nor_hwcaps *hwcaps)
{


@ -151,7 +151,7 @@ static int read_page(int log)
memcpy(&oldstats, &mtd->ecc_stats, sizeof(oldstats));
err = mtd_read(mtd, offset, mtd->writesize, &read, rbuffer);
if (err == -EUCLEAN)
if (!err || err == -EUCLEAN)
err = mtd->ecc_stats.corrected - oldstats.corrected;
if (err < 0 || read != mtd->writesize) {


@ -193,6 +193,9 @@ static int verify_eraseblock(int ebnum)
ops.datbuf = NULL;
ops.oobbuf = readbuf;
err = mtd_read_oob(mtd, addr, &ops);
if (mtd_is_bitflip(err))
err = 0;
if (err || ops.oobretlen != use_len) {
pr_err("error: readoob failed at %#llx\n",
(long long)addr);
@ -227,6 +230,9 @@ static int verify_eraseblock(int ebnum)
ops.datbuf = NULL;
ops.oobbuf = readbuf;
err = mtd_read_oob(mtd, addr, &ops);
if (mtd_is_bitflip(err))
err = 0;
if (err || ops.oobretlen != mtd->oobavail) {
pr_err("error: readoob failed at %#llx\n",
(long long)addr);
@ -286,6 +292,9 @@ static int verify_eraseblock_in_one_go(int ebnum)
/* read entire block's OOB at one go */
err = mtd_read_oob(mtd, addr, &ops);
if (mtd_is_bitflip(err))
err = 0;
if (err || ops.oobretlen != len) {
pr_err("error: readoob failed at %#llx\n",
(long long)addr);
@ -527,6 +536,9 @@ static int __init mtd_oobtest_init(void)
pr_info("attempting to start read past end of OOB\n");
pr_info("an error is expected...\n");
err = mtd_read_oob(mtd, addr0, &ops);
if (mtd_is_bitflip(err))
err = 0;
if (err) {
pr_info("error occurred as expected\n");
err = 0;
@ -571,6 +583,9 @@ static int __init mtd_oobtest_init(void)
pr_info("attempting to read past end of device\n");
pr_info("an error is expected...\n");
err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops);
if (mtd_is_bitflip(err))
err = 0;
if (err) {
pr_info("error occurred as expected\n");
err = 0;
@ -615,6 +630,9 @@ static int __init mtd_oobtest_init(void)
pr_info("attempting to read past end of device\n");
pr_info("an error is expected...\n");
err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops);
if (mtd_is_bitflip(err))
err = 0;
if (err) {
pr_info("error occurred as expected\n");
err = 0;
@ -684,6 +702,9 @@ static int __init mtd_oobtest_init(void)
ops.datbuf = NULL;
ops.oobbuf = readbuf;
err = mtd_read_oob(mtd, addr, &ops);
if (mtd_is_bitflip(err))
err = 0;
if (err)
goto out;
if (memcmpshow(addr, readbuf, writebuf,


@ -637,8 +637,7 @@ static int spinand_write_page_hwecc(struct mtd_info *mtd,
int eccsteps = chip->ecc.steps;
enable_hw_ecc = 1;
chip->write_buf(mtd, p, eccsize * eccsteps);
return 0;
return nand_prog_page_op(chip, page, 0, p, eccsize * eccsteps);
}
static int spinand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
@ -653,7 +652,7 @@ static int spinand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
enable_read_hw_ecc = 1;
chip->read_buf(mtd, p, eccsize * eccsteps);
nand_read_page_op(chip, page, 0, p, eccsize * eccsteps);
if (oob_required)
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);


@ -270,75 +270,67 @@ void map_destroy(struct mtd_info *mtd);
#define INVALIDATE_CACHED_RANGE(map, from, size) \
do { if (map->inval_cache) map->inval_cache(map, from, size); } while (0)
#define map_word_equal(map, val1, val2) \
({ \
int i, ret = 1; \
for (i = 0; i < map_words(map); i++) \
if ((val1).x[i] != (val2).x[i]) { \
ret = 0; \
break; \
} \
ret; \
})
static inline int map_word_equal(struct map_info *map, map_word val1, map_word val2)
{
int i;
#define map_word_and(map, val1, val2) \
({ \
map_word r; \
int i; \
for (i = 0; i < map_words(map); i++) \
r.x[i] = (val1).x[i] & (val2).x[i]; \
r; \
})
for (i = 0; i < map_words(map); i++) {
if (val1.x[i] != val2.x[i])
return 0;
}
#define map_word_clr(map, val1, val2) \
({ \
map_word r; \
int i; \
for (i = 0; i < map_words(map); i++) \
r.x[i] = (val1).x[i] & ~(val2).x[i]; \
r; \
})
return 1;
}
#define map_word_or(map, val1, val2) \
({ \
map_word r; \
int i; \
for (i = 0; i < map_words(map); i++) \
r.x[i] = (val1).x[i] | (val2).x[i]; \
r; \
})
static inline map_word map_word_and(struct map_info *map, map_word val1, map_word val2)
{
map_word r;
int i;
#define map_word_andequal(map, val1, val2, val3) \
({ \
int i, ret = 1; \
for (i = 0; i < map_words(map); i++) { \
if (((val1).x[i] & (val2).x[i]) != (val2).x[i]) { \
ret = 0; \
break; \
} \
} \
ret; \
})
for (i = 0; i < map_words(map); i++)
r.x[i] = val1.x[i] & val2.x[i];
return r;
}
static inline map_word map_word_clr(struct map_info *map, map_word val1, map_word val2)
{
map_word r;
int i;
for (i = 0; i < map_words(map); i++)
r.x[i] = val1.x[i] & ~val2.x[i];
return r;
}
static inline map_word map_word_or(struct map_info *map, map_word val1, map_word val2)
{
map_word r;
int i;
for (i = 0; i < map_words(map); i++)
r.x[i] = val1.x[i] | val2.x[i];
return r;
}
static inline int map_word_andequal(struct map_info *map, map_word val1, map_word val2, map_word val3)
{
int i;
for (i = 0; i < map_words(map); i++) {
if ((val1.x[i] & val2.x[i]) != val3.x[i])
return 0;
}
return 1;
}
static inline int map_word_bitsset(struct map_info *map, map_word val1, map_word val2)
{
int i;
for (i = 0; i < map_words(map); i++) {
if (val1.x[i] & val2.x[i])
return 1;
}
return 0;
}
#define map_word_bitsset(map, val1, val2) \
({ \
int i, ret = 0; \
for (i = 0; i < map_words(map); i++) { \
if ((val1).x[i] & (val2).x[i]) { \
ret = 1; \
break; \
} \
} \
ret; \
})
static inline map_word map_word_load(struct map_info *map, const void *ptr)
{


@ -489,6 +489,34 @@ static inline uint32_t mtd_mod_by_eb(uint64_t sz, struct mtd_info *mtd)
return do_div(sz, mtd->erasesize);
}
/**
* mtd_align_erase_req - Adjust an erase request to align things on eraseblock
* boundaries.
* @mtd: the MTD device this erase request applies on
* @req: the erase request to adjust
*
* This function will adjust @req->addr and @req->len to align them on
* @mtd->erasesize. Of course we expect @mtd->erasesize to be != 0.
*/
static inline void mtd_align_erase_req(struct mtd_info *mtd,
struct erase_info *req)
{
u32 mod;
if (WARN_ON(!mtd->erasesize))
return;
mod = mtd_mod_by_eb(req->addr, mtd);
if (mod) {
req->addr -= mod;
req->len += mod;
}
mod = mtd_mod_by_eb(req->addr + req->len, mtd);
if (mod)
req->len += mtd->erasesize - mod;
}
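A hypothetical caller sketch: an MTD user can pre-align an arbitrary byte range with this helper before issuing the erase, so the driver only ever sees eraseblock-aligned requests (erase_range_sketch() is illustrative):

/* Illustrative: round a byte range out to eraseblock boundaries, then erase it. */
static int erase_range_sketch(struct mtd_info *mtd, loff_t ofs, size_t len)
{
	struct erase_info req = {
		.mtd  = mtd,
		.addr = ofs,
		.len  = len,
	};

	mtd_align_erase_req(mtd, &req);	/* grow addr/len to erasesize multiples */
	return mtd_erase(mtd, &req);
}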
static inline uint32_t mtd_div_by_ws(uint64_t sz, struct mtd_info *mtd)
{
if (mtd->writesize_shift)


@ -133,12 +133,6 @@ enum nand_ecc_algo {
*/
#define NAND_ECC_GENERIC_ERASED_CHECK BIT(0)
#define NAND_ECC_MAXIMIZE BIT(1)
/*
* If your controller already sends the required NAND commands when
* reading or writing a page, then the framework is not supposed to
* send READ0 and SEQIN/PAGEPROG respectively.
*/
#define NAND_ECC_CUSTOM_PAGE_ACCESS BIT(2)
/* Bit mask for flags passed to do_nand_read_ecc */
#define NAND_GET_DEVICE 0x80
@ -191,11 +185,6 @@ enum nand_ecc_algo {
/* Non chip related options */
/* This option skips the bbt scan during initialization. */
#define NAND_SKIP_BBTSCAN 0x00010000
/*
* This option is defined if the board driver allocates its own buffers
* (e.g. because it needs them DMA-coherent).
*/
#define NAND_OWN_BUFFERS 0x00020000
/* Chip may not exist, so silence any errors in scan */
#define NAND_SCAN_SILENT_NODEV 0x00040000
/*
@ -525,6 +514,8 @@ static const struct nand_ecc_caps __name = { \
* @postpad: padding information for syndrome based ECC generators
* @options: ECC specific options (see NAND_ECC_XXX flags defined above)
* @priv: pointer to private ECC control data
* @calc_buf: buffer for calculated ECC, size is oobsize.
* @code_buf: buffer for ECC read from flash, size is oobsize.
* @hwctl: function to control hardware ECC generator. Must only
* be provided if a hardware ECC is available
* @calculate: function for ECC calculation or readback from ECC hardware
@ -575,6 +566,8 @@ struct nand_ecc_ctrl {
int postpad;
unsigned int options;
void *priv;
u8 *calc_buf;
u8 *code_buf;
void (*hwctl)(struct mtd_info *mtd, int mode);
int (*calculate)(struct mtd_info *mtd, const uint8_t *dat,
uint8_t *ecc_code);
@ -602,26 +595,6 @@ struct nand_ecc_ctrl {
int page);
};
static inline int nand_standard_page_accessors(struct nand_ecc_ctrl *ecc)
{
return !(ecc->options & NAND_ECC_CUSTOM_PAGE_ACCESS);
}
/**
* struct nand_buffers - buffer structure for read/write
* @ecccalc: buffer pointer for calculated ECC, size is oobsize.
* @ecccode: buffer pointer for ECC read from flash, size is oobsize.
* @databuf: buffer pointer for data, size is (page size + oobsize).
*
* Do not change the order of buffers. databuf and oobrbuf must be in
* consecutive order.
*/
struct nand_buffers {
uint8_t *ecccalc;
uint8_t *ecccode;
uint8_t *databuf;
};
/**
* struct nand_sdr_timings - SDR NAND chip timings
*
@ -761,6 +734,350 @@ struct nand_manufacturer_ops {
void (*cleanup)(struct nand_chip *chip);
};
/**
* struct nand_op_cmd_instr - Definition of a command instruction
* @opcode: the command to issue in one cycle
*/
struct nand_op_cmd_instr {
u8 opcode;
};
/**
* struct nand_op_addr_instr - Definition of an address instruction
* @naddrs: length of the @addrs array
* @addrs: array containing the address cycles to issue
*/
struct nand_op_addr_instr {
unsigned int naddrs;
const u8 *addrs;
};
/**
* struct nand_op_data_instr - Definition of a data instruction
* @len: number of data bytes to move
* @in: buffer to fill when reading from the NAND chip
* @out: buffer to read from when writing to the NAND chip
* @force_8bit: force 8-bit access
*
* Please note that "in" and "out" are inverted from the ONFI specification
* and are from the controller perspective, so an "in" is a read from the NAND
* chip while an "out" is a write to the NAND chip.
*/
struct nand_op_data_instr {
unsigned int len;
union {
void *in;
const void *out;
} buf;
bool force_8bit;
};
/**
* struct nand_op_waitrdy_instr - Definition of a wait ready instruction
* @timeout_ms: maximum delay while waiting for the ready/busy pin in ms
*/
struct nand_op_waitrdy_instr {
unsigned int timeout_ms;
};
/**
* enum nand_op_instr_type - Definition of all instruction types
* @NAND_OP_CMD_INSTR: command instruction
* @NAND_OP_ADDR_INSTR: address instruction
* @NAND_OP_DATA_IN_INSTR: data in instruction
* @NAND_OP_DATA_OUT_INSTR: data out instruction
* @NAND_OP_WAITRDY_INSTR: wait ready instruction
*/
enum nand_op_instr_type {
NAND_OP_CMD_INSTR,
NAND_OP_ADDR_INSTR,
NAND_OP_DATA_IN_INSTR,
NAND_OP_DATA_OUT_INSTR,
NAND_OP_WAITRDY_INSTR,
};
/**
* struct nand_op_instr - Instruction object
* @type: the instruction type
* @cmd/@addr/@data/@waitrdy: extra data associated with the instruction.
* You'll have to use the appropriate element
* depending on @type
* @delay_ns: delay the controller should apply after the instruction has been
* issued on the bus. Most modern controllers have internal timing
* control logic, and in this case, the controller driver can ignore
* this field.
*/
struct nand_op_instr {
enum nand_op_instr_type type;
union {
struct nand_op_cmd_instr cmd;
struct nand_op_addr_instr addr;
struct nand_op_data_instr data;
struct nand_op_waitrdy_instr waitrdy;
} ctx;
unsigned int delay_ns;
};
/*
* Special handling must be done for the WAITRDY timeout parameter as it usually
* is either tPROG (after a prog), tR (before a read), tRST (during a reset) or
* tBERS (during an erase), all of which are u64 values that cannot be
* divided by usual kernel macros and must be handled with the special
* DIV_ROUND_UP_ULL() macro.
*/
#define __DIVIDE(dividend, divisor) ({ \
sizeof(dividend) == sizeof(u32) ? \
DIV_ROUND_UP(dividend, divisor) : \
DIV_ROUND_UP_ULL(dividend, divisor); \
})
#define PSEC_TO_NSEC(x) __DIVIDE(x, 1000)
#define PSEC_TO_MSEC(x) __DIVIDE(x, 1000000000)
#define NAND_OP_CMD(id, ns) \
{ \
.type = NAND_OP_CMD_INSTR, \
.ctx.cmd.opcode = id, \
.delay_ns = ns, \
}
#define NAND_OP_ADDR(ncycles, cycles, ns) \
{ \
.type = NAND_OP_ADDR_INSTR, \
.ctx.addr = { \
.naddrs = ncycles, \
.addrs = cycles, \
}, \
.delay_ns = ns, \
}
#define NAND_OP_DATA_IN(l, b, ns) \
{ \
.type = NAND_OP_DATA_IN_INSTR, \
.ctx.data = { \
.len = l, \
.buf.in = b, \
.force_8bit = false, \
}, \
.delay_ns = ns, \
}
#define NAND_OP_DATA_OUT(l, b, ns) \
{ \
.type = NAND_OP_DATA_OUT_INSTR, \
.ctx.data = { \
.len = l, \
.buf.out = b, \
.force_8bit = false, \
}, \
.delay_ns = ns, \
}
#define NAND_OP_8BIT_DATA_IN(l, b, ns) \
{ \
.type = NAND_OP_DATA_IN_INSTR, \
.ctx.data = { \
.len = l, \
.buf.in = b, \
.force_8bit = true, \
}, \
.delay_ns = ns, \
}
#define NAND_OP_8BIT_DATA_OUT(l, b, ns) \
{ \
.type = NAND_OP_DATA_OUT_INSTR, \
.ctx.data = { \
.len = l, \
.buf.out = b, \
.force_8bit = true, \
}, \
.delay_ns = ns, \
}
#define NAND_OP_WAIT_RDY(tout_ms, ns) \
{ \
.type = NAND_OP_WAITRDY_INSTR, \
.ctx.waitrdy.timeout_ms = tout_ms, \
.delay_ns = ns, \
}
/**
* struct nand_subop - a sub operation
* @instrs: array of instructions
* @ninstrs: length of the @instrs array
* @first_instr_start_off: offset to start from for the first instruction
* of the sub-operation
* @last_instr_end_off: offset to end at (excluded) for the last instruction
* of the sub-operation
*
* Both @first_instr_start_off and @last_instr_end_off only apply to data or
* address instructions.
*
* When an operation cannot be handled as is by the NAND controller, it will
* be split by the parser into sub-operations which will be passed to the
* controller driver.
*/
struct nand_subop {
const struct nand_op_instr *instrs;
unsigned int ninstrs;
unsigned int first_instr_start_off;
unsigned int last_instr_end_off;
};
int nand_subop_get_addr_start_off(const struct nand_subop *subop,
unsigned int op_id);
int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
unsigned int op_id);
int nand_subop_get_data_start_off(const struct nand_subop *subop,
unsigned int op_id);
int nand_subop_get_data_len(const struct nand_subop *subop,
unsigned int op_id);
/**
* struct nand_op_parser_addr_constraints - Constraints for address instructions
* @maxcycles: maximum number of address cycles the controller can issue in a
* single step
*/
struct nand_op_parser_addr_constraints {
unsigned int maxcycles;
};
/**
* struct nand_op_parser_data_constraints - Constraints for data instructions
* @maxlen: maximum data length that the controller can handle in a single step
*/
struct nand_op_parser_data_constraints {
unsigned int maxlen;
};
/**
* struct nand_op_parser_pattern_elem - One element of a pattern
* @type: the instruction type
* @optional: whether this element of the pattern is optional or mandatory
* @addr/@data: address or data constraint (number of cycles or data length)
*/
struct nand_op_parser_pattern_elem {
enum nand_op_instr_type type;
bool optional;
union {
struct nand_op_parser_addr_constraints addr;
struct nand_op_parser_data_constraints data;
} ctx;
};
#define NAND_OP_PARSER_PAT_CMD_ELEM(_opt) \
{ \
.type = NAND_OP_CMD_INSTR, \
.optional = _opt, \
}
#define NAND_OP_PARSER_PAT_ADDR_ELEM(_opt, _maxcycles) \
{ \
.type = NAND_OP_ADDR_INSTR, \
.optional = _opt, \
.ctx.addr.maxcycles = _maxcycles, \
}
#define NAND_OP_PARSER_PAT_DATA_IN_ELEM(_opt, _maxlen) \
{ \
.type = NAND_OP_DATA_IN_INSTR, \
.optional = _opt, \
.ctx.data.maxlen = _maxlen, \
}
#define NAND_OP_PARSER_PAT_DATA_OUT_ELEM(_opt, _maxlen) \
{ \
.type = NAND_OP_DATA_OUT_INSTR, \
.optional = _opt, \
.ctx.data.maxlen = _maxlen, \
}
#define NAND_OP_PARSER_PAT_WAITRDY_ELEM(_opt) \
{ \
.type = NAND_OP_WAITRDY_INSTR, \
.optional = _opt, \
}
/**
* struct nand_op_parser_pattern - NAND sub-operation pattern descriptor
* @elems: array of pattern elements
* @nelems: number of pattern elements in @elems array
* @exec: the function that will issue a sub-operation
*
* A pattern is a list of elements, each element representing one instruction
* with its constraints. The pattern itself is used by the core to match NAND
* chip operation with NAND controller operations.
* Once a match between a NAND controller operation pattern and a NAND chip
* operation (or a sub-set of a NAND operation) is found, the pattern ->exec()
* hook is called so that the controller driver can issue the operation on the
* bus.
*
* Controller drivers should declare as many patterns as they support and pass
* this list of patterns (created with the help of the following macro) to
* the nand_op_parser_exec_op() helper.
*/
struct nand_op_parser_pattern {
const struct nand_op_parser_pattern_elem *elems;
unsigned int nelems;
int (*exec)(struct nand_chip *chip, const struct nand_subop *subop);
};
#define NAND_OP_PARSER_PATTERN(_exec, ...) \
{ \
.exec = _exec, \
.elems = (struct nand_op_parser_pattern_elem[]) { __VA_ARGS__ }, \
.nelems = sizeof((struct nand_op_parser_pattern_elem[]) { __VA_ARGS__ }) / \
sizeof(struct nand_op_parser_pattern_elem), \
}
/**
* struct nand_op_parser - NAND controller operation parser descriptor
* @patterns: array of supported patterns
* @npatterns: length of the @patterns array
*
* The parser descriptor is just an array of supported patterns which will be
* iterated by nand_op_parser_exec_op() every time it tries to execute a
* NAND operation (or tries to determine if a specific operation is supported).
*
* It is worth mentioning that patterns will be tested in their declaration
* order, and the first match will be taken, so it's important to order patterns
* appropriately so that simple/inefficient patterns are placed at the end of
* the list. Usually, this is where you put single instruction patterns.
*/
struct nand_op_parser {
const struct nand_op_parser_pattern *patterns;
unsigned int npatterns;
};
#define NAND_OP_PARSER(...) \
{ \
.patterns = (struct nand_op_parser_pattern[]) { __VA_ARGS__ }, \
.npatterns = sizeof((struct nand_op_parser_pattern[]) { __VA_ARGS__ }) / \
sizeof(struct nand_op_parser_pattern), \
}
/**
* struct nand_operation - NAND operation descriptor
* @instrs: array of instructions to execute
* @ninstrs: length of the @instrs array
*
* The actual operation structure that will be passed to chip->exec_op().
*/
struct nand_operation {
const struct nand_op_instr *instrs;
unsigned int ninstrs;
};
#define NAND_OPERATION(_instrs) \
{ \
.instrs = _instrs, \
.ninstrs = ARRAY_SIZE(_instrs), \
}
int nand_op_parser_exec_op(struct nand_chip *chip,
const struct nand_op_parser *parser,
const struct nand_operation *op, bool check_only);
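Putting the pattern/parser pieces together, a controller driver describes what its hardware can sequence and delegates the splitting of operations to nand_op_parser_exec_op(). A sketch, assuming a controller that can chain one command, up to 5 address cycles, an optional wait-ready and up to 2048 bytes of data in a single hardware sequence (the example_* names and the 2048-byte limit are illustrative, not an existing driver):

static int example_exec_rd(struct nand_chip *chip, const struct nand_subop *subop);
static int example_exec_wr(struct nand_chip *chip, const struct nand_subop *subop);

static const struct nand_op_parser example_op_parser = NAND_OP_PARSER(
	NAND_OP_PARSER_PATTERN(example_exec_rd,
			       NAND_OP_PARSER_PAT_CMD_ELEM(false),
			       NAND_OP_PARSER_PAT_ADDR_ELEM(true, 5),
			       NAND_OP_PARSER_PAT_CMD_ELEM(true),
			       NAND_OP_PARSER_PAT_WAITRDY_ELEM(true),
			       NAND_OP_PARSER_PAT_DATA_IN_ELEM(true, 2048)),
	NAND_OP_PARSER_PATTERN(example_exec_wr,
			       NAND_OP_PARSER_PAT_CMD_ELEM(false),
			       NAND_OP_PARSER_PAT_ADDR_ELEM(true, 5),
			       NAND_OP_PARSER_PAT_DATA_OUT_ELEM(true, 2048),
			       NAND_OP_PARSER_PAT_CMD_ELEM(true),
			       NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)));

/* The controller's ->exec_op() hook just hands everything to the parser;
 * example_exec_rd()/example_exec_wr() (not shown) program the hardware for
 * each matched sub-operation.
 */
static int example_exec_op(struct nand_chip *chip,
			   const struct nand_operation *op, bool check_only)
{
	return nand_op_parser_exec_op(chip, &example_op_parser, op, check_only);
}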
/**
* struct nand_chip - NAND Private Flash Chip Data
* @mtd: MTD device registered to the MTD framework
@ -787,10 +1104,13 @@ struct nand_manufacturer_ops {
* commands to the chip.
* @waitfunc: [REPLACEABLE] hardware-specific function for wait on
* ready.
* @exec_op: controller-specific method to execute NAND operations.
* This method replaces ->cmdfunc(),
* ->{read,write}_{buf,byte,word}(), ->dev_ready() and
* ->waifunc().
* @setup_read_retry: [FLASHSPECIFIC] flash (vendor) specific function for
* setting the read-retry mode. Mostly needed for MLC NAND.
* @ecc: [BOARDSPECIFIC] ECC control structure
* @buffers: buffer structure for read/write
* @buf_align: minimum buffer alignment required by a platform
* @hwcontrol: platform-specific hardware control structure
* @erase: [REPLACEABLE] erase function
@ -830,6 +1150,7 @@ struct nand_manufacturer_ops {
* @numchips: [INTERN] number of physical chips
* @chipsize: [INTERN] the size of one chip for multichip arrays
* @pagemask: [INTERN] page number mask = number of (pages / chip) - 1
* @data_buf: [INTERN] buffer for data, size is (page size + oobsize).
* @pagebuf: [INTERN] holds the pagenumber which is currently in
* data_buf.
* @pagebuf_bitflips: [INTERN] holds the bitflip count for the page which is
@ -886,6 +1207,9 @@ struct nand_chip {
void (*cmdfunc)(struct mtd_info *mtd, unsigned command, int column,
int page_addr);
int(*waitfunc)(struct mtd_info *mtd, struct nand_chip *this);
int (*exec_op)(struct nand_chip *chip,
const struct nand_operation *op,
bool check_only);
int (*erase)(struct mtd_info *mtd, int page);
int (*scan_bbt)(struct mtd_info *mtd);
int (*onfi_set_features)(struct mtd_info *mtd, struct nand_chip *chip,
@ -896,7 +1220,6 @@ struct nand_chip {
int (*setup_data_interface)(struct mtd_info *mtd, int chipnr,
const struct nand_data_interface *conf);
int chip_delay;
unsigned int options;
unsigned int bbt_options;
@ -908,6 +1231,7 @@ struct nand_chip {
int numchips;
uint64_t chipsize;
int pagemask;
u8 *data_buf;
int pagebuf;
unsigned int pagebuf_bitflips;
int subpagesize;
@ -928,7 +1252,7 @@ struct nand_chip {
u16 max_bb_per_die;
u32 blocks_per_die;
struct nand_data_interface *data_interface;
struct nand_data_interface data_interface;
int read_retries;
@ -938,7 +1262,6 @@ struct nand_chip {
struct nand_hw_control *controller;
struct nand_ecc_ctrl ecc;
struct nand_buffers *buffers;
unsigned long buf_align;
struct nand_hw_control hwcontrol;
@ -956,6 +1279,15 @@ struct nand_chip {
} manufacturer;
};
static inline int nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op)
{
if (!chip->exec_op)
return -ENOTSUPP;
return chip->exec_op(chip, op, false);
}
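For reference, this is roughly how an operation is built out of the instruction macros above and handed to the controller; the sketch mirrors what a READ ID helper could look like (read_id_sketch() is illustrative, the core provides nand_readid_op() for the real thing). Waits would be expressed the same way with NAND_OP_WAIT_RDY() and a timeout derived via PSEC_TO_MSEC():

/* Illustrative: describe CMD(READID) + 1 address cycle + N data-in bytes. */
static int read_id_sketch(struct nand_chip *chip, u8 addr, void *buf,
			  unsigned int len)
{
	struct nand_op_instr instrs[] = {
		NAND_OP_CMD(NAND_CMD_READID, 0),
		NAND_OP_ADDR(1, &addr, 0),
		NAND_OP_8BIT_DATA_IN(len, buf, 0),
	};
	struct nand_operation op = NAND_OPERATION(instrs);

	return nand_exec_op(chip, &op);
}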
extern const struct mtd_ooblayout_ops nand_ooblayout_sp_ops;
extern const struct mtd_ooblayout_ops nand_ooblayout_lp_ops;
@ -1225,8 +1557,7 @@ static inline int onfi_get_sync_timing_mode(struct nand_chip *chip)
return le16_to_cpu(chip->onfi_params.src_sync_timing_mode);
}
int onfi_init_data_interface(struct nand_chip *chip,
struct nand_data_interface *iface,
int onfi_fill_data_interface(struct nand_chip *chip,
enum nand_data_interface_type type,
int timing_mode);
@ -1269,8 +1600,6 @@ static inline int jedec_feature(struct nand_chip *chip)
/* get timing characteristics from ONFI timing mode. */
const struct nand_sdr_timings *onfi_async_timing_mode_to_sdr_timings(int mode);
/* get data interface from ONFI timing mode 0, used after reset. */
const struct nand_data_interface *nand_get_default_data_interface(void);
int nand_check_erased_ecc_chunk(void *data, int datalen,
void *ecc, int ecclen,
@ -1316,9 +1645,45 @@ int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
/* Reset and initialize a NAND device */
int nand_reset(struct nand_chip *chip, int chipnr);
/* NAND operation helpers */
int nand_reset_op(struct nand_chip *chip);
int nand_readid_op(struct nand_chip *chip, u8 addr, void *buf,
unsigned int len);
int nand_status_op(struct nand_chip *chip, u8 *status);
int nand_exit_status_op(struct nand_chip *chip);
int nand_erase_op(struct nand_chip *chip, unsigned int eraseblock);
int nand_read_page_op(struct nand_chip *chip, unsigned int page,
unsigned int offset_in_page, void *buf, unsigned int len);
int nand_change_read_column_op(struct nand_chip *chip,
unsigned int offset_in_page, void *buf,
unsigned int len, bool force_8bit);
int nand_read_oob_op(struct nand_chip *chip, unsigned int page,
unsigned int offset_in_page, void *buf, unsigned int len);
int nand_prog_page_begin_op(struct nand_chip *chip, unsigned int page,
unsigned int offset_in_page, const void *buf,
unsigned int len);
int nand_prog_page_end_op(struct nand_chip *chip);
int nand_prog_page_op(struct nand_chip *chip, unsigned int page,
unsigned int offset_in_page, const void *buf,
unsigned int len);
int nand_change_write_column_op(struct nand_chip *chip,
unsigned int offset_in_page, const void *buf,
unsigned int len, bool force_8bit);
int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len,
bool force_8bit);
int nand_write_data_op(struct nand_chip *chip, const void *buf,
unsigned int len, bool force_8bit);
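These helpers are what converted drivers use in place of hand-rolled ->cmdfunc() sequences. A sketch of a raw page read built on top of them (example_read_page_raw() is illustrative; error handling is trimmed):

static int example_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
				 u8 *buf, int oob_required, int page)
{
	int ret;

	/* READ0 + address cycles + read of the main area */
	ret = nand_read_page_op(chip, page, 0, buf, mtd->writesize);
	if (ret)
		return ret;

	/* keep reading to pull in the OOB area when requested */
	if (oob_required)
		ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false);

	return ret;
}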
/* Free resources held by the NAND device */
void nand_cleanup(struct nand_chip *chip);
/* Default extended ID decoding function */
void nand_decode_ext_id(struct nand_chip *chip);
/*
* External helper for controller drivers that have to implement the WAITRDY
* instruction and have no physical pin to check it.
*/
int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms);
#endif /* __LINUX_MTD_RAWNAND_H */


@ -61,6 +61,7 @@
#define SPINOR_OP_RDSFDP 0x5a /* Read SFDP */
#define SPINOR_OP_RDCR 0x35 /* Read configuration register */
#define SPINOR_OP_RDFSR 0x70 /* Read flag status register */
#define SPINOR_OP_CLFSR 0x50 /* Clear flag status register */
/* 4-byte address opcodes - used on Spansion and some Macronix flashes. */
#define SPINOR_OP_READ_4B 0x13 /* Read data bytes (low frequency) */
@ -130,7 +131,10 @@
#define EVCR_QUAD_EN_MICRON BIT(7) /* Micron Quad I/O */
/* Flag Status Register bits */
#define FSR_READY BIT(7)
#define FSR_READY BIT(7) /* Device status, 0 = Busy, 1 = Ready */
#define FSR_E_ERR BIT(5) /* Erase operation status */
#define FSR_P_ERR BIT(4) /* Program operation status */
#define FSR_PT_ERR BIT(1) /* Protection error bit */
/* Configuration Register bits. */
#define CR_QUAD_EN_SPAN BIT(1) /* Spansion Quad I/O */
@ -399,4 +403,10 @@ struct spi_nor_hwcaps {
int spi_nor_scan(struct spi_nor *nor, const char *name,
const struct spi_nor_hwcaps *hwcaps);
/**
* spi_nor_restore() - restore the status of the SPI NOR memory
* @nor: the spi_nor structure
*/
void spi_nor_restore(struct spi_nor *nor);
#endif


@ -25,15 +25,43 @@ struct gpmc_nand_ops {
struct gpmc_nand_regs;
struct gpmc_onenand_info {
bool sync_read;
bool sync_write;
int burst_len;
};
#if IS_ENABLED(CONFIG_OMAP_GPMC)
struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *regs,
int cs);
/**
* gpmc_omap_onenand_set_timings - set optimized sync timings.
* @dev: device requesting the timings
* @cs: Chip Select Region
* @freq: Chip frequency
* @latency: Burst latency cycle count
* @info: Structure describing parameters used
*
* Sets optimized timings for the @cs region based on @freq and @latency.
* Updates the @info structure based on the GPMC settings.
*/
int gpmc_omap_onenand_set_timings(struct device *dev, int cs, int freq,
int latency,
struct gpmc_onenand_info *info);
#else
static inline struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *regs,
int cs)
{
return NULL;
}
static inline
int gpmc_omap_onenand_set_timings(struct device *dev, int cs, int freq,
int latency,
struct gpmc_onenand_info *info)
{
return -EINVAL;
}
#endif /* CONFIG_OMAP_GPMC */
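A hypothetical call site in the OneNAND driver's sync-mode setup, just to show the intended flow (example_setup_sync() and configure_onenand_regs() are illustrative helpers, not existing functions):

static int example_setup_sync(struct device *dev, int cs, int freq, int latency)
{
	struct gpmc_onenand_info info;
	int ret;

	ret = gpmc_omap_onenand_set_timings(dev, cs, freq, latency, &info);
	if (ret)
		return ret;	/* stay in async mode */

	/* info now reflects what the GPMC was actually configured for */
	return configure_onenand_regs(info.sync_read, info.sync_write,
				      info.burst_len);
}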
extern int gpmc_calc_timings(struct gpmc_timings *gpmc_t,


@ -1,34 +0,0 @@
/*
* Copyright (C) 2006 Nokia Corporation
* Author: Juha Yrjola
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __MTD_ONENAND_OMAP2_H
#define __MTD_ONENAND_OMAP2_H
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#define ONENAND_SYNC_READ (1 << 0)
#define ONENAND_SYNC_READWRITE (1 << 1)
#define ONENAND_IN_OMAP34XX (1 << 2)
struct omap_onenand_platform_data {
int cs;
int gpio_irq;
struct mtd_partition *parts;
int nr_parts;
int (*onenand_setup)(void __iomem *, int *freq_ptr);
int dma_channel;
u8 flags;
u8 regulator_can_sleep;
u8 skip_initial_unlocking;
/* for passing the partitions */
struct device_node *of_node;
};
#endif