Commit Graph

14 Commits

Author SHA1 Message Date
Russell King 02b4e2756e ARM: v7 setup function should invalidate L1 cache
All ARMv5 and older CPUs invalidate their caches in the early assembly
setup function, prior to enabling the MMU.  This is because the L1
cache should not contain any data relevant to the execution of the
kernel at this point; all data should have been flushed out to memory.

This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed,
these typically do not search their caches when caching is disabled (as
it needs to be when the MMU is disabled) so this change should be safe.

ARMv7 allows there to be CPUs which search their caches while caching is
disabled, and it's permitted that the cache is uninitialised at boot;
for these, the architecture reference manual requires that an
implementation specific code sequence is used immediately after reset
to ensure that the cache is placed into a sane state.  Such
functionality is definitely outside the remit of the Linux kernel, and
must be done by the SoC's firmware before _any_ CPU gets to the Linux
kernel.

Changing the data cache clean+invalidate to a mere invalidate allows us
to get rid of a lot of platform specific hacks around this issue for
their secondary CPU bringup paths - some of which were buggy.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-06-01 11:30:26 +01:00
Soren Brinkmann ed62e33094 ARM: zynq: Rename 'zynq_platform_cpu_die'
Match the naming pattern of all other SMP ops and rename
zynq_platform_cpu_die --> zynq_cpu_die.

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2014-09-16 12:55:11 +02:00
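
As a rough sketch of the naming pattern being matched (illustrative, not the literal mach-zynq source): every callback in the platform's struct smp_operations carries the "zynq_" prefix plus the generic operation name.

    static const struct smp_operations zynq_smp_ops __initconst = {
        .smp_prepare_cpus   = zynq_smp_prepare_cpus,
        .smp_boot_secondary = zynq_boot_secondary,
    #ifdef CONFIG_HOTPLUG_CPU
        .cpu_die            = zynq_cpu_die,   /* was zynq_platform_cpu_die */
        .cpu_kill           = zynq_cpu_kill,
    #endif
    };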
Soren Brinkmann caf86a73ea ARM: zynq: Remove hotplug.c
The hotplug code contains only a single function, which is an SMP
function. Move it to platsmp.c, where all other SMP functions reside.
That allows removing hotplug.c and declaring the cpu_die function
static.

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2014-09-16 12:55:10 +02:00
Soren Brinkmann 50c7960a45 ARM: zynq: Synchronise zynq_cpu_die/kill
Avoid races and add synchronisation between the arch specific
kill and die routines.

The same synchronisation issue was fixed on IMX platform
by this commit:
"ARM: imx: fix sync issue between imx_cpu_die and imx_cpu_kill"
(sha1: 2f3edfd7e2)

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2014-09-16 12:55:09 +02:00
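
A hedged sketch of the handshake pattern referenced above (modelled on the quoted imx fix, not the actual zynq change; names are illustrative): the dying CPU raises a per-CPU flag once it is finished with shared state, and the killing CPU polls that flag, with a timeout, before it takes the core down.

    #include <asm/barrier.h>
    #include <asm/proc-fns.h>
    #include <linux/bitmap.h>
    #include <linux/bitops.h>
    #include <linux/jiffies.h>
    #include <linux/threads.h>

    static DECLARE_BITMAP(example_cpu_dead, NR_CPUS);

    static void example_cpu_die(unsigned int cpu)
    {
        set_bit(cpu, example_cpu_dead);  /* signal the killer */
        smp_wmb();

        while (1)                        /* park until reset/power-down */
            cpu_do_idle();
    }

    static int example_cpu_kill(unsigned int cpu)
    {
        unsigned long timeout = jiffies + msecs_to_jiffies(50);

        while (!test_bit(cpu, example_cpu_dead))
            if (time_after(jiffies, timeout))
                return 0;                /* not dead yet: refuse to kill */

        clear_bit(cpu, example_cpu_dead);
        /* platform-specific power-down/reset of the core goes here */
        return 1;
    }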
Soren Brinkmann ae88b85e80 ARM: zynq: PM: Enable A9 internal clock gating feature
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2014-09-16 12:55:05 +02:00
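
A hedged sketch of what enabling that feature can look like, assuming the documented Cortex-A9 Power Control Register (CP15 c15, c0, 0) with dynamic clock gating enabled by bit 0; this is an illustration, not a copy of the patch.

    #include <linux/init.h>

    static void __init example_a9_enable_dynamic_clock_gating(void)
    {
        unsigned int reg;

        asm volatile(
            "mrc    p15, 0, %0, c15, c0, 0\n\t"  /* read A9 Power Control Register */
            "orr    %0, %0, #1\n\t"              /* bit 0: dynamic clock gating enable */
            "mcr    p15, 0, %0, c15, c0, 0\n"    /* write it back */
            : "=&r" (reg));
    }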
Sudeep KarkadaNagesha 2605f85275 ARM: zynq: remove unnecessary setting of cpu_present_mask
Remove the setting of cpu_present_mask, as platforms should only
re-initialize it in smp_prepare_cpus() if present != possible.

Cc: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-12-10 14:20:25 +01:00
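
As a small illustration of the pattern being removed (hypothetical example_ naming): the core ARM code already initialises the present mask from the possible mask before calling the platform hook, so a loop like the one below is only needed when present != possible.

    #include <linux/cpumask.h>
    #include <linux/init.h>

    static void __init example_smp_prepare_cpus(unsigned int max_cpus)
    {
        int i;

        /* Redundant when every possible CPU is also present: */
        for (i = 0; i < max_cpus; i++)
            set_cpu_present(i, true);
    }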
Michal Simek f1fd2fa62d arm: zynq: Add support for zynq_cpu_kill function
Use a simple hook into the SLCR to stop the CPU.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-12-10 14:17:56 +01:00
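
A hedged sketch of that hook, assuming an SLCR helper along the lines of zynq_slcr_cpu_stop() (the helper name is inferred from context, not verified against the tree): the boot CPU asks the SLCR to hold the dead core in reset and reports success.

    extern void zynq_slcr_cpu_stop(int cpu);  /* assumed SLCR helper */

    static int zynq_cpu_kill(unsigned int cpu)
    {
        zynq_slcr_cpu_stop(cpu);  /* assert reset on the core via the SLCR */
        return 1;                 /* non-zero: CPU treated as successfully killed */
    }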
Soren Brinkmann 6a37ff388a arm: zynq: Invalidate L1 in secondary boot
During boot, Linux performs only a clean+invalidate operation, which
can cause stale data to be written back to the memory system during
resume. Therefore, invalidate the L1 cache in the secondary boot path
to avoid these issues.

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-12-10 14:17:55 +01:00
Soren Brinkmann 11e031308b arm: zynq: platsmp: Remove CPU presence check
The generic code already checks that the CPU being requested is legal if
the cpu possible/present masks are set correctly.

Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-12-10 14:17:53 +01:00
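
For illustration (hypothetical example_ naming), this is the kind of check that becomes redundant: by the time the platform's boot_secondary hook runs, the generic cpu-hotplug path has already validated the CPU number against the possible/present masks.

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/sched.h>

    static int example_boot_secondary(unsigned int cpu, struct task_struct *idle)
    {
        /* Redundant: core code only passes CPUs that are possible and present. */
        if (cpu >= num_possible_cpus() || !cpu_present(cpu))
            return -EINVAL;

        /* ... actual wake-up of the secondary core ... */
        return 0;
    }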
Paul Gortmaker 8bd26e3a7e arm: delete __cpuinit/__CPUINIT usage from all ARM users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications.  The fix in commit
5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good
example of the nasty type of bug that can be created with improper
use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out.  Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit  -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings.  In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code.  It also had two ".previous"
section statements that were paired off against __CPUINIT
(aka .section ".cpuinit.text") that also get removed here.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-07-14 19:36:52 -04:00
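
Concretely, the mechanical change is tiny; a hedged before/after sketch (hypothetical function name):

    #include <linux/init.h>

    /*
     * Before: annotated so the text could be discarded after boot on
     * kernels without CPU hotplug:
     *
     *     static void __cpuinit example_secondary_init(unsigned int cpu);
     *
     * After: a plain, permanently resident function:
     */
    static void example_secondary_init(unsigned int cpu)
    {
        /* per-CPU bring-up work */
    }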
Michal Simek 88cd4e882d ARM: zynq: Do not rewrite jump code when starting address is 0x0
This configuration is used by remoteproc.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-06-17 07:35:31 +02:00
Arnd Bergmann 797b3a9ee7 Merge branch 'gic/cleanup' into next/soc2
Both zynq and shmobile have conflicts against the gic cleanup
series, resolved here.

Conflicts:
	arch/arm/mach-shmobile/smp-emev2.c
	arch/arm/mach-shmobile/smp-r8a7779.c
	arch/arm/mach-shmobile/smp-sh73a0.c
	arch/arm/mach-zynq/platsmp.c
	drivers/gpio/gpio-pl061.c

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2013-04-08 18:59:19 +02:00
Michal Simek c7c28b0fdd arm: zynq: Add hotplug support
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-04-04 09:24:00 +02:00
Michal Simek aa7eb2bb4e arm: zynq: Add smp support
Zynq is a dual-core Cortex-A9 which always starts at address zero.
A simple trampoline ensures the long jump to the secondary_startup
code.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
2013-04-04 09:24:00 +02:00
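
A hedged sketch of the trampoline idea (illustrative names, not the mach-zynq source): the boot CPU plants a long jump at physical address 0, where the secondary core will begin executing, redirecting it into the kernel's secondary_startup entry point before the core is released from reset.

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <linux/sched.h>

    extern void secondary_startup(void);  /* common ARM secondary entry point */

    static int example_zynq_boot_secondary(unsigned int cpu, struct task_struct *idle)
    {
        /*
         * Classic ARM long-jump trampoline placed at address 0:
         *     ldr     pc, [pc, #-4]   @ load the literal that follows
         *     .word   <physical address of secondary_startup>
         */
        void __iomem *zero = ioremap(0, PAGE_SIZE);  /* assumes address 0 is mappable here */

        if (!zero)
            return -ENOMEM;

        writel(0xe51ff004, zero);                            /* ldr pc, [pc, #-4] */
        writel(virt_to_phys(secondary_startup), zero + 4);   /* jump target literal */
        iounmap(zero);

        /* platform-specific: release the secondary core from reset via the SLCR */
        return 0;
    }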