Improve aarch64_legitimize_address - avoid splitting the offset if it is
supported.  When we do split, take the mode size into account.  BLKmode falls
into the unaligned case but should be treated like LDP/STP.  This improves
codesize slightly due to fewer base address calculations:

int f(int *p) { return p[5000] + p[7000]; }

Now generates:

f:
	add	x0, x0, 16384
	ldr	w1, [x0, 3616]
	ldr	w0, [x0, 11616]
	add	w0, w1, w0
	ret

instead of:

f:
	add	x1, x0, 16384
	add	x0, x0, 24576
	ldr	w1, [x1, 3616]
	ldr	w0, [x0, 3424]
	add	w0, w1, w0
	ret

gcc/
	* config/aarch64/aarch64.c (aarch64_legitimize_address):
	Avoid use of base_offset if offset already in range.

From-SVN: r240026
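To make the arithmetic behind the example above concrete, here is a standalone
sketch (not GCC code; split_aligned is a made-up helper).  With 4-byte ints,
p[5000] and p[7000] are byte offsets 20000 and 28000.  The new mask
offset & (~0xfff * access_size) anchors both at 16384, leaving remainders 3616
and 11616, each of which fits the scaled 12-bit LDR immediate, so a single base
register suffices.

	#include <stdio.h>

	/* Standalone sketch of the new split for offsets that are a multiple
	   of the access size: anchor = offset & (~0xfff * size); the
	   remainder fits the scaled 12-bit unsigned LDR/STR immediate
	   (0 .. 4095 * size).  */
	static long
	split_aligned (long offset, long size, long *rem)
	{
	  long base = offset & (~0xfffL * size);
	  *rem = offset - base;
	  return base;
	}

	int
	main (void)
	{
	  long rem, base;
	  base = split_aligned (5000 * 4, 4, &rem);   /* p[5000]: 16384 + 3616  */
	  printf ("%ld + %ld\n", base, rem);
	  base = split_aligned (7000 * 4, 4, &rem);   /* p[7000]: 16384 + 11616 */
	  printf ("%ld + %ld\n", base, rem);
	  return 0;
	}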
commit ff0f3f1cb5
parent ca235a8500
gcc/ChangeLog
@@ -1,3 +1,8 @@
+2016-09-07  Wilco Dijkstra  <wdijkstr@arm.com>
+
+	* config/aarch64/aarch64.c (aarch64_legitimize_address):
+	Avoid use of base_offset if offset already in range.
+
 2016-09-07  Kaz Kojima  <kkojima@gcc.gnu.org>
 
 	* config/sh/sh-protos.h (struct sh_atomic_model,
gcc/config/aarch64/aarch64.c
@@ -5082,9 +5082,19 @@ aarch64_legitimize_address (rtx x, rtx /* orig_x */, machine_mode mode)
       /* For offsets aren't a multiple of the access size, the limit is
 	 -256...255.  */
       else if (offset & (GET_MODE_SIZE (mode) - 1))
-	base_offset = (offset + 0x100) & ~0x1ff;
+	{
+	  base_offset = (offset + 0x100) & ~0x1ff;
+
+	  /* BLKmode typically uses LDP of X-registers.  */
+	  if (mode == BLKmode)
+	    base_offset = (offset + 512) & ~0x3ff;
+	}
+      /* Small negative offsets are supported.  */
+      else if (IN_RANGE (offset, -256, 0))
+	base_offset = 0;
+      /* Use 12-bit offset by access size.  */
       else
-	base_offset = offset & ~0xfff;
+	base_offset = offset & (~0xfff * GET_MODE_SIZE (mode));
 
       if (base_offset != 0)
 	{
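For readers who don't want to trace the hunk, the selection logic after this
patch can be restated as the following standalone sketch.  choose_base_offset
and its parameters are illustrative names, not GCC identifiers, and the
pre-existing case for modes wider than 16 bytes (handled before this hunk) is
omitted.

	/* Illustrative restatement of the new base_offset choice; not GCC code.
	   size is GET_MODE_SIZE (mode), blkmode is nonzero for BLKmode.  */
	long
	choose_base_offset (long offset, long size, int blkmode)
	{
	  /* Offsets that aren't a multiple of the access size must use
	     LDUR/STUR, whose immediate range is -256..255, so anchor to a
	     512-byte bucket.  */
	  if (offset & (size - 1))
	    {
	      long base = (offset + 0x100) & ~0x1ff;

	      /* BLKmode is typically expanded as LDP/STP of X-registers,
		 whose scaled 7-bit immediate covers -512..504, so use a
		 1024-byte bucket instead.  */
	      if (blkmode)
		base = (offset + 512) & ~0x3ff;
	      return base;
	    }

	  /* Small negative offsets are already legal; keep the original base.  */
	  if (offset >= -256 && offset <= 0)
	    return 0;

	  /* Otherwise rely on the unsigned 12-bit immediate scaled by the
	     access size, i.e. keep the low 12 + log2(size) bits as the
	     remaining offset.  */
	  return offset & (~0xfffL * size);
	}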