Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
 "Included in this update are both some long term fixes and some new
  features.

  Fixes:

   - An integer overflow in the calculation of ELF_ET_DYN_BASE.

   - Avoiding OOMs for high-order IOMMU allocations

   - SMP requires the data cache to be enabled for synchronisation
     primitives to work, so prevent the CPU_DCACHE_DISABLE option being
     visible on SMP builds.

   - A bug going back 10+ years in the noMMU ARM94* CPU support code,
     where it corrupts registers.  Found by folk getting Linux running
     on their cameras.

   - Versatile Express needs an errata workaround enabled for CPU
     hot-unplug to work.

  Features:

   - Clean up module linker by handling out of range relocations
     separately from relocation cases we don't handle.

   - Fix a long-term bug in the pci_mmap_page_range() code, which we
     hope won't impact userspace (we hope there are no users of the
     existing broken interface).

   - Don't map DMA coherent allocations when we don't have an MMU.

   - Drop experimental status for SMP_ON_UP.

   - Warn when DT doesn't specify ePAPR mandatory cache properties.

   - Add documentation concerning how we find the start of physical
     memory for AUTO_ZRELADDR kernels, detailing why we have chosen the
     mask and the implications of changing it.

   - Updates from Ard Biesheuvel to address some issues with large
     kernels (such as allyesconfig) failing to link.

   - Allow hibernation to work on modern (ARMv7) CPUs - this appears to
     have never worked in the past on these CPUs.

   - Enable IRQ_SHOW_LEVEL, which changes the /proc/interrupts output
     format (hopefully without userspace breaking...  let's hope that if
     it causes someone a problem, they tell us.)

   - Fix tegra-ahb DT offsets.

   - Rework the ARM errata 643719 workaround (and the ARMv7
     flush_cache_louis()/flush_dcache_all() code) to be more efficient,
     and enable this workaround by default for ARMv7+SMP CPUs.  This
     complements the Versatile Express fix above.

   - Rework ARMv7 context code for errata 430973, so that only Cortex A8
     CPUs are impacted by the branch target buffer flush when this
     errata is enabled.  Also update the help text to indicate that all
     r1p* A8 CPUs are impacted.

   - Switch ARM to the generic show_mem() implementation, it conveys all
     the information which we were already reporting.

   - Prevent slow timer sources being used for udelay() - timers running
     at less than 1MHz are not useful for this, and can cause udelay()
     to return immediately, without any wait.  Using such a slow timer
     is silly (a minimal sketch of such a guard follows this quoted
     message).

   - VDSO support for 32-bit ARM, mainly for gettimeofday() using the
     ARM architected timer.

   - Perf support for Scorpion performance monitoring units"
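
As an aside on the slow-timer point above, here is a minimal sketch, in
plain C, of the kind of guard being described.  The 1MHz threshold matches
the reasoning in the message; the macro and helper names are illustrative
only, not the kernel's actual delay-timer registration code.

    /*
     * A delay timer ticking slower than 1MHz has a resolution coarser
     * than 1us, so a short udelay() could expire after zero elapsed
     * ticks and return without waiting at all.  Refuse such timers.
     */
    #define MIN_DELAY_TIMER_FREQ    1000000UL   /* one tick per microsecond */

    static int delay_timer_is_usable(unsigned long freq_hz)
    {
            return freq_hz >= MIN_DELAY_TIMER_FREQ;
    }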

vdso semantic conflict fixed up as per linux-next.

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (52 commits)
  ARM: update errata 430973 documentation to cover Cortex A8 r1p*
  ARM: ensure delay timer has sufficient accuracy for delays
  ARM: switch to use the generic show_mem() implementation
  ARM: proc-v7: avoid errata 430973 workaround for non-Cortex A8 CPUs
  ARM: enable ARM errata 643719 workaround by default
  ARM: cache-v7: optimise test for Cortex A9 r0pX devices
  ARM: cache-v7: optimise branches in v7_flush_cache_louis
  ARM: cache-v7: consolidate initialisation of cache level index
  ARM: cache-v7: shift CLIDR to extract appropriate field before masking
  ARM: cache-v7: use movw/movt instructions
  ARM: allow 16-bit instructions in ALT_UP()
  ARM: proc-arm94*.S: fix setup function
  ARM: vexpress: fix CPU hotplug with CT9x4 tile.
  ARM: 8276/1: Make CPU_DCACHE_DISABLE depend on !SMP
  ARM: 8335/1: Documentation: DT bindings: Tegra AHB: document the legacy base address
  ARM: 8334/1: amba: tegra-ahb: detect and correct bogus base address
  ARM: 8333/1: amba: tegra-ahb: fix register offsets in the macros
  ARM: 8339/1: Enable CONFIG_GENERIC_IRQ_SHOW_LEVEL
  ARM: 8338/1: kexec: Relax SMP validation to improve DT compatibility
  ARM: 8337/1: mm: Do not invoke OOM for higher order IOMMU DMA allocations
  ...
torvalds committed Apr 15, 2015
2 parents bdfa54d + 4b2f883 commit bb0fd7a
Showing 93 changed files with 2,429 additions and 604 deletions.
2 changes: 2 additions & 0 deletions Documentation/devicetree/bindings/arm/pmu.txt
@@ -18,6 +18,8 @@ Required properties:
"arm,arm11mpcore-pmu"
"arm,arm1176-pmu"
"arm,arm1136-pmu"
"qcom,scorpion-pmu"
"qcom,scorpion-mp-pmu"
"qcom,krait-pmu"
- interrupts : 1 combined interrupt or 1 per core. If the interrupt is a per-cpu
interrupt (PPI) then 1 interrupt should be specified.
@@ -5,9 +5,12 @@ Required properties:
Tegra30, must contain "nvidia,tegra30-ahb". Otherwise, must contain
'"nvidia,<chip>-ahb", "nvidia,tegra30-ahb"' where <chip> is tegra124,
tegra132, or tegra210.
- reg : Should contain 1 register ranges(address and length)
- reg : Should contain 1 register ranges(address and length). For
Tegra20, Tegra30, and Tegra114 chips, the value must be <0x6000c004
0x10c>. For Tegra124, Tegra132 and Tegra210 chips, the value should
be <0x6000c000 0x150>.

Example:
Example (for a Tegra20 chip):
ahb: ahb@6000c004 {
compatible = "nvidia,tegra20-ahb";
reg = <0x6000c004 0x10c>; /* AHB Arbitration + Gizmo Controller */
6 changes: 4 additions & 2 deletions arch/arm/Kconfig
@@ -21,6 +21,7 @@ config ARM
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
@@ -1063,7 +1064,7 @@ config ARM_ERRATA_430973
depends on CPU_V7
help
This option enables the workaround for the 430973 Cortex-A8
(r1p0..r1p2) erratum. If a code sequence containing an ARM/Thumb
r1p* erratum. If a code sequence containing an ARM/Thumb
interworking branch is replaced with another code sequence at the
same virtual address, whether due to self-modifying code or virtual
to physical address re-mapping, Cortex-A8 does not recover from the
@@ -1132,6 +1133,7 @@ config ARM_ERRATA_742231
config ARM_ERRATA_643719
bool "ARM errata: LoUIS bit field in CLIDR register is incorrect"
depends on CPU_V7 && SMP
default y
help
This option enables the workaround for the 643719 Cortex-A9 (prior to
r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR
@@ -1349,7 +1351,7 @@ config SMP
If you don't know what to do here, say N.

config SMP_ON_UP
bool "Allow booting SMP kernel on uniprocessor systems (EXPERIMENTAL)"
bool "Allow booting SMP kernel on uniprocessor systems"
depends on SMP && !XIP_KERNEL && MMU
default y
help
10 changes: 9 additions & 1 deletion arch/arm/Makefile
@@ -13,7 +13,7 @@
# Ensure linker flags are correct
LDFLAGS :=

LDFLAGS_vmlinux :=-p --no-undefined -X
LDFLAGS_vmlinux :=-p --no-undefined -X --pic-veneer
ifeq ($(CONFIG_CPU_ENDIAN_BE8),y)
LDFLAGS_vmlinux += --be8
LDFLAGS_MODULE += --be8
@@ -264,6 +264,7 @@ core-$(CONFIG_FPE_FASTFPE) += $(FASTFPE_OBJ)
core-$(CONFIG_VFP) += arch/arm/vfp/
core-$(CONFIG_XEN) += arch/arm/xen/
core-$(CONFIG_KVM_ARM_HOST) += arch/arm/kvm/
core-$(CONFIG_VDSO) += arch/arm/vdso/

# If we have a machine-specific directory, then include it in the build.
core-y += arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
@@ -321,6 +322,12 @@ dtbs: prepare scripts
dtbs_install:
$(Q)$(MAKE) $(dtbinst)=$(boot)/dts

PHONY += vdso_install
vdso_install:
ifeq ($(CONFIG_VDSO),y)
$(Q)$(MAKE) $(build)=arch/arm/vdso $@
endif

# We use MRPROPER_FILES and CLEAN_FILES now
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
@@ -345,4 +352,5 @@ define archhelp
echo ' Install using (your) ~/bin/$(INSTALLKERNEL) or'
echo ' (distribution) /sbin/$(INSTALLKERNEL) or'
echo ' install to $$(INSTALL_PATH) and run lilo'
echo ' vdso_install - Install unstripped vdso.so to $$(INSTALL_MOD_PATH)/vdso'
endef
52 changes: 45 additions & 7 deletions arch/arm/boot/compressed/head.S
@@ -10,8 +10,11 @@
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/v7m.h>

AR_CLASS( .arch armv7-a )
M_CLASS( .arch armv7-m )

.arch armv7-a
/*
* Debugging stuff
*
@@ -114,7 +117,12 @@
* sort out different calling conventions
*/
.align
.arm @ Always enter in ARM state
/*
* Always enter in ARM state for CPUs that support the ARM ISA.
* As of today (2014) that's exactly the members of the A and R
* classes.
*/
AR_CLASS( .arm )
start:
.type start,#function
.rept 7
@@ -132,14 +140,15 @@ start:

THUMB( .thumb )
1:
ARM_BE8( setend be ) @ go BE8 if compiled for BE8
mrs r9, cpsr
ARM_BE8( setend be ) @ go BE8 if compiled for BE8
AR_CLASS( mrs r9, cpsr )
#ifdef CONFIG_ARM_VIRT_EXT
bl __hyp_stub_install @ get into SVC mode, reversibly
#endif
mov r7, r1 @ save architecture ID
mov r8, r2 @ save atags pointer

#ifndef CONFIG_CPU_V7M
/*
* Booting from Angel - need to enter SVC mode and disable
* FIQs/IRQs (numeric definitions from angel arm.h source).
@@ -155,6 +164,7 @@ not_angel:
safe_svcmode_maskall r0
msr spsr_cxsf, r9 @ Save the CPU boot mode in
@ SPSR
#endif
/*
* Note that some cache flushing and other stuff may
* be needed here - is there an Angel SWI call for this?
@@ -168,9 +178,26 @@ not_angel:
.text

#ifdef CONFIG_AUTO_ZRELADDR
@ determine final kernel image address
/*
* Find the start of physical memory. As we are executing
* without the MMU on, we are in the physical address space.
* We just need to get rid of any offset by aligning the
* address.
*
* This alignment is a balance between the requirements of
* different platforms - we have chosen 128MB to allow
* platforms which align the start of their physical memory
* to 128MB to use this feature, while allowing the zImage
* to be placed within the first 128MB of memory on other
* platforms. Increasing the alignment means we place
* stricter alignment requirements on the start of physical
* memory, but relaxing it means that we break people who
* are already placing their zImage in (eg) the top 64MB
* of this range.
*/
mov r4, pc
and r4, r4, #0xf8000000
/* Determine final kernel image address. */
add r4, r4, #TEXT_OFFSET
#else
ldr r4, =zreladdr
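
To make the 128MB mask above concrete, here is a hedged worked example in
C.  The addresses, macro and helper names are made up for illustration;
TEXT_OFFSET is commonly 0x8000 but is a platform-configurable build-time
value.

    /*
     * A zImage executing at physical 0x80208000 recovers 0x80000000 as
     * the start of RAM by masking, then adds TEXT_OFFSET to find where
     * the decompressed kernel image should be placed.
     */
    #define AUTO_ZRELADDR_MASK  0xf8000000UL    /* 128MB alignment */

    static unsigned long guess_ram_start(unsigned long pc)
    {
            return pc & AUTO_ZRELADDR_MASK;
    }

    /* guess_ram_start(0x80208000UL) == 0x80000000UL;
     * kernel image address = 0x80000000UL + TEXT_OFFSET (e.g. 0x8000). */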
@@ -810,6 +837,16 @@ __common_mmu_cache_on:
call_cache_fn: adr r12, proc_types
#ifdef CONFIG_CPU_CP15
mrc p15, 0, r9, c0, c0 @ get processor ID
#elif defined(CONFIG_CPU_V7M)
/*
* On v7-M the processor id is located in the V7M_SCB_CPUID
* register, but as cache handling is IMPLEMENTATION DEFINED on
* v7-M (if existant at all) we just return early here.
* If V7M_SCB_CPUID were used the cpu ID functions (i.e.
* __armv7_mmu_cache_{on,off,flush}) would be selected which
* use cp15 registers that are not implemented on v7-M.
*/
bx lr
#else
ldr r9, =CONFIG_PROCESSOR_ID
#endif
@@ -1310,8 +1347,9 @@ __hyp_reentry_vectors:

__enter_kernel:
mov r0, #0 @ must be 0
ARM( mov pc, r4 ) @ call kernel
THUMB( bx r4 ) @ entry point is always ARM
ARM( mov pc, r4 ) @ call kernel
M_CLASS( add r4, r4, #1 ) @ enter in Thumb mode for M class
THUMB( bx r4 ) @ entry point is always ARM for A/R classes

reloc_code_end:

1 change: 0 additions & 1 deletion arch/arm/include/asm/Kbuild
@@ -1,6 +1,5 @@


generic-y += auxvec.h
generic-y += bitsperlong.h
generic-y += cputime.h
generic-y += current.h
3 changes: 3 additions & 0 deletions arch/arm/include/asm/assembler.h
@@ -237,6 +237,9 @@
.pushsection ".alt.smp.init", "a" ;\
.long 9998b ;\
9997: instr ;\
.if . - 9997b == 2 ;\
nop ;\
.endif ;\
.if . - 9997b != 4 ;\
.error "ALT_UP() content must assemble to exactly 4 bytes";\
.endif ;\
1 change: 1 addition & 0 deletions arch/arm/include/asm/auxvec.h
@@ -0,0 +1 @@
#include <uapi/asm/auxvec.h>
16 changes: 16 additions & 0 deletions arch/arm/include/asm/cputype.h
@@ -253,4 +253,20 @@ static inline int cpu_is_pj4(void)
#else
#define cpu_is_pj4() 0
#endif

static inline int __attribute_const__ cpuid_feature_extract_field(u32 features,
int field)
{
int feature = (features >> field) & 15;

/* feature registers are signed values */
if (feature > 8)
feature -= 16;

return feature;
}

#define cpuid_feature_extract(reg, field) \
cpuid_feature_extract_field(read_cpuid_ext(reg), field)

#endif
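
A hedged usage sketch for the helper added above: the ID_ISAR0 field
offset is real (bits [27:24] describe hardware divide support), but the
wrapper function below is illustrative rather than actual kernel code.

    /*
     * ID register fields are treated as signed, so 0xf decodes as -1
     * rather than 15.  For ID_ISAR0[27:24], a value > 0 means SDIV/UDIV
     * exist in Thumb; a value >= 2 means they exist in ARM state too.
     */
    static int hwdiv_in_thumb(u32 id_isar0)
    {
            return cpuid_feature_extract_field(id_isar0, 24) > 0;
    }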
11 changes: 10 additions & 1 deletion arch/arm/include/asm/elf.h
@@ -1,7 +1,9 @@
#ifndef __ASMARM_ELF_H
#define __ASMARM_ELF_H

#include <asm/auxvec.h>
#include <asm/hwcap.h>
#include <asm/vdso_datapage.h>

/*
* ELF register definitions..
@@ -115,7 +117,7 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
the loader. We need to make sure that it is out of the way of the program
that it will "exec", and that there is sufficient room for the brk. */

#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
#define ELF_ET_DYN_BASE (TASK_SIZE / 3 * 2)

/* When the program starts, a1 contains a pointer to a function to be
registered with atexit, as per the SVR4 ABI. A value of 0 means we
@@ -126,6 +128,13 @@ extern void elf_set_personality(const struct elf32_hdr *);
#define SET_PERSONALITY(ex) elf_set_personality(&(ex))

#ifdef CONFIG_MMU
#ifdef CONFIG_VDSO
#define ARCH_DLINFO \
do { \
NEW_AUX_ENT(AT_SYSINFO_EHDR, \
(elf_addr_t)current->mm->context.vdso); \
} while (0)
#endif
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
struct linux_binprm;
int arch_setup_additional_pages(struct linux_binprm *, int);
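
An aside on the ELF_ET_DYN_BASE change above: with the common 3G/1G split
(CONFIG_PAGE_OFFSET=0xC0000000) TASK_SIZE is 0xbf000000, and the old
expression overflowed 32-bit arithmetic before the divide.  A worked
sketch, assuming that TASK_SIZE value:

    #define TASK_SIZE_EXAMPLE   0xbf000000UL
    /* Old form: on 32-bit ARM, 2 * 0xbf000000 wraps to 0x7e000000, so
     * the result is 0x7e000000 / 3 = 0x2a000000 - far too low a base. */
    #define ET_DYN_BASE_OLD     (2 * TASK_SIZE_EXAMPLE / 3)
    /* New form: divide first, so every intermediate stays in range:
     * 0xbf000000 / 3 = 0x3faaaaaa, times 2 = 0x7f555554.              */
    #define ET_DYN_BASE_NEW     (TASK_SIZE_EXAMPLE / 3 * 2)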
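As for the ARCH_DLINFO addition above, it is what lets userspace discover
the new VDSO.  A hedged userspace-side example (getauxval() needs glibc
2.16+; this program is not part of the patch):

    #include <elf.h>
    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
            /* AT_SYSINFO_EHDR is the auxv entry published by ARCH_DLINFO. */
            unsigned long vdso = getauxval(AT_SYSINFO_EHDR);

            if (vdso)
                    printf("vdso ELF header mapped at %#lx\n", vdso);
            else
                    printf("kernel did not advertise a vdso\n");
            return 0;
    }
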
2 changes: 1 addition & 1 deletion arch/arm/include/asm/futex.h
@@ -13,7 +13,7 @@
" .align 3\n" \
" .long 1b, 4f, 2b, 4f\n" \
" .popsection\n" \
" .pushsection .fixup,\"ax\"\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"4: mov %0, " err_reg "\n" \
" b 3b\n" \
3 changes: 3 additions & 0 deletions arch/arm/include/asm/mmu.h
@@ -11,6 +11,9 @@ typedef struct {
#endif
unsigned int vmalloc_seq;
unsigned long sigpage;
#ifdef CONFIG_VDSO
unsigned long vdso;
#endif
} mm_context_t;

#ifdef CONFIG_CPU_HAS_ASID
1 change: 1 addition & 0 deletions arch/arm/include/asm/pmu.h
@@ -92,6 +92,7 @@ struct pmu_hw_events {
struct arm_pmu {
struct pmu pmu;
cpumask_t active_irqs;
int *irq_affinity;
char *name;
irqreturn_t (*handle_irq)(int irq_num, void *dev);
void (*enable)(struct perf_event *event);
1 change: 1 addition & 0 deletions arch/arm/include/asm/smp_plat.h
@@ -104,6 +104,7 @@ static inline u32 mpidr_hash_size(void)
return 1 << mpidr_hash.bits;
}

extern int platform_can_secondary_boot(void);
extern int platform_can_cpu_hotplug(void);

#endif
10 changes: 5 additions & 5 deletions arch/arm/include/asm/uaccess.h
@@ -315,7 +315,7 @@ do { \
__asm__ __volatile__( \
"1: " TUSER(ldrb) " %1,[%2],#0\n" \
"2:\n" \
" .pushsection .fixup,\"ax\"\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"3: mov %0, %3\n" \
" mov %1, #0\n" \
@@ -351,7 +351,7 @@ do { \
__asm__ __volatile__( \
"1: " TUSER(ldr) " %1,[%2],#0\n" \
"2:\n" \
" .pushsection .fixup,\"ax\"\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"3: mov %0, %3\n" \
" mov %1, #0\n" \
@@ -397,7 +397,7 @@ do { \
__asm__ __volatile__( \
"1: " TUSER(strb) " %1,[%2],#0\n" \
"2:\n" \
" .pushsection .fixup,\"ax\"\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"3: mov %0, %3\n" \
" b 2b\n" \
@@ -430,7 +430,7 @@ do { \
__asm__ __volatile__( \
"1: " TUSER(str) " %1,[%2],#0\n" \
"2:\n" \
" .pushsection .fixup,\"ax\"\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"3: mov %0, %3\n" \
" b 2b\n" \
@@ -458,7 +458,7 @@ do { \
THUMB( "1: " TUSER(str) " " __reg_oper1 ", [%1]\n" ) \
THUMB( "2: " TUSER(str) " " __reg_oper0 ", [%1, #4]\n" ) \
"3:\n" \
" .pushsection .fixup,\"ax\"\n" \
" .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"4: mov %0, %3\n" \
" b 3b\n" \
8 changes: 8 additions & 0 deletions arch/arm/include/asm/unified.h
@@ -24,6 +24,14 @@
.syntax unified
#endif

#ifdef CONFIG_CPU_V7M
#define AR_CLASS(x...)
#define M_CLASS(x...) x
#else
#define AR_CLASS(x...) x
#define M_CLASS(x...)
#endif

#ifdef CONFIG_THUMB2_KERNEL

#if __GNUC__ < 4
