Merge tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "This pull request has a lot of work done.  The main thing is the
  changes to the ftrace function callback infrastructure.  It's
  introducing a way to allow different functions to call directly
  different trampolines instead of all calling the same "mcount" one.

  The only user of this for now is the function graph tracer, which
  always had a different trampoline, but the function tracer trampoline
  was called and did basically nothing, and then the function graph
  tracer trampoline was called.  The difference now is that the
  function graph tracer trampoline can be called directly if a function
  is only being traced by the function graph trampoline.  If function
  tracing is also happening on the same function, the old way is still
  done.

  The accounting for this takes up more memory when function graph
  tracing is activated, as it needs to keep track of which functions it
  uses.  I have a new way that won't take as much memory, but it's not
  ready yet for this merge window, and will have to wait for the next
  one.

  Another big change was the removal of the ftrace_start/stop() calls
  that were used by the suspend/resume code that stopped function
  tracing when entering into suspend and resume paths.  The stop of
  ftrace was done because there was some function that would crash the
  system if one called smp_processor_id()! The stop/start was a big
  hammer to solve the issue at the time, which was when ftrace was first
  introduced into Linux.  Now ftrace has better infrastructure to debug
  such issues, and I found the problem function and labeled it with
  "notrace" and function tracing can now safely be activated all the way
  down into the guts of suspend and resume.

  Other changes include clean ups of uprobe code, clean up of the
  trace_seq() code, and other various small fixes and clean ups to
  ftrace and tracing"
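
The callback infrastructure described above is the in-kernel ftrace_ops API: a tracer registers an ftrace_ops with a callback function, and the new accounting decides, per traced function, which trampoline dispatches to the attached ops (in this release only the function graph tracer actually gets its own trampoline; everything else still goes through the shared handler). As a rough, hypothetical illustration of that API (not code from this merge; the module, the names, and the "kfree" filter are invented, and ftrace_set_filter() assumes CONFIG_DYNAMIC_FTRACE), a minimal module would look roughly like this:

    #include <linux/module.h>
    #include <linux/ftrace.h>
    #include <linux/string.h>

    /* Callback invoked on entry to every function matching the filter. */
    static void notrace example_callback(unsigned long ip, unsigned long parent_ip,
                                         struct ftrace_ops *op, struct pt_regs *regs)
    {
            trace_printk("hit %ps called from %ps\n",
                         (void *)ip, (void *)parent_ip);
    }

    static struct ftrace_ops example_ops = {
            .func = example_callback,
    };

    static int __init example_init(void)
    {
            /* "kfree" is just an example filter; any traceable function works. */
            ftrace_set_filter(&example_ops, "kfree", strlen("kfree"), 1);
            return register_ftrace_function(&example_ops);
    }

    static void __exit example_exit(void)
    {
            unregister_ftrace_function(&example_ops);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");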
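
The suspend/resume paragraph above mentions labeling the offending function with "notrace" rather than stopping ftrace globally. That annotation (normally defined in include/linux/compiler.h as __attribute__((no_instrument_function))) simply keeps the compiler from emitting the mcount/fentry call in that one function. A hypothetical sketch of the idea, with an invented function standing in for the one that was actually fixed:

    #include <linux/compiler.h>     /* notrace */
    #include <linux/smp.h>

    /* Excluded from function tracing, so it is safe to hit on the
     * suspend/resume path while tracing stays active everywhere else. */
    static int notrace example_early_cpu_query(void)
    {
            return raw_smp_processor_id();
    }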

* tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (57 commits)
  ftrace: Add warning if tramp hash does not match nr_trampolines
  ftrace: Fix trampoline hash update check on rec->flags
  ring-buffer: Use rb_page_size() instead of open coded head_page size
  ftrace: Rename ftrace_ops field from trampolines to nr_trampolines
  tracing: Convert local function_graph functions to static
  ftrace: Do not copy old hash when resetting
  tracing: let user specify tracing_thresh after selecting function_graph
  ring-buffer: Always run per-cpu ring buffer resize with schedule_work_on()
  tracing: Remove function_trace_stop and HAVE_FUNCTION_TRACE_MCOUNT_TEST
  s390/ftrace: remove check of obsolete variable function_trace_stop
  arm64, ftrace: Remove check of obsolete variable function_trace_stop
  Blackfin: ftrace: Remove check of obsolete variable function_trace_stop
  metag: ftrace: Remove check of obsolete variable function_trace_stop
  microblaze: ftrace: Remove check of obsolete variable function_trace_stop
  MIPS: ftrace: Remove check of obsolete variable function_trace_stop
  parisc: ftrace: Remove check of obsolete variable function_trace_stop
  sh: ftrace: Remove check of obsolete variable function_trace_stop
  sparc64,ftrace: Remove check of obsolete variable function_trace_stop
  tile: ftrace: Remove check of obsolete variable function_trace_stop
  ftrace: x86: Remove check of obsolete variable function_trace_stop
  ...
torvalds committed Aug 4, 2014
2 parents c7ed326 + dc6f03f commit b8c0aa4
Showing 50 changed files with 1,032 additions and 693 deletions.
6 changes: 6 additions & 0 deletions Documentation/kernel-parameters.txt
@@ -1097,6 +1097,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
that can be changed at run time by the
set_graph_function file in the debugfs tracing directory.

ftrace_graph_notrace=[function-list]
[FTRACE] Do not trace from the functions specified in
function-list. This list is a comma separated list of
functions that can be changed at run time by the
set_graph_notrace file in the debugfs tracing directory.

gamecon.map[2|3]=
[HW,JOY] Multisystem joystick and NES/SNES/PSX pad
support via parallel port (up to 5 devices per port)
26 changes: 0 additions & 26 deletions Documentation/trace/ftrace-design.txt
@@ -102,30 +102,6 @@ extern void mcount(void);
EXPORT_SYMBOL(mcount);


HAVE_FUNCTION_TRACE_MCOUNT_TEST
-------------------------------

This is an optional optimization for the normal case when tracing is turned off
in the system. If you do not enable this Kconfig option, the common ftrace
code will take care of doing the checking for you.

To support this feature, you only need to check the function_trace_stop
variable in the mcount function. If it is non-zero, there is no tracing to be
done at all, so you can return.

This additional pseudo code would simply be:
void mcount(void)
{
/* save any bare state needed in order to do initial checking */

+ if (function_trace_stop)
+ return;

extern void (*ftrace_trace_function)(unsigned long, unsigned long);
if (ftrace_trace_function != ftrace_stub)
...


HAVE_FUNCTION_GRAPH_TRACER
--------------------------

@@ -328,8 +304,6 @@ void mcount(void)

void ftrace_caller(void)
{
/* implement HAVE_FUNCTION_TRACE_MCOUNT_TEST if you desire */

/* save all state needed by the ABI (see paragraph above) */

unsigned long frompc = ...;
5 changes: 0 additions & 5 deletions arch/arm64/kernel/entry-ftrace.S
@@ -96,11 +96,6 @@
* - ftrace_graph_caller to set up an exit hook
*/
ENTRY(_mcount)
#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
ldr x0, =ftrace_trace_stop
ldr x0, [x0] // if ftrace_trace_stop
ret // return;
#endif
mcount_enter

ldr x0, =ftrace_trace_function
1 change: 0 additions & 1 deletion arch/blackfin/Kconfig
@@ -18,7 +18,6 @@ config BLACKFIN
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_IDE
select HAVE_KERNEL_GZIP if RAMKERNEL
select HAVE_KERNEL_BZIP2 if RAMKERNEL
18 changes: 0 additions & 18 deletions arch/blackfin/kernel/ftrace-entry.S
@@ -33,15 +33,6 @@ ENDPROC(__mcount)
* function will be waiting there. mmmm pie.
*/
ENTRY(_ftrace_caller)
# ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
/* optional micro optimization: return if stopped */
p1.l = _function_trace_stop;
p1.h = _function_trace_stop;
r3 = [p1];
cc = r3 == 0;
if ! cc jump _ftrace_stub (bp);
# endif

/* save first/second/third function arg and the return register */
[--sp] = r2;
[--sp] = r0;
@@ -83,15 +74,6 @@ ENDPROC(_ftrace_caller)

/* See documentation for _ftrace_caller */
ENTRY(__mcount)
# ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
/* optional micro optimization: return if stopped */
p1.l = _function_trace_stop;
p1.h = _function_trace_stop;
r3 = [p1];
cc = r3 == 0;
if ! cc jump _ftrace_stub (bp);
# endif

/* save third function arg early so we can do testing below */
[--sp] = r2;

1 change: 0 additions & 1 deletion arch/metag/Kconfig
@@ -13,7 +13,6 @@ config METAG
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZO
14 changes: 0 additions & 14 deletions arch/metag/kernel/ftrace_stub.S
@@ -16,13 +16,6 @@ _mcount_wrapper:
.global _ftrace_caller
.type _ftrace_caller,function
_ftrace_caller:
MOVT D0Re0,#HI(_function_trace_stop)
ADD D0Re0,D0Re0,#LO(_function_trace_stop)
GETD D0Re0,[D0Re0]
CMP D0Re0,#0
BEQ $Lcall_stub
MOV PC,D0.4
$Lcall_stub:
MSETL [A0StP], D0Ar6, D0Ar4, D0Ar2, D0.4
MOV D1Ar1, D0.4
MOV D0Ar2, D1RtP
@@ -42,13 +35,6 @@ _ftrace_call:
.global _mcount_wrapper
.type _mcount_wrapper,function
_mcount_wrapper:
MOVT D0Re0,#HI(_function_trace_stop)
ADD D0Re0,D0Re0,#LO(_function_trace_stop)
GETD D0Re0,[D0Re0]
CMP D0Re0,#0
BEQ $Lcall_mcount
MOV PC,D0.4
$Lcall_mcount:
MSETL [A0StP], D0Ar6, D0Ar4, D0Ar2, D0.4
MOV D1Ar1, D0.4
MOV D0Ar2, D1RtP
1 change: 0 additions & 1 deletion arch/microblaze/Kconfig
@@ -22,7 +22,6 @@ config MICROBLAZE
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_FUNCTION_TRACER
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
3 changes: 3 additions & 0 deletions arch/microblaze/kernel/ftrace.c
@@ -27,6 +27,9 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
unsigned long return_hooker = (unsigned long)
&return_to_handler;

if (unlikely(ftrace_graph_is_dead()))
return;

if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;

5 changes: 0 additions & 5 deletions arch/microblaze/kernel/mcount.S
@@ -91,11 +91,6 @@ ENTRY(ftrace_caller)
#endif /* CONFIG_DYNAMIC_FTRACE */
SAVE_REGS
swi r15, r1, 0;
/* MS: HAVE_FUNCTION_TRACE_MCOUNT_TEST begin of checking */
lwi r5, r0, function_trace_stop;
bneid r5, end;
nop;
/* MS: HAVE_FUNCTION_TRACE_MCOUNT_TEST end of checking */
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
#ifndef CONFIG_DYNAMIC_FTRACE
lwi r5, r0, ftrace_graph_return;
1 change: 0 additions & 1 deletion arch/mips/Kconfig
@@ -15,7 +15,6 @@ config MIPS
select HAVE_BPF_JIT if !CPU_MICROMIPS
select ARCH_HAVE_CUSTOM_GPIO_H
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_C_RECORDMCOUNT
3 changes: 3 additions & 0 deletions arch/mips/kernel/ftrace.c
@@ -302,6 +302,9 @@ void prepare_ftrace_return(unsigned long *parent_ra_addr, unsigned long self_ra,
&return_to_handler;
int faulted, insns;

if (unlikely(ftrace_graph_is_dead()))
return;

if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;

7 changes: 0 additions & 7 deletions arch/mips/kernel/mcount.S
@@ -74,10 +74,6 @@ _mcount:
#endif

/* When tracing is activated, it calls ftrace_caller+8 (aka here) */
lw t1, function_trace_stop
bnez t1, ftrace_stub
nop

MCOUNT_SAVE_REGS
#ifdef KBUILD_MCOUNT_RA_ADDRESS
PTR_S MCOUNT_RA_ADDRESS_REG, PT_R12(sp)
@@ -105,9 +101,6 @@ ftrace_stub:
#else /* ! CONFIG_DYNAMIC_FTRACE */

NESTED(_mcount, PT_SIZE, ra)
lw t1, function_trace_stop
bnez t1, ftrace_stub
nop
PTR_LA t1, ftrace_stub
PTR_L t2, ftrace_trace_function /* Prepare t2 for (1) */
bne t1, t2, static_trace
1 change: 0 additions & 1 deletion arch/parisc/Kconfig
@@ -6,7 +6,6 @@ config PARISC
select HAVE_OPROFILE
select HAVE_FUNCTION_TRACER if 64BIT
select HAVE_FUNCTION_GRAPH_TRACER if 64BIT
select HAVE_FUNCTION_TRACE_MCOUNT_TEST if 64BIT
select ARCH_WANT_FRAME_POINTERS
select RTC_CLASS
select RTC_DRV_GENERIC
6 changes: 3 additions & 3 deletions arch/parisc/kernel/ftrace.c
@@ -112,6 +112,9 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
unsigned long long calltime;
struct ftrace_graph_ent trace;

if (unlikely(ftrace_graph_is_dead()))
return;

if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;

@@ -152,9 +155,6 @@ void ftrace_function_trampoline(unsigned long parent,
{
extern ftrace_func_t ftrace_trace_function;

if (function_trace_stop)
return;

if (ftrace_trace_function != ftrace_stub) {
ftrace_trace_function(parent, self_addr);
return;
3 changes: 3 additions & 0 deletions arch/powerpc/kernel/ftrace.c
@@ -525,6 +525,9 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
struct ftrace_graph_ent trace;
unsigned long return_hooker = (unsigned long)&return_to_handler;

if (unlikely(ftrace_graph_is_dead()))
return;

if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;

1 change: 0 additions & 1 deletion arch/s390/Kconfig
@@ -116,7 +116,6 @@ config S390
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_FUTEX_CMPXCHG if FUTEX
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_GZIP
10 changes: 3 additions & 7 deletions arch/s390/kernel/mcount.S
@@ -21,13 +21,9 @@ ENTRY(_mcount)
ENTRY(ftrace_caller)
#endif
stm %r2,%r5,16(%r15)
bras %r1,2f
bras %r1,1f
0: .long ftrace_trace_function
1: .long function_trace_stop
2: l %r2,1b-0b(%r1)
icm %r2,0xf,0(%r2)
jnz 3f
st %r14,56(%r15)
1: st %r14,56(%r15)
lr %r0,%r15
ahi %r15,-96
l %r3,100(%r15)
@@ -50,7 +46,7 @@ ENTRY(ftrace_graph_caller)
#endif
ahi %r15,96
l %r14,56(%r15)
3: lm %r2,%r5,16(%r15)
lm %r2,%r5,16(%r15)
br %r14

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
3 changes: 0 additions & 3 deletions arch/s390/kernel/mcount64.S
@@ -20,9 +20,6 @@ ENTRY(_mcount)

ENTRY(ftrace_caller)
#endif
larl %r1,function_trace_stop
icm %r1,0xf,0(%r1)
bnzr %r14
stmg %r2,%r5,32(%r15)
stg %r14,112(%r15)
lgr %r1,%r15
1 change: 0 additions & 1 deletion arch/sh/Kconfig
@@ -57,7 +57,6 @@ config SUPERH32
select HAVE_FUNCTION_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_FTRACE_NMI_ENTER if DYNAMIC_FTRACE
select ARCH_WANT_IPC_PARSE_VERSION
select HAVE_FUNCTION_GRAPH_TRACER
3 changes: 3 additions & 0 deletions arch/sh/kernel/ftrace.c
@@ -344,6 +344,9 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
struct ftrace_graph_ent trace;
unsigned long return_hooker = (unsigned long)&return_to_handler;

if (unlikely(ftrace_graph_is_dead()))
return;

if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;

24 changes: 2 additions & 22 deletions arch/sh/lib/mcount.S
@@ -92,13 +92,6 @@ mcount:
rts
nop
#else
#ifndef CONFIG_DYNAMIC_FTRACE
mov.l .Lfunction_trace_stop, r0
mov.l @r0, r0
tst r0, r0
bf ftrace_stub
#endif

MCOUNT_ENTER()

#ifdef CONFIG_DYNAMIC_FTRACE
@@ -174,11 +167,6 @@ ftrace_graph_call:

.globl ftrace_caller
ftrace_caller:
mov.l .Lfunction_trace_stop, r0
mov.l @r0, r0
tst r0, r0
bf ftrace_stub

MCOUNT_ENTER()

.globl ftrace_call
@@ -196,8 +184,6 @@ ftrace_call:
#endif /* CONFIG_DYNAMIC_FTRACE */

.align 2
.Lfunction_trace_stop:
.long function_trace_stop

/*
* NOTE: From here on the locations of the .Lftrace_stub label and
@@ -217,12 +203,7 @@ ftrace_stub:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.globl ftrace_graph_caller
ftrace_graph_caller:
mov.l 2f, r0
mov.l @r0, r0
tst r0, r0
bt 1f

mov.l 3f, r1
mov.l 2f, r1
jmp @r1
nop
1:
@@ -242,8 +223,7 @@ ftrace_graph_caller:
MCOUNT_LEAVE()

.align 2
2: .long function_trace_stop
3: .long skip_trace
2: .long skip_trace
.Lprepare_ftrace_return:
.long prepare_ftrace_return

1 change: 0 additions & 1 deletion arch/sparc/Kconfig
@@ -55,7 +55,6 @@ config SPARC64
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_FP_TEST
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_KRETPROBES
select HAVE_KPROBES
select HAVE_RCU_TABLE_FREE if SMP
...
