// SPDX-License-Identifier: GPL-2.0
/*
 * Dynamic function tracer architecture backend.
 *
 * Copyright IBM Corp. 2009,2014
 *
 * Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>,
 *	      Martin Schwidefsky <schwidefsky@de.ibm.com>
 */

#include <linux/moduleloader.h>
#include <linux/hardirq.h>
#include <linux/uaccess.h>
#include <linux/ftrace.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/kprobes.h>
#include <trace/syscall.h>
#include <asm/asm-offsets.h>
#include <asm/cacheflush.h>
#include <asm/set_memory.h>
#include "entry.h"

/*
 * To generate the function prologue either gcc's hotpatch feature (since
 * gcc 4.8) or a combination of the -pg -mrecord-mcount -mnop-mcount -mfentry
 * flags (since gcc 9 / clang 10) is used.
 * In both cases the original, and also the disabled, function prologue
 * contains only a single six byte instruction and looks like this:
 * >	brcl	0,0			# offset 0
 * To enable ftrace the instruction gets patched and the prologue afterwards
 * looks like this:
 * >	brasl	%r0,ftrace_caller	# offset 0
 *
 * The instruction is patched by ftrace_make_call / ftrace_make_nop.
 * The ftrace function gets called with a non-standard C function call ABI
 * where r0 contains the return address. It is also expected that the called
 * function only clobbers r0 and r1, but restores r2-r15.
 * For module code we can't jump to the ftrace caller directly, but need a
 * trampoline (ftrace_plt), which also clobbers r1.
 */

unsigned long ftrace_plt;

int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
		       unsigned long addr)
{
	return 0;
}
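
/*
 * Note: struct ftrace_insn and the two instruction generation helpers used
 * below are not defined in this file. For reference, a minimal sketch,
 * assuming the definitions live in arch/s390/include/asm/ftrace.h and use
 * the six byte brcl/brasl encoding described above (16 bit opcode plus a
 * signed 32 bit displacement counted in halfwords):
 *
 *	struct ftrace_insn {
 *		u16 opc;
 *		s32 disp;
 *	} __packed;
 *
 *	static inline void ftrace_generate_nop_insn(struct ftrace_insn *insn)
 *	{
 *		insn->opc = 0xc004;	// brcl 0,0
 *		insn->disp = 0;
 *	}
 *
 *	static inline void ftrace_generate_call_insn(struct ftrace_insn *insn,
 *						     unsigned long ip)
 *	{
 *		// brasl %r0,target: kernel text branches to ftrace_caller
 *		// directly, module text goes via the ftrace_plt trampoline.
 *		unsigned long target;
 *
 *		target = is_module_addr((void *) ip) ? ftrace_plt : FTRACE_ADDR;
 *		insn->opc = 0xc005;
 *		insn->disp = (target - ip) / 2;
 *	}
 */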

int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
		    unsigned long addr)
{
	struct ftrace_insn orig, new, old;

	if (copy_from_kernel_nofault(&old, (void *) rec->ip, sizeof(old)))
		return -EFAULT;
	/* Replace ftrace call with a nop. */
	ftrace_generate_call_insn(&orig, rec->ip);
	ftrace_generate_nop_insn(&new);

	/* Verify that the to be replaced code matches what we expect. */
	if (memcmp(&orig, &old, sizeof(old)))
		return -EINVAL;
	s390_kernel_write((void *) rec->ip, &new, sizeof(new));
	return 0;
}

int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
	struct ftrace_insn orig, new, old;

	if (copy_from_kernel_nofault(&old, (void *) rec->ip, sizeof(old)))
		return -EFAULT;
	/* Replace nop with an ftrace call. */
	ftrace_generate_nop_insn(&orig);
	ftrace_generate_call_insn(&new, rec->ip);

	/* Verify that the to be replaced code matches what we expect. */
	if (memcmp(&orig, &old, sizeof(old)))
		return -EINVAL;
	s390_kernel_write((void *) rec->ip, &new, sizeof(new));
	return 0;
}

int ftrace_update_ftrace_func(ftrace_func_t func)
{
	return 0;
}

int __init ftrace_dyn_arch_init(void)
{
	return 0;
}

#ifdef CONFIG_MODULES

static int __init ftrace_plt_init(void)
{
	unsigned int *ip;

	ftrace_plt = (unsigned long) module_alloc(PAGE_SIZE);
	if (!ftrace_plt)
		panic("cannot allocate ftrace plt\n");
	ip = (unsigned int *) ftrace_plt;
	ip[0] = 0x0d10e310;	/* basr 1,0; lg 1,10(1); br 1 */
	ip[1] = 0x100a0004;
	ip[2] = 0x07f10000;
	ip[3] = FTRACE_ADDR >> 32;
	ip[4] = FTRACE_ADDR & 0xffffffff;
	set_memory_ro(ftrace_plt, 1);
	return 0;
}
device_initcall(ftrace_plt_init);

#endif /* CONFIG_MODULES */

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
 * Hook the return address and push it in the stack of return addresses
 * in current thread info.
 */
unsigned long prepare_ftrace_return(unsigned long ra, unsigned long sp,
				    unsigned long ip)
{
	if (unlikely(ftrace_graph_is_dead()))
		goto out;
	if (unlikely(atomic_read(&current->tracing_graph_pause)))
		goto out;
	ip -= MCOUNT_INSN_SIZE;
	if (!function_graph_enter(ra, ip, 0, (void *) sp))
		ra = (unsigned long) return_to_handler;
out:
	return ra;
}
NOKPROBE_SYMBOL(prepare_ftrace_return);

/*
 * Patch the kernel code at the ftrace_graph_caller location. The instruction
 * there is branch relative on condition. To enable the ftrace graph code
 * block, we simply patch the mask field of the instruction to zero and
 * turn the instruction into a nop.
 * To disable the ftrace graph code the mask field will be patched to
 * all ones, which turns the instruction into an unconditional branch.
 */
int ftrace_enable_ftrace_graph_caller(void)
{
	u8 op = 0x04; /* set mask field to zero */

	s390_kernel_write(__va(ftrace_graph_caller) + 1, &op, sizeof(op));
	return 0;
}

int ftrace_disable_ftrace_graph_caller(void)
{
	u8 op = 0xf4; /* set mask field to all ones */

	s390_kernel_write(__va(ftrace_graph_caller) + 1, &op, sizeof(op));
	return 0;
}

#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

#ifdef CONFIG_KPROBES_ON_FTRACE
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
		struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	struct kprobe_ctlblk *kcb;
	struct pt_regs *regs;
	struct kprobe *p;
	int bit;

	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;

	regs = ftrace_get_regs(fregs);
	preempt_disable_notrace();
	p = get_kprobe((kprobe_opcode_t *)ip);
	if (unlikely(!p) || kprobe_disabled(p))
		goto out;

	if (kprobe_running()) {
		kprobes_inc_nmissed_count(p);
		goto out;
	}

	__this_cpu_write(current_kprobe, p);

	kcb = get_kprobe_ctlblk();
	kcb->kprobe_status = KPROBE_HIT_ACTIVE;

	instruction_pointer_set(regs, ip);

	if (!p->pre_handler || !p->pre_handler(p, regs)) {
		instruction_pointer_set(regs, ip + MCOUNT_INSN_SIZE);

		if (unlikely(p->post_handler)) {
			kcb->kprobe_status = KPROBE_HIT_SSDONE;
			p->post_handler(p, regs, 0);
		}
	}
	__this_cpu_write(current_kprobe, NULL);
out:
	preempt_enable_notrace();
	ftrace_test_recursion_unlock(bit);
}
NOKPROBE_SYMBOL(kprobe_ftrace_handler);

int arch_prepare_kprobe_ftrace(struct kprobe *p)
{
	p->ainsn.insn = NULL;
	return 0;
}
#endif
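
/*
 * Usage illustration: with CONFIG_KPROBES_ON_FTRACE a kprobe placed on the
 * entry of a traced function is dispatched through kprobe_ftrace_handler()
 * above instead of via a breakpoint instruction. A minimal sketch, assuming
 * "do_sys_open" is built into the kernel; my_pre_handler is a placeholder
 * for a caller-supplied handler:
 *
 *	static struct kprobe kp = {
 *		.symbol_name	= "do_sys_open",
 *		.pre_handler	= my_pre_handler,	// hypothetical
 *	};
 *
 *	register_kprobe(&kp);	// backed by the patched ftrace site
 *	...
 *	unregister_kprobe(&kp);
 */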