Probes fixes for v7.1

- fprobe: Fixed several bugs, including the following:
   . Prevention of re-registration: Added an earlier check to reject
    re-registering an already active fprobe before its state is modified
    during the initialization phase.
   . Robustness in failure paths:
    - Ensured fprobes are correctly removed from all internal tables and
     properly RCU-freed during registration failure.
    - Modified unregister_fprobe() to proceed with unregistration even if
     temporary memory allocation fails.
   . RCU safety in module unloading: Avoided a potential "sleep in RCU"
    warning by removing a kcalloc() call from the module notifier path.
    The notifier now also removes the fprobe_hash_node even if memory
    allocation fails.
   . Type-aware unregistration: Fixed a bug where unregistering an fprobe
    did not account for different types (entry-only vs. entry-exit) at the
    same address, which previously left "junk" entries in the underlying
    ftrace/fgraph ops.
   . Unregistration of empty ftrace_ops: Avoided unneeded performance
    overhead caused by leaving a registered ftrace_ops empty (which means
    "trace all functions"). The fix counts the remaining entries and
    unregisters the ftrace_ops when it becomes empty.
 
  - ftracetest: In addition, two new selftests that verify the above
    fixes are included:
    . Module Unloading Test: Specifically verifies that fprobe events on
     a module are correctly cleaned up and do not trigger "trace-all"
     behavior when the module is removed.
    . Multiple Fprobe Events Test: Ensures that, with multiple fprobes on
     the same function, the ftrace hash map is managed correctly during
     removal.
 -----BEGIN PGP SIGNATURE-----
 
 iQFPBAABCgA5FiEEh7BulGwFlgAOi5DV2/sHvwUrPxsFAmnoGFUbHG1hc2FtaS5o
 aXJhbWF0c3VAZ21haWwuY29tAAoJENv7B78FKz8bvR8H/3f+lFlflidmqgjW+dy5
 cB73jVVznarkvVr2AdR1KpNv3V9qjLCWAiUM6sDI79mbMNbOKh44EkEX9P5actNq
 3e4NofYr8TyfnJfD6tnrevjhZspCbvRWKVc2+gY9EIi9XUxSRjGuv/5VkIa5HMAN
 3Zg++U0mYBTEAn59nGR++GakMuF/UJChOC5eVKpVYjtRLQ+TzWe6nwBhXj6PrhVT
 OjZN4LHLH/Eze+Suk2duBDt0RQhfPqiz8yW7eP6TzxkS/DN+hmgoQ/zqFjndPsmc
 1TnJ1iJLJ5vo0SqtuCKDv9OTc1oclMCYYdzCxEOnrfFXX4BXxGjzcEd7sjINHMKd
 kBY=
 =9oav
 -----END PGP SIGNATURE-----

Merge tag 'probes-v7.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull probes fixes from Masami Hiramatsu:
 "fprobe bug fixes:

   - Prevent re-registration

     Add an earlier check to reject re-registering an already active
     fprobe before its state is modified during the initialization phase

   - Robustness in failure paths:
      - Ensure fprobes are correctly removed from all internal tables
        and properly RCU-freed during registration failure
      - Make unregister_fprobe() proceed with unregistration even if
        temporary memory allocation fails

   - RCU safety in module unloading

     Avoid a potential "sleep in RCU" warning by removing a kcalloc()
     call from the module notifier path. The notifier now also removes
     the fprobe_hash_node even if memory allocation fails.

   - Type-aware unregistration

     Fix a bug where unregistering an fprobe did not account for
     different types (entry-only vs entry-exit) at the same address,
     which previously left "junk" entries in the underlying
     ftrace/fgraph ops

   - Unregistration of empty ftrace_ops

     Avoid unneeded performance overhead caused by leaving a registered
     ftrace_ops empty - which means 'trace all functions'. The fix counts
     the remaining entries and unregisters the ftrace_ops when it becomes
     empty.

  Two new selftests to check the above fixes:

   - Module Unloading Test:

     Specifically verifies that fprobe events on a module are correctly
     cleaned up and do not trigger 'trace-all' behavior when the module
     is removed.

   - Multiple Fprobe Events Test:

     Ensures that, with multiple fprobes on the same function, the ftrace
     hash map is managed correctly during removal"

* tag 'probes-v7.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  selftests/ftrace: Add a testcase for multiple fprobe events
  selftests/ftrace: Add a testcase for fprobe events on module
  tracing/fprobe: Fix to unregister ftrace_ops if it is empty on module unloading
  tracing/fprobe: Check the same type fprobe on table as the unregistered one
  tracing/fprobe: Avoid kcalloc() in rcu_read_lock section
  tracing/fprobe: Remove fprobe from hash in failure path
  tracing/fprobe: Unregister fprobe even if memory allocation fails
  tracing/fprobe: Reject registration of a registered fprobe before init
This commit is contained in:
Linus Torvalds 2026-04-21 19:05:09 -07:00
commit beaba8bfbb
3 changed files with 471 additions and 167 deletions


@@ -4,6 +4,7 @@
*/
#define pr_fmt(fmt) "fprobe: " fmt
#include <linux/cleanup.h>
#include <linux/err.h>
#include <linux/fprobe.h>
#include <linux/kallsyms.h>
@@ -78,36 +79,33 @@ static const struct rhashtable_params fprobe_rht_params = {
};
/* Node insertion and deletion requires the fprobe_mutex */
static int insert_fprobe_node(struct fprobe_hlist_node *node)
static int __insert_fprobe_node(struct fprobe_hlist_node *node, struct fprobe *fp)
{
int ret;
lockdep_assert_held(&fprobe_mutex);
return rhltable_insert(&fprobe_ip_table, &node->hlist, fprobe_rht_params);
ret = rhltable_insert(&fprobe_ip_table, &node->hlist, fprobe_rht_params);
/* Set the fprobe pointer if insertion was successful. */
if (!ret)
WRITE_ONCE(node->fp, fp);
return ret;
}
/* Return true if there are synonims */
static bool delete_fprobe_node(struct fprobe_hlist_node *node)
static void __delete_fprobe_node(struct fprobe_hlist_node *node)
{
lockdep_assert_held(&fprobe_mutex);
bool ret;
/* Avoid double deleting */
/* Avoid double deleting and non-inserted nodes */
if (READ_ONCE(node->fp) != NULL) {
WRITE_ONCE(node->fp, NULL);
rhltable_remove(&fprobe_ip_table, &node->hlist,
fprobe_rht_params);
}
rcu_read_lock();
ret = !!rhltable_lookup(&fprobe_ip_table, &node->addr,
fprobe_rht_params);
rcu_read_unlock();
return ret;
}
/* Check existence of the fprobe */
static bool is_fprobe_still_exist(struct fprobe *fp)
static bool fprobe_registered(struct fprobe *fp)
{
struct hlist_head *head;
struct fprobe_hlist *fph;
@@ -120,7 +118,7 @@ static bool is_fprobe_still_exist(struct fprobe *fp)
}
return false;
}
NOKPROBE_SYMBOL(is_fprobe_still_exist);
NOKPROBE_SYMBOL(fprobe_registered);
static int add_fprobe_hash(struct fprobe *fp)
{
@@ -132,9 +130,6 @@ static int add_fprobe_hash(struct fprobe *fp)
if (WARN_ON_ONCE(!fph))
return -EINVAL;
if (is_fprobe_still_exist(fp))
return -EEXIST;
head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)];
hlist_add_head_rcu(&fp->hlist_array->hlist, head);
return 0;
@@ -149,7 +144,7 @@ static int del_fprobe_hash(struct fprobe *fp)
if (WARN_ON_ONCE(!fph))
return -EINVAL;
if (!is_fprobe_still_exist(fp))
if (!fprobe_registered(fp))
return -ENOENT;
fph->fp = NULL;
@@ -255,7 +250,65 @@ static inline int __fprobe_kprobe_handler(unsigned long ip, unsigned long parent
return ret;
}
static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops,
struct ftrace_regs *fregs);
static void fprobe_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops,
struct ftrace_regs *fregs);
static struct fgraph_ops fprobe_graph_ops = {
.entryfunc = fprobe_fgraph_entry,
.retfunc = fprobe_return,
};
/* Number of fgraph fprobe nodes */
static int nr_fgraph_fprobes;
/* Is fprobe_graph_ops registered? */
static bool fprobe_graph_registered;
/* Add @addrs to the ftrace filter and register fgraph if needed. */
static int fprobe_graph_add_ips(unsigned long *addrs, int num)
{
int ret;
lockdep_assert_held(&fprobe_mutex);
ret = ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 0, 0);
if (ret)
return ret;
if (!fprobe_graph_registered) {
ret = register_ftrace_graph(&fprobe_graph_ops);
if (WARN_ON_ONCE(ret)) {
ftrace_free_filter(&fprobe_graph_ops.ops);
return ret;
}
fprobe_graph_registered = true;
}
return 0;
}
static void __fprobe_graph_unregister(void)
{
if (fprobe_graph_registered) {
unregister_ftrace_graph(&fprobe_graph_ops);
ftrace_free_filter(&fprobe_graph_ops.ops);
fprobe_graph_registered = false;
}
}
/* Remove @addrs from the ftrace filter and unregister fgraph if possible. */
static void fprobe_graph_remove_ips(unsigned long *addrs, int num)
{
lockdep_assert_held(&fprobe_mutex);
if (!nr_fgraph_fprobes)
__fprobe_graph_unregister();
else if (num)
ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0);
}
#if defined(CONFIG_DYNAMIC_FTRACE_WITH_ARGS) || defined(CONFIG_DYNAMIC_FTRACE_WITH_REGS)
/* ftrace_ops callback, this processes fprobes which have only entry_handler. */
static void fprobe_ftrace_entry(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
@@ -298,7 +351,10 @@ static struct ftrace_ops fprobe_ftrace_ops = {
.func = fprobe_ftrace_entry,
.flags = FTRACE_OPS_FL_SAVE_ARGS,
};
static int fprobe_ftrace_active;
/* Number of ftrace fprobe nodes */
static int nr_ftrace_fprobes;
/* Is fprobe_ftrace_ops registered? */
static bool fprobe_ftrace_registered;
static int fprobe_ftrace_add_ips(unsigned long *addrs, int num)
{
@@ -310,25 +366,33 @@ static int fprobe_ftrace_add_ips(unsigned long *addrs, int num)
if (ret)
return ret;
if (!fprobe_ftrace_active) {
if (!fprobe_ftrace_registered) {
ret = register_ftrace_function(&fprobe_ftrace_ops);
if (ret) {
ftrace_free_filter(&fprobe_ftrace_ops);
return ret;
}
fprobe_ftrace_registered = true;
}
fprobe_ftrace_active++;
return 0;
}
static void __fprobe_ftrace_unregister(void)
{
if (fprobe_ftrace_registered) {
unregister_ftrace_function(&fprobe_ftrace_ops);
ftrace_free_filter(&fprobe_ftrace_ops);
fprobe_ftrace_registered = false;
}
}
static void fprobe_ftrace_remove_ips(unsigned long *addrs, int num)
{
lockdep_assert_held(&fprobe_mutex);
fprobe_ftrace_active--;
if (!fprobe_ftrace_active)
unregister_ftrace_function(&fprobe_ftrace_ops);
if (num)
if (!nr_ftrace_fprobes)
__fprobe_ftrace_unregister();
else if (num)
ftrace_set_filter_ips(&fprobe_ftrace_ops, addrs, num, 1, 0);
}
@@ -337,12 +401,78 @@ static bool fprobe_is_ftrace(struct fprobe *fp)
return !fp->exit_handler;
}
#ifdef CONFIG_MODULES
static void fprobe_set_ips(unsigned long *ips, unsigned int cnt, int remove,
int reset)
/* Node insertion and deletion requires the fprobe_mutex */
static int insert_fprobe_node(struct fprobe_hlist_node *node, struct fprobe *fp)
{
ftrace_set_filter_ips(&fprobe_graph_ops.ops, ips, cnt, remove, reset);
ftrace_set_filter_ips(&fprobe_ftrace_ops, ips, cnt, remove, reset);
int ret;
lockdep_assert_held(&fprobe_mutex);
ret = __insert_fprobe_node(node, fp);
if (!ret) {
if (fprobe_is_ftrace(fp))
nr_ftrace_fprobes++;
else
nr_fgraph_fprobes++;
}
return ret;
}
static void delete_fprobe_node(struct fprobe_hlist_node *node)
{
struct fprobe *fp;
lockdep_assert_held(&fprobe_mutex);
fp = READ_ONCE(node->fp);
if (fp) {
if (fprobe_is_ftrace(fp))
nr_ftrace_fprobes--;
else
nr_fgraph_fprobes--;
}
__delete_fprobe_node(node);
}
static bool fprobe_exists_on_hash(unsigned long ip, bool ftrace)
{
struct rhlist_head *head, *pos;
struct fprobe_hlist_node *node;
struct fprobe *fp;
guard(rcu)();
head = rhltable_lookup(&fprobe_ip_table, &ip,
fprobe_rht_params);
if (!head)
return false;
/* We have to check the same type on the list. */
rhl_for_each_entry_rcu(node, pos, head, hlist) {
if (node->addr != ip)
break;
fp = READ_ONCE(node->fp);
if (likely(fp)) {
if ((!ftrace && fp->exit_handler) ||
(ftrace && !fp->exit_handler))
return true;
}
}
return false;
}
#ifdef CONFIG_MODULES
static void fprobe_remove_ips(unsigned long *ips, unsigned int cnt)
{
if (!nr_fgraph_fprobes)
__fprobe_graph_unregister();
else if (cnt)
ftrace_set_filter_ips(&fprobe_graph_ops.ops, ips, cnt, 1, 0);
if (!nr_ftrace_fprobes)
__fprobe_ftrace_unregister();
else if (cnt)
ftrace_set_filter_ips(&fprobe_ftrace_ops, ips, cnt, 1, 0);
}
#endif
#else
@@ -360,11 +490,62 @@ static bool fprobe_is_ftrace(struct fprobe *fp)
return false;
}
#ifdef CONFIG_MODULES
static void fprobe_set_ips(unsigned long *ips, unsigned int cnt, int remove,
int reset)
/* Node insertion and deletion requires the fprobe_mutex */
static int insert_fprobe_node(struct fprobe_hlist_node *node, struct fprobe *fp)
{
ftrace_set_filter_ips(&fprobe_graph_ops.ops, ips, cnt, remove, reset);
int ret;
lockdep_assert_held(&fprobe_mutex);
ret = __insert_fprobe_node(node, fp);
if (!ret)
nr_fgraph_fprobes++;
return ret;
}
static void delete_fprobe_node(struct fprobe_hlist_node *node)
{
struct fprobe *fp;
lockdep_assert_held(&fprobe_mutex);
fp = READ_ONCE(node->fp);
if (fp)
nr_fgraph_fprobes--;
__delete_fprobe_node(node);
}
static bool fprobe_exists_on_hash(unsigned long ip, bool ftrace __maybe_unused)
{
struct rhlist_head *head, *pos;
struct fprobe_hlist_node *node;
struct fprobe *fp;
guard(rcu)();
head = rhltable_lookup(&fprobe_ip_table, &ip,
fprobe_rht_params);
if (!head)
return false;
/* We only need to check fp is there. */
rhl_for_each_entry_rcu(node, pos, head, hlist) {
if (node->addr != ip)
break;
fp = READ_ONCE(node->fp);
if (likely(fp))
return true;
}
return false;
}
#ifdef CONFIG_MODULES
static void fprobe_remove_ips(unsigned long *ips, unsigned int cnt)
{
if (!nr_fgraph_fprobes)
__fprobe_graph_unregister();
else if (cnt)
ftrace_set_filter_ips(&fprobe_graph_ops.ops, ips, cnt, 1, 0);
}
#endif
#endif /* !CONFIG_DYNAMIC_FTRACE_WITH_ARGS && !CONFIG_DYNAMIC_FTRACE_WITH_REGS */
@@ -480,7 +661,7 @@ static void fprobe_return(struct ftrace_graph_ret *trace,
if (!fp)
break;
curr += FPROBE_HEADER_SIZE_IN_LONG;
if (is_fprobe_still_exist(fp) && !fprobe_disabled(fp)) {
if (fprobe_registered(fp) && !fprobe_disabled(fp)) {
if (WARN_ON_ONCE(curr + size > size_words))
break;
fp->exit_handler(fp, trace->func, ret_ip, fregs,
@@ -492,51 +673,9 @@ static void fprobe_return(struct ftrace_graph_ret *trace,
}
NOKPROBE_SYMBOL(fprobe_return);
static struct fgraph_ops fprobe_graph_ops = {
.entryfunc = fprobe_fgraph_entry,
.retfunc = fprobe_return,
};
static int fprobe_graph_active;
/* Add @addrs to the ftrace filter and register fgraph if needed. */
static int fprobe_graph_add_ips(unsigned long *addrs, int num)
{
int ret;
lockdep_assert_held(&fprobe_mutex);
ret = ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 0, 0);
if (ret)
return ret;
if (!fprobe_graph_active) {
ret = register_ftrace_graph(&fprobe_graph_ops);
if (WARN_ON_ONCE(ret)) {
ftrace_free_filter(&fprobe_graph_ops.ops);
return ret;
}
}
fprobe_graph_active++;
return 0;
}
/* Remove @addrs from the ftrace filter and unregister fgraph if possible. */
static void fprobe_graph_remove_ips(unsigned long *addrs, int num)
{
lockdep_assert_held(&fprobe_mutex);
fprobe_graph_active--;
/* Q: should we unregister it ? */
if (!fprobe_graph_active)
unregister_ftrace_graph(&fprobe_graph_ops);
if (num)
ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0);
}
#ifdef CONFIG_MODULES
#define FPROBE_IPS_BATCH_INIT 8
#define FPROBE_IPS_BATCH_INIT 128
/* instruction pointer address list */
struct fprobe_addr_list {
int index;
@@ -544,43 +683,29 @@ struct fprobe_addr_list {
unsigned long *addrs;
};
static int fprobe_addr_list_add(struct fprobe_addr_list *alist, unsigned long addr)
{
unsigned long *addrs;
/* Previously we failed to expand the list. */
if (alist->index == alist->size)
return -ENOSPC;
alist->addrs[alist->index++] = addr;
if (alist->index < alist->size)
return 0;
/* Expand the address list */
addrs = kcalloc(alist->size * 2, sizeof(*addrs), GFP_KERNEL);
if (!addrs)
return -ENOMEM;
memcpy(addrs, alist->addrs, alist->size * sizeof(*addrs));
alist->size *= 2;
kfree(alist->addrs);
alist->addrs = addrs;
return 0;
}
static void fprobe_remove_node_in_module(struct module *mod, struct fprobe_hlist_node *node,
static int fprobe_remove_node_in_module(struct module *mod, struct fprobe_hlist_node *node,
struct fprobe_addr_list *alist)
{
lockdep_assert_in_rcu_read_lock();
if (!within_module(node->addr, mod))
return;
if (delete_fprobe_node(node))
return;
return 0;
delete_fprobe_node(node);
/* If no address list is available, we can't track this address. */
if (!alist->addrs)
return 0;
/*
* If failed to update alist, just continue to update hlist.
* Therefore, at list user handler will not hit anymore.
* Don't care the type here, because all fprobes on the same
* address must be removed eventually.
*/
fprobe_addr_list_add(alist, node->addr);
if (!rhltable_lookup(&fprobe_ip_table, &node->addr, fprobe_rht_params)) {
alist->addrs[alist->index++] = node->addr;
if (alist->index == alist->size)
return -ENOSPC;
}
return 0;
}
/* Handle module unloading to manage fprobe_ip_table. */
@@ -591,29 +716,48 @@ static int fprobe_module_callback(struct notifier_block *nb,
struct fprobe_hlist_node *node;
struct rhashtable_iter iter;
struct module *mod = data;
bool retry;
if (val != MODULE_STATE_GOING)
return NOTIFY_DONE;
alist.addrs = kcalloc(alist.size, sizeof(*alist.addrs), GFP_KERNEL);
/* If failed to alloc memory, we can not remove ips from hash. */
if (!alist.addrs)
return NOTIFY_DONE;
/*
* If failed to alloc memory, ftrace_ops will not be able to remove ips from
* hash, but we can still remove nodes from fprobe_ip_table, so we can avoid
* the potential wrong callback. So just print a warning here and try to
* continue without address list.
*/
WARN_ONCE(!alist.addrs,
"Failed to allocate memory for fprobe_addr_list, ftrace_ops will not be updated");
mutex_lock(&fprobe_mutex);
again:
retry = false;
alist.index = 0;
rhltable_walk_enter(&fprobe_ip_table, &iter);
do {
rhashtable_walk_start(&iter);
while ((node = rhashtable_walk_next(&iter)) && !IS_ERR(node))
fprobe_remove_node_in_module(mod, node, &alist);
if (fprobe_remove_node_in_module(mod, node, &alist) < 0) {
retry = true;
break;
}
rhashtable_walk_stop(&iter);
} while (node == ERR_PTR(-EAGAIN));
} while (node == ERR_PTR(-EAGAIN) && !retry);
rhashtable_walk_exit(&iter);
/* Remove any ips from hash table(s) */
fprobe_remove_ips(alist.addrs, alist.index);
/*
* If we break rhashtable walk loop except for -EAGAIN, we need
* to restart looping from start for safety. Anyway, this is
* not a hotpath.
*/
if (retry)
goto again;
if (alist.index > 0)
fprobe_set_ips(alist.addrs, alist.index, 1, 0);
mutex_unlock(&fprobe_mutex);
kfree(alist.addrs);
@@ -757,7 +901,6 @@ static int fprobe_init(struct fprobe *fp, unsigned long *addrs, int num)
fp->hlist_array = hlist_array;
hlist_array->fp = fp;
for (i = 0; i < num; i++) {
hlist_array->array[i].fp = fp;
addr = ftrace_location(addrs[i]);
if (!addr) {
fprobe_fail_cleanup(fp);
@@ -821,6 +964,8 @@ int register_fprobe(struct fprobe *fp, const char *filter, const char *notfilter
}
EXPORT_SYMBOL_GPL(register_fprobe);
static int unregister_fprobe_nolock(struct fprobe *fp);
/**
* register_fprobe_ips() - Register fprobe to ftrace by address.
* @fp: A fprobe data structure to be registered.
@@ -839,35 +984,33 @@ int register_fprobe_ips(struct fprobe *fp, unsigned long *addrs, int num)
struct fprobe_hlist *hlist_array;
int ret, i;
guard(mutex)(&fprobe_mutex);
if (fprobe_registered(fp))
return -EEXIST;
ret = fprobe_init(fp, addrs, num);
if (ret)
return ret;
mutex_lock(&fprobe_mutex);
hlist_array = fp->hlist_array;
if (fprobe_is_ftrace(fp))
ret = fprobe_ftrace_add_ips(addrs, num);
else
ret = fprobe_graph_add_ips(addrs, num);
if (!ret) {
add_fprobe_hash(fp);
for (i = 0; i < hlist_array->size; i++) {
ret = insert_fprobe_node(&hlist_array->array[i]);
if (ret)
break;
}
/* fallback on insert error */
if (ret) {
for (i--; i >= 0; i--)
delete_fprobe_node(&hlist_array->array[i]);
}
}
mutex_unlock(&fprobe_mutex);
if (ret)
if (ret) {
fprobe_fail_cleanup(fp);
return ret;
}
hlist_array = fp->hlist_array;
ret = add_fprobe_hash(fp);
for (i = 0; i < hlist_array->size && !ret; i++)
ret = insert_fprobe_node(&hlist_array->array[i], fp);
if (ret) {
unregister_fprobe_nolock(fp);
/* In error case, wait for clean up safely. */
synchronize_rcu();
}
return ret;
}
@@ -911,37 +1054,28 @@ bool fprobe_is_registered(struct fprobe *fp)
return true;
}
/**
* unregister_fprobe() - Unregister fprobe.
* @fp: A fprobe data structure to be unregistered.
*
* Unregister fprobe (and remove ftrace hooks from the function entries).
*
* Return 0 if @fp is unregistered successfully, -errno if not.
*/
int unregister_fprobe(struct fprobe *fp)
static int unregister_fprobe_nolock(struct fprobe *fp)
{
struct fprobe_hlist *hlist_array;
struct fprobe_hlist *hlist_array = fp->hlist_array;
unsigned long *addrs = NULL;
int ret = 0, i, count;
int i, count;
mutex_lock(&fprobe_mutex);
if (!fp || !is_fprobe_still_exist(fp)) {
ret = -EINVAL;
goto out;
}
hlist_array = fp->hlist_array;
addrs = kcalloc(hlist_array->size, sizeof(unsigned long), GFP_KERNEL);
if (!addrs) {
ret = -ENOMEM; /* TODO: Fallback to one-by-one loop */
goto out;
}
/*
* This will remove fprobe_hash_node from the hash table even if
* memory allocation fails. However, ftrace_ops will not be updated.
* Anyway, when the last fprobe is unregistered, ftrace_ops is also
* unregistered.
*/
if (!addrs)
pr_warn("Failed to allocate working array. ftrace_ops may not sync.\n");
/* Remove non-synonim ips from table and hash */
count = 0;
for (i = 0; i < hlist_array->size; i++) {
if (!delete_fprobe_node(&hlist_array->array[i]))
delete_fprobe_node(&hlist_array->array[i]);
if (addrs && !fprobe_exists_on_hash(hlist_array->array[i].addr,
fprobe_is_ftrace(fp)))
addrs[count++] = hlist_array->array[i].addr;
}
del_fprobe_hash(fp);
@@ -953,12 +1087,26 @@ int unregister_fprobe(struct fprobe *fp)
kfree_rcu(hlist_array, rcu);
fp->hlist_array = NULL;
out:
mutex_unlock(&fprobe_mutex);
kfree(addrs);
return ret;
return 0;
}
/**
* unregister_fprobe() - Unregister fprobe.
* @fp: A fprobe data structure to be unregistered.
*
* Unregister fprobe (and remove ftrace hooks from the function entries).
*
* Return 0 if @fp is unregistered successfully, -errno if not.
*/
int unregister_fprobe(struct fprobe *fp)
{
guard(mutex)(&fprobe_mutex);
if (!fp || !fprobe_registered(fp))
return -EINVAL;
return unregister_fprobe_nolock(fp);
}
EXPORT_SYMBOL_GPL(unregister_fprobe);


@@ -0,0 +1,87 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Generic dynamic event - add/remove fprobe events on module
# requires: dynamic_events "f[:[<group>/][<event>]] <func-name>[%return] [<args>]":README enabled_functions
rmmod trace-events-sample ||:
if ! modprobe trace-events-sample ; then
echo "No trace-events sample module - please make CONFIG_SAMPLE_TRACE_EVENTS=m"
exit_unresolved;
fi
trap "lsmod | grep -q trace_events_sample && rmmod trace-events-sample" EXIT
echo 0 > events/enable
echo > dynamic_events
FUNC1='foo_bar*'
FUNC2='vfs_read'
:;: "Add an event on the test module" ;:
echo "f:test1 $FUNC1" >> dynamic_events
echo 1 > events/fprobes/test1/enable
:;: "Ensure it is enabled" ;:
funcs=`cat enabled_functions | wc -l`
test $funcs -ne 0
:;: "Check the enabled_functions is cleared on unloading" ;:
rmmod trace-events-sample
funcs=`cat enabled_functions | wc -l`
test $funcs -eq 0
:;: "Check it is kept clean" ;:
modprobe trace-events-sample
echo 1 > events/fprobes/test1/enable || echo "OK"
funcs=`cat enabled_functions | wc -l`
test $funcs -eq 0
:;: "Add another event not on the test module" ;:
echo "f:test2 $FUNC2" >> dynamic_events
echo 1 > events/fprobes/test2/enable
:;: "Ensure it is enabled" ;:
ofuncs=`cat enabled_functions | wc -l`
test $ofuncs -ne 0
:;: "Disable and remove the first event"
echo 0 > events/fprobes/test1/enable
echo "-:fprobes/test1" >> dynamic_events
funcs=`cat enabled_functions | wc -l`
test $ofuncs -eq $funcs
:;: "Disable and remove other events" ;:
echo 0 > events/fprobes/enable
echo > dynamic_events
funcs=`cat enabled_functions | wc -l`
test $funcs -eq 0
rmmod trace-events-sample
:;: "Add events on kernel and test module" ;:
modprobe trace-events-sample
echo "f:test1 $FUNC1" >> dynamic_events
echo 1 > events/fprobes/test1/enable
echo "f:test2 $FUNC2" >> dynamic_events
echo 1 > events/fprobes/test2/enable
ofuncs=`cat enabled_functions | wc -l`
test $ofuncs -ne 0
:;: "Unload module (ftrace entry should be removed)" ;:
rmmod trace-events-sample
funcs=`cat enabled_functions | wc -l`
test $funcs -ne 0
test $ofuncs -ne $funcs
:;: "Disable and remove core-kernel fprobe event" ;:
echo 0 > events/fprobes/test2/enable
echo "-:fprobes/test2" >> dynamic_events
:;: "Ensure ftrace is disabled." ;:
funcs=`cat enabled_functions | wc -l`
test $funcs -eq 0
echo 0 > events/fprobes/enable
echo > dynamic_events
trap "" EXIT
clear_trace


@@ -0,0 +1,69 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Generic dynamic event - add/remove multiple fprobe events on the same function
# requires: dynamic_events "f[:[<group>/][<event>]] <func-name>[%return] [<args>]":README enabled_functions
echo 0 > events/enable
echo > dynamic_events
PLACE=vfs_read
PLACE2=vfs_open
:;: 'Ensure no other ftrace user' ;:
test `cat enabled_functions | wc -l` -eq 0 || exit_unresolved
:;: 'Test case 1: leave entry event' ;:
:;: 'Add entry and exit events on the same place' ;:
echo "f:event1 ${PLACE}" >> dynamic_events
echo "f:event2 ${PLACE}%return" >> dynamic_events
:;: 'Enable both of them' ;:
echo 1 > events/fprobes/enable
test `cat enabled_functions | wc -l` -eq 1
:;: 'Disable and remove exit event' ;:
echo 0 > events/fprobes/event2/enable
echo -:event2 >> dynamic_events
:;: 'Disable and remove all events' ;:
echo 0 > events/fprobes/enable
echo > dynamic_events
:;: 'Add another event' ;:
echo "f:event3 ${PLACE2}%return" > dynamic_events
echo 1 > events/fprobes/enable
test `cat enabled_functions | wc -l` -eq 1
:;: 'No other ftrace user' ;:
echo 0 > events/fprobes/enable
echo > dynamic_events
test `cat enabled_functions | wc -l` -eq 0
:;: 'Test case 2: leave exit event' ;:
:;: 'Add entry and exit events on the same place' ;:
echo "f:event1 ${PLACE}" >> dynamic_events
echo "f:event2 ${PLACE}%return" >> dynamic_events
:;: 'Enable both of them' ;:
echo 1 > events/fprobes/enable
test `cat enabled_functions | wc -l` -eq 1
:;: 'Disable and remove entry event' ;:
echo 0 > events/fprobes/event1/enable
echo -:event1 >> dynamic_events
:;: 'Disable and remove all events' ;:
echo 0 > events/fprobes/enable
echo > dynamic_events
:;: 'Add another event' ;:
echo "f:event3 ${PLACE2}" > dynamic_events
echo 1 > events/fprobes/enable
test `cat enabled_functions | wc -l` -eq 1
:;: 'No other ftrace user' ;:
echo 0 > events/fprobes/enable
echo > dynamic_events
test `cat enabled_functions | wc -l` -eq 0
clear_trace