
add bpf_map_lookup_batch and bpf_map_delete_batch in bcc #3363

Merged: 3 commits merged into iovisor:master on Apr 25, 2021

Conversation

egobillot
Contributor

Add bpf_map_lookup_batch and bpf_map_delete_batch in bcc

  • add bpf_map_lookup_batch in bcc
  • add bpf_map_delete_batch in bcc
  • add test_map_batch_ops.py to test batch lookup and delete on map
  • add items_lookup_batch() and items_delete_batch() in the reference guide

The bpf_map_delete_batch in table.py does not use bpf_map_delete_batch from libbcc. It rather uses bpf_map_lookup_and_delete_batch without using the return values. I did not succeed in writing bpf_map_delete_batch with only one BPF syscall using BPF_MAP_DELETE_BATCH, and I doubt this is possible.

bpf_map_delete_batch could be used to improve table.clear() (fewer system calls), but it would only work for kernel v5.6 and later. How do you manage this? Do you leave kernel dependency management to libbpf?
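For illustration, a minimal usage sketch of the new Python API (the map and the user-space fill loop are made up for this example; items_lookup_batch() yields (key, value) ctypes pairs, and the real coverage is in test_map_batch_ops.py):

from bcc import BPF

# Map used only for this example; the PR's test coverage is in
# tests/python/test_map_batch_ops.py.
b = BPF(text="""
BPF_HASH(counts, u32, u64, 1024);
""")
counts = b["counts"]

# Populate from user space so the batch calls have something to do.
for i in range(10):
    counts[counts.Key(i)] = counts.Leaf(i * 100)

# One BPF_MAP_LOOKUP_BATCH syscall instead of one lookup per key.
for k, v in counts.items_lookup_batch():
    print(k.value, v.value)

# Batch delete of the whole map (see the note above on how it is
# implemented on top of lookup_and_delete).
counts.items_delete_batch()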

@yonghong-song
Collaborator

[buildbot, test this please]

@yonghong-song
Collaborator

Thanks @egobillot for the contribution. To answer some of your questions below:

The bpf_map_delete_batch in table.py does not use bpf_map_delete_batch from libbcc. It rather uses bpf_map_lookup_and_delete_batch without using the return values. I did not succeed in writing bpf_map_delete_batch with only one BPF syscall using BPF_MAP_DELETE_BATCH, and I doubt this is possible.

Currently, BPF_MAP_DELETE_BATCH needs an array of keys; most likely these keys will come from BPF_MAP_LOOKUP_BATCH or BPF_MAP_LOOKUP_AND_DELETE_BATCH. With these keys, you can delete them in one syscall.
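A sketch of that two-step flow with the Python API from this PR, continuing with the counts table from the example in the description (the zero-value filter is arbitrary, and passing a ctypes array of keys to items_delete_batch() anticipates the optional keys argument discussed further down):

# One BPF_MAP_LOOKUP_BATCH syscall to fetch everything, then select the
# keys to drop in user space (the filter here is arbitrary). The key is
# a u32 in this example, so k.value is a plain int.
stale = [k.value for k, v in counts.items_lookup_batch() if v.value == 0]

if stale:
    # One BPF_MAP_DELETE_BATCH syscall for the selected keys.
    key_array = (counts.Key * len(stale))(*stale)
    counts.items_delete_batch(key_array)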

bpf_map_delete_batch could be used to improve table.clear() (fewer system calls), but it would only work for kernel v5.6 and later. How do you manage this? Do you leave kernel dependency management to libbpf?

It is tricky to use table.clear() without a preceding BPF_MAP_LOOKUP_BATCH. For example, user space may look up some keys like k1, k2, k3, and then, by the time you call table.clear(), some new keys such as k4 may already have been added. In this case, k4 will be removed as well, and this may make the analysis result imprecise. I know we already have this issue today, but it would be good if we can use the new interface to improve the situation.

You can test whether /proc/kallsyms has the symbol generic_map_lookup_batch or not. If it exists, the feature is supported.

ffffffff811b8d30 t bpf_map_do_batch
ffffffff811bb2c0 T generic_map_delete_batch
ffffffff811bb460 T generic_map_update_batch
ffffffff811bb660 T generic_map_lookup_batch
ffffffff811d5b40 t __htab_map_lookup_and_delete_batch
ffffffff811d6310 t htab_map_lookup_and_delete_batch
ffffffff811d6330 t htab_map_lookup_batch
ffffffff811d6350 t htab_lru_map_lookup_and_delete_batch
ffffffff811d6370 t htab_lru_map_lookup_batch
ffffffff811d6390 t htab_percpu_map_lookup_and_delete_batch
ffffffff811d63b0 t htab_percpu_map_lookup_batch
ffffffff811d63d0 t htab_lru_percpu_map_lookup_and_delete_batch
ffffffff811d63f0 t htab_lru_percpu_map_lookup_batch
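A small helper along those lines (the function name and docstring are just for illustration; the check itself is the /proc/kallsyms scan suggested above):

def kernel_supports_map_batch_ops():
    """Return True if generic_map_lookup_batch is in /proc/kallsyms,
    i.e. the running kernel (v5.6+) implements the batch map operations."""
    try:
        with open("/proc/kallsyms") as f:
            for line in f:
                parts = line.split()
                # Each line is: <address> <type> <symbol> [module]
                if len(parts) >= 3 and parts[2] == "generic_map_lookup_batch":
                    return True
    except IOError:
        pass
    return False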

        for i in range(0, count.value):
            yield (keys[i], values[i])

    def items_delete_batch(self):
Collaborator


I would like items_delete_batch to take an optional argument which is an array of keys. If users provide this array, we should call self.items_delete_batch to delete only these keys. If the user provides no keys, we can call items_lookup_and_delete_batch() to delete all keys in the map. WDYT?
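For illustration, a rough sketch of that shape as a table.py method, shown standalone with explicit imports (this is not the actual 4dbaeef change; the libbcc entry point lib.bpf_delete_batch and its argument order are assumptions here, only the dispatch logic follows the comment above):

import ctypes as ct
import os

from bcc.libbcc import lib  # ctypes handle to libbcc, as used by table.py

def items_delete_batch(self, ct_keys=None):
    """Delete a batch of keys in one syscall, or drain the whole map
    when no keys are given (sketch of the proposal, not the merged code)."""
    if ct_keys is not None:
        # One BPF_MAP_DELETE_BATCH syscall for the caller-provided keys.
        # NOTE: lib.bpf_delete_batch is an assumed symbol name/signature.
        count = ct.c_uint32(len(ct_keys))
        res = lib.bpf_delete_batch(self.map_fd, ct.byref(ct_keys),
                                   ct.byref(count))
        if res != 0:
            raise Exception("BPF_MAP_DELETE_BATCH failed: %s" %
                            os.strerror(ct.get_errno()))
    else:
        # No keys given: reuse lookup_and_delete and discard the values.
        for _ in self.items_lookup_and_delete_batch():
            pass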

Contributor Author

@egobillot Apr 20, 2021


Yes, good idea. Here is my proposal: 4dbaeef

Contributor Author


And to be consistent, I should also provide the same optional argument, an array of keys, for items_lookup_and_delete_batch() and items_lookup_batch(). The source code should be almost the same as items_delete_batch(). If you are OK with commit 4dbaeef, I will continue with the two other functions.

Collaborator


And to be consistent, I should also provide the same optional argument, an array of keys, for items_lookup_and_delete_batch() and items_lookup_batch(). The source code should be almost the same as items_delete_batch(). If you are OK with commit 4dbaeef, I will continue with the two other functions.

@egobillot The new change looks good to me. Agreed, let us make the API consistent and accept keys for batched lookup and lookup_and_delete as well. Thanks for working on this!
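For illustration, the consistent call shapes this agreement points toward, with counts and key_array as in the earlier sketches; the keys parameter on the two lookup variants is the follow-up work, so treat these signatures as tentative:

# Tentative, consistent call shapes once keys are accepted everywhere:
list(counts.items_lookup_batch())               # batched read of all entries
list(counts.items_lookup_batch(key_array))      # batched read of key_array only
list(counts.items_lookup_and_delete_batch())    # batched read + drain of the map
counts.items_delete_batch(key_array)            # batched delete of key_array only
counts.items_delete_batch()                     # batched delete of everything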

@yonghong-song
Collaborator

[buildbot, test this please]

@yonghong-song merged commit 748d6b5 into iovisor:master on Apr 25, 2021
CrackerCat pushed a commit to CrackerCat/bcc that referenced this pull request Jul 31, 2024
  • add bpf_map_lookup_batch and bpf_map_delete_batch in bcc
  • add test_map_batch_ops.py to test batch lookup and delete on map
  • add items_lookup_batch() and items_delete_batch() in the reference guide
  • add keys as an optional argument to items_delete_batch