
Commit 4f12c2b

yonghong-song authored and kernel-patches-bot committed
Currently, we use bucket_lock when traversing bpf_sk_storage_map
elements. Since bpf_iter programs cannot use the bpf_sk_storage_get()
and bpf_sk_storage_delete() helpers, which may also grab the bucket
lock, we do not have the deadlock issue that exists for hashmaps when
using bucket_lock ([1]).

If a bucket contains a lot of sockets, then while bpf_iter is traversing
that bucket, concurrent bpf_sk_storage_{get,delete}() calls may
experience undesirable delays. Using rcu_read_lock() is a reasonable
compromise here: it may lose some precision, e.g., by accessing stale
sockets, but it will not hurt the performance of other bpf programs.

[1] https://lore.kernel.org/bpf/[email protected]

Cc: Martin KaFai Lau <[email protected]>
Signed-off-by: Yonghong Song <[email protected]>
---
 net/core/bpf_sk_storage.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)
1 parent f744b8a commit 4f12c2b

File tree: 1 file changed, +6 −9 lines

net/core/bpf_sk_storage.c

Lines changed: 6 additions & 9 deletions
@@ -701,7 +701,7 @@ bpf_sk_storage_map_seq_find_next(struct bpf_iter_seq_sk_storage_map_info *info,
 	if (!selem) {
 		/* not found, unlock and go to the next bucket */
 		b = &smap->buckets[bucket_id++];
-		raw_spin_unlock_bh(&b->lock);
+		rcu_read_unlock();
 		skip_elems = 0;
 		break;
 	}
@@ -715,7 +715,7 @@ bpf_sk_storage_map_seq_find_next(struct bpf_iter_seq_sk_storage_map_info *info,
 
 	for (i = bucket_id; i < (1U << smap->bucket_log); i++) {
 		b = &smap->buckets[i];
-		raw_spin_lock_bh(&b->lock);
+		rcu_read_lock();
 		count = 0;
 		hlist_for_each_entry(selem, &b->list, map_node) {
 			sk_storage = rcu_dereference_raw(selem->local_storage);
@@ -726,7 +726,7 @@ bpf_sk_storage_map_seq_find_next(struct bpf_iter_seq_sk_storage_map_info *info,
 			}
 			count++;
 		}
-		raw_spin_unlock_bh(&b->lock);
+		rcu_read_unlock();
 		skip_elems = 0;
 	}
 
@@ -806,13 +806,10 @@ static void bpf_sk_storage_map_seq_stop(struct seq_file *seq, void *v)
 	struct bpf_local_storage_map *smap;
 	struct bpf_local_storage_map_bucket *b;
 
-	if (!v) {
+	if (!v)
 		(void)__bpf_sk_storage_map_seq_show(seq, v);
-	} else {
-		smap = (struct bpf_local_storage_map *)info->map;
-		b = &smap->buckets[info->bucket_id];
-		raw_spin_unlock_bh(&b->lock);
-	}
+	else
+		rcu_read_unlock();
 }
 
 static int bpf_iter_init_sk_storage_map(void *priv_data,
