
Commit 40064ae

dennisszhou authored and htejun committed
percpu: replace area map allocator with bitmap
The percpu memory allocator is experiencing scalability issues when allocating and freeing large numbers of counters, as in BPF. Additionally, there is a corner case where iteration is triggered over all chunks if the contig_hint is the right size but has the wrong alignment.

This patch replaces the area map allocator with a basic bitmap allocator implementation. Each subsequent patch will introduce new features and replace full scanning functions with faster non-scanning options when possible.

Implementation:
This patchset removes the area map allocator in favor of a bitmap allocator backed by metadata blocks. The primary goal is to provide consistency in performance and memory footprint with a focus on small allocations (< 64 bytes). The bitmap removes the heavy memmove from the freeing critical path and provides a consistent memory footprint. The metadata blocks provide a bound on the amount of scanning required by maintaining a set of hints.

In an effort to make freeing fast, the metadata is updated on the free path if the new free area makes a page free, a block free, or spans across blocks. This causes the chunk's contig hint to potentially be smaller than what it could allocate by up to the smaller of a page or a block. If the chunk's contig hint is contained within a block, a check occurs and the hint is kept accurate. Metadata is always kept accurate on allocation, so there will not be a situation where a chunk has a larger contig hint than is actually available.

Evaluation:
I have primarily done testing against a simple workload of allocating 1 million objects (2^20) of varying size. Deallocation was done in order, alternating, and in reverse. These numbers were collected after rebasing on top of a80099a. I present the worst-case numbers here:

  Area Map Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |        310      |     4770
             16B    |        557      |     1325
             64B    |        436      |      273
            256B    |        776      |      131
           1024B    |       3280      |      122

  Bitmap Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |        490      |       70
             16B    |        515      |       75
             64B    |        610      |       80
            256B    |        950      |      100
           1024B    |       3520      |      200

This data demonstrates the inability of the area map allocator to handle less-than-ideal situations. In the best case of reverse deallocation, the area map allocator performed within range of the bitmap allocator. In the worst case, freeing took nearly 5 seconds for 1 million 4-byte objects. The bitmap allocator dramatically improves the consistency of the free path: the small allocations performed nearly identically regardless of the freeing pattern.

While it does add to the allocation latency, the allocation scenario here is optimal for the area map allocator. The area map allocator runs into trouble when it is allocating in chunks where the latter half is full. It is difficult to replicate this, so I present a variant where the pages are second-half filled. Freeing was done sequentially. Below are the numbers for this scenario:

  Area Map Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |       4118      |     4892
             16B    |       1651      |     1163
             64B    |        598      |      285
            256B    |        771      |      158
           1024B    |       3034      |      160

  Bitmap Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |        481      |       67
             16B    |        506      |       69
             64B    |        636      |       75
            256B    |        892      |       90
           1024B    |       3262      |      147

The data shows a parabolic curve of performance for the area map allocator: at the lower object sizes, the memmove operation is the dominant cost because more objects are packed into a chunk, while at the higher object sizes the traversal of the chunk slots is the dominating cost. The bitmap allocator suffers this problem as well. The above data shows that the allocation path of the area map allocator does not scale, while the bitmap allocator demonstrates consistent performance in general.

The second problem of additional scanning can result in the area map allocator taking 52 minutes to allocate 1 million 4-byte objects with 8-byte alignment. The same workload takes approximately 16 seconds to complete with the bitmap allocator.

V2:
Fixed a bug in pcpu_alloc_first_chunk: end_offset was setting the bitmap using bytes instead of bits.
Added a comment to pcpu_cnt_pop_pages to explain bitmap_weight.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
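To make the alloc_map/bound_map scheme described above concrete, here is a minimal sketch of how an allocation and a free could be recorded in the two bitmaps, where each bit covers PCPU_MIN_ALLOC_SIZE bytes of the chunk. The helper names are hypothetical and the sketch leaves out the contig hint, metadata-block, and first_free bookkeeping as well as locking; it is not the code this commit adds to mm/percpu.c.

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include "percpu-internal.h"	/* struct pcpu_chunk, pcpu_chunk_map_bits() */

/* Record an allocation of @alloc_bits bits starting at @bit_off. */
static void sketch_mark_alloc(struct pcpu_chunk *chunk, int bit_off,
			      int alloc_bits)
{
	/* mark the area as in use */
	bitmap_set(chunk->alloc_map, bit_off, alloc_bits);

	/*
	 * Record the area's boundaries so its length can be recovered on
	 * free; bound_map is assumed here to carry one bit more than
	 * alloc_map so the end of the last area can be marked.
	 */
	set_bit(bit_off, chunk->bound_map);
	set_bit(bit_off + alloc_bits, chunk->bound_map);
}

/* Record the free of the allocation starting at @bit_off. */
static void sketch_mark_free(struct pcpu_chunk *chunk, int bit_off)
{
	/* the next bound_map bit marks where this allocation ends */
	int end = find_next_bit(chunk->bound_map,
				pcpu_chunk_map_bits(chunk) + 1,
				bit_off + 1);

	/*
	 * Only alloc_map is cleared; bound_map is kept accurate on
	 * allocation, not on free (see the comment in percpu-stats.c).
	 */
	bitmap_clear(chunk->alloc_map, bit_off, end - bit_off);
}

Under this scheme, freeing costs a find_next_bit() and a bitmap_clear() rather than a memmove over the old area map, which is where the consistent free times in the tables above come from.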
1 parent 91e914c · commit 40064ae

File tree: 6 files changed, +362 -504 lines


include/linux/percpu.h (-1)

@@ -120,7 +120,6 @@ extern bool is_kernel_percpu_address(unsigned long addr);
 #if !defined(CONFIG_SMP) || !defined(CONFIG_HAVE_SETUP_PER_CPU_AREA)
 extern void __init setup_per_cpu_areas(void);
 #endif
-extern void __init percpu_init_late(void);
 
 extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp);
 extern void __percpu *__alloc_percpu(size_t size, size_t align);

init/main.c (-1)

@@ -500,7 +500,6 @@ static void __init mm_init(void)
 	page_ext_init_flatmem();
 	mem_init();
 	kmem_cache_init();
-	percpu_init_late();
 	pgtable_init();
 	vmalloc_init();
 	ioremap_huge_init();

mm/percpu-internal.h (+28 -6)

@@ -11,14 +11,12 @@ struct pcpu_chunk {
 #endif
 
 	struct list_head	list;		/* linked to pcpu_slot lists */
-	int			free_size;	/* free bytes in the chunk */
-	int			contig_hint;	/* max contiguous size hint */
+	int			free_bytes;	/* free bytes in the chunk */
+	int			contig_bits;	/* max contiguous size hint */
 	void			*base_addr;	/* base address of this chunk */
 
-	int			map_used;	/* # of map entries used before the sentry */
-	int			map_alloc;	/* # of map entries allocated */
-	int			*map;		/* allocation map */
-	struct list_head	map_extend_list;/* on pcpu_map_extend_chunks */
+	unsigned long		*alloc_map;	/* allocation map */
+	unsigned long		*bound_map;	/* boundary map */
 
 	void			*data;		/* chunk data */
 	int			first_free;	/* no free below this */

@@ -45,6 +43,30 @@ extern int pcpu_nr_empty_pop_pages;
 extern struct pcpu_chunk *pcpu_first_chunk;
 extern struct pcpu_chunk *pcpu_reserved_chunk;
 
+/**
+ * pcpu_nr_pages_to_map_bits - converts the pages to size of bitmap
+ * @pages: number of physical pages
+ *
+ * This conversion is from physical pages to the number of bits
+ * required in the bitmap.
+ */
+static inline int pcpu_nr_pages_to_map_bits(int pages)
+{
+	return pages * PAGE_SIZE / PCPU_MIN_ALLOC_SIZE;
+}
+
+/**
+ * pcpu_chunk_map_bits - helper to convert nr_pages to size of bitmap
+ * @chunk: chunk of interest
+ *
+ * This conversion is from the number of physical pages that the chunk
+ * serves to the number of bits in the bitmap.
+ */
+static inline int pcpu_chunk_map_bits(struct pcpu_chunk *chunk)
+{
+	return pcpu_nr_pages_to_map_bits(chunk->nr_pages);
+}
+
 #ifdef CONFIG_PERCPU_STATS
 
 #include <linux/spinlock.h>
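As a rough worked example of the conversion above (assuming the common case of 4 KiB pages and a PCPU_MIN_ALLOC_SIZE of 4 bytes): one physical page maps to 4096 / 4 = 1024 bits, so pcpu_chunk_map_bits() for a 4-page chunk yields 4096 bits of alloc_map, and on a 64-bit machine each unsigned long of that bitmap covers 64 * 4 = 256 bytes of the chunk.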

mm/percpu-km.c (+1 -1)

@@ -69,7 +69,7 @@ static struct pcpu_chunk *pcpu_create_chunk(void)
 	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];
 
 	spin_lock_irq(&pcpu_lock);
-	pcpu_chunk_populated(chunk, 0, nr_pages);
+	pcpu_chunk_populated(chunk, 0, nr_pages, false);
 	spin_unlock_irq(&pcpu_lock);
 
 	pcpu_stats_chunk_alloc();

mm/percpu-stats.c (+60 -39)

@@ -29,65 +29,85 @@ static int cmpint(const void *a, const void *b)
 }
 
 /*
- * Iterates over all chunks to find the max # of map entries used.
+ * Iterates over all chunks to find the max nr_alloc entries.
  */
-static int find_max_map_used(void)
+static int find_max_nr_alloc(void)
 {
 	struct pcpu_chunk *chunk;
-	int slot, max_map_used;
+	int slot, max_nr_alloc;
 
-	max_map_used = 0;
+	max_nr_alloc = 0;
 	for (slot = 0; slot < pcpu_nr_slots; slot++)
 		list_for_each_entry(chunk, &pcpu_slot[slot], list)
-			max_map_used = max(max_map_used, chunk->map_used);
+			max_nr_alloc = max(max_nr_alloc, chunk->nr_alloc);
 
-	return max_map_used;
+	return max_nr_alloc;
 }
 
 /*
  * Prints out chunk state. Fragmentation is considered between
  * the beginning of the chunk to the last allocation.
+ *
+ * All statistics are in bytes unless stated otherwise.
  */
 static void chunk_map_stats(struct seq_file *m, struct pcpu_chunk *chunk,
 			    int *buffer)
 {
-	int i, s_index, e_index, last_alloc, alloc_sign, as_len;
+	int i, last_alloc, as_len, start, end;
 	int *alloc_sizes, *p;
 	/* statistics */
 	int sum_frag = 0, max_frag = 0;
 	int cur_min_alloc = 0, cur_med_alloc = 0, cur_max_alloc = 0;
 
 	alloc_sizes = buffer;
-	s_index = (chunk->start_offset) ? 1 : 0;
-	e_index = chunk->map_used - ((chunk->end_offset) ? 1 : 0);
-
-	/* find last allocation */
-	last_alloc = -1;
-	for (i = e_index - 1; i >= s_index; i--) {
-		if (chunk->map[i] & 1) {
-			last_alloc = i;
-			break;
-		}
-	}
 
-	/* if the chunk is not empty - ignoring reserve */
-	if (last_alloc >= s_index) {
-		as_len = last_alloc + 1 - s_index;
-
-		/*
-		 * Iterate through chunk map computing size info.
-		 * The first bit is overloaded to be a used flag.
-		 * negative = free space, positive = allocated
-		 */
-		for (i = 0, p = chunk->map + s_index; i < as_len; i++, p++) {
-			alloc_sign = (*p & 1) ? 1 : -1;
-			alloc_sizes[i] = alloc_sign *
-				((p[1] & ~1) - (p[0] & ~1));
+	/*
+	 * find_last_bit returns the start value if nothing found.
+	 * Therefore, we must determine if it is a failure of find_last_bit
+	 * and set the appropriate value.
+	 */
+	last_alloc = find_last_bit(chunk->alloc_map,
+				   pcpu_chunk_map_bits(chunk) -
+				   chunk->end_offset / PCPU_MIN_ALLOC_SIZE - 1);
+	last_alloc = test_bit(last_alloc, chunk->alloc_map) ?
+		     last_alloc + 1 : 0;
+
+	as_len = 0;
+	start = chunk->start_offset;
+
+	/*
+	 * If a bit is set in the allocation map, the bound_map identifies
+	 * where the allocation ends.  If the allocation is not set, the
+	 * bound_map does not identify free areas as it is only kept accurate
+	 * on allocation, not free.
+	 *
+	 * Positive values are allocations and negative values are free
+	 * fragments.
+	 */
+	while (start < last_alloc) {
+		if (test_bit(start, chunk->alloc_map)) {
+			end = find_next_bit(chunk->bound_map, last_alloc,
+					    start + 1);
+			alloc_sizes[as_len] = 1;
+		} else {
+			end = find_next_bit(chunk->alloc_map, last_alloc,
					    start + 1);
+			alloc_sizes[as_len] = -1;
 		}
 
-		sort(alloc_sizes, as_len, sizeof(chunk->map[0]), cmpint, NULL);
+		alloc_sizes[as_len++] *= (end - start) * PCPU_MIN_ALLOC_SIZE;
+
+		start = end;
+	}
+
+	/*
+	 * The negative values are free fragments and thus sorting gives the
+	 * free fragments at the beginning in largest first order.
+	 */
+	if (as_len > 0) {
+		sort(alloc_sizes, as_len, sizeof(int), cmpint, NULL);
 
-		/* Iterate through the unallocated fragements. */
+		/* iterate through the unallocated fragments */
 		for (i = 0, p = alloc_sizes; *p < 0 && i < as_len; i++, p++) {
 			sum_frag -= *p;
 			max_frag = max(max_frag, -1 * (*p));

@@ -101,8 +121,8 @@ static void chunk_map_stats(struct seq_file *m, struct pcpu_chunk *chunk,
 	P("nr_alloc", chunk->nr_alloc);
 	P("max_alloc_size", chunk->max_alloc_size);
 	P("empty_pop_pages", chunk->nr_empty_pop_pages);
-	P("free_size", chunk->free_size);
-	P("contig_hint", chunk->contig_hint);
+	P("free_bytes", chunk->free_bytes);
+	P("contig_bytes", chunk->contig_bits * PCPU_MIN_ALLOC_SIZE);
 	P("sum_frag", sum_frag);
 	P("max_frag", max_frag);
 	P("cur_min_alloc", cur_min_alloc);

@@ -114,22 +134,23 @@ static void chunk_map_stats(struct seq_file *m, struct pcpu_chunk *chunk,
 static int percpu_stats_show(struct seq_file *m, void *v)
 {
 	struct pcpu_chunk *chunk;
-	int slot, max_map_used;
+	int slot, max_nr_alloc;
 	int *buffer;
 
 alloc_buffer:
 	spin_lock_irq(&pcpu_lock);
-	max_map_used = find_max_map_used();
+	max_nr_alloc = find_max_nr_alloc();
 	spin_unlock_irq(&pcpu_lock);
 
-	buffer = vmalloc(max_map_used * sizeof(pcpu_first_chunk->map[0]));
+	/* there can be at most this many free and allocated fragments */
+	buffer = vmalloc((2 * max_nr_alloc + 1) * sizeof(int));
 	if (!buffer)
 		return -ENOMEM;
 
 	spin_lock_irq(&pcpu_lock);
 
 	/* if the buffer allocated earlier is too small */
-	if (max_map_used < find_max_map_used()) {
+	if (max_nr_alloc < find_max_nr_alloc()) {
 		spin_unlock_irq(&pcpu_lock);
 		vfree(buffer);
 		goto alloc_buffer;
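As a worked example of the alloc_map/bound_map walk in chunk_map_stats() above, consider a hypothetical chunk with start_offset and end_offset of 0, alloc_map bits 0-3 and 8-9 set, and bound_map bits 0, 4, 8, and 10 set (assuming PCPU_MIN_ALLOC_SIZE of 4 bytes). find_last_bit() returns 9, so last_alloc becomes 10, and the walk produces alloc_sizes = {16, -16, 8}: a 16-byte allocation, a 16-byte free fragment, and an 8-byte allocation. sort() then places the negative entries first, giving sum_frag = 16 and max_frag = 16.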
