Merged
95 commits
ec5b16d
wifi: cfg80211: sme: cap SSID length in __cfg80211_connect_result()
PlaidCat Oct 30, 2025
4ffb21b
mm/shmem: make find_get_pages_range() work for huge page
PlaidCat Oct 30, 2025
900cbdf
powerpc: remove the __kernel_io_end export
PlaidCat Oct 30, 2025
4499229
powerpc/mm: drop #ifdef CONFIG_MMU in is_ioremap_addr()
PlaidCat Oct 30, 2025
731218e
include/linux/pagemap.h: rename arguments to find_subpage
PlaidCat Oct 30, 2025
38beddc
mm/filemap.c: unexport find_get_entry
PlaidCat Oct 30, 2025
d99de07
include/linux/pagemap.h: optimise find_subpage for !THP
PlaidCat Oct 30, 2025
a7b9b09
mm/shmem.c: distribute switch variables for initialization
PlaidCat Oct 30, 2025
c936787
mm/shmem.c: clean code by removing unnecessary assignment
PlaidCat Oct 30, 2025
3f23a91
mm: huge tmpfs: try to split_huge_page() when punching hole
PlaidCat Oct 30, 2025
7a16acb
mm/shmem: fix build without THP
PlaidCat Oct 30, 2025
7f20744
mm: factor find_get_incore_page out of mincore_page
PlaidCat Oct 30, 2025
441ff41
mm: use find_get_incore_page in memcontrol
PlaidCat Oct 30, 2025
5fd3d05
mm: optimise madvise WILLNEED
PlaidCat Oct 30, 2025
b3e027d
proc: optimise smaps for shmem entries
PlaidCat Oct 30, 2025
f20aa7a
i915: use find_lock_page instead of find_lock_entry
PlaidCat Oct 30, 2025
1e0510f
mm: convert find_get_entry to return the head page
PlaidCat Oct 30, 2025
3351c37
mm/shmem: return head page from find_lock_entry
PlaidCat Oct 30, 2025
9ff54d1
mm: add find_lock_head
PlaidCat Oct 30, 2025
2170723
mm: pagemap.h: fix two kernel-doc markups
PlaidCat Oct 30, 2025
f384f86
mm: fix madvise WILLNEED performance problem
PlaidCat Oct 30, 2025
f48fbe2
mm: make pagecache tagged lookups return only head pages
PlaidCat Oct 30, 2025
c37569d
mm/shmem: use pagevec_lookup in shmem_unlock_mapping
PlaidCat Oct 30, 2025
cf7b075
mm/swap: optimise get_shadow_from_swap_cache
PlaidCat Oct 30, 2025
1dda59b
mm,thp,shmem: limit shmem THP alloc gfp_mask
PlaidCat Oct 30, 2025
8fd7a57
mm,thp,shm: limit gfp mask to no more than specified
PlaidCat Oct 30, 2025
18bc1e2
mm,shmem,thp: limit shmem THP allocations to requested zones
PlaidCat Oct 30, 2025
d7d35c1
huge tmpfs: remove shrinklist addition from shmem_setattr()
PlaidCat Oct 30, 2025
6dc9d8c
huge tmpfs: move shmem_huge_enabled() upwards
PlaidCat Oct 30, 2025
3f68ece
huge tmpfs: SGP_NOALLOC to stop collapse_file() on race
PlaidCat Oct 30, 2025
b34f75f
huge tmpfs: shmem_is_huge(vma, inode, index)
PlaidCat Oct 30, 2025
9eb6c3d
huge tmpfs: decide stat.st_blksize by shmem_is_huge()
PlaidCat Oct 30, 2025
c531037
shmem: shmem_writepage() split unlikely i915 THP
PlaidCat Oct 30, 2025
4f78a1d
s390/extable: fix exception table sorting
PlaidCat Oct 30, 2025
e52fb0f
memregion: Fix memregion_free() fallback definition
PlaidCat Oct 30, 2025
13521fb
mm/compaction: fix set skip in fast_find_migrateblock
PlaidCat Oct 30, 2025
fcdeac8
mm/page_reporting: replace rcu_access_pointer() with rcu_dereference_…
PlaidCat Oct 30, 2025
ec49b9b
Revert "mm/compaction: fix set skip in fast_find_migrateblock"
PlaidCat Oct 30, 2025
58f17f6
mm: memcg: fix NULL pointer in mem_cgroup_track_foreign_dirty_slowpath()
PlaidCat Oct 30, 2025
80e0d14
mm/compaction: rename 'start_pfn' to 'iteration_start_pfn' in compact…
PlaidCat Oct 30, 2025
63b81f9
mm/compaction: move compaction_suitable's comment to right place
PlaidCat Oct 30, 2025
7daac83
mm, compaction: rename compact_control->rescan to finish_pageblock
PlaidCat Oct 30, 2025
e9f58af
mm, compaction: check if a page has been captured before draining PCP…
PlaidCat Oct 30, 2025
b61051e
mm, compaction: finish scanning the current pageblock if requested
PlaidCat Oct 30, 2025
5eeb412
mm, compaction: finish pageblocks on complete migration failure
PlaidCat Oct 30, 2025
1c78d64
mm: zswap: shrink until can accept
PlaidCat Oct 30, 2025
b80c9bb
mm: vmalloc must set pte via arch code
PlaidCat Oct 30, 2025
deb0fda
x86/mm: Avoid using set_pgd() outside of real PGD pages
PlaidCat Oct 30, 2025
71b426e
writeback: fix dereferencing NULL mapping->host on writeback_page_tem…
PlaidCat Oct 30, 2025
6bee4b8
powerpc/mm/dax: Fix the condition when checking if altmap vmemap can …
PlaidCat Oct 30, 2025
9713688
powerpc/mm/altmap: Fix altmap boundary check
PlaidCat Oct 30, 2025
c5c94f5
tmpfs: verify {g,u}id mount options correctly
PlaidCat Oct 30, 2025
53944f5
mm: add a call to flush_cache_vmap() in vmap_pfn()
PlaidCat Oct 30, 2025
150d49b
radix tree: remove unused variable
PlaidCat Oct 30, 2025
9125e1b
mm: memory-failure: kill soft_offline_free_page()
PlaidCat Oct 30, 2025
23f4211
mm: memory-failure: fix unexpected return value in soft_offline_page()
PlaidCat Oct 30, 2025
841841b
mm/vmalloc: extend __find_vmap_area() with one more argument
PlaidCat Oct 30, 2025
f32fadb
mm/vmalloc: add a safer version of find_vm_area() for debug
PlaidCat Oct 30, 2025
a13ca61
mm: memcontrol: fix GFP_NOFS recursion in memory.high enforcement
PlaidCat Oct 30, 2025
7fe1866
slab: kmalloc_size_roundup() must not return 0 for non-zero size
PlaidCat Oct 30, 2025
f35ba26
mm/cma: use nth_page() in place of direct struct page manipulation
PlaidCat Oct 30, 2025
ac877f4
mm/memory_hotplug: use pfn math in place of direct struct page manipu…
PlaidCat Oct 30, 2025
48e9df2
mm/page_alloc: correct start page when guard page debug is enabled
PlaidCat Oct 30, 2025
88fb00d
vfs: fix readahead(2) on block devices
PlaidCat Oct 30, 2025
046c4aa
writeback, cgroup: switch inodes with dirty timestamps to release dyi…
PlaidCat Oct 30, 2025
c154b25
powerpc/pseries: fix potential memory leak in init_cpu_associativity()
PlaidCat Oct 30, 2025
63bf987
mm: hugetlb: simplify per-node sysfs creation and removal
PlaidCat Oct 30, 2025
43042a2
mm: hugetlb: eliminate memory-less nodes handling
PlaidCat Oct 30, 2025
9f9977b
base/node.c: initialize the accessor list before registering
PlaidCat Oct 30, 2025
676d5c8
arm64/mm: Set only the PTE_DIRTY bit while preserving the HW dirty state
PlaidCat Oct 30, 2025
8056cd5
arm64: mm: Always make sw-dirty PTEs hw-dirty in pte_modify
PlaidCat Oct 30, 2025
a66aafb
mm: memcontrol: don't throttle dying tasks on memory.high
PlaidCat Oct 30, 2025
dff572f
mm: writeback: ratelimit stat flush from mem_cgroup_wb_stats
PlaidCat Oct 30, 2025
5b33169
mm: memcg: don't periodically flush stats when memcg is disabled
PlaidCat Oct 30, 2025
f224a5b
mm: memcg: use larger batches for proactive reclaim
PlaidCat Oct 30, 2025
de04d38
mm/slub, kunit: Use inverted data to corrupt kmem cache
PlaidCat Oct 30, 2025
8b0c793
s390/mm: Fix storage key clearing for guest huge pages
PlaidCat Oct 30, 2025
b2f9bae
s390/mm: Fix clearing storage keys for huge pages
PlaidCat Oct 30, 2025
d07171c
mm/numa_balancing: teach mpol_to_str about the balancing mode
PlaidCat Oct 30, 2025
573f323
arm64: Fix KASAN random tag seed initialization
PlaidCat Oct 30, 2025
817894a
x86/mm/pat: cpa-test: fix length for CPA_ARRAY test
PlaidCat Oct 30, 2025
3634996
x86/mm: Fix flush_tlb_range() when used for zapping normal PMDs
PlaidCat Oct 30, 2025
1b89b3e
mm/hugetlb: wait for hugetlb folios to be freed
PlaidCat Oct 30, 2025
a0b14d6
mm, percpu: do not consider sleepable allocations atomic
PlaidCat Oct 30, 2025
10a8386
arm64: mm: Correct the update of max_pfn
PlaidCat Oct 30, 2025
e2955f0
mm: fix apply_to_existing_page_range()
PlaidCat Oct 30, 2025
87d3de9
mm/gup: fix wrongly calculated returned value in fault_in_safe_writea…
PlaidCat Oct 30, 2025
bb06d2f
mm/shmem: fix potential dead loop in shmem_unuse()
PlaidCat Oct 30, 2025
b3892cd
net/mlx5: Stop waiting for PCI if pci channel is offline
PlaidCat Oct 30, 2025
dc3a955
Bluetooth: L2CAP: fix "bad unlock balance" in l2cap_disconnect_rsp
PlaidCat Oct 30, 2025
55a13cf
scsi: lpfc: Fix buffer free/clear order in deferred receive path
PlaidCat Oct 30, 2025
d3cf1f7
efivarfs: Fix slab-out-of-bounds in efivarfs_d_compare
PlaidCat Oct 30, 2025
ab4b0b0
Bluetooth: Fix potential use-after-free when clear keys
PlaidCat Oct 30, 2025
0a881c2
Bluetooth: L2CAP: Fix user-after-free
PlaidCat Oct 30, 2025
99b4f48
Rebuild rocky8_10 with kernel-4.18.0-553.81.1.el8_10
PlaidCat Oct 30, 2025
2 changes: 1 addition & 1 deletion Makefile.rhelver
@@ -12,7 +12,7 @@ RHEL_MINOR = 10
 #
 # Use this spot to avoid future merge conflicts.
 # Do not trim this comment.
-RHEL_RELEASE = 553.80.1
+RHEL_RELEASE = 553.81.1

 #
 # ZSTREAM
11 changes: 9 additions & 2 deletions arch/arm64/include/asm/pgtable.h
@@ -188,7 +188,7 @@ static inline pte_t pte_wrprotect(pte_t pte)
 	 * clear), set the PTE_DIRTY bit.
 	 */
 	if (pte_hw_dirty(pte))
-		pte = pte_mkdirty(pte);
+		pte = set_pte_bit(pte, __pgprot(PTE_DIRTY));

 	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
 	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
@@ -675,8 +675,15 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 			      PTE_PROT_NONE | PTE_VALID | PTE_WRITE | PTE_GP;
 	/* preserve the hardware dirty information */
 	if (pte_hw_dirty(pte))
-		pte = pte_mkdirty(pte);
+		pte = set_pte_bit(pte, __pgprot(PTE_DIRTY));

 	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
+	/*
+	 * If we end up clearing hw dirtiness for a sw-dirty PTE, set hardware
+	 * dirtiness again.
+	 */
+	if (pte_sw_dirty(pte))
+		pte = pte_mkdirty(pte);
 	return pte;
 }
3 changes: 0 additions & 3 deletions arch/arm64/kernel/setup.c
@@ -353,9 +353,6 @@ void __init setup_arch(char **cmdline_p)
 	smp_init_cpus();
 	smp_build_mpidr_hash();

-	/* Init percpu seeds for random tags after cpus are set up. */
-	kasan_init_sw_tags();
-
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Make sure init_thread_info.ttbr0 always generates translation
2 changes: 2 additions & 0 deletions arch/arm64/kernel/smp.c
@@ -474,6 +474,8 @@ void __init smp_prepare_boot_cpu(void)
 	init_gic_priority_masking();

 	kasan_init_hw_tags();
+	/* Init percpu seeds for random tags after cpus are set up. */
+	kasan_init_sw_tags();
 }

 static u64 __init of_get_cpu_mpidr(struct device_node *dn)
3 changes: 2 additions & 1 deletion arch/arm64/mm/mmu.c
@@ -1434,7 +1434,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
 		__remove_pgd_mapping(swapper_pg_dir,
 				     __phys_to_virt(start), size);
 	else {
-		max_pfn = PFN_UP(start + size);
+		/* Address of hotplugged memory can be smaller */
+		max_pfn = max(max_pfn, PFN_UP(start + size));
 		max_low_pfn = max_pfn;
 	}

4 changes: 0 additions & 4 deletions arch/powerpc/include/asm/pgtable.h
@@ -68,13 +68,9 @@ static inline void mark_initmem_nx(void) { }
 #define is_ioremap_addr is_ioremap_addr
 static inline bool is_ioremap_addr(const void *x)
 {
-#ifdef CONFIG_MMU
 	unsigned long addr = (unsigned long)x;

 	return addr >= IOREMAP_BASE && addr < IOREMAP_END;
-#else
-	return false;
-#endif
 }
 #endif /* CONFIG_PPC64 */

5 changes: 2 additions & 3 deletions arch/powerpc/mm/init_64.c
@@ -198,7 +198,7 @@ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long start
 	unsigned long nr_pfn = page_size / sizeof(struct page);
 	unsigned long start_pfn = page_to_pfn((struct page *)start);

-	if ((start_pfn + nr_pfn) > altmap->end_pfn)
+	if ((start_pfn + nr_pfn - 1) > altmap->end_pfn)
 		return true;

 	if (start_pfn < altmap->base_pfn)
@@ -305,8 +305,7 @@ void __ref vmemmap_free(unsigned long start, unsigned long end,
 	start = ALIGN_DOWN(start, page_size);
 	if (altmap) {
 		alt_start = altmap->base_pfn;
-		alt_end = altmap->base_pfn + altmap->reserve +
-			  altmap->free + altmap->alloc + altmap->align;
+		alt_end = altmap->base_pfn + altmap->reserve + altmap->free;
 	}

 	pr_debug("vmemmap_free %lx...%lx\n", start, end);
1 change: 0 additions & 1 deletion arch/powerpc/mm/pgtable_64.c
@@ -99,7 +99,6 @@ EXPORT_SYMBOL(__vmalloc_end);
 unsigned long __kernel_io_start;
 EXPORT_SYMBOL(__kernel_io_start);
 unsigned long __kernel_io_end;
-EXPORT_SYMBOL(__kernel_io_end);
 struct page *vmemmap;
 EXPORT_SYMBOL(vmemmap);
 unsigned long __pte_frag_nr;
4 changes: 3 additions & 1 deletion arch/powerpc/platforms/pseries/lpar.c
@@ -521,8 +521,10 @@ static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,

 	if (cmd) {
 		rc = init_cpu_associativity();
-		if (rc)
+		if (rc) {
+			destroy_cpu_associativity();
 			goto out;
+		}

 		for_each_possible_cpu(cpu) {
 			disp = per_cpu_ptr(&vcpu_disp_data, cpu);
9 changes: 7 additions & 2 deletions arch/s390/include/asm/extable.h
@@ -71,8 +71,13 @@ static inline void swap_ex_entry_fixup(struct exception_table_entry *a,
 {
 	a->fixup = b->fixup + delta;
 	b->fixup = tmp.fixup - delta;
-	a->handler = b->handler + delta;
-	b->handler = tmp.handler - delta;
+	a->handler = b->handler;
+	if (a->handler)
+		a->handler += delta;
+	b->handler = tmp.handler;
+	if (b->handler)
+		b->handler -= delta;
 }
 #define swap_ex_entry_fixup swap_ex_entry_fixup

 #endif
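Why the added NULL checks matter: in the s390 relative extable format a handler offset of 0 means "no handler", so applying the sort delta to it would turn the sentinel into a bogus offset, while real offsets still need rebasing when entries move. A minimal userspace illustration of that invariant (editor's sketch with simplified types and made-up numbers, not the kernel's actual sort code):

#include <assert.h>

/* Rebase a relative offset when an extable entry moves by `delta`
 * bytes during sorting, preserving the 0 == "no handler" sentinel. */
static int rebase_rel(int rel, int delta)
{
	return rel ? rel + delta : 0;
}

int main(void)
{
	assert(rebase_rel(64, 16) == 80);	/* real offset moves with the entry */
	assert(rebase_rel(0, 16) == 0);		/* sentinel stays 0, not 16 */
	return 0;
}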
2 changes: 1 addition & 1 deletion arch/s390/mm/gmap.c
@@ -2652,7 +2652,7 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
 		return 0;

 	start = pmd_val(*pmd) & HPAGE_MASK;
-	end = start + HPAGE_SIZE - 1;
+	end = start + HPAGE_SIZE;
 	__storage_key_init_range(start, end);
 	set_bit(PG_arch_1, &page->flags);
 	cond_resched();
2 changes: 1 addition & 1 deletion arch/s390/mm/hugetlbpage.c
@@ -146,7 +146,7 @@ static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
 	}

 	if (!test_and_set_bit(PG_arch_1, &page->flags))
-		__storage_key_init_range(paddr, paddr + size - 1);
+		__storage_key_init_range(paddr, paddr + size);
 }

 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
2 changes: 1 addition & 1 deletion arch/x86/include/asm/tlbflush.h
@@ -571,7 +571,7 @@ struct flush_tlb_info {
 	flush_tlb_mm_range((vma)->vm_mm, start, end,		\
 			   ((vma)->vm_flags & VM_HUGETLB)	\
 				? huge_page_shift(hstate_vma(vma)) \
-				: PAGE_SHIFT, false)
+				: PAGE_SHIFT, true)

 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
8 changes: 4 additions & 4 deletions arch/x86/mm/kaslr.c
@@ -182,11 +182,11 @@ static void __meminit init_trampoline_pud(void)
 		set_p4d(p4d_tramp,
 			__p4d(_KERNPG_TABLE | __pa(pud_page_tramp)));

-		set_pgd(&trampoline_pgd_entry,
-			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
+		trampoline_pgd_entry =
+			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp));
 	} else {
-		set_pgd(&trampoline_pgd_entry,
-			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
+		trampoline_pgd_entry =
+			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp));
 	}
 }
2 changes: 1 addition & 1 deletion arch/x86/mm/pat/cpa-test.c
@@ -184,7 +184,7 @@ static int pageattr_test(void)
 			break;

 		case 1:
-			err = change_page_attr_set(addrs, len[i], PAGE_CPA_TEST, 1);
+			err = change_page_attr_set(addrs, len[i], PAGE_CPA_TEST, 1);
 			break;

 		case 2:
97 changes: 97 additions & 0 deletions ciq/ciq_backports/kernel-4.18.0-553.81.1.el8_10/287d5fed.failed
@@ -0,0 +1,97 @@
mm: memcg: use larger batches for proactive reclaim

jira LE-4623
Rebuild_History Non-Buildable kernel-4.18.0-553.81.1.el8_10
commit-author T.J. Mercier <tjmercier@google.com>
commit 287d5fedb377ddc232b216b882723305b27ae31a
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-4.18.0-553.81.1.el8_10/287d5fed.failed

Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
reclaim") we passed the number of pages for the reclaim request directly
to try_to_free_mem_cgroup_pages, which could lead to significant
overreclaim. After 0388536ac291 the number of pages was limited to a
maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
However such a small batch size caused a regression in reclaim performance
due to many more reclaim start/stop cycles inside memory_reclaim. The
restart cost is amortized over more pages with larger batch sizes, and
becomes a significant component of the runtime if the batch size is too
small.

Reclaim tries to balance nr_to_reclaim fidelity with fairness across nodes
and cgroups over which the pages are spread. As such, the bigger the
request, the bigger the absolute overreclaim error. Historic in-kernel
users of reclaim have used fixed, small sized requests to approach an
appropriate reclaim rate over time. When we reclaim a user request of
arbitrary size, use decaying batch sizes to manage error while maintaining
reasonable throughput.

MGLRU enabled - memcg LRU used
root - full reclaim pages/sec time (sec)
pre-0388536ac291 : 68047 10.46
post-0388536ac291 : 13742 inf
(reclaim-reclaimed)/4 : 67352 10.51

MGLRU enabled - memcg LRU not used
/uid_0 - 1G reclaim pages/sec time (sec) overreclaim (MiB)
pre-0388536ac291 : 258822 1.12 107.8
post-0388536ac291 : 105174 2.49 3.5
(reclaim-reclaimed)/4 : 233396 1.12 -7.4

MGLRU enabled - memcg LRU not used
/uid_0 - full reclaim pages/sec time (sec)
pre-0388536ac291 : 72334 7.09
post-0388536ac291 : 38105 14.45
(reclaim-reclaimed)/4 : 72914 6.96

[tjmercier@google.com: v4]
Link: https://lkml.kernel.org/r/20240206175251.3364296-1-tjmercier@google.com
Link: https://lkml.kernel.org/r/20240202233855.1236422-1-tjmercier@google.com
Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
Signed-off-by: T.J. Mercier <tjmercier@google.com>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Koutny <mkoutny@suse.com>
Acked-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Efly Young <yangyifei03@kuaishou.com>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 287d5fedb377ddc232b216b882723305b27ae31a)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>

# Conflicts:
# mm/memcontrol.c
diff --cc mm/memcontrol.c
index e3d7fee03b47,cb216d30a221..000000000000
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@@ -6753,7 -6979,10 +6753,9 @@@ static ssize_t memory_reclaim(struct ke
if (err)
return err;

- reclaim_options = MEMCG_RECLAIM_MAY_SWAP | MEMCG_RECLAIM_PROACTIVE;
while (nr_reclaimed < nr_to_reclaim) {
+ /* Will converge on zero, but reclaim enforces a minimum */
+ unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
unsigned long reclaimed;

if (signal_pending(current))
@@@ -6768,8 -6997,7 +6770,12 @@@
lru_add_drain_all();

reclaimed = try_to_free_mem_cgroup_pages(memcg,
++<<<<<<< HEAD
+ min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
+ GFP_KERNEL, true);
++=======
+ batch_size, GFP_KERNEL, reclaim_options);
++>>>>>>> 287d5fedb377 (mm: memcg: use larger batches for proactive reclaim)

if (!reclaimed && !nr_retries--)
return -EAGAIN;
* Unmerged path mm/memcontrol.c
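For readers of the failed hunk above: the upstream side replaces the fixed SWAP_CLUSTER_MAX request per iteration with a batch that decays as the request completes. A toy userspace simulation of that convergence (editor's sketch; the 1 GiB target is a made-up input, and real reclaim rarely frees exactly what is asked for):

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL	/* reclaim's minimum batch, in pages */

int main(void)
{
	unsigned long nr_to_reclaim = 262144;	/* 1 GiB of 4 KiB pages */
	unsigned long nr_reclaimed = 0;
	int passes = 0;

	while (nr_reclaimed < nr_to_reclaim) {
		/* Will converge on zero, but reclaim enforces a minimum */
		unsigned long batch = (nr_to_reclaim - nr_reclaimed) / 4;

		if (batch < SWAP_CLUSTER_MAX)
			batch = SWAP_CLUSTER_MAX;
		nr_reclaimed += batch;	/* pretend reclaim frees the full batch */
		passes++;
	}
	printf("done in %d passes, reclaimed %lu pages\n", passes, nr_reclaimed);
	return 0;
}

With a decaying batch the loop finishes in a few dozen passes instead of the thousands that fixed 32-page batches would need, which is the restart overhead the commit message measures.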