
Commit c1e8d7c

walken-google authored and torvalds committed
mmap locking API: convert mmap_sem comments
Convert comments that reference mmap_sem to reference mmap_lock instead.

[akpm@linux-foundation.org: fix up linux-next leftovers]
[akpm@linux-foundation.org: s/lockaphore/lock/, per Vlastimil]
[akpm@linux-foundation.org: more linux-next fixups, per Michel]

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-13-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 3e4e28c commit c1e8d7c
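
Background for anyone browsing this commit in isolation: earlier patches in this series wrap the old down_read(&mm->mmap_sem)/up_read(&mm->mmap_sem) calls behind a dedicated mmap locking API, and this patch converts only the comments to the new name (the struct field itself is renamed by a sibling patch in the series). A rough, illustrative paraphrase of those wrappers — see include/linux/mmap_lock.h in the series for the real definitions:

    /* Illustrative paraphrase of the mmap locking API wrappers: only the
     * name changes from mmap_sem to mmap_lock; the underlying rwsem
     * semantics stay the same.
     */
    #include <linux/mm_types.h>
    #include <linux/rwsem.h>

    static inline void mmap_write_lock(struct mm_struct *mm)
    {
            down_write(&mm->mmap_lock);
    }

    static inline void mmap_write_unlock(struct mm_struct *mm)
    {
            up_write(&mm->mmap_lock);
    }

    static inline void mmap_read_lock(struct mm_struct *mm)
    {
            down_read(&mm->mmap_lock);
    }

    static inline void mmap_read_unlock(struct mm_struct *mm)
    {
            up_read(&mm->mmap_lock);
    }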

File tree: 113 files changed, +351 −352 lines. Large commits have some content hidden by default; only a subset of the changed files is shown below.


Documentation/admin-guide/mm/numa_memory_policy.rst

Lines changed: 5 additions & 5 deletions

@@ -364,19 +364,19 @@ follows:
 
 2) for querying the policy, we do not need to take an extra reference on the
    target task's task policy nor vma policies because we always acquire the
-   task's mm's mmap_sem for read during the query. The set_mempolicy() and
-   mbind() APIs [see below] always acquire the mmap_sem for write when
+   task's mm's mmap_lock for read during the query. The set_mempolicy() and
+   mbind() APIs [see below] always acquire the mmap_lock for write when
    installing or replacing task or vma policies. Thus, there is no possibility
    of a task or thread freeing a policy while another task or thread is
    querying it.
 
 3) Page allocation usage of task or vma policy occurs in the fault path where
-   we hold them mmap_sem for read. Again, because replacing the task or vma
-   policy requires that the mmap_sem be held for write, the policy can't be
+   we hold them mmap_lock for read. Again, because replacing the task or vma
+   policy requires that the mmap_lock be held for write, the policy can't be
    freed out from under us while we're using it for page allocation.
 
 4) Shared policies require special consideration. One task can replace a
-   shared memory policy while another task, with a distinct mmap_sem, is
+   shared memory policy while another task, with a distinct mmap_lock, is
    querying or allocating a page based on the policy. To resolve this
    potential race, the shared policy infrastructure adds an extra reference
    to the shared policy during lookup while holding a spin lock on the shared
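
The invariant items 2) and 3) describe can be compressed into a sketch (not kernel source; the demo_* names are hypothetical):

    /* Sketch of the invariant above: queries hold mmap_lock for read,
     * set_mempolicy()/mbind()-style updates hold it for write, so a
     * policy cannot be freed while a query is inspecting it.
     */
    #include <linux/mm.h>
    #include <linux/mempolicy.h>

    static void demo_query_policy(struct mm_struct *mm,
                                  struct vm_area_struct *vma)
    {
            mmap_read_lock(mm);     /* query side: read mode suffices */
            /* vma->vm_policy cannot be freed out from under us here,
             * because replacing it requires mmap_lock in write mode */
            mmap_read_unlock(mm);
    }

    static void demo_replace_policy(struct mm_struct *mm,
                                    struct vm_area_struct *vma,
                                    struct mempolicy *new)
    {
            mmap_write_lock(mm);    /* update side: excludes all readers */
            vma->vm_policy = new;   /* old policy can now be dropped safely */
            mmap_write_unlock(mm);
    }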

Documentation/admin-guide/mm/userfaultfd.rst

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ memory ranges) provides two primary functionalities:
 The real advantage of userfaults if compared to regular virtual memory
 management of mremap/mprotect is that the userfaults in all their
 operations never involve heavyweight structures like vmas (in fact the
-``userfaultfd`` runtime load never takes the mmap_sem for writing).
+``userfaultfd`` runtime load never takes the mmap_lock for writing).
 
 Vmas are not suitable for page- (or hugepage) granular fault tracking
 when dealing with virtual address spaces that could span
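
For context on the line being edited: userfaultfd routes faults to userspace without touching vmas at fault time, which is why its runtime path never needs mmap_lock in write mode. A minimal userspace registration sketch (illustrative; error handling and the fault-serving read loop are omitted):

    /* Create a userfaultfd and register a range for missing-page
     * tracking; faults in [area, area+len) are then reported as
     * struct uffd_msg events readable from the returned fd.
     */
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int demo_register(void *area, unsigned long len)
    {
            struct uffdio_api api = { .api = UFFD_API };
            struct uffdio_register reg = {
                    .range = { .start = (unsigned long)area, .len = len },
                    .mode = UFFDIO_REGISTER_MODE_MISSING,
            };
            int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

            if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
                ioctl(uffd, UFFDIO_REGISTER, &reg))
                    return -1;
            return uffd;
    }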

Documentation/filesystems/locking.rst

Lines changed: 1 addition & 1 deletion

@@ -615,7 +615,7 @@ prototypes::
 locking rules:
 
 =============  ========   ===========================
-ops            mmap_sem   PageLocked(page)
+ops            mmap_lock  PageLocked(page)
 =============  ========   ===========================
 open:          yes
 close:         yes
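
The rows are the methods of struct vm_operations_struct, and the mmap_lock column records whether each is called with the owning mm's mmap_lock held. A minimal illustrative implementation of the most common method (demo_* names are hypothetical, not from this commit):

    /* .fault is invoked with mmap_lock already held (for read), so it
     * must not take mmap_lock itself. This toy handler maps a single
     * driver-owned kernel page, allocated earlier with get_zeroed_page().
     */
    #include <linux/mm.h>

    static unsigned long demo_kpage;        /* set up at driver init */

    static vm_fault_t demo_vm_fault(struct vm_fault *vmf)
    {
            struct page *page = virt_to_page((void *)demo_kpage);

            get_page(page);         /* reference for the new mapping */
            vmf->page = page;       /* core mm inserts it into the page table */
            return 0;
    }

    static const struct vm_operations_struct demo_vm_ops = {
            .fault = demo_vm_fault,
    };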

Documentation/vm/transhuge.rst

Lines changed: 2 additions & 2 deletions

@@ -98,9 +98,9 @@ split_huge_page() or split_huge_pmd() has a cost.
 
 To make pagetable walks huge pmd aware, all you need to do is to call
 pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
-mmap_sem in read (or write) mode to be sure a huge pmd cannot be
+mmap_lock in read (or write) mode to be sure a huge pmd cannot be
 created from under you by khugepaged (khugepaged collapse_huge_page
-takes the mmap_sem in write mode in addition to the anon_vma lock). If
+takes the mmap_lock in write mode in addition to the anon_vma lock). If
 pmd_trans_huge returns false, you just fallback in the old code
 paths. If instead pmd_trans_huge returns true, you have to take the
 page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
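
The walk this paragraph describes, as a sketch (illustrative; none/bad entry checks are omitted, and the caller is assumed to hold mmap_lock at least in read mode):

    /* Huge-pmd-aware page table walk: recheck pmd_trans_huge() under
     * pmd_lock(), since the pmd can change until that lock is taken.
     */
    #include <linux/mm.h>
    #include <linux/huge_mm.h>

    static void demo_walk(struct mm_struct *mm, unsigned long addr)
    {
            pgd_t *pgd = pgd_offset(mm, addr);
            p4d_t *p4d = p4d_offset(pgd, addr);
            pud_t *pud = pud_offset(p4d, addr);
            pmd_t *pmd = pmd_offset(pud, addr);
            spinlock_t *ptl;

            if (pmd_trans_huge(*pmd)) {
                    ptl = pmd_lock(mm, pmd);        /* page table lock */
                    if (pmd_trans_huge(*pmd)) {
                            /* operate on the huge pmd */
                    }
                    spin_unlock(ptl);
            } else {
                    /* fall back to the pte-level code paths */
            }
    }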

arch/arc/mm/fault.c

Lines changed: 1 addition & 1 deletion

@@ -141,7 +141,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
         }
 
         /*
-         * Fault retry nuances, mmap_sem already relinquished by core mm
+         * Fault retry nuances, mmap_lock already relinquished by core mm
         */
        if (unlikely((fault & VM_FAULT_RETRY) &&
                     (flags & FAULT_FLAG_ALLOW_RETRY))) {

arch/arm/kernel/vdso.c

Lines changed: 1 addition & 1 deletion

@@ -240,7 +240,7 @@ static int install_vvar(struct mm_struct *mm, unsigned long addr)
        return PTR_ERR_OR_ZERO(vma);
 }
 
-/* assumes mmap_sem is write-locked */
+/* assumes mmap_lock is write-locked */
 void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
 {
        struct vm_area_struct *vma;
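
The "write-locked" precondition is the caller's job. With the wrappers this series introduces, a caller would look roughly like the sketch below (illustrative; in the kernel the real caller is the arm VDSO setup path):

    #include <linux/mm.h>

    static void demo_setup_vdso(struct mm_struct *mm, unsigned long addr)
    {
            mmap_write_lock(mm);    /* satisfy arm_install_vdso()'s precondition */
            arm_install_vdso(mm, addr);
            mmap_write_unlock(mm);
    }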

arch/arm/mm/fault.c

Lines changed: 1 addition & 1 deletion

@@ -293,7 +293,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
        fault = __do_page_fault(mm, addr, fsr, flags, tsk);
 
        /* If we need to retry but a fatal signal is pending, handle the
-        * signal first. We do not need to release the mmap_sem because
+        * signal first. We do not need to release the mmap_lock because
         * it would already be released in __lock_page_or_retry in
         * mm/filemap.c. */
        if (fault_signal_pending(fault, regs)) {
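
The nuance these fault-path comments keep restating: when the core mm returns VM_FAULT_RETRY, or when fault_signal_pending() reports a fatal signal, mmap_lock has already been dropped on the handler's behalf. A condensed, architecture-neutral skeleton of that flow (illustrative; real handlers differ per architecture, and the three-argument handle_mm_fault() signature is the one of this kernel era):

    #include <linux/mm.h>
    #include <linux/sched/signal.h>

    static void demo_do_page_fault(struct mm_struct *mm, unsigned long addr,
                                   struct pt_regs *regs)
    {
            unsigned int flags = FAULT_FLAG_DEFAULT;
            struct vm_area_struct *vma;
            vm_fault_t fault;

    retry:
            mmap_read_lock(mm);
            vma = find_vma(mm, addr);
            if (!vma || vma->vm_start > addr) {
                    mmap_read_unlock(mm);
                    return;         /* bad-area path (SIGSEGV) */
            }

            fault = handle_mm_fault(vma, addr, flags);

            /* In both cases below, mmap_lock was already released by core mm: */
            if (fault_signal_pending(fault, regs))
                    return;
            if (fault & VM_FAULT_RETRY) {
                    flags |= FAULT_FLAG_TRIED;
                    goto retry;
            }

            mmap_read_unlock(mm);
    }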

arch/ia64/mm/fault.c

Lines changed: 1 addition & 1 deletion

@@ -86,7 +86,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 #ifdef CONFIG_VIRTUAL_MEM_MAP
        /*
         * If fault is in region 5 and we are in the kernel, we may already
-        * have the mmap_sem (pfn_valid macro is called during mmap). There
+        * have the mmap_lock (pfn_valid macro is called during mmap). There
         * is no vma for region 5 addr's anyway, so skip getting the semaphore
         * and go directly to the exception handling code.
         */

arch/microblaze/mm/fault.c

Lines changed: 1 addition & 1 deletion

@@ -124,7 +124,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
        /* When running in the kernel we expect faults to occur only to
         * addresses in user space. All other faults represent errors in the
         * kernel and should generate an OOPS. Unfortunately, in the case of an
-        * erroneous fault occurring in a code path which already holds mmap_sem
+        * erroneous fault occurring in a code path which already holds mmap_lock
         * we will deadlock attempting to validate the fault against the
         * address space. Luckily the kernel only validly references user
         * space from well defined areas of code, which are listed in the

arch/nds32/mm/fault.c

Lines changed: 1 addition & 1 deletion

@@ -210,7 +210,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 
        /*
         * If we need to retry but a fatal signal is pending, handle the
-        * signal first. We do not need to release the mmap_sem because it
+        * signal first. We do not need to release the mmap_lock because it
         * would already be released in __lock_page_or_retry in mm/filemap.c.
         */
        if (fault_signal_pending(fault, regs)) {
