@@ -364,19 +364,19 @@ follows:
 
 2) for querying the policy, we do not need to take an extra reference on the
    target task's task policy nor vma policies because we always acquire the
-   task's mm's mmap_sem for read during the query. The set_mempolicy() and
-   mbind() APIs [see below] always acquire the mmap_sem for write when
+   task's mm's mmap_lock for read during the query. The set_mempolicy() and
+   mbind() APIs [see below] always acquire the mmap_lock for write when
    installing or replacing task or vma policies. Thus, there is no possibility
    of a task or thread freeing a policy while another task or thread is
    querying it.
 
 3) Page allocation usage of task or vma policy occurs in the fault path where
-   we hold them mmap_sem for read. Again, because replacing the task or vma
-   policy requires that the mmap_lock be held for write, the policy can't be
+   we hold the mmap_lock for read. Again, because replacing the task or vma
+   policy requires that the mmap_lock be held for write, the policy can't be
    freed out from under us while we're using it for page allocation.
 
 4) Shared policies require special consideration. One task can replace a
-   shared memory policy while another task, with a distinct mmap_sem, is
+   shared memory policy while another task, with a distinct mmap_lock, is
    querying or allocating a page based on the policy. To resolve this
    potential race, the shared policy infrastructure adds an extra reference
    to the shared policy during lookup while holding a spin lock on the shared