Conversation

@GhaithCraft
Owner

No description provided.

@GhaithCraft GhaithCraft reopened this Nov 17, 2017
GhaithCraft pushed a commit that referenced this pull request Nov 17, 2017
[ Upstream commit ecf5fc6e9654cd7a268c782a523f072b2f1959f9 ]

Nikolay has reported a hang when a memcg reclaim got stuck with the
following backtrace:

PID: 18308  TASK: ffff883d7c9b0a30  CPU: 1   COMMAND: "rsync"
  #0 __schedule at ffffffff815ab152
  #1 schedule at ffffffff815ab76e
  #2 schedule_timeout at ffffffff815ae5e5
  #3 io_schedule_timeout at ffffffff815aad6a
  #4 bit_wait_io at ffffffff815abfc6
  #5 __wait_on_bit at ffffffff815abda5
  #6 wait_on_page_bit at ffffffff8111fd4f
  #7 shrink_page_list at ffffffff81135445
  #8 shrink_inactive_list at ffffffff81135845
  #9 shrink_lruvec at ffffffff81135ead
 #10 shrink_zone at ffffffff811360c3
 #11 shrink_zones at ffffffff81136eff
 #12 do_try_to_free_pages at ffffffff8113712f
 #13 try_to_free_mem_cgroup_pages at ffffffff811372be
 #14 try_charge at ffffffff81189423
 #15 mem_cgroup_try_charge at ffffffff8118c6f5
 #16 __add_to_page_cache_locked at ffffffff8112137d
 #17 add_to_page_cache_lru at ffffffff81121618
 #18 pagecache_get_page at ffffffff8112170b
 #19 grow_dev_page at ffffffff811c8297
 #20 __getblk_slow at ffffffff811c91d6
 #21 __getblk_gfp at ffffffff811c92c1
 #22 ext4_ext_grow_indepth at ffffffff8124565c
 #23 ext4_ext_create_new_leaf at ffffffff81246ca8
 #24 ext4_ext_insert_extent at ffffffff81246f09
 #25 ext4_ext_map_blocks at ffffffff8124a848
 #26 ext4_map_blocks at ffffffff8121a5b7
 #27 mpage_map_one_extent at ffffffff8121b1fa
 #28 mpage_map_and_submit_extent at ffffffff8121f07b
 #29 ext4_writepages at ffffffff8121f6d5
 #30 do_writepages at ffffffff8112c490
 #31 __filemap_fdatawrite_range at ffffffff81120199
 #32 filemap_flush at ffffffff8112041c
 #33 ext4_alloc_da_blocks at ffffffff81219da1
 #34 ext4_rename at ffffffff81229b91
 #35 ext4_rename2 at ffffffff81229e32
 #36 vfs_rename at ffffffff811a08a5
 #37 SYSC_renameat2 at ffffffff811a3ffc
 #38 sys_renameat2 at ffffffff811a408e
 #39 sys_rename at ffffffff8119e51e
 #40 system_call_fastpath at ffffffff815afa89

Dave Chinner has rightly pointed out that this is a deadlock in the
reclaim code, because ext4 doesn't submit pages which are marked
PG_writeback right away.

The heuristic was introduced by commit e62e384e9da8 ("memcg: prevent OOM
with too many dirty pages") and it was applied only when may_enter_fs
was specified.  The code was changed by c3b94f44fcb0 ("memcg:
further prevent OOM with too many dirty pages"), which removed the
__GFP_FS restriction on the reasoning that we do not get into the fs
code.  But this is apparently not sufficient, because the fs doesn't
necessarily submit pages marked PG_writeback for IO right away.

ext4_bio_write_page calls io_submit_add_bh, but that doesn't necessarily
submit the bio.  Instead it tries to map more pages into the bio, and
mpage_map_one_extent might trigger a memcg charge, which might end up
waiting on a page which is marked PG_writeback but hasn't been submitted
yet; we would thus end up waiting for something that never finishes.

Fix this issue by replacing the __GFP_IO check with a may_enter_fs check
(for case 2) before we go to wait on the writeback.  The page fault
path, which is the only path that triggers the memcg OOM killer since
3.12, shouldn't require GFP_NOFS, so we shouldn't reintroduce the
premature OOM killer issue which was originally addressed by the
heuristic.
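
The shape of the change in shrink_page_list() is roughly the following
(an illustrative sketch, not the literal hunk):

    /* Case 2: it is not safe (or not useful) to wait for writeback,
     * e.g. because the allocation cannot enter the fs; mark the page
     * for immediate reclaim and keep scanning instead */
    } else if (global_reclaim(sc) ||
        !PageReclaim(page) || !may_enter_fs) {
            SetPageReclaim(page);
            nr_writeback++;
            goto keep_locked;

    /* Case 3: only here, where may_enter_fs is known to be set, do we
     * block on the writeback bit */
    } else {
            wait_on_page_bit(page, PG_writeback);
    }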

As per Dave Chinner, xfs has been doing a similar thing since 2.6.15
already, so ext4 is not the only affected filesystem.  Moreover he notes:

: For example: IO completion might require unwritten extent conversion
: which executes filesystem transactions and GFP_NOFS allocations. The
: writeback flag on the pages can not be cleared until unwritten
: extent conversion completes. Hence memory reclaim cannot wait on
: page writeback to complete in GFP_NOFS context because it is not
: safe to do so, memcg reclaim or otherwise.

Cc: stable@vger.kernel.org # 3.9+
[tytso@mit.edu: corrected the control flow]
Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
Reported-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Tkkg1994 and others added 23 commits November 17, 2017 16:00
Signed-off-by: djb77 <dwayne.bakewell@gmail.com>
- Thanks to Tkkg1994
This reverts commit 783dc5cde628c51b3c3fdc8b4313b9425d1d0301.
-FIOPS
-MAPLE
-SIO
-SIO PLUS
-TRIPNDROID
-VR
-ZEN
This makes them work better with big.LITTLE setups.  Previously, all big cluster tunables were lost when a cluster went offline.

Signed-off-by: Luca Grifo <lg@linux.com>
Signed-off-by: djb77 <dwayne.bakewell@gmail.com>
Enabling software CRCs on the data blocks can be a significant (30%) performance cost, and for other reasons may not always be desired.
So we allow it to be disabled.
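
A minimal sketch of how such an opt-out is commonly wired up, assuming a
module parameter named use_crc and a hypothetical helper verify_data_crc
(neither name is taken from this commit):

    static bool use_crc = true;
    module_param(use_crc, bool, 0644);
    MODULE_PARM_DESC(use_crc, "enable software CRCs on data blocks");

    /* consulted wherever the core decides to compute/verify data CRCs */
    static void maybe_verify(struct mmc_host *host, struct mmc_data *data)
    {
            if (use_crc)
                    verify_data_crc(host, data);    /* hypothetical helper */
    }
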
Squashed commit of the following:

commit f49e14ccdcb6694ed27754e020057d27a8fcca07
Author: Andrei F <luxneb@gmail.com>
Date:   Thu Nov 26 22:40:38 2015 +0100

    elevator: Fix a race in elevator switching

    commit d50235b7bc3ee0a0427984d763ea7534149531b4 upstream.

    There's a race between elevator switching and normal io operation:
    the allocation of struct elevator_queue and struct elevator_data is
    not done as one atomic operation, so there is a window in which a
    NULL ->elevator_data can be observed.
    For example:
        Thread A:                               Thread B
        blk_queue_bio                           elevator_switch
        spin_lock_irq(q->queue_lock)            elevator_alloc
        elv_merge                               elevator_init_fn

    elevator_alloc is called without holding queue_lock, so while
    ->elevator_data is still NULL thread A can call elv_merge, which
    needs information from elevator_data, and crash.

    Move elevator_alloc into elevator_init_fn so that both allocations
    are performed as one atomic operation.

    The bug is easy to reproduce with the following:
    1: dd if=/dev/sdb of=/dev/null
    2: while true; do echo noop > scheduler; echo deadline > scheduler; done

    Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Cc: Jonghwan Choi <jhbird.choi@samsung.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
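
    The shape of the fix, sketched here for the noop scheduler (the
    actual patch updates every scheduler's init function the same way):

        static int noop_init_queue(struct request_queue *q, struct elevator_type *e)
        {
                struct noop_data *nd;
                struct elevator_queue *eq;

                /* allocate the elevator_queue inside the init fn ... */
                eq = elevator_alloc(q, e);
                if (!eq)
                        return -ENOMEM;

                nd = kmalloc_node(sizeof(*nd), GFP_KERNEL, q->node);
                if (!nd) {
                        kobject_put(&eq->kobj);
                        return -ENOMEM;
                }
                INIT_LIST_HEAD(&nd->queue);
                eq->elevator_data = nd;

                /* ... so q->elevator and ->elevator_data are published
                 * together under the queue lock, closing the window */
                spin_lock_irq(q->queue_lock);
                q->elevator = eq;
                spin_unlock_irq(q->queue_lock);
                return 0;
        }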

commit daf22a727e64f1277b074442efb821366015ca72
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jul 25 13:45:21 2013 +0300

    block: row: Remove warning massage from add_request

    A regular priority queue is marked as "starved" if it skipped a dispatch
    due to being empty. When a new request is added to a "starved" queue
    it will be marked as urgent.
    The removed WARN_ON was warning about a supposedly impossible case in
    which a regular priority (read) queue was marked as starved but wasn't
    empty. This case is in fact possible:
    if the device driver fetched a read request that is pending for
    transmission and an URGENT request arrives, the fetched read will be
    reinserted back into the scheduler. It's possible that the queue it is
    reinserted into was marked as "starved" in the meanwhile due to being empty.

    CRs-fixed: 517800
    Change-Id: Iaae642ea0ed9c817c41745b0e8ae2217cc684f0c
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit dca47e75f1413d58e4f97ef638e5d4456c55bdce
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Jul 2 14:43:13 2013 +0300

    block: row: change hrtimer_cancel to hrtimer_try_to_cancel

    Calling hrtimer_cancel with interrupts disabled can result in a livelock.
    When flushing the plug list in the block layer, interrupts are disabled
    and an hrtimer is used when adding requests from that plug list to the
    scheduler. In this code flow, if the hrtimer (which is used for idling)
    is set, it is canceled by calling hrtimer_cancel. hrtimer_cancel performs
    the following in an endless loop:
    1. try to cancel the timer
    2. if that fails - cpu_relax() and retry
    The cancellation fails if the timer function has already started; since
    interrupts are disabled, it can never complete.
    This patch reduces the number of times the hrtimer lock is taken while
    interrupts are disabled by calling hrtimer_try_to_cancel, which tries to
    cancel the timer just once and returns an error code if it fails.

    CRs-fixed: 499887
    Change-Id: I25f79c357426d72ad67c261ce7cb503ae97dc7b9
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
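
    The resulting pattern is roughly the following (field names here are
    assumptions, not necessarily the ROW structure members):

        /* try exactly once; never spin with interrupts disabled */
        int ret = hrtimer_try_to_cancel(&rd->idle_timer);

        if (ret < 0)
                /* -1: the callback is running; give up rather than livelock */
                pr_debug("row: idle timer racing with its handler\n");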

commit a6047b9d808eaa787e4df3107bea7536334856cd
Author: Lee Susman <lsusman@codeaurora.org>
Date:   Sun Jun 23 16:27:40 2013 +0300

    block: row-iosched idling triggered by readahead pages

    In the current implementation idling is triggered only by request
    insertion frequency. This heuristic is not very accurate and may hit
    random requests that shouldn't trigger idling. This patch uses the
    PG_readahead flag in struct page's flags, which indicates that the page
    is part of a readahead window, to start idling upon dispatch of a request
    associated with a readahead page.

    The above readahead flag is used together with the existing
    insertion-frequency trigger. The frequency timer will catch read requests
    which are not part of a readahead window but are still part of a
    sequential stream (and therefore dispatched in small time intervals).

    Change-Id: Icb7145199c007408de3f267645ccb842e051fd00
    Signed-off-by: Lee Susman <lsusman@codeaurora.org>
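
    A sketch of how a dispatched request can be classified (the helper
    name is hypothetical; PG_readahead is tested via the standard
    PageReadahead() macro):

        static bool rq_in_readahead_window(struct request *rq)
        {
                struct bio *bio = rq->bio;

                /* inspect the first page of the request's first bio */
                if (bio && bio->bi_io_vec && bio->bi_io_vec[0].bv_page)
                        return PageReadahead(bio->bi_io_vec[0].bv_page);
                return false;
        }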

commit e70e4e8e1d1f111023dd2b2d0fc9237240cab9ab
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Wed May 1 14:35:20 2013 +0300

    block: urgent: Fix dispatching of URGENT mechanism

    There are cases when blk_peek_request is called not from blk_fetch_request,
    so an URGENT request may be started while the flag q->dispatched_urgent is
    not updated.

    Change-Id: I4fb588823f1b2949160cbd3907f4729767932e12
    CRs-fixed: 471736
    CRs-fixed: 473036
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 0e36870f6a436840eed1782d0e85b4adb300b59f
Author: Maya Erez <merez@codeaurora.org>
Date:   Sun Apr 14 15:19:52 2013 +0300

    block: row: Fix starvation tolerance values

    The current starvation tolerance values increase the boot time
    since high priority SW requests are delayed by regular priority requests.
    In order to overcome this, increase the starvation tolerance values.

    Change-Id: I9947fca9927cbd39a1d41d4bd87069df679d3103
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
    Signed-off-by: Maya Erez <merez@codeaurora.org>

commit 3cab8d28e735fdad300eda3bed703129ba05d70a
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Apr 11 14:57:15 2013 +0300

    block: urgent request: Update dispatch_urgent in case of requeue/reinsert

    The block layer implements a mechanism for verifying that the device
    driver won't be notified of an URGENT request if there is already an
    URGENT request in flight. This is due to the fact that interrupting an
    URGENT request isn't efficient.
    This patch fixes the above described mechanism in case the URGENT request
    was returned back to the block layer for some reason: by requeue or
    reinsert.

    CRs-fixed: 473376, 473036, 471736
    Change-Id: Ie8b8208230a302d4526068531616984825f1050d
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit e052e4574bb928b44e660b9679d23e14011b0b9d
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Mar 21 11:04:02 2013 +0200

    block: row: Update sysfs functions

    All ROW (time related) configurable parameters are stored in ms so there
    is no need to convert from/to ms when reading/updating them via sysfs.

    Change-Id: Ib6a1de54140b5d25696743da944c076dd6fc02ae
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
    	block/row-iosched.c

commit 2c3203650c2109c18abb3b17a5114d54bb22e683
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Mar 21 13:02:07 2013 +0200

    block: row: Prevent starvation of regular priority by high priority

    At the moment all REGULAR and LOW priority requests are starved as long as
    there are HIGH priority requests to dispatch.
    This patch prevents the above starvation by setting a starvation limit that
    the REGULAR/LOW priority requests can tolerate.

    Change-Id: Ibe24207982c2c55d75c0b0230f67e013d1106017
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit a5434f618d395a03fe19ef430a8c5747bad069f9
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Mar 12 21:02:33 2013 +0200

    block: urgent request: remove unnecessary urgent marking

    An urgent request is marked by the scheduler in rq->cmd_flags with the
    REQ_URGENT flag. There is no need to add an additional marking by
    the block layer.

    Change-Id: I05d5e9539d2f6c1bfa80240b0671db197a5d3b3f
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 3928fb74c2f78578c57913938644acb704b77586
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Mar 12 21:17:18 2013 +0200

    block: row: Re-design urgent request notification mechanism

    When ROW scheduler reports to the block layer that there is an urgent
    request pending, the device driver may decide to stop the transmission
    of the current request in order to handle the urgent one. This is done
    in order to reduce the latency of an urgent request. For example:
    long WRITE may be stopped to handle an urgent READ.

    This patch updates the ROW URGENT notification policy to apply with the
    below:

    - Don't notify URGENT if there is an un-completed URGENT request in driver
    - After notifying that URGENT request is present, the next request
      dispatched is the URGENT one.
    - At any given moment only 1 request can be marked as URGENT,
      independent of its location (driver or scheduler).

    Other changes to URGENT policy:
    - Only READ queues are allowed to notify of an URGENT request pending.

    CR fix:
    If a pending urgent request (A) gets merged with another request (B)
    A is removed from scheduler queue but is not removed from
    rd->pending_urgent_rq.

    CRs-Fixed: 453712
    Change-Id: I321e8cf58e12a05b82edd2a03f52fcce7bc9a900
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 8912aa92e3d919ceabc72b2eddc829fc5e4bd7eb
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 24 16:17:27 2013 +0200

    block: row: Update initial values of ROW data structures

    This patch sets the initial values of internal ROW
    parameters.

    Change-Id: I38132062a7fcbe2e58b9cc757e55caac64d013dc
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
    [smuckle@codeaurora.org: ported from msm-3.7]
    Signed-off-by: Steve Muckle <smuckle@codeaurora.org>

commit b709e1a8a56784cb83c2c31a4e7df574a6b29802
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 24 15:08:40 2013 +0200

    block: row: Don't notify URGENT if there are un-completed urgent req

    When ROW scheduler reports to the block layer that there is an urgent
    request pending, the device driver may decide to stop the transmission
    of the current request in order to handle the urgent one. If the
    currently transmitted request is itself urgent, we don't want it to be
    stopped.
    Due to the above, the ROW scheduler won't notify of an urgent request if
    there are urgent requests in flight.

    Change-Id: I2fa186d911b908ec7611682b378b9cdc48637ac7
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit eba966603cc8e6f8fb418bf702f5a6eca5f56f34
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 24 04:01:59 2013 +0200

    block: add REQ_URGENT to request flags

    This patch adds a new flag to be used in the cmd_flags field of struct
    request for marking a request as urgent.
    An urgent request is one that the device driver should give priority
    over the currently handled (regular) request. The decision about a
    request's urgency is taken by the scheduler.

    Change-Id: Ic20470987ef23410f1d0324f96f00578f7df8717
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
    	include/linux/blk_types.h
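
    The addition follows the usual cmd_flags shape (a sketch of the
    blk_types.h change; the exact bit position depends on the tree):

        /* in enum rq_flag_bits: */
        __REQ_URGENT,           /* urgent request, set by the scheduler */

        /* and the corresponding mask: */
        #define REQ_URGENT      (1ULL << __REQ_URGENT)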

commit 7c865ab1a9ae626d023d0b03ed7fbe5c57bcbe7c
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 17 20:56:07 2013 +0200

    block: row: Idling mechanism re-factoring

    At the moment idling in ROW is implemented by a delayed work item that
    uses jiffies granularity, which is not very accurate. This patch replaces
    the current idling mechanism implementation with the hrtimer API, which
    gives nanosecond resolution (instead of jiffies).

    Change-Id: I86c7b1776d035e1d81571894b300228c8b8f2d92
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 72ea1d39c04734bf5eb52117968704148d2da42f
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Wed Jan 23 17:15:49 2013 +0200

    block: row: Dispatch requests according to their io-priority

    This patch implements "application-hints", a way for the issuing
    application to notify the scheduler of the priority of its request.
    This is done by setting the io-priority of the request.
    This patch reuses the already existing mechanism of io-priorities
    developed for CFQ. Please refer to Documentation/block/ioprio.txt for
    usage examples and explanations.

    Change-Id: I228ec8e52161b424242bb7bb133418dc8b73925a
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
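
    For example, an application can hint the priority of its subsequent
    I/O with the standard ioprio_set syscall (values as documented in
    Documentation/block/ioprio.txt):

        #include <unistd.h>
        #include <sys/syscall.h>

        #define IOPRIO_CLASS_SHIFT 13
        #define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))
        #define IOPRIO_CLASS_RT    1    /* real-time class */
        #define IOPRIO_WHO_PROCESS 1

        int main(void)
        {
                /* mark this process' future I/O as real-time, level 0 */
                syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                        IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 0));
                /* ... reads issued from here on are dispatched with
                 * high priority by schedulers that honor ioprio */
                return 0;
        }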

commit 9f8f3d2757788477656b1d25a3055ae11d97cee4
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sat Jan 12 16:23:18 2013 +0200

    block: row: Aggregate row_queue parameters to one structure

    Each ROW queue has several parameters whose default values are defined
    in separate arrays. This patch aggregates all the default values into
    one array.
    The values in question are:
     - is idling enabled for the queue
     - queue quantum
     - can the queue notify on urgent request

    Change-Id: I3821b0a042542295069b340406a16b1000873ec6
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit d84ad45f3077661cab5984cd2fb7d5ef2ff06e39
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sat Jan 12 16:21:47 2013 +0200

    block: row: fix sysfs functions - idle_time conversion

    idle_time was updated to be stored in msec instead of jiffies, so there
    is no need to convert the value when reading it from the user or
    displaying it back.

    Change-Id: I58e074b204e90a90536d32199ac668112966e9cf
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 202b21e9daf7b8a097f97f764bb4ad4712c75fa7
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sat Jan 12 16:21:12 2013 +0200

    block: row: Insert dispatch_quantum into struct row_queue

    There is really no point in keeping the dispatch quantum
    of a queue outside of it. By moving it into the row_queue
    structure we spare an extra level of indirection when accessing it.

    Change-Id: Ic77571818b643e71f9aafbb2ca93d0a92158b199
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 58ca84f091faa6ff8c4f567b158be5d38f9a5c58
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sun Jan 13 22:04:59 2013 +0200

    block: row: Add some debug information on ROW queues

    1. Add a counter for the number of requests on a queue.
    2. Add a function to print queue status (the number of requests
       currently on the queue and the number of requests already dispatched
       in the current dispatch cycle).

    Change-Id: I1e98b9ca33853e6e6a8ddc53240f6cd6981e6024
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 1bbb2c7ada5a647cab1f2306458d6cf9b821ddf7
Author: Subhash Jadavani <subhashj@codeaurora.org>
Date:   Thu Jan 10 02:15:13 2013 +0530

    block: blk-merge: don't merge the pages with non-contiguous descriptors

    blk_rq_map_sg() merges physically contiguous pages into the same
    scatter-gather node without checking whether their page descriptors are
    contiguous as well.

    When dma_map_sg() is later called on the scatter-gather list, it takes
    the base page pointer from each node (one by one) and iterates through
    all of the pages in the same sg node by incrementing that base page
    pointer, on the assumption that physically contiguous pages have
    contiguous page descriptor addresses. That assumption may not hold when
    SPARSEMEM is enabled, so we may end up referring to an invalid page
    descriptor.

    The following table shows an example of physically contiguous pages
    whose page descriptor addresses are not contiguous.
    -------------------------------------------
    | Page Descriptor    |   Physical Address |
    -------------------------------------------
    | 0xc1e43fdc         |   0xdffff000       |
    | 0xc2052000         |   0xe0000000       |
    -------------------------------------------

    With this patch, the relevant blk-merge functions also check whether
    physically contiguous pages have contiguous page descriptor addresses;
    if not, the pages are placed in separate scatter-gather nodes.

    CRs-Fixed: 392141
    Change-Id: I3601565e5569a69f06fb3af99061c4d4c23af241
    Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>

    Conflicts:
    	block/blk-merge.c
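
    The essence of the added check, sketched (the helper name is
    hypothetical):

        /* pages may share an sg node only if both the physical addresses
         * and the struct page descriptors are contiguous; with SPARSEMEM
         * the latter can fail even when the former holds */
        static bool pages_sg_mergeable(struct page *prev, struct page *cur)
        {
                return page_to_phys(cur) == page_to_phys(prev) + PAGE_SIZE &&
                       cur == prev + 1;
        }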

commit 9a9b428480c932ef8434d8b9bd3b7bafdcac3f84
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Dec 20 19:23:58 2012 +0200

    row: Add support for urgent request handling

    This patch adds support for handling urgent requests.
    A ROW queue can be marked as "urgent", so that if it was un-served in the
    last dispatch cycle and a request was added to it, it will trigger
    issuing an urgent-request notification to the block device driver.
    The block device driver may choose to stop the transmission of the
    current ongoing request to handle the urgent one. For example: a long
    WRITE may be stopped to handle an urgent READ. This decreases READ latency.

    Change-Id: I84954c13f5e3b1b5caeadc9fe1f9aa21208cb35e
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 8d5ec526b7e70307d3c4ce587b714349f44c0be8
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Dec 6 13:17:19 2012 +0200

    block:row: fix idling mechanism in ROW

    This patch addresses the following issues found in the ROW idling
    mechanism:
    1. Fix the delay passed to queue_delayed_work (pass the actual delay
       and not the time at which to start the work)
    2. Change the idle time and the idling-trigger frequency to be
       HZ dependent (instead of using msec_to_jiffies())
    3. Destroy idle_workqueue() in queue_exit

    Change-Id: If86513ad6b4be44fb7a860f29bd2127197d8d5bf
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
    	block/row-iosched.c

commit c26a95811462b9ba8eca23b4ba2150e7b660ca40
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Oct 30 08:33:06 2012 +0200

    row: Adding support for reinsert already dispatched req

    Add support for reinserting an already dispatched request back into the
    scheduler's internal data structures.
    The request will be reinserted back into the queue (head) it was
    dispatched from, as if it had never been dispatched.

    Change-Id: I70954df300774409c25b5821465fb3aa33d8feb5
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit a1a6f09cae0149d935bcea3f20d4acb6556d68f9
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Dec 4 16:04:15 2012 +0200

    block: Add API for urgent request handling

    This patch adds support in the block & elevator layers for handling
    urgent requests. The decision whether a request is urgent or not is taken
    by the scheduler. Urgent request notification is passed to the underlying
    block device driver (eMMC for example). The block device driver may decide
    to interrupt the currently running low priority request to serve the new
    urgent request. By doing so, READ latency is greatly reduced in read&write
    collision scenarios.

    Note that if the current scheduler doesn't implement the urgent request
    mechanism, this code path is never activated.

    Change-Id: I8aa74b9b45c0d3a2221bd4e82ea76eb4103e7cfa
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
    	block/blk-core.c

commit 4e907d9d6079629d6ce61fbdfb1a629d3587e176
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Dec 4 15:54:43 2012 +0200

    block: Add support for reinsert a dispatched req

    Add support for reinserting a dispatched request back into the
    scheduler's internal data structures.
    This capability is used by the device driver when it chooses to
    interrupt the current request transmission and execute another (more
    urgent) pending request. For example: interrupting a long write in order
    to handle a pending read. The device driver re-inserts the
    remaining write request back into the scheduler, to be rescheduled
    for transmission later on.

    Add API for verifying whether the current scheduler supports the
    reinsert mechanism. If the reinsert mechanism isn't supported by the
    scheduler, this code path will never be activated.

    Change-Id: I5c982a66b651ebf544aae60063ac8a340d79e67f
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
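
    The resulting API shape is roughly the following (treat the exact
    names as assumptions):

        /* re-insert a dispatched request back into the scheduler, at the
         * head of the queue it was dispatched from; 0 on success */
        int blk_reinsert_request(struct request_queue *q, struct request *rq);

        /* does the current scheduler implement the reinsert hook? */
        bool blk_reinsert_req_sup(struct request_queue *q);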

commit 0675c27faab797f7149893b84cc357aadb37c697
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Mon Oct 15 20:56:02 2012 +0200

    block: ROW: Fix forced dispatch

    This patch fixes forced dispatch in the ROW scheduling algorithm.
    When the dispatch function is called with the forced flag on, we
    can't delay the dispatch of the requests that are in the scheduler
    queues. Thus, when dispatch is called with forced turned on, we need
    to cancel idling, or not idle at all.

    Change-Id: I3aa0da33ad7b59c0731c696f1392b48525b52ddc
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit ce6acf59662d1bbe5663a64aef9fe1695b8bbe1b
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Sep 20 10:46:10 2012 +0300

    block: Adding ROW scheduling algorithm

    This patch adds the implementation of a new scheduling algorithm - ROW.
    The policy of this algorithm is to prioritize READ requests over WRITE
    as much as possible without starving the WRITE requests.

    Change-Id: I4ed52ea21d43b0e7c0769b2599779a3d3869c519
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

Signed-off-by: Tkkg1994 <luca.grifo@outlook.com>
Signed-off-by: djb77 <dwayne.bakewell@gmail.com>
Also updated block/Makefile, block/Kconfig.iosched and defconfig files

block: introduce the BFQ-v7r8 I/O sched for 3.18.0

Add the BFQ-v7r8 I/O scheduler to 3.18.0.
The general structure is borrowed from CFQ, as is much of the code for
handling I/O contexts. Over time, several useful features have been
ported from CFQ as well (details in the changelog in README.BFQ). A
(bfq_)queue is associated with each task doing I/O on a device, and each
time a scheduling decision has to be made a queue is selected and served
until it expires.

    - Slices are given in the service domain: tasks are assigned
      budgets, measured in number of sectors. Once it has got the disk, a
      task must however consume its assigned budget within a configurable
      maximum time (by default, the maximum possible value of the
      budgets is automatically computed to comply with this timeout).
      This allows the desired latency vs "throughput boosting" tradeoff
      to be set.

    - Budgets are scheduled according to a variant of WF2Q+, implemented
      using an augmented rb-tree to take eligibility into account while
      preserving an O(log N) overall complexity.

    - A low-latency tunable is provided; if enabled, both interactive
      and soft real-time applications are guaranteed a very low latency.

    - Latency guarantees are preserved also in the presence of NCQ.

    - Also with flash-based devices, a high throughput is achieved
      while still preserving latency guarantees.

    - BFQ features Early Queue Merge (EQM), a sort of fusion of the
      cooperating-queue-merging and the preemption mechanisms present
      in CFQ. EQM is in fact a unified mechanism that tries to get a
      sequential read pattern, and hence a high throughput, with any
      set of processes performing interleaved I/O over a contiguous
      sequence of sectors.

    - BFQ supports full hierarchical scheduling, exporting a cgroups
      interface.  Since each node has a full scheduler, each group can
      be assigned its own weight.

    - If the cgroups interface is not used, only I/O priorities can be
      assigned to processes, with ioprio values mapped to weights
      with the relation weight = IOPRIO_BE_NR - ioprio (see the sketch
      after this list).

    - ioprio classes are served in strict priority order, i.e., lower
      priority queues are not served as long as there are higher
      priority queues.  Among queues in the same class the bandwidth is
      distributed in proportion to the weight of each queue. A very
      thin extra bandwidth is however guaranteed to the Idle class, to
      prevent it from starving.
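
A sketch of the ioprio-to-weight relation mentioned in the list above
(with IOPRIO_BE_NR = 8, best-effort levels 0..7 map to weights 8..1):

    static inline unsigned short bfq_ioprio_to_weight(int ioprio)
    {
            /* lower ioprio value = higher priority = larger weight */
            return IOPRIO_BE_NR - ioprio;
    }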

Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Tkkg1994 <luca.grifo@outlook.com>
Signed-off-by: djb77 <dwayne.bakewell@gmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
GhaithCraft pushed a commit that referenced this pull request Nov 18, 2017
commit a743bbeef27b9176987ec0cb7f906ab0ab52d1da upstream.

The warning below says it all:

  BUG: using __this_cpu_read() in preemptible [00000000] code: swapper/0/1
  caller is __this_cpu_preempt_check
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc8 #4
  Call Trace:
   dump_stack
   check_preemption_disabled
   ? do_early_param
   __this_cpu_preempt_check
   arch_perfmon_init
   op_nmi_init
   ? alloc_pci_root_info
   oprofile_arch_init
   oprofile_init
   do_one_initcall
   ...

These accessors should not have been used in the first place: it is a
PPro, so there are no mixed silicon revisions and it can simply use
boot_cpu_data.
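
The shape of the fix is roughly the following (illustrative, not the
literal hunk):

    /* before: per-cpu accessor trips the preemption check in this
     * preemptible init path */
    model = __this_cpu_read(cpu_info.x86_model);

    /* after: PPro-class hardware has no mixed silicon revisions, so the
     * boot CPU's data is authoritative */
    model = boot_cpu_data.x86_model;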

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengguang Wu <fengguang.wu@intel.com>
Fix-creation-mandated-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Robert Richter <rric@kernel.org>
Cc: x86@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>