mm: handle lru_add_drain_all for UP properly [Linux 5.0]

The Linux kernel change "mm: handle lru_add_drain_all for UP properly" is included in the Linux 5.0 release. It was authored by Michal Hocko <mhocko [at] suse.com> on Wed Feb 20 22:19:54 2019 -0800. The commit in the Linux stable tree is 6ea183d (patch).

mm: handle lru_add_drain_all for UP properly

Since for_each_cpu(cpu, mask) added by commit 2d3854a37e8b767a
("cpumask: introduce new API, without changing anything") did not
evaluate the mask argument if NR_CPUS == 1 due to CONFIG_SMP=n,
lru_add_drain_all() is hitting WARN_ON() at __flush_work() added by
commit 4d43d395fed12463 ("workqueue: Try to catch flush_work() without
INIT_WORK().") by unconditionally calling flush_work() [1].

Workaround this issue by using CONFIG_SMP=n specific lru_add_drain_all
implementation.  There is no real need to defer the implementation to
the workqueue as the draining is going to happen on the local cpu.  So
alias lru_add_drain_all to lru_add_drain which does all the necessary
work.

[[email protected]: fix various build warnings]
[1] https://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Reported-by: Guenter Roeck <[email protected]>
Debugged-by: Tetsuo Handa <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

This change adds or deletes 17 lines of Linux source code. The code changes to the Linux kernel are as follows.

 mm/swap.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 4929bc1..4d7d37e 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -320,11 +320,6 @@ static inline void activate_page_drain(int cpu)
 {
 }

-static bool need_activate_page_drain(int cpu)
-{
-   return false;
-}
-
 void activate_page(struct page *page)
 {
    struct zone *zone = page_zone(page);
@@ -653,13 +648,15 @@ void lru_add_drain(void)
    put_cpu();
 }

+#ifdef CONFIG_SMP
+
+static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
+
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
    lru_add_drain();
 }

-static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
-
 /*
  * Doesn't need any cpu hotplug locking because we do rely on per-cpu
  * kworkers being shut down before our page_alloc_cpu_dead callback is
@@ -702,6 +699,12 @@ void lru_add_drain_all(void)

    mutex_unlock(&lock);
 }
+#else
+void lru_add_drain_all(void)
+{
+   lru_add_drain();
+}
+#endif

 /**
  * release_pages - batched put_page()
