  1. Sep 24, 2014
    • percpu_ref: add PERCPU_REF_INIT_* flags · 2aad2a86
      Tejun Heo authored
      
      With the recent addition of percpu_ref_reinit(), percpu_ref now can be
      used as a persistent switch which can be turned on and off repeatedly
      where turning off maps to killing the ref and waiting for it to drain;
      however, there currently isn't a way to initialize a percpu_ref in its
      off (killed and drained) state, which can be inconvenient for certain
      persistent switch use cases.
      
      Similarly, percpu_ref_switch_to_atomic/percpu() allow dynamic
      selection of operation mode; however, currently a newly initialized
      percpu_ref is always in percpu mode making it impossible to avoid the
      latency overhead of switching to atomic mode.
      
      This patch adds @flags to percpu_ref_init() and implements the
      following flags.
      
      * PERCPU_REF_INIT_ATOMIC	: start ref in atomic mode
      * PERCPU_REF_INIT_DEAD		: start ref killed and drained
      
      These flags should be able to serve the above two use cases.
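
      A minimal caller sketch (all names below are hypothetical, not part of the
      patch) of using the new flags together with percpu_ref_reinit() for the
      persistent-switch case:

        #include <linux/percpu-refcount.h>

        static struct percpu_ref sw_ref;

        static void sw_ref_release(struct percpu_ref *ref)
        {
                /* the switch is now off: killed and fully drained */
        }

        static int sw_init(void)
        {
                /* start the ref in the off (killed and drained) state */
                return percpu_ref_init(&sw_ref, sw_ref_release,
                                       PERCPU_REF_INIT_DEAD, GFP_KERNEL);
        }

        static void sw_turn_on(void)
        {
                /* flip the switch back on: ref becomes live again */
                percpu_ref_reinit(&sw_ref);
        }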
      
      v2: target_core_tpg.c conversion was missing.  Fixed.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
  2. Sep 08, 2014
    • percpu-refcount: add @gfp to percpu_ref_init() · a34375ef
      Tejun Heo authored
      
      Percpu allocator now supports allocation mask.  Add @gfp to
      percpu_ref_init() so that !GFP_KERNEL allocation masks can be used
      with percpu_refs too.
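
      For illustration only (the caller and release function below are
      hypothetical), a ref can now be set up from a context where GFP_KERNEL
      is not allowed:

        if (percpu_ref_init(&foo->ref, foo_ref_release, GFP_NOWAIT))
                return -ENOMEM;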
      
      This patch doesn't make any functional difference.
      
      v2: blk-mq conversion was missing.  Updated.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Cc: Jens Axboe <axboe@kernel.dk>
  3. Sep 04, 2014
  4. Sep 02, 2014
    • aio: add missing smp_rmb() in read_events_ring · 2ff396be
      Jeff Moyer authored
      
      We ran into a case on ppc64 running mariadb where io_getevents would
      return zeroed out I/O events.  After adding instrumentation, it became
      clear that there was some missing synchronization between reading the
      tail pointer and the events themselves.  This small patch fixes the
      problem in testing.
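
      Roughly, the ordering the fix enforces in aio_read_events_ring() looks like
      this (simplified sketch; the exact code in fs/aio.c differs in detail):

        ring = kmap_atomic(ctx->ring_pages[0]);
        head = ring->head;
        tail = ring->tail;
        kunmap_atomic(ring);

        /*
         * Ensure the tail read above is ordered before reading the events it
         * points at; without the barrier a stale (zeroed) event can be copied
         * to userspace.
         */
        smp_rmb();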
      
      Thanks to Zach for helping to look into this, and suggesting the fix.
      
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: stable@vger.kernel.org
  5. Aug 24, 2014
    • aio: fix reqs_available handling · d856f32a
      Benjamin LaHaise authored
      As reported by Dan Aloni, commit f8567a38 ("aio: fix aio request
      leak when events are reaped by userspace") introduces a regression when
      user code attempts to perform io_submit() with more events than are
      available in the ring buffer.  Reverting that commit would reintroduce a
      regression when user space event reaping is used.
      
      Fixing this bug is a bit more involved than the previous attempts to fix
      this regression.  Since we do not have a single point at which we can
      count events as being reaped by user space and io_getevents(), we have
      to track event completion by looking at the number of events left in the
      event ring.  So long as there are as many events in the ring buffer as
      there have been completion events generated, we cannot call
      put_reqs_available().  The code to check for this is now placed in
      refill_reqs_available().
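
      A simplified sketch of the accounting idea (condensed from the actual
      fs/aio.c change):

        static void refill_reqs_available(struct kioctx *ctx, unsigned head,
                                          unsigned tail)
        {
                unsigned events_in_ring, completed;

                head %= ctx->nr_events;         /* head is user-writable */
                if (head <= tail)
                        events_in_ring = tail - head;
                else
                        events_in_ring = ctx->nr_events - (head - tail);

                completed = ctx->completed_events;
                if (completed > events_in_ring)
                        completed -= events_in_ring;    /* reaped by userspace */
                else
                        completed = 0;                  /* still sitting in the ring */

                if (completed) {
                        ctx->completed_events -= completed;
                        put_reqs_available(ctx, completed);
                }
        }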
      
      A test program from Dan, modified by me for verifying this bug, is available
      at http://www.kvack.org/~bcrl/20140824-aio_bug.c .
      
      Reported-by: Dan Aloni <dan@kernelim.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Acked-by: Dan Aloni <dan@kernelim.com>
      Cc: Kent Overstreet <kmo@daterainc.com>
      Cc: Mateusz Guzik <mguzik@redhat.com>
      Cc: Petr Matousek <pmatouse@redhat.com>
      Cc: stable@vger.kernel.org      # v3.16 and anything that f8567a38 was backported to
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. Jul 24, 2014
  7. Jul 22, 2014
  8. Jul 14, 2014
  9. Jun 28, 2014
    • percpu-refcount: require percpu_ref to be exited explicitly · 9a1049da
      Tejun Heo authored
      
      Currently, a percpu_ref undoes percpu_ref_init() automatically by
      freeing the allocated percpu area when the percpu_ref is killed.
      While seemingly convenient, this has the following niggles.
      
      * It's impossible to re-init a released reference counter without
        going through re-allocation.
      
      * In the similar vein, it's impossible to initialize a percpu_ref
        count with static percpu variables.
      
      * We need and have an explicit destructor anyway for failure paths -
        percpu_ref_cancel_init().
      
      This patch removes the automatic percpu counter freeing in
      percpu_ref_kill_rcu() and repurposes percpu_ref_cancel_init() into a
      generic destructor now named percpu_ref_exit().  percpu_ref_destroy()
      was considered but it gets confusing with percpu_ref_kill() while
      "exit" clearly indicates that it's the counterpart of
      percpu_ref_init().
      
      All percpu_ref_cancel_init() users are updated to invoke
      percpu_ref_exit() instead and explicit percpu_ref_exit() calls are
      added to the destruction path of all percpu_ref users.
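
      In other words, a hypothetical percpu_ref user now pairs init and exit
      explicitly (names are illustrative; signatures as of this patch):

        static void my_ref_release(struct percpu_ref *ref)
        {
                /* last reference dropped */
        }

        static int my_setup(struct my_ctx *ctx)
        {
                return percpu_ref_init(&ctx->ref, my_ref_release);
        }

        static void my_teardown(struct my_ctx *ctx)
        {
                percpu_ref_kill(&ctx->ref);
                /* ... once the ref has drained and the release callback ran ... */
                percpu_ref_exit(&ctx->ref);     /* frees the percpu counter */
        }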
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Kent Overstreet <kmo@daterainc.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Cc: Li Zefan <lizefan@huawei.com>
    • percpu-refcount, aio: use percpu_ref_cancel_init() in ioctx_alloc() · 55c6c814
      Tejun Heo authored
      
      ioctx_alloc() reaches inside percpu_ref and directly frees
      ->pcpu_count in its failure path, which is quite gross.  percpu_ref
      has been providing a proper interface to do this,
      percpu_ref_cancel_init(), for quite some time now.  Let's use that
      instead.
      
      This patch doesn't introduce any behavior changes.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Kent Overstreet <kmo@daterainc.com>
  10. Jun 24, 2014
    • aio: kill the misleading rcu read locks in ioctx_add_table() and kill_ioctx() · 855ef0de
      Oleg Nesterov authored
      
      ioctx_add_table() is the writer, so it does not need rcu_read_lock() to
      protect ->ioctx_table.  It relies on mm->ioctx_lock, and the rcu locks just
      add confusion.

      And it doesn't need rcu_dereference() for the same reason: it must see
      any updates previously done under the same ->ioctx_lock.  We could use
      rcu_dereference_protected(), but the patch uses rcu_dereference_raw() since
      the function is simple enough.
      
      The same applies to kill_ioctx(), although it does not update the pointer.
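
      Sketched, the pattern at both call sites is simply (simplified):

        spin_lock(&mm->ioctx_lock);
        /* already serialised by ->ioctx_lock, no rcu_read_lock() needed */
        table = rcu_dereference_raw(mm->ioctx_table);
        /* ... use (and, in ioctx_add_table(), update) the table ... */
        spin_unlock(&mm->ioctx_lock);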
      
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
    • aio: change exit_aio() to load mm->ioctx_table once and avoid rcu_read_lock() · 4b70ac5f
      Oleg Nesterov authored
      
      On 04/30, Benjamin LaHaise wrote:
      >
      > > -		ctx->mmap_size = 0;
      > > -
      > > -		kill_ioctx(mm, ctx, NULL);
      > > +		if (ctx) {
      > > +			ctx->mmap_size = 0;
      > > +			kill_ioctx(mm, ctx, NULL);
      > > +		}
      >
      > Rather than indenting and moving the two lines changing mmap_size and the
      > kill_ioctx() call, why not just do "if (!ctx) ... continue;"?  That reduces
      > the number of lines changed and avoid excessive indentation.
      
      OK. To me the code looks better/simpler with "if (ctx)", but this is subjective
      of course, I won't argue.
      
      The patch still removes the empty line between mmap_size = 0 and kill_ioctx();
      we reset mmap_size only for kill_ioctx(). But feel free to remove this change.
      
      -------------------------------------------------------------------------------
      Subject: [PATCH v3 1/2] aio: change exit_aio() to load mm->ioctx_table once and avoid rcu_read_lock()
      
      1. We can read ->ioctx_table only once and we do not need rcu_read_lock()
         or even rcu_dereference().

         This mm has no users, so nobody else can play with ->ioctx_table.  Otherwise
         the code is buggy anyway: if we needed rcu_read_lock() in a loop because
         ->ioctx_table could be updated, then kfree(table) would be obviously wrong.
      
      2. Update the comment.  "exit_mmap(mm) is coming" is a good reason to avoid
         munmap(), but another reason is that we simply can't do vm_munmap() unless
         current->mm == mm, and this is not true in general; the caller is mmput().

      3. We do not really need to nullify mm->ioctx_table before return; probably
         the current code does this to catch potential problems.  But in this
         case RCU_INIT_POINTER(NULL) looks better.
      
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
    • aio: fix kernel memory disclosure in io_getevents() introduced in v3.10 · edfbbf38
      Benjamin LaHaise authored
      
      A kernel memory disclosure was introduced in aio_read_events_ring() in v3.10
      by commit a31ad380.  The changes made to
      aio_read_events_ring() failed to correctly limit the index into
      ctx->ring_pages[], allowing an attacker to cause the subsequent kmap() of
      an arbitrary page with a copy_to_user() to copy the contents into userspace.
      This vulnerability has been assigned CVE-2014-0206.  Thanks to Mateusz and
      Petr for disclosing this issue.
      
      This patch applies to v3.12+.  A separate backport is needed for 3.10/3.11.
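
      The essence of the fix, sketched (the ring head and tail live in a page
      that userspace can write, so they must be clamped before being used to
      index ctx->ring_pages[]):

        head %= ctx->nr_events;
        tail %= ctx->nr_events;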
      
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Mateusz Guzik <mguzik@redhat.com>
      Cc: Petr Matousek <pmatouse@redhat.com>
      Cc: Kent Overstreet <kmo@daterainc.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: stable@vger.kernel.org
    • aio: fix aio request leak when events are reaped by userspace · f8567a38
      Benjamin LaHaise authored
      
      The aio cleanups and optimizations by kmo that were merged into the 3.10
      tree added a regression for userspace event reaping.  Specifically, the
      reference counts are not decremented if the event is reaped in userspace,
      leading to the application being unable to submit further aio requests.
      This patch applies to 3.12+.  A separate backport is required for 3.10/3.11.
      This issue was uncovered as part of CVE-2014-0206.
      
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: stable@vger.kernel.org
      Cc: Kent Overstreet <kmo@daterainc.com>
      Cc: Mateusz Guzik <mguzik@redhat.com>
      Cc: Petr Matousek <pmatouse@redhat.com>
  11. May 06, 2014
    • new methods: ->read_iter() and ->write_iter() · 293bc982
      Al Viro authored
      
      Beginning to introduce those.  Just the callers for now, and it's
      clumsier than it'll eventually become; once we finish converting
      aio_read and aio_write instances, the things will get nicer.
      
      For now, these guys are in parallel to ->aio_read() and ->aio_write();
      they take iocb and iov_iter, with everything in iov_iter already
      validated.  File offset is passed in iocb->ki_pos, iov/nr_segs -
      in iov_iter.
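
      For a hypothetical filesystem, wiring up the new methods looks roughly like
      this (prototypes only; "myfs" names are illustrative):

        /* the iov_iter arrives fully validated; the offset is in iocb->ki_pos */
        static ssize_t myfs_read_iter(struct kiocb *iocb, struct iov_iter *to);
        static ssize_t myfs_write_iter(struct kiocb *iocb, struct iov_iter *from);

        static const struct file_operations myfs_fops = {
                .read_iter      = myfs_read_iter,
                .write_iter     = myfs_write_iter,
        };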
      
      Main concerns in that series are stack footprint and ability to
      split the damn thing cleanly.
      
      [fix from Peter Ujfalusi <peter.ujfalusi@ti.com> folded]
      
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  12. May 01, 2014
  13. Apr 29, 2014
  14. Apr 22, 2014
  15. Apr 16, 2014
    • aio: block io_destroy() until all context requests are completed · e02ba72a
      Anatol Pomozov authored

      io_destroy() deletes the aio context and all resources related to it.  It
      makes sense that no IO operations connected to the context should be running
      after the context is destroyed.  Once we have removed the io_context we have
      no way to get request status or to call io_getevents().
      
      The man page for io_destroy() says that this function may block until
      all the context's requests are completed.  Before kernel 3.11, io_destroy()
      did indeed block, but since the aio refactoring in 3.11 this is no longer true.

      Here is pseudo-code that shows a testcase for the race condition discovered
      in 3.11:
      
        initialize io_context
        io_submit(read to buffer)
        io_destroy()
      
        // context is destroyed so we can free the resources
        free(buffers);
      
        // if the buffer is allocated by some other user, he'll be surprised
        // to learn that the buffer is still being filled by an outstanding
        // operation from the destroyed io_context
      
      The fix is straightforward: add a completion struct and wait on it
      in io_destroy(); complete() should be called when the number of in-flight
      requests reaches zero.

      If two or more io_destroy() calls are made for the same context simultaneously,
      then only the first one waits for IO completion; the behaviour of the other
      calls is undefined.
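
      A simplified sketch of the mechanism (the plumbing through kill_ioctx() and
      the request-completion path is abridged):

        /* io_destroy() side */
        struct completion wait;

        init_completion(&wait);
        kill_ioctx(current->mm, ioctx, &wait);  /* hand the completion to the kioctx */
        wait_for_completion(&wait);             /* block until in-flight reqs drain */

        /* completion side, when the number of in-flight requests hits zero */
        if (ctx->requests_done)
                complete(ctx->requests_done);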
      
      Tested: ran the http://pastebin.com/LrPsQ4RL testcase for several hours and
      did not see the race condition anymore.
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
  16. Mar 28, 2014
    • aio: v4 ensure access to ctx->ring_pages is correctly serialised for migration · fa8a53c3
      Benjamin LaHaise authored
      
      As reported by Tang Chen, Gu Zheng and Yasuaki Isimatsu, the following issues
      exist in the aio ring page migration support.
      
      As a result, for example, we have the following problem:
      
                  thread 1                      |              thread 2
                                                |
      aio_migratepage()                         |
       |-> take ctx->completion_lock            |
       |-> migrate_page_copy(new, old)          |
       |   *NOW*, ctx->ring_pages[idx] == old   |
                                                |
                                                |    *NOW*, ctx->ring_pages[idx] == old
                                                |    aio_read_events_ring()
                                                |     |-> ring = kmap_atomic(ctx->ring_pages[0])
                                                |     |-> ring->head = head;          *HERE, write to the old ring page*
                                                |     |-> kunmap_atomic(ring);
                                                |
       |-> ctx->ring_pages[idx] = new           |
       |   *BUT NOW*, the content of            |
       |    ring_pages[idx] is old.             |
       |-> release ctx->completion_lock         |
      
      As above, the new ring page will not be updated.
      
      Fix this issue, as well as prevent races in aio_ring_setup() by holding
      the ring_lock mutex during kioctx setup and page migration.  This avoids
      the overhead of taking another spinlock in aio_read_events_ring() as Tang's
      and Gu's original fix did, pushing the overhead into the migration code.
      
      Note that to handle the nesting of ring_lock inside of mmap_sem, the
      migratepage operation uses mutex_trylock().  Page migration is not a 100%
      critical operation in this case, so the occasional failure can be
      tolerated.  This issue was reported by Sasha Levin.
      
      Based on feedback from Linus, avoid the extra taking of ctx->completion_lock.
      Instead, make page migration fully serialised by mapping->private_lock, and
      have aio_free_ring() simply disconnect the kioctx from the mapping by calling
      put_aio_ring_file() before touching ctx->ring_pages[].  This simplifies the
      error handling logic in aio_migratepage(), and should improve robustness.
      
      v4: always do mutex_unlock() in cases when kioctx setup fails.
      
      Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Cc: stable@vger.kernel.org
  17. Dec 22, 2013
    • aio: clean up and fix aio_setup_ring page mapping · 3dc9acb6
      Linus Torvalds authored
      
      Since commit 36bc08cc ("fs/aio: Add support to aio ring pages
      migration") the aio ring setup code has used a special per-ring backing
      inode for the page allocations, rather than just using random anonymous
      pages.
      
      However, rather than remembering the pages as it allocated them, it
      would allocate the pages, insert them into the file mapping (dirty, so
      that they couldn't be free'd), and then forget about them.  And then to
      look them up again, it would mmap the mapping, and then use
      "get_user_pages()" to get back an array of the pages we just created.
      
      Now, not only is that incredibly inefficient, it also leaked all the
      pages if the mmap failed (which could happen due to excessive number of
      mappings, for example).
      
      So clean it all up, making it much more straightforward.  Also remove
      some left-overs of the previous (broken) mm_populate() usage that was
      removed in commit d6c355c7 ("aio: fix race in ring buffer page
      lookup introduced by page migration support") but left the pointless and
      now misleading MAP_POPULATE flag around.
      
      Tested-and-acked-by: Benjamin LaHaise <bcrl@kvack.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. Dec 21, 2013
    • aio/migratepages: make aio migrate pages sane · 8e321fef
      Benjamin LaHaise authored
      
      The arbitrary restriction on page counts offered by the core
      migrate_page_move_mapping() code results in rather suspicious looking
      fiddling with page reference counts in the aio_migratepage() operation.
      To fix this, make migrate_page_move_mapping() take an extra_count parameter
      that allows aio to tell the code about its own reference count on the page
      being migrated.
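
      Sketch of the updated helper prototype (the exact argument list may differ
      slightly from this):

        int migrate_page_move_mapping(struct address_space *mapping,
                                      struct page *newpage, struct page *page,
                                      struct buffer_head *head,
                                      enum migrate_mode mode, int extra_count);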
      
      While cleaning up aio_migratepage(), make it validate that the old page
      being passed in is actually what aio_migratepage() expects to prevent
      misbehaviour in the case of races.
      
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
    • aio: fix kioctx leak introduced by "aio: Fix a trinity splat" · 1881686f
      Benjamin LaHaise authored
      
      e34ecee2 reworked the percpu reference
      counting to correct a bug trinity found.  Unfortunately, the change led
      to kioctxes being leaked because there was no final reference count to
      put.  Add that reference count back in to fix things.
      
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: stable@vger.kernel.org
  19. Dec 06, 2013
  20. Nov 19, 2013
  21. Nov 13, 2013
  22. Nov 09, 2013
  23. Oct 11, 2013
    • aio: Fix a trinity splat · e34ecee2
      Kent Overstreet authored
      
      aio kiocb refcounting was broken - it was relying on keeping track of
      the number of available ring buffer entries, which it needs to do
      anyways; then at shutdown time it'd wait for completions to be delivered
      until the # of available ring buffer entries equalled what it was
      initialized to.
      
      The problem with that is that the ring buffer is mapped writable into
      userspace, so userspace could futz with the head and tail pointers to
      cause the kernel to see extra completions, and cause free_ioctx() to
      return while there were still outstanding kiocbs. Which would be bad.
      
      Fix is just to directly refcount the kiocbs - which is more
      straightforward, and with the new percpu refcounting code doesn't cost
      us any cacheline bouncing which was the whole point of the original
      scheme.
      
      Also clean up ioctx_alloc()'s error path and fix a bug where it wasn't
      subtracting from aio_nr if ioctx_add_table() failed.
      
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
  24. Sep 27, 2013
  25. Sep 09, 2013
    • aio: rcu_read_lock protection for new rcu_dereference calls · d9b2c871
      Artem Savkov authored
      
      Patch "aio: fix rcu sparse warnings introduced by ioctx table lookup patch"
      (77d30b14 in linux-next.git) introduced a
      couple of new rcu_dereference calls which are not protected by rcu_read_lock
      and result in the following warnings during syscall fuzzing (trinity):
      
      [  471.646379] ===============================
      [  471.649727] [ INFO: suspicious RCU usage. ]
      [  471.653919] 3.11.0-next-20130906+ #496 Not tainted
      [  471.657792] -------------------------------
      [  471.661235] fs/aio.c:503 suspicious rcu_dereference_check() usage!
      [  471.665968]
      [  471.665968] other info that might help us debug this:
      [  471.665968]
      [  471.672141]
      [  471.672141] rcu_scheduler_active = 1, debug_locks = 1
      [  471.677549] 1 lock held by trinity-child0/3774:
      [  471.681675]  #0:  (&(&mm->ioctx_lock)->rlock){+.+...}, at: [<c119ba1a>] SyS_io_setup+0x63a/0xc70
      [  471.688721]
      [  471.688721] stack backtrace:
      [  471.692488] CPU: 1 PID: 3774 Comm: trinity-child0 Not tainted 3.11.0-next-20130906+ #496
      [  471.698437] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
      [  471.703151]  00000000 00000000 c58bbf30 c18a814b de2234c0 c58bbf58 c10a4ec6 c1b0d824
      [  471.709544]  c1b0f60e 00000001 00000001 c1af61b0 00000000 cb670ac0 c3aca000 c58bbfac
      [  471.716251]  c119bc7c 00000002 00000001 00000000 c119b8dd 00000000 c10cf684 c58bbfb4
      [  471.722902] Call Trace:
      [  471.724859]  [<c18a814b>] dump_stack+0x4b/0x66
      [  471.728772]  [<c10a4ec6>] lockdep_rcu_suspicious+0xc6/0x100
      [  471.733716]  [<c119bc7c>] SyS_io_setup+0x89c/0xc70
      [  471.737806]  [<c119b8dd>] ? SyS_io_setup+0x4fd/0xc70
      [  471.741689]  [<c10cf684>] ? __audit_syscall_entry+0x94/0xe0
      [  471.746080]  [<c18b1fcc>] syscall_call+0x7/0xb
      [  471.749723]  [<c1080000>] ? task_fork_fair+0x240/0x260
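
      The fix is simply to wrap the new lookups in a read-side critical section,
      roughly:

        rcu_read_lock();
        table = rcu_dereference(mm->ioctx_table);
        /* ... look up the kioctx ... */
        rcu_read_unlock();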
      
      Signed-off-by: Artem Savkov <artem.savkov@gmail.com>
      Reviewed-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
    • aio: fix race in ring buffer page lookup introduced by page migration support · d6c355c7
      Benjamin LaHaise authored
      
      Prior to the introduction of page migration support in "fs/aio: Add support
      to aio ring pages migration" / 36bc08cc,
      mapping of the ring buffer pages was done via get_user_pages() while
      retaining mmap_sem held for write.  This avoided possible races with userland
      racing an munmap() or mremap().  The page migration patch, however, switched
      to using mm_populate() to prime the page mapping.  mm_populate() cannot be
      called with mmap_sem held.
      
      Instead of dropping the mmap_sem, revert to the old behaviour and simply
      drop the use of mm_populate() since get_user_pages() will cause the pages to
      get mapped anyways.  Thanks to Al Viro for spotting this issue.
      
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
  26. Aug 30, 2013
  27. Aug 07, 2013
  28. Aug 05, 2013
    • aio: fix error handling and rcu usage in "convert the ioctx list to table lookup v3" · da90382c
      Benjamin LaHaise authored
      
      In the patch "aio: convert the ioctx list to table lookup v3", incorrect
      handling in the ioctx_alloc() error path was introduced that led to an
      ioctx being added via ioctx_add_table() but also freed when the ioctx_alloc()
      call returned -EAGAIN due to hitting the aio_max_nr limit.  Fix this by
      only calling ioctx_add_table() as the last step in ioctx_alloc().
      
      Also, several unnecessary rcu_dereference() calls were added that led to
      RCU warnings where the system was already protected by a spin lock for
      accessing mm->ioctx_table.
      
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>