io_uring: have submission side sqe errors post a cqe [Linux 5.1]

This Linux kernel change, "io_uring: have submission side sqe errors post a cqe", is included in the Linux 5.1 release. It was authored by Jens Axboe <axboe@kernel.dk> on Tue Apr 30 10:16:07 2019 -0600. The commit in the Linux stable tree is 5c8b0b5.

io_uring: have submission side sqe errors post a cqe

Currently we only post a cqe if we get an error OUTSIDE of submission.
For submission, we return the error directly through io_uring_enter().
This is a bit awkward for applications, and it makes more sense to
always post a cqe with an error, if the error happens on behalf of an
sqe.

This changes submission behavior a bit. io_uring_enter() returns -ERROR
for an error, and > 0 for number of sqes submitted. Before this change,
if you wanted to submit 8 entries and had an error on the 5th entry,
io_uring_enter() would return 4 (for number of entries successfully
submitted) and rewind the sqring. The application would then have to
peek at the sqring and figure out what was wrong with the head sqe, and
then skip it itself. With this change, we'll return 5 since we did
consume 5 sqes, and the last sqe (with the error) will result in a cqe
being posted with the error.

This makes the logic easier to handle in the application, and it cleans
up the submission part.

Suggested-by: Stefan Bühler <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>

This change touches 34 lines of Linux source code (6 insertions, 28 deletions). The code changes are as follows.

 fs/io_uring.c | 34 ++++++----------------------------
 1 file changed, 6 insertions(+), 28 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 77b247b..0a894d7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1802,14 +1802,6 @@ static void io_commit_sqring(struct io_ring_ctx *ctx)
 }

 /*
- * Undo last io_get_sqring()
- */
-static void io_drop_sqring(struct io_ring_ctx *ctx)
-{
-   ctx->cached_sq_head--;
-}
-
-/*
  * Fetch an sqe, if one is available. Note that s->sqe will point to memory
  * that is mapped by userspace. This means that care needs to be taken to
  * ensure that reads are stable, as we cannot rely on userspace always
@@ -2018,7 +2010,7 @@ static int io_sq_thread(void *data)
 static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
 {
    struct io_submit_state state, *statep = NULL;
-   int i, ret = 0, submit = 0;
+   int i, submit = 0;

    if (to_submit > IO_PLUG_THRESHOLD) {
        io_submit_state_start(&state, ctx, to_submit);
@@ -2027,6 +2019,7 @@ static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)

    for (i = 0; i < to_submit; i++) {
        struct sqe_submit s;
+       int ret;

        if (!io_get_sqring(ctx, &s))
            break;
@@ -2034,21 +2027,18 @@ static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
        s.has_user = true;
        s.needs_lock = false;
        s.needs_fixed_file = false;
+       submit++;

        ret = io_submit_sqe(ctx, &s, statep);
-       if (ret) {
-           io_drop_sqring(ctx);
-           break;
-       }
-
-       submit++;
+       if (ret)
+           io_cqring_add_event(ctx, s.sqe->user_data, ret, 0);
    }
    io_commit_sqring(ctx);

    if (statep)
        io_submit_state_end(statep);

-   return submit ? submit : ret;
+   return submit;
 }

 static unsigned io_cqring_events(struct io_cq_ring *ring)
@@ -2779,24 +2769,12 @@ static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
        mutex_lock(&ctx->uring_lock);
        submitted = io_ring_submit(ctx, to_submit);
        mutex_unlock(&ctx->uring_lock);
-
-       if (submitted < 0)
-           goto out_ctx;
    }
    if (flags & IORING_ENTER_GETEVENTS) {
        unsigned nr_events = 0;

        min_complete = min(min_complete, ctx->cq_entries);

-       /*
-        * The application could have included the 'to_submit' count
-        * in how many events it wanted to wait for. If we failed to
-        * submit the desired count, we may need to adjust the number
-        * of events to poll/wait for.
-        */
-       if (submitted < to_submit)
-           min_complete = min_t(unsigned, submitted, min_complete);
-
        if (ctx->flags & IORING_SETUP_IOPOLL) {
            mutex_lock(&ctx->uring_lock);
            ret = io_iopoll_check(ctx, &nr_events, min_complete);
