
core: msg_receive should yield #1835

Merged (1 commit) on Nov 6, 2014

Conversation

OlegHahm
Member

If a thread sends blocking, but the target thread is not currently in
receive mode, the sender gets queued. If it has a higher priority it
should run again as soon as the target goes into receiving mode.

@OlegHahm added the labels Process: API change, Discussion: RFC, Type: bug, and Area: core, and removed Process: API change, on Oct 17, 2014
@Kijewski
Contributor

The logic in this PR is incorrect.

Use this instead:

diff --git a/core/msg.c b/core/msg.c
index ec16e94..6a0a83a 100644
--- a/core/msg.c
+++ b/core/msg.c
@@ -310,12 +310,15 @@ static int _msg_receive(msg_t *m, int block)
         *m = *sender_msg;

         /* remove sender from queue */
+        int16_t other_prio = -1;
         if (sender->status != STATUS_REPLY_BLOCKED) {
             sender->wait_data = NULL;
             sched_set_status(sender, STATUS_PENDING);
+            other_prio = sender->priority;
         }

         eINT();
+        if (other_prio >= 0) {
+            sched_switch(other_prio);
+        }
         return 1;
     }

The thread will only yield if it's appropriate.

@OlegHahm
Member Author

I wouldn't call it incorrect, but less efficient, yes. I will switch to your proposal if there's a consensus that this change does not have any side effects and is wanted behavior.

@kaspar030
Contributor

Why is other_prio >= 0 sufficient as switching condition? Shouldn't there be other_prio < my_prio?

@Kijewski
Contributor

That's handled in sched_switch.

@OlegHahm
Member Author

With #1836 your patch would be mostly superfluous if I replace thread_yield() with the new yield function, right?

@Kijewski
Contributor

Yes, my PR handles this function.

@OlegHahm
Member Author

I think even with your PR it's less expensive to check the sender's priority beforehand. If I just call thread_yield_higher() instead, I save some conditional jumps and assignments, but at the cost of a software interrupt.

@Kijewski
Contributor

ACK

@OlegHahm
Member Author

@kaspar030, do you agree?

@OlegHahm
Member Author

OlegHahm commented Nov 5, 2014

@kaspar030, @LudwigOrtmann, @authmillenon, can I get some of you for a second ACK?

@kaspar030
Contributor

yep! ACK after squash.

@OlegHahm
Member Author

OlegHahm commented Nov 6, 2014

Squashed.

OlegHahm added a commit that referenced this pull request Nov 6, 2014
@OlegHahm OlegHahm merged commit 8dab420 into RIOT-OS:master Nov 6, 2014
@OlegHahm OlegHahm deleted the msg_receive_yield branch November 6, 2014 15:27