Fixes for intra-process actions #144
Conversation
```cpp
void store_ipc_action_feedback(FeedbackSharedPtr feedback)
{
  feedback_buffer_->add(std::move(feedback));
  gc_.trigger();
  is_feedback_ready_ = true;
}
```
Why were these removed?
Setting the flag here was breaking the SingleThreadedExecutor, which already sets the is_*_ready_ flags in the is_ready() API. For the EventsExecutor, the flags are set in the take_data_by_entity_id() API.
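For context, here is a minimal sketch of the two code paths being contrasted; this is a sketch under assumed signatures (the class, the bodies, and the feedback event id are illustrative stand-ins, not the actual rclcpp implementation):

```cpp
#include <cstddef>
#include <memory>

class ActionClientSketch
{
public:
  // SingleThreadedExecutor path: the is_*_ready_ flags are computed when the
  // executor polls the waitable for readiness.
  bool is_ready()
  {
    is_feedback_ready_ = feedback_buffer_has_data();
    return is_feedback_ready_;
  }

  // EventsExecutor path: the flag is set when the event's data is taken,
  // identified by the entity (event type) id.
  std::shared_ptr<void> take_data_by_entity_id(std::size_t id)
  {
    if (id == kFeedbackEventId) {  // stand-in for the real event-type check
      is_feedback_ready_ = true;
    }
    return take_data();
  }

private:
  static constexpr std::size_t kFeedbackEventId = 3;  // illustrative value
  bool is_feedback_ready_ = false;
  bool feedback_buffer_has_data();    // stand-in for the ring-buffer check
  std::shared_ptr<void> take_data();  // stand-in for the real take
};
```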
rclcpp/include/rclcpp/experimental/action_client_intra_process.hpp
Some comments about how things work; I'll try to make it simple and not very long (see the new flowchart in a comment below). We have 5 types of Action Client events.
For every individual goal sent we have:
Since a client can make multiple requests, we need storage for all the individual goal IDs, event types, and their callbacks. I created a structure to hold all the info and data needed to process the different events, mapped with their respective goal ID. So we have in the map:
Besides this map, we have all the IPC ring-buffers to hold the responses from the server. They look like:
So when we extract an element (response) from the ring buffer, we get the information needed to match it back to its goal entry in the map.
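As a rough illustration of the storage described above, the map could look like the sketch below; every type and member name here is an illustrative placeholder, not the actual rclcpp declaration:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

// One plausible shape for the five client event types mentioned above
// (illustrative; the exact enumerators live in the experimental headers).
enum class EventType : std::size_t
{
  GoalResponse,
  CancelResponse,
  ResultResponse,
  FeedbackReady,
  StatusReady,
};

// Placeholder message types standing in for the real action responses.
struct GoalResponse;
struct Feedback;
struct ResultResponse;

using GoalUUID = std::array<uint8_t, 16>;

// Simple hash so a goal UUID can key an unordered_map.
struct UUIDHash
{
  std::size_t operator()(const GoalUUID & id) const
  {
    std::size_t h = 0;
    for (uint8_t b : id) {
      h = h * 31 + b;
    }
    return h;
  }
};

// Per-goal bundle: the callbacks and data needed to process each event type.
struct GoalEventsInfo
{
  std::function<void(std::shared_ptr<GoalResponse>)> goal_response_callback;
  std::function<void(std::shared_ptr<Feedback>)> feedback_callback;
  std::function<void(std::shared_ptr<ResultResponse>)> result_callback;
};

// The map described above: one entry per in-flight goal, keyed by goal ID.
std::unordered_map<GoalUUID, GoalEventsInfo, UUIDHash> goals_info_;
```

With a layout like this, an element extracted from a ring buffer can be matched back to its goal entry by goal ID, and the callback for its event type invoked.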
Could you elaborate on which threads are involved and what each is doing for a client/server interaction?
The threads involved are:
```cpp
  ipm->remove_action_server(ipc_action_server_id_);
}

protected:
  // Intra-process version of execute_goal_request_received_
  // Missing: Deep comparison of functionality between IPC on/off
  void
  ipc_execute_goal_request_received(GoalRequestDataPairSharedPtr data)
```
In this ipc function I still see calls to rcl, such as rcl_action_get_zero_initialized_goal_info(). What is the goal_info? Why is it relevant or necessary to call into rcl when going through ipc?
In this case, the rcl_action_goal_info_t goal_info is just used to obtain the rcl_goal_handle, which is then used to update the goal state. The "bookkeeping" of the goal state is still performed in rcl.
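As a minimal sketch of that pattern: the rcl_action_* calls below are real rcl APIs, but the wrapper function and its variables are illustrative, and the actual code in this PR does more than this:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

#include <rcl_action/rcl_action.h>

// Sketch: goal_info only carries the goal ID; rcl uses it to look up / create
// the goal handle, and the handle drives the goal state machine.
rcl_action_goal_handle_t * accept_and_execute(
  rcl_action_server_t * action_server, const std::array<uint8_t, 16> & uuid)
{
  rcl_action_goal_info_t goal_info = rcl_action_get_zero_initialized_goal_info();
  std::copy(uuid.begin(), uuid.end(), goal_info.goal_id.uuid);

  // rcl owns the handle and the goal-state bookkeeping, whatever the transport.
  rcl_action_goal_handle_t * handle =
    rcl_action_accept_new_goal(action_server, &goal_info);
  if (handle) {
    (void)rcl_action_update_goal_state(handle, GOAL_EVENT_EXECUTE);
  }
  return handle;
}
```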
"intra_process_action_send_cancel_response called " | ||
" after destruction of intra process manager"); | ||
} | ||
auto ipm = lock_intra_process_manager(); | ||
|
||
// Convert c++ message to C message | ||
rcl_action_cancel_request_t cancel_request = rcl_action_get_zero_initialized_cancel_request(); |
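For reference, the C++-to-C conversion that comment refers to could look roughly like this; the field names follow action_msgs/srv/CancelGoal, but the helper function itself is an illustrative sketch:

```cpp
#include <algorithm>

#include <action_msgs/srv/cancel_goal.hpp>
#include <rcl_action/rcl_action.h>

// Illustrative helper: copy the C++ cancel request into the C struct that
// the rcl_action layer works with.
rcl_action_cancel_request_t
to_c_cancel_request(const action_msgs::srv::CancelGoal::Request & request)
{
  rcl_action_cancel_request_t c_request =
    rcl_action_get_zero_initialized_cancel_request();
  std::copy(
    request.goal_info.goal_id.uuid.begin(),
    request.goal_info.goal_id.uuid.end(),
    c_request.goal_info.goal_id.uuid);
  c_request.goal_info.stamp.sec = request.goal_info.stamp.sec;
  c_request.goal_info.stamp.nanosec = request.goal_info.stamp.nanosec;
  return c_request;
}
```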
Similar question to my other comment, but why do we need to get cancel_request from the rcl layer while doing ipc? I know this PR is built on top of previous work, but I am missing the rationale for the interaction with the rcl layer.
Only the communication (send request / responses / etc.) goes through intra-process. The rest of the logic still lives in rcl; that is, we still use the rcl_handle, which controls the goal state, etc.

In summary, all the rcl_action_send_* functions have their parallel intra_process_action_send_* versions, but the rest of the code is common.
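A sketch of that split, assuming a hypothetical dispatch helper; only rcl_action_send_goal_response() is a real rcl API here, while the IPC counterpart and the flag are illustrative stand-ins for the pattern described:

```cpp
#include <rcl_action/rcl_action.h>

// Illustrative IPC counterpart of rcl_action_send_goal_response().
bool intra_process_action_send_goal_response(void * goal_response);

// Sketch: a response goes through IPC when the goal request arrived via
// intra-process, and through the regular rcl_action path otherwise; the
// goal-state bookkeeping is shared between both paths.
void send_goal_response(
  bool goal_received_via_ipc,
  rcl_action_server_t * action_server,
  rmw_request_id_t * request_header,
  void * goal_response)
{
  if (goal_received_via_ipc) {
    if (!intra_process_action_send_goal_response(goal_response)) {
      // handle the error (the IPC send's return value must be checked)
    }
  } else {
    rcl_ret_t ret = rcl_action_send_goal_response(
      action_server, request_header, goal_response);
    if (RCL_RET_OK != ret) {
      // handle / propagate the error
    }
  }
}
```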
* Fixes for intra-process Actions
* Fixes for Clang builds
* Fix deadlock
* Server to store results until client requests them
* Fix feedback/result data race (see ros2#2451)
* Add missing mutex
* Check return value of intra_process_action_send

---------
Co-authored-by: Mauro Passerino <mpasserino@irobot.com>
* Fixes for intra-process actions (#144)
  * Fixes for intra-process Actions
  * Fixes for Clang builds
  * Fix deadlock
  * Server to store results until client requests them
  * Fix feedback/result data race (see ros2#2451)
  * Add missing mutex
  * Check return value of intra_process_action_send

  Co-authored-by: Mauro Passerino <mpasserino@irobot.com>

* Fix IPC Actions data race (#147)
  * Check if goal was sent through IPC before sending responses
  * Add intra_process_action_server_is_available API to intra-process Client

  Co-authored-by: Mauro Passerino <mpasserino@irobot.com>

* Fix data race in Actions: Part 2 (#148)
  * Fix data race in Actions: Part 2
  * Fix warning - copy elision

  Co-authored-by: Mauro Passerino <mpasserino@irobot.com>

* fix: Fixed race condition in action server between is_ready and take"… (ros2#2531)
  * fix: Fixed race condition in action server between is_ready and take" (ros2#2495)

    Some background information: is_ready, take_data and execute data may be called from different threads in any order. The code in the old state expected them to be called in series, without interruption. This led to multiple race conditions, as the state of the pimpl objects was altered by the three functions in a non-thread-safe way.

    Co-authored-by: William Woodall <william@osrfoundation.org>
    Signed-off-by: Janosch Machowinski <J.Machowinski@cellumation.com>

  * fix: added workaround for double calls to take_data

    This adds a workaround for a known bug in the executor in Iron.

    Signed-off-by: Janosch Machowinski <J.Machowinski@cellumation.com>

Signed-off-by: Janosch Machowinski <J.Machowinski@cellumation.com>
Co-authored-by: Mauro Passerino <mpasserino@irobot.com>
Co-authored-by: jmachowinski <jmachowinski@users.noreply.github.com>
Co-authored-by: Janosch Machowinski <J.Machowinski@cellumation.com>
Co-authored-by: William Woodall <william@osrfoundation.org>
Fixes for intra-process actions:
The following (simplified) flowchart represents what happens when the Action Client sends a goal request to the Action Server, until the server accepts and responds to the client:
The process is almost exactly the same for:
and for cancel:
In the following chart I show part of action_client->async_get_result, focusing on the server logic, which sends the result only if the client has requested it:
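A minimal sketch of that server-side logic, with illustrative names and containers; the "store results until the client requests them" behavior is the one listed in the commit messages above, and the mutex mirrors the "Add missing mutex" fix:

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <set>
#include <utility>

// Illustrative types standing in for the real ones.
using GoalUUID = std::array<uint8_t, 16>;
struct ResultResponse;
using ResultSharedPtr = std::shared_ptr<ResultResponse>;

// Illustrative stand-in for the intra_process_action_send_* result path.
void send_result_response(const GoalUUID & goal_id, ResultSharedPtr result);

std::mutex results_mutex_;
std::set<GoalUUID> result_requested_;                 // goals whose result was requested
std::map<GoalUUID, ResultSharedPtr> stored_results_;  // results waiting for a request

// Goal terminated: send the result only if the client already requested it;
// otherwise store it until the result request arrives.
void on_goal_terminated(const GoalUUID & goal_id, ResultSharedPtr result)
{
  std::lock_guard<std::mutex> lock(results_mutex_);
  if (result_requested_.count(goal_id) != 0) {
    send_result_response(goal_id, std::move(result));
  } else {
    stored_results_[goal_id] = std::move(result);
  }
}

// Result request arrived: send a stored result, or remember the request.
void on_result_request(const GoalUUID & goal_id)
{
  std::lock_guard<std::mutex> lock(results_mutex_);
  auto it = stored_results_.find(goal_id);
  if (it != stored_results_.end()) {
    send_result_response(goal_id, std::move(it->second));
    stored_results_.erase(it);
  } else {
    result_requested_.insert(goal_id);
  }
}
```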