fork 29.5.6 and apply ipc topic stats fix #1

Open
jayyoung wants to merge 3 commits into 29_5_6_branch from fix_ipc_stats_29_5_6
Conversation

Copilot AI left a comment

Pull request overview

This PR enables subscription topic statistics collection for intra-process (IPC) message delivery by passing a type-erased statistics handler into the intra-process subscription implementation, avoiding a circular include with subscription_topic_statistics.hpp.

Changes:

  • Add a type-erased StatsHandlerFn to SubscriptionIntraProcess and invoke it during intra-process message execution.
  • Wire up Subscription to provide the stats handler when topic statistics are enabled.
  • Add logging/warning indicating message age stats are not accurate for intra-process delivery.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File / Description:

  • rclcpp/include/rclcpp/subscription.hpp: Creates and passes a type-erased stats handler into the intra-process subscription constructor.
  • rclcpp/include/rclcpp/experimental/subscription_intra_process.hpp: Adds StatsHandlerFn, stores it, and calls it during intra-process execution with a synthesized timestamp + warning.

Comment on lines 182 to +186

  rmw_message_info_t msg_info;
  msg_info.publisher_gid = {0, {0}};
  msg_info.from_intra_process = true;

  std::chrono::time_point<std::chrono::system_clock> now;

Comment on lines 18 to 23

  #include <rmw/types.h>

  #include <functional>
  #include <memory>
  #include <stdexcept>
  #include <string>

Comment on lines +167 to +171

  // Build a type-erased stats handler to pass into the IPC subscription.
  // This avoids pulling subscription_topic_statistics.hpp into the IPC
  // header where it causes circular include issues.
  typename SubscriptionIntraProcessT::StatsHandlerFn stats_handler = nullptr;
  if (subscription_topic_statistics) {

tonynajjar commented Mar 13, 2026

@jayyoung I added copilot review FYI, just because why not. I am not familiar with this part of rclcpp so if you want a deeper look from me let me know and I'll put aside some time.

I would recommend opening the PR upstream though, so we can get feedback from the maintainers.


jayyoung commented Mar 13, 2026

@jayyoung I added copilot review FYI, just because why not. I am not familiar with this part of rclcpp so if you want a deeper look from me let me know and I'll put aside some time.

Same. But the fix seems small enough, and the copilot review doesn't seem to flag anything major. The only thing I am concerned about is this, because it does have an implication:

  if (stats_handler_) {
    RCLCPP_WARN_ONCE(
      rclcpp::get_logger("rclcpp"),
      "Intra-process communication does not support accurate message age statistics");
    now = std::chrono::system_clock::now();
    const auto nanos = std::chrono::time_point_cast<std::chrono::nanoseconds>(now);
    msg_info.source_timestamp = nanos.time_since_epoch().count();
  }

If I'm following correctly, that means this:

  • Intra-process messages bypass the RMW layer, so there's no real source_timestamp from the publisher to grab onto, so we have to do the above.
  • The "fix" works around this by stamping source_timestamp = now() at receive time, making message_age always == 0
  • message_period stats are accurate for intra-process
  • message_age stats are NOT accurate for intra-process — always near zero, not a real latency measurement
  • I think this is unlikely to be a practical problem since intra-process delivery should inherently be near-zero latency anyway, right? (shared pointer, no serialization, no latency?)

So I think this is a valid fix. A better fix is possible, but this shouldn't be an issue for now, and we can push upstream for a fix that doesn't have this caveat.

Does that make sense @tonynajjar ?

@tonynajjar tonynajjar self-requested a review March 13, 2026 12:08
@tonynajjar tonynajjar left a comment

Makes sense, LGTM
