Free with the same allocator in rmw_destroy_node #355
Conversation
Since rmw_allocate() was used to allocate memory, we should use rmw_free() to clean up. Otherwise, if the user provided a custom allocator to the context, we will be calling deallocate with the wrong allocator. Signed-off-by: Jacob Perron <[email protected]>
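For illustration, here is a minimal C sketch of the pairing this change enforces (not the actual rmw_cyclonedds code; the helper names are made up): memory obtained with rmw_allocate() is returned with rmw_free(), rather than through whatever deallocate a user-supplied allocator would provide.

```c
#include <string.h>
#include <rmw/allocators.h>  /* rmw_allocate(), rmw_free() */

/* Hypothetical helper: copy a node name the way a create path might,
 * using the default RMW allocation function. */
static char * copy_name_with_rmw_allocate(const char * name)
{
  size_t len = strlen(name) + 1;
  char * copy = (char *) rmw_allocate(len);
  if (copy != NULL) {
    memcpy(copy, name, len);
  }
  return copy;
}

/* Matching cleanup: because the memory came from rmw_allocate(), it must go
 * back through rmw_free(). Releasing it with a different allocator's
 * deallocate is exactly the mismatch this PR fixes. */
static void destroy_copied_name(char * name)
{
  rmw_free(name);
}
```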
It looks like this is the only such case for name/namespace, so this is clearly a fix for a real problem. However, it seems to me that using context->options.allocator instead of rmw_allocate/rmw_free would arguably be the "better" way to allocate and free memory in the RMW layer. There are only a handful of cases, so if that is indeed the case, it probably makes more sense to change it everywhere.
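As a sketch only, the suggested alternative would look roughly like the following. The field names (context->options.allocator, the rcutils_allocator_t allocate/deallocate/state members) are taken from the public rmw and rcutils headers as I understand them, not from this repository's code.

```c
#include <string.h>
#include <rcutils/allocator.h>
#include <rmw/init.h>

/* Hypothetical: allocate through the allocator the user handed to the
 * context options, so the matching deallocate is unambiguous. */
static char * copy_name_with_context_allocator(
  const rmw_context_t * context, const char * name)
{
  const rcutils_allocator_t * allocator = &context->options.allocator;
  size_t len = strlen(name) + 1;
  char * copy = (char *) allocator->allocate(len, allocator->state);
  if (copy != NULL) {
    memcpy(copy, name, len);
  }
  return copy;
}

/* Hypothetical: free with the same allocator, passing its stored state. */
static void free_name_with_context_allocator(
  const rmw_context_t * context, char * name)
{
  const rcutils_allocator_t * allocator = &context->options.allocator;
  allocator->deallocate(name, allocator->state);
}
```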
I just went with a minimal change in this PR, but I agree that using the context allocator seems like the better option.
So, it's not trivial to use the context's allocator for everything. In some places, although we have access to the context during initialization, we don't have a reference to it during cleanup. E.g.
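The concrete example the comment pointed at isn't reproduced here; purely as an illustration of the asymmetry being described, these made-up signatures (not real rmw functions) show the shape of the problem:

```c
/* Hypothetical API shape: the init side receives the context and can reach
 * context->options.allocator, but the fini side only receives the object,
 * so the matching deallocate is not reachable during cleanup. */
rmw_ret_t example_thing_init(rmw_context_t * context, example_thing_t * thing);
rmw_ret_t example_thing_fini(example_thing_t * thing);  /* no context parameter */
```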
Here's a related issue upstream: ros2/rmw#260
@jacobperron given ros2/rmw#260, it probably makes the most sense to do your fix first. @ivanpauno do you agree?
Yes, I think it makes more sense to first merge a fix here.
CI looks good to me; I think all the failures are unrelated. @ivanpauno or @hidmic, do you agree? If so, I'll merge this PR.
Yes, they all look unrelated. This one looks interesting, though: https://ci.ros2.org/job/ci_osx/13347/testReport/junit/projectroot.src.core.ddsi/tests/CUnit_ddsi_locator_from_string_ipv6_invalid/.
Yes, that's weird. I have had some problems with those tests in the past because of platform differences in name resolution, but not for a long time. I sampled some of the nightly builds and I haven't seen it there, so I'm inclined to provisionally ascribe it to a name-lookup oddity.
Since rmw_allocate() was used to allocate memory, we should use rmw_free() to clean up.
Otherwise, if the user provided a custom allocator to the context, we will be calling deallocate with the wrong allocator.
I hit a runtime error due to this bug while I was using a custom allocator.
There may be other places in the code where we're making a similar mistake. I haven't audited the rest of the code, but I'll make sure to fix similar instances if I come across them.
This should be considered for backport to Galactic and Foxy!
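For context on how a custom allocator ends up attached to the context in the first place (the situation in which the runtime error mentioned above was hit), here is a rough sketch using the public rmw init APIs; treat the exact calls as an assumption drawn from the rmw headers, not code from this PR.

```c
#include <rcutils/allocator.h>
#include <rmw/init.h>
#include <rmw/init_options.h>

/* Initialize an rmw context whose options carry a user-supplied allocator.
 * After this, anything the middleware allocates on behalf of the context is
 * expected to come from (and be returned to) that allocator, which is why a
 * stray rmw_allocate()/custom-deallocate mismatch shows up at runtime. */
static rmw_ret_t init_with_custom_allocator(
  rcutils_allocator_t custom_allocator, rmw_context_t * context)
{
  rmw_init_options_t options = rmw_get_zero_initialized_init_options();
  rmw_ret_t ret = rmw_init_options_init(&options, custom_allocator);
  if (ret != RMW_RET_OK) {
    return ret;
  }
  return rmw_init(&options, context);
}
```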