Reconsider default ip interface #485
This is a proposed enhancement to make the Cyclone experience better out of the box, not an intended fix for the (IMO defective) tests in ros2/ros2cli#482.
FYI: this also occurs with network interfaces like …
I fully support this goal. I also think it makes sense to give some background for the current behaviour:
The first is just me, really, and times have changed.
+1 to this. It took me some time to figure out what was going on when I first moved my robot over to CycloneDDS, and it is a very different experience from ROS 1. I think it's also quite common for robots to have multiple interfaces, but not so common for people testing in simulation. For reference, I posted on my blog about working through this, and it was the "underlying issue" that I was dealing with when I opened ros2/rmw_cyclonedds#216.
Hi @eboasson
Working on it: https://github.com/eboasson/cyclonedds/tree/multi-home-red-netw That's only halfway done, but with some caveats it does work. Feel free to try it, comment on it, or make suggestions. I don't know when exactly I will be able to return to it, but the very urgent situation that required me to drop everything is definitely improving, and so, hopefully, it won't be long.
Is the implementation supposed to be able to bind readers/writers to specific network interfaces so that they receive/transmit data through them? Is it advisable to implement this by creating different transports and then linking them to the corresponding Participants?
Hi @i-and
I think all I need to do for that to work is to allow unicast addresses (or network interface names) in the "network partition" configuration items and make sure to advertise all addresses that were listed; the rest should (...) then fall into place. I was planning to do some more work on it today, so I might as well see if I can try this out. There are some details to be aware of, though.

Firstly, the protocol works by the readers informing the writers which addresses they accept data on. I could restrict a writer to a subset of the interfaces, but what if a reader then comes along that only accepts data on interfaces outside that subset? One could even end up with some writers for which the intersection between those two sets of interfaces is empty and some for which it is not, all in the same participant. Ignoring the reader in such cases would result in the connectivity graph not matching the discovery data that's available to applications, and I think that would be a disaster for debugging. I'm inclined towards restricting only the readers, leaving it up to whoever is doing the system configuration to be careful and avoid unintended consequences.

Secondly, the protocol allows one to send sets of locators in the discovery information, but no-one has yet found a way of matching those addresses to interfaces other than by looking at the addresses and applying some heuristics, or by running an additional protocol to try to figure out the actual network topology. For now, I'm opting for the heuristics version, because that works well in practice on LANs, which is what most of these systems run on. That certainly covers the common case of a pair of IPv4 interfaces with configured addresses in different subnets, and for the vast majority of systems that I have seen, that is actually all that is needed. So I think that simple approach is a sensible starting point.
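To make the idea concrete, here is a hypothetical configuration fragment extending Cyclone's existing network-partition schema, with the `Address` attribute carrying a unicast address where today only multicast addresses are accepted. The partition name, address, and topic pattern are made-up examples, not a finished design:

```xml
<CycloneDDS>
  <Domain Id="any">
    <Partitioning>
      <NetworkPartitions>
        <!-- Hypothetical: a unicast address (or interface name) here,
             where the current schema requires a multicast address -->
        <NetworkPartition Name="red" Address="192.168.1.120"/>
      </NetworkPartitions>
      <PartitionMappings>
        <!-- Endpoints in DCPS partitions matching "safety.*" would then
             advertise and use only the "red" partition's addresses -->
        <PartitionMapping DCPSPartitionTopic="safety.*" NetworkPartition="red"/>
      </PartitionMappings>
    </Partitioning>
  </Domain>
</CycloneDDS>
```

Per the comment above, the restriction would apply to readers; writers would learn the allowed addresses from reader discovery data.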
That will always work, if those participants exist in independent DDSI network stacks. If you're happy with using two domain ids, no further complications arise. If you want them in the same DDS domain, then you have to manually create the "domain" entities (we do this trick in tests all the time, but it needs to be made a bit easier as far as the interface goes). But I do think being able to do this via those "network partitions" would be much nicer, especially if I am correct in thinking that it will all automagically work if I advertise the right addresses in the reader discovery information.
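As a sketch of the "two domain ids" route: each process gets its own Cyclone configuration pinning its interface, and a different domain id keeps the DDSI stacks independent. Option and element names follow the Cyclone configuration schema of that era; the interface names are examples:

```shell
# Sketch: pin this process to one interface via its own configuration.
# NetworkInterfaceAddress accepts an interface name or an IP address.
CYCLONEDDS_URI='<CycloneDDS><Domain><General><NetworkInterfaceAddress>eth0</NetworkInterfaceAddress></General></Domain></CycloneDDS>'
export CYCLONEDDS_URI
echo "$CYCLONEDDS_URI"

# A second process would use eth1 in its own CYCLONEDDS_URI and be
# started with a different domain id (e.g. ROS_DOMAIN_ID=1 under ROS 2),
# so the two never share a DDSI stack.
```

With both processes in the same domain id instead, the manual "domain entity" trick mentioned above would be needed.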
Further to my previous comment: I did get around to trying it out last Friday, and it worked (as expected, so with some quirks) once I removed some checks that required the addresses in network partitions to be multicast addresses. The branch on my fork now supports it (I think it will even do redundancy, using interfaces A+B for readers of one set of topics and interfaces C+D for another), but it isn't ready for prime time yet. (Besides that, the commits on that fork need a serious bit of cleaning up to turn this into something I can submit as a PR ... patience ... I'll get there.)
The configuration currently allows you to specify interfaces and priorities to your heart's content. I think the default case, where we select an interface that seems appropriate based on a heuristic, is preferable to just blasting over all available interfaces by default. We warn the user about what decision was made, and you are free to configure it otherwise, so unless there is a good argument for a different approach I consider this "solved".
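For reference, a sketch of what that configuration can look like with the multi-interface support. Element and attribute names follow the current Cyclone configuration schema as I understand it (`General/Interfaces/NetworkInterface`); the interface names are examples:

```xml
<CycloneDDS>
  <Domain Id="any">
    <General>
      <Interfaces>
        <!-- Use both interfaces; priority influences which one is
             preferred when a choice has to be made -->
        <NetworkInterface name="en0" priority="default" multicast="default"/>
        <NetworkInterface name="en1" priority="10"/>
      </Interfaces>
    </General>
  </Domain>
</CycloneDDS>
```

With no such configuration, Cyclone falls back to the heuristic interface selection described above and logs which interface it picked.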
Cyclone frequently emits warnings like "using network interface en0 (udp/192.168.1.120) selected arbitrarily from: en0, en1" to indicate that the network address is ambiguous. The reality is that a computer can be reachable at multiple addresses. A better default would be to publish all unicast addresses during discovery.