Description
I have encountered a situation where one ARP request causes the starvation of another ARP request.
My application creates two sockets: one connects to a reachable server on the local network, while the other attempts to connect to an unreachable server on the same network.
When the ARP request for the unreachable server is issued before the one for the reachable server, the second request is never sent. This occurs due to the rate limiting imposed by the neighbor cache (specifically, the SILENT_TIME constant in the Cache structure). The ARP request for the unreachable server is re-emitted once per second, and each re-emission restarts the silent window, so the ARP request for the reachable server is suppressed indefinitely.
The README states:
ARP requests are sent at a rate not exceeding one per second.
According to the README, is the behavior I described above expected, or is it a bug?
Additional consideration: each socket already has its own discovery silent time of 1 second (the DISCOVERY_SILENT_TIME constant in the Meta struct). Is it necessary to also rate-limit ARP requests originating from different sockets at the cache level? I assume this might be intended to prevent a malicious application from continuously creating and connecting sockets to flood the network with ARP packets.
Thanks in advance.