Suppose an IP implementation adheres literally to the following algorithm on receipt of a packet, P, destined for IP address D:

if ({Ethernet address for D is in ARP cache})
    {send P}
else
    {send out an ARP query for D}
    {put P into a queue until the response comes back}
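To make the behavior concrete, here is a minimal Python sketch of the literal algorithm above; all names (arp_cache, pending, ip_send, and the stand-in send functions) are hypothetical, and the counter exists only to make the query traffic observable:

```python
from collections import deque

arp_cache = {}      # IP address -> Ethernet (MAC) address
pending = deque()   # packets queued while awaiting an ARP response
queries_sent = 0    # number of ARP queries broadcast so far

def send_arp_query(dest_ip):
    """Stand-in for broadcasting a real ARP request for dest_ip."""
    global queries_sent
    queries_sent += 1

def send_frame(mac, packet):
    """Stand-in for handing the frame to the Ethernet driver."""
    pass

def ip_send(packet, dest_ip):
    """Literal transcription of the algorithm in the exercise."""
    if dest_ip in arp_cache:
        send_frame(arp_cache[dest_ip], packet)
    else:
        send_arp_query(dest_ip)            # a query for every packet
        pending.append((dest_ip, packet))  # held until the response arrives

# A burst of packets for the same, not-yet-resolved destination D:
for i in range(5):
    ip_send(f"pkt{i}", "10.0.0.9")

print(queries_sent)  # one ARP query per queued packet
```

Running the burst makes the issue in part (a) easy to see: every packet to the unresolved destination triggers its own broadcast query, since nothing in the literal algorithm checks whether a query for D is already outstanding.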
(a) If the IP layer receives a burst of packets destined for D, how might this algorithm waste resources unnecessarily?
(b) Sketch an improved version.
(c) Suppose that, when the cache lookup fails, we simply drop P after sending out the query. How would this approach behave? (Some early ARP implementations allegedly did this.)