Last night I moved several VMs to a newly created VNet / subnet in order to get Reserved IPs working (i.e., to move them out of a network tied to affinity groups, etc.).
During the move, a few VMs randomly lost the ability to communicate with ANY other hosts on their local network (same VNet, same subnet). They could not ping those hosts (inbound ICMP was enabled on the destination hosts) or connect on any TCP port. Their ARP tables were empty apart from the localhost, broadcast, and gateway entries. Outbound connectivity to the Internet still worked on TCP ports (DNS was broken, since they could not reach the internal DNS servers).
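For concreteness, the checks I was running from an affected VM looked roughly like the sketch below (Python stdlib only; the IPs and ports are placeholders, not my actual addresses):

```python
# Rough sketch of the connectivity checks from an affected VM.
# 10.0.1.5 and the ports are placeholders -- substitute a neighbouring VM on
# the same subnet and a port you know is listening there.
import socket
import subprocess

def tcp_check(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Internal neighbour on the same VNet/subnet: failed on every port I tried.
print("internal:", tcp_check("10.0.1.5", 3389))

# Outbound to the Internet (by IP, since DNS was down): succeeded,
# so the NIC and TCP stack themselves were fine.
print("outbound:", tcp_check("93.184.216.34", 80))

# ARP cache: only local/broadcast/gateway entries, no neighbours.
print(subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout)
```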
I came across this thread that suggested this could be related to the Azure DNS settings. It does not make sense that DNS settings on the Azure side could affect IP connectivity; however, removing my DNS servers in Azure and rebooting the machines did fix the issue (at least for those VMs). It may simply be that modifying the network configuration at all was the catalyst that got the VMs communicating again.
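To be explicit about what "removing my DNS servers in Azure" means: I cleared the custom DNS server list on the VNet so the VMs fall back to Azure-provided name resolution, then rebooted them. I made the change through the portal on a classic VNet; as a rough sketch only, the equivalent change on a current ARM VNet could be scripted something like this (the resource group, VNet name, and SDK packages here are illustrative assumptions, not what I actually used):

```python
# Sketch: clear the custom DNS servers on a VNet so VMs fall back to
# Azure-provided DNS. Assumes the azure-identity and azure-mgmt-network
# packages; names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-resource-group"    # placeholder
vnet_name = "my-vnet"                   # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

vnet = client.virtual_networks.get(resource_group, vnet_name)
if vnet.dhcp_options:
    vnet.dhcp_options.dns_servers = []  # remove the custom DNS servers

# Push the updated VNet definition; affected VMs still need a reboot
# (or at least a DHCP lease renewal) to pick up the change.
client.virtual_networks.begin_create_or_update(resource_group, vnet_name, vnet).result()
```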
Today I deployed a new VM and it is experiencing the same problem: it has no INTERNAL network connectivity but can connect outbound to the Internet on TCP ports. I'm at a loss as to how to fix this permanently.