Within the same Regional Virtual Network, some VMs cannot communicate with other VMs via internal IP. The issue is not limited to a particular virtual network; I am experiencing it in 3 different subscriptions. However, the communication issue did go away on one of my VNets yesterday after I left it alone overnight.
As far as I understand, all VMs within the same virtual network should be able to communicate with each other using internal IPs, with or without affinity groups, but this is not the case here. Can someone from MS take a look at this issue?
E.g., consider the following topology:
Regional Virtual Network: vnet (North Europe), dynamic gateway
Site-to-Site VPN: IKEv2
Affinity Group: AF1 (North Europe), AF2 (North Europe), etc.
Storage Account: storagevm1 (AF1), storagevm2 (AF2), storagevm3 (AF2), etc.
Cloud Service: cloudservice1 (AF1), cloudservice2 (AF2), etc.
Virtual Machine: VM1 (cloudservice1), VM2 (cloudservice2), VM3 (cloudservice2), etc.
Each VM uses its own storage account (one-to-one mapping, same affinity group), and the cloud services hold either a single VM or multiple VMs. The VMs/cloud services are created through PowerShell, because the portal UI doesn't allow me to choose both a virtual network and an affinity group (roughly as sketched below).
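For reference, this is roughly how each VM is created with the classic (service management) cmdlets; the subscription name, image name, credentials and subnet name below are placeholders rather than my exact values:

    # set the per-VM storage account first (one storage account per VM, same affinity group)
    Set-AzureSubscription -SubscriptionName "<subscription>" -CurrentStorageAccountName "storagevm1"

    # build the VM config and create it, specifying cloud service, affinity group and VNet together
    New-AzureVMConfig -Name "VM1" -InstanceSize "Small" -ImageName "<windows-image-name>" |
        Add-AzureProvisioningConfig -Windows -AdminUsername "<admin>" -Password "<password>" |
        Set-AzureSubnet -SubnetNames "<subnet-name>" |
        New-AzureVM -ServiceName "cloudservice1" -AffinityGroup "AF1" -VNetName "vnet"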
- the firewall allows ping (ICMP) on all Azure VMs
- on-premises servers/PCs can RDP and ping to all VMs in Azure
- all VMs in Azure can ping on-premises servers/PCs
- almost all VMs can't communicate with almost all other VMs, e.g. ping or nslookup against the DNS VMs fails (example commands after this list)
- a few VMs can communicate with a few other VMs, but not with all VMs, and this is not limited to the same affinity group
- rebooting inside a VM, restarting from the portal, and shutting down/starting from the portal don't solve the problem; they can even break communication between VMs that could talk to each other before
- the communication problem is not isolated/limited to a certain affinity group
- the communication problem is not limited to traffic between affinity groups
- the communication problem is not isolated/limited to a particular subnet.
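These are roughly the checks I run from one VM against another; the internal IPs and host name below are placeholders, not my actual addresses:

    # from VM1, test another VM and the DNS VM over internal IP (placeholder values)
    ping 10.0.1.5
    nslookup vm2.internal.example 10.0.1.4
    Test-Connection -ComputerName 10.0.1.5 -Count 4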