Hi,
I have set up a site-to-site VPN from an AWS VPC to Azure. This is working fine, and AWS instances can talk to the Azure VMs.
I have now set up an Azure internal load balancer (ILB), and this is where the problems start. The Azure VMs behind the ILB run a simple Flask app. From my AWS instances I can curl the app directly and get data back, but when I route through the ILB VIP the request sometimes hangs.
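For context, the backend app is nothing exotic; a minimal stand-in with the same shape (paths and payload here are hypothetical, and I am using the Python stdlib rather than Flask just to keep it self-contained) would be:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class StatusHandler(BaseHTTPRequestHandler):
    """Serves /status (what the ILB health probe hits) and /jobs/<id>/status."""

    def do_GET(self):
        if self.path == "/status" or self.path.startswith("/jobs/"):
            # Hypothetical payload; the real app returns a larger JSON document.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep demo output quiet


if __name__ == "__main__":
    # Bind to an ephemeral port so the demo is self-contained.
    server = HTTPServer(("127.0.0.1", 0), StatusHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/status") as resp:
        print(resp.status)  # same 200 the ILB health probe sees
    server.shutdown()
```

Both the health-probe path and the job-status path return 200 when hit directly, which matches what I see against the real backend.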
ILB VIP: 172.16.0.10
Request from an AWS instance, which hangs:
curl -Lv http://172.16.0.10:8081/jobs/6a319bb7ca354945ad05d0515c8e8a9c/status
* Trying 172.16.0.10...
* Connected to 172.16.0.10 (172.16.0.10) port 8081 (#0)
> GET /jobs/6a319bb7ca354945ad05d0515c8e8a9c/status HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 172.16.0.10:8081
> Accept: */*
The ILB health check returns HTTP 200, and I have removed the second machine from the ILB, so I know this backend node is the one serving traffic:
100.78.x.xx 0.042 168.63.xx.xx - - [26/Jun/2015:16:16:28 +0000] "GET /status HTTP/1.1" 200 51 "-" "Load Balancer Agent"
If I hit the node directly and bypass the ILB, I get a response back:
curl -Lv http://172.16.0.6:8081/jobs/6a319bb7ca354945ad05d0515c8e8a9c/status
* Hostname was NOT found in DNS cache
* Trying 172.16.0.6...
* Connected to 172.16.0.6 (172.16.0.6) port 8081 (#0)
> GET /jobs/6a319bb7ca354945ad05d0515c8e8a9c/status HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 172.16.0.6:8081
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx is not blacklisted
< Server: nginx
< Date: Fri, 26 Jun 2015 16:18:49 GMT
< Content-Type: application/json
< Content-Length: 26624
< Connection: keep-alive
< Cache-Control: private, max-age=0, no-cache, no-store
< Strict-Transport-Security: max-age=15768000
< X-Host: 34A8CA5D-0C68-F440-92A6-1F2DC0FF0438
<
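To narrow down where the hang happens, I can reproduce the curl check with a short socket probe (a sketch; the hosts, port, and timeout below are placeholders from this post) that separates "TCP connect fails" from "connect succeeds but the response never arrives", which is exactly what the hanging request through the VIP looks like:

```python
import http.client
import socket


def probe(host, port, path, timeout=5):
    """Return a short verdict: connect failure, HTTP status, or read timeout."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        # http.client connects lazily; request() performs the TCP connect.
        conn.request("GET", path)
    except OSError:
        return "connect failed"
    try:
        resp = conn.getresponse()
        return f"HTTP {resp.status}"
    except socket.timeout:
        # Handshake and request send succeeded, but no bytes came back.
        return "connected, but no response within timeout"
    finally:
        conn.close()


if __name__ == "__main__":
    # Self-contained demo: a socket that accepts the TCP handshake but never
    # answers, mimicking the hang I see through the ILB VIP.
    silent = socket.socket()
    silent.bind(("127.0.0.1", 0))
    silent.listen(1)
    host, port = silent.getsockname()
    print(probe(host, port, "/", timeout=1))
    silent.close()
```

Against the real endpoints I would call `probe("172.16.0.10", 8081, ...)` and `probe("172.16.0.6", 8081, ...)` and compare verdicts; in my case the direct node returns `HTTP 200` while the VIP hits the read timeout.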
Does anyone have any advice on what I can check to try to resolve this problem? Thanks.
Please let me know if you need any further information.