I have some secondary hubs that communicate over a radio (satellite) link, where the minimum latency is 500 ms.
We tried several adjustments:
- Increased the timeout on the tunnel hub, following the guidelines in the documentation;
- Enabled the option to ignore IP verification, because traffic from these hubs arrives through a single NAT address (via the Internet);
- Changed the firewall rule to a higher timeout;
None of this stopped the tunnel from dropping.
During queue failures, I completely lose access to the secondary hub through UIM Manager, but I can still reach the server via an RDS connection over the Internet.
Note: we tried using POST instead of GET/ATTACH, but with POST we lose the queue alarm, so if the hub is unavailable we have no way of knowing whether the client is up or down.
Keep in mind that the tunnel client connects to the tunnel server over the Internet, not over VPN.
During this process, we noticed that there is VPN connectivity from another location, so we deployed a net_connect probe there to monitor ping (latency) and telnet on port 48002 (the Nimsoft hub port).
Even though the latency is high (500 ms), there are no drops: the pings respond continuously, and the same is true of telnet. In other words, there is no communication failure.
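As a complement to the net_connect probe, the same check (TCP reachability of the hub port plus a rough connect-time measurement) can be sketched in a few lines of Python. This is only an illustrative monitoring snippet, not part of UIM itself; the host address is a placeholder and port 48002 is the hub port mentioned above.

```python
import socket
import time

def check_hub_port(host, port, timeout=2.0):
    """Attempt a TCP connection to the hub port.

    Returns the connect time in seconds, or None if the connection
    fails (port closed, filtered, or timed out).
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

if __name__ == "__main__":
    # Hypothetical address: replace with the secondary hub's real IP.
    rtt = check_hub_port("192.0.2.10", 48002)
    if rtt is None:
        print("hub port unreachable")
    else:
        print("TCP connect took %.0f ms" % (rtt * 1000))
```

Run periodically (for example from cron), this gives an independent log of whether the hub port itself ever becomes unreachable, which helps separate a true network break from a protocol-level failure.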
My conclusion is that the tunnel is not actually breaking; rather, the very high latency must be causing some failure while alarms and QoS messages are being sent.
I want to know whether there is a maximum supported network latency, so I can confirm that this is the root cause.
Interestingly, I have already run hubs over 3G modems and they worked perfectly, but this scenario is causing me many problems.