Uday,
This really is best handled as a support case. It seems you've covered many of the typical scenarios, but there may still be something missing.
The stack traces you outline are really just timeouts to a back-end server; those would be considered fairly normal and wouldn't in themselves cause a Gateway restart.
Usually if the Gateway is restarting, it's because the Process Controller (the parent process, sspc_***.log) has tried to make a socket connection to the Gateway and has not gotten a response within a certain time frame, in which case it will force a restart. So the first step is confirming in its log that it is sspc that is logging something like "killing default" / "starting default" and is the one restarting the Gateway. Its check is similar to what you might do yourself by just hitting the ssg/ping page.
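For example, you can do roughly the same check yourself (assuming the default HTTPS listener port of 8443; adjust the host and port for your environment):

    curl -k https://<gateway-host>:8443/ssg/ping

If that hangs or times out while the Gateway is struggling, that is the same kind of unresponsiveness the Process Controller is reacting to.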
The next step is determining why. Usually the Gateway goes unresponsive either because it runs out of threads (too many connections, which is where the netstat you mention comes in) or because it runs out of memory (as you mention). Keep in mind that by default the Gateway's JVM only takes up to 1/2 or 2/3 of available memory, so you don't need to see the system at 100 percent memory usage for the JVM to be unable to allocate more. That makes how much memory is available on this VM very important. Hopefully it's much more than 2 GB, and if this is production and getting a lot of traffic it would definitely need to be more than 2 GB. A lot of the time (and this may be hard to do on an unexpected restart) it's best to collect the data described in the following tech doc just prior to the restart for your support case (a few example commands are sketched below the doc title):
Troubleshooting Gateway application failures, performance concerns, and service outages
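For the connection and memory side mentioned above, a few rough example commands (the 8443 port and the Java process filter are assumptions for a typical install; adjust them to your environment):

    # count established connections to the Gateway listener
    netstat -an | grep ':8443' | grep -c ESTABLISHED
    # overall memory available on the VM
    free -m
    # resident memory of the Gateway's Java process
    ps -eo pid,rss,args | grep '[j]ava'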
As well, gather a tar of the Gateway and Process Controller logs.
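Something along these lines should do it (the log directories are assumptions based on a typical default install; adjust the paths to match yours):

    tar czvf gateway-logs-$(hostname)-$(date +%Y%m%d).tar.gz \
        /opt/SecureSpan/Gateway/node/default/var/logs \
        /opt/SecureSpan/Controller/var/logs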
If you haven't opened an issue yet, please do, and if I can provide any further information please let me know.
Thanks....