In our prod environment, we have 12 policy servers split evenly between 2 data centers. In the past we always defined all 12 policy servers in the HCO for our customers. Recently we have explored using clustering in the HCO by defining 2 HCOs: the first uses one datacenter as primary and the other as failover, and the second HCO is the reverse. We then split our customers between the two HCOs so that the load is distributed evenly across the datacenters.
When we tested this out, one of our customers reported getting random 500 errors in their app. We're wondering whether we now need to increase the max sockets per port (MaxSocketsPerPort in the HCO), since each customer's traffic is now actively hitting only 6 policy servers instead of 12.
We will test this theory out, but we're looking for input as to whether our hypothesis sounds logical.
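For what it's worth, here's the back-of-the-envelope math behind the hypothesis: halving the number of actively-serving policy servers halves the total socket capacity available to a customer's agents unless MaxSocketsPerPort is raised to compensate. The specific numbers below (MaxSocketsPerPort of 20, 3 ports per server) are illustrative assumptions, not values pulled from our environment, so plug in your own HCO settings.

```python
# Rough socket-capacity comparison: old flat HCO vs. new clustered HCO.
# All numbers below are illustrative assumptions -- substitute your own
# HCO values (MaxSocketsPerPort, number of ports per policy server).
TOTAL_SERVERS = 12            # all servers listed in the old flat HCO
ACTIVE_SERVERS = 6            # only the primary datacenter serves traffic
MAX_SOCKETS_PER_PORT = 20     # assumed HCO MaxSocketsPerPort value
PORTS_PER_SERVER = 3          # assumed ports exposed per policy server

# Peak concurrent connections an agent pool could spread across:
old_capacity = TOTAL_SERVERS * PORTS_PER_SERVER * MAX_SOCKETS_PER_PORT
new_capacity = ACTIVE_SERVERS * PORTS_PER_SERVER * MAX_SOCKETS_PER_PORT

print(old_capacity)  # capacity with all 12 servers active
print(new_capacity)  # capacity with 6 active servers -- exactly half
```

If the old layout was running anywhere near half its socket capacity at peak, the clustered layout would start refusing/queueing connections, which could plausibly surface as intermittent 500s.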
Thanks