
Behavior of io.httpMaxConcurrency and how to monitor it

Question asked by StefanKlotz on Jul 11, 2018
Latest reply on Jul 20, 2018 by Dinesh.Kerai

Hi,

we regularly see increased response times across different services (so it is not a service-specific issue). We have already identified that the delay occurs in front of the Gateway, or at least before the policy is triggered. We therefore suspect it might be related to the io.httpMaxConcurrency setting, which would queue and thereby delay requests. But we don't know how to monitor this: how can we verify whether this is really the issue? Is there any CLI command to check the current value, and will there be log entries when the limit is reached?
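To illustrate what I mean by "delay in front of the service": if latency spreads out under constant concurrent load, that points to queuing before the request is actually processed. Here is a small self-contained Python sketch of that measurement technique. It uses a local single-threaded test server as a stand-in for a saturated listener pool (the sleep time and request count are arbitrary placeholders, not Gateway values):

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)  # simulate 50 ms of actual service work
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

class QueueingServer(http.server.HTTPServer):
    request_queue_size = 16  # let pending connections queue up

# Single-threaded server: concurrent requests wait in front of it,
# analogous to requests queuing in front of a saturated listener pool.
server = QueueingServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_get(_):
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start

# Fire 8 requests at once; only one can be served at a time.
with ThreadPoolExecutor(max_workers=8) as pool:
    latencies = sorted(pool.map(timed_get, range(8)))

print(f"fastest: {latencies[0]*1000:.0f} ms, slowest: {latencies[-1]*1000:.0f} ms")
```

A large spread between fastest and slowest request, while the service itself does constant work, is the signature of queuing before the request reaches the policy. That is exactly the pattern we see, which is why I suspect the concurrency settings.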

And what is the general behavior of io.httpMaxConcurrency in combination with io.httpCoreConcurrency?

- The CoreConcurrency HTTP listeners are always available once the Gateway is up and running, and additional listeners can be created up to MaxConcurrency, right?

- Will they be closed again once they are no longer required?

- How long does it take to create/close an additional Listener?

- Does the queue have any limit, and what happens when it is reached?

- How and when will queued requests be executed? Before any new requests?
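My current assumption (not something I have found documented for the Gateway, so please correct me) is that the listener pool follows the common core/max pattern with a FIFO queue, as in a standard Java thread pool. In that case, queued requests would be served in arrival order, ahead of anything that arrives later. A quick Python illustration of that FIFO semantics, using a one-worker pool as a stand-in for a fully busy listener pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

start_order = []

def handle(request_id):
    start_order.append(request_id)  # record the moment processing begins
    time.sleep(0.05)                # simulate request processing
    return request_id

# One worker stands in for a fully busy listener pool: everything
# beyond the running request waits in the executor's FIFO queue.
with ThreadPoolExecutor(max_workers=1) as pool:
    early = [pool.submit(handle, i) for i in range(3)]   # queued first
    time.sleep(0.01)
    late = [pool.submit(handle, i) for i in (3, 4)]      # arrive later
    for f in early + late:
        f.result()

print(start_order)  # [0, 1, 2, 3, 4]: queued requests run before newer arrivals
```

If the Gateway behaves like this, a full queue would mean every new request waits behind all currently queued ones, which would explain the uniform delay we see across services.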

 

I also found this article, but it doesn't explain why these values are required or what their intention is.

 

So any additional help, best practices, or documentation would be really helpful!

Thank you!

 

Ciao Stefan
