Layer7 API Management


Behavior of io.httpMaxConcurrency and how to monitor it

  • 1.  Behavior of io.httpMaxConcurrency and how to monitor it

    Posted Jul 11, 2018 09:26 AM

    Hi,

we have the issue that we regularly see increased response times across different services (so it's not a service-specific issue). We could already identify that the delay is caused in front of the Gateway, or at least before the policy is triggered. Therefore we suspect it might be related to the io.httpMaxConcurrency setting, which queues and therefore delays requests. But we don't know how to monitor this, i.e. how can we verify whether this is really the issue? Is there any CLI command to check the current value, or will there be log entries when the limit is reached?

And what's the behavior in general of io.httpMaxConcurrency in combination with io.httpCoreConcurrency?

- The CoreConcurrency HTTP listeners are always available once the Gateway is up and running, and additional listeners can be created up to MaxConcurrency, right?

- Will they be closed again once they are no longer required?

    - How long does it take to create/close an additional Listener?

- Does the queue have any limit, and what happens when it is reached?

    - How and when will queued requests be executed? Before any new requests?

     

I also found this article, but it doesn't explain why these values are required or what their purpose is.

     

    So any additional help, best practice or documentation would be really helpful!

    Thank you!

     

    Ciao Stefan



  • 2.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Broadcom Employee
    Posted Jul 12, 2018 02:26 AM

    Hello Stefan,

     

    Have you checked the number of incoming connections when you're facing the performance issue?

    You can count it with the following command in the Privileged Shell:

     

ss -o state established \( sport = :8080 or sport = :8443 or sport = :9443 \) dst 0.0.0.0/0 | egrep -v Recv-Q | wc -l

     

If the number is much higher than the value of io.httpMaxConcurrency, a lot of requests are being queued within the Gateway.
That would be a starting point for your investigation.
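A lightweight way to correlate that count with the slow windows is to sample it on a schedule with timestamps. This is only a sketch: the port, the log path, and the sample data below are assumptions standing in for real collected values.

```shell
# Sample the established-connection count once a minute (e.g. from cron) and
# keep a timestamp, so peaks can be lined up with the slow-response windows:
#   * * * * * echo "$(date '+%F %T') $(ss state established \
#       '( sport = :8443 )' | tail -n +2 | wc -l)" >> /var/log/gw-conns.log
#
# Once collected, the highest sample is easy to pull out. The printf below
# stands in for a real log file of "date time count" lines:
printf '2018-07-12 09:00:00 380\n2018-07-12 09:03:00 1020\n2018-07-12 09:06:00 410\n' \
  | sort -k3 -n | tail -1
```

The last line prints the sample with the largest count, which is the peak you would compare against io.httpMaxConcurrency.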

    Concurrency on the gateway is explained in the following DocOps page:

     

    Concurrency Considerations in HTTP Load Balancing - CA API Gateway - 9.3 - CA Technologies Documentation 

     

Increased response times can be caused not only by concurrency on the Gateway but also by many other factors (network, back end, efficiency of policy logic, etc.).
    We have a knowledge document for troubleshooting (KB000042511).

    I hope it will be useful for further investigation.

     

    Best regards,
    Seiji



  • 3.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Posted Jul 12, 2018 08:12 AM

In policy code, ${request.listener.concurrency}, or in the audit sink, ${audit.var.request.listener.concurrency}, gives you the current concurrency on your listeners.

     

    We log this to an external system for each incoming message, and have historical graphs for each of our listeners.



  • 4.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Posted Jul 12, 2018 11:07 AM

    Hi Seiji and Dimitri,

    thank you both for your comments, it was quite helpful.

First, the ss command: I adjusted it to reflect our ports and I see an average of around 400, but roughly every 3 minutes we see peaks of more than 1000. Our httpMaxConcurrency is configured to the default of 750, so this could really be the issue.
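To quantify how often those peaks actually breach the limit, the collected samples can be checked against io.httpMaxConcurrency. A sketch, assuming one "timestamp count" sample per line and the default limit of 750; the printf data is illustrative:

```shell
# Report each sample above the configured limit, plus the share of samples
# affected. Replace the printf with your real log of ss counts.
printf '09:00 380\n09:03 1020\n09:06 410\n09:09 1100\n' \
  | awk -v limit=750 '
      $2 > limit { over++; print $1, "over limit:", $2 }
      END { printf "%d of %d samples above %d\n", over, NR, limit }'
```

If only a small fraction of samples exceed the limit but they coincide exactly with the slow windows, that is a strong hint that the queueing is the cause.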

But now the ${request.listener.concurrency} value: I started the Debugger for a policy and added this variable. But its value is almost always below 200, with peaks of around 250-300.

Is this really the correct value? Because if yes, this would mean that our limit of 750 is NOT being reached.

We increased httpMaxConcurrency to 1250 and the response time in our monitoring looks better. Is this CWP the only value we need to adjust, or is there another value that needs to be adjusted as well? I'm referring to c3p0DataSource.maxPoolSize=<number> from the previously mentioned article, where it's not explained why this value is required or what its purpose is.

    Do you have any further ideas or hints for this topic?

    Thank you!

     

    Ciao Stefan



  • 5.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Broadcom Employee
    Posted Jul 13, 2018 03:08 AM

Is there any possibility that many clients are making API calls on the same 3-minute cycle?
Another possibility is that API calls pile up to over 1000 because of long API latency.
Once responses are returned, the connection count would decrease.

     

Concurrency can be limited to lower than httpMaxConcurrency when the memory allocated to the API Gateway isn't sufficient.
It may also depend on other conditions (e.g. backend concurrency).

     

The suggestions in the article you referred to are probably meant to prevent policies from waiting on database access.
Even if your policies don't use the database explicitly, they still need database access, for example for writing to the audit log.

     

    Best regards,
    Seiji



  • 6.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Posted Jul 16, 2018 09:06 AM

    Hi Seiji,

yes, we are currently also checking whether this increase in TCP connections is coming from the client side or the backend side.

But I still have the feeling that the Gateway itself might be causing this delay (e.g. the Garbage Collection process).

    So is there any possibility to verify this?

    - Additional logging?

    - CLI-commands?

    - Which things/processes on the gateway could cause such a delay in general?

    Thank you!

     

    Ciao Stefan



  • 7.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Broadcom Employee
    Posted Jul 18, 2018 12:43 AM

    Hi Stefan,

     

    If you have any concerns about Garbage Collection, the logging for it can be enabled with the following settings:

    Configure Garbage Collection Logging - CA API Gateway - 9.2 - CA Technologies Documentation 
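Once GC logging is enabled, long stop-the-world pauses show up as large "real=" (wall-clock) times at the end of each event. A sketch for flagging them, assuming the classic HotSpot GC log format; the printf stands in for a real gc.log:

```shell
# Flag GC events whose wall-clock ("real") time exceeds one second.
# In practice replace the printf with: cat /path/to/gc.log
printf 'GC pause ... real=0.03 secs]\nFull GC ... real=1.72 secs]\n' \
  | grep -o 'real=[0-9.]*' \
  | awk -F= '$2 > 1.0 { print "GC pause over 1s:", $2, "secs" }'
```

If pauses of that length line up with the 3-minute peaks, GC would be a plausible contributor to the delay.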

     

In the Privileged Shell, you can use the standard Linux commands for checking process performance, such as "ps", "top", etc. These commands show you how busy the server is.
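For a quick snapshot, something like the following works in the Privileged Shell; the column list is just one reasonable choice:

```shell
# Top five processes by CPU, with memory share and elapsed runtime.
# On a Gateway node you'd expect the Java process hosting the Gateway
# near the top under load.
ps -eo pid,pcpu,pmem,etime,comm --sort=-pcpu | head -n 6
```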

     

I think there are a lot of possible causes for the delay, and they need to be ruled out one by one.

For example, if I divide all possible causes into two big categories, "server local" and "external conditions", I would check the usage of local resources such as memory and CPU cores first. In the case of a virtual appliance, it is easy to increase the memory assignment and/or the number of CPU cores when they look insufficient.


If the delay remains after increasing local resources, I would look into external conditions next. That may be more difficult, but we can take the same strategy: pick up each parameter and check its effect on the delay.

     

    Best regards,
    Seiji



  • 8.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Posted Jul 20, 2018 04:49 AM

    Hi Stefan

     

The API Gateway has always had limitations on monitoring gateway stats. You mentioned you increased httpMaxConcurrency from the default (750) to 1250. This should not be taken lightly, as it can have an effect on the overall performance of the gateway. If you increase this figure there are a few additional changes you will need to make; I have seen a knowledge article on this, or check with support. The Java heap size is one of them.

     

Back to monitoring. Once the gateway's concurrent connections are maxed out, I believe requests are not queued by the gateway but dropped. We currently monitor our gateway metrics by installing the JVMMetrisAssertion, creating a policy to output these metrics, and using an Autosys job to call this policy frequently. You can then feed the output into a logging tool to alert on or produce charts from. Example output:

     

    System Statistics: heapcommitted=1309.0MB, heapmax=14331.0MB, heapused=613.0MB, threadpool=500, threadpool.waiting=500, threadpool.runnable=0
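That metrics line lends itself to simple parsing before shipping it to a logging tool. A sketch pulling out one field from the example line above; reading threadpool.waiting as the idle-thread count (so a value trending toward zero under load suggests pool saturation) is my interpretation, not documented behavior:

```shell
# Extract threadpool.waiting from the example metrics line above.
line='System Statistics: heapcommitted=1309.0MB, heapmax=14331.0MB, heapused=613.0MB, threadpool=500, threadpool.waiting=500, threadpool.runnable=0'
waiting=$(echo "$line" | grep -o 'threadpool\.waiting=[0-9]*' | cut -d= -f2)
echo "threadpool.waiting=$waiting"
```

The same pattern works for the heap fields, so each value can be graphed separately.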

     

    Dinesh



  • 9.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Posted Jul 26, 2018 10:16 AM

    Hi Dinesh,

may I ask where you got this JVMMetrisAssertion from and how you install it?

    Is there any kind of documentation available for this?

    Thank you!

     

    Ciao Stefan



  • 10.  Re: Behavior of io.httpMaxConcurrency and how to monitor it

    Posted Aug 09, 2018 06:10 AM

    Hi Stefan,

     

Apologies for the late response. I believe CA provided us with this assertion. It is specific to the version of the gateway.

     

I believe it can be installed via the Policy Manager and the CLI. Documentation from CA is poor; they should be able to guide you through the installation.

     

    Kind Regards

     

    Dinesh