Layer7 API Management

  • 1.  Native MQ Performance

    Posted Feb 14, 2017 03:09 AM

    Hi, 

     

    Our client's main service is fronted by IBM Native MQ. However, during load testing we observe that throughput plateaus at around 100 TPS. Investigating further, we saw that "Back-end response time" stays steady at approximately 5-10 ms, while "Front-end response time" keeps increasing even though neither the Gateway nor MQ is heavily loaded.

     

    Thread dumps on the Gateway show a blocking thread:

     

       java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait(Native Method)
            at java.lang.Object.wait(Object.java:502)
            at com.ibm.mq.jmqi.remote.util.ReentrantMutex.acquire(ReentrantMutex.java:167)
            - locked <0x0000000449822ce8> (a com.ibm.mq.jmqi.remote.api.RemoteHconn$CallLock)
            at com.ibm.mq.jmqi.remote.util.ReentrantMutex.acquire(ReentrantMutex.java:73)
            - locked <0x0000000449822ce8> (a com.ibm.mq.jmqi.remote.api.RemoteHconn$CallLock)
            at com.ibm.mq.jmqi.remote.api.RemoteHconn.enterCall(RemoteHconn.java:2121)
            at com.ibm.mq.jmqi.remote.api.RemoteHconn.enterCall(RemoteHconn.java:2080)
            at com.ibm.mq.jmqi.remote.api.RemoteHconn.enterCall(RemoteHconn.java:2049)
            at com.ibm.mq.jmqi.remote.api.RemoteFAP.spiOpen(RemoteFAP.java:5948)
            at com.ibm.mq.jmqi.remote.api.RemoteFAP.spiOpen(RemoteFAP.java:5822)
            at com.ibm.mq.ese.jmqi.InterceptedJmqiImpl.spiOpen(InterceptedJmqiImpl.java:548)
            at com.ibm.mq.ese.jmqi.ESEJMQI.spiOpen(ESEJMQI.java:847)
            at com.ibm.mq.MQDestination.open(MQDestination.java:312)
            at com.ibm.mq.MQQueue.<init>(MQQueue.java:236)
            at com.ibm.mq.MQQueueManager.accessQueue(MQQueueManager.java:2674)

     

    Meanwhile, many threads are blocked waiting on the thread above:

     

       java.lang.Thread.State: BLOCKED (on object monitor)
            at com.ibm.mq.MQQueueManager.accessQueue(MQQueueManager.java:2696)
            - waiting to lock <0x0000000449821390> (a com.ibm.mq.MQQueueManager)
            at com.l7tech.external.assertions.mqnative.server.ae.doWork(Unknown Source)
            at com.l7tech.external.assertions.mqnative.server.MqNativeResourceManager.a(Unknown Source)
            at com.l7tech.external.assertions.mqnative.server.ServerMqNativeRoutingAssertion.checkRequest(Unknown Source)
            at com.l7tech.server.policy.assertion.composite.ServerCompositeAssertion.iterateChildren(Unknown Source)
            at com.l7tech.server.policy.assertion.composite.ServerOneOrMoreAssertion.checkRequest(Unknown Source)
            at com.l7tech.server.policy.assertion.composite.ServerCompositeAssertion.iterateChildren(Unknown Source)
            at com.l7tech.server.policy.assertion.composite.ServerAllAssertion.checkRequest(Unknown Source)
            at com.l7tech.server.policy.ServerPolicy.checkRequest(Unknown Source)
            at com.l7tech.server.policy.w.call(Unknown Source)
            at com.l7tech.server.policy.w.call(Unknown Source)

     

    We looked through the documentation and tried setting io.mqConnectionCacheMaxSize and mq.listenerThreadLimit, but neither makes any difference. Any tips would be appreciated.
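The numbers in the post are at least consistent with calls being serialized on a single shared connection: the first dump shows a thread holding a `RemoteHconn$CallLock` while the second shows others waiting to lock the same `MQQueueManager`. A rough back-of-the-envelope model (purely illustrative, not Layer7 code; the class and method names here are made up) shows why a strictly serial connection caps out near the observed figure:

```java
// Hypothetical model: if every outbound MQ call must hold one shared
// connection lock for the full round trip, throughput is capped at
// roughly 1 / per-call latency, no matter how idle the CPUs are.
public class SerializedMqModel {
    /** Maximum TPS a single strictly-serial connection can sustain. */
    static double maxTpsForSingleConnection(double perCallLatencyMs) {
        return 1000.0 / perCallLatencyMs; // calls cannot overlap
    }

    public static void main(String[] args) {
        // Observed back-end latency of ~10 ms per call
        double cap = maxTpsForSingleConnection(10.0);
        System.out.println("Single-connection cap: " + cap + " TPS");   // 100.0 TPS
        // N independent connections would scale the cap by N
        System.out.println("With 4 connections: " + 4 * cap + " TPS");  // 400.0 TPS
    }
}
```

At 10 ms per call this gives exactly the ~100 TPS plateau reported, which is why adding a second Gateway node doubles combined throughput while each node stays stuck at 100.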

    Attachment(s): thread_mq2.zip (401 KB)


  • 2.  Re: Native MQ Performance

    Broadcom Employee
    Posted Feb 14, 2017 05:25 PM

    Hello thanawan.sachdev,

    Just a guess, since the threads are waiting for MQ, is there some kind of max connection limit configured on MQ?

     

    Regards,

    Mark



  • 3.  Re: Native MQ Performance

    Posted Feb 14, 2017 11:16 PM

    Hi Mark, 

     

    We checked further and there is no bottleneck on the MQ side. In fact, when we use two CA nodes to submit requests to the same MQ endpoints, we achieve the same throughput on each node (i.e., 100 TPS per node, 200 TPS combined). CPU utilization is very low on both nodes, only 10-15%. So it looks like the bottleneck is down to some setting on the Gateway end.



  • 4.  Re: Native MQ Performance
    Best Answer

    Posted May 17, 2017 02:31 PM

    Hi thanawan.sachdev,

     

    I'm sorry you're having trouble with this. It is a known performance issue. While Inbound MQ Connections have configurable pools of connections that can read from a queue in parallel, Outbound MQ Connections do not have such a pool, so messages are written to queues serially. We are working to address this.

     

    At the moment, a workaround is to configure multiple identical Outbound MQ Connections for the same queue and spread your routes across those connections in policy, varying the connection from request to request.
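One way to sketch that spreading logic is simple round-robin selection over the identical connections. This is an illustrative sketch only (the class and connection names are hypothetical, not Layer7 APIs); in an actual policy it would typically be expressed by computing an index variable and branching to the matching MQ route:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the workaround: several identical outbound
// connections to the same queue, with one picked per request so that
// writes proceed in parallel instead of serializing on one lock.
public class OutboundConnectionRoundRobin {
    private final List<String> connectionNames; // e.g. "mq-out-1" ... "mq-out-4"
    private final AtomicLong counter = new AtomicLong();

    public OutboundConnectionRoundRobin(List<String> connectionNames) {
        this.connectionNames = connectionNames;
    }

    /** Select the next connection in round-robin order (thread-safe). */
    public String next() {
        int i = (int) (counter.getAndIncrement() % connectionNames.size());
        return connectionNames.get(i);
    }

    public static void main(String[] args) {
        OutboundConnectionRoundRobin rr = new OutboundConnectionRoundRobin(
                List.of("mq-out-1", "mq-out-2", "mq-out-3"));
        System.out.println(rr.next()); // mq-out-1
        System.out.println(rr.next()); // mq-out-2
        System.out.println(rr.next()); // mq-out-3
        System.out.println(rr.next()); // wraps back to mq-out-1
    }
}
```

Because each named connection serializes independently, N connections raise the serial ceiling by roughly a factor of N, which is the intent of the workaround.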

     

    Regards,

    Jamie Williams



  • 5.  Re: Native MQ Performance

    Broadcom Employee
    Posted May 17, 2017 03:14 PM

    To add to Jamie's response, this item is being tracked as US279216, so when the functionality change is incorporated into the product you will see it in the release notes. In the interim, if you need a fix, please contact support to request assistance.

     

    Sincerely,

     

    Stephen Hughes

    Director, CA Support