Clarity

  • 1.  Wide ELAPSED_TIME variance on Auto-Suggest

    Posted May 17, 2017 04:05 PM

    Looking at CMN_SESSION data due to user complaints about performance. In the data, seeing large variation in ELAPSED_TIME when hitting /niku/odata/GetSuggestionsForLookup.

     

    How large?  From less than 1 second to over 400 seconds.
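
    For what it's worth, here is a rough Python sketch of how the spread could be summarized, assuming the session data is exported to a CSV with URL and ELAPSED_TIME columns. The file name, column names, and units (ms) are assumptions - adjust to whatever the actual extract looks like:

    import csv
    import statistics

    # Summarize ELAPSED_TIME for the auto-suggest endpoint.
    # Assumes an export named session_data.csv with URL and ELAPSED_TIME
    # (milliseconds) columns - adjust names/units to the actual extract.
    times = []
    with open("session_data.csv", newline="") as f:
        for row in csv.DictReader(f):
            if "GetSuggestionsForLookup" in row.get("URL", ""):
                times.append(float(row["ELAPSED_TIME"]))

    times.sort()
    if times:
        print(f"count={len(times)}")
        print(f"min={times[0]:.0f} ms  max={times[-1]:.0f} ms")
        print(f"median={statistics.median(times):.0f} ms")
        print(f"p95={times[int(0.95 * (len(times) - 1))]:.0f} ms")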

     

    The data doesn't indicate which auto-suggest field is performing slowly for the user.

     

    Can anyone suggest how we might identify the field (other than calling users - will do that if it's the only way, but it's rather lame to call one's users and ask if they recall an auto-suggest field performing poorly last week...)?
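
    (One angle that might work without calling anyone: if the full request URLs with query strings are available in the data, the query string on GetSuggestionsForLookup may identify which lookup was being hit. A rough Python sketch under those assumptions - the log layout and the 'code' parameter name are guesses, so check a real request in the browser's network tab first, and adjust the parsing if the data is a CSV export instead of an access log.)

    import re
    from collections import defaultdict
    from urllib.parse import urlparse, parse_qs

    # Group auto-suggest requests by lookup code and report the slowest.
    # Assumes a Tomcat-style access log whose request line contains the
    # full URL (with query string) and whose last field is elapsed ms
    # (%D) - both assumptions, adjust to the actual AccessLogValve pattern.
    LINE = re.compile(r'"(?:GET|POST) (?P<url>\S+) HTTP/[^"]*".* (?P<ms>\d+)$')

    by_code = defaultdict(list)
    with open("localhost_access_log.txt") as f:
        for line in f:
            m = LINE.search(line)
            if not m or "GetSuggestionsForLookup" not in m.group("url"):
                continue
            qs = parse_qs(urlparse(m.group("url")).query)
            code = qs.get("code", ["unknown"])[0]  # parameter name is a guess
            by_code[code].append(int(m.group("ms")))

    for code, times in sorted(by_code.items(), key=lambda kv: max(kv[1]), reverse=True):
        print(f"{code}: n={len(times)}  max={max(times)} ms  avg={sum(times)/len(times):.0f} ms")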

     

    Or, does anyone know of any conditions that might cause such variation (e.g. search lookup fields with dynamic queries - "X" code added vs. "X" code removed...)?

     

    Any other suggestions?



  • 2.  Re: Wide ELAPSED_TIME variance on Auto-Suggest

    Posted May 18, 2017 03:10 PM

    Hi Dale. Does your infrastructure group have any Real User Monitoring tools?

    I have Dynatrace connected to my PPM and I see everything. It shows the whole user session in the context of their click-by-click end-user experience. We've been able to isolate and remediate little gremlins and a couple of big honking monsters with this.

     

    HTH.



  • 3.  Re: Wide ELAPSED_TIME variance on Auto-Suggest

    Posted May 18, 2017 03:35 PM

    We have Aternity - fairly recently acquired - and I haven't seen that it can drill into the DB actions, which is where I think the root of the problem lies.

     

    Will check to see if Aternity can do this, or not.

     

    And, you've got me thinking...



  • 4.  Re: Wide ELAPSED_TIME variance on Auto-Suggest

    Posted May 18, 2017 04:14 PM

    Cool Stuff!

    End-User Experience Monitoring | Riverbed | US 

     

    I would be shocked if this doesn't give you the full stack. I'm fairly certain it will give you exactly what you're looking for.



  • 5.  Re: Wide ELAPSED_TIME variance on Auto-Suggest

    Posted Jun 02, 2017 03:48 PM

    Well, prepare to be shocked. Aternity, at least the version we purchased, won't break down anything inside the app or DB, but will break down everything else, end to end. Have learned we have another tool - OpNet - which can get into the app. Have also learned that the session data provided by our team is from the Tomcat log, not the CA PPM cmn_session tables - therefore, the data file is recording end to end. Therefore, confident that the variance is not from the DB, but somewhere between the user and the DB.
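
    One rough check that might help from the Tomcat data alone, before Monday: see whether the big elapsed times cluster on a handful of clients (which would point at the network/client side) or spread evenly across everyone (which would point back inside the app/DB). A Python sketch, assuming the access log has the client IP as the first field and elapsed ms (%D) as the last - both depend on the AccessLogValve pattern, so adjust:

    import re
    from collections import defaultdict

    # Bucket GetSuggestionsForLookup elapsed times by client IP.
    # Assumes: client IP is the first field, request line is in quotes,
    # elapsed ms (%D) is the last field - adjust to the actual log pattern.
    LINE = re.compile(r'^(?P<ip>\S+) .*"(?:GET|POST) (?P<url>\S+)[^"]*" .* (?P<ms>\d+)$')

    by_client = defaultdict(list)
    with open("localhost_access_log.txt") as f:
        for line in f:
            m = LINE.match(line)
            if m and "GetSuggestionsForLookup" in m.group("url"):
                by_client[m.group("ip")].append(int(m.group("ms")))

    for ip, times in sorted(by_client.items(), key=lambda kv: max(kv[1]), reverse=True)[:10]:
        slow = sum(1 for t in times if t > 10_000)
        print(f"{ip}: n={len(times)}  max={max(times)} ms  slow(>10s)={slow}")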

     

    Have a review planned for Monday - should be able to identify, with Aternity, whether the variation is from outside or inside the app - then, if inside, we will throw OpNet at it. If outside the app, will take it off the possible root cause list...

     

    The only possible hiccup is that Tomcat records all user activity, while Aternity only monitors specified resources and specified pages - the net in Aternity might not be broad enough to capture the variation seen in the Tomcat data. If it isn't, will have to consider adding more resources to the tracking list.
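
    If it comes to widening the net, a quick way to build that list might be to diff the clients behind the slow Tomcat requests against whatever Aternity is already tracking - a sketch, assuming the tracked list can be exported one host/client per line (file names, the >10s threshold, and the log layout are placeholders):

    import re

    # Find clients with slow auto-suggest requests in the Tomcat log that
    # are not yet on the Aternity tracking list. File names, threshold,
    # and log layout are placeholders - adjust as needed.
    LINE = re.compile(r'^(?P<ip>\S+) .*"(?:GET|POST) (?P<url>\S+)[^"]*" .* (?P<ms>\d+)$')

    with open("aternity_tracked.txt") as f:
        tracked = {line.strip() for line in f if line.strip()}

    slow_clients = set()
    with open("localhost_access_log.txt") as f:
        for line in f:
            m = LINE.match(line)
            if m and "GetSuggestionsForLookup" in m.group("url") and int(m.group("ms")) > 10_000:
                slow_clients.add(m.group("ip"))

    missing = sorted(slow_clients - tracked)
    print(f"{len(missing)} slow clients not currently tracked by Aternity:")
    for ip in missing:
        print(" ", ip)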



  • 6.  Re: Wide ELAPSED_TIME variance on Auto-Suggest

    Posted Jun 04, 2017 09:21 PM

    Keep me posted. I piloted AppDynamics as well. We never got good data out of it - it could have been a knowledge issue; one can't expect every pre-sales engineer to understand how to 'hook in' to all enterprise-class tools, especially one as uniquely architected as PPM. Although I'm doing OK with Dynatrace (last Friday we put in our first optimization from Dynatrace findings), I really won't go through training until next week (week of 6/12/2017).

    Later in June, if you still can't find what you're looking for, I can demo what PPM w/Dynatrace can do. Dynatrace will support 30-day pilots to prove their value. Maybe there's an opportunity to 'pilot' with your support group to find the problem you're hunting. You get results, and they expand their knowledge of what's possible, with no obligation.