As you may be aware, since CA PPM 14.1 we have had the possibility to create our own performance dashboards using NSQL queries against the LOG_XXXX tables.
This great feature only works when the environment is set up to use Tomcat as the web server. But first, let's talk about the out-of-the-box jobs you need to run:
1. Tomcat access log import/analyze:
According to official documentation:
This job imports and analyzes Tomcat access log files from the local CA PPM environment (all app
services), then stores and summarizes the data in designated tables (LOG_DETAILS, LOG_SUMMARY,
LOG_FILES, LOG_REPORTDEFS). With the addition of custom portlets and queries or externally
available content, this analysis data can provide details regarding the performance of the CA PPM
system. If you are not running a Tomcat application server, the job runs but does not import any data.
Log Date: Specifies the date for the access logs that are imported and analyzed. If no date is specified, the
default date is yesterday.
In other words, in order to see data in the custom dashboards, we must run this job.
How do I use it? I enter today's date in Log Date; otherwise it will only pick up app-access logs up to the day before. Once the job is completed, I go and check the performance portlets/dashboards.
One of the questions I get is: how many days of data can I keep in those LOG_XXXX tables? Well, it's up to the customer's needs, the specific troubleshooting at hand, or simply the size and activity of the environment.
Best practice is to keep 30 days; however, I've used 60 days without experiencing performance issues. Anything beyond 60 days will make the portlets very slow.
Therefore, you will need to schedule or run the following job to purge the tables. In case you forget, don't worry: it's already scheduled by default to run on a regular basis, which avoids performance issues and keeps up with good housekeeping practices.
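Before tuning the retention, it can help to see how much data each day actually contributes. A minimal sketch, run directly against the database; the LOG_DATE column is the one the purge job documentation mentions, but verify the full LOG_DETAILS layout against your own schema, as it can vary by version:

```sql
-- Row volume per analyzed day; helps pick a sensible retention window.
SELECT LOG_DATE,
       COUNT(*) AS ROWS_PER_DAY
FROM   LOG_DETAILS
GROUP  BY LOG_DATE
ORDER  BY LOG_DATE DESC;
```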
2. Delete Log Analysis Data:
According to official documentation:
This job removes the CA PPM log analysis-related data. The criteria for removing the data is the
LOG_DATE field on each of the log analysis tables.
This job is scheduled automatically to run at 1:00 AM each day.
Log retention in days
Specifies the number of days that data is retained in the tables that are related to analyzed access
logs. The default value for this parameter is 30 days.
Session token retention in days
Specifies the number of days that data is retained in the table LOG_SESSIONS. The data
specifically stores a mapping of the CA PPM session token to CMN_SEC_USERS.ID for analysis and
audit purposes. The default value for this parameter is 14 days.
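For audit scenarios, that mapping can be queried directly. A hedged sketch: only CMN_SEC_USERS.ID is confirmed by the documentation quoted above, so the join column on the LOG_SESSIONS side (here called USER_ID) is an assumption; check your schema for the real column names:

```sql
-- Map analyzed session tokens back to named users
-- (s.USER_ID is an assumed column name).
SELECT u.ID,
       u.USER_NAME,
       s.*
FROM   LOG_SESSIONS s
JOIN   CMN_SEC_USERS u ON u.ID = s.USER_ID;
```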
So far so good, but this is all theory. Now, let's move on to the more practical part.
CA PPM offers the possibility to build custom NSQL graphical portlets. That's a perfect occasion to start practicing our development skills.
What kind of portlets can we build? See below a list with some examples:
- AVERAGE RESPONSE
- PAGES VIEWED
- USER VIEWS
- SLOW USERS
- SLOW PAGES
Let's explain some of the portlets in the list to give a better idea.
The AVERAGE RESPONSE portlet, for instance, lets you analyze:
- the average elapsed time per hour during the current day (blue TODAY line).
- the average over the last N days, excluding the current day (purple AVG lines).
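A rough NSQL skeleton for a portlet like that could look as follows. This is a sketch under assumptions: the inner query's columns HOUR_KEY and ELAPSED_MS are hypothetical names (only LOG_DATE is confirmed by the job documentation), and the date arithmetic is Oracle-style:

```sql
SELECT @SELECT:DIM:USER_DEF:IMPLIED:STATS:S.HOUR_KEY:hour@,
       @SELECT:METRIC:USER_DEF:IMPLIED:STATS:S.AVG_TODAY:avg_today@,
       @SELECT:METRIC:USER_DEF:IMPLIED:STATS:S.AVG_HIST:avg_hist@
FROM  (SELECT HOUR_KEY,  -- hypothetical "hour of day" column
              AVG(CASE WHEN LOG_DATE = TRUNC(SYSDATE) THEN ELAPSED_MS END) AS AVG_TODAY,
              AVG(CASE WHEN LOG_DATE < TRUNC(SYSDATE) THEN ELAPSED_MS END) AS AVG_HIST
       FROM   LOG_SUMMARY
       WHERE  LOG_DATE >= TRUNC(SYSDATE) - 14  -- last N = 14 days
       GROUP  BY HOUR_KEY) S
WHERE @FILTER@
```

The TODAY line comes from AVG_TODAY and the AVG line from AVG_HIST, charted against the hour dimension.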
This is a very easy way to know if there is a specific user, or a group of users, causing APP memory issues, for example by exporting huge Excel files or pulling too many rows from a portlet.
In the sample below we can see that User 10 is causing a performance issue. This is a real sample from a customer, so I've replaced the names with "User N". The problem was that every day a user was pulling more than 30,000 rows into an NSQL portlet and then exporting it to Excel several times. As a consequence, CA PPM was slow every morning around 10 am. With this kind of portlet and data, troubleshooting is a piece of cake.
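To hunt down heavy users like this straight from SQL, something along these lines can serve as a starting point; apart from the table names and CMN_SEC_USERS.ID, every column here (SESSION_TOKEN, USER_ID, ELAPSED_MS) is an assumed name to adapt to your actual LOG_DETAILS/LOG_SESSIONS layout:

```sql
-- Today's heaviest users by request count and total elapsed time
-- (column names other than LOG_DATE are assumptions).
SELECT s.USER_ID,
       COUNT(*)         AS REQUESTS,
       SUM(d.ELAPSED_MS) AS TOTAL_ELAPSED_MS
FROM   LOG_DETAILS  d
JOIN   LOG_SESSIONS s ON s.SESSION_TOKEN = d.SESSION_TOKEN
WHERE  d.LOG_DATE = TRUNC(SYSDATE)
GROUP  BY s.USER_ID
ORDER  BY TOTAL_ELAPSED_MS DESC;
```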
I could go on about the never-ending list of portlets you can build around those tables.
Related and interesting links:
That’s all. Thanks for reading this far. Did you like it? Please, don’t be shy and share it.