
Docker images get big real fast. If you follow a few simple guidelines, you can avoid bloated images and keep image sizes in the megabytes, not gigabytes.

 

I will use GitHub - CA-APM/docker-introscope as an example - a project for running CA APM in Docker containers, forked from a customer and updated with every new release of CA APM. I still need to update it for version 10.7 SP1 - or, after reading this post, you should be able to do it yourself.

 

1. Use Minimal Base Images

You can start with a full-blown image, e.g. CentOS, or you can start with an Alpine Linux image. The difference can be nearly a factor of 10 in image size:

docker images
REPOSITORY                          TAG         IMAGE ID      CREATED        SIZE
ca-standard-images/alpine-jre8      3.6         91ff24d2cd8a  5 months ago   82.5MB
ca-standard-images/centos72-java8   latest      f9de4f60691a  13 months ago  592MB
postgres                            9.6-alpine  6583932564f8  11 days ago    39.3MB
centos/postgresql-96-centos7        latest      33511160be06  3 weeks ago    337MB

So when you choose your (runtime) image, check the image size and look for "alpine" in the tag.
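For example, you can pull two candidate base images and compare their sizes locally before committing to one (a minimal sketch; the tags here are just examples):

docker pull alpine:3.8
docker pull centos:7
# print repository:tag and size for all local images
docker images --format 'table {{.Repository}}:{{.Tag}}\t{{.Size}}'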

2. Use Multi-Stage Builds

I wrote "runtime image" above because the image that you build in need not be the image you run in:

Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.

(Source: Use multi-stage builds | Docker Documentation)

You may need tools or packages (e.g. *-devel) in your build environment that you don't need in your runtime environment. For CA APM the installer binary (introscope${INTROSCOPE_VERSION}linuxAMD64.bin) alone is 1.6 GB because it includes multiple components: the Enterprise Manager, WebView, the database installer, the ACC config server and the AOP integration. As we want to build microservices, we want to install just one of the components per image. And we don't need the installer once the installation is finished.

So we COPY the installer file to our build container, run the installer and then COPY just the INTROSCOPE_HOME directory to the runtime container.

So for the Enterprise Manager installation the Dockerfile looks like this (abbreviated, see GitHub - CA-APM/docker-introscope for full file):

FROM jeanblanchard/alpine-glibc as install
# install image

WORKDIR /opt/introscope-install
COPY ${INTROSCOPE_BIN} /opt/introscope-install/
COPY eula-introscope/ca-eula.txt SampleResponseFile.Introscope.txt /opt/introscope-install/

# run the installer and hotfix
RUN chmod +x ${INTROSCOPE_BIN} && \
./${INTROSCOPE_BIN} -f SampleResponseFile.Introscope.txt && \
cd ${INTROSCOPE_HOME} && \
jre/bin/java -jar /opt/introscope-install/APM${INTROSCOPE_HOTFIX}.jar


FROM jeanblanchard/alpine-glibc
# target image
LABEL version="10.7.0-HF3"
WORKDIR ${INTROSCOPE_HOME}
COPY --from=install ${INTROSCOPE_HOME}/ ./
COPY startup.sh /opt/introscope-install/

RUN chmod +x /opt/introscope-install/startup.sh
EXPOSE 5001 8081 8444
CMD /opt/introscope-install/startup.sh
  1. We start with our "install" image (line 1): jeanblanchard/alpine-glibc - this need not be the same as the target image below!
  2. Install any packages/tools you need for the build. (We don't need anything else here.)
  3. Next we copy the installer and other needed files into our container (lines 5-6).
  4. Then we run the installer (lines 9-12).
  5. Now we build our "target" image, again from jeanblanchard/alpine-glibc (line 15). This should be the smallest possible image that has all you need to run your application.
  6. Then we copy the installed application from our "install" image (line 19).
  7. Add everything else that we need: files, ports, volumes, CMD to run (lines 20-24).

The resulting "target" image has just 1.44 GB instead of 3.16 GB if we build in one stage. The difference is (mostly) the installer but could be even more if you need more tools to run your build (JDK, maven, ...).
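If you want to see where the remaining size comes from, docker history lists the size of every layer in an image (the image name here is just an example):

docker history ca-apm/introscope-em:10.7.0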

3. Use Common Dockerfiles

In my example I build images for Enterprise Manager, WebView and ACC Config Server. The first 35 lines of the Dockerfiles of Enterprise Manager and WebView are exactly the same - only some of the copied files are different, e.g. the response file that contains the options for the silent installer.

If you put everything that is common between your images at the top of your Dockerfile and differences as far down as possible, not only the base images but also the resulting layers after the first few steps will be cached and re-used by Docker. 
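Here is a minimal sketch of the idea - the ARG name and the WebView response file name are my own placeholders, not the actual project layout:

# common part first: identical in all Dockerfiles, so Docker caches
# and re-uses these layers across the EM, WebView and ACC builds
FROM jeanblanchard/alpine-glibc as install
WORKDIR /opt/introscope-install
COPY introscope10.7.0linuxAMD64.bin /opt/introscope-install/

# differences as far down as possible: only the response file changes
ARG RESPONSE_FILE=SampleResponseFile.Introscope.txt
COPY ${RESPONSE_FILE} /opt/introscope-install/

Each component can then be built from the same Dockerfile, e.g.:

docker build --build-arg RESPONSE_FILE=SampleResponseFile.WebView.txt .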

Summary

Following these three guidelines has helped me reduce the size of the CA APM Docker images significantly: by more than half for the EM, and from 1.4 GB down to 40 MB for the Postgres DB.

I'd be interested to hear how you put those rules to work or if you have other hints for building small Docker images.

JMertin

Docker 1: Pitfalls to avoid

Posted by JMertin Employee Jun 19, 2018

The most common reason a set of dependent Docker containers does not start up correctly is that one of the containers did not come up correctly.

Analyzing many Docker containers, I have very often noticed "developers" adding a startup script like the one below (I admit, this is one of the worst versions I have seen):

 

java -jar ${javajunk} &
sleep 6;
java -jar ${javajunk2} &
sleep 10;
java -jar ${javajunk3}

 

That is the portion of a script that starts three Java processes.

That example has several issues - things that should definitely not be done:

  1. Docker containers are supposed to handle micro-services. Here we start three Java programs, each using at least 2 GB of RAM. "Micro-service" is the wrong name here; this should be called a "macro-service".
    The only understandable reason to make a Docker container out of this is to have it all packaged and pre-configured to run.
  2. The programs are started in the background. The docker-server will not be able to know whether the applications started or not.
  3. Depending on the host load, 6 seconds may not be enough. The best part is the 10 seconds for the 3rd app to start: depending on the load induced by apps 1 and 2, it may be too short. So if application 3 really depends on applications 1 and 2 being started, this can fail - especially when installed in a customer environment where the host already hosts other applications causing high CPU load.
  4. Dependencies cannot really be handled here - that is, Java app 2 needs to be started only after Java app 1 is up.

 

 

So - what is the solution here?

 

  1. Start one container per application. If that is not possible, an internal job handler needs to be written (a shell script works well).
  2. Never start a program in the background. When starting the program through a shell script, prepend "exec" to the call to replace the current shell with that program. In that case, if the application crashes, the docker-server will know.
    Note that this will only tell the docker-server that the application has been started. It will not know whether it is ready.
  3. Using 1. and 2., start one container per application and use docker-compose to start the containers in order.
    If that is not possible, implement inside the applications a "ready" status indicator that can be passed on to other scripts or polled - so they know the dependency is ready and the current app can be started. In the regular Linux world, with MySQL, one can issue a regular call to the mysqladmin command and check the status of the MySQL server:
    mysqladmin --host=db1.example.org --user=monitor status
    Uptime: 884637  Threads: 1  Questions: 5534424  Slow queries: 144  Opens: .....
    If the status returns a valid (known) entry, the next container can be started.
    This makes an ordered, on-time startup of the applications doable. Anything else - especially with Java applications, which tend to take ages to start - is just a wild guess. A minimal sketch combining 2. and 3. follows after this list.
  4. All of the above together.
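Here is that sketch - the jar name and the database host are placeholders:

#!/bin/sh
# entrypoint.sh: poll the database until it reports ready,
# instead of sleeping a fixed number of seconds
until mysqladmin --host=db1.example.org --user=monitor status >/dev/null 2>&1; do
    echo "waiting for mysql ..."
    sleep 2
done
# exec replaces this shell, so the Java process becomes PID 1 and
# the docker-server notices if it exits or crashes
exec java -jar myapp.jar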

 

The main issue, as stated before, is when one app depends on another and there is no way to tell when that app is ready. Without an internal "ready" status this is not really possible, and it leads to ugly hacks like "sleep 10" ...

In March of 2018, Gartner published the Gartner Magic Quadrant for Application Performance Monitoring (APM) suites.

In a crowded field combining long-established vendors and new entrants, CA Technologies has been recognized as a leader. We believe this is recognition of our solution that addresses the three core tenets of APM – digital experience monitoring; discovery, tracing and diagnostics; and advanced analytics. CA's application performance monitoring solutions help DevOps and IT Operations teams meet the complex application performance challenges confronting businesses today.

 

Join our webcast on Thursday, June 28 at 11AM EST to hear how CA has engineered its leading APM solution to support the evolving needs of IT and a modern digital business.

 

Click here to register

Thank you for attending yesterday's community webcast [COMMUNITY WEBCAST] Monitoring Massively Connected Grids:  Energy Sector – May 14, 2018 @ 2 p.m. ET 

 

 

Our Presenters: 

  • Mike Beehler, Vice President, Burns & McDonnell
  • Travis Blalack, Sr Manager Grid & Telcom Network Operations, Southern California Edison
  • Adam Frary, CA Services Marketing  fraad01
  • Leroy Chang, Managing Consultant, CA Services chale03

In part 1 of this blog I covered the ‘swivel chair’ multi-view method of application performance management, why it’s far too prevalent and the challenges it poses for many customers. I discussed how modern applications are not single technologies in a traditional siloed stack; most often, they are a collection of functions delivered by disparate technologies. This poses a problem for effective application performance management—a problem worth solving.

There are a couple of strong technical solutions to consider, but before we go there, here are a few considerations:

 

  • How often will you need to modify or extend the applications you monitor?
  • Is it worth the effort to tune alert thresholds, update dashboards, and/or modify agent behaviour?
  • What level of detail do you need to see?  How much is too much?
  • Is tracing business transactions in-flight really important to you? If so, why?
  • Is it important that other team members or APM stakeholders have access to the data and/or the cross-app tool?
  • Is implementing SAP Extended Diagnostics (SED) simply too expensive, or does it not deliver the value you need?

 

Some of my customers use CA APM, while others use SAP. The main thing is to find a tool that gives you the data you need while avoiding the swivel-chair approach to APM.

A point solution that has helped many SAP customers enjoy a joined-up approach brings application environments together in a single view. It offers coherent insight and diagnostics across a disparate programming landscape that can include SAP ABAP, JAVA/.NET, PHP and more.

 

Trying to jump from one part of a complex modern application environment to another to trace business transactions in flight is both arduous and error prone. Right to View (RTV), the free APM solution bundled with SAP Solution Manager, has a fixed capability and doesn’t allow for self-augmentation. Solution Manager itself also has a somewhat static scope. Neither RTV nor Solution Manager provides a seamless, joined-up view, and their lack of flexibility can be frustrating when you’re faced with challenges that deviate from the norm.

 

One answer is CA APM with the CA ABAP extension packaged work product (PWP). CA APM offers the functionality of RTV (they are based on the same technology), but in a more extensible and flexible package. The CA ABAP extension PWP is a rich source of environmental and transactional data in a single, in-context view. In addition, a wealth of other possibilities—JavaScript, PHP, Application Server and NodeJS—enables operations teams to corral a wide scope of technologies.

 

With the ABAP field pack, you can connect and retrieve data from an SAP ABAP instance and push the data to CA APM. The ABAP field pack invokes SAP BAPI or other SAP remote function modules in the SAP system to gather metrics such as enqueues, operating system, system info, data transactions, users, workload and work processes.

 

A top request of many customers I work with is to enrich the data around the SAP ABAP environment. Application owners and performance managers want to incorporate user information, sessions and dialogues, ICM status, running threads, RFC resources and buffer allocation into their application performance views. The SAP Host Agent feature of CA ABAP allows you to do this. It also adds value by monitoring the virtual environment via collecting metrics for the virtual machine such as max CPU seconds, memory allocated, number of active/waiting VMs, etc.

 

Another neat capability of CA ABAP is ABAP Hotspot Analysis, which measures SAP ABAP instance nodes: function modules, form routines and methods. To make in-context data consumption easier, the CA APM Investigator tree view displays additional metrics in table form.

 

I know this is a lot to consider. If you have real-world experience of RTV, Solution Manager or another SAP/ABAP monitoring tool, many of the terms will make sense. If a lot of this is unfamiliar to you, a more visual and practical demonstration might be useful. You can join me on this webinar (May 15, 2018) if you would like to see how to build a single view of your application and use these tools to gather data from many sources.

Note: We'll be at AWS Summit San Francisco on April 4th! We'll have our own booth there, so if you can, please come and find us (the event is free!) - we'd love to chat with you.

 

CA APM is an Application Performance Management solution that can help teams gain performance insights and diagnose issues across the full software lifecycle. It includes transaction tracing that supports modern APIs and apps, and it can help teams get to the root cause of bugs in APIs, transactions, code and database calls.


Runscope is an API monitoring solution that ensures that a web service API in a mobile or web application is as reliable as code running locally or in production. Runscope customers configure API monitoring tests for both test and production and can ensure that their APIs are up, performing well, and returning the correct data.


CA APM can monitor and record the details of API responses while they’re being handled in the data center or cloud. By connecting CA APM with Runscope API monitoring, you can get a full picture and find the root cause of why an API is slow or failing.

 

Watch the following video to learn how to set up the Runscope and CA APM integration, or follow the step-by-step instructions below:

 

 


 

Connecting Runscope with CA APM

 

Go to your CA APM SaaS tenant and log in. After that, copy the URL (without the path); we'll be using it in the next steps. For example: https://954976.apm.cloud.ca.com

 

[Screenshot: CA APM SaaS instance with the URL highlighted, from https through .com]

 

In your Runscope account, click on your profile on the top-right and select Connected Services:

 

[Screenshot: Runscope profile dropdown (top right) with the Connected Services option highlighted]


Find the CA Technologies logo and click on Connect CA APM:

 

[Screenshot: Runscope Connected Services page with the Connect CA APM button highlighted]


Paste the CA APM URL that we copied in the first step in the text field and click on Enable APM Traces:

 

[Screenshot: Runscope CA APM integration page with the text box for the CA APM instance URL]


And you’re all set! Next, we’ll look at how to start sending the API monitors information to our CA APM instance.

 

How to View an APM Trace


The first thing we need to do is enable our integration in our API monitor environment settings, to start sending information from Runscope to our CA APM instance:

 

  1. In your Runscope account, select a bucket and open an API monitor.
  2. Select Editor on the left-hand side, and open the environment settings.
  3. Open the Integrations tab in the environment settings and turn on the flag for the CA APM integration.


[Screenshot: Runscope API test environment settings, Integrations tab, with the CA APM integration toggled on]


Now that the integration is enabled, we can click on Save & Run at the top. After the test is completed, open the result page under “Recent Test Runs”:

 

[Screenshot: Runscope API test result page with the 'Trace: View in CA APM' link at the top and the same link inside a request's expanded 'Connection' tab]


The first link at the top next to Trace, "View in CA APM", will show the metrics map for the entire test.

 

You can also expand each individual API request and select the “Connection” tab. You’ll find a Trace section under Timings with another “View in CA APM” link. Following that link will land you on the metric view for that specific request.

 

Root Cause Analysis with Runscope and APM


For in-context viewing of an APM trace, please make sure you have the correct “Universe” settings. You can use “All” if you haven’t created a Runscope-specific Universe.

 

[Screenshot: CA APM map dashboard highlighting a Runscope API request]

 



If you need any help with your integration, check out our full docs and please reach out to our awesome support team!

Thank you for attending our community webcast CA on CA ASM – Learn How CA’s Internal IT is Leveraging CA App Synthetic Monitor for Proactive Monitoring of SaaS apps 

 

Sadly, the recording for this presentation did not work. Slides are attached for your reference (scroll down).

 

Let us know if you'd want a repeat performance by answering this quick poll --> Would you like to hear more about CA on CA with App Synthetic Monitor? 

 

Follow your presenters!  DevenShah and Dennis.W.Smith

Imagine if application users could see behind the serenity of the well-crafted (OK, sometimes cobbled together) user interfaces that deliver the seamless user experiences envisioned in brainstorming whiteboard sessions, agile scrums, big room planning and/or flow diagrams.

 

If they could, they would likely have an eye-opening moment as they grasp modern application architectures and come to understand the range of technologies working across on-premise and cloud infrastructures, middleware, APIs and microservices to present data, ensure security, and optimize availability and performance, all while remaining readily adaptable to new business requirements. It's a truly remarkable feat.

 

Oddly, none of this happens seamlessly; instead, countless seams must be navigated and managed to give users a seamless experience.

 

And, none of this complexity is delivered overnight. It accumulates over time, causing challenges for IT:

 

  • How do we monitor performance across the full expanse of a modern application?
  • How do we know our customers are getting the best value and experience from the application?
  • Do we have the information needed to precisely isolate issues when they arise?
  • Is the data in a meaningful actionable context for each group of stakeholders?
  • Are teams using the data, remediating issues and learning over time?

 

To deliver desired capabilities, modern application management systems bring disparate technologies together for developers, testers and operators. To deliver the desired performance and user experience, application performance management (APM) needs to work across technologies and present data and insights in a context meaningful for developers, testers, operators and product owners. APM must collate and provide user views with timely, precise, accurate and actionable insights. By interpreting, baselining and differentiating between minor anomalies and major incidents, APM brings value to all levels of application ownership, not just to the developer who can interpret the raw data. All this is quite reasonable until one looks at a complex application ecosystem such as an ERP solution, where complexities can expand exponentially.

 

As an example, consider an SAP environment.

 

With a mix of programming environments (Java and SAP Advanced Business Application Programming, or ABAP) for SAP, customers often use multiple tools to manage performance. As a result, specific issues may be overlooked or not monitored by the right stakeholders and, more importantly, an issue’s context may be lost or not clearly conveyed.

 

Many SAP customers use a combination of SAP Extended Diagnostics (SED)- and SAP Solution Manager (SSM)-based components to capture and independently present performance metrics and transaction data. Think of this as a cross-platform “swivel-chair approach,” with users of the data pivoting from issue to issue, and not seeing the full ecosystem in a context-rich, coordinated way. This approach can be time consuming, frustrating to stakeholders and difficult to sustain over time. The worst scenario is that teams responsible for performance lose focus and incorrectly limit the scope of what they monitor due to the effort and complexity of using many tools.

 

Context helps teams learn.

 

Since modern applications increase in complexity during their lifecycle, teams often benefit from an APM approach that provides context that connects performance data to actionable insights and outcomes. For the SAP example, teams benefit when they can see cross-application transaction traces that traverse Java, J2EE and ABAP environments—something that all ERP shops can get excited about!

 

This makes for a much better APM user experience when diagnosing an issue whose cause is unclear.

 

Stay tuned. In Part II of this blog, I will provide technical detail focused on SAP/ABAP performance management, Java, PHP, .NET and Node JS.

 

Related Links: CA APM video and data sheet.


We are pleased to announce that CA Application Performance Management (CA APM) r10.7 is now available. Among many great improvements, this release is strongly focused on cloud and container monitoring and on application-to-infrastructure monitoring and correlation.

 

For complete details, check out the blog What's New in CA APM 10.7

 

 

Webcast Replay: 

Original Invitation: What's New in CA APM 

Webcast Replay: [REPLAY] What's New in CA APM - February 27, 2018 

Q&A Transcripts:  attached.  questions were answered verbally, listen to the recording please

 

Follow your presenter Peter Kruty: krupe04

In The Advice from the Tenzo (Cook), Eihei Dogen gives more than a few cooking tips. See https://wwzc.org/dharma-text/tenzo-kyokun-instructions-tenzo.

 

(Image -- Thanks to Wikipedia)

 

At several points, he discusses not wasting even a single grain of rice. Whether in APM administration, customer service, or daily life, there are some important lessons to be learned from this.

 

1. It is important to take care in all of our activities, no matter how small. Every action counts. A single mindless mistake may mean hours of catching up.

 

2. Keep a single-minded focus on the outcome, whether it be a happy customer or a good meal.

 

3. Maintain a strong sense of urgency. Meals and severe-problem resolutions both have a limited window in which to be delivered.

 

4. Work with the limited resources that you have, whether that is people, hardware/software, time, or something else. Maximize all that is available to you.

 

5. Be consistent in your effort. After mastering an activity, do it the same excellent way day in and day out. Some days we may be tired or frustrated, but this is a good goal to have.

 

 

Conclusion
There is a lot more that I could go into. But I wanted to show how saving each grain of rice can teach us more than we think!

Thank you for joining today's community webcast: How to Build Trust in your APIs with Runscope 

Please follow your presenters today:  saybar and JohnCBenbow

 

Presentation (is attached)

Start a Free Trial --> API Monitoring · Runscope API Monitoring 

Watch the Recording --> [REPLAY] How to Build Trust in your APIs with Runscope - February 13, 2018 

Introduction

This post captures, for the first time in one place, the important areas to investigate before opening an APM TIM SSL case.

 

I've visited some of these ideas in:
https://communities.ca.com/community/ca-apm/blog/2017/12/01/tech-tip-66-drat-why-cant-i-record-in-apm-ce-cem -- Why can't I record?

https://communities.ca.com/message/99822745#99822745 -- Private keys

 

Question #1: Are there issues with my network setup?
Very often, network and SSL issues are interrelated. If the network traffic is one-way or filtered out, consists of empty or undersized packets, or contains dropped and out-of-order packets, then SSL traffic may not appear correctly or at all.
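As a quick sanity check, a short capture on the monitoring interface shows whether two-way HTTPS traffic is arriving at all (the interface name here is just an example):

tcpdump -i eth0 -n -c 100 port 443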

 

See the above links for possible next steps, as well as https://support.ca.com/us/knowledge-base-articles.tec1122441.html on SSL decode failures.

 

Question #2 Are my private key and passphrase in order?
Often, APM admins are given private keys by their web server, firewall, and load balancer admins. However, they must trust that they received the right key in the correct format with the correct passphrase (including whether it is upper, lower, or mixed case). That may not be the case. To verify, compare the modulus of the certificate from the server with that of the private key that you were given. See How do I verify that a private key matches a certificate? (OpenSSL).
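A minimal check, assuming an RSA key and placeholder file names - the two MD5 hashes must match:

openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5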

 

Question #3 Am I using a supported TLS ciphersuite or TLS extension/feature?
If you get an unsupported cipher suite message in the TIM log, compare the ciphersuite number against a list such as https://www.thesprawl.org/research/tls-and-ssl-cipher-suites/ to learn more about the specific ciphersuite. 
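To see which protocol and ciphersuite a server actually negotiates, you can also ask openssl directly (the host name is an example):

openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'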

 

Also see the following for further details:
https://support.ca.com/us/knowledge-base-articles.tec1667615.html -- Supported TLS cipher suites
https://support.ca.com/us/knowledge-base-articles.TEC1926892.html -- Master secret
https://support.ca.com/us/knowledge-base-articles.TEC610516.html -- SSL session ticket

 

Question #4: Am I using TLS 1.1/1.2?
Your application may use TLS 1.1/1.2. APM TIM supports this with all current releases, but people sometimes forget to explicitly set DisableTLS11And12RecordsProcessing to 0 (enable). Note that by default it is implicitly set to 1 (disable).
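In other words, make sure your TIM configuration contains the following (the property name is as above; how you set it depends on your TIM version):

DisableTLS11And12RecordsProcessing=0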

 

Next steps

Having gone through these four questions, you know that you are not hitting the common networking and SSL issues. At this point, it is time to open a case, providing such items as an HTTP/HTTPS trace (pcap, Fiddler trace, or equivalent) and a TIM log with the SSL, HTTP Components/Parameters, and networking addresses trace settings enabled. Ideally, these should be captured at the same time to allow event correlation.

 

Please let me know some other common questions that you ask and future CEM topics that you want to see.

Introduction:

The sun has come out today after experiencing the infrequent event of heavy snow in Tidewater, Virginia. So, now seems like a good time to get my first blog out for the year. And I want to revisit an old topic.

 

Favorite TIM Debugging Tools

From time to time, I get asked for a list of helpful tools for TIM debugging. Here are some of the utilities I have found useful. The list is always changing. These should not be considered an official endorsement of the software by myself or CA Technologies.

 

1. APMscripts

     The ultimate utility scripts, created by Joerg Mertin, to gather the all-important info to analyze TIM performance and health. You can get them from https://github.com/CA-APM/cem-healthcheck-scripts . They are covered in APM Tech Tip -- The Missing Manual Part 3: Tim Monitoring 2 (apm-scripts)

2. TIM logs with trace options enabled. See APM Tech Tip: TIM Trace Options -- The Missing Pages Part 1, which covers when to use which trace option.

3. Wireshark for network data quality issues, including checking whether there is two-way traffic, SSL cipher suites, HTTP servers and statements, missing packets, etc.

4. Transaction Inspection for overlapping definition issues. Resolving APM CE Business Transaction/Defect Count Issues talks about this. Seeing which definition matches will show whether it is the one that you expect.

5. To split large pcaps by size or IP address, SplitCap is invaluable (see the example after this list).

6. WikidPad for my personal CEM knowledge base. Useful for searches.

7. Agent Ransack for analyzing and counting strings in a log. Supports regular expressions.

8. timconfigtool or any XML viewer to read domainconfig.xml. It is covered in APM Tech Tip -- The Missing Manual Part 1: TIM Analysis, Monitoring, and Other Tools. (Note: timconfigtool is an internal tool.)

9. The TIM logs with the SSL trace option on, to see SSL cipher suites and SSL decode errors (10.5 and later).
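Regarding item 5: if you have the Wireshark suite installed, editcap can do a similar splitting job by packet count (file names here are examples):

editcap -c 100000 big.pcap chunk.pcap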

 

Questions for You:
- What other tools do you use for TIM debugging and for what situations?

- What other TIM topics do you want to see?