All People > Thomas_P._Koehler_PMP > Thomas Koehler's Blog


I am a big fan of using Clarity for customer status reporting.

  • It presents a single place to collect project governance information
  • The status report portlet provides easy history to review how a project evolved
  • Anybody can see history without having to review multiple documents
  • I can update continuously during the week from any device


My status reports have a very wide distribution, including all customer team members, CA Services team members (project team, engagement team), the CA Sales team, as well as others.


I use all sections of the status report except the following:

  • I do not use the financial section because of the wide distribution of the status report. For example, we do not want vendors to see the financial information.
  • I do not use action items due to the limited field size and the inability to show non-CA team members as responsible for open action items.

Supplemental Information

In addition to the Clarity status reporting, I provide additional information to the customer as necessary:

  • Financials: Financial reporting out of Clarity has a number of shortcomings. In addition, I report financial information only to a subset of the project team. I prefer to report financial information from the SAP BI T&M report. This way I can be sure that all time has processed correctly. If the customer wants a more detailed breakdown of the work, I also add a tab with the information from the “Status Detail – Weekly” portlet, summarized into a pivot table.
  • Project Schedule: For smaller projects I only use the Deliverables (Tasks) section in Clarity. Larger or more complicated projects will require a MS Project schedule.
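The pivot-table summary of the weekly detail data could be sketched like this. This is a hypothetical illustration in pandas; the column names (`task`, `resource`, `hours`) are assumptions, not the actual portlet fields.

```python
import pandas as pd

# Example rows as they might come out of a weekly status detail export
# (illustrative data only)
detail = pd.DataFrame({
    "task":     ["Requirements", "Requirements", "Design", "Design"],
    "resource": ["Alice", "Bob", "Alice", "Bob"],
    "hours":    [8.0, 4.0, 6.0, 10.0],
})

# Pivot: one row per task, one column per resource, total hours in the cells
pivot = detail.pivot_table(index="task", columns="resource",
                           values="hours", aggfunc="sum", fill_value=0)
print(pivot)
```

The same shape is what the Excel pivot table produces: tasks down the side, people across the top, summed hours in the middle.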

Special Cases

There are projects where Clarity status reporting does not fit. You cannot force the Clarity report format on all projects.

  • Issues-oriented projects: These are projects where we are driven by project issues and the bulk of the work focuses on resolving issues. In this case, I will have a detailed Excel issues list which becomes the final “punch” list to project conclusion.
  • Short updates when there was no work for a while: In this case, I may just send a quick email to touch base with the customer. I then copy the email into the status report portlet so I maintain history.

Clarity Status Reporting when Managing Multiple Projects as One

At times, a customer contract translates into multiple CA Clarity projects. However, to the customer this is one project and I manage it as such.

  • I choose one of the Clarity projects as the main project and use it for all reporting.
  • I place a note in the status comment section of the other projects, indicating which project is my main reporting project.
  • I also create one initial status report and copy into it the same information from the project status field.

In addition, it is very important to align the tasks between the different projects so if I create a “Status Detail – Weekly” report, all tasks line up between the projects.

  • I make sure that the task names link back to the contract, such as “Exh. A – Requirements”, etc.

Copying a status report

As I have a large amount of information on the status report page, much of which does not change from week to week, I use the copy process. See the linked discussion, and especially Robin’s comment on how to make copying even easier.


 Are you using Clarity for status reporting? Add your tips and tricks below.


I had the privilege of leading a project to implement App Synth Monitor (“ASM”) for a major financial services organization. ASM is a complete cloud-based solution to monitor Web sites to ensure they provide a positive customer experience. This ASM implementation is my latest experience implementing a CA SaaS solution (my previous one was with DCIM, a PPM-based solution, but that was a few years ago and more of a hosted service than true SaaS). I want to take this opportunity to share some of my personal observations that apply to any SaaS solution in the enterprise, not just ASM.

I want to make sure to point out that I have no complaints with the ASM development, support, product management, services, sales and pre-sales teams. They all made a herculean effort to ensure the successful implementation of ASM for this customer. The teamwork the CA teams exhibited was incredible. This project truly was a manifestation of the CA DNA.

So, why am I writing this blog? We have a very ambitious (although I think achievable) goal of 50% of our revenue from SaaS in a few short years. This we can only do if we can convince many of our enterprise customers to move to SaaS solutions. And that means that while we understand the enterprise really well, we still can improve how we move the enterprise to the cloud.

So, without further ado, here are my key points and observations about SaaS in the enterprise.

Customers expect consulting

When we write contracts, we are very focused on the technical aspects of the work. However, in SaaS even more than with on-prem work, our customers expect true consultative work. They want us to guide them to making good decisions.

Our approach should not be: “Tell us what you want us to do”. Rather, we need to explain to the customer best practices, how other customers approach the deployment and what the implications are of the decisions they make.

This means that our contracts need extra time for this kind of consultative work. It is quite possible that the actual consultative work takes much more time than the technical work. It may also mean that we need time to create and demonstrate multiple approaches so the customer can see the impact of these approaches and make an informed decision.

Everything is larger in scale than you ever expected

Over the years, I have frequently been involved with first-time large-scale enterprise implementations of solutions we acquired.

In the case of previous ASM implementations, scale was not an issue because other customers deployed at a typical small-to-medium company size. This enterprise implementation was easily 10-15 times larger than the largest previous deployment. And each script showed levels of size and complexity we never encountered before.

Everything is more complex

What is complexity? Scripts are larger, more intricate, and more customized, with more special cases. Scripts were specific to internal browser versions that may no longer be considered modern or state-of-the-art. Scripts may take advantage of browser functionality that is not commonly used.

The other problem is that the environment may be highly customized and restricted (see the issues with security below). In addition, processes may prescribe a level of restrictions we have to work around. Process constraints may be just as rigid as technical constraints, and we need to take them equally seriously.

Strict IT standards

In many enterprises, the IT standards are very rigid (much more so than we are used to at CA). Many of the companies I have been working with have their desktops tightly controlled. Installing special software on the desktop to accommodate our SaaS solution requires a lot of special permissions and is not practical for a larger user base.

This holds true especially for browsers. Therefore, it is imperative that our SaaS solution supports a wide variety of browsers. The comment “Just use a different browser” is not an acceptable response. Equally, changing browser settings (for example, for the proxy) is not easily accomplished as these settings may be locked down as well.

If our SaaS solution requires components to be installed within the enterprise (such as for integration with other systems or for data collection), it is critical that these locally-installed components can be hosted on a widely accepted platform. This should include, as a minimum, RedHat Linux and Microsoft Windows Server. Requiring a non-mainstream OS, such as Debian, causes significant problems.

Supporting ongoing operations is key

Once a customer deploys any solution, it has to be turned over to Operations. This is a group of people responsible for the continuous operation of the solution. Look at this group as the people that take action when an event occurs. This is often different from administrators, who are responsible for the health and welfare of the solution.

The Operations group will perform routine tasks as the result of an event. That means that routine events and the required action need to be well documented.

The more events Operations can handle, the better. Any events that are outside the Operations cookbook approach to resolution (often called the runbook) have to be escalated to other groups, often leading to delays in response.

That means that CA needs to help the customer by providing a template runbook for routine events and recommended actions. The customer can then customize the actions for their specific operational requirements.
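The template-runbook idea above could be sketched as a simple mapping from routine events to recommended actions, which the customer then customizes. This is a minimal illustration; the event names, actions and thresholds are my own assumptions, not CA-provided content.

```python
# Hypothetical template runbook: routine events mapped to documented actions.
# Anything not covered by the runbook must be escalated to other groups.
RUNBOOK = {
    "probe_timeout": {
        "severity": "warning",
        "actions": [
            "Verify the monitored site is reachable from another location",
            "Check for an announced maintenance window",
            "Re-run the probe manually once",
        ],
        "escalate_if_unresolved_minutes": 30,
    },
    "certificate_expiring": {
        "severity": "minor",
        "actions": ["Notify the certificate owner", "Open a renewal ticket"],
        "escalate_if_unresolved_minutes": 1440,
    },
}

def handle_event(event: str) -> list:
    """Return the documented actions for a routine event, or flag escalation."""
    entry = RUNBOOK.get(event)
    if entry is None:
        return ["Escalate: event not covered by the runbook"]
    return entry["actions"]
```

The point is not the code but the structure: Operations gets a documented, repeatable action list per routine event, and anything outside it escalates with a clear trail.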

Security is paramount

The customer expects full control over security, both at the level of authentication and authorization. The enterprise customer expects granular, role-based security.

The enterprise security team does not like generic user or account names. They require special user accounts with as little authority as possible. Requiring an “admin” account for our solution causes a lot of problems.

Another aspect of security is that communications into and out of the enterprise are tightly controlled. Firewall and proxy settings may require a lengthy review and approval process.

There are many legacy systems

One challenge is that we must be able to support legacy systems, tools and environments. Legacy may also refer to older, out-of-support environments.

This is especially an issue with monitoring and similar solutions. The older a legacy solution, the more monitoring is required. Of course, in this case it may be hard to properly interact with the legacy solution.

No application is an island

The enterprise customer expects its applications to interact. In the case of ASM, it was just a starting point. The information is expected to flow through CA APM (Application Performance Management) as a centralized reporting platform on to Netcool to open tickets in the customer’s service desk solution.

This means that the customer needs full access to the data collected in the ASM application downstream.

In a generalized sense, enterprise customers expect to feed data into our SaaS application and to have downstream access to the data. Of course, one of the most important integrations is LDAP to feed user access and authentication information into our SaaS application. I know we have this for some of our applications, but not all.

Changes move through multiple lower environments into production

Customers expect to have multiple “lower” environments. I know of customers that have a DEV environment for development, a QA environment for testing, a UAT environment for user acceptance testing and, of course, a PROD environment. The rules for promotion from one environment to the next are well established. Detailed change control is required the closer you get to the PROD environment.

The customer expects a comparable setup for their SaaS application. In addition, they expect that configurations can be moved from one environment to the next without reentering information. This is critical to reduce the chance of introducing errors while moving the configurations through the environments. Ideally, you have all the tools of DevOps available to manage the transition through what customers call the promotion process.

And I think this is a reasonable expectation. We have multiple environments for some of the SaaS products we use, such as Salesforce or Concur.

Customers need to test new SaaS versions before CA promotes them to production

Any time we make a change to our SaaS platform, there is the chance we introduce errors. This happened frequently with ASM over the last year. And the risk increases exponentially with the number of customizations and interfaces we or the customer have developed.

In order to mitigate these risks, it is important to enable a customer to get a preview of the proposed upgrade with all their customizations and interfaces. I am not proposing to give them veto power over a release. But a couple of weeks should give them time to update their special work to ensure a smooth transition.

Customers expect 100% uptime

We all know that 100% uptime is not a realistic expectation. However, if we do not strive for it, we will fall well short. Unplanned outages should be a rarity, not something that happens several times a year. There should be redundancies built into our SaaS offerings, which reduce the chances for unplanned outages. We should be able to build an environment that allows even updates to the SaaS platform to not require an outage.

Just think of Twitter. Is Twitter really mission critical? Will there be financial harm to the users if there is a planned or unplanned outage to Twitter? Now think of our users. If you are a major financial institution and a Web site goes down unnoticed, that will cause financial and reputational harm to the company. And if our customer relies on our SaaS solution to notify them when they have an outage, our SaaS solution becomes mission critical to our customer.

When there is a planned outage, we need to provide proper notification. This must include

  • Impact of outage … is it the whole solution or just a subset?
  • Potential work-arounds … how can the customer obtain comparable functionality during the outage.
  • Precise timeframe … this should preferably be during non-working hours. Granted, this can be a challenge for a world-wide solution. But at least it should be possible to perform the outage during the weekend. Just see how Salesforce or Medallia handle planned outages.
  • Communication plan … as we get closer to the outage, we need to reinforce the communications. Again, see how Salesforce or Medallia handle planned outages.

Need to monitor our SaaS solution

So, we all understand that 100% uptime is not a realistic expectation. But if our SaaS solution is down, our customer needs to know right away so they can manually perform the work they are paying CA to do automatically. End-to-end automatic monitoring and escalation in case of an issue is critical. So, as a backup plan, there must be comprehensive monitoring of the CA SaaS solution and proactive notification of the customer should there be any outage or other problem. This must include any interfaces and customizations.

Let me rephrase this: Any unplanned outage of our SaaS solution is completely unacceptable. However, if it should occur, it is critical that the customer is properly notified.
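The monitoring-plus-notification idea could be sketched along these lines. This is a minimal sketch under stated assumptions: the health endpoint URL is a placeholder, and the notification function stands in for whatever email, ticketing or paging integration the customer actually uses.

```python
import urllib.request
import urllib.error

# Placeholder endpoint; a real deployment would point at the vendor's
# published health/status URL.
HEALTH_URL = "https://example.invalid/saas/health"

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True when the SaaS health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Unreachable, DNS failure, or timeout all count as "down"
        return False

def notify_customer(message: str) -> None:
    """Stand-in for proactive notification (email, ticket, pager)."""
    print(f"ALERT: {message}")

if not check_health(HEALTH_URL):
    notify_customer("SaaS solution unreachable - switch to manual procedures")
```

Run on a schedule (cron or similar), a loop like this is the "backup plan": the customer hears about an outage from the monitor, not from their end users.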

Replacing an existing on-prem solution

Of course, replacing an existing solution has the typical challenge of “We want a new solution but it has to work exactly like the previous one”.

However, replacing an on-prem solution with a SaaS solution has the added challenge that the customer team we are working with does not really know themselves how to fit the SaaS concepts into existing processes and procedures. This makes it especially critical that we at CA offer consultative services. Unfortunately, we are still learning ourselves how to best guide a customer to make this transition.

In addition, I have noticed that there are more instances where we work with business teams instead of IT when implementing a SaaS solution.

CA look and feel

This is a pet peeve of mine and I feel obligated to point it out at every opportunity I get. We at CA are a subscriber of Microsoft Office 365. Any time I am in an online Office application I have a consistent look and feel. The upper left gives me access to ALL my online Office apps. I know what the sidebar does and how to show or hide it. There is a consistency between all the online versions of the applications. Same with Salesforce. I may be using a third-party plug-in but it still has the Salesforce look and feel. For both examples, security is consistent … one login for each of the respective applications.

But then you look at CA solutions. There is just no consistency in user experience between applications. If you are lucky, your solution integrates with SSO. ASM does not. Just look at PPM, Agile Central and ASM (three products I am very familiar with). Who would think they are products from the same company? Looks like a bunch of point solutions. This does not represent the image of an integrated enterprise solution powerhouse.


In summary, a SaaS deployment in the enterprise has a lot of the same challenges a regular deployment has. However, fitting SaaS into an environment that has classically been used to on-prem exacerbates many challenges that might otherwise be routine, as we are all learning how to best guide our customers through the transition.

So, what are your thoughts? Please let me know by posting a comment below.

Now that you just passed your 12th birthday, you are ready for a well-deserved retirement. In IT terms, 12 years is like an eternity and I cannot believe you made it this long. You are, at times, painfully slow. I knew you were getting old when you started to display the dreaded "deadlock" error more and more.


You really turned out much different than I ever expected. After all, when Jim Cain and I were sitting in the Pittsburgh office one day in January 2006, all we wanted was something to make our lives easier. We created you so we could help our team navigate through the challenging transition to SAP. We thought that if we just dumped a few reports into an Access database and used it to correlate information from multiple sources, it would save us time. And, boy, did it ever. We could answer questions in minutes instead of first having to do 45 minutes of research.


It is not that you were particularly attractive, but you were extremely easy to use. Part of it came from my constant quest to improve my personal efficiency. If I could figure out how to do something with one click instead of three, I spent the time doing it. If I spent 4 hours saving myself 1 hour a week, it was a no-brainer. Multiply the time savings across our project management community, and you really improved our efficiency in small (agile) increments.


But you had to be greedy. You asked for more and more data. I had to scrape around our systems just to feed you every day. And the more data I fed you, the more people were screaming for more. You provided an incredible opportunity to view the data that drives CA Services in so many ways. 


You were not just a faithful servant to me. Anybody who heard about you wanted to be able to get access to you as well. First you served the EAST, then North America and finally the world. You grew into a monster! People wanted to use you, even when there were so many other tools available. Whenever there was a management initiative, we added specialized reports so you could address the current top concerns.


In your heyday, you became the hub of collaboration for CA Services management. You allowed us to do away with communicating via spreadsheets. Instead of passing outdated files around, we could put comments about our projects right with the financial and other data. You presented and correlated data from all our most-used systems in Services, such as SAP, Clarity, Medallia, Project Workspace on Demand, Project Closure Library, Salesforce, the Field Comments application (anyone remember that?) and more. And, yes, you were even providing us with a better way to forecast (better than Excel, anyway). Oh, and you even told me when I was making stupid mistakes (not that I ever would, but thanks anyway).


So, now, it is finally time to retire. Yes, you are being replaced by a fresh face. Your replacement offers capabilities we used to dream about (I cannot count the special requests that Power BI now handles with ease). And with that, I no longer have to feed you data every day. I get to spend more time on what I do best. Thank you for making my life easier for a while. Yes, you will be missed, but soon we will embrace the new tools and, I hate to admit it, within a short period of time, you will be nothing more than a distant memory.


PS: FSD stands for Field Services Dashboard

If my calculations are correct, by posting this blog I did it ... I earned the coveted All-Star Collaborator badge. It nearly happened by accident. I say nearly ... I will explain more below.


I discovered the joy of collaboration when Martha_Dewey reminded me that collaboration is a metric we all will be measured on. Of course, those of you who know me know that I have an opinion about everything, and am not shy about sharing it. So I started out by liking and commenting on what is out there.


I started this journey with about 1,000 points early August and my first goal was to make the 250 point minimum by the end of the quarter. Without really trying too hard, I passed 2,000 by the end of September. 


So, when Q3 started, of course I wanted to get the All-Star Collaborator badge. But the goal seemed overwhelming. So I ignored it and just continued doing what I was doing in Q2. After all, if I can get 1,000 points in 2/3 of Q2, I should not have any problems achieving the 250 point minimum in Q3.


Then there was the discussion about the badge and I finally found where it was. Wow ... I was already 60% done without trying at all. Now my competitive nature came through and I wanted to get it done. So I reviewed what I had achieved and what I was lacking. I made some interesting discoveries:

  • Not all content is the same - there are questions, videos, files, documents, discussions, etc. I had posted everything as discussions and not anything in the other categories. Interesting. So, when I created something new, I started to evaluate what other category it could be in and chose the appropriate one.
  • If you get the badge "by accident", you will have way more than the required 250 points. You just about cannot help it.
  • You get more points by starting content others engage with. Each like, helpful, or comment from others gets you points. They add up more quickly than when you click likes all over the place.


In my journey I made some other observations:

  • Discussions about job performance metrics are the most popular ones. "How can I look better come performance management time" seems to be a favorite subject.
  • I cannot figure out how to engage my peers in discussions about how to do my job better. That is sad. I have put out some content about improving the way I do my job and nobody seems to bite. Do you just not care about that? Or are my ideas just not as interesting as I thought?
  • I think the quality aspect is working itself out. I wish we would enable ratings so I can give content a star from 1 to whatever.


I may update this blog as I get additional ideas. In the meantime, join me in achieving this goal. It's not that hard.

Due to my background, the concepts in the modern software factory resonate with me very much. It brings back flashes from the past. And, yes, this blog shows my age but (at least I hope) does not make me an antique.


I have some firsts on my record (although, of course, nobody remembers or cares anymore). I was the very first to develop solid modeling (where you represent a three-dimensional object on a computer) on a desktop computer, called a Terak. Later, I was the first to develop the software for an Apollo (a UNIX wanna-be), an IBM PC and an Apple Macintosh. And that is where my story begins.


In 1982, IBM made a huge splash when they released the IBM PC. At the time, the language for scientific and engineering software was FORTRAN, and the original PC did not have the necessary compute power, memory or graphics capabilities. But when the IBM PC AT came out (with the professional graphics card), I was in business. Unfortunately, the development tools and FORTRAN compiler were horrible.


Fortunately, I already had a few Apollo computers, which had great development tools and my team knew how to develop for the Apollo. So, we established a process to develop on the Apollo but sell on the PC.


Here are a few key things my software factory had:

  • Follow the standards: Although the Apollo FORTRAN compiler provided some unique capabilities, we restricted ourselves to the basics to ensure portability. Any non-standard capabilities (such as certain I/O features, device access, etc.) had to be clearly isolated into target-system specific libraries. This approach allowed us to later port our software to some rather esoteric systems in short periods of time.
  • Automate, automate, automate: We established a complete automated porting process. Every evening before the last person left for home, we initiated the porting process. Then, in the morning, we could test our work on the target systems.
  • Agility in action: Of course, Agile was a concept not yet "invented" at the time. I called it design by prototype. In talking with customers or prospects, I came up with a solution and implemented it in small steps. Then I would show it in demos and get feedback. I would refine until we had a useful solution for release. 


So, yes, I am excited about the modern software factory. I wish I had these tools back then in the stone age. So, what have you tried in the past to improve the software development process?