
Rafael Cardoso's Blog

October 13, 2014


OS: LINUX 64bit

EEM Version: 12.51.04 (installed from the CA WAAE Common Components disk)


Whenever I refresh the page on the “broken” EEM installation:

  • I get the "500 internal sponsor error" page
  • The igateway.log file adds these entries:

        [1496099696] 09/05/14 16:45:43 ERROR :: SponosorCallBack::GetLibPtr2 : unable to load library [ /opt/CA/SharedComponents/EmbeddedEntitlementsManager/lib/eiamSpindle ] Error Code [ 0 ]
        [1498200944] 09/05/14 16:45:45 ERROR :: SponosorCallBack::GetLibPtr2 : unable to load library [ /opt/CA/SharedComponents/EmbeddedEntitlementsManager/lib/eiamSpindle ] Error Code [ 0 ]

There is a matching file in the library folder:

      -r-x------. 1 root root 7.9M Jun  4  2013 /opt/CA/SharedComponents/EmbeddedEntitlementsManager/lib/


On the working EEM server:

Refreshing the page returns the EEM login page and the igateway.log file is empty.

The file looks like this:

     -r-x------ 1 root root 7.9M Jun  4  2013 /opt/CA/SharedComponents/EmbeddedEntitlementsManager/lib/



This error usually occurs when a library file is missing, or when the igateway process has no read access to it.

EEM requires a number of libraries to be installed beforehand.

Prerequisite packages to install (all are 32-bit and available on the RHEL 6.1 x86 installation disk):

  1. Required before the installer will run:
    • libgcc-4.4.5-6.el6.i686.rpm
    • libstdc++-4.4.5-6.el6.i686.rpm
  2. Required before the web UI will function properly (only for R12.5):
    • libuuid-2.17.2-12.el6.i686.rpm
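A quick pre-flight check for these packages (a minimal sketch, assuming RHEL 6.x with rpm available; the package names come from the lists above):

```shell
# Check the three 32-bit prerequisites EEM r12.5 needs.
pkgs="libgcc.i686 libstdc++.i686 libuuid.i686"
for p in $pkgs; do
    # rpm -q exits non-zero when a package is not installed
    rpm -q "$p" >/dev/null 2>&1 || echo "missing: $p"
done
```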

Note: EEM ships the following libraries as part of the installation:

  1. zlib-1.2.3-25.el6.i686.rpm
  2. pcre-7.8-3.1.el6.i686.rpm


I researched a few resolved issues and found this solution:



Customer tried to access the GUI from the browser, which returns:

500 internal sponsor error


Errors in the igateway.log:


[1603267440] 07/14/14 19:33:37 ERROR :: SponosorCallBack::GetLibPtr2 : unable to load library [ /opt/CA/SharedComponents/EmbeddedEntitlementsManager/lib/eiamSpindle ] Error Code [ 0 ]

[1487698800] 07/14/14 19:35:27 ERROR :: SponosorCallBack::GetLibPtr2 : unable to load library [ /opt/CA/SharedComponents/EmbeddedEntitlementsManager/lib/eiamSpindle ] Error Code [ 0 ]

[2036587376] 07/14/14 19:35:27 ERROR :: SponsorManager::submit :sponsor[ favicon.ico] not found and invocation returned 404 error


Steps to resolution:

  • It appears that the issue is the missing libuuid library file.
  • Ran the following command:

        yum install libuuid.i686

  • Restarted the igateway process:

        cd $IGW_LOC
        ./S99igateway stop
        ./S99igateway start
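If the error persists, the loader itself can show which dependency is still unresolved (a hedged sketch: the library path is taken from the log entries above, and the exact file name or .so suffix on disk may differ):

```shell
# Any dependency ldd reports as "not found" will reproduce the
# "unable to load library" error in igateway.log.
lib=/opt/CA/SharedComponents/EmbeddedEntitlementsManager/lib/eiamSpindle
ldd "$lib" 2>/dev/null | grep 'not found' || echo "no unresolved dependencies reported"
```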


(source: EEM Web interface error)

Average Analyst License Usage?


The procedure is:

  • In order to capture the license usage you will have to run the command pdm_logstat -f lic_mgr.c 1000
  • The generic license usage message will be

         09/29 07:15:04.89 sdservername license_nxd 10216 TRACE lic_mgr.c 470 License acquired for MfBAAA@web:local; 8 2USD    licenses in use

  • From the command prompt of the server, navigate to the Service Desk log folder by running the commands below in sequence:

        nxcd
        cd log

  • From the same command prompt run the command

        type stdlog.* | find /i "2USD licenses in use" > license_usage.txt

  • Open license_usage.txt in TextPad or Notepad++ (TextPad is preferable).
  • Use the Block Select Mode feature to copy just the date and time column, top to bottom. Open Excel and paste it into the first column.
  • Use Block Select Mode again to select just the number of licenses, top to bottom, and copy them into the second column of the Excel sheet.
  • The Excel sheet will now have 2 columns: (1) Date and Time, (2) Number of Licenses.
  • Sort the 2nd column from highest to lowest.
  • This gives you the license counts in descending order, with date and time.
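On a Linux-based Service Desk server, the TextPad/Excel steps can be collapsed into a single extraction; a sketch using the sample log line above:

```shell
# Sample stdlog line from above; in practice pipe the output of
#   grep "licenses in use" stdlog.*
# through the same awk command.
line='09/29 07:15:04.89 sdservername license_nxd 10216 TRACE lic_mgr.c 470 License acquired for MfBAAA@web:local; 8 2USD    licenses in use'
# field 1 = date, field 2 = time, the field before "2USD" = licenses in use
echo "$line" | awk '{ for (i = 1; i <= NF; i++) if ($i == "2USD") print $1, $2, $(i-1) }'
# prints: 09/29 07:15:04.89 8
```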


(Average Analyst License Usage?)


CA CMDB: Cora Cleanup

Posted by Rafael Cardoso Oct 13, 2014


Coracleanup is an NT command-line utility that helps apply the latest CORA changes to the existing, registered assets in the database. It does not delete the assets from the product's tables, only from CORA; it then re-registers them by sending CORA a request to register the assets using the information from the product's tables. If an asset is a duplicate and CORA cannot register it, an error is logged and the asset is not registered. Coracleanup is distributed via Test Fix.


Command to check the version of CORA:



Below is the example output:


coraapi    May 17 2012    02:38:28
corajava   May 17 2012    02:39:46
coramss    May 17 2012    02:38:59
coraorcl   May 17 2012    02:38:42
corasql    May 17 2012    02:38:17
corasys    May 17 2012    02:38:04
corauuid   May 17 2012    02:37:56

Syntax to run coracleanup:

CORACLEANUP /DATABASE <mdb> /SERVER <servername> /SCOPE all /METHOD reinit /TYPE sqlServer /USERNAME <sqluser> /PASSWORD <sql password> /LOG full


For SDM product installs, use _SD.jcl


Supported scopes:

  • USD - Apply the method only to SDM
  • NSM - Apply the method only to NSM
  • DSM - Apply the method only to DSM
  • SPM - Apply the method only to SPM
  • ALL - Apply the method to all (USD, NSM, DSM, SPM) products
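For instance, a run limited to Service Desk against a SQL Server MDB might be built like this (SQLSRV01, sa and MyPassword are placeholders; the switches come from the syntax above):

```shell
# Hypothetical server and credentials; substitute your own values.
cmd='CORACLEANUP /DATABASE mdb /SERVER SQLSRV01 /SCOPE USD /METHOD reinit /TYPE sqlServer /USERNAME sa /PASSWORD MyPassword /LOG full'
echo "$cmd"
```

Run the resulting command from an NT command prompt on the Service Desk server.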


Does using the /SCOPE switch improve performance?

With CORA logging turned on, coracleanup processes about 100 assets per minute; with logging off, around 300 per minute, and depending on the existing data CORA can even reach 600 assets per minute. The /SCOPE switch only limits the number of assets that will be processed; it does not affect the processing rate.

Supported methods:


Reinit - this will delete all the assets from CORA that belong to the product specified in the scope and re-register all of them based on the information from the product's tables.


Delete - this will delete all the assets from CORA that belong to the product specified in the scope. There is no need to take a backup of the MDB: you might get errors during coracleanup, but no information will be lost, because only information that can be restored is deleted. Any errors that do occur are likely generated by the product table.

Example scenario: because of changes in CORA, no new asset should be created, but because of a bug a new one is created. After fixing that bug, you might get an error telling you that the asset you are trying to register is a duplicate. In this case you will have to investigate the product table manually to confirm whether this is an error on the CORA side or a duplicate in the product table that the old version of CORA did not detect.

Log files for coracleanup:

  • Coraout.log
  • Corautil.log
  • Coracleanup.log

These files should be available in the NX_ROOT\bin folder.

Troubleshooting Checklist:

1) Very often an SE will ask you to set CORA to debug mode and deliver the coraout.log file after the problem is reproduced.


Setting CORA to debug mode is a two-step process:


a) CORA logging is enabled by defining an environment variable called GC_CONFIG. The value of this variable should be the path to the logging configuration file CoraLog4cplus.cfg; by default this is NX_ROOT/bin.


  1. Take a backup of NX_ROOT\bin\coralog4cplus.cfg, then open it
  2. Change log4cplus.logger.Cora=ERROR, CoraAppender1
  to log4cplus.logger.Cora=DEBUG, CoraAppender1
  3. Save and close
  4. Recycle mdb_registration_nxd
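Step 2 can be done with a one-line sed edit; a sketch, shown here against the literal line (in practice run sed with -i.bak against NX_ROOT/bin/coralog4cplus.cfg so the backup from step 1 is kept):

```shell
# Flip the Cora logger from ERROR to DEBUG (step 2 above).
echo 'log4cplus.logger.Cora=ERROR, CoraAppender1' | sed 's/Cora=ERROR/Cora=DEBUG/'
# prints: log4cplus.logger.Cora=DEBUG, CoraAppender1
```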


The coraout.log file can also be found in the NX_ROOT/bin directory.


2) The most common error message in the coracleanup.log file is the following:


2012-01-19 17:23:13> Cora API returned error: 470200

2012-01-19 17:23:13> RegisterAsset - registration properties: !label!AD.READER.9.3.REL.2!asset_tag!Adobe Reader 9.3 Rel. 2!

2012-01-19 17:23:13> CheckApiReturnCode - CORA Error message: DuplicateError Reason: A subschema passed in the same properties with different source UUIDs. Old UUID: 04F5802985155D4A9FF1D922415C6507, New UUID: 0A77D7FFFB5A16418CC218F0BB5671D8 __FILE__ coraapi.cpp __LINE__ 5084 __DATE__ Mar 15 2010 __TIME__ 11:35:02


The duplicates need to be remediated. Coracleanup leaves an easy way to identify machines that were flagged as duplicates: the asset_source_id in ca_owned_resource is NULL for these assets. However, coracleanup processes assets in no particular order, so a NULL field alone does not identify which machine doesn't belong. It could be the asset you want to keep active, while the duplicate that happened to register first in coracleanup may be the one you want to make inactive.


Once we have the name of a duplicated asset, pick the one you want to keep as the healthy asset. The one you will make inactive should NOT have a federated asset association. Once this is confirmed, make it inactive if it isn't already. Now update its attributes to make it unique. For example, you can take the label, add _duplicate, and copy the result into all the other fields: hostname, dnsname, macaddress, serialnumber and alt ci id. This identifies the machine well and also ensures the asset will be unique. Once the dupes are remediated, run coracleanup again and make sure no errors appear.


Some SQL queries:


With this query you can identify which assets have a NULL source UUID:


SELECT * FROM ca_owned_resource WHERE asset_source_uuid IS NULL AND asset_type_id = 1

(Here asset_type_id = 1 means the asset is hardware; CORA handles only hardware, not software.)


The following query can be run in SQL Query Analyzer (with the 'mdb' database selected) to check whether the information in CORA is correct:


select asset_source_uuid, ca_logical_asset.logical_asset_uuid,
       ca_asset.asset_uuid, host_name, serial_number, label, asset_tag, dns_name,
       mac_address
from ca_asset_source, ca_logical_asset, ca_asset, ca_logical_asset_property
where host_name like '<HOST NAME>'
  and ca_asset.asset_uuid = ca_logical_asset.asset_uuid
  and ca_logical_asset.logical_asset_uuid = ca_asset_source.logical_asset_uuid
  and ca_logical_asset.logical_asset_uuid = ca_logical_asset_property.logical_asset_uuid


The above query lists all the attributes of the asset per the WHERE clause. The WHERE clause can be changed to check other properties such as serial_number, asset_tag, dns_name, mac_address or label.
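The same lookup keyed on serial number instead of host name can be sketched like this ('ABC123' is a hypothetical value; the table and column names follow the query above, and the final join condition is an assumption based on the tables involved):

```shell
# Hypothetical serial number; substitute a real one.
serial='ABC123'
query="select asset_source_uuid, ca_logical_asset.logical_asset_uuid,
  ca_asset.asset_uuid, host_name, serial_number, label, asset_tag, dns_name, mac_address
from ca_asset_source, ca_logical_asset, ca_asset, ca_logical_asset_property
where serial_number like '${serial}'
  and ca_asset.asset_uuid = ca_logical_asset.asset_uuid
  and ca_logical_asset.logical_asset_uuid = ca_asset_source.logical_asset_uuid
  and ca_logical_asset.logical_asset_uuid = ca_logical_asset_property.logical_asset_uuid"
echo "$query"
```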


3) Known SQL Server 2008 error

If the DBMS is SQL Server 2008, coracleanup will not be able to connect to the database. To bypass this problem and be able to run it, please open a support issue to get the registry files that need to be updated.


The procedure is described in the following TEC:

How to import servers from a file?
Document ID: TEC496185


You can also check the directory "C:\Program Files\CA\CCA Server\doc\samples"; it contains a file with all the information you need to upload servers from a file.





  I've found an error during the installation of a new orchestrator.


To clear up:

  When you install PAM for the first time, it creates a certificate with a password you provide. The documentation makes no reference to special characters, but there is an issue when your password contains one.

  By default, an installation provides one orchestrator. If you want to install another, you need to follow Configuration > Installation > Install Orchestrator. One step of the wizard asks for the certificate password. HERE IS THE PROBLEM: if your password has special characters, you need to follow the steps below.

  Let's assume we have set the following certificate password: "benfica123%#7".

  At this point, you could enter the right password 12,309 times, but the result would always be the same: "The password you entered does not match.."


  This is due to a servlet encoding problem. With Wireshark, we could easily figure out the problem:

    As you can see, the wizard only assumed "benfica%" instead of "benfica123%#7".

  The workaround/solution here is to replace the # with %23.

  You can get the right codes here:

  If you replace the value according to the table, the password will match the one set on the certificate.
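  The replacement can be scripted; a minimal sketch using the sample password from above (only the # is encoded, per the workaround):

```shell
# '#' becomes its percent-encoding %23; the rest of the password is unchanged.
printf '%s\n' 'benfica123%#7' | sed 's/#/%23/g'
# prints: benfica123%%237
```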



  Thanks and best regards,

  PAM uses JBoss and, consequently, JAR files. It uses a library called the JavaMail API, which provides access to an SMTP server.


  Edit and add the following lines:

  - To use simple login:





  By default, PAM considers all mechanisms. If set, this property lists the authentication mechanisms to consider and the order in which to consider them. Only mechanisms supported by both the server and the current implementation will be used. The default is "LOGIN PLAIN DIGEST-MD5 NTLM", which includes all the authentication mechanisms supported by the current implementation.
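  As an illustration, restricting the JavaMail session to simple LOGIN authentication could look like this (a hedged sketch: the mail.smtp.* property names are standard JavaMail, but the file path is a stand-in; PAM's actual mail configuration location may differ):

```shell
# Stand-in path; PAM's actual mail configuration file may live elsewhere.
cat > /tmp/mail.properties <<'EOF'
mail.smtp.auth=true
mail.smtp.auth.mechanisms=LOGIN
EOF
cat /tmp/mail.properties
```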


  For GMail, please check the TLS configuration.

  Hope this helps in something.

  Thanks for your help anyway.

Anyone with experience in IT support knows the importance of knowledge in reducing resolution time. Anyone with math skills can extrapolate business value from rapid resolution. Despite its obvious benefits, Knowledge Management (KM) remains a frustration for a vast majority of enterprises. Why?

Because organizations continue to approach KM as a monolithic publication effort with ancillary inputs from Incident and Problem Management.

By combining principles from Knowledge Centered Support (KCS) with the ITIL framework and a few basic workflows, an enterprise with the right cultural mindset can make KM work with far less effort.


The Objective of IT Knowledge Management

IT’s need for Knowledge Management is not complex. ITIL lists five objectives for KM, and KCS boils it down to the “creation, use and evolution of knowledge”; this article, because of its focus on IT support, is more specific:

The objective of IT Knowledge Management is to create, maintain and make available concise and actionable information to users and IT support groups in order to resolve service disruptions quickly and respond to customer queries satisfactorily. 

The challenge is to collect, maintain, and make that knowledge accessible.


Flailing and Failing

How well are IT organizations managing their knowledge? Do support agents have rapid access to current knowledge for a vast majority of customer contacts?

Does the enterprise require waves of knowledge initiatives to address a stagnant knowledge lifecycle? Is there a group of knowledge authors scrambling to review and update solutions?  Are stale solutions common?

Gone are the days of separate knowledge applications run by a core team of authors. The monolithic approach to KM works no better today than it did 10 years ago but many organizations continue to flail about in an attempt to write the definitive support encyclopedia.

For organizations to achieve the objectives of KM, they must move toward distributed, open-source authorship.

If solution content originates at the point of problem support, where should authorship take place? This past weekend, I spent hours on the phone with a satellite TV provider trying to fix a problem on a secondary satellite receiver. After two hours, I noticed that the coax cable had a little red device on it and mentioned it to the support agent. “Oh my Gosh!”, she cried. “That device blocks the receiver from communicating with the parent receiver.  The instructions should have had me check that right away”. When I asked how hard it was to update the solution, she replied that she was already doing it. This is how to make KM effective.

One must drive content to the lowest possible level and implement a flexible, role-based approval mechanism that deploys the updated solution with minimal fuss.

Knowledge Management is Integral, Not Additional

Most organizations have implemented one or more repositories of “solutions” and most of those organizations struggle to encourage adoption by users and authors. The ineffectiveness of Knowledge Management derives from just a few basic misunderstandings:

  1. Centralized authorship simply does not work.
  2. If we look at Incident and Problem Management as recipes, Knowledge Management must be a primary ingredient rather than a garnish.
  3. Because the Service Desk is the face of IT and depends so heavily on an effective Knowledge Base, agent input must be dynamic and empowered.
  4. The Knowledge Management workflow must be flexible to support distributed authorship and review.
  5. Authorship and article utilization deserve meaningful rewards as an incentive for adoption.

To address these issues, one needs to employ a mixture of wisdom from several sources.  There are a number of standards for Knowledge Management.

So Many Knowledge Management Standards

Knowledge Management does not make standardization easy.  While this article discusses IT Knowledge Management, a standard cannot ignore the management of content and documents across the enterprise.  In general, the standards with broader scope will offer less prescriptive guidance for IT managers.

ITIL (ISO/IEC 20000) – ITIL’s approach to Knowledge Management is academic.  Though the inputs from Problem Management and Incident Management are clearly defined, ITIL is tentative in demanding the required participation and provides scant guidance in establishing a workflow.

Knowledge Centered Support – though not an official standard, KCS is comprehensive and its approach maps best to the real world.  KCS emphasizes that KM must be incorporated into the process flows of both Incident and Problem Management.  This paper draws heavily on KCS.

Other Standards

Though this article focuses on ITIL and KCS, there are other standards worthy of mention:

Standards Australia International is Australia’s non-government standards body and is a member of the International Organization for Standardization (ISO). The SAI publishes “AS 5037-2005 Knowledge Management – A Guide”.

European Committee for Standardization has published standard “CWA-14924” in five parts.  Part 1 lays out a framework and basic KM steps (Identify, Create, Store, Share, Use) but is weak on workflow guidance. There is considerable guidance on project management.

British Standards Institute publishes “PAS2001:2001 Knowledge Management”, a high-level document with limited value for process design and implementation.


Though ITIL is weak in Knowledge Management guidance, the overall framework encourages integration. As the document “KCS Overview” states, “KCS is not something a support organization does in addition to solving problems, KCS becomes the process for solving problems.”  While ITIL talks about inputs and outputs, KCS incorporates Knowledge Management into the processes used for solving problems. When organizations “implement” ITIL, Knowledge Management is often a separate implementation driven by the Service Desk.  As for Incident and Problem Management, the processes and tools may allow integration but typically act as feeds to the monolithic Knowledge Management team.

Because the typical implementation of Knowledge Management relies heavily on one or a few core teams of authors to generate content, the process flow includes numerous points of review and approval. Each point represents a bottleneck.

When Knowledge Management drives rather than follows the problem resolution process, it transforms itself and its dependent processes into an elegantly simple and self-sustaining engine for efficiency.

Below is the simplified workflow for solution creation (with KCS states noted):


Figure 1: Knowledge Article Creation

This simplified flow relies heavily on issue responders (i.e. Service Desk, technical support) to initiate and update the “solution”.  For this to succeed, the tools and processes of the responders must efficiently enable contribution. Furthermore, the organization must meaningfully reward such contribution.

This approach is in stark contrast to the monolithic Knowledge Management group where a small number of “authors” provide solutions to issue responders. One need only tour the Service Desk of such an organization to gauge the success of such an effort. Support personnel maintain their own notes with yellow stickies, notebooks, and heterogeneous repositories. Hop rates (call transfers) are high. FCR (first contact resolution) is low. Customer satisfaction suffers.

Knowledge Creation Baked into Incident and Problem Management

In the “Knowledge Article Creation” diagram, steps 4 and 5 are pivotal.  Within these steps, the agent must have a quick way to create or update solutions. A single tool should allow the agent to respond to calls, create incident records, search knowledge solutions and update those knowledge solutions. The approval process should be simple while allowing for variation of depth.


Figure 2: Service Desk Role in KM

In figure 2, many organizations are concerned that step 8 (Document Solution) will encumber the responder, thereby increasing service costs. In the absence of prudent guidelines, such concern is well founded. One can address this concern by limiting input in step 8 to simple solutions and updates. Anything more should be deferred to a sub-process for Solution Review (step 10). Step 10 can be distributed across numerous organizational units, allowing the responder’s department to update the solutions upon which they depend. Basically, step 8 only works if the workflow and toolset enable the responder to complete the task very quickly.

Solution Creation Reward System

Rewards, an important contributor to Knowledge Management success, are based on Key Performance Indicators such as those listed below:

  • Most articles created/updated in past 30 days.
  • Highest average user rating in past 30 days.
  • Total articles deemed “useful” by at least 90% of users in past year.
  • Most articles used to solve a problem in past 30 days.

For a reward to have meaning, it must be deemed of high value; this does not mean it must be expensive. Recognition is a major component of any reward, but the organization can also budget for gifts such as gift certificates, a parking space, cash, merchandise, a CIO lunch, a group outing, etc. Be creative and make it desirable for employees.

Solution Creation and the Challenge of Outsourcing

By asking support personnel to create and update solutions, some organizations introduce a conflict. When an enterprise measures the value of an agent by call volume, what incentive does the agent have to take the time to produce solutions?

There are three parts to this answer:

  • First, it may be necessary, especially for service desk agents, to limit the time spent on each solution.
  • Second, the organization can use both call volume and solution updates as measures.
  • Third, keep the solutions simple.  The “KCS Practices Guide” provides excellent guidelines on article composition. More importantly, the KCS approach relies on both “Solve” and “Evolve” to maintain article health. Thus, an agent can start the article lifecycle with a quick but readable note and later, others can enhance the article with updates.

Figure 3: Consortium for Service Innovation

Let’s look at two examples:

  1. Quick Solution Update – an agent deals with an incident where the solution is correct but the steps are slightly out of order. Without delaying the resolution for the customer, the agent has already begun the update. The call ends and the agent spends less than five minutes to complete the update. Next call.
  2. New Solution – an agent cannot find a knowledge solution for the incident but is able to resolve it with quick input from another source. Recognizing that the issue is likely to recur, the agent takes five additional minutes (after the call) to document the steps taken during the call and submit the solution for review. If the solution is incomplete, the reviewer can prevail upon the agent or another SME to enhance it. For the agent, such work would have no impact on call volume measurement.

Solution creation becomes more complex when suppliers are involved. Although the execution remains under supplier control, the client company should provide contractual incentives (and penalties) for knowledge participation based on KPIs. From past experience, it would be prudent to measure both knowledge contribution and knowledge quality while also reviewing the supplier’s workflow to ensure capability. This arrangement often requires an additional approval mechanism at the supplier level.