Migrating applications is actually an onboarding exercise. You’ll need to define the resources that are protected for an application, assign agents or CA Access Gateway proxy rules, determine a login method for the applications and access rules, and map those rules to user roles and/or groups. Sometimes interim policies or login methods are employed during the parallel phase and disabled or discarded once the end state has been achieved.

 

Bill of Materials

For each application to be migrated, a bill of materials needs to be created to onboard the application to the new security infrastructure; for example:

 

  • Use cases to be covered for each application
  • Inventory of protected resources (URLs)
  • Inventory of current web agents involved
  • Authentication methods needed
  • Step-up flows
  • Authorization policies by URL grouping (rule groups); for a large number of applications, these can reach into the hundreds per group
  • Response header requirements by rule grouping (response groups); some applications may receive more headers than they need, but this makes operations much faster
  • Admin users and roles/use cases
  • Federation requirements
  • Session timeout requirements
  • Session store and session customizations; e.g., storage and retrieval of session-specific data beyond the standard supported elements, or user store schema elements used for a session
  • Session assurance feature, if needed, and which URLs will need it

Once the bill of materials is created, mapping to configurations needs to occur to accomplish migration of functionality specific to the target application(s).

 

Prioritization of applications to be migrated can be based on a number of factors. To mitigate risk, migration can begin with simpler low-profile applications and graduate to more complex higher-profile applications.

 

In all strategies, URL schemes are critical criteria for isolating change scope. Common approaches include:

Migrating application by application:

  • Select application(s) protected by a set/cluster of related web servers.
  • Limit scope further by migrating one set of URL schemes.
  • Partitioning by domain name might be possible based on URL/domain hosting strategies used in current applications.

The fallback strategy becomes easier if web server/CA SSO agents are deployed in parallel and a Layer 4 load balancer is used to graduate applications into production.

 

Migrating by business unit:

  • All applications servicing a business unit may be migrated in one phase.
  • Business units with smaller footprints could be targeted first.

 

Migrating by user-specific communities (e.g., new net benefits users, technical users or accounting users):

  • Migrating smaller audiences at one time contains the change impact to a smaller portion of the user population.

 

For all strategies, change management plans that cover business processes, technology, communications, operations and training of involved constituencies are required for optimal adoption.

 

Identifying Pilot Applications

Identifying the pilot applications to be on-boarded is crucial to ensuring a quick win and laying the foundation for successful long-term expansion of the solution.

 

The two most common approaches for a pilot are:

  • User communities: Build a solid single sign-on infrastructure and slowly expose the infrastructure to a single user community. Historically, the most common starting point has been internal users, then expanding to partners and finally customers/Internet users. 
  • Key web application/site: A key revenue-generating or cost-saving initiative, such as an e-commerce application, partner extranet, or employee intranet, often drives the need for a web single sign-on solution. Starting here can be effective, as it accomplishes a key business initiative while establishing technical requirements and a central security framework for the larger implementation. 

 

If you are interested in exploring CA SSO implementation options, contact Amin.PashapourAlamdary@ca.com.

And watch for my next post, Six Considerations for Accelerating a CA SSO Implementation.

Now that we have evaluated the existing solution, mapped requirements and performed a gap analysis as described in my first post, Factors to Consider When Pondering a New Single Sign-On Solution, you’ll want to make sure that the solution has the best possible chance of adoption—and success. This post shares some of the leading practices for SSO migrations.

Here’s a simplified reference architecture to build out the new solution:

 

  

With the latest version of CA SSO, administration tasks can be performed using REST API calls. New capabilities of CA SSO also include REST API-based authentication and authorization, as well as acting as an OpenID Provider.
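Purely as a sketch of the call pattern (the base path, resource name, and authorization header below are placeholders, not the documented CA SSO REST endpoints; consult the REST API reference for your release), an administrative call from Java might look like this:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class AdminRestSketch {
        public static void main(String[] args) throws Exception {
            // args[0]: base URL of the REST interface (placeholder, e.g. "https://host:8443/your-rest-base")
            // args[1]: session/auth token previously obtained from the documented login endpoint
            String baseUrl = args[0];
            String token = args[1];

            // "/domains" is a hypothetical resource used only to illustrate the call pattern.
            HttpURLConnection conn = (HttpURLConnection) new URL(baseUrl + "/domains").openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Authorization", "Bearer " + token);
            conn.setRequestProperty("Accept", "application/json");

            // Print the HTTP status and the raw JSON response body.
            System.out.println("HTTP " + conn.getResponseCode());
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            } finally {
                in.close();
            }
        }
    }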

 

Develop Co-Existence Strategy

 

Almost all enterprises have some form of Single Sign-On services. Will you need to support single sign-on in the interim between the “legacy” solution and CA SSO? The answer to this question will determine how the two worlds will co-exist during the parallel migration period.

 

If there is no need for session sharing between the two solutions, users can log in to each environment independently and use applications protected by each solution; however, users will need to authenticate multiple times—a less than optimum experience. It’s not surprising that this option is rarely chosen.

 

The more likely scenario is some form of single sign-on while applications are being migrated to CA SSO.  This could take one of a few forms: federation using SAML, dual protection, or single protection. 

 

Co-Existence Methods

Federation with SAML

One of the environments acts as the identity provider while the other is the service provider. To minimize changes to the legacy environment, it typically acts as the identity provider.

 

  • Applications are protected by only one solution—legacy or CA SSO.
  • Apps protected by CA SSO use a SAML authentication method to leverage the legacy session or login methods.
  • If a user spends a long time in one legacy session, the other session may expire due to inactivity, forcing a login.
  • Once all applications are migrated to CA SSO, login methods can be changed to non-SAML.

Dual Protection

Both the legacy and CA SSO environments protect all applications. This usually means the legacy agent/WebGate and the CA SSO agent/CA Access Gateway are configured to protect each application. Initially, the CA SSO policies are very basic and enforce SMSESSION (CA SSO session management cookie) validation and refreshes. The legacy policies enforce access control.

 

  • Login processes are customized to generate legacy and CA SSO sessions, such as use of login servers with redirects.
  • Policies equivalent to legacy policies are migrated to CA SSO.
  • Once all policies exist in CA SSO, legacy agents can be removed. Login can switch over to CA SSO login methods.
  • Both sessions are maintained during application access.

Single Protection

The legacy and CA SSO environments protect different sets of applications. This usually means that an application is protected by the legacy agent/WebGate or the CA SSO agent/CA Access Gateway, not both. 

  • Login is customized to generate both legacy and CA SSO sessions.
  • Session timeouts create challenges if a user spends a long time in one session. The other session may expire due to inactivity and force a login.

 

The method(s) you choose will depend on the capabilities of the legacy environment.  If there is no federation support, SAML or OpenID methods are not an option. Understanding the requirements will help make some of these decisions easier.

 

If you are interested in exploring CA SSO implementation options, contact Amin.PashapourAlamdary@ca.com.

And watch for my next post, where I will discuss optimizing your CA SSO migration.

Issue

Changes made to login/password forms (login.fcc, smpwservices.fcc, etc.) are not reflected immediately in the browser.

Environment

Web Agent : r12.5 and above

Cause

Several layers of caching can cause this behavior: the web agent form cache, web server caching, or the browser cache. See the Resolution section below.

Resolution

  • FCC forms are cached by default by the web agent to improve performance. This caching can be disabled by setting the ACO parameter EnableFormCache=No.
  • Refer to the web server documentation to disable any web server-specific cache settings. For example, in the case of IIS you could consider disabling output caching (user cache/kernel cache). For IIS it is also recommended to set the ACO parameter IISCacheDisable=yes.
  • Clear the browser cache by deleting the browsing history, or append a random query string to ensure the browser does not serve the page from its local cache (e.g., try accessing login.fcc?1=1, login.fcc?2=2, and so on).
  • To ensure that you are modifying the correct FCC page, enable the web agent trace log and check the location from which the forms are loaded. In the example below, the web agent loaded the localized FCC form from the forms_en-US directory rather than the default forms directory:

 

[08/23/2017][10:12:03][5052][2620][CSmFormTemplateCache.cpp:196][CSmFormTemplateCache::GetForm][][][][][][][Serving form template 'C:\Program Files\CA\webagent\win64\samples/forms_en-US/login_en-US.fcc' from cache.]
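If you want to pull just those lines out of a large trace file, a small helper like the following will do (the trace log path is an assumption; pass the path of your web agent trace log):

    import java.io.BufferedReader;
    import java.io.FileReader;

    public class FccFormTraceScan {
        public static void main(String[] args) throws Exception {
            // Pass the path of your web agent trace log as the first argument.
            String traceLog = args[0];
            BufferedReader in = new BufferedReader(new FileReader(traceLog));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    // CSmFormTemplateCache logs which .fcc template file is being served.
                    if (line.contains("Serving form template")) {
                        System.out.println(line);
                    }
                }
            } finally {
                in.close();
            }
        }
    }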

 

The speed with which the SMPolicyTraceAnalysis tool runs through trace logs has been improved. The (long) time it takes to process large logs has been on my mind for quite a while, and a cluster of recent cases with fairly large logs tipped me into looking at it.

 

New version 670 : now downloadable from: 

Siteminder Policy Trace Analysis 

 

The main speed advantage comes from taking advantage of the larger memory available to the 64-bit JVM and holding all the counters and open transactions in memory. Previously, for the 32-bit JVM, I had to keep both counters and open transactions in temporary disk files.

 

There are now separate threads for reading the trace data and for processing it. I tried a pool of threads to process the counters and parse input lines, but that was slightly slower than just two threads (I suspect there is a bottleneck somewhere and more threads could work, but for now the improvement was enough).
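For anyone curious about the shape of that change, here is a minimal producer/consumer sketch of the two-thread idea (illustrative only, not the actual SMPolicyTraceAnalysis source):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class TwoThreadSketch {
        private static final String EOF = "<<EOF>>"; // poison pill to stop the processor

        public static void main(String[] args) throws Exception {
            final String traceFile = args[0];
            final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(10000);

            // Reader thread: pulls raw lines off disk as fast as the disk allows.
            Thread reader = new Thread(new Runnable() {
                public void run() {
                    try {
                        BufferedReader in = new BufferedReader(new FileReader(traceFile));
                        try {
                            String line;
                            while ((line = in.readLine()) != null) {
                                queue.put(line);
                            }
                            queue.put(EOF);
                        } finally {
                            in.close();
                        }
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            });

            // Processor thread: parses lines and updates in-memory counters.
            Thread processor = new Thread(new Runnable() {
                public void run() {
                    long count = 0;
                    try {
                        String line;
                        while (!(line = queue.take()).equals(EOF)) {
                            count++; // a real parser would update counters/open transactions here
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("Read and processed " + count + " lines");
                }
            });

            reader.start();
            processor.start();
            reader.join();
            processor.join();
        }
    }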

 

I've only used options available in Java 7. I did try some of the Java 8 pool features, but they did not add any speed, so those are currently commented out.

 

I did want to store the results in a database, to be able to incrementally add more trace logs, to make it easier to regenerate reports with, say, different graphs, and to be able to store processed traces from different (say, Policy) servers to allow plotting data from multiple machines. But again this was slower, so that is on hold for now.

 

 

Here are the improvements I saw on my test systems:

 

Running on Windows: 

5.4 GB of logs took 3+ hours with the older version and now takes less than 20 minutes.

(Windows 7 on a Dell Precision M4800 laptop with 32 GB of memory and a 256 GB SSD)

 

Running on Linux:

2 GB of logs took 44 minutes with the older version and now takes 4 minutes.

(Mint Linux 18 on a Dell Precision T7500 with 47 GB of memory and a fairly normal 7200 RPM HDD)

 

 

 

The JVM Options: 

The run.bat/run.sh now has the following options: 

java -XX:+UseLargePages -XX:+UseFastAccessorMethods -XX:+UseParallelGC -jar SMPolicyTraceAnalysis.jar 

 

Occasionally some users have had trouble with -XX:+UseLargePages, which may require specific permissions on Windows 7 machines. Removing that option will slow things down a small amount, and you will still get most of the benefit.

 

Memory Usage:

The above runs show the JVM sitting at around 5 GB of memory usage. The way the JVM works under load, however, is that it will use memory up to the point where the garbage collector is triggered. The exact memory required depends on how long your transactions are, how many are open at once, and how long the counters need to count, but the application will generally run quite easily in 3-4 GB of memory.
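If you want to put an explicit ceiling on the heap, a standard -Xmx setting can be added to the same command used in run.bat/run.sh, for example:

java -Xmx4g -XX:+UseLargePages -XX:+UseFastAccessorMethods -XX:+UseParallelGC -jar SMPolicyTraceAnalysis.jar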

 

More JVM Options:

There are more JVM optimisations that could be applied, particularly for Java 8, if you want to experiment:

http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html

http://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html

 

 

New version 670 : now downloadable from: 

Siteminder Policy Trace Analysis 

 

Cheers - Mark

----
Mark O'Donohue
Snr Principal Support Engineer - Global Customer Success

The Policy Server and Agents utilize various encryption keys to encrypt sensitive data stored in, and passed between, CA SSO components in a CA Single Sign-On environment.

 

The following diagram gives an overview of the various encryption keys used in a CA SSO environment.

What is the purpose of the various encryption keys used by CA SSO?

 

Policy Store Key

The policy store key is used to encrypt:

  • Sensitive data stored in the policy store (e.g., LDAP bind credentials, trusted host shared secrets, etc.)
  • Sensitive data stored via the Policy Server Management Console (e.g., policy store/audit store/session store credentials)
  • Key store data when the policy store and key store are collocated.

 

Key Store Key

The key store key is used to encrypt the agent keys and session ticket keys stored in the key store when the policy store and key store are not collocated.

When a separate key store is not configured, the key store key is the same as the policy store key.

 

Agent Keys

  • Agent keys are used to encrypt/decrypt CA Single Sign-On cookies sent to the browser
  • Agent keys are managed by the Policy Server, and distributed to agents periodically.

 

Session Keys

Session keys are used to encrypt:

  • Data sent between the Policy Server and Web Agents
  • Data sent between the Policy Server and the Administrative UI

 

Session Ticket/Persistent Keys

  • Session ticket keys are used by the Policy Server to encrypt session tickets (specs). Session tickets contain user credentials and other information relating to a session. Agents embed session tickets in CA Single Sign-On cookies, but cannot access their contents, since the session ticket keys never leave the Policy Server.
  • CA SSO also provides user tracking so that a site can remember that a user had a valid session with the site some time ago. If user tracking is enabled, an Identity Spec is generated every time a new Session Spec is generated. The Identity Spec is also encrypted with the session ticket key, known only to the Policy Server, and passed back to the Web Agent.
  • Session ticket keys are also used to encrypt the password services data (blob) stored in the user store.

 

More about Session Ticket Keys here :

https://communities.ca.com/community/ca-security/ca-single-sign-on/blog/2016/09/02/tech-tip-ca-single-sign-onpolicy-serverpersistent-keysession-ticket-key-introduced

 

What is the impact of resetting the Persistent Key/Session Ticket Key?

Resetting the persistent key has the following impacts:

  • Existing logged-in user sessions will no longer be valid. Users will have to log in again to establish a new session.
  • Existing password blobs will no longer be valid, which means all the information related to password changes, login tracking, etc. is lost.

 

Policy server Host Key

A built-in static (hard coded) key stored in Policy server binaries.

It is used to encrypt/decrypt data stored in EncryptionKey.txt.

 

Web agent Host Key

A built-in static (hard coded) key stored in Web agent binaries.

It is used to encrypt/decrypt the shared secret stored in SmHost.conf, along with the hostId of the machine.

 

Shared Secret

The shared secret is used to mutually authenticate the Agent and the Policy Server and to distribute the session keys from the Policy Server to the Agent.

 

What is stored in EncryptionKey.txt?

The policy store key is stored as an encrypted string in the EncryptionKey.txt file.

 

Here is what happens during the policy server installation :

 

 

During Policy server startup :

 

Policy store key is stored in Policy server memory.

 

How is the shared secret for a Trusted Host created and stored?

When a Trusted Host is registered, a shared secret is auto-generated. This auto-generated shared secret is then encrypted using the combination of the Web Agent host key and the hostId of the machine, and stored in the SmHost.conf file on the web agent side.

On the Policy Server side, the corresponding shared secret for the Trusted Host is encrypted with the policy store key and stored in the policy store.

If these shared secrets do not match, the agent-to-Policy-Server handshake fails.
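For reference, trusted host registration is typically performed with the smreghost utility shipped with the agent; a typical invocation looks like the following (all values are placeholders for your environment):

smreghost -i <policy_server_host:port> -u <admin_name> -p <admin_password> -hn <trusted_host_name> -hc <host_config_object> -f <agent_config_path>/SmHost.conf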

 

How is the communication channel between Policy server & Web Agent protected ?

CA SSO uses RC4 encryption with randomly generated 128-bit session keys to encrypt data sent over a TCP connection between a Policy Server and an Agent. The shared secret is first used to mutually authenticate the Agent. Once the handshake (authentication) is complete, the Policy Server distributes the session keys to the Agent.
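Purely to illustrate the primitive involved (this is not CA SSO's internal implementation, just a generic Java example of a randomly generated 128-bit RC4 key):

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class Rc4Illustration {
        public static void main(String[] args) throws Exception {
            // Generate a random 128-bit (16-byte) key, analogous to a per-connection session key.
            byte[] keyBytes = new byte[16];
            new SecureRandom().nextBytes(keyBytes);
            SecretKeySpec key = new SecretKeySpec(keyBytes, "RC4");

            // RC4 is registered as "ARCFOUR" in the JCE, with "RC4" as an alias.
            Cipher rc4 = Cipher.getInstance("RC4");
            rc4.init(Cipher.ENCRYPT_MODE, key);
            byte[] cipherText = rc4.doFinal("sample payload".getBytes("UTF-8"));

            System.out.println("Ciphertext length: " + cipherText.length + " bytes");
        }
    }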

 

Encryption of CA SSO Cookies

There are three major types of cookies that may be sent with every request (other cookies, such as FORMCRED, are used only during authentication):

  • SMSESSION - contains the session ticket; always present; encrypted with either static or dynamic agent keys depending on configuration
  • SMIDENTITY - identity cookie; present only if the "User Tracking" option is configured; always encrypted with static agent keys
  • SMDATA - cookie that keeps user credentials; present only if the "Allow Form Authentication Scheme to Save Credentials" option is configured; always encrypted with static agent keys

 

How and where is the Key store key stored while using separate key store ?

If a key store is configured as a separate store from the policy store, then the key store encryption key can be set manually from the Policy server management console --> Keys tab :

 

The provided value is then encrypted with the Policy store key and stored in the CA SSO registry as below :

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Netegrity\SiteMinder\CurrentVersion\ObjectStore]
"KeyStoreEncryptionKey"="{RC2}DgMpaQDh5tmGMBMnQD+AfA=="

 

What are the various FIPS modes supported by CA SSO?

CA SSO can be configured to operate in three FIPS modes:

  • Compat Mode - reads both FIPS and non-FIPS keys; always writes non-FIPS keys
  • Migration Mode - reads both FIPS and non-FIPS keys; always generates FIPS keys
  • FIPS Only Mode - reads/writes only FIPS keys

 
While operating in Compat Mode, it uses the RC4 128-bit cipher (session keys) to encrypt traffic between the Policy Server and the Web Agent.
While operating in Migration Mode or FIPS Only Mode, it uses the AES 128-bit cipher to encrypt traffic between the Policy Server and the Web Agent.
 
CA SSO embeds RSA's Crypto-C ME v2.0 cryptographic library, which has been validated as meeting the FIPS 140-2 Security Requirements for Cryptographic Modules. The validation certificate number for this module is 608. CA SSO's Java-based APIs use a FIPS-compliant version of the Crypto-J cryptographic library.
 
In FIPS Only Mode or Migration Mode, CA SSO uses the following algorithms:

  • AES Key Wrap for key encryption
  • AES in OFB mode (HMAC-SHA 256) for channel encryption
  • AES in CBC mode (HMAC-SHA 224) for encrypting the tokens used to facilitate single sign-on

 

Types of Agent Keys

The Policy Server provides the following two types of Agent keys:

  1. Dynamic
    • Generated by the Policy Server and distributed to connected Policy Servers and any associated Web Agents
    • Can be rolled over at a regular interval, or manually by using the Key Management feature in the UI
  2. Static
    • Remain the same indefinitely
    • Can be generated or entered manually. When a manual value is specified, the static agent key is derived from the value provided; it will not be the exact same string
    • Can be rolled over using the UI

 

There are four different agent keys in total:

  • (Key Marker 1) An Old Key is a Dynamic key that contains the last value used for the Agent key before the current value.
  • (Key Marker 2) A Current Key is a Dynamic key that contains the value of the current Agent key.
  • (Key Marker 3) A Future Key is a Dynamic key that contains the next value that will be used as the Current key in an Agent key rollover.
  • (Key Marker 4) Static Key

 

Agent keys can be exported by running the following command:

smkeyexport -d<admin> -w<password> -okeys.txt

 

Sample Key export file :

 

#! SiteMinder Version 12.52
# Export Flags: Encrypted, Export Keys, Export Key store data.
objectclass: Root
Oid: 0a-00000000-0000-0000-0000-000000000000
AgentTypes:
Schemes:
Agents:
AgentGroups:
UserDirectories:
Domains:
Admins:
AuthAzMaps:
CertMaps:
SelfRegs:
ODBCQueries:
PasswordPolicies:
KeyManagement: 1a-fa347804-9d33-11d3-8025-006008aaae5b
AgentKeys: 1b-a0b79a43-eca3-4090-9082-4a30604fd108, 1b-912d25e3-26c0-4103-9c08-397733217fee, 1b-87336696-825f-4c71-b919-2bf2cb61578f, 1b-68860af4-01ff-4227-a034-795df2b93c99
RootConfig:
VariableTypes:
PropertyCollections:
TaggedStrings:
TrustedHosts:
IMSDirectories:
IMSEnvironments:
IMSOptionLists:
SharedSecretPolicy:
IMS6Directories:
IMS6Environments:
objectclass: KeyManagement
Oid: 1a-fa347804-9d33-11d3-8025-006008aaae5b
IsEnabled: true
ChangeFrequency: 2
ChangeValue: 0
NewKeyTime: 1502334000
OldKeyTime: 1502248172
FireHour: 3
PersistentKey: {RC2}VDPKLgZZDJ3mEjM3WzphnvBt2GCIQrNqa6TR174l279K6QLPC0dhZRlPNLvCp/A/
objectclass: AgentKey
Oid: 1b-a0b79a43-eca3-4090-9082-4a30604fd108
KeyMarker: 1
Key: {RC2}r0T7TDNWvPME3VOr6b+43YJjULngsqGsHcMBxsVnuk09Ijh7jsPe5+4xs/OccTvx
objectclass: AgentKey
Oid: 1b-912d25e3-26c0-4103-9c08-397733217fee
KeyMarker: 3
Key: {RC2}6XtDG3PqFgJ5t5JCXiy0S2Ohc6eIv5sNr6Pi06JfXR/hGfyJbvTUtnGfKcacX3kc
objectclass: AgentKey
Oid: 1b-87336696-825f-4c71-b919-2bf2cb61578f
KeyMarker: 2
Key: {RC2}ZDUJcusH5LBcutHqWdMNTxoL78LpXsRQ4OdLeZRIyXwJAzWZckh9H2uXxi9svAFX
objectclass: AgentKey
Oid: 1b-68860af4-01ff-4227-a034-795df2b93c99
KeyMarker: 4
Key: {RC2}5fIwrHQHpgb4ycaZcvYNmAQ2mY4PCgADZW3GMzlyxvUsF8F5nN1h0gEd9rOpNJmm

 

Note:

  • If the clear-text export option (-c) is not used, the exported (encrypted) agent key values will appear different on every export, even when the underlying keys have not changed.
  • The leading {RC2} or {AES} string indicates that the keys are encrypted.

 

How to Configure Agent Keys & the Session Ticket Key When Using Multiple Policy Stores with a Separate Key Store

 

If a network configuration is composed of multiple Policy Servers, policy stores, and master key stores, an administrator with appropriate privileges can specify the same static key and session ticket key for each policy store in order to facilitate one or more of the following:

  • Single sign-on across all Agents
  • Password Services with a common user directory

 

Common Issues with Key(s)

1. Duplicate Agent Keys

 

Symptoms:

SSO fails with the following error in the web agent trace log :

Unable to decode SMSESSION cookie

       

Identification:

 

To identify whether your key store has duplicate sets of agent keys, perform a key store export using the following command:

smkeyexport -d<admin> -w<password> -okeys.txt

Then count the number of AgentKey objects in the export file (a small helper for this is sketched below). If there are more than 4 agent keys, the key store contains duplicate sets of agent keys.

With more than 4 agent keys, there is no guarantee which set of keys an Agent will use if more than one set is delivered from the key store at Agent startup.
Consider a scenario with two sets of agent keys, set 1 and set 2. If Web Agent 1 uses set 1 and Web Agent 2 uses set 2, an SMSESSION cookie encrypted by one agent cannot be decoded by the other, eventually breaking SSO.
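A quick way to do the count is to scan the export for the AgentKey object markers; a small sketch (it assumes the export file is named keys.txt, as in the command above):

    import java.io.BufferedReader;
    import java.io.FileReader;

    public class CountAgentKeys {
        public static void main(String[] args) throws Exception {
            String exportFile = args.length > 0 ? args[0] : "keys.txt";
            int agentKeys = 0;
            BufferedReader in = new BufferedReader(new FileReader(exportFile));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    // Each agent key object in the export starts with this line.
                    if (line.trim().equals("objectclass: AgentKey")) {
                        agentKeys++;
                    }
                }
            } finally {
                in.close();
            }
            System.out.println("AgentKey objects found: " + agentKeys);
            if (agentKeys > 4) {
                System.out.println("More than 4 agent keys: the key store contains duplicate sets.");
            }
        }
    }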

 

Common causes of duplicate agent keys:

When using dynamic agent keys, only ONE Policy Server must be configured to generate agent keys. If multiple Policy Servers generate agent keys, you will most likely end up with duplicate sets of agent keys.

The key store can also end up with duplicate agent keys during agent key export and import.

Refer to: Tech Tip : CA Single Sign-On:: Policy Server : Best practice on importing Agent Keys

   

Resolution:

    KB: Cleaning up duplicate agent keys - How to Clean up a SiteMinder Key Store?

 

2. "No Session" error in Administrative UI  and "Failed to decrypt persistent key error" in SMPS log.

 

Symptoms:

  • "No Session" error in Administrative UI while trying to access Key Management --> Agent Key /Session Key Management option.

  • Policy server log  (smps.log ) shows error : "Failed to decrypt persistent key"

[2088/972][Wed Aug 09 2017 04:26:41][SmObjKeyManagement.cpp:459][ERROR][sm-Server-03080] Failed to decrypt persistent key

 

Common Cause:

The persistent key (session ticket key) is encrypted using the policy store key (or the key store key if a separate key store is used). The encrypted policy store key is stored in the EncryptionKey.txt file in the Policy Server bin directory. So this error indicates that the current policy store key (EncryptionKey.txt) is unable to decrypt the persistent key from the key store.

Such a situation can occur if, for example, the EncryptionKey.txt file was changed or copied from another machine, or the persistent key was created by a Policy Server with a different policy store key (EncryptionKey.txt).

 

Resolution

1. (Preferred) If there is a backup of the original (valid) EncryptionKey.txt or of the persistent key (in the key store), try reverting to it and see if that resolves the issue.

2. If that does not work, proceed with the following steps, which create a new persistent key in the key store.

 

a) Stop Policy server

b) Stop Administrative UI

c) Set the following registry value:

HKEY_LOCAL_MACHINE\SOFTWARE\Netegrity\SiteMinder\CurrentVersion\ObjectStore

DWORD key:

AllowEmptyEncKey

Value: 1

(Create this registry value if it does not already exist. With this setting, even if the Policy Server cannot decrypt the existing persistent key, it will use an empty persistent key to encrypt the sensitive data in the policy store.)

d) Start Policy server 

e) Start Administrative UI

f) Log in to the Administrative UI and navigate to the Key Management --> Session Key Management tab.

   (You will no longer get the "No Session" error after setting the above registry value.)

   Either click Rollover Now under Generate Random Session Ticket Key, or specify a Session Ticket Key and click the Rollover Now button under it.

g) Once the new persistent key is created, either delete the registry value (AllowEmptyEncKey) created above or set it to 0. For security reasons, it is strongly advised not to leave AllowEmptyEncKey=1 on a production server.

h) Restart Policy server

i)  Restart Administrative UI

 

3. Key store export shows error "Unable to decrypt agent key with policy store / key store key"

 

Symptoms:

When performing the key store export, the following error is shown:

 

C:\Users\Administrator>smkeyexport -dsiteminder -wsiteminder -oc:\keyenc3.txt
Unable to decrypt agent key with policy store / key store key
Unable to decrypt agent key with policy store / key store key
Unable to decrypt agent key with policy store / key store key
Unable to decrypt agent key with policy store / key store key

 

Common Cause:

This is similar to the "Failed to decrypt persistent key" error discussed above.

Agent keys are also encrypted with the policy store key (or the key store key if a separate key store is used).

So if the policy store key changes (a change of EncryptionKey.txt), the Policy Server will no longer be able to decrypt the agent keys.

 

Resolution :

1. Stop the Policy Server that is configured to generate agent keys.

2. Delete all existing agent keys directly from the key store. For example:

For RDBMS: delete from smagentkey4

For LDAP, you may use the ldapmodify command or your GUI interface to sequentially select and delete all keys.

Example command:

# ldapmodify -D "cn=directory manager" -w dirmanagerpassword -h localhost
dn: smAgentKeyOID4=1b-4a79595f-9a40-1000-a34a-830cefdf0cb3, ou=PolicySvr4,ou=SiteMinder,ou=Netegrity,o=ghost
changetype: delete

(Note: The example commands are for example only and will need to be modified for your environment.)

3. Start the Policy Server.

During startup, if the Policy Server configured to generate agent keys does not find any agent keys, it will create them.

By default, the Policy Server creates all 4 agent keys with identical values.

 

 

 

 

 

Introduction

CA SSO uses unique OIDs to represent various objects. They are of the format "XX-XXXXXXXX-XXXX-XXXX-XXXX-************" (where each X is a hexadecimal character).

The leading two hexadecimal characters signify the type of the object.

For example, 06 signifies Realm, 03 signifies Domain, 0b signifies Rule, and so on:

 

CA.SM::Realm@06-f0352125-7284-4c76-97b4-90bd2c0d662c

CA.SM::Domain@03-4897ec92-d419-4a24-be20-1ff4be366b1

CA.SM::Rule@0b-e3d824b1-52f5-4161-908d-4800b1887214
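The type can therefore be derived programmatically by parsing the leading two hexadecimal characters; here is a small sketch (not part of the product, mapping only the three example types above; the full decimal list is in the Answer below):

    import java.util.HashMap;
    import java.util.Map;

    public class OidTypeLookup {
        private static final Map<Integer, String> TYPES = new HashMap<Integer, String>();
        static {
            // A few entries from the full list below (values are decimal).
            TYPES.put(3, "Domain");
            TYPES.put(6, "Realm");
            TYPES.put(11, "Rule");
        }

        public static void main(String[] args) {
            String oid = args.length > 0 ? args[0] : "06-f0352125-7284-4c76-97b4-90bd2c0d662c";
            // The leading two characters are the object type, in hexadecimal.
            int type = Integer.parseInt(oid.substring(0, 2), 16);
            String name = TYPES.containsKey(type) ? TYPES.get(type) : "Unknown (" + type + ")";
            System.out.println(oid + " -> " + name);
        }
    }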

 

Question

What is the full list of supported OID types ?

Environment

PS : 12.52 SP1

Answer

The full list of supported OID types is below. Please note that these values are in DECIMAL:

 

Null = 0
Device = 1
DeviceGroup = 2
Domain = 3
Policy = 4
PolicyLink = 5
Realm = 6
Response = 7
ResponseAttr = 8
ResponseGroup = 9
Root = 10
Rule = 11
RuleGroup = 12
Scheme = 13
UserDirectory = 14
UserPolicy = 15
Vendor = 16
VendorAttr = 17
Admin = 18
ServerCommand = 19
AgentCommand = 20
AuthAzMap = 21
CertMap = 22
SelfReg = 23
ODBCQuery = 24
PasswordPolicy = 25
KeyManagement = 26
AgentKey = 27
RootConfig = 28
Variable = 29
VariableType = 30
ActiveExpr = 31
PropertyCollection = 32
PropertySection = 33
Property = 34
TaggedString = 35
TrustedHost = 36
IMSTask = 37
IMSRole = 38
IMSDirectory = 39
IMSManagedObject = 40
IMSManagedObjectAttr = 41
IMSEnvironment = 42
IMSTaskScreen = 43
IMSTaskScreenField = 44
IMSOrgRoleBinding = 45
IMSOrgRoleBindingDelegatedUser = 46
IMSOrgRoleBindingGrantor = 47
IMSOptionList = 48
SharedSecretPolicy = 49
IMS6Directory = 50
IMS6ManagedObject = 51
IMS6ManagedObjectAttr = 52
IMS6Environment = 53
IMS6Task = 54
IMS6Role = 55
IMS6TabDefinition = 56
IMS6ScreenDefinition = 57
IMS6Tab = 58
IMS6Screen = 59
IMS6ScreenField = 60
IMS6RoleChangePolicy = 61
IMS6RoleAdminPolicy = 62
IMS6RoleMemberPolicy = 63
IMS6RoleOwnerPolicy = 64
IMS6RoleScopeRule = 65
IMS6RoleRule = 66
IMS6ValidationRuleSet = 67
IMS6ValidationRule = 68
IMS6BLTH = 69
IMS6IdentityPolicy = 70
IMS6IdentityPolicySet = 71
IMS6CertificationPolicy = 72
IMSDirectory6 = 50
IMSManagedObject6 = 51
IMSManagedObjectAttr6 = 52
IMSEnvironment6 = 53
IMSTask6 = 54
IMSRole6 = 55
IMSTabDefinition6 = 56
IMSScreenDefinition6 = 57
IMSTab6 = 58
IMSScreen6 = 59
IMSScreenField6 = 60
IMSRoleChangePolicy6 = 61
IMSRoleAdminPolicy6 = 62
IMSRoleMemberPolicy6 = 63
IMSRoleOwnerPolicy6 = 64
IMSRoleScopeRule6 = 65
IMSRoleRule6 = 66
IMSValidationRuleSet6 = 67
IMSValidationRule6 = 68
IMSBLTH6 = 69
IMSIdentityPolicy6 = 70
IMSIdentityPolicySet6 = 71
IMSCertificationPolicy6 = 72

Introduction

The manual key rollover option for Dynamic Agent Keys is disabled by default.

This KB describes how to enable this feature.

 

Environment

Policy server : r12.5 and above

Instructions

1. Perform a full key store export by running the following command:

smkeyexport -d<admin> -w<password> -okeys.txt

 

2. Once the key store is exported, change the value of the IsEnabled option under KeyManagement from false to true (a scripted version of this single-line edit is sketched after the note below):

Old :

objectclass: KeyManagement
Oid: 1a-XXXXX
IsEnabled: false
ChangeFrequency: 0
ChangeValue: 0
NewKeyTime: 0
OldKeyTime: 1502258688
FireHour: 0
PersistentKey: {RC2}2SraPUoK8PLYItUrJFCeck7rlcWl77g+3vpJY07rso39+ojFmbn7zn0IdwGjWeCQ

 

New :

objectclass: KeyManagement
Oid: 1a-XXXXX
IsEnabled: true
ChangeFrequency: 0
ChangeValue: 0
NewKeyTime: 0
OldKeyTime: 1502258688
FireHour: 0
PersistentKey: {RC2}2SraPUoK8PLYItUrJFCeck7rlcWl77g+3vpJY07rso39+ojFmbn7zn0IdwGjWeCQ

Note : DO NOT MAKE ANY OTHER CHANGE
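If you prefer to script the single-line edit from step 2, here is a minimal sketch (it assumes the export fits in memory and that IsEnabled appears only under the KeyManagement object, as in the sample above):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class EnableManualRollover {
        public static void main(String[] args) throws Exception {
            String exportFile = args.length > 0 ? args[0] : "keys.txt";
            List<String> out = new ArrayList<String>();
            for (String line : Files.readAllLines(Paths.get(exportFile), StandardCharsets.UTF_8)) {
                // Flip only the IsEnabled flag; every other line is written back untouched.
                if (line.trim().equals("IsEnabled: false")) {
                    out.add("IsEnabled: true");
                } else {
                    out.add(line);
                }
            }
            Files.write(Paths.get(exportFile), out, StandardCharsets.UTF_8);
        }
    }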

 

3. After making the above change, save the export file and import it by running the following command:

smkeyimport -d<admin> -w<password> -ikeys.txt

4. You should now have the manual rollover option enabled for the dynamic agent key 

 

Summary

It is often necessary to access the default CA SSO-generated response attributes in custom active responses/rules in order to evaluate custom logic.

Some sample CA SSO generated attributes are :

  • SM_USERSESSIONIP
  • SM_USERDN
  • SM_USERPASSWORD

The full list of default CA SSO-generated attributes can be found by searching for the keyword "CA SiteMinder®-Generated User Attributes" in the CA SSO documentation:

https://docops.ca.com/ca-single-sign-on/12-52-sp1/en/configuring/policy-server-configuration/responses-and-response-groups/ca-siteminder-generated-user-attributes

Environment

PS : r12.5 and above

Instructions

The default CA SSO-generated user attributes can be accessed using the UserContext.getProp(java.lang.String propName) API call, as shown below.

In this example, we access the default CA SSO response attribute SM_USERIPADDRESS.

 

 

    public String
    invoke(ActiveExpressionContext context,
           String param)
        throws Exception
    {

        if (context == null)
        {
           // should never happen
           throw new IllegalArgumentException("ActiveResponseSample invoked without context");
        }

        // the User Context is required to use the methods like getProp, setProp..
        UserContext theUserContext = context.getUserContext();

        if (theUserContext == null)
        {
            context.setErrorText("No User Context.");
            return null;
        }

     return theUserContext.getProp("SM_USERIPADDRESS");
    }

 

Step 1: Create an active response in the Administrative UI.

Step 2: Configure the active response with either an OnAuthAccept or OnAccessAccept rule.

Step 3: Compile the attached sample ActiveResponseSample.java class by running java-build.bat (Windows) or java-build.sh (UNIX).

Note: Prior to running the build script, update the path to the JDK install directory in the JAVA_HOME variable by editing the java-build.bat (Windows) or java-build.sh (UNIX) file.

Step 4: Once compiled, copy ActiveResponseSample.class to the <Policy server>/config/properties directory.

 

Note: This "properties" directory is by default in the classpath of Policy server so you don't need to modify JVMOptions.txt.

 

If you choose to deploy the class in any other directory, then you will need to add the path to that directory as a classpath in the JVMOptions.txt file.

 

Test:

 

Policy server Trace Log :

 

[08/07/2017][01:30:07][2908][1564][][SmAuthUser.cpp:700][ServerTrace][][][][][][][][][][][][][][ActiveResponseSample: ActiveResponseSample:: returning ClientIP= ['10.129.160.255']][01:30:07.792][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][ActiveResponseSample:: returning ClientIP= ['10.129.160.255']][][][][][][][][][][][][][] 

Additional Information

1) Not all response attributes are available for all events (OnAuthAccept, OnAuthReject, OnAccessAccept, etc.).

So before implementing, verify that the response attribute you are interested in is available for the event in which you require it:

https://docops.ca.com/ca-single-sign-on/12-52-sp1/en/configuring/policy-server-configuration/responses-and-response-groups/ca-siteminder-generated-user-attributes

 

2) Active responses are cached by default. If you need the active response to be evaluated every time on the Policy Server, disable attribute caching for this active response (in the active response creation screen in the Administrative UI).

 

Is your organization considering a new single sign-on solution? Before you do anything, you may want to read this guide to successful migrations. The first topic we’ll cover is the preliminary steps you should take to determine whether the migration is the right move for your organization.

 

A well thought-out strategy for migrating to CA Single Sign-On (CA SSO) from another access management solution reaps several benefits:

  • A seamless user experience
  • Minimal to no disruption to user activity
  • Low risk
  • An accelerated pace of migration that delivers quick wins.

 

Of course, a well thought-out strategy requires effective planning. What considerations do we need to take into account? We’ve been involved in countless CA SSO migrations over the years, and we’ll share our leading practices with you here. One caveat, though, before you use this as the guide to end all guides: Your organization’s environment has unique variables that must be factored in before performing this migration.

 

Here’s a general view of the process:

  • Evaluate the existing solution
  • Map requirements and perform gap analysis
  • Build out the new solution
  • Develop co-existence strategy
  • Develop and execute application migration strategy

 

Step #1: Evaluate the Existing Solution

You can glean a lot of information from the current solution: existing requirements, login methods, authentication and authorization load, and audit reporting.  Discovery elements include:

  • Evaluate the solution design documentation
  • Interview stakeholders, application owners and users, and security administrators
  • Understand average and peak transaction volumes for logins and authorizations for sizing the new solution
  • Analyze existing application protection needs:
    • Login methods
    • Coarse- or fine-grained authorization
    • Roles, group memberships, or user attribute values needed for valid access
    • Data needed by application in cookies, headers, etc.
  • Current processes for onboarding applications

 

Step #2: Map Requirements and Perform Gap Analysis

This phase maps solution requirements and use cases to out-of-the-box (OOTB) features of CA SSO. (A few use cases will be beyond the scope of CA SSO’s OOTB capabilities.)

 

Here’s a sample mapping table:

Use Case Number | Description | OOTB/Custom | Notes
1 | User login | OOTB | Uses uid attribute from LDAP directory
2 | User logout | OOTB | Specified in Agent Configuration Object
3 | Cross-session validation | Custom | Java SDK and CA SDK needed
4 | Multi-factor authentication using mobile OTP | OOTB | Requires integration with CA Advanced Authentication


Existing policies or protection schemes can also be analyzed to generate requirements for protecting each application when using CA SSO. These requirements can be used to create policy definitions:

  • Application resources to be protected (path, URI, etc.)
  • Web server(s) hosting application
  • Authentication method required for user identification (login form, multi-factor, certificate, token, windows session, etc.)
  • Authorization method for validating user entitlements, group membership, user attribute values, etc.
  • Special considerations in determining user access—network location, time of day or week, etc.
  • Policy server
  • Data stores: policy store, user store(s)
  • CA Access Gateway and/or web agents
  • Administrative UI server

 

All this information is mapped into equivalent configurations on the CA SSO side.

 

If these first two steps tell you that your organization will benefit from a new single sign-on solution, I hope you’ll read my next post highlighting co-existence and build out of the new environment, which will be available next week.

Question

Policy Server cache flushing from the command line, by running the command smpolicysrv -flushcache, does not work.

When running this command, the following error is written to the smps.log:

 

[CServer.cpp:7668][INFO][sm-Server-04520] Server 'flushcache' command received. 

[CServer.cpp:7669][INFO][sm-Server-04750] Server 'flushcache' command is disabled.Please contact CA Technologies Customer Support for further information 

Note:

  • Cache updates are already enabled on the Policy Server by running the command smpolicysrv -enablecacheupdates.
  • Cache flushing from the Administrative UI works.

Environment

  • Policy server : r12.5, r12.51, r12.52, r12.6, r12.7 (inclusive of all SP & CR)
  • Policy server OS : Any

Answer

Policy Server cache flushing from the command line is deprecated.

The cache can now be flushed only from the Administrative UI.