Introduction

In this blog, we will discuss the steps required to display various login-related error messages on the login page when using FCC forms authentication.

Environment

  • Policy Server : ANY
  • Web Agent : ANY

Instructions

  • Configure OnAuthAttempt response to set ErrorMsg cookie with value "User Not Found".

[Screenshot: OnAuthAttempt response configuration (authattempt.jpg)]

  • Configure OnAuthReject response to set ErrorMsg cookie with value "Wrong password. Try again."

[Screenshot: OnAuthReject response configuration (authreject.jpg)]

  • Configure OnAuthAccept response to expire the ErrorMsg cookie on successful authentication.

[Screenshot: OnAuthAccept response configuration (authaccept.jpg)]

  • Associate these Responses with the respective rules.

[Screenshot: policy with rules and associated responses (policy.jpg)]

  • Create an HTML FORMS authentication scheme using customlogin.fcc

[Screenshot: HTML Forms authentication scheme using customlogin.fcc (authscheme.jpg)]

  • Save the attached customlogin.fcc in the <webagent>/samples/forms/ directory
  • Restart web server.

 

 

 

Note: In order for the Web Agent to perform a 302 redirect back to the login page and to be able to read the error message cookie, the login form that is displayed and the form being posted to need to be different.

i.e., you need to provide a different FCC form in the FORM ACTION field.

In this example, our login page is customlogin.fcc, but instead of posting to itself, it posts to the OOTB login.fcc.
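
To actually surface the message, the custom login page needs a small piece of client-side script that reads the ErrorMsg cookie and writes it into the page. The sketch below is only an illustration (the getCookie helper and the errormsg element are hypothetical names, not taken from the attached customlogin.fcc):

<div id="errormsg" style="color:red"></div>

<script type="text/javascript">
// Read a cookie value by name; returns "" if the cookie is not present.
function getCookie(name) {
    var match = document.cookie.match(new RegExp('(^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[2]) : "";
}
// Display the message set by the OnAuthAttempt / OnAuthReject responses.
var msg = getCookie("ErrorMsg");
if (msg) {
    document.getElementById("errormsg").innerHTML = msg;
}
</script>

The form itself then posts to the OOTB login.fcc: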

<form NAME="Login" ACTION="/siteminderagent/forms/login.fcc" METHOD="POST">

 

Testing:

  • Invalid User ID

[Screenshot: login page showing the error for an invalid user ID (invalid user.jpg)]

  • Invalid credential

[Screenshot: login page showing the error for invalid credentials (invalidcreds.jpg)]

  • Successful Authentication

[Screenshot: successful authentication (succcesful.jpg)]

 

Attachment:

 

Introduction 

The purpose of this blog entry is to show all the different types of logs that are available in the CA SSO (aka SiteMinder) Policy Server. Tracing information will come in the next blog. I will also be referring to some useful utilities commonly used with CA SSO for troubleshooting.

 

CA-SSO Policy Server Logging Procedure: 

The following gives an overview of the major components of the Policy Server and also shows the name of (all) the logs that can be enabled and where they get their data from:

 

Note: I may not be discussing the use of all the utilities, but the diagram indicates where they would be used.

Log Files

Depending on the problem you are experiencing, Support may request one or more of the following log files:

Policy Server

  • The Policy Server log (smps.log)
  • The Policy Server profiler log (smtracedefault.log)
  • The audit log (smaccess.log)

Web Agent

  • The Web Agent log
  • The Web Agent trace log
  • The web server error log
  • The web server access log

WSS Agent

  • WSS Agent log
  • XML Processing Message Log
  • Web Agent trace log (WSS Agent for Web Servers only)
  • Application server or web server error log
  • Application server or web server access log

 

Policy Server Logs 

  • smps.log: The Policy Server log file records information about the status of the Policy Server and, optionally, configurable levels of auditing information about authentication, authorization, and other events. When the Policy Server is started, its version information and configuration are recorded in this log.

  • The "smpolicysrv -stats" command writes its output to smps.log.

A cron job or the Windows Task Scheduler can be configured to run the "smpolicysrv -stats" command periodically to collect Policy Server statistics.
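
A minimal crontab sketch, assuming a UNIX Policy Server installed under /opt/CA/siteminder (the path is an assumption - adjust it to your environment, and make sure the SiteMinder environment is set for the cron user):

# Append Policy Server statistics to smps.log every 15 minutes
*/15 * * * * /opt/CA/siteminder/bin/smpolicysrv -stats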

** Explanation of the fields:

- Msgs = number of thread pool messages handled
- Waits = number of times Dequeue had to wait for a message before the timeout was reached
- Misses = number of times a waiting thread woke up to find no message and the timeout was reached
- Max HP Msg = maximum number of High Priority messages on the queue since the last reset of stats
- Max NP Msg = maximum number of Normal Priority messages on the queue since the last reset of stats
- Current Depth = number of messages in the queue at the time of executing -stats
- Max Depth = maximum number of messages on the queue since the last reset of stats
- Current High Depth = number of High Priority messages in the queue at the time of executing -stats
- Current Norm Depth = number of Normal Priority messages in the queue at the time of executing -stats
- Current Threads = number of threads running at the time of executing -stats
- Max Threads = maximum number of threads reached

Connections:

- Current = # of agent connections
- Max = maximum # of connections since the last reset
- Limit = maximum allowable connections
- Exceeded limit = # of times the limit was exceeded

"Busy threads" refers to the number of threads currently active on the stack and processing requests.
"Waits" and "Misses" are historical records; they do not mean much unless an incident is happening, but they can be an indicator of how heavily the threads are utilized.
"Reset" means either that the Policy Server was restarted or that an administrator intentionally flushed the statistics via the command-line options.

  • smtracedefault.log: the Policy Server trace log, produced by configuring the Policy Server Profiler. Check the "Enable Profiling" option, then click the "Configure Settings" button.


      * Config File – the profiler configuration file is saved under C:\Program Files\CA\siteminder\config\smtracedefault

** More detail on "Configure the Policy Server Profiler" can be found in the documentation:
https://docops.ca.com/ca-single-sign-on/12-6-01/en/configuring/configure-the-policy-server-profiler

- Configuration Settings: Useful configuration for troubleshooting

-   /Component

Functional groups of components that will be logged.

Note: It is common to select all components except "Server/Policy_Object_Cache", which generates a lot of log lines and is often best left out.

-   /Data

Items: adding the data items PreciseTime, SrcFile, Function, Pid, Tid, and Message gives better results.

-   /Filter

      (Expressions can be added to exclude items – not widely used, but occasionally helpful.)
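
For reference, the saved profiler configuration file ends up looking roughly like the sketch below. This is an approximation only - the component names shown are examples and the file is normally generated via the Profiler UI rather than edited by hand:

# smtracedefault - illustrative sketch only (component names are examples)
components: Server, IsProtected, Login_Logout, IsAuthorized
data: Date, PreciseTime, Pid, Tid, SrcFile, Function, Message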

 

Siteminder Policy Trace Analysis

https://communities.ca.com/message/97562726

"Siteminder Policy Trace Analysis" is a Java-based policy log analysis tool that we have been using in CA Support for a while now to analyse various SiteMinder logs.

 - Additional tips are available from the author's blog post:
Tech Tip - [PreciseTime] gives better Graphs & Stats with SMTraceAnalysisTool

Other Loggings

  • Xtrace – xTrace is an XPSConfig option that captures XPS errors in the Policy Store. This option is available from CA SiteMinder Release 12.51.

 

 

 

In the XPSConfig utility, type the option number to enable it, then use "U" to update (write) the entries.

The configuration is saved to:

C:\Program Files (x86)\CA\siteminder\config\XPS.cfg

** More detail on xTrace can be found in the documentation:

How to Use xTrace - CA Single Sign-On - 12.8 - CA Technologies Documentation 

  • Smaccess.log – audit log

 

Useful Tools for Troubleshooting

  • Wireshark: Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, and software and communications protocol development.

Usage: decoding SSL and LDAP traffic.
https://www.wireshark.org/ 

  • Netstat: netstat (network statistics) is a command-line network utility tool that displays network connections for the Transmission Control Protocol (both incoming and outgoing), routing tables, and a number of network interface (network interface controller or software-defined network interface) and network protocol statistics. 
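
As a quick sketch of how this is typically used with CA SSO (the ports shown are the default Policy Server ports and may differ in your environment):

# Check that the Policy Server is listening on its default ports (44441-44443)
netstat -an | grep 4444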

 

  • Top: The top command displays processor activity on your Linux box and shows the tasks managed by the kernel in real time. It shows how much processor and memory are being used, along with other information such as the running processes.

 

  • Fiddler: Fiddler is an HTTP debugging proxy server application.

    - To enable HTTPS traffic decryption:
    Open Fiddler -> Tools -> Fiddler Options -> HTTPS -> check 'Decrypt HTTPS traffic'.
    http://www.telerik.com/fiddler

 

  • Debug Diagnostic tools: The Debug Diagnostic Tool (DebugDiag) is designed to assist in troubleshooting issues such as hangs, slow performance, memory leaks or memory fragmentation, and crashes in any user-mode process. 

      https://www.microsoft.com/en-us/download/details.aspx?id=49924

 

 

  • Strace: strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state. 

        -   Command: strace -p <pid>

  • Pstack: pstack attaches to the active processes named by the pids on the command line and prints out an execution stack trace, including a hint at what the function arguments are. If the process is part of a thread group, then pstack will print out a stack trace for each of the threads in the group.
    If symbols exist in the binary (usually the case unless you have run strip(1)), then symbolic addresses are printed as well.
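
A minimal usage sketch (the pid and the output filename are placeholders):

# Capture the stack of every thread in the Policy Server process
pstack <pid of smpolicysrv> > smpolicysrv_pstack.txt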

 

  • Core dumps: A core dump, memory dump, or system dump consists of the recorded state of the working memory of a computer program at a specific time, generally when the program has crashed or otherwise terminated abnormally. (Please refer to 'Debug Diagnostic tools' above in case the core dump file was not generated properly.)
  • Pkgapp: Pkg_app is a script that can take a core/gcore or process id and gather all the libraries required by Support to debug a core/gcore file. (More detail on how to get a stack trace with pkgapp can be found at the following link: https://communities.ca.com/people/SungHoon_Kim/blog/2016/02/25/collecting-pkgapp-and-how-to-get-the-stack-trace)

 

Work with Support

If you require assistance from the CA Single Sign-On Support team, there is specific information you can gather and include when opening a Support ticket. Including as much information as possible helps to reduce the amount of time it takes the Support team to resolve the issue.

 

Note: If you are attaching log files as part of your Support engagement, be sure that the set of files matches. Also, ensure that all the files are from the same time as when the issue occurred.

 

Have a continuing nice time with your logging. 

 

Cheers - Gwan

---- Gwan Yu Kim Snr Support Engineer - Global Customer Success

Summary:

The Enhanced Session Assurance with DeviceDNA™ has been completely redesigned in r12.6.

In this blog we will discuss this redesigned architecture, compare it with the old architecture, and also review the steps required to configure it in the new design.

Environment:

  • Policy Server : R12.6
  • CA Access Gateway : R12.6 

So what has changed ?

 

To understand this, let's see what the earlier design was.

The existing session assurance implementation relied on the following components:

Policy Server

  • CA RiskMinder Service (aka CA Advanced Authentication Server)

CA Access Gateway

  • Session Assurance Flow App - This app interacts with the Policy Server using the AgentAPI. It also interacts with the CA RiskMinder services via the JDBC-ODBC bridge.

 

 

Problems with the existing design

  • Heavy footprint of CA Advanced Authentication components on both Policy server and CA Access Gateway servers.
  • Difficult to maintain Resource.dat & Master Key file.
  • Complex PS/Access Gateway upgrade path

 

The New Design

  • Removed CA Advanced Authentication components on Policy server.
  • Simplified Session Assurance Flow App on CA Access Gateway by developing it using CA Advanced Authentication SDK.

 

 

What this now means is:

  • The Advanced Authentication Server is not installed on the Policy Server.
  • The default domain "AdvAuthDEFAULTORG" and the host configuration objects that were created in previous releases for the Advanced Auth Server are no longer created.
  • No Master Key.
  • The DSN for the Advanced Authentication Server is not created on the Policy Server or CA Access Gateway.
  • Resource.dat is not created in the Policy Server bin directory.
  • The previously deployed web apps (uiapp, aaloginservice, authapp) are now replaced with a single new "sessionassuranceapp" on CA Access Gateway.

 

How do I configure Session Assurance now?

The following are the steps required to configure session assurance now:

 

CA Access Gateway

  • Install CA Access Gateway 12.6
  • Configure CA Access Gateway to use SSL for front end Apache
  • Ensure the Session Assurance application is enabled in server.conf:
<Context name="SessionAssuarance Application">
docBase="sessionassuranceapp"
path="authapp"
enable="yes"
</Context>
  • Ensure SACExt ACO parameter value is .sac  

  • Ensure IgnoreExt ACO parameter contains .sac extension
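
As an illustration of these two ACO parameters (the IgnoreExt value below is only a sketch - keep whatever extensions your ACO already lists and just make sure .sac is included):

SACExt=".sac"
IgnoreExt=".class,.gif,.jpg,.png,.sac"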

 

 

CA SSO Policy Server

 

  • Install or upgrade the CA SSO Policy Server to 12.6
  • Create Session Assurance End Point as below :

 

  • Enable Session Server on the Policy server. 

  • Add Session Assurance End Point to the realm for which you want to enable the session assurance functionality.

(It is optional to configure realm as persistent)

 

Configure Log Files for Troubleshooting

 

1. Enable Audit Logs and also configure Enable Enhanced Tracing on the Policy server.

Navigate to following in the registry editor :

HKEY_LOCAL_MACHINE\SOFTWARE\Netegrity\SiteMinder\CurrentVersion\Reports

Add the following DWORD:

"Enable Enhance Tracing"=1

The audit logs contain authentication and authorization activity related to the Session Assurance flow.

2. Enable debug logging for Session Assurance Flow app on CA Access Gateway

Navigate to CA\secure-proxy\Tomcat\webapps\sessionassuranceapp\WEB-INF\classes and set the log level to DEBUG in log4j.properties as below:

# Define the root logger with appender SAFileAppender
log4j.rootLogger = DEBUG, SAFileAppender
# Set the appender named SAFileAppender to be a File appender
log4j.appender.SAFileAppender=org.apache.log4j.FileAppender
log4j.appender.SAFileAppender.File=${catalina.base}/../proxy-engine/logs/SessionAssuranceApp.log
log4j.appender.SAFileAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.SAFileAppender.layout.conversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:- %m%n

 

Testing:

 

 

Additional Information:

Configure Enhanced Session Assurance with DeviceDNA™ - CA Single Sign-On - 12.6.01 - CA Technologies Documentation 

In my first post I outlined the problems inherent in a traditional policy-based SSO model. Here, I’ll give a general overview of an SSO model that doesn’t require policies—a refreshing and liberating change that solves a multitude of problems that arise in policy-based SSO models.

 

You’re probably thinking that a non-policy-based SSO model is best suited to legacy and simpler web applications. You can use this model in those circumstances, but my experience tells me that you will be pleasantly surprised by how easily this model adapts to newer, more advanced and complex enterprise applications. These applications’ mature authentication and authorization security framework allows integration of SSO with portals, SAP, ERP and commerce platforms, most of which have modules for managing and delivering records, transactions, data, users and content. (For the sake of convenience, I’ll refer to these apps collectively as integrated apps, or IApps.)

 

These integrations require a security framework that manages access to the IApps’ modules. The good news is that most IApps have their own pre-defined security framework for authentication and authorization. An IApp’s security framework is generally closed, which means it has a built-in model for enforcing access control and controlling the user’s digital experience, an underlying security database against which it provides fine-grain access in the IApp. As a result, there’s no need to create policies in SSO to regulate access; in fact, it’s simply not feasible for a third party to create accessibility policy models for IApps. If you did, you’d essentially be plunking a security model on top of an already effective security model, thus creating myriad challenges at the design and integration levels for Development, Operations and other teams. It’s simply not worth your time or money, and it will make your enterprise security framework more complex, less manageable and more difficult to maintain.

 

In this non-policy-based SSO model, SSO’s access control capabilities are not enforced. In other words, while SSO controls authenticating users to the IApps, it does not control access of the users within the IApps, because the IApps already have that capability, which I call native authorization.

 

Decoupling SSO from an IApp’s native authorization doesn’t mean that SSO can’t augment access control. Integrating SSO with risk analytics can determine if a user session carries risk, and this information is passed to the IApp, which either by itself or using other integrations, acts upon the risk. In fact, SSO can terminate the user session if it deems it too risky, thereby increasing overall security and protecting applications.

 

For example, let’s say a commerce portal is integrated with CA SSO, CA Advanced Authentication, CA API Gateway and CA Payment Security. When a user tries to access your platform from a not-so-secure environment, such as an airport kiosk, CA SSO and CA Advanced Authentication pass that knowledge to the IApp, in this case perhaps an airline ticket portal, banking app or social media platform. In effect, CA Security solutions are communicating to the IApp, “This request comes from a high-risk user session.” The IApp integration with CA Security solutions either denies access, allows the user to complete only low-risk activities, or asks the user to take actions so that he/she can experience more capabilities.

 

That’s the beauty of a non-policy-based SSO model: It integrates SSO with advanced risk analytics and supports native authorizations and APIs without requiring its own complex set of policies.

 

Questions? Comments?

Introduction 

The purpose of this blog entry is to show how to enable all the different types of trace logs that are available in the CA SSO Access Gateway (formerly known as Secure Proxy Server). I will also be referring to the Access Gateway product as "Ag" in this article - however, some of the slides predate the name change, so it will show up as SPS.

 

Ag can be used in a few different ways, and what you are using it for determines which logs you want to enable. I've split this up into different themes:

  • Ag Logging when used as Reverse Proxy Server
  • Ag Logging when used as Federation Gateway
  • Ag Logging for ProxyUI 
  • Ag Logging for WebServices

 

This blog covers the logging when Ag is used as a reverse proxy server; the other logging profiles will be added as separate documents at a later date.

 

 

 

Ag Logging when used as a Reverse Proxy Server: 

 

The following gives an overview of the major components of Ag and also shows the name of (all) the logs that can be enabled and where they get their data from:

 

 

 

When used as a reverse proxy server, requests come in from the client to Apache httpd, get passed to Apache Tomcat, and then get forwarded to a backend server for processing. The backend then completes the request and the data is passed back to Tomcat, to httpd, and back to the client. Note: I won't be discussing the use of Fiddler and Wireshark, but the diagram indicates where they would be used.

 

In summary we have: 

  • Apache Logs
  • Mod_Jk logs
  • Proxy Engine Logs
  • Web Agent Logs
  • Httpclient Logs

Each of which is covered in the sections below. 

 

 

Apache Logs 

The two major logs for Apache httpd are access_log and error_log; these log the interaction between the user and the httpd process. The httpd.conf entries are:

 

- Access_log  - settings in httpd.conf :  

The formats are defined here : 

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

LogFormat "%h %l %u %t \"%r\" %>s %b" common

 

And the rotating logs are set here : 

CustomLog "|'C:/Program Files/CA/secure-proxy/httpd/bin/rotatelogs.exe' 'C:/Program Files/CA/secure-proxy/httpd/logs/access_log' 10M" common

 

Occasionally it is good to supplement what is in the access_log to get more insight into a problem. The example above shows the %{User-Agent}i header, but you can also use the same mechanism to capture cookies, for example: \"%{SMSESSION}C\". The option %T is also useful, since it logs the total time it took Ag to process the request and return the response to the user.
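
A sketch of such a supplemented format (the format name "smcommon" is arbitrary; keep the existing rotatelogs pipe from the CustomLog line above and just point it at the new format):

LogFormat "%h %l %u %t \"%r\" %>s %b %T \"%{SMSESSION}C\"" smcommon

CustomLog "|'C:/Program Files/CA/secure-proxy/httpd/bin/rotatelogs.exe' 'C:/Program Files/CA/secure-proxy/httpd/logs/access_log' 10M" smcommon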

 

A complete list of LogFormat parameters is available in the Apache mod_log_config documentation:

https://httpd.apache.org/docs/2.4/mod/mod_log_config.html

 

Note: And one final point to remember: the access_log entry is written at the END of processing the request. So if Apache httpd crashes, the requests that were in flight when the crash happened are NOT logged.

 

- Error_log  - settings in httpd.conf :  

The formats are defined here : 

# LogLevel: values include: debug, info, notice, warn, error, crit, alert, emerg.
LogLevel warn

 

And the rotating logs are set here : 

ErrorLog "|'C:/Program Files/CA/secure-proxy/httpd/bin/rotatelogs.exe' 'C:/Program Files/CA/secure-proxy/httpd/logs/error_log' 10M"


For debugging, you can raise the LogLevel to debug. Apache 2.4 also has the extra levels trace1 ... trace8; these are needed when you want to trace the raw data packets and SSL handshaking problems between the front-end client and the httpd process. So for debugging we can often recommend:

LogLevel trace8

 

The Apache error_log is also a good place to find the exact httpd and mod_jk version numbers:

 

 

Mod_Jk Logs

Mod_jk is the Apache httpd module that forwards requests onto tomcat.  The log settings for it are in httpd.conf : 

 

JkLogFile "|'C:/Program Files/CA/secure-proxy/httpd/bin/rotatelogs.exe' 'C:/Program Files/CA/secure-proxy/httpd/logs/mod_jk.log' 10M"

JkLogLevel error

The JkLogLevel parameter describes what level of detail should be logged. Possible values are: debug, info, error.

 

For debug level logging, it is best to also set the JkRequestLogFormat, to display more detail of the transaction:

JkLogLevel debug

JkRequestLogFormat "%w %V %T %m %H %p %U %s"

That will show most of the raw byte data of what is sent from httpd -> tomcat and what is returned. The settings are explained here: https://tomcat.apache.org/connectors-doc/reference/apache.html

 

Sample mod_jk.log : 

 

 

Proxy Engine Logs

The proxy engine has two main logs :  

    server.log

    nohup*.out  

These are in the secure-proxy/proxy-engine/logs directory by default. The server.log is the log4j logging for the proxy-engine, and the nohup_<pid>.out log is the redirect of the stdout and stderr streams.

 

server.log

Logging level for server.log is set in Tomcat/properties/logger.properties 

log4j.rootCategory=INFO,SvrFileAppender

log4j.rootCategory.ResourceBundle=root

The log level can be changed to OFF, FATAL, ERROR, WARN, INFO, DEBUG, or ALL.
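
A minimal sketch for turning on debug logging for the proxy engine (remember to revert to INFO afterwards, as DEBUG is very verbose):

log4j.rootCategory=DEBUG,SvrFileAppender
log4j.rootCategory.ResourceBundle=root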

 

nohup_<date>_<time>.out log : 

We generally don't change the logging in this one, as it captures the stdout/stderr output from the proxy-engine. One useful tip, though, is adding "-verbose" to the java startup; you then get the exact .jar file that each class is loaded from in this log. A new timestamped log is started each time the proxy-engine is started. The nohup log is good at capturing the stack trace when exceptions are thrown in the proxy-engine, e.g.:

 

 

 

Web Agent Logs

 

Ag comes with the standard Web Agent logs. These must be enabled and set up via the ACO parameters, as for a normal agent, e.g.:

 

WebAgent.log 

LogAppend="NO"
LogFile="YES"
LogFileName="c:\ca\proxy-engine\logs\WebAgent.log"
LogFileSize="100"

 

 

WebAgentTrace.log 

TraceAppend="NO"
TraceConfigFile="c:\ca\proxy-engine\conf\defaultagent\SecureProxyTrace.conf"
TraceFile="YES"
TraceFileName="c:\ca\proxy-engine\logs\WebAgentTrace.log"
TraceFileSize="100"

 

 

WebAgentTrace.log SecureProxyTrace.conf settings :

The SecureProxyTrace.conf is slightly different from the WebAgentTrace.conf; it has ProxyAgent as the default component.

I also tend to add Agent_Con_Manager and AgentFunc as components, and add the data items PreciseTime, Function, and SrcFile, as shown below:
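
A rough sketch of what the resulting SecureProxyTrace.conf entries might look like (an illustration only - start from the shipped file, keep its existing defaults, and check the exact component and data item names against the comments in that file):

# SecureProxyTrace.conf - illustrative sketch
components: ProxyAgent, AgentFunc, Agent_Con_Manager
data: Date, Time, PreciseTime, Pid, Tid, SrcFile, Function, Message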

 

WebAgentTrace.log with proxy-rule messages : 

Additionally, for the Web Agent trace to log the proxy-rule evaluation, you need to add debug="yes" to proxy-rules.xml to get additional error messages specific to SPS:

 

 

WebAgentTrace.log examples: 

After setting the above, we end up with a normal trace log like:

 

 

And with Ag specific messages for proxy-rules such as:

 

 

HttpClient Logs

HttpClient logs the raw GET/POST data that is sent to the backend and the reply that is received, so it is good for debugging the interaction with the backend server.

 

To enable httpclient logging, set the following in server.conf:
           httpclientlog="yes"

and restart the proxy-engine service. 

 

Note: For Ag R12.7 there is an extra setting needed to enable httpclient logging:

https://communities.ca.com/community/ca-security/ca-single-sign-on/blog/2017/09/01/tech-note-enable-httpclient-logging-in-agent-gateway-127

 

HttpClient / Java SSL Logging

Java has the ability to log the SSL handshake and transfer of data.  This is done by adding  -Djavax.net.debug=all 
to the java runtime startup.  The file this needs to be applied to differs per platform :

For Windows - proxy-engine/conf/SpsProxyEngine.properties
For Unix - proxy-engine/proxyserver.sh

As shown below:

Enable SSL tracing for java:
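
A hedged sketch for the Unix case (the JAVA_OPTS variable name is hypothetical - append the flag to whatever line in proxyserver.sh actually builds the java options; on Windows, add it to the corresponding JVM options entry in SpsProxyEngine.properties):

# proxy-engine/proxyserver.sh (sketch): enable JSSE debug output for the proxy engine JVM
JAVA_OPTS="$JAVA_OPTS -Djavax.net.debug=all"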

 

SSL Tracing in the nohup and server.log files : 

These logs then show the SSL handshake, and the decrypt/hash of each packet sent and received when the proxy-engine communicates with the SSL backend:

 

Have a nice time enjoying your logging. 

 

Cheers - Mark

----
Mark O'Donohue
Snr Principal Support Engineer - Global Customer Success

 

This document is part of a series on Logging in SSO components: 

Tech Tip:How to enable trace logging in SSO (aka Siteminder) Webagent 

Tech Tip : Policy Server Loggings 

Tech Tip : Howto enable Tracing in Access Gateway (fka: Secure Proxy Server) 

Making the Complex Simple: Non-Policy-Based Single Sign-On is the Wave of the Future

 

As we all know, single sign-on (SSO) permits a user to access multiple applications and Web services with a single set of login credentials, usually a user name and password. Organizations value SSO because their customers, employees and partners can access only those applications that the organization wants the particular user to have rights to, thus protecting the organization’s on-premise applications and those in the cloud. SSO increases employee and partner productivity by providing quick access to organizational data across an array of devices, eliminating the need for users to re-enter credentials when switching applications during a session and allowing employees and partners to collaborate and innovate within the network. SSO also helps the organization track user activity and monitor user accounts.

Let’s take a quick look at a typical, policy-based SSO deployment. Traditionally, the simplest method is to deploy agents on the application server or web server. Agents act as gatekeepers to applications and web services, but to do their job, agents must be told who can and can’t enter. Policies, which are created in SSO and stored in a policy store on a server, define the access that agents grant to various users, typically based on the user’s role.

When a user tries to access an app, the agent weighs the incoming request against policy. If the agent decides that policy allows access to the user, the user is authenticated and access is granted. The agent also determines whether there are restrictions on the user’s access. Is a user allowed only in public areas, or can he/she enter private areas—and if so, which private areas?

As tech leaders whose organizations use SSO know, the policy-based model can quickly become very complex, especially when you add more objects to control: more applications, more and different kinds of users, more agents and servers, more user repositories, and more capabilities. With each new addition, you have to rejigger the policies that drive the SSO platform—or worse, layer on new ones. There’s no end to the potential number of layers.

With more layers of policies, you get an increasingly complex labyrinth that’s more difficult to maintain and sustain. Those difficulties lead to cost increases due to additional resources and higher operational overhead.

What’s wrong with this picture? Sooner or later, the policy-laden SSO system becomes more complex to manage. It’s a scenario I’ve seen organizations come up against and in my next post (coming soon) I’ll propose a solution to this situation.

In the meantime, please add your comments if this scenario sounds far too familiar.