
Sascha Preibisch's Blog


Hi everybody!


Some of you may have read my blog post on OAuth vs. LDAP. That topic has been drawing interest for quite some time now, perhaps because it is not too simple to see the difference between the two. Just recently I was told that it would be good to get a comparison between OAuth, LDAP and OpenID Connect.


This post provides that comparison.


LDAP (Lightweight Directory Access Protocol)

An LDAP server (full disclosure: I am not an expert on LDAP) is a directory that contains details and attributes about users. It may contain a username, firstname, lastname, password (or the hash of a password), address, certificates, date of birth, roles, all kinds of stuff. The data in an LDAP server is accessed for different purposes:

  • authenticate a user: compare the given username and password against values found in the LDAP
  • retrieve attributes: retrieve firstname, lastname, role for a given username
  • authorize users: retrieve access level for directories for a given username
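The three access patterns above can be sketched with a toy directory. This is only an illustration: a dict stands in for a real LDAP server, and the DN and attribute names are made up.

```python
import hashlib

# Toy stand-in for an LDAP directory; entries keyed by a made-up DN.
directory = {
    "uid=sascha,ou=people": {
        "givenName": "Sascha",
        "sn": "Preibisch",
        "userPassword": hashlib.sha256(b"secret").hexdigest(),  # hash, not plaintext
        "role": "admin",
    }
}

def authenticate(dn, password):
    """Compare the given credentials against the stored entry."""
    entry = directory.get(dn)
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return entry is not None and entry["userPassword"] == candidate

def get_attributes(dn, *attrs):
    """Retrieve selected attributes for a given entry."""
    return {a: directory[dn][a] for a in attrs}

print(authenticate("uid=sascha,ou=people", "secret"))               # True
print(get_attributes("uid=sascha,ou=people", "givenName", "role"))
```

A real deployment would of course bind and search against an actual LDAP server rather than a dict, but the three operations (authenticate, retrieve attributes, authorize) look just like this.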

I believe that most developers, at some point in time, had to deal with an LDAP server. So I also believe that most developers will agree with what I just described.



OAuth

OAuth is a framework that enables applications (clients) to gain access to resources without receiving any details about the users they are being used by. To make it a little more visual I am introducing an example:


The very cool app 'FancyEMailClient'


In the old days:

  • for each email provider the user provides details such as smtp server, pop3 server, username, password on a configuration page within FancyEMailClient
  • FancyEMailClient now accesses all configured email accounts on behalf of the user. More precisely, FancyEMailClient is acting AS the user!
  • The user has shared all details with FancyEMailClient ... I must say, it feels a little fishy ... don't you agree?


In the days of OAuth:

  • FancyEMailClient is an oauth client and gets registered at each email provider that should be supported
  • FancyEMailClient does not ask users for any email provider details whatsoever
  • FancyEMailClient delegates authentication and authorization to the selected email provider via a redirect_uri
  • FancyEMailClient retrieves an access_token and uses this token at an API such as /provider/email to retrieve the users emails. The access_token may be granted for 'scope=email_api'
  • FancyEMailClient has no clue who the user is and has not seen any details such as username or password
  • This is perfect in regard to the user's privacy needs ... However, FancyEMailClient would like to display a message such as 'Hello Sascha' if 'Sascha' is the user ... but it cannot ...
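The delegation step above starts with an OAuth authorization request. Here is a minimal sketch of how such a request URL could be built; the endpoint, client_id, redirect_uri and the 'email_api' scope are all made-up values, not a real provider's API.

```python
from urllib.parse import urlencode

# All endpoint and client values below are hypothetical.
params = {
    "response_type": "code",                     # authorization_code flow
    "client_id": "fancy-email-client",
    "redirect_uri": "https://fancyemailclient.example/callback",
    "scope": "email_api",                        # permission to call /provider/email
    "state": "xyz123",                           # protects against CSRF
}
auth_url = "https://emailprovider.example/oauth/authorize?" + urlencode(params)
print(auth_url)
```

The user's browser is redirected to this URL, authenticates at the provider, and FancyEMailClient never sees the credentials.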

OpenID Connect

As I explained above, a client does not get any details about the current user. But since most applications would at least like to display a friendly message such as 'Hello Sascha', there needs to be something to help them.

To stick to the email provider example, before OpenID Connect (OIDC) was born, these providers simply created oauth protected APIs (resources) that would return details about the current user. Users would first give their consent, afterwards the client would get the username or firstname and would display 'Hello Sascha'.


Since this became a requirement for almost any oauth client we now have a common way of doing it, specified in OpenID Connect. OIDC specifies SCOPE values, a /userinfo API and an 'id_token' which represents an authenticated user.
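An id_token is a JWT, three base64url-encoded parts joined by dots (header.payload.signature). The sketch below decodes a payload using a made-up set of claims and deliberately skips signature validation, which a real client must never do.

```python
import base64
import json

def b64url_decode(part: str) -> bytes:
    # base64url comes without padding; re-add '=' before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

# Made-up claims; a real id_token is issued and signed by the provider.
claims = {"iss": "https://emailprovider.example", "sub": "248289761001",
          "aud": "fancy-email-client", "name": "Sascha"}
payload_part = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")

decoded = json.loads(b64url_decode(payload_part))
print(decoded["name"])  # Sascha
```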


In order to enhance the oauth version of FancyEMailClient the developer of it would only have to do a few little things:

  • when requesting access to emails, also request access to user details. The request would now have to include something like '...&scope=openid+profile+email+email_api&...' (scope == permissions like access control)
  • during the authentication and authorization flow the user would not only grant access to his emails but also to his personal details
  • FancyEMailClient would now receive an access_token which could not only be used at /provider/email but also at /provider/userinfo
  • FancyEMailClient can now display 'Hello Sascha'!
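With the extended scope the client can call /provider/userinfo and greet the user. A sketch of handling the response: 'sub', 'given_name' and 'email' are standard OIDC claims, while the values and the endpoint name are made up.

```python
import json

# Hypothetical /provider/userinfo response body
userinfo_body = '{"sub": "248289761001", "given_name": "Sascha", "email": "sascha@example.com"}'

claims = json.loads(userinfo_body)
greeting = f"Hello {claims['given_name']}"
print(greeting)  # Hello Sascha
```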


Now the big question: how does it all come together?

LDAP servers are the only component that exists without OAuth and OpenID Connect. LDAP servers are always the source of users (and maybe also clients and other entities). LDAP servers have always been used to authenticate users and authorize them for resources. OAuth and OpenID Connect cannot be supported if no LDAP server is available. OAuth and OpenID Connect are protocols only, not systems to manage users.


Below is a picture which I have created to show an example system:



My former manager Jay would now lean back in his chair, put his hands behind his head, put his feet onto a table and would say: Let me explain!


Case: OAuth

  1. When a user selects an email provider within FancyEMailClient his browser gets redirected to that provider. It is an OAuth authorization request and includes oauth SCOPE values. To access the API /provider/email a SCOPE value such as 'email_api' may be included. I say 'may' because there is no standard SCOPE for that. To also gain access to the user details other SCOPE values need to be included. That is more straightforward since they have been specified within OpenID Connect. 'openid profile email' would be sufficient and is supported at practically all OIDC providers. At the end of the flow FancyEMailClient gets back an OAuth authorization_code.
  2. The user only shares his credentials with EMailProvider. He types them into EMailProvider's login page and EMailProvider will validate them against its LDAP server. (The LDAP server may be a database or any other system that maintains user details.)
  3. After receiving the OAuth authorization_code FancyEMailClient exchanges this short lived token for an OAuth access_token. That access_token provides access to resource APIs. I hope it is obvious that this 'exchange' request is a backchannel request, no browser is involved!
  4. FancyEMailClient accesses /provider/email and /provider/userinfo by providing the OAuth access_token which it received earlier. Although both APIs require an access_token, there is one difference: /provider/userinfo is an OpenID Connect API whereas /provider/email is an API proprietary to the EMailProvider. Let's call it a plain OAuth protected API.
  5. In this area I wanted to emphasize the role of the LDAP server. As you can see, it is involved in almost all requests.
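Step 3 above, the back-channel exchange, is a plain HTTP POST to the provider's token endpoint. A sketch of the form body: the parameter names are standard OAuth 2.0, while the code, client_id and secret are placeholders.

```python
from urllib.parse import urlencode, parse_qs

# Back-channel request body; no browser is involved in this exchange.
token_request_body = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",         # short-lived code from step 1
    "redirect_uri": "https://fancyemailclient.example/callback",
    "client_id": "fancy-email-client",
    "client_secret": "s3cr3t",                # sent back-channel only, never via the browser
})
print(token_request_body)
```

The response to this POST contains the access_token used in step 4.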


Case: The old days

The same app without using OAuth would probably look something like this:

LDAP only


A user would share his credentials with FancyEMailClient. And he would do this for every single provider he has an account with. FancyEMailClient would probably also ask for other details so that an API such as /provider/userinfo would not even be necessary. FancyEMailClient would now collect all this sensitive data and could do whatever it wants with it. That is a big disadvantage!

Another disadvantage is the fact that the user's credentials are now used for every single request. That increases the chances of them being exposed.



OAuth, OpenID Connect and LDAP are connected with each other. But I hope it becomes visible which component plays which role and one cannot replace the other. You may say that my explanation is very 'black & white', but I still hope that it clarifies the overall understanding.

As usual, let me know if you have any questions, positive feedback or constructive criticism.


Best regards!

Hi everybody!


I just uploaded my first YouTube video. Nothing crazy and nothing wild, it is about OAuth. It is a short video explaining how the authorization_code flow works. You will find it here:


OAuth 2.0 - Authorization Code flow - YouTube 



Sascha Preibisch

Persistent Consent

Posted by Sascha Preibisch Employee Dec 18, 2017

Hi everybody!


This blog post is dedicated to Paul who represents one of our customers. He has asked me to give him an idea on how he could implement persistent consent in OTK until OTK supports that feature out of the box.


Persistent Consent vs. Session Consent

In OpenID Connect clients can use parameters such as prompt=none to advise the server to not request consent from the resource_owner again if he has already granted access to his resources in the past. Today OTK remembers a given consent as long as an active refresh_token is available (I call this Session Consent, not sure if there is an official term for that). The token must have been granted for the combination of resource_owner, client_id and scope.

In comparison, Paul and others have said: well, Sascha, the server should remember the decision independently of an active refresh_token (Persistent Consent).


Interim solution

We have listened and will provide this feature in a future release of OTK. Until then, I am providing an interim solution which you can implement yourself today.

DISCLAIMER: the official implementation in future OTK may look different so please be prepared to have your users see the consent screen again. Also, my version did not go through QA so please do some testing for yourself before deploying this in production. The status of this provided implementation is this: works on my machine!


Implementation for OTK-4.1

Only two policies need to be updated:

  1. /auth/oauth/v2/authorize/login: find a given consent
  2. /auth/oauth/v2/authorize/consent: persist consent


In OTK a client can have multiple client_ids. For example, your client SuperCoolApp may have a client_id for the mobile app and one for the JavaScript version of it. In that case you may want to remember the given consent decision not only for the current client_id but for any client_id of that client. This means giving consent for the mobile app will also prevent the resource_owner from seeing the consent screen when using the JavaScript version.


Changes in /auth/oauth/v2/authorize/login

Extract the value client_ident from the active session object by adding the two lines below. That value represents the client, not only one single client_id (the line numbers may not match but the 'area' should be correct).


Extract client_ident

  • line 121:
    • XPath: /authorize/client_ident
    • Variable Prefix: authorizeClientIdent


Next, decide whether you want to respect past consent decisions only in conjunction with prompt=none or always. By default prompt is required! If you want to ignore the given prompt value disable the branch that checks for it:

Ignore prompt

I am not encouraging you to ignore prompt but I know that I have received this request!


Now move existing policies into an All assertions ... container:


Before using all container

After (moving lines 167, 168, 169 from above into the new All assertions ... on line 167 below):

After using all container

When adding container assertions such as All assertions must ... or At least one assertion ... ALWAYS add comments on the right side that start with // .... This makes the policy easier to read.


The behaviour has not changed yet. Now we are adding logic to look up a consent decision without caring whether an active refresh_token exists. This includes the introduction of an At least one assertion ... block. That is required if you want to continue to support Session Consent!:

Logic to support persistent consent


Let's go through this line by line:

  • line 169: new At least one assertion ... block
    • this block contains the All assertions ... blocks of line 170 and line 177
    • line 170 handles persistent consent
    • line 177 is the one from above where we moved existing assertions into a new block
  • line 171: find existing consent decision
    • we are using the assertion OTK Session GET which is part of OTK
    • Details:
      • Max age for cache values: 3600
      • Name of cache to be used: otkPersistentConsentCache
      • Key for cache entry: ${resource_owner}${client_ident}
  • line 172 - 173: check if consent exists, extracting the JSON consent message
    • creating the initial consent message is handled further down in this post
    • Details line 173:
      • Encode/ Decode: URL Decode
      • Source Variable: sessionValue.result
      • Target Variable: consentMessage
      • Data Type: Message
      • Content Type: application/json
  • line 174: find granted scope
    • it's important to present the Consent screen if the client has not requested the SCOPE in the past (no matter what)!
    • Details:
      • Expression: granted_scope
      • Other Message Variable: consentMessage
      • Variable Prefix: xpathScope
        • we are not using xpath, but using this name allows us to reuse a few lines of policy further down
  • line 175: find the client
    • to be fail safe extract the client_ident of the past
    • Details:
      • Expression: granted_client
      • Other Message Variable: consentMessage
      • Variable Prefix: granted_client
  • line 176: compare the current client against the one from the past
    • ${client_ident} equalsTo(${granted_client.result})


It should look like this:

Result for changes at login


Changes in /auth/oauth/v2/authorize/consent

The good news: it's just a two-liner! Look for the block on line 96 of the screenshot below:

Finding the right spot


Open that block and go right to the end. It ends with an assertion named OTK Session - Delete. Right after that we will add 2 assertions:

New assertion in consent


Lines 155 and 156 are the new ones.

  • line 155: Set Context Variable. This is where we are creating the content for a given consent. You can certainly add other values as you desire
    Consent message
  • line 156: Creating the persistent session
    Consent session


That is all there is. Here are some thoughts though:

  • Get an idea on how many consent decisions you want to persist. Specify the value Max. number of entries in the dialog above appropriately
  • Max. database age vs. Max. age for cache: the database age is the persistent memory (this example: 90 days). During this time a consent screen will not be displayed.
    The Cache age is simply the one that helps avoiding accessing the database.
    IMPORTANT: The cache age here has to match the cache age used at /auth/oauth/v2/authorize/login when using OTK Session GET
  • Although this solution works out of the box you could certainly choose to build your own assertion that works with your own, dedicated database table that you may have or want to create for this purpose
  • Client vs. Client_id: if your client has multiple client_ids which are registered for different valid SCOPE values you may not want to use client_ident but the current client_id as part of your session key
  • Revoking Persistent Consent was not covered here. Please think about the scenario on how you want to support resource_owners revoking their given consent. If you are leveraging the revocation endpoint you can add logic there to support it
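To summarize the flow implemented above, here is a pseudo-model of the lookup in plain code. An in-memory dict stands in for the OTK session cache/database, and all names and values are made up for illustration.

```python
import time

# key: resource_owner + client_ident -> persisted consent record
consent_store = {}

def persist_consent(resource_owner, client_ident, scope, max_age=90 * 24 * 3600):
    # what /auth/oauth/v2/authorize/consent would persist
    consent_store[resource_owner + client_ident] = {
        "granted_scope": scope,
        "granted_client": client_ident,
        "expires_at": time.time() + max_age,   # Max. database age, e.g. 90 days
    }

def consent_required(resource_owner, client_ident, requested_scope):
    # what /auth/oauth/v2/authorize/login would check
    record = consent_store.get(resource_owner + client_ident)
    if record is None or record["expires_at"] < time.time():
        return True   # no persisted decision: show the consent screen
    if record["granted_client"] != client_ident:
        return True   # fail safe: decision belongs to a different client
    granted = set(record["granted_scope"].split())
    # present the screen again if any requested SCOPE was never granted
    return not set(requested_scope.split()).issubset(granted)

persist_consent("sascha", "client-42", "openid profile email")
print(consent_required("sascha", "client-42", "openid email"))   # False
print(consent_required("sascha", "client-42", "email_api"))      # True
```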



I hope this post gives you an idea on how you can implement Persistent Consent rather than Session Consent. Even if you are not going for the solution described here it may still give you an idea of how features like these can be implemented very easily. If you are looking forward to seeing this feature in OTK in the future please give us feedback so that we can build it the way you want it.


Paul, I hope this is what you were looking for!


As always, thanks for positive feedback and constructive criticism.

Best regards, Sascha

Hi everyone!


Just in time for the weekend I have published a new tutorial that shows in detail how to build APIs that retrieve resources from a local datasource or a remote API. As an example I am accessing a database.


The shown building blocks allow a developer to implement APIs without having to know where the resources are located. The source for this tutorial is available as RESTMan bundle and can easily be imported into a CA API Gateway using the RESTMan API. 


Here are two images to give you an idea what is being shown:


Service and Data APIs:

APIs for local or remote resource retrieval


Encapsulated assertions that transparently retrieve data from a local database or via a data API:

Retrieve data from local database or remote API


Find the tutorial in the project Tutorials here: CA APIM on github.

Open the file index.html and select Encapsulating access to resources.


As always, let me know if it helps and what you like and do not like.

Hello everyone!

CA World 2017 was a very good event. We had a good time showing our products and very good discussions with our customers and soon-to-be customers.

At the pre-conference session that was about news in OTK-4.1 I was using a SOAPUI project and promised to make it available for everyone.

Well, I am excited to announce that we now have a new project in our public GitHub repository that is dedicated to tutorials. Hopefully, over time, we are able to add more helpful content, maybe also with help of our user community.


Please find the repository here: CA APIM - Open Source and select the project Tutorials.


To make this a successful way of providing tutorials and examples it's important that you take a moment to have a look at the README content.


As always, please let us know if this is helpful!

Sascha Preibisch

CA World 2017

Posted by Sascha Preibisch Employee Nov 9, 2017

CA World 2017 (13.11.2017 - 17.11.2017)


Another year has passed and next week I am off to Las Vegas for another great event! Just as many of my colleagues.


I am writing to share with you which talks I am presenting:


  1. A pre-conference session (DO1X106E) about news in our OpenID Certified OTK implementation (CA APIM OAuth Toolkit 4.1). This includes a demo for which I am using SOAPUI. Since many of you have asked me about example SOAPUI projects I am taking the chance to provide it to you
  2. A pre-conference session (DO1X118E) about microservices security including a very cool preview of an upcoming feature
  3. A pre-conference session (DO1X117E) about scalable microservices environments
  4. A TechTalk (D01T52T) during the actual conference days in the DevOps API area. The topic is the same as for the 3. pre-conference session, but more compressed to include only the highlights


I would be happy to see you there to meet and chat and pick up your thoughts on what we are doing good and what we should improve! Come and find me at the SMART bar in the DevOps API area if not during the sessions.

Hi everybody!

This week's tip is meant for anybody implementing oauth protected APIs on the CA API Gateway or CA Mobile API Gateway. With both products the OAuth Toolkit (OTK) will be leveraged.


Here is the tip: Use variables set by 'OTK Require OAuth 2.0 Token'

When implementing oauth protected APIs the main assertion to use is named OTK Require OAuth 2.0 Token. That assertion finds an incoming access_token and validates it. If the given token is invalid the assertion fails and returns an error. If the token is valid a few variables are set. And those can be very useful when it comes to requiring more than just the token itself.

Here is a list of those variables and what they contain:

  • access_token: the token that was used by the client. This is mainly for informational purposes
  • session.client_id: the client_id of the client that has requested the token initially. This is for informational purposes but it could also be used to look up other associated values of this client
  • session.scope: the granted SCOPE for this token. The content is a space separated list of values. It is useful to implement branches within an API that retrieves data based on the SCOPE. An example can be found at /openid/connect/v1/userinfo. That API first requires the SCOPE=openid (configured in OTK Require OAuth 2.0 Token). Further down it checks if the granted SCOPE includes values such as email or profile. Have a look how this variable is used with OTK SCOPE Verification
  • session.subscriber_id: this is the username of the resource_owner that has granted the initial authorization request. If no consent was required during the token issuing process (e.g.: grant_type=password) it's simply the authenticated user. If the token was issued via grant_type=client_credentials the value will be the name of the client
  • session.expires_at: the timestamp at which the token expires
  • session.custom: this contains a JSON structure. The content contains values that were specified when the oauth client was registered in OAuth Manager. It also contains runtime information. In order to learn about the content, and since it varies, do the following during development: Use an Audit Detail assertion to log the content of '${session.custom}'. Afterwards extract values using the Evaluate JSON Path Expression assertion when you know what you want to extract. By default values such as the following are available:
    • client_type: either 'confidential' or 'public'
    • grant_type: the grant_type used to obtain the token
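A sketch of what an API policy might do with these variables, written in plain code with made-up values. On the Gateway you would use the OTK SCOPE Verification and Evaluate JSON Path Expression assertions instead.

```python
import json

# Example values as 'OTK Require OAuth 2.0 Token' might set them
session_scope = "openid profile email email_api"
session_custom = '{"client_type": "confidential", "grant_type": "authorization_code"}'

granted_scopes = set(session_scope.split(" "))   # space separated list
custom = json.loads(session_custom)

include_email = "email" in granted_scopes            # branch like /userinfo does
confidential_only = custom["client_type"] == "confidential"
print(include_email, confidential_only)
```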

What to do with those variables

Now that you know about these variables you can implement use cases such as the following:

  • grant access only if the token was obtained via a specific grant_type
  • extract attributes of the current user to pass them on to the backend service
  • grant different access to resources depending on the client type
  • implement rate limiting based on the access_token for special cases


I hope this helps, and as usual, let me know if you need more information or details or if you have got other related questions!

I am happy to share with you that OAuth Toolkit 4.1 (OTK-4.1) has been released last week. 


If you have previously installed OTK-4.0 you are now able to upgrade to OTK-4.1 without losing your customizations. We are happy that this is now supported! (Please read the documentation when doing that.)


Here are a few links that I think are useful to get started:


As usual, please let us know how you like this version and share any suggestions for future enhancements you would like to see.

Sascha Preibisch

We are certified!

Posted by Sascha Preibisch Employee Jul 14, 2017

OpenID Certification has been achieved!


I am happy to announce that OTK-4.1 has successfully been certified for the following profiles:

  • OpenID Provider Basic
  • OpenID Provider Config


CA API Gateway and CA Mobile API Gateway are listed on the official web site of OpenID Providers: OpenID Certification | OpenID  


OTK-4.1 will be released soon, at which point you will be able to leverage improved and new features such as:

  • OpenID Connect Discovery
  • OpenID Connect Dynamic Registration
  • OpenID Connect JWKS_URI
  • Refresh Token can be configured to be re-used
  • Refresh Token can be configured to keep the original expiration date
  • Enhanced customizations
  • Easy upgrade process


I hope this is good news for any OTK user!

Hi everybody!


I wanted to share with you that OTK will soon be certified for the OpenID Connect Basic Profile. Maybe for even more profiles!


If you are looking for certified solutions you can soon count OTK as one of your options. Although OTK has supported OpenID Connect features for a while now, the "stamp" is still missing. I will post more details as soon as we have shipped the next version.


Another requested feature will also be available:

  • being able to configure OTK to accept refresh_token multiple times
  • being able to configure OTK to issue new refresh_token but keeping the original expiration date


We are sure that these enhancements will make OTK even more valuable.


All the best!

Sascha Preibisch

API Error Handling

Posted by Sascha Preibisch Employee Apr 20, 2017

Hi everybody!


This week I did a talk about API error handling within CA APIM OAuth Toolkit (OTK). For that I have created a list of general requirements that are important when dealing with APIs. I decided to share that list here since it may be useful for others too.


Here we go:


  1. The API owner must be in control of error messages. This sounds like a given but especially when choosing a middleware product it should be evaluated if internal errors may be returned instead of ones created by the API owner/developer. That is not desired
  2. APIs should return correct error messages. This is another one that should be a given. However, if this is not the case developers will be very confused
  3. Error messages should not reveal sensitive information. The error message should not expose implementation details such as stacktraces. Error messages should be as general and as specific as possible at the same time. For example, returning "authentication failed due to invalid credentials" is general but also specific enough. It would be wrong to return "authentication failed due to the incorrect password 'xyz'"
  4. Error messages should be returned in an expected message format. If the API consumes and produces JSON messages error messages should also be returned in JSON
  5. Error messages should be maintained in a single location. This may be controversial and depends on the API development environment. But if many APIs have to be managed a system that has a central location for maintaining error messages may be used. Otherwise, if the error messages are formulated within those APIs directly, it may be difficult to change or fix them
  6. Same errors should always cause the same error message. If an API implements parameter validation and fails, the produced error message should be the same across all APIs that implement the same validation. This should be consistent for all types of errors
  7. All possible error responses should be documented. Do not let your API consumers guess what errors may occur. Document all possible errors that may be returned. This includes potential reasons for a failed request and also solutions how this can be fixed. For example, if the error says "token is invalid" you may want to document "the given access_token has expired. Repeat the request using a valid access_token"
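As an example for requirements 3 and 4, here is an error response that is general yet specific enough and is returned as JSON. The field names follow the common OAuth error style; the exact format is up to the API owner.

```python
import json

# Specific enough to act on, general enough to not leak implementation details
error_response = {
    "error": "invalid_token",
    "error_description": "The given access_token has expired. "
                         "Repeat the request using a valid access_token.",
}
body = json.dumps(error_response)
print(body)
```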


Financial API (OpenID Foundation - FAPI)

FAPI is still in an early draft but has documented an approach for handling errors. It may be worth taking a look at it:

FAPI error handling - proposal


As usual please leave a comment with any questions or ideas or different views on this topic if you like.

Hello everybody!

I am happy to share with you that OTK and MAG 4.0 have been released. Both come with smaller and bigger changes. The main enhancements were made on this topic:

  • Enable the easy button upgrade


Easy upgrade

OTK and MAG now have specific policies that are marked as customizable while others are either implemented as, or suggested to be, read only. This enables us to provide a smooth upgrade procedure. Whatever has been implemented in the customizable policies will NOT be replaced with an upgrade.



In the area of customizations we are distinguishing between two different types:

  1. configurations: configure attributes such as token lifetimes in dedicated policies
  2. extensions: extend features OTK provides such as adding your own grant_type in dedicated policies


Both types are implemented as policy fragments that are included in read only policies. The ones made for configurations are generally named like the target policy but with a '#' sign as prefix (e.g.: OTK Token Lifetime Configuration becomes #OTK Token Lifetime Configuration). In this case you would copy a variable from OTK Token Lifetime Configuration and paste it into #OTK Token Lifetime Configuration and set it to your desired value.

The policies for extending OTK's features are generally named with a suffix such as ' ... Extension' (e.g.: OTK User Authentication becomes OTK User Authentication Extension).



Here is a list of my personal highlights of the new release:


  • Upgrades as of 4.x are easy
  • OTK now supports a configurable number of sessions per resource_owner/client which was limited to 1 in the past
  • OTK now supports JWT signatures with RS256 out of the box
  • OTK now maintains the database (MySQL and Oracle) through a scheduled task. There is no need anymore for an external cron-job
  • Extensions can be used to support custom id_token creation and validation
  • Extensions can be used to support custom grant types
  • OTK uses only two single locations where local cache assertions are used. With that it's very easy to replace the local cache assertion with a remote cache assertion and it's also easy to audit all data that is being cached and retrieved
  • The Authorization Server's login and consent pages can be modified easily
  • Many configurations/extensions can be configured per client by leveraging the custom field per client and client_id in OAuth Manager. This should introduce huge flexibility


MAG received updates but here is my favourite one:

  1. A new feature is the so called enrollment process. This enables an enterprise to publish an app in the app store with a minimal configuration. The full configuration can be retrieved at runtime which makes it very flexible


I surely missed some other enhancements but I wanted to highlight the ones above. If you have further questions or if you need guidance on how to use OTK and MAG as of now please leave a comment. For the complete documentation please go here and search for CA API Management OAuth Toolkit or CA Mobile API Gateway documentation.

Hi everybody!

This week's tip is meant to make life in a development environment easier. If you are in an environment where multiple developers have their own instance of a CA API Gateway but also share it with others, this tip is for you.


Here is the tip: increase the number of login attempts

You may have discovered that different developers use similar usernames when logging in to Policy Manager. For example, you may use 'admin' or 'administrator' with a simple password such as 'password' on your own CA API Gateway. Other developers may use 'admin' but with a password such as 'Password' on their CA API Gateway.


If you now ask one of the others to connect to your CA API Gateway they may attempt to login via 'admin/Password' by mistake. Unfortunately, after three failed attempts your 'admin' account will be blocked since your 'admin' password is 'password'.


To get you out of this annoying situation you can configure a cluster-wide property that sets the number of failed login attempts allowed before the account gets blocked. Another cluster-wide property lets you configure the lockout duration.


In Policy Manager open Tasks - Global Settings - Manage Cluster-Wide Properties and configure these variables:

  • logon.maxAllowableAttempts: the number of failed login attempts before the account gets blocked
  • logon.lockoutTime: the duration in seconds before another login attempt can be made


Use those variables with care in a production environment but make use of them in a dev environment if possible.


I hope this helps!

Hi everybody!

Some of you who have built policies have also used assertions called Look Up In Cache and Store to Cache. Usually they are used to improve performance or to keep track of session data across multiple requests and responses. Those assertions work very well for that. But there is something you need to know about them ... .


Here is the tip: using Cache ID and Maximum entry age correctly!

When using Store to Cache the following values can be configured:

  • Cache ID: the name of your cache
  • Cache entry key: the key used to identify a specific entry
  • Maximum entries: the number of entries this cache should accept
  • Maximum entry age: the lifetime of an entry
  • Maximum entry size: the size per entry


At a first glance this makes sense and does not raise any questions.

But there is the catch:

The assertion will always maintain exactly one Maximum entry age per Cache ID

Look at this example:

  • Cache ID: myCacheID, Cache entry key: myKey, Maximum entry age: 300
    • the entry for myKey will be cached for 300 seconds
  • Cache ID: myCacheID, Cache entry key: myNewKey, Maximum entry age: 600
    • the entry for myNewKey will be cached for 600 seconds

At this point the entry for myKey has been removed from the cache, since the new lifetime for this cache is 600!
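The pitfall can be illustrated with a small Python model. This is only a sketch of the behaviour described above (the class and its internals are my own invention, not the Gateway's implementation): storing an entry with a new Maximum entry age invalidates the entries stored under the old age.

```python
import time

class StoreToCacheModel:
    """Toy model of 'Store to Cache': one Maximum entry age per Cache ID."""

    def __init__(self):
        self.entries = {}    # cache entry key -> time the entry was stored
        self.max_age = None  # the single entry age the whole cache maintains

    def store(self, key, max_age):
        if self.max_age is not None and max_age != self.max_age:
            # A new Maximum entry age replaces the old one for the whole
            # cache, so the existing entries are dropped.
            self.entries.clear()
        self.max_age = max_age
        self.entries[key] = time.time()

    def lookup(self, key):
        stored = self.entries.get(key)
        if stored is None or time.time() - stored > self.max_age:
            return None
        return key

cache = StoreToCacheModel()
cache.store("myKey", 300)     # cached for 300 seconds
cache.store("myNewKey", 600)  # new lifetime 600 -> "myKey" is gone
print(cache.lookup("myKey"))     # None
print(cache.lookup("myNewKey"))  # myNewKey
```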


How to use the local cache

It is important that you are aware of the behaviour described above; otherwise your caching strategy will not work as expected. The simplest solution is to always use the same Maximum entry age per Cache ID. Another simple solution is to build the Cache ID from the Maximum entry age it uses. If you are using variables that contain the values, do the following:

  • Cache ID: ${cacheID}${lifetime}
  • ...
  • Maximum entry age: ${lifetime}
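In Python terms, the workaround is a simple concatenation (the helper name below is mine): entries with different lifetimes end up in different caches, so no cache ever sees more than one Maximum entry age.

```python
def combined_cache_id(cache_id: str, lifetime: int) -> str:
    """Mirror of ${cacheID}${lifetime}: one cache per (name, lifetime) pair."""
    return f"{cache_id}{lifetime}"

print(combined_cache_id("myCacheID", 300))  # myCacheID300
print(combined_cache_id("myCacheID", 600))  # myCacheID600
```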


I hope this helps you build better policies!

Hi everybody!

I just received a question, the same one I received last week. Therefore I thought I'd start the series "Tip of the week" with little tips and tricks around building policies.


Here is the tip: debugging policy

Debugging policies can be difficult. We have the policy debugger and we have a debugging policy. Both tools help, but the out-of-the-box debugging policy does not produce really meaningful output. Therefore, here is how you should use it (well, how I use it, which may also work for you):


  1. Place an Audit Messages in Policy assertion into your service and configure it to always capture requests and responses
  2. Right-click the service
  3. Select the service properties, then select Enable policy debug tracing
  4. In the dialog Do you want to edit the debug trace policy now? select Yes
  5. In that policy, remove the "Audit Messages in Policy" assertion on line 5
  6. Now select the "Add Audit Details" assertion and let's work on that ...


That assertion has this content:

TRACE:${}${} policy.guid=${trace.policy.guid} assertion.number=${trace.assertion.numberstr} assertion.shortname=${trace.assertion.shortname} status=${trace.status}

As you can see it contains strings such as "policy.guid" and "assertion.number"; these strings are used as labels. That is all good, but it makes the audit result difficult to read. It also has "trace.status" at the end, although that may be the most important piece of information you want to look at. Here is an example of the output:



What you are interested in are error codes (trace.status) and policy line numbers (trace.assertion.numberstr). But they are hidden at the very end of the loooooong audit message.


Your new debugging policy

You can turn this message into something very helpful by replacing the content of the Add Audit Details assertion with this message:

TRACE: [${trace.status}][${trace.assertion.numberstr}][${trace.assertion.shortname}][${}]

I removed all labels, moved the important content to the front, and put the variables into [...] brackets. The brackets contain the following information:

  • 1st [...]: '0' if the assertion executed without error, >0 if an error occurred. Depending on your policy you may expect assertions to fail, such as comparisons!
  • 2nd [...]: the assertion line number. It tells you exactly which line was executed. If it shows something like [3.45], read it as: line 3 in the service is a fragment (or encapsulated assertion), and within that fragment (or encapsulated assertion) the policy failed on line 45. If line numbers are not shown for encapsulated assertions, open the encapsulated assertion configuration and enable Allow debug tracing into backing policy
  • 3rd [...]: the assertion name
  • 4th [...]: the policy name


The output looks like this now:

You will now find failing assertions more easily by just checking the value of the first bracket. The line number in bracket no. 2 then makes it easy to locate the failing assertion in the policy.
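As a made-up illustration (the status codes, line numbers, and names are invented for this example, not taken from a real trace), lines in the new format would look something like this:

```
TRACE: [0][3][Compare Expression][myService]
TRACE: [600][3.45][Apply XSL Transformation][myService]
```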


I hope this helps make policy debugging easier!