
I am happy to share with you that OAuth Toolkit 4.1 (OTK-4.1) was released last week.


If you have previously installed OTK-4.0 you are now able to upgrade to OTK-4.1 without losing your customizations. We are happy that this is now supported! (Please read the documentation when doing that.)


Here are a few links that I think are useful to get started:


As usual, please let us know how you like this version and share any suggestions for future enhancements you would like to see.

Sascha Preibisch

We are certified!

Posted by Sascha Preibisch, Jul 14, 2017

OpenID Certification has been achieved!


I am happy to announce that OTK-4.1 has successfully been certified for the following profiles:

  • OpenID Provider Basic
  • OpenID Provider Config


CA API Gateway and CA Mobile API Gateway are listed on the official web site of OpenID Providers: OpenID Certification | OpenID  


OTK-4.1 will be released soon, at which point you will be able to leverage improved and new features such as:

  • OpenID Connect Discovery
  • OpenID Connect Dynamic Registration
  • OpenID Connect JWKS_URI
  • Refresh Token can be configured to be re-used
  • Refresh Token can be configured to keep the original expiration date
  • Enhanced customizations
  • Easy upgrade process


I hope this is good news for any OTK user!

Hi everybody!


I wanted to share with you that OTK will soon be certified for the OpenID Connect Basic Profile. Maybe for even more profiles!


If you are looking for certified solutions you can soon count OTK as one of your options. Although OTK has supported OpenID Connect features for a while now, the "stamp" was still missing. I will post more details as soon as we have shipped the next version.


Another requested feature will also be available:

  • being able to configure OTK to accept refresh_token multiple times
  • being able to configure OTK to issue new refresh_token but keeping the original expiration date


We are sure that these enhancements will make OTK even more valuable.


All the best!

Sascha Preibisch

API Error Handling

Posted by Sascha Preibisch, Apr 20, 2017

Hi everybody!


This week I did a talk about API error handling within CA APIM OAuth Toolkit (OTK). For that I have created a list of general requirements that are important when dealing with APIs. I decided to share that list here since it may be useful for others too.


Here we go:


  1. The API owner must be in control of error messages. This sounds like a given, but especially when choosing a middleware product it should be evaluated whether internal errors may be returned instead of ones created by the API owner/developer. That is not desired.
  2. APIs should return correct error messages. This is another one that should be a given. However, if this is not the case developers will be very confused.
  3. Error messages should not reveal sensitive information. The error message should not expose implementation details such as stacktraces. Error messages should be as general and as specific as possible at the same time. For example, returning "authentication failed due to invalid credentials" is general but also specific enough. It would be wrong to return "authentication failed due to the incorrect password 'xyz'".
  4. Error messages should be returned in the expected message format. If the API consumes and produces JSON messages, error messages should also be returned in JSON.
  5. Error messages should be maintained in a single location. This may be controversial and depends on the API development environment. But if many APIs have to be managed, a system that has a central location for maintaining error messages may be used. Otherwise, if the error messages are formulated within those APIs directly, it may be difficult to change or fix them.
  6. The same error should always cause the same error message. If an API implements parameter validation and fails, the produced error message should be the same across all APIs that implement the same validation. This should be consistent for all types of errors.
  7. All possible error responses should be documented. Do not let your API consumers guess what errors may occur. Document all possible errors that may be returned. This includes potential reasons for a failed request and also solutions for how this can be fixed. For example, if the error says "token is invalid" you may want to document "the given access_token has expired. Repeat the request using a valid access_token".
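A minimal sketch of points 4, 5 and 6: a single error catalog that always yields the same JSON body for the same error. The names ERRORS and error_response are illustrative, not part of any product API:

```python
import json

# Point 5: one central catalog of error messages
ERRORS = {
    "invalid_token": (401, "The given access_token has expired. "
                           "Repeat the request using a valid access_token."),
    "invalid_request": (400, "A required parameter is missing or malformed."),
}

def error_response(code: str) -> tuple:
    """Return (http_status, json_body) for a known error code.

    The same error always produces the same message (point 6), and the
    body is JSON because the API consumes and produces JSON (point 4).
    """
    status, message = ERRORS[code]
    body = json.dumps({"error": code, "error_description": message})
    return status, body
```

Because every API looks the error up in the same catalog, fixing a message means changing one entry instead of hunting through every API.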


Financial API (OpenID Foundation - FAPI)

FAPI is still in an early draft but has documented an approach for handling errors. It may be worth taking a look at it:

FAPI error handling - proposal


As usual please leave a comment with any questions or ideas or different views on this topic if you like.

Hello everybody!

I am happy to share with you that OTK and MAG 4.0 have been released. Both come with smaller and bigger changes. The main enhancements were made on this topic:

  • Enable the easy button upgrade


Easy upgrade

OTK and MAG now have specific policies that are marked as customizable, while others should be treated as read-only. This enables us to provide a smooth upgrade procedure. Whatever has been implemented in the customizable policies will NOT be replaced by an upgrade.



In the area of customizations we are distinguishing between two different types:

  1. configurations: configure attributes such as token lifetimes in dedicated policies
  2. extensions: extend features OTK provides such as adding your own grant_type in dedicated policies


Both types are implemented as policy fragments that are included in read only policies. The ones made for configurations are generally named like the target policy but with a '#' sign as prefix (e.g.: OTK Token Lifetime Configuration becomes #OTK Token Lifetime Configuration). In this case you would copy a variable from OTK Token Lifetime Configuration and paste it into #OTK Token Lifetime Configuration and set it to your desired value.

The policies for extending OTK's features are generally named with a suffix such as ' ... Extension' (e.g.: OTK User Authentication becomes OTK User Authentication Extension).



Here is a list of my personal highlights of the new release:


  • Upgrades as of 4.x are easy
  • OTK now supports a configurable number of sessions per resource_owner/client, which was limited to 1 in the past
  • OTK now supports JWT signatures with RS256 out of the box
  • OTK now maintains the database (MySQL and Oracle) through a scheduled task. There is no need anymore for an external cron-job
  • Extensions can be used to support custom id_token creation and validation
  • Extensions can be used to support custom grant types
  • OTK uses only two single locations where local cache assertions are used. With that it's very easy to replace the local cache assertion with a remote cache assertion, and it's also easy to audit all data that is being cached and retrieved
  • The Authorization Server's login and consent pages can be modified easily
  • Many configurations/extensions can be configured per client by leveraging the custom field per client and client_id in OAuth Manager. This should introduce huge flexibility


MAG received updates too, but here is my favourite one:

  1. A new feature is the so-called enrollment process. This enables an enterprise to publish an app in the app store with a minimal configuration. The full configuration can be retrieved at runtime, which makes it very flexible


I surely missed some other enhancements, but I wanted to highlight the ones above. If you have further questions or if you need guidance on how to use OTK and MAG, please leave a comment. For the complete documentation please go here and search for CA API Management OAuth Toolkit or CA Mobile API Gateway documentation.

Hi everybody!

This week's tip is meant to make life in a development environment easier. If you are in an environment where multiple developers have their own instance of a CA API Gateway but also share it with others, this tip is for you.


Here is the tip: increase the number of login attempts

You may have discovered that different developers use similar usernames when logging in to Policy Manager. For example, you may use 'admin' or 'administrator' with a simple password such as 'password' on your own CA API Gateway. Other developers may use 'admin' but with a password such as 'Password' on their CA API Gateway.


If you now ask one of the others to connect to your CA API Gateway they may attempt to log in via 'admin/Password' by mistake. Unfortunately, after three failed attempts your 'admin' account will be blocked since your 'admin' password is 'password'.


To save you from this annoying situation you can configure a cluster-wide property that sets the number of failed login attempts allowed before the account gets blocked. Another cluster-wide property lets you configure the lockout duration.


In Policy Manager open Tasks - Global Settings - Manage Cluster-Wide Properties and configure these variables:

  • logon.maxAllowedableAttempts: the number of failed login attempts before the account gets blocked
  • logon.lockoutTime: the duration in seconds before another login attempt can be made


Use those variables with care in a production environment but make use of them in a dev environment if possible.


I hope this helps!

Hi everybody!

Some of you who have built policies have also used the assertions called Look Up In Cache and Store to Cache. Usually they are used to improve performance or to keep track of session data across multiple requests and responses. Those assertions work very well for that. But there is something you need to know about them ...


Here is the tip: use Cache ID and Maximum entry age correctly!

When using Store to Cache the following values can be configured:

  • Cache ID: the name of your cache
  • Cache entry key: the key used to identify a specific entry
  • Maximum entries: the number of entries this cache should accept
  • Maximum entry age: the lifetime of an entry
  • Maximum entry size: the size per entry


At a first glance this makes sense and does not raise any questions.

But there is the catch:

The assertion will always maintain exactly one Maximum entry age per Cache ID

Look at this example:

  • Cache ID: myCacheID, Cache entry key: myKey, Maximum entry age: 300
    • the entry for myKey will be cached for 300 seconds
  • Cache ID: myCacheID, Cache entry key: myNewKey, Maximum entry age: 600
    • the entry for myNewKey will be cached for 600 seconds

At this point the entry of myKey got removed from the cache since the new lifetime for this cache is 600!


How to use the local cache

It is important that you are aware of the behaviour described above; otherwise your caching strategy will not work as expected. The simplest solution is to always use the same Maximum entry age per Cache ID. One way to achieve that is to always create the Cache ID in combination with the used Maximum entry age. If you are using variables that contain the values, do the following:

  • Cache ID: ${cacheID}${lifetime}
  • ...
  • Maximum entry age: ${lifetime}
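The catch and the workaround can be simulated with a toy in-memory cache that, like the Store to Cache assertion, keeps exactly one maximum entry age per Cache ID (ToyCache is purely illustrative, not gateway code):

```python
import time

class ToyCache:
    """Toy in-memory cache mimicking the described behaviour:
    exactly one maximum entry age is kept per Cache ID."""

    def __init__(self):
        self.stores = {}  # cache_id -> (max_age, {key: (value, stored_at)})

    def store(self, cache_id, key, value, max_age):
        age, entries = self.stores.get(cache_id, (max_age, {}))
        if age != max_age:
            entries = {}  # new lifetime for this Cache ID: old entries are gone
        entries[key] = (value, time.time())
        self.stores[cache_id] = (max_age, entries)

    def lookup(self, cache_id, key):
        age, entries = self.stores.get(cache_id, (None, {}))
        hit = entries.get(key)
        if hit is None or time.time() - hit[1] > age:
            return None
        return hit[0]

cache = ToyCache()
# The catch: the second store replaces the lifetime and drops 'myKey'
cache.store("myCacheID", "myKey", "a", 300)
cache.store("myCacheID", "myNewKey", "b", 600)
# The workaround: compose the Cache ID with the lifetime
cache.store("myCacheID300", "myKey", "a", 300)
cache.store("myCacheID600", "myNewKey", "b", 600)
```

After this runs, "myKey" is gone from "myCacheID" but survives in "myCacheID300", which is exactly why composing the Cache ID with the lifetime works.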


I hope this helps building better policies!

Hi everybody!

I just received a question, the same one that I received last week. Therefore I thought I'll start the series "Tip of the week" with little tips and tricks around building policies.


Here is the tip: debugging policy

Debugging policies can be difficult. We do have the policy debugger and we do have a debugging policy. Both tools help, but the out-of-the-box debugging policy does not really produce meaningful output. Therefore, here is how you should use it (well, how I use it, which may also work for you):


  1. Place an Audit Messages in policy assertion into your service and configure it to always capture requests and responses
  2. Do a right-click on the service
  3. Select the service properties, then select Enable policy debug tracing
  4. In the appearing dialog Do you want to edit the debug trace policy now? select Yes
  5. In that policy you want to remove the "Audit Messages in Policy" assertion on line 5
  6. Now select the "Add Audit Details" assertion and let's work on that ...


That assertion has this content:

TRACE:${}${} policy.guid=${trace.policy.guid} assertion.number=${trace.assertion.numberstr} assertion.shortname=${trace.assertion.shortname} status=${trace.status}

As you can see, it contains strings such as "", ... "policy.guid", ... which are used as labels. That is all good, but it makes the audit result difficult to read. It also has the "trace.status" at the end, although that may be the most important information you want to look at. Here is an example of the output:



What you are interested in are error codes (trace.status) and policy line numbers (trace.assertion.numberstr). But they are hidden in the loooooong audit message, right at the end.


Your new debugging policy

You can turn this message into something very helpful by replacing the content of the audit detail assertion using this message:

TRACE: [${trace.status}][${trace.assertion.numberstr}][${trace.assertion.shortname}][${}]

I removed all labels and I moved the important content to the front. I have put variables into [...] brackets. The brackets will contain the following information:

  • 1: [...]: '0' if the assertion executed without error, >0 if an error occurred. Depending on your policy you may expect assertions to fail, such as comparisons!
  • 2: [...]: the assertion line number. It will tell you exactly which line was executed. If it shows something like [3.45] read it as: line 3 in the service is a fragment (or encapsulated assertion). Within that fragment (or encapsulated assertion) the policy failed on line 45. If the line numbers are not shown for encapsulated assertions open the encapsulated assertion configuration and enable Allow debug tracing into backing policy
  • 3: [...]: the assertion name
  • 4: [...]: the policy name


The output looks like this now:

You will now find failing assertions more easily by just checking the value of the first bracket. Bracket no. 2 shows the line number, which makes it easy to locate the failing location in the policy.


I hope this helps making policy debugging easier!

Federation with OpenID Connect

I have been asked to write a blog post about federation in the context of OpenID Connect. Before I continue I would like to mention that a draft for OpenID Connect Federation exists. To see details please visit this website.

In this blog post I am not referencing that draft but I want to explain a few components that exist in OpenID Connect that can be used in any case.



At the end of this blog post you will know how to leverage OpenID Connect Discovery, JSON Web Key (JWK) and JSON Web Token (JWT). The bottom of this blog post shows a screenshot of my implementation. It starts where the authorization_code gets exchanged for an access_token.


The complete message flow this blog post is based on follows the OAuth authorization_code flow (RFC 6749). I am assuming that you understand that flow. This blog post should help you understand how to use an id_token.


Important APIs

For this blog post we are referencing two APIs found in the context of OpenID Connect:

  1. /.well-known/openid-configuration (Specification for Discovery)
  2. /jwks_uri (referenced in Discovery)

Those APIs (and with that the specifications) enable anyone to configure oauth clients and validate JWT. The JWT in this case contains an OpenID Connect id_token and is digitally signed using RS256 as the algorithm.


On a side note:

JWT and id_token are often used synonymously. Even in this blog post you will find that. Please remember:

  • id_token: a JSON message with a well defined structure based on OpenID Connect
  • JWT: a base64url encoded string containing a jwt-header, jwt-payload, jwt-signature
  • in the context of OpenID Connect: jwt-payload --> base64url encoded id_token!
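The relationship can be demonstrated in a few lines, using a hand-made, unsigned toy token (all values are illustrative; real tokens carry an RS256 signature and a real issuer):

```python
import base64
import json

def b64url(data: bytes) -> str:
    """base64url encode without padding, as used by JWT."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text: str) -> bytes:
    """base64url decode, restoring the stripped padding."""
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

# The id_token is just a JSON message (illustrative values)
id_token = {"iss": "https://login.example.com", "sub": "user-123", "aud": "client-abc"}

# The JWT is jwt-header . jwt-payload . jwt-signature, each part base64url encoded
header = b64url(json.dumps({"typ": "JWT", "alg": "RS256"}).encode())
payload = b64url(json.dumps(id_token).encode())
jwt = header + "." + payload + ".placeholder-signature"

# And back: the jwt-payload decodes to the id_token
decoded = json.loads(b64url_decode(jwt.split(".")[1]))
```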

How these APIs relate to each other

Here is a brief explanation of these APIs:

  • /.well-known/openid-configuration
    • Idea: a list of details that are supported by an OAuth/ OpenID Connect provider
    • Content: "location of authorization endpoint", "location of token endpoint", "supported SCOPEs and claims", "algorithms for JWT signatures", information like that
    • In this blog post we will act as a client of Microsoft (MS) and will leverage their discovery API
    • Details: Discovery - details
  • /jwks.json:
    • Idea: a list of public keys (certificates) that can be used to validate JWT
    • In this blog post we will act as a client of Microsoft and will leverage their jwks API
    • Example: Microsoft's jwks_uri


Here is an image visualizing the connection between different APIs:

Overview of important OpenID Connect APIs

Here is the cool part of the API relationship:

  • as a provider you only have to publish the Discovery endpoint (/.well-known/openid-configuration). All other APIs are referenced in its JSON response
  • as a client you only need to know the location of the discovery endpoint and you'll get all others
  • the image also shows the /register endpoint which allows clients to register themselves as an oauth client. We are not using that in this post
  • /authorize and /token are the usual oauth endpoints
  • /jwks.json contains all keys (certificates) required to validate JWT
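In client code, knowing only the discovery document is enough to resolve every other endpoint. This sketch parses a trimmed, made-up discovery response (login.example.com is fictional); in practice you would fetch it from /.well-known/openid-configuration:

```python
import json

# Trimmed, illustrative example of a discovery response
discovery_doc = json.loads("""
{
  "authorization_endpoint": "https://login.example.com/authorize",
  "token_endpoint": "https://login.example.com/token",
  "registration_endpoint": "https://login.example.com/register",
  "jwks_uri": "https://login.example.com/jwks.json",
  "id_token_signing_alg_values_supported": ["RS256"]
}
""")

# A client resolves all other endpoints from this single document
authorize_url = discovery_doc["authorization_endpoint"]
token_url = discovery_doc["token_endpoint"]
jwks_url = discovery_doc["jwks_uri"]
```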


Leveraging MS Office 365 accounts

As an example I am now explaining how I configured the OTK (OAuth Toolkit) authorization server to accept an Office 365 id_token to log users in to OTK.


NOTE: I have got a new website at which you can see this example working when you select "Sign in with Microsoft": You can try it and remove your registration afterwards if you wish. 


The complete story looks like this:

  1. register yourself as a developer and create an oauth application. Important: register the correct redirect_uri at which you want to receive an authorization_code!
  2. build an application that uses the oauth credentials received from Microsoft for your app
  3. implement the redirect_uri. Handle success cases but also error cases
  4. implement the validation of the JWT
  5. implement the validation of the id_token
  6. extract any value of the id_token that you want to use as the username, i.e.: "name" or "email"


It looks like a lot to do ... and it is .. but depending on your oauth/oidc provider it may be as simple as peeing a hole into the snow or as complicated as restoring a 1973 Corvette. In my case it was not complicated.


Register an oauth client with Microsoft

I registered myself as a developer with Microsoft and created an oauth application named "Saschas Authorization Server". With that I received oauth client credentials. You can register yourself here: register

On my authorization server's login page I have included a logo which, when clicked, initiated the social login flow with Microsoft. At the end of this flow I received an authorization_code at my redirect_uri.


Implement the redirect_uri

At this API the authorization_code will be received and exchanged for an access_token (actually, in the case of MS, you will receive an id_token packaged as JWT instead of an access_token). The response looks something like this (shortened for readability):






In general, a JWT has this format:

jwt-header [.] jwt-payload [.] jwt-signature


The following steps are all implemented within my redirect_uri!


Prepare jwks

Microsoft publishes its discovery endpoint here: MS Discovery. If you follow that link you will see that all APIs that we need are referenced in the JSON response. In this case we are interested in "jwks_uri". The response message of that API looks like this:

{
  "keys": [{
    "kty": "RSA",
    "use": "sig",
    "kid": "Y4ueK2oaINQiQb5YEBSYVyDcpAU",
    "x5t": "Y4ueK2oaINQiQb5YEBSYVyDcpAU",
    "n": "p3pKrlon...CroPTYQ",
    "e": "AQAB",
    "x5c": ["MIIDBTCC....../8IHkxt"],
    "issuer": "{tenantid}/v2.0"
  }, ... ]
}

It is an array of "keys". You may want to study all details, but what we need is this:

  • kid:
    • when we validate the JWT, we will find a matching kid in the JWT header. The kid tells us which certificate we have to use to validate it


Validate JWT

In OTK we can use the "Decode Json Web Token" assertion to validate the JWT. Other products surely have similar tools available. We just have to do 2 manual steps before we can use that assertion:


  1. base64 decode the JWT header (the string in front of the first [.] of the JWT). It looks like this:  {"typ":"JWT","alg":"RS256","kid":"1LTMzakihiRla_8z2BEJVXeWMqo"}
  2. extract the kid using a JSON Path assertion
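Outside of policy, the same two steps can be sketched in a few lines of Python (the JWT here is built from the example header above; payload and signature are placeholders):

```python
import base64
import json

# The example header from step 1, encoded into a toy JWT
header_json = '{"typ":"JWT","alg":"RS256","kid":"1LTMzakihiRla_8z2BEJVXeWMqo"}'
header_b64 = base64.urlsafe_b64encode(header_json.encode()).rstrip(b"=").decode()
jwt = header_b64 + ".payload.signature"

# Step 1: base64 decode the JWT header (the string in front of the first '.')
part = jwt.split(".")[0]
header = json.loads(base64.urlsafe_b64decode(part + "=" * (-len(part) % 4)))

# Step 2: extract the kid (done with a JSON Path assertion in policy)
kid = header["kid"]
```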


As you remember, the kid references the certificate that can be used to validate the signature. We can now use that value with our "Decode Json Web Token" assertion like this:


Configuration of Decode Json Web Token

  • Source Payload: the JWT that we have received
  • Validation Method: tell the assertion to use a context variable when searching for "secrets"
  • Recipient Key Context Variable: contains the content of the JWKS file that I earlier imported after downloading it from Microsoft
  • Key Type: a JSON Web Key Set. As I said, it's an array of keys
  • Key ID: here it comes! We need to tell the assertion which certificate to use, identified by the kid
  • Destination Variable Prefix: the payload of the JWT, which is the id_token. That part is the one we are actually interested in!


If the signature validation is successful we know that this token was issued by Microsoft!  Now the content of the id_token needs to be validated!


Validate the id_token

The content of the id_token looks something like this:

{
  "ver": "2.0",
  "iss": "",
  "aud": "2a817ae....9aa4b61",
  "exp": 1485205244,
  "iat": 1485118844,
  "name": "Sascha Preibisch",
  "preferred_username": "",
  "email": "",
  "sub": "AAAAA....Geg5k",
  "tid": "9188....b66dad"
}

  • iss: the issuer. It contains a tenantid. You can choose to care about a specific tenantid (as I did) or simply accept any. If you care, you MUST verify that it includes your acceptable tenant. The concept of a tenantid is specific to MS. In google's case you could simply check if the issuer is (or, one or the other may appear)
  • aud: my client_id. You MUST verify that it matches the one you used when requesting the token!
  • exp: the expiration. You MUST verify that it has not yet expired
  • name: the user's name information
  • preferred_username: I modified it for this example, but it will match the username that you are using in your corporation
  • email: the email address
  • sub: a value identifying the user within the system of MS (or maybe the tenant, not sure ...)
  • tid: the tenantid which matches the value in the iss value
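The MUST checks from this list can be sketched as follows. The function and the values for my_client_id and my_tenant are illustrative, not part of OTK:

```python
import time

def validate_id_token(claims: dict, my_client_id: str, my_tenant: str) -> bool:
    """Check the claims that MUST be verified: iss (tenant), aud, exp."""
    if my_tenant not in claims.get("iss", ""):
        return False  # issued by a tenant we do not accept
    if claims.get("aud") != my_client_id:
        return False  # token was issued for a different client
    if claims.get("exp", 0) <= time.time():
        return False  # token has expired
    return True

# Illustrative claims, shaped like the example above
claims = {"iss": "tenant-9188/v2.0", "aud": "2a817ae", "exp": time.time() + 3600}
ok = validate_id_token(claims, "2a817ae", "tenant-9188")
```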


Except for extracting details of the id_token this is all you have to do. You have accepted and validated an id_token, issued by a third party. Your user is logged in!


To show you how few lines of policy you need in CA API Gateway to implement this I have added a screenshot below. It starts at "authorization_code received, exchange it for a token" (expand the image by clicking it):


API to validate a JWT and id_token

  • line 52: call the /token endpoint to receive an access_token in exchange for an authorization_code
  • line 55, 56: MS does not issue an access_token but an id_token (which comes in handy in our case). On line 55 we are extracting the id_token from the JSON response that was received. On line 56 we are setting a variable with the content naming it "jwt"
  • lines 57 - 60: match the JWT header, base64 decode it, extract the kid.
  • line 62: validate the JWT. That assertion is configured as shown further up
  • line 63 - 68: extracting values from the id_token
  • line 69 - 71: validating claims such as "exp", "aud" and "iss"
  • line 72 - ... : extracting details such as "sub", "name" and "email" which are then used as username and user-id


Do not forget

Even if you forget most of this blog post, keep the most important steps in mind:

  • receive an id_token
  • validate the signature
  • validate claims such as "exp"
  • be happy about this easy and convenient way of logging in users with third-party credentials


That's it!


I hope this blog post gives you an idea of how an id_token (JWT) can be used for federation.

Please leave a comment if you need further details, if you find an error, or if you like what I described here.


Thanks a lot!

Welcome PKCE!


PKCE (RFC 7636)

Since OTK-3.6.00 was released at the end of December we support PKCE!

That's it, just wanted to let everyone know.


Easy Button upgrade

Some of the OTK users have realized that it is not always easy to upgrade from one version of OTK to another. In order to tackle that issue we are currently reorganizing the structure of OTK. The end result will be an OTK that is configurable AND upgradable with minimum effort.


In order to validate what we are doing I would like to invite all our customers to take part in our next sprint demo on Thursday, 26 January. We will demo the following:

  • how to implement custom grant_types
  • how to implement custom id_token validation and generation
  • how to upgrade from one version to another without losing customizations


You will be able to provide immediate feedback and help us do a better job.

If you are interested please send an email to and I will forward you an invitation.

PKCE, RFC 7636

Proof Key for Code Exchange by OAuth Public Clients

In OAuth 2 there are different types of clients: confidential clients and public clients. By design, public clients are not able to maintain a client secret. A typical public client would be implemented in JavaScript. Another typical public client would be a native mobile app. To explain PKCE I am using the native app as an example.


Native apps that use the OAuth authorization code flow are required to associate (register) a custom scheme callback_uri in the device's operating system (OS). If the browser on the device finds the callback_uri as the target URL it will forward it to the OS. The OS will find the associated app and launch it.


Unfortunately, multiple apps could register the same callback_uri. The OS will pick one of those randomly. That is bad! It is bad because the callback_uri will include the issued authorization code! The randomly launched app is now in possession of the authorization code. And since the app is of type public it does not need to provide a secret to the Authorization Server when exchanging the code for an access_token.


What happened here is that GoodApp requested an authorization code but EvilApp received it and is now able to use it. See the simple graphic below:


OAuth Authorization Request without PKCE


How does PKCE help?

RFC 7636 has specified new parameters to be included in the initial authorization request and the code exchange request. The initial request is sent to /authorize and its intention is to receive an authorization_code. The code exchange request is sent to /token to exchange the authorization_code for an access_token.


First step: The initial request to /authorize will include these additional parameters:

  1. code_challenge: a value that is only known to GoodApp (to keep it close to the upper image)
  2. code_challenge_method (optional): either plain or S256. plain indicates to the server that code_challenge is a plain value as given to the server. S256 indicates that it has been hashed using SHA-256

The authorization server will take those values (plain is the default if code_challenge_method was not given) and associate them with the code that is being issued.
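On the client side, generating the pair can be sketched like this (per RFC 7636, the S256 challenge is the base64url-encoded SHA-256 hash of the verifier, with padding stripped):

```python
import base64
import hashlib
import secrets

# code_verifier: a high-entropy random string, known only to GoodApp
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# code_challenge for S256: base64url(SHA-256(code_verifier)), no padding
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# These parameters are added to the request sent to /authorize
authorize_params = {
    "response_type": "code",
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
}
```

The code_verifier itself never appears in the /authorize request; it stays inside the app until the code exchange.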


Second step: The code exchange request to /token will include one additional parameter:

  1. code_verifier: the value that was used to generate code_challenge for the initial request


Server side validation with grant_type=authorization_code:

The server will look up the code_challenge and code_challenge_method that are associated with the given code. It will then do one of the following validation checks:

  1. IF (code_challenge_method == plain) THEN compare(code_verifier, code_challenge)
  2. ELSE compare(sha-256(code_verifier), code_challenge)
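Those two checks translate almost directly into code. This sketch uses a constant-time comparison, which a real implementation should also do:

```python
import base64
import hashlib
import hmac

def verify_pkce(code_verifier: str, code_challenge: str, method: str = "plain") -> bool:
    """Server-side check at /token: recompute and compare the challenge."""
    if method == "plain":
        candidate = code_verifier
    else:  # "S256"
        digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
        candidate = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return hmac.compare_digest(candidate, code_challenge)  # constant-time compare
```

With S256, even an attacker who saw the code_challenge in the /authorize request cannot produce the matching code_verifier, because SHA-256 cannot be reversed.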


Even if EvilApp (to keep it close to the upper image) had been in possession of the authorization_code this time, it would not be able to exchange it for an access_token because the server-side validation would fail.



I hope this post helps in understanding PKCE. I also hope that it shows that supporting it requires just a small code change. And I hope it makes it obvious why PKCE should ALWAYS be used with public clients that use the authorization code flow.

OTK (CA API OAuth Toolkit) will soon support it out of the box. Those customers who do not want to wait will find a blog post with instructions on how to implement PKCE support in the next few days here in this space.


As always, please leave a comment with questions or requests.

Sascha Preibisch

OAuth vs. LDAP

Posted by Sascha Preibisch, Oct 18, 2016

On Twitter I have read about a company that asked the question:


Are you using OAuth or LDAP?


That, of course, is nonsense! One does not replace the other or take on its role.


OAuth is an authorization framework! It's not made to authenticate users; it's meant to have a user (resource_owner) authorize a client to access certain resources (simplified, and depending on the flow, I know ...). The user certainly has to be authenticated during that process. The user authentication may be done against an LDAP server!


Update (18. Oct. 2016)


I realized that this topic seems to be quite popular with many people. For that reason I have created a graphic trying to emphasize the relationship between OAuth and LDAP visually. It is simple but it may still help.


OAuth vs. LDAP


Please leave a comment for questions or suggestions.


Update (12. Feb. 2018)


I have written a slightly more extensive blog post on this topic. Please find it here: 

OAuth vs. LDAP vs. OpenID Connect

Update - How to use SAML instead of JWT to integrate with an external Login-Server

After I was asked if SAML could be used instead of JWT and I said yes, of course, I was asked several times to provide an example of how to do that. I have now implemented it and can share how it works.


First of all, whenever you want to start modifying something in OTK, such as replacing JWT with SAML, DO NOT do it in the OTK policies! That is way too complicated. Instead, create a sample API, copy the JWT parts out of the policy and put them into this new API. Use a tool such as Postman or SOAPUI and get that API working. Set sample values wherever you want. Once that is done, start replacing the bits and pieces that you want to swap until it is working. After that, put those policy snippets into the OTK policies. Doing it this way makes troubleshooting much simpler.


What to do

You have to update policies at 7 locations. Luckily it is almost always the same, so you could create policy fragments and re-use them. In this guide we will simply copy and paste the same pieces. You have to have a private key to sign the SAML and you need a public cert for the encryption. In this example I have created a private key and referenced it for both. In your environment you would import the public certificate of the login-server, and the login-server would import your gateway's public certificate to validate signatures.


First step:

/auth/oauth/v2/authorize, 1 single location: Create the initial SessionData variable

At line number 117 the policy uses an Encode Json Web Token: sign payload assertion. That has to be replaced with the following:


SAML instead of JWT


Line 118: Base64 encode the existing session data which is a JSON structure

Line 119: The SAML assertion requires a credential context. Since no user is involved and this SAML assertion is not about users but carries data, I used the URL of the login-server as the username. The receiving API will check for that value.

Line 120: Create the SAML assertion using the private key odic (select one available in your system)

Line 121: Base64 encode the created SAML assertion XML message. Although we are not using JWT, I kept the variable names to require fewer modifications. If you plan to drop JWT entirely you may want to change the names to make the policy less confusing when you look at it in a few months (or even a few weeks)

Line 122: URL Encode the string


For line 120, this is how you can configure it when going through the assertion creation wizard:

  • Step 2: SAML Version:  Version 2
  • Step 3: Issuer: select X.509 Subject Name, Issuer Value: Default
  • Step 4: SAML Statement: Attribute Statement
  • Step 6: Add the following attribute (and leave everything with default values):
    • Attribute Name: sessionData
    • Attribute Name Format: Basic
    • Attribute Value: ${sessionDataB64}
  • Step 7: Name Identifier: Include Name Identifier, Format: automatic, Name Identifier Value: From Credentials
  • Step 8: Subject Confirmation
    • Subject Confirmation Method: Bearer
    • Subject Confirmation Data: ${location_login_server}
  • Step 9: Conditions: Use Default Validity Period, Audience Restriction: ${location_login_server}
  • Step 10: Digital Signatures: Sign Assertion with an Enveloped Signature


This API is ready to go. As you can see, it is a few more lines than before but in the end very straightforward. Remember that you may want to configure your SAML assertion differently; this is just an example.
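Outside the gateway, the encode/wrap steps from lines 118-122 can be sketched like this. This is a minimal Python sketch for illustration only: the session values and the unsigned assertion skeleton are assumptions, and in reality the gateway's SAML assertion wizard builds and signs the assertion.

```python
import base64
import json
import urllib.parse

# Hypothetical session data; in OTK this is the JSON session structure
session_data = {"sessionID": "abc123", "exp": 1700000000}

# Line 118: Base64 encode the existing session data (a JSON structure)
session_b64 = base64.b64encode(json.dumps(session_data).encode()).decode()

# Line 120 (sketch only): carry the encoded data as a SAML attribute value;
# the real assertion is created and signed by the gateway
saml = f"""<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml2:AttributeStatement>
    <saml2:Attribute Name="sessionData" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
      <saml2:AttributeValue>{session_b64}</saml2:AttributeValue>
    </saml2:Attribute>
  </saml2:AttributeStatement>
</saml2:Assertion>"""

# Line 121: Base64 encode the created SAML assertion XML message
saml_b64 = base64.b64encode(saml.encode()).decode()

# Line 122: URL encode the string for transport
saml_param = urllib.parse.quote(saml_b64, safe="")
```

The receiving side simply reverses these steps, which is why keeping the encoding chain symmetric makes troubleshooting easier.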


Next stop:

/auth/oauth/v2/authorize/login, 3 locations, Validate and create SAML

This API receives the SAML token created at the previous API. It has to validate it and, after the user has been authenticated, create a new one which has to be signed and partially encrypted. 2 of these 3 locations will use exactly the same policy snippet. In the original policy look for lines 50, 99 and 162. You can also search for JSON Web Token to find the JWT assertions.



In the original policy replace lines 50, 51 and 99, 100 with the following:


SAML instead of JWT


Line 51: Base64 decode the received SAML token. URL encoding is done by the gateway automagically. The created variable is of type XML

Line 52: Validate the SAML token

Line 53: Decode the session data into the variable sessionData of type application/json


For line 52 (and 104 after inserting these lines), this is how you can configure it when going through the assertion creation wizard:

  • SAML Version: Version 2.x
  • Attribute Statement:
    • Attribute Name: sessionData
    • Attribute Name Format: Basic
    • Attribute Value: Allow any non-empty value
  • Subject Confirmation: Method: Bearer, Recipient: ${location_login_server}, Check Validity Period
  • Name Identifier: Unspecified
  • Conditions: Audience Restriction: ${location_login_server}
  • Embedded Signature: Require Embedded Signature


As I said, configure this once and use it at both locations.
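The receiving steps (lines 51-53) can be sketched outside the gateway as follows. This is an illustrative Python sketch under the assumption that URL decoding has already happened and the signature was validated separately, as the gateway does in line 52.

```python
import base64
import json
import xml.etree.ElementTree as ET

NS = {"saml2": "urn:oasis:names:tc:SAML:2.0:assertion"}

def extract_session_data(saml_b64: str) -> dict:
    # Line 51: Base64 decode the received SAML token into XML
    xml_text = base64.b64decode(saml_b64).decode()
    root = ET.fromstring(xml_text)
    # Line 52 in the gateway also validates the signature; omitted here
    value = root.find(
        ".//saml2:Attribute[@Name='sessionData']/saml2:AttributeValue", NS
    ).text
    # Line 53: decode the session data back into JSON
    return json.loads(base64.b64decode(value))
```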



Now we have to create a SAML token. After modifying the 2 policy locations above we will continue at line 168 (which was line 162 in the original policy, Encode Json Web Token: sign & encrypt payload). As I said earlier, the SAML policy snippets are used multiple times.


Below is the new policy snippet to create the signed and encrypted SAML token. I will only explain the differences compared to the one at /auth/oauth/v2/authorize:

SAML instead of JWT


Line 169: Encode the variable sessionData instead of session 

Line 170, 171: Reference the variable location_consent_server instead of location_login_server

Line 172: Overwrite the variable issuedSamlAssertion with itself, using message type XML. This turns the string into a message, which is needed at line 173

Line 173: Encrypt the sessionData. Please note that you may want to encrypt more parts of the message. In this example I only encrypt the sessionData itself


For line 173, this is how you can configure it when going through the assertion creation wizard:

  • Elements: /saml2:Assertion/saml2:AttributeStatement/saml2:Attribute/saml2:AttributeValue
  • Encryption method: choose one that is good for your environment. I did not worry about it for this example and used this one:
  • Specify certificate: from the menu select the public certificate your login server provided for encryption. It must have been imported into the gateway first. In my case I selected my example private key as the source
  • Do a right-click and configure the target message to be: issuedSamlAssertion


Last stop

/auth/oauth/v2/authorize/consent, 3 locations, just one new implementation

This policy has to validate the SAML token and decrypt the sessionData element. It also has to create a new one.


Search for the occurrences of Decode Json Web Token. They come in pairs: one decrypts the JWT, the other checks the signature. Replace both pairs to look like this:

SAML instead of JWT


As you can see it is the same as lines 50, 51 that we created earlier. The only addition is the decrypt assertion on line 49. That assertion only requires one piece of configuration in the dialog:

  • XPath to encrypted element: /saml2:Assertion/saml2:AttributeStatement/saml2:Attribute/xenc:EncryptedData

Also, do a right-click and configure the target message to be: samlBearer


The remaining JWT assertion is Encode Json Web Token: sign & encrypt payload. Replace that line (64 originally) with this:

SAML instead of JWT


Guess what? It is exactly the same as before! Lines 67 - 74 match lines 168 - 175 further up.



You can now integrate with an external login-server using SAML instead of JWT. Use this example and apply any modifications you need. In addition to API tests I used the Safari browser with the OTK browser-based test clients to verify that it's working. I had some trouble using Chrome (it displayed an empty screen) but I did not want to postpone this post for debugging purposes. Maybe you can let me know what the problem is :-)


I hope this helps!

Many of our customers have asked us to support easy integration of OTK with an external, existing Login-Server. Until now it was easy to connect to an external IDP but that is not always sufficient.


Well, we have listened and have updated OTK-3.5.00 to support this scenario. This blog post describes how it works.


Starting point: fresh install of OTK-3.5.00


With this version OTK introduces new APIs. In addition to /auth/oauth/v2/authorize the APIs /auth/oauth/v2/authorize/login and /auth/oauth/v2/authorize/consent have been introduced:

  • …/login handles the user authentication
  • …/consent handles the user's consent decision (grant or deny)

As in older versions, and in compliance with RFC 6749 (OAuth 2.0) , clients will send initial authorization requests to /auth/oauth/v2/authorize. OTK runs through a validation process and either fails the request or continues.


In the case of a valid request OTK will create a session and keep track of it. In addition, OTK also creates a signed JWT (JSON Web Token). That signed JWT is called sessionData and includes values associated with the initial client request. It is bound to the session that OTK keeps track of.


The response for the initial request will be a redirect to /auth/oauth/v2/authorize/login. That API receives the session identifier (sessionID) and also the session JWT (sessionData). OTK validates the JWT signature, checks if it is associated to the given sessionID and, if valid, responds with a default login page.


Once the user has provided his credentials OTK validates them. If they are valid, OTK updates the JWT and signs it again. This time it also encrypts it because the content is now sensitive. An Auto-Form-POST to /auth/oauth/v2/authorize/consent is returned. That API displays a consent screen which includes the client’s name and the requested SCOPE. Once the user has decided to either grant or deny the request, a form POST back to …/consent is executed. This time OTK takes the decision, issues an authorization_code or access_token (if granted) and redirects back to the client.


How do I leverage my Login-Server?


Well, almost very simple. But for sure … simple. There are only a few steps required: a few requirements for the external Login-Server, a few for OTK.


The external Login-Server


  • MUST be able to validate and create signed & encrypted JWT using a shared secret
  • MUST be able to authenticate the user
  • MUST be able to redirect to /auth/oauth/v2/authorize/consent and include all required parameters



OTK

  • OTK uses two shared secrets. One to generate/validate the JWT signature, one to encrypt/decrypt the JWT. These are custom values. To leverage the external Login-Server those secrets have to be configured on that server
  • OTK has to be configured to redirect to the external Login-Server


That’s it!


If you now say: well, my Login-Server does not support JWT then this is the right opportunity to implement it. There are open source libraries available that make it very easy to do so. For an evaluation project I was able to implement JWT signing and encryption in Java within a few hours.
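To show how little code shared-secret JWT signing takes, here is a minimal sketch of HS256 signing and verification using only the Python standard library. It is illustrative, not OTK's implementation: the encryption step OTK also requires is omitted, and the payloads are examples.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Create a JWT signed with HMAC-SHA256 using the shared secret."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    """Validate the HS256 signature and return the decoded payload."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
```

A dedicated JOSE library additionally gives you the encryption (JWE) part, which this sketch does not cover.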


Sascha, I want more details!


Before you ask, here are details you need to know. They are all documented but here I will mention the most important bits and pieces.

Here is a simple but hopefully useful graphic of the possible flows:


OTK integrating with an external server


First things first


Configure the shared secrets in OTK and your external Login-Server:

  • Use the policy manager and open the policy OTK Authorization Server Configuration and set the values for “otk_session_secret” and “otk_session_secret_encryption”
  • “otk_session_secret” is used to create and validate the JWT signature
  • “otk_session_secret_encryption” is used to encrypt/decrypt the JWT


Configure OTK to redirect to your external Login-Server. In this example the external server will handle the login step only, not the consent decision:

  • Use the policy manager and open the same policy as before. Configure the variables “host_login_server” and “path_login_server”
  • “host_login_server”: it has to follow this pattern: https://your_external_login_server:port
  • “path_login_server”: this has to be the path that is the target for the redirect, i.e.: by default: /auth/oauth/v2/authorize/login


Configure your external Login-Server to redirect to OTK after the user has been authenticated. The target is https://your_gateway:port/auth/oauth/v2/authorize/consent


TIP: When working in the policy manager ALWAYS turn on policy comments and assertion line numbers!

Requests, back and forth

In this example a client (client_id=xyz, name=ExampleApp) has been registered in OTK and is now using the authorization_code flow. The external Login-Server is hosted on


GET /auth/oauth/v2/authorize?






OTK receives the request and validates the client_id, the redirect_uri and the SCOPE. (NOTE: in OTK only SCOPEs that have been registered for that client can be requested).


If valid OTK responds with a redirect:


Status: 302,





  • action: “display” indicates that a login screen has to be shown. The other option would be “login” for the case of a client using “prompt=none” and “id_token_hint=an-id_token” to indicate that the user is already logged in
  • sessionID: this value is opaque to the external Login-Server and has to be passed back to OTK after the user was authenticated
  • sessionData: that JWT’s signature has to be validated
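Put together, the redirect to the external Login-Server carries these three parameters. The following sketch assembles such a redirect URL; the host and all parameter values are hypothetical examples, not values OTK actually issues.

```python
import urllib.parse

# Hypothetical values; sessionData carries the signed JWT
params = {
    "action": "display",
    "sessionID": "abc123",
    "sessionData": "eyJhbGciOiJIUzI1NiJ9.e30.sig",
}
location = ("https://login.example.com/auth/oauth/v2/authorize/login?"
            + urllib.parse.urlencode(params))
```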


The JWT content is a JSON message like the example below. It has three main sections:



   "session": {



   "request_consent": {

       "client_name": "ExampleApp",

       "scope_verified": "opened email"


   "request_parameters": {

       "display": "page",

       "prompt": "login consent",

       "id_token_hint": "",

       "acr_values": "",

       "client_id": "xyz",


       "scope": "openid email"



  • session: the session to which the JWT is bound including the expiration time in seconds
  • request_consent: the content to be displayed on the consent page (client name, SCOPE)
  • request_parameters: the parameters that were received in the initial client request


Once received, extract the "exp" (expiration time) from the section "session" and verify that it has not expired. Also verify that "sessionID" in the same section matches the given "sessionID" parameter. Oh, and certainly verify the JWT signature! Continue if those validations were successful.
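These checks take only a few lines. Here is a sketch using the field names described above; signature verification is assumed to have happened first, and the helper name is my own.

```python
import time

def validate_session(jwt_payload: dict, session_id_param: str) -> None:
    """Verify expiry and session binding of the received sessionData JWT.
    Raises ValueError if a check fails; signature must be verified separately."""
    session = jwt_payload["session"]
    # "exp" is in seconds; reject expired sessions
    if int(session["exp"]) < int(time.time()):
        raise ValueError("session expired")
    # the JWT must be bound to the sessionID parameter that came with it
    if session["sessionID"] != session_id_param:
        raise ValueError("sessionID mismatch")
```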


In the simple case your server would now display an authentication page. The user authenticates (however you want him to authenticate!) and your server validates the credentials.


Now your server has to update the JSON message in the section "session". And because some of the values are sensitive (e.g.: salt, which is required to create the ‘sub’ value in the id_token) it has to be signed and encrypted. Do the following:

  • current_user_consent: set it to "none"
  • current_username: set it to the username
  • current_user_role: set it to an appropriate value, e.g.: “admin”, “user”, “visitor” whatever is associated with that user. If no role exists set it to “user”. The roles "admin" and "user" are handled by default in OTK.
  • current_user_acr: leave it untouched or set it to a value that is valid in your environment and represents the used authentication class
  • current_user_authTime: set it to the time when the user authenticated. Most likely to “now” (in seconds, 10-digits)
  • salt: set it to a value that is only known in your environment and associated with the authenticated user. It will be used by OTK but will never be exposed
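
The updates above can be sketched as a small helper. This is illustrative only; the function name is hypothetical and the actual values (username, role, salt) come from your own authentication system.

```python
import time

def update_session_section(jwt_payload: dict, username: str, salt: str,
                           role: str = "user") -> dict:
    """Apply the required updates to the 'session' section after login.
    The result must then be signed and encrypted before being sent back to OTK."""
    session = jwt_payload["session"]
    session["current_user_consent"] = "none"
    session["current_username"] = username
    session["current_user_role"] = role  # "admin" and "user" are handled by default
    # current_user_acr: leave untouched unless your environment defines one
    session["current_user_authTime"] = int(time.time())  # seconds, 10 digits
    session["salt"] = salt  # only known in your environment, never exposed by OTK
    return jwt_payload
```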


Once this is done, take the updated JSON message and sign and encrypt it. Create an Auto-Form-POST back to OTK. It has to include the following parameters:


POST https://your_gateway:port/auth/oauth/v2/authorize/consent




OTK will receive the message and will do the following:

  • verify that “consent” is the appropriate action
  • verify that sessionID has not expired, is known and has not been used before
  • decrypts sessionData and validates the signature
  • verifies that sessionData is bound to the given sessionID


After that OTK will extract the given values and continue its flow. The user will be prompted with a consent page.



This blog post should enable you to get started with an integration. Always remember that it is mostly about handling the session JWT and sending it to the right location.


For questions please leave a comment.

Sascha Preibisch

CA World 2016

Posted by Sascha Preibisch Employee Sep 29, 2016

CA World 2016, 14. Nov. - 18. Nov. 2016, Las Vegas


Another year, another CA World! I will be there doing a pre-conference workshop on the topics of OAuth and OpenID Connect. I will also be available at the SmartBar to answer questions that come up regarding OAuth, OpenID Connect or the CA Mobile API Gateway.


If you plan to attend feel free to leave a comment and let me know. I am happy to meet and greet and answer questions that you may have.


I hope to see you there,