
Sascha Preibisch's Blog


Persistent Consent

Posted by Sascha Preibisch Employee Dec 18, 2017

Hi everybody!

 

This blog post is dedicated to Paul who represents one of our customers. He has asked me to give him an idea on how he could implement persistent consent in OTK until OTK supports that feature out of the box.

 

Persistent Consent vs. Session Consent

In OpenID Connect, clients can use parameters such as prompt=none to advise the server not to request consent from the resource_owner again if access to their resources has already been granted in the past. Today OTK remembers a given consent as long as an active refresh_token is available (I call this Session Consent, not sure if there is an official term for that). The token must have been granted for the combination of resource_owner, client_id and scope.

In comparison, Paul and others have said: well, Sascha, the server should remember the decision independently of an active refresh_token (Persistent Consent).
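
To make the distinction concrete, here is a minimal sketch (in Python, purely for illustration) of what an authorization request with prompt=none could look like. The host, client_id, redirect_uri and state are hypothetical placeholders, and the endpoint path simply follows the OTK naming used in this post.

from urllib.parse import urlencode

# Hypothetical values; substitute your own gateway host, client_id and redirect_uri.
params = {
    "response_type": "code",
    "client_id": "my-client-id",
    "redirect_uri": "https://client.example.com/callback",
    "scope": "openid email",
    "prompt": "none",   # advise the server not to show the login/consent screens again
    "state": "xyz123",
}
authorize_url = "https://gateway.example.com/auth/oauth/v2/authorize?" + urlencode(params)
print(authorize_url)

With Session Consent the server can only honour prompt=none while a refresh_token for that resource_owner, client_id and scope is still active; with Persistent Consent the stored decision outlives the token.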

 

Interim solution

We have listened and will provide this feature in a future release of OTK. To bridge the gap, I am providing an interim solution which you can implement yourself today.

DISCLAIMER: the official implementation in a future OTK may look different, so please be prepared for your users to see the consent screen again. Also, my version did not go through QA, so please do some testing yourself before deploying this in production. The status of this implementation is: works on my machine!

 

Implementation for OTK-4.1

Only two policies need to be updated:

  1. /auth/oauth/v2/authorize/login: find a given consent
  2. /auth/oauth/v2/authorize/consent: persist consent

 

In OTK a client can have multiple client_ids. For example, your client SuperCoolApp may have one client_id for the mobile app and one for the JavaScript version of it. In that case you may want to remember the given consent decision not only for the current client_id but for any client_id of that client. This means giving consent for the mobile app will also prevent the resource_owner from seeing the consent screen when using the JavaScript version.

 

Changes in /auth/oauth/v2/authorize/login

Extract the value client_ident from the active session object by adding the two lines below. That value represents the client, not only one single client_id (the line numbers may not match but the 'area' should be correct).

 

Extract client_ident

  • line 121:
    • XPath: /authorize/client_ident
    • Variable Prefix: authorizeClientIdent

 

Next, make up your mind if you want to respect consent decisions of the past only in conjunction with prompt=none or always. By default prompt is required! If you want to ignore the given prompt value disable the branch that checks for it:

Ignore prompt

I am not encouraging you to ignore prompt but I know that I have received this request!

 

Now move existing policies into an All assertions ... container:

Before:

Before using all container

After (moving lines 167, 168, 169 from above into the new All assertions ... on line 167 below):

After using all container

When adding container assertions such as All assertions must ... or At least one assertion ... ALWAYS add comments on the right side that start with // .... This makes the policy easier to read.

 

The behaviour has not changed yet. Now we are adding logic to look up a consent decision without caring whether an active refresh_token exists. This includes the introduction of an At least one assertion ... block, which is required if you want to continue to support Session Consent:

Logic to support persistent consent

 

Let's go through this line by line (a pseudo-code sketch of the same logic follows the list):

  • line 169: new At least one assertion ... block
    • this block contains the All assertions ... blocks of line 170 and line 177
    • line 170 handles persistent consent
    • line 177 is the one from above where we moved existing assertions into a new block
  • line 171: find existing consent decision
    • we are using the assertion OTK Session GET which is part of OTK
    • Details:
      • Max age for cache values: 3600
      • Name of cache to be used: otkPersistentConsentCache
      • Key for cache entry: ${resource_owner}${client_ident}
  • line 172 - 173: check if consent exists, extracting the JSON consent message
    • creating the initial consent message is handled further down in this post
    • Details line 173:
      • Encode/ Decode: URL Decode
      • Source Variable: sessionValue.result
      • Target Variable: consentMessage
      • Data Type: Message
      • Content Type: application/json
  • line 174: find granted scope
    • it's important to present the Consent screen if the client has not requested the SCOPE in the past (no matter what)!
    • Details:
      • Expression: granted_scope
      • Other Message Variable: consentMessage
      • Variable Prefix: xpathScope
        • we are not using xpath, but using this name allows us to reuse a few lines of policy further down
  • line 175: find the client
    • to be fail-safe, extract the client_ident stored in the past
    • Details:
      • Expression: granted_client
      • Other Message Variable: consentMessage
      • Variable Prefix: granted_client
  • line 176: compare the current client against the one from the past
    • ${client_ident} equalsTo(${granted_client.result})
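
To summarize, the branch added above roughly implements the following decision logic. This is a Python sketch for illustration only, not OTK policy; the function cache_get stands in for OTK Session GET, and the field names granted_scope and granted_client are the ones extracted on lines 174 and 175.

import json
from urllib.parse import unquote

def has_persistent_consent(cache_get, resource_owner, client_ident, requested_scope):
    # line 171: cache_get stands in for 'OTK Session GET' against otkPersistentConsentCache
    stored = cache_get("otkPersistentConsentCache", resource_owner + client_ident)
    if stored is None:
        return False                        # fall back to Session Consent or the consent screen

    # lines 172 - 173: the cached value is a URL-encoded JSON consent message
    consent = json.loads(unquote(stored))

    # line 174: the requested SCOPE must have been granted in the past (no matter what)
    granted_scope = set(consent.get("granted_scope", "").split())
    if not set(requested_scope.split()).issubset(granted_scope):
        return False

    # lines 175 - 176: the stored client must match the current client
    return consent.get("granted_client") == client_ident

If this lookup fails, the existing Session Consent branch (the block we moved on line 177) still gets its chance, thanks to the surrounding At least one assertion ... block.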

 

It should look like this:

Result for changes at login

 

Changes in /auth/oauth/v2/authorize/consent

The good news: it's just a two-liner! Look for the block on line 96 of the screenshot below:

Finding the right spot

 

Open that block and go right to the end. It ends with an assertion named OTK Session - Delete. Right after that we will add 2 assertions:

New assertion in consent

 

Lines 155 and 156 are the new ones.

  • line 155: Set Context Variable. This is where we are creating the content for a given consent. You can certainly add other values as you desire (see the sketch after this list)
    Consent message
  • line 156: Creating the persistent session
    Consent session
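
For orientation, here is a sketch of what those two assertions do, written in Python with hypothetical names. The exact fields you put into the consent message are your choice, but granted_scope and granted_client are the ones the login policy above expects, and session_store stands in for the OTK session assertion used on line 156.

import json
from urllib.parse import quote

def persist_consent(session_store, resource_owner, client_ident, granted_scope):
    # line 155 (Set Context Variable): build the consent message;
    # add any other values you want to remember here
    consent_message = {
        "granted_scope": granted_scope,       # space separated list of granted SCOPE values
        "granted_client": client_ident,
    }

    # line 156: persist it URL-encoded under resource_owner + client_ident,
    # e.g. for 90 days ('Max. database age' in the dialog)
    session_store(
        cache="otkPersistentConsentCache",
        key=resource_owner + client_ident,
        value=quote(json.dumps(consent_message)),
        max_age_seconds=90 * 24 * 3600,
    )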

 

That is all there is. Here are some thoughts though:

  • Get an idea on how many consent decisions you want to persist. Specify the value Max. number of entries in the dialog above appropriately
  • Max. database age vs. Max. age for cache: the database age is the persistent memory (this example: 90 days). During this time a consent screen will not be displayed.
    The cache age simply helps avoid accessing the database.
    IMPORTANT: The cache age here has to match the cache age used at /auth/oauth/v2/authorize/login when using OTK Session GET
  • Although this solution works out of the box you could certainly choose to build your own assertion that works with a dedicated database table that you may already have or want to create for this purpose
  • Client vs. Client_id: if your client has multiple client_ids which are registered for different valid SCOPE values you may not want to use client_ident but the current client_id as part of your session key
  • Revoking Persistent Consent was not covered here. Please think about the scenario on how you want to support resource_owners revoking their given consent. If you are leveraging the revocation endpoint you can add logic there to support it

 

Summary

I hope this post gives you an idea on how you can implement Persistent Consent rather than Session Consent. Even if you are not going for the solution described here it may still give you an idea of how features like these can be implemented very easily. If you are looking forward to seeing this feature in OTK in the future please give us feedback so that we can build it the way you want it.

 

Paul, I hope this is what you were looking for!

 

As always, thanks for positive feedback and constructive criticism.

Best regards, Sascha

Hi everyone!

 

Just in time for the weekend I have published a new tutorial that shows in detail how to build APIs that retrieve resources from a local datasource or a remote API. As an example I am accessing a database.

 

The building blocks shown allow a developer to implement APIs without having to know where the resources are located. The source for this tutorial is available as a RESTMan bundle and can easily be imported into a CA API Gateway using the RESTMan API.

 

Here are two images to give you an idea what is being shown:

 

Service and Data APIs:

APIs for local or remote resource retrieval

 

Encapsulated assertions that transparently retrieve data from a local database or via a data API:

Retrieve data from local database or remote API

 

Find the tutorial in the project Tutorials here: CA APIM on github.

Open the file index.html and select Encapsulating access to resources.

 

As always, let me know if it helps and what you like and do not like.

Hello everyone!

CA World 2017 was a very good event. We had a great time showing our products and very good discussions with our customers and prospective customers.

At the pre-conference session about news in OTK-4.1 I used a SOAPUI project and promised to make it available to everyone.

Well, I am excited to announce that we now have a new project in our public GitHub repository that is dedicated to tutorials. Hopefully, over time, we will be able to add more helpful content, maybe also with the help of our user community.

 

Please find the repository here: CA APIM - Open Source and select the project Tutorials.

 

To make this a successful way of providing tutorials and examples, it's important that you take a moment to look at the README content.

 

As always, please let us know if this is helpful!

Sascha Preibisch

CA World 2017

Posted by Sascha Preibisch Employee Nov 9, 2017

CA World 2017 (13.11.2017 - 17.11.2017)

 

Another year has passed and next week I am off to Las Vegas for another great event, just like many of my colleagues!

 

I am writing to share with you which talks I am presenting:

 

  1. A pre-conference session (DO1X106E) about news in our OpenID Certified OTK implementation (CA APIM OAuth Toolkit 4.1). This includes a demo for which I am using SOAPUI. Since many of you have asked me about example SOAPUI projects I am taking the chance to provide it to you
  2. A pre-conference session (DO1X118E) about microservices security including a very cool preview of an upcoming feature
  3. A pre-conference session (DO1X117E) about scalable microservices environments
  4. A TechTalk (D01T52T) during the actual conference days in the DevOps API area. The topic is the same as the third pre-conference session, but compressed to include only the highlights

 

I would be happy to see you there to meet and chat and pick up your thoughts on what we are doing well and what we should improve! Come and find me at the SMART bar in the DevOps API area if not during the sessions.

Hi everybody!

This week's tip is meant for anybody implementing OAuth-protected APIs on the CA API Gateway or CA Mobile API Gateway. With both products the OAuth Toolkit (OTK) is leveraged.

 

Here is the tip: Use variables set by 'OTK Require OAuth 2.0 Token'

When implementing OAuth-protected APIs the main assertion to use is named OTK Require OAuth 2.0 Token. That assertion finds an incoming access_token and validates it. If the given token is invalid the assertion fails and returns an error. If the token is valid a few variables are set, and those can be very useful when it comes to requiring more than just the token itself.

Here is a list of those variables and what they contain:

  • access_token: the token that was used by the client. This is mainly for informational purposes
  • session.client_id: the client_id of the client that has requested the token initially. This is for informational purposes but it could also be used to look up other associated values of this client
  • session.scope: the granted SCOPE for this token. The content is a space separated list of values. It is useful to implement branches within an API that retrieves data based on the SCOPE. An example can be found at /openid/connect/v1/userinfo. That API first requires the SCOPE=openid (configured in OTK Require OAuth 2.0 Token). Further down it checks if the granted SCOPE includes values such as email or profile. Have a look how this variable is used with OTK SCOPE Verification
  • session.subscriber_id: this is the username of the resource_owner that has granted the initial authorization request. If no consent was required during the token issuing process (e.g.: grant_type=password) it's simply the authenticated user. If the token was issued via grant_type=client_credentials the value will be the name of the client
  • session.expires_at: the timestamp at which the token expires
  • session.custom: this contains a JSON structure. The content contains values that were specified when the oauth client was registered in OAuth Manager. It also contains runtime information. In order to learn about the content, and since it varies, do the following during development: Use an Audit Detail assertion to log the content of '${session.custom}'. Afterwards extract values using the Evaluate JSON Path Expression assertion when you know what you want to extract. By default values such as the following are available:
    • client_type: either 'confidential' or 'public'
    • grant_type: the grant_type used to obtain the token

What to do with those variables

Now that you know about these variables you can implement use cases such as the following (a sketch of the first one follows the list):

  • grant access only if the token was obtained via a specific grant_type
  • extract attributes of the current user to pass them on to the backend service
  • grant different access to resources depending on the client type
  • implement rate limiting based on the access_token for special cases
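
As an illustration of the first use case, here is a small Python sketch. session_custom_json corresponds to ${session.custom}, and the keys grant_type and client_type are the default values mentioned above; the required grant_type and allowed client types are hypothetical policy decisions.

import json

def enforce_token_context(session_custom_json,
                          required_grant_type="authorization_code",
                          allowed_client_types=("confidential",)):
    custom = json.loads(session_custom_json)

    # grant access only if the token was obtained via a specific grant_type
    if custom.get("grant_type") != required_grant_type:
        raise PermissionError("token was not obtained via " + required_grant_type)

    # grant different access depending on the client type
    if custom.get("client_type") not in allowed_client_types:
        raise PermissionError("client type is not allowed to access this resource")

    return True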

 

I hope this helps, and as usual, let me know if you need more information or details or if you have got other related questions!

I am happy to share with you that OAuth Toolkit 4.1 (OTK-4.1) was released last week.

 

If you have previously installed OTK-4.0 you are now able to upgrade to OTK-4.1 without losing your customizations. We are happy that this is now supported! (Please read the documentation when doing that.)

 

Here are a few links that I think are useful to get started:

 

As usual, please let us know how you like this version and share any suggestions for future enhancements you would like to see.

Sascha Preibisch

We are certified!

Posted by Sascha Preibisch Employee Jul 14, 2017

OpenID Certification has been achieved!

 

I am happy to announce that OTK-4.1 has successfully been certified for the following profiles:

  • OpenID Provider Basic
  • OpenID Provider Config

 

CA API Gateway and CA Mobile API Gateway are listed on the official web site of OpenID Providers: OpenID Certification | OpenID  

 

OTK-4.1 will be released soon, at which point you will be able to leverage improved and new features such as:

  • OpenID Connect Discovery
  • OpenID Connect Dynamic Registration
  • OpenID Connect JWKS_URI
  • Refresh Token can be configured to be re-used
  • Refresh Token can be configured to keep the original expiration date
  • Enhanced customizations
  • Easy upgrade process

 

I hope this is good news for any OTK user!

Hi everybody!

 

I wanted to share with you that OTK will soon be certified for the OpenID Connect Basic Profile. Maybe for even more profiles!

 

If you are looking for certified solutions you can soon look at OTK as one of your options. Although OTK has supported OpenID Connect features for a while now, the "stamp" is still missing. I will post more details as soon as we have shipped the next version.

 

Another requested feature will also be available:

  • being able to configure OTK to accept a refresh_token multiple times
  • being able to configure OTK to issue a new refresh_token while keeping the original expiration date

 

We are sure that these enhancements will make OTK even more valuable.

 

All the best!

Sascha Preibisch

API Error Handling

Posted by Sascha Preibisch Employee Apr 20, 2017

Hi everybody!

 

This week I gave a talk about API error handling within the CA APIM OAuth Toolkit (OTK). For that I created a list of general requirements that are important when dealing with APIs. I decided to share that list here since it may be useful for others too.

 

Here we go:

 

  1. The API owner must be in control of error messages. This sounds like a given but especially when choosing a middleware product it should be evaluated whether internal errors may be returned instead of ones created by the API owner/developer. That is not desired
  2. APIs should return correct error messages. This is another one that should be a given. However, if this is not the case developers will be very confused
  3. Error messages should not reveal sensitive information. The error message should not expose implementation details such as stacktraces. Error messages should be as general and as specific as possible at the same time. For example, returning "authentication failed due to invalid credentials" is general but also specific enough. It would be wrong to return "authentication failed due to the incorrect password 'xyz'"
  4. Error messages should be returned in an expected message format. If the API consumes and produces JSON messages error messages should also be returned in JSON
  5. Error messages should be maintained in a single location. This may be controversial and depends on the API development environment. But if many APIs have to be managed, a system with a central location for maintaining error messages may be used. Otherwise, if the error messages are formulated within those APIs directly, it may be difficult to change or fix them (see the sketch after this list)
  6. Same errors should always cause the same error message. If an API implements parameter validation and fails, the produced error message should be the same across all APIs that implement the same validation. This should be consistent for all types of errors
  7. All possible error responses should be documented. Do not let your API consumers guess what errors may occur. Document all possible errors that may be returned. This includes potential reasons for a failed request and also solutions how this can be fixed. For example, if the error says "token is invalid" you may want to document "the given access_token has expired. Repeat the request using a valid access_token"
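
To illustrate requirements 3 - 6, here is a small Python sketch of a central error catalogue that always returns the same JSON body for the same condition. The error codes and messages are made up for this example.

import json

# Hypothetical central catalogue (requirement 5): one place to maintain the
# messages every API returns for the same condition (requirement 6).
ERROR_CATALOGUE = {
    "invalid_token": (401, "The given access_token has expired. "
                           "Repeat the request using a valid access_token."),
    "invalid_request": (400, "A required parameter is missing or malformed."),
}

def error_response(error_code):
    # Return a JSON body (requirement 4) without exposing internals (requirement 3).
    status, message = ERROR_CATALOGUE.get(error_code, (500, "An unexpected error occurred."))
    body = json.dumps({"error": error_code, "error_description": message})
    return status, {"Content-Type": "application/json"}, body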

 

Financial API (OpenID Foundation - FAPI)

FAPI is still in an early draft but has documented an approach for handling errors. It may be worth taking a look at it:

FAPI error handling - proposal

 

As usual please leave a comment with any questions or ideas or different views on this topic if you like.

Hello everybody!

I am happy to share with you that OTK and MAG 4.0 have been released. Both come with smaller and bigger changes. The main enhancement was made in this area:

  • Enable the easy button upgrade

 

Easy upgrade

OTK and MAG now have specific policies that are marked as customizable, whereas others are either implemented as read-only or suggested to be treated as read-only. This enables us to provide a smooth upgrade procedure. Whatever has been implemented in the customizable policies will NOT be replaced by an upgrade.

 

Customizations

In the area of customizations we are distinguishing between two different types:

  1. configurations: configure attributes such as token lifetimes in dedicated policies
  2. extensions: extend features OTK provides such as adding your own grant_type in dedicated policies

 

Both types are implemented as policy fragments that are included in read-only policies. The ones made for configurations are generally named like the target policy but with a '#' sign as prefix (e.g.: OTK Token Lifetime Configuration becomes #OTK Token Lifetime Configuration). In this case you would copy a variable from OTK Token Lifetime Configuration, paste it into #OTK Token Lifetime Configuration and set it to your desired value.

The policies for extending OTK's features are generally named with a suffix such as ' ... Extension' (e.g.: OTK User Authentication becomes OTK User Authentication Extension).

 

Highlights

Here is a list of my personal highlights of the new release:

OTK

  • Upgrades as of 4.x are easy
  • OTK now supports a configurable number of sessions per resource_owner/ client which was limited to 1 in the past
  • OTK now supports JWT signatures with RS256 out of the box
  • OTK now maintains the database (MySQL and Oracle) through a scheduled task. There is no need anymore for an external cron-job
  • Extensions can be used to support custom id_token creation and validation
  • Extensions can be used to support custom grant types
  • OTK uses only two locations where local cache assertions are used. With that it's very easy to replace the local cache assertion with a remote cache assertion, and it's also easy to audit all data that is being cached and retrieved
  • The Authorization Server's login and consent pages can be modified easily
  • Many configurations/ extensions can be configured per client by leveraging the custom field per client and client_id in OAuth Manager. This should introduce huge flexibility

MAG

MAG received updates but here is my favourite one:

  1. A new feature is the so-called enrollment process. This enables an enterprise to publish an app in the app store with a minimal configuration. The full configuration can be retrieved at runtime, which makes it very flexible

 

I surely missed some other enhancements but I wanted to highlight the ones above. If you have further questions or if you need guidance on how to use OTK and MAG as of now please leave a comment. For the complete documentation please go here and search for CA API Management OAuth Toolkit or CA Mobile API Gateway documentation.

Hi everybody!

This week's tip is meant to make life in a development environment easier. If you are in an environment where multiple developers have their own instance of a CA API Gateway but also share it with others, this tip is for you.

 

Here is the tip: increase the number of login attempts

You may have discovered that different developers use similar usernames when logging in to Policy Manager. For example, you may use 'admin' or 'administrator' with a simple password such as 'password' on your own CA API Gateway. Other developers may use 'admin' but with a password such as 'Password' on their CA API Gateway.

 

If you now ask one of the others to connect to your CA API Gateway they may attempt to log in with 'admin/Password' by mistake. Unfortunately, after three failed attempts your 'admin' account will be blocked since your 'admin' password is 'password'.

 

To save you from this annoying situation you can configure a cluster-wide property that sets the number of failed login attempts allowed before the account gets blocked. Another cluster-wide property lets you configure the lockout duration.

 

In Policy Manager open Tasks - Global Settings - Manage Cluster-Wide Properties and configure these variables:

  • logon.maxAllowableAttempts: the number of failed login attempts before the account gets blocked
  • logon.lockoutTime: the duration in seconds before another login attempt can be made

 

Use those variables with care in a production environment but make use of them in a dev environment if possible.

 

I hope this helps!

Hi everybody!

Some of you who have built policies have also used the assertions called Look Up In Cache and Store to Cache. Usually they are used to improve performance or to keep track of session data across multiple requests and responses. Those assertions work very well for that. But there is something you need to know about them ...

 

Here is the tip: use Cache ID and Maximum entry age correctly!

When using Store to Cache the following values can be configured:

  • Cache ID: the name of your cache
  • Cache entry key: the key used to identify a specific entry
  • Maximum entries: the number of entries this cache should accept
  • Maximum entry age: the lifetime of an entry
  • Maximum entry size: the size per entry

 

At first glance this makes sense and does not raise any questions.

But there is the catch:

The assertion will always maintain exactly one Maximum entry age per Cache ID

Look at this example:

  • Cache ID: myCacheID, Cache entry key: myKey, Maximum entry age: 300
    • the entry for myKey will be cached for 300 seconds
  • Cache ID: myCacheID, Cache entry key: myNewKey, Maximum entry age: 600
    • the entry for myNewKey will be cached for 600 seconds

At this point the entry for myKey has been removed from the cache since the lifetime for this cache is now 600!

 

How to use the local cache

It is important that you are aware of the behaviour described above; otherwise your caching strategy will not work as expected. The simplest solution is to always use the same Maximum entry age per Cache ID. Alternatively, always create the Cache ID in combination with the Maximum entry age being used. If you are using variables that contain the values, do the following (a small sketch after the list illustrates both the pitfall and the workaround):

  • Cache ID: ${cacheID}${lifetime}
  • ...
  • Maximum entry age: ${lifetime}
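
Here is a small Python model of the described behaviour, written only to illustrate the pitfall and the workaround; it is not how the gateway implements its cache internally.

import time

class LocalCacheModel:
    # Toy model: exactly one 'Maximum entry age' is maintained per Cache ID,
    # so storing with a different age flushes the existing entries.
    def __init__(self):
        self.caches = {}   # cache_id -> (max_age, {key: (value, stored_at)})

    def store(self, cache_id, key, value, max_age):
        age, entries = self.caches.get(cache_id, (None, {}))
        if age != max_age:
            entries = {}   # new lifetime for this Cache ID -> old entries are gone
        entries[key] = (value, time.time())
        self.caches[cache_id] = (max_age, entries)

    def lookup(self, cache_id, key):
        age, entries = self.caches.get(cache_id, (None, {}))
        hit = entries.get(key)
        if hit is None or time.time() - hit[1] > age:
            return None
        return hit[0]

cache = LocalCacheModel()
cache.store("myCacheID", "myKey", "value A", 300)
cache.store("myCacheID", "myNewKey", "value B", 600)
print(cache.lookup("myCacheID", "myKey"))        # None: flushed by the new entry age

lifetime = 300                                   # the workaround: lifetime becomes part of the Cache ID
cache.store("myCacheID" + str(lifetime), "myKey", "value A", lifetime)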

 

I hope this helps building better policies!

Hi everybody!

I just received a question, the same one that I received last week. Therefore I thought I'll start the series "Tip of the week" with little tips and tricks around building policies.

 

Here is the tip: debugging policy

Debugging policies can be difficult. We do have the policy debugger and we do have a debugging policy. Both tools help, but the out-of-the-box debugging policy does not really produce meaningful output. Therefore, here is how you should use it (well, how I use it, which may also work for you):

 

  1. Place an Audit Messages in policy assertion into your service and configure it to always capture requests and responses
  2. Do a right-click on the service
  3. Select the service properties, then select Enable policy debug tracing
  4. In the appearing dialog Do you want to edit the debug trace policy now? select Yes
  5. In that policy you want to remove the "Audit Messages in Policy" assertion on line 5
  6. Now select the "Add Audit Details" assertion and let's work on that ...

 

That assertion has this content:

TRACE: service.name=${trace.service.name} policy.name=${trace.policy.name} policy.guid=${trace.policy.guid} assertion.number=${trace.assertion.numberstr} assertion.shortname=${trace.assertion.shortname} status=${trace.status}

As you can see it contains strings such as "service.name", ... "policy.guid", ... these strings are used as labels. That is all good but it makes the audit result difficult to read. It also has the "trace.status" at the end although that may be the most important information you want to look at. Here is an example of the output:

 

 

What you are interested in are error codes (trace.status) and policy line numbers (trace.assertion.numberstr). But they are hidden in the loooooong audit message, right at the end.

 

Your new debugging policy

You can turn this message into something very helpful by replacing the content of the audit detail assertion using this message:

TRACE: [${trace.status}][${trace.assertion.numberstr}][${trace.assertion.shortname}][${trace.policy.name}]

I removed all labels and I moved the important content to the front. I have put variables into [...] brackets. The brackets will contain the following information:

  • 1: [...]: '0' if the assertion executed without error, >0 if an error occurred. Depending on your policy you may expect assertions to fail, such as comparisons!
  • 2: [...]: the assertion line number. It will tell you exactly which line was executed. If it shows something like [3.45] read it as: line 3 in the service is a fragment (or encapsulated assertion). Within that fragment (or encapsulated assertion) the policy failed on line 45. If the line numbers are not shown for encapsulated assertions open the encapsulated assertion configuration and enable Allow debug tracing into backing policy
  • 3: [...]: the assertion name
  • 4: [...]: the policy name

 

The output looks like this now:

You will now find failing assertions more easily by just checking the value of the first bracket. The line number in bracket no. 2 then makes it easy to locate the failing assertion in the policy.
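
If you export or tail the audit log, a few lines of Python can pull out only the failing assertions from this format. The log file name is hypothetical and the pattern simply mirrors the four brackets above.

import re

TRACE_PATTERN = re.compile(r"TRACE: \[(\d+)\]\[([\d.]+)\]\[([^\]]*)\]\[([^\]]*)\]")

def failing_assertions(log_lines):
    # yields (status, assertion number, assertion name, policy name) for status > 0
    for line in log_lines:
        match = TRACE_PATTERN.search(line)
        if match and int(match.group(1)) > 0:
            yield match.groups()

with open("audit_export.log") as log:            # hypothetical export of the audited messages
    for hit in failing_assertions(log):
        print(hit)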

 

I hope this helps making policy debugging easier!

Federation with OpenID Connect

I have been asked to write a blog post about federation in the context of OpenID Connect. Before I continue I would like to mention that a draft for OpenID Connect Federation exists. To see details please visit this website.

In this blog post I am not referencing that draft but I want to explain a few components that exist in OpenID Connect that can be used in any case.

 

Summary

At the end of this blog post you will know how to leverage OpenID Connect Discovery, JSON Web Key (JWK) and JSON Web Token (JWT). The bottom of this blog post shows a screenshot of my implementation. It starts where the authorization_code gets exchanged for an access_token.

 

The complete message flow this blog post is based on follows the OAuth authorization_code flow (RFC 6749). I am assuming that you understand that flow. This blog post should help you understand how to use an id_token.

 

Important APIs

For this blog post we are referencing two APIs found in the context of OpenID Connect:

  1. /.well-known/openid-configuration (Specification for Discovery)
  2. /jwks_uri (referenced in Discovery)

Those APIs (and with them the specifications) enable anyone to configure OAuth clients and validate JWTs. The JWT in this case contains an OpenID Connect id_token and is digitally signed using RS256 as the algorithm.

 

On a side note:

JWT and id_token are often used synonymously; even in this blog post you will find that. Please remember (a small decode sketch follows the list):

  • id_token: a JSON message with a well defined structure based on OpenID Connect
  • JWT: a base64url encoded string containing a jwt-header, jwt-payload, jwt-signature
  • in the context of OpenID Connect: jwt-payload --> base64url encoded id_token!
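
The relationship is easy to see in code. The following Python sketch only splits and base64url-decodes a JWT to reveal the id_token; it deliberately does not validate the signature (that comes later in this post).

import base64
import json

def decode_segment(segment):
    # base64url decode one JWT segment, restoring the stripped '=' padding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def peek_id_token(jwt_string):
    # jwt-header [.] jwt-payload [.] jwt-signature
    header_b64, payload_b64, _signature_b64 = jwt_string.split(".")
    header = json.loads(decode_segment(header_b64))
    id_token = json.loads(decode_segment(payload_b64))   # the jwt-payload is the id_token
    return header, id_token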

How these APIs relate to each other

Here is a brief explanation of these APIs:

  • /.well-known/openid-configuration
    • Idea: a list of details that are supported by an OAuth/ OpenID Connect provider
    • Content: "location of authorization endpoint", "location of token endpoint", "supported SCOPEs and claims", "algorithms for JWT signatures", information like that
    • In this blog post we will act as a client of Microsoft (MS) and will leverage their discovery API
    • Details: Discovery - details
  • /jwks.json:
    • Idea: a list of public keys (certificates) that can be used to validate JWT
    • In this blog post we will act as a client of Microsoft and will leverage their jwks API
    • Example: Microsoft's jwks_uri

 

Here is an image visualizing the connection between different APIs:

Overview of important OpenID Connect API's

Here is the cool part of the API relationship (a small client-side sketch follows the list):

  • as a provider you only have to publish the Discovery endpoint (/.well-known/openid-configuration). All other APIs are referenced in its JSON response
  • as a client you only need to know the location of the discovery endpoint and you'll get all others
  • the image also shows the /register endpoint which allows clients to register themselves as an oauth client. We are not using that in this post
  • /authorize and /token are the usual oauth endpoints
  • /jwks.json contains all keys (certificates) required to validate JWT
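
As a client-side sketch (Python, using the third-party requests library), this is all it takes to go from the discovery location to everything else; the field names are the standard ones from the discovery response.

import requests

def load_provider_metadata(issuer):
    # only the discovery location has to be known up front
    config = requests.get(issuer.rstrip("/") + "/.well-known/openid-configuration").json()
    jwks = requests.get(config["jwks_uri"]).json()
    return {
        "authorization_endpoint": config["authorization_endpoint"],
        "token_endpoint": config["token_endpoint"],
        "jwks": jwks,                       # the public keys under the "keys" member
    }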

 

Leveraging MS Office 365 accounts

As an example I am now explaining how I configured the OTK (OAuth Toolkit) authorization server to accept Office 365 id_tokens to log users in to OTK.

 

NOTE: I have got a new website at which you can see this example working when you select "Sign in with Microsoft": oauth.blog/register. You can try it and remove your registration afterwards if you wish. 

 

The complete story looks like this:

  1. register yourself as a developer and create an oauth application. Important: register the correct redirect_uri at which you want to receive an authorization_code!
  2. build an application that uses the oauth credentials received from Microsoft for your app
  3. implement the redirect_uri. Handle success cases but also error cases
  4. implement the validation of the JWT
  5. implement the validation of the id_token
  6. extract any value of the id_token that you want to use as the username, i.e.: "name" or "email"

 

It looks like a lot to do ... and it is .. but depending on your oauth/oidc provider it may be as simple as peeing a hole into the snow or as complicated as restoring a 1973 Corvette. In my case it was not complicated.

 

Register an oauth client with Microsoft

I registered myself as a developer with Microsoft and created an oauth application named "Saschas Authorization Server". With that I received oauth client credentials. You can register yourself here: register

On my authorization server's login page I have included a logo which, when clicked, initiated the social login flow with Microsoft. At the end of this flow I received an authorization_code at my redirect_uri.

 

Implement the redirect_uri

At this API the authorization_code will be received and exchanged for an access_token (actually, in the case of MS, you will receive an id_token packaged as JWT instead of an access_token). The response looks something like this (shortened for readability):

 

{
 "token_type":"Bearer",
 "id_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjFMVE16YWtpaGlSbGFfOHoyQkVKVlhlV01xbyJ9.eyJ2ZXIiOiIyLjAiLC...DRiNjZkYWQifQ.y28AK...uv_vrKlLow"
}
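
For reference, the exchange itself is a plain POST to the token endpoint. Here is a minimal Python sketch using the requests library; the Microsoft v2.0 token endpoint is shown only as an example, and the credentials are of course your own.

import requests

TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def exchange_code(code, client_id, client_secret, redirect_uri):
    # exchange the authorization_code received at the redirect_uri
    response = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })
    response.raise_for_status()
    return response.json()                  # contains the "id_token" (a JWT) in the MS case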

In general, a JWT has this format:

jwt-header [.] jwt-payload [.] jwt-signature

 

The following steps are all implemented within my redirect_uri!

 

Prepare jwks

Microsoft publishes its discovery endpoint here: MS Discovery. If you follow that link you will see that all APIs that we need are referenced in the JSON response. In this case we are interested in "jwks_uri". The response message of that API looks like this:

{
 "keys": [{
  "kty": "RSA",
  "use": "sig",
  "kid": "Y4ueK2oaINQiQb5YEBSYVyDcpAU",
  "x5t": "Y4ueK2oaINQiQb5YEBSYVyDcpAU",
  "n": "p3pKrlon...CroPTYQ",
  "e": "AQAB",
  "x5c": ["MIIDBTCC....../8IHkxt"],
  "issuer": "https://login.microsoftonline.com/{tenantid}/v2.0"
 }, ...

It is an array of "keys". You may want to study all details, but what we need is this:

  • kid:
    • when we validate the JWT, we will find a matching kid in the JWT header. The kid tells us which certificate we have to use to validate it

 

Validate JWT

In OTK we can use the "Decode Json Web Token" assertion to validate the JWT. Other products surely have similar tools available. We just have to do 2 manual steps before we can use that assertion:

 

  1. base64 decode the JWT header (the string in front of the first [.] of the JWT). It looks like this:  {"typ":"JWT","alg":"RS256","kid":"1LTMzakihiRla_8z2BEJVXeWMqo"}
  2. extract the kid using a JSON Path assertion

 

As you remember, the kid references the certificate that can be used to validate the signature. We can now use that value with our "Decode Json Web Token" assertion like this (a Python equivalent is sketched after the configuration list):

 

Configuration of Decode Json Web Token

  • Source Payload: the JWT that we have received
  • Validation Method: tell the assertion to use a context variable when searching for "secrets"
  • Recipient Key Context Variable: contains the content of the JWKS file that I earlier imported after downloading it from Microsoft
  • Key Type: a JSON Web Key Set. As I said, it's an array of keys
  • Key ID: here it comes! We need to tell the assertion which certificate to use, identified by the kid
  • Destination Variable Prefix: the payload of the JWT, which is the id_token. That part is the one we are actually interested in!
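
Outside of the gateway, the same two manual steps plus the validation could look like this in Python. This sketch uses the PyJWT library as one possible choice (it is not what the gateway uses internally); jwks is the parsed content of the downloaded jwks_uri document.

import base64
import json

import jwt                                   # PyJWT
from jwt.algorithms import RSAAlgorithm

def validate_signed_jwt(token, jwks, client_id):
    # step 1: base64 decode the JWT header and read the kid
    header_b64 = token.split(".")[0]
    header = json.loads(base64.urlsafe_b64decode(header_b64 + "=" * (-len(header_b64) % 4)))
    kid = header["kid"]

    # step 2: pick the matching key from the JWKS (the 'Key ID' setting above)
    matching_jwk = next(key for key in jwks["keys"] if key["kid"] == kid)
    public_key = RSAAlgorithm.from_jwk(json.dumps(matching_jwk))

    # validate the RS256 signature; exp and aud are enforced here as well
    return jwt.decode(token, public_key, algorithms=["RS256"], audience=client_id)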

 

If the signature validation is successful we know that this token was issued by Microsoft!  Now the content of the id_token needs to be validated!

 

Validate the id_token

The content of the id_token looks something like this (a sketch of the corresponding checks follows the claim list below):

{
"ver": "2.0",
"iss": "https://login.microsoftonline.com/9188....b66dad/v2.0",
"aud": "2a817ae....9aa4b61",
"exp": 1485205244,
"iat": 1485118844,
"name": "Sascha Preibisch",
"preferred_username": "sascha.preibisch@ca.com",
"email": "sascha.preibisch@ca.com",
"sub": "AAAAA....Geg5k",
"tid": "9188....b66dad"
}

  • iss: the issuer. It contains a tenantid. You can choose to care about a specific tenantid (as I did) or simply accept any. If you do care, you MUST verify that it includes your acceptable tenant. The concept of a tenantid is specific to MS. In Google's case you could simply check whether the issuer is accounts.google.com (or https://accounts.google.com, one or the other may appear)
  • aud: my client_id. You MUST verify that it matches the one you used when requesting the token!
  • exp: the expiration. You MUST verify that it has not yet expired
  • name: the user's name information
  • preferred_username: I modified it for this example, but it will match the username that you are using in your corporation
  • email: the email address
  • sub: a value identifying the user within the system of MS (or maybe the tenant, not sure ...)
  • tid: the tenantid which matches the value in the iss value
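
Written out in Python, the checks for these claims could look like the sketch below. If you validated the JWT with a library as in the earlier sketch, exp and aud are typically enforced there already, but iss (and, for MS, the tenant) still need your own check; the issuer prefix and tenant parameters are placeholders.

import time

def validate_id_token_claims(id_token, expected_client_id,
                             expected_issuer_prefix, expected_tenant=None):
    if id_token["exp"] <= int(time.time()):
        raise ValueError("id_token has expired")

    if id_token["aud"] != expected_client_id:
        raise ValueError("id_token was not issued to this client")

    if not id_token["iss"].startswith(expected_issuer_prefix):
        raise ValueError("unexpected issuer")

    if expected_tenant is not None and expected_tenant not in id_token["iss"]:
        raise ValueError("id_token was issued for a different tenant")

    # values you may want to use as username and user-id
    return id_token.get("preferred_username") or id_token.get("email"), id_token["sub"]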

 

Except for extracting details of the id_token this is all you have to do. You have accepted and validated an id_token, issued by a third party. Your user is logged in!

 

To show you how few lines of policy you need in CA API Gateway to implement this I have added a screenshot below. It starts at "authorization_code received, exchange it for a token" (expand the image by clicking it):

 

API to validate a JWT and id_token

  • line 52: call the /token endpoint to receive an access_token in exchange for an authorization_code
  • line 55, 56: MS does not issue an access_token but an id_token (which comes in handy in our case). On line 55 we are extracting the id_token from the JSON response that was received. On line 56 we are setting a variable with the content naming it "jwt"
  • lines 57 - 60: match the JWT header, base64 decode it, extract the kid.
  • line 62: validate the JWT. That assertion is configured as shown further up
  • line 63 - 68: extracting values from the id_token
  • line 69 - 71: validating claims such as "exp", "aud" and "iss"
  • line 72 - ... : extracting details such as "sub", "name" and "email" which are then used as username and user-id

 

Do not forget

Even if you forget most of this blog post, keep the most important steps in mind:

  • receive an id_token
  • validate the signature
  • validate claims such as "exp"
  • be happy about this easy and convenient way of logging in users with third-party credentials

 

That's it!

 

I hope this blog post gives you an idea of how an id_token (JWT) can be used for federation.

Please leave a comment if you need further details or if you find an error or if you like what I described here.

 

Thanks a lot!

Welcome PKCE!

 

PKCE (RFC 7636)

Since OTK-3.6.00 was released at the end of December we support PKCE!

That's it, just wanted to let everyone know.

 

Easy Button upgrade

Some OTK users have realized that it is not always easy to upgrade from one version of OTK to another. To tackle that issue we are currently reorganizing the structure of OTK. The end result will be an OTK that is configurable AND upgradable with minimal effort.

 

To validate what we are doing I would like to invite all our customers to take part in our next sprint demo on Thursday, January 26. We will demo the following:

  • how to implement custom grant_types
  • how to implement custom id_token validation and generation
  • how to upgrade from one version to another without losing customizations

 

You will be able to provide immediate feedback and help us do a better job.

If you are interested please send an email to sascha.preibisch@ca.com and I will forward you an invitation.