Thursday, 21 June 2018

Keycloak Cordova Browser Tabs support

Thanks to gtudan we finally have browser tab support for Cordova in our JavaScript adapter. This enables using a system browser tab for the login flows to Keycloak, which brings better security as well as single sign-on and single sign-out to mobile applications secured with Keycloak.

This will be included in Keycloak 4.1.0.Final, which will be released soon. In the meantime, check out this screencast to see it in action!

Sunday, 17 June 2018

Red Hat Single Sign-On in Keynote demo on Red Hat Summit!

Red Hat Summit is one of the most important events of the year. Many geeks, Red Hat employees and customers have a great opportunity to meet, learn new things and attend lots of interesting presentations and trainings. During the summit this year, there were a few breakout sessions solely about Keycloak and Red Hat SSO. You can take a look at this blogpost for more details.

One of the most important parts of Red Hat Summit is the keynote demos, which show the main themes and strategies going forward. They typically also contain demos of the most interesting technologies Red Hat uses.

On the Thursday morning keynote, there was a demo showing the Hybrid Cloud with 3 clouds (Azure, Amazon, Private) in action! Many technologies and interesting projects were involved - among others, Red Hat JBoss Data Grid (JDG), OpenWhisk and Gluster FS. RH-SSO (the Red Hat product based on the Keycloak project) had the honor of being used as well.

Red Hat SSO setup details

The frontend of the demo was a simple mobile game. RH-SSO was used at the very first stage to authenticate users to the mobile game. Each attendee had an opportunity to try it for themselves. In total, we had 1200 players of the game.

There was a load balancer up front, and every user was automatically forwarded to one of the 3 clouds. The mobile application used the RH-SSO JavaScript adapter (keycloak.js) to communicate with RH-SSO.

With a JavaScript application, the whole OpenID Connect login flow happens within the browser and hence can rely on sticky sessions. So since the JavaScript adapter is used, you may think that we could do an "easy" setup and let the RH-SSO instances across all 3 clouds be independent of each other, with each of them using its own separate RDBMS and Infinispan caches. See the image below for what such a setup would look like:

With this setup, every cloud is aware only of the users and sessions created on it. This is fine with sticky sessions, but it won't work for failover scenarios in case one of the 3 clouds is broken or removed. There are other issues with it as well - for example, admins and users see only the sessions created on a particular cloud. There are also potential security issues: when an admin disables a user on one cloud, the user would still be enabled on the other clouds, as changes to users won't be propagated to them.

So we instead wanted to show a more proper, replication-aware setup - also because one part of the demo showed failover in action. One of the 3 clouds (Amazon) was killed, and users who were previously logged in on Amazon were redirected to one of the remaining 2 clouds. The point was that the end user wouldn't notice any change. Hence users previously logged in on Amazon had to still be able to refresh their tokens on Azure or the private cloud. This in turn meant that the data (users, user sessions and caches) needed to be replicated across all 3 clouds.

In Keycloak 3.x, we added support for Cross-datacenter (Cross-site) setup using external JDG servers to replicate data among datacenters (tech preview in RH-SSO 7.2). The demo was using exactly this setup. Each site had a JDG server, and all 3 sites communicated with each other through those JDG servers. This is the standard JDG Cross-DC setup. See the picture below for what the demo looked like:

The JDG servers were used during the demo not just for RH-SSO, but also for other parts of the demo. The details are described in another blog by Sebastian Laskawiec. The JDG servers were set up with ASYNC backups, which was more efficient and completely fine for the purpose of the demo, due to the fact that the mobile application was using the keycloak.js adapter. See the RH-SSO docs for more details.

Red Hat SSO customizations

RH-SSO was using the standard RH-SSO OpenShift image. For the Cross-DC setup, we needed to make the configuration changes described in the RH-SSO documentation. A few other customizations were made as well.

JDG User Storage

The RH-SSO Cross-DC setup currently requires both a replicated RDBMS and replicated JDG servers. When preparing the demo, we realized that setting up a clustered RDBMS in OpenShift, replicated across all 3 clouds, is not very straightforward.

Fortunately, RH-SSO is a highly customizable platform and, among other things, it provides the supported User Storage SPI, which allows customers to plug in their own storage for RH-SSO users. So instead of setting up a replicated RDBMS, we created a custom JDG User Storage, and the users of the example realm were saved inside JDG instead of the RDBMS.

The lesson learned is that we want to make the Keycloak/RH-SSO Cross-DC setup simpler for administrators. Hence we're considering removing the need for a replicated RDBMS entirely and instead storing all realm and user metadata within JDG. Then just a replicated JDG would be a requirement for the Cross-DC setup.

Other customizations

For the purpose of the demo, we created a custom login theme. We also created an Email-Only authenticator, which allows users to register just by providing their email address. This is obviously not very secure, but it's pretty neat for demo purposes. Keynote users were also able to log in with the Google Identity Provider or the Red Hat Developers OpenID Connect Identity Provider, which was useful for users who already had an account with those services.

If you want to see all these things in action, you can check out our Demo Project on GitHub and deploy it to your own OpenShift cluster! If you have 3 clouds, even better! You can then do the full setup, including JDG, to reproduce exactly the setup we used during the keynote demo.

Thursday, 14 June 2018

Keycloak 4.0.0.Final Released

To download the release go to the Keycloak homepage.

For details on what is included in the release check out the Release notes

The full list of resolved issues is available in JIRA.

Before you upgrade remember to backup your database and check the upgrade guide for anything that may have changed.

Thursday, 31 May 2018

Keycloak on OpenShift

In this post you'll see how to deploy Keycloak on OpenShift. You'll also learn how to deploy a Node.js based REST service and an HTML5 application to OpenShift and secure these with Keycloak.

There is also a screencast showing this example at

If you don't already have OpenShift available a good place to start is by using MiniShift.

Deploying Keycloak

First of all create a new project in OpenShift with oc by running:

oc new-project keycloak

The next thing to do is to import the Keycloak template into OpenShift, by running:

oc replace --force -f ""\

Now open the OpenShift console and open the keycloak project.

Click on Add to Project and Browse Catalog. In the catalog you should find Keycloak. Click on it.

Click next on the information. Under configuration set a username and password that you can remember in the Keycloak Administrator Username and Keycloak Administrator Password fields. Then click on create. Click on Continue to project overview.

Wait for the deployment to complete then click on the link to the application. Your browser will complain about the certificate as it is a self-signed certificate. Ignore this and proceed. Click on Administration Console, then login with the username and password you entered previously. Keep this tab open as you will need it later.

You have now deployed Keycloak onto OpenShift.

Configure Clients in Keycloak

We need to create clients for the service and the application we will secure.

Open the tab with the Keycloak admin console. Click on Clients and Create. For Client ID enter service and click Save. Under Access Type select bearer-only and click on Save.

Click on Clients then Create again. For Client ID enter app and click Save. For Valid Redirect URIs and Web Origins enter *. In a production environment it is very important that you enter the correct URL for your application, but since this is a demonstration we will simply allow all URLs. You can easily update these to the correct URLs after the application has been deployed.

Keep the Keycloak admin console tab open as again you will need it later.
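If you prefer scripting over clicking through the console, the same two clients can be created with Keycloak's admin CLI (kcadm.sh). This is only a sketch - the server URL, realm and credentials below are placeholders for your own values:

```shell
# Authenticate the admin CLI against the server (URL and credentials are placeholders)
bin/kcadm.sh config credentials --server https://<keycloak-host>/auth \
  --realm master --user admin

# The service is a bearer-only client
bin/kcadm.sh create clients -r master -s clientId=service -s bearerOnly=true

# The app is a public client; '*' redirects/origins are for demo purposes only
bin/kcadm.sh create clients -r master -s clientId=app -s publicClient=true \
  -s 'redirectUris=["*"]' -s 'webOrigins=["*"]'
```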

Deploy the Service

Go back to the tab with the OpenShift console and click on Add to Project and Browse Catalog again. This time click on Node.js. Click next on Information, then click on advanced options under Configuration.

Make the following changes:

  • Name: service
  • Git Repository URL:
  • Context Dir: openshift/service
  • Secure route: enable
  • TLS Termination: Edge
  • Insecure Traffic: Redirect
  • Deployment Config
Replace the value for KEYCLOAK_URL with the URL for Keycloak. You can find this by going back to the tab with the Keycloak admin console (copy the URL up to and including "/auth").

Click on Create then Continue to the project overview. Wait for the build and deployment to complete, then click on the link to the application. You should see "Not found!". Add "/service/public" to the URL and you should see "message: public" in JSON.

You have now deployed and secured the service. Keep this tab open as well, as you will need it later.
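As a sanity check from the command line, the endpoints can also be exercised with curl; the hostname, endpoint path for the secured call, and token variable below are placeholders (-k skips verification of the self-signed certificate):

```shell
# Unsecured endpoint - should return the "message: public" JSON seen above
curl -k https://<service-route>/service/public

# A secured endpoint - requires a bearer token issued by Keycloak
curl -k -H "Authorization: Bearer $ACCESS_TOKEN" https://<service-route>/service/secured
```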

Deploy the Application

Go back to the tab with the OpenShift console and click on Add to Project and Browse Catalog again. This time click on PHP. Click next on Information, then click on advanced options under Configuration.

Make the following changes:

  • Name: app
  • Git Repository URL:
  • Context Dir: openshift/app
  • Secure route: enable
  • TLS Termination: Edge
  • Insecure Traffic: Redirect
  • Deployment Config
Replace the value for KEYCLOAK_URL with the URL for Keycloak. You can find this by going back to the tab with the Keycloak admin console (copy the URL up to and including "/auth"). Also, replace the value for SERVICE_URL with the URL for the Service. You can find this by going back to the tab with the service (copy the URL up to and including "/service").

Click on Create then Continue to the project overview. Wait for the build and deployment to complete, then click on the link to the application. You should already be logged in. You can now invoke the service by clicking on Invoke Public to invoke the unsecured endpoint, or Invoke Admin to invoke the endpoint secured with the admin role. If you click on Invoke Secured it will fail, as the admin user you are logged in with does not have the user role. To be able to invoke this endpoint as well, go back to the Keycloak admin console. Create a realm role named user. Then go to Users, find your admin user, and under Role Mappings add the user role to the user.
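The role setup can likewise be scripted with the admin CLI instead of the console; the realm name here is a placeholder and the CLI is assumed to be already authenticated:

```shell
# Create the realm role, then grant it to the admin user
bin/kcadm.sh create roles -r master -s name=user
bin/kcadm.sh add-roles -r master --uusername admin --rolename user
```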

You have now deployed and secured the application as well as seen how the application can securely invoke the service you deployed previously.

Thursday, 24 May 2018

Keycloak 4.0.0.Beta3 Released

To download the release go to the Keycloak homepage.


Fuse 7 Adapter

There's now support for Fuse 7.

Cordova options in JavaScript adapter

It's now possible to pass Cordova specific options to login and other methods in the JavaScript adapter. Thanks to loorent for the contribution.

Search by user id on admin console

If you wanted to search for a user by id in the admin console you had to edit the URL. It's now possible to do it directly in the user search field.


The full list of resolved issues is available in JIRA.


Before you upgrade remember to backup your database and check the upgrade guide for anything that may have changed.

Wednesday, 2 May 2018

Red Hat Single Sign-On @ Red Hat Summit

At Red Hat Summit this year there are no less than 4 sessions about Red Hat Single Sign-On! If you are going to Summit make sure to join us.

OpenShift + single sign-on = Happy security teams and happy users

Dustin Minnich, Joshua Cain, Jared Blashka, Brian Atkisson. Tuesday 4 PM.

One username and password to rule them all.

In this lab, we'll discuss and demonstrate single sign-on technologies and how to implement them using Red Hat products. We'll take you through bringing up an OpenShift cluster in a development environment, installing Red Hat single sign-on on top of it, and then integrating that with a variety of example applications.

Securing service mesh, microservices, and modern applications with JSON Web Token (JWT)

Stian Thorgersen, Sébastien Blanc. Wednesday 10:30 AM.

Sharing identity and authorization information between applications and services should be done with an open industry standard to ensure interoperability in heterogeneous environments. JavaScript Object Signing and Encryption (JOSE) is a framework for securely sharing such information between heterogeneous applications and services.

In this session, we’ll cover the specifications of the JOSE framework, focusing especially on JSON Web Token (JWT). We’ll discuss practical applications of the JOSE framework, including relevant specifications, such as OpenID Connect. After this session, you’ll have an understanding of the specifications and how to easily adopt them using Red Hat single sign-on or another OpenID Connect provider.

Red Hat single sign-on: Present and future

Boleslaw Dawidowicz, John Doyle. Wednesday 3:30 PM.

Red Hat single sign-on (SSO) provides web SSO with modern, token-based protocols, such as OAuth and OpenID Connect. This session will highlight the features of the latest release and show the future direction of the technology within the Red Hat portfolio.

Securing apps and services with Red Hat single sign-on

Sébastien Blanc, Stian Thorgersen. Thursday 1:00 PM.

If you have a number of applications and services, the applications may be HTML5, server-side, or mobile, while the services may be monolithic or microservices, deployed on-premise or to the cloud. You may have started looking at using a service mesh. Now, you need to easily secure all these applications and services.

Securing applications and services is no longer just about assigning a username and password. You need to manage identities. You need two-factor authentication. You need to integrate with legacy and external authentication systems. Your list of other requirements may be long. But you don’t want to develop all of this yourself—nor should you.

In this session, we’ll demonstrate how to easily secure all your applications and services—regardless of how they're implemented and hosted—with Red Hat single sign-on. After this session, you'll know how to secure your HTML5 application or service, deployed to a service mesh and everything in between. Once your applications and services are secured with Red Hat single sign-on, you'll know how to easily adopt single sign-on, two-factor authentication, social login, and other security capabilities.

Keycloak 4.0.0.Beta2 released

To download the release go to the Keycloak homepage.


Pushed Claims

With pushed claims it is now possible for clients to push additional claims to have them used by policies when evaluating permissions.

Resource Attributes

It is now possible to define attributes on resources in order to have them used by policies when evaluating permissions.

Spring Boot 2 support

We now have support for Spring Boot 2.

Instagram identity provider

Thanks to hguerrero it is now easy to enable login with Instagram.

Slovak translation

Thanks to Joe32 we now have Slovak translations.


The full list of resolved issues is available in JIRA.


Before you upgrade remember to backup your database and check the upgrade guide for anything that may have changed.

Thursday, 19 April 2018

Keycloak Questionnaire

Are you using Keycloak? If so we would greatly appreciate it if you can take some time and answer some questions at

Thursday, 22 March 2018

Keycloak 4.0.0.Beta1 Released

I'm very pleased to announce the first release of Keycloak 4!

To download the release go to the Keycloak homepage.


Brand new login pages

The login pages have received a brand new look. They now look much more modern and clean!

UMA 2.0

Authorization Services have now introduced support for UMA 2.0 and added support for users to manage user access through the account management console. There's also a number of other additions and improvements to authorization services.

Themes and Theme Resources

It's now possible to hot-deploy themes to Keycloak through a regular provider deployment. We've also added support for theme resources, which allow adding additional templates and resources without creating a theme - perfect for custom authenticators that require additional pages in the authentication flow.

We've also added support to override the theme for specific clients. If that doesn't cover your needs, then there's a new Theme Selector SPI that allows you to implement custom logic to select the theme.

Native promise support to keycloak.js

The JavaScript adapter now supports native promises. Of course it still has support for the old style promises as well. Both can be used interchangeably.

Edit links in documentation

To make it easier to contribute changes to the documentation we have added links to all sections of the documentation. This brings you straight to the GitHub editor for the relevant AsciiDoctor file. There's also a quick link to report an issue on a specific page that will include the relevant page in the description.

HTTPS support on keycloak.org

Thanks to GitHub Pages and Let's Encrypt there's finally HTTPS on keycloak.org. About time!

Loads more..

The full list of resolved issues is available in JIRA.


Before you upgrade remember to backup your database and check the upgrade guide for anything that may have changed.

Monday, 26 February 2018

Keycloak and Istio


This short blog post shares our first trials of combining Keycloak with Istio.

What is Istio?

Istio is a platform that provides a common way to manage your service mesh. You may wonder what a service mesh is - well, it's an infrastructure layer dedicated to connecting, securing, and making your different services reliable.

Istio, in the end, will replace all of our circuit breakers, intelligent load balancing and metrics libraries, but also the way two services communicate securely. And this is of course the interesting part for Keycloak.

As you know, Keycloak uses adapters for each application or service that it secures. These adapters take care of performing the redirect if needed, retrieving the public keys, verifying the JWT signature, and so on.
There are a lot of different adapters depending on the type of application or technology that is used: there are Java EE adapters, JavaScript adapters, and we even have a NodeJS adapter.
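As a reminder of what the adapters actually handle: a JWT is just three base64url-encoded segments (header, payload, signature) joined by dots, so its claims can be inspected with nothing more than shell tools. The token below is a made-up example, not a real Keycloak token:

```shell
# A made-up JWT: base64url(header).base64url(payload).base64url(signature)
TOKEN='eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9.c2ln'

# Extract the payload segment and decode it (mapping base64url chars to base64 first)
printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+' | base64 -d
# prints {"sub":"alice"}
```

Decoding alone proves nothing about authenticity, of course - verifying the signature additionally requires the realm's public key, which is exactly the part the adapters take care of.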

The end of the adapters?

Following the Istio philosophy, these adapters would not be needed in the end, because the Istio infrastructure will take care of the tasks the adapters were doing (signature verification, etc.). We are not there yet, but in this post we will see what can already be done with Istio and how much of the adapters' role it can already replace.

The Envoy Sidecar

We won't dive into the details of how Istio works, but there is one main concept to understand, around which Istio is articulated: the Envoy sidecar. Envoy is a high-performance proxy deployed alongside each deployed service, and this is the reason we call it a "sidecar".

Envoy captures all incoming and outgoing traffic of its "companion" service; it can then apply some basic operations and also collect data and send it to a central point of decision, called the "mixer" in Istio. The configuration of Envoy itself happens through the "pilot", another Istio component.


Envoy Filters

To make it easier to add new functionality to the Envoy proxy, there is the concept of filters that you can stack up. Again, these filters can be configured by the Pilot, and they can gather information for the Mixer:

The JWT-Auth Filter

The Istio team has been developing a filter that interests us: the jwt-auth filter. As the name suggests, this filter is capable of performing checks on a JWT token that the Envoy proxy extracts from the HTTP request's headers.

The details about this filter can be found here.

The Keycloak-Istio Demo 

Now that you have the big picture in mind, let's take a look at the demo that has been developed by Kamesh Sampath (@kamesh_sampath) from the Red Hat Developer Experience team to show how Keycloak and Istio can be combined:

The demo runs inside a Minishift instance. Minishift is a tool that helps run OpenShift locally, and it has really nice support for Istio: it takes only a few commands to install the Istio layer inside a Minishift instance.

So inside our Minishift instance we will have:
  • A Keycloak pod: a pod containing a Keycloak server.
  • A web app pod (Cars Web): this pod contains the web app that will perform the authentication through the Keycloak login in order to obtain a JWT token.
  • Then we have the Istio-related components:
    • The Pilot to configure the Envoy proxies
    • The Mixer to handle the attributes returned by Envoy
  • The API service pod (Cars API): this pod will have two containers:
    • The API service itself, in this case a simple Spring Boot application
    • The Envoy sidecar container

The demo repository provides the Istio script to deploy the Envoy sidecar alongside the Spring Boot API service.

This is how the Cars API pod looks after it is deployed:

Now, the Envoy sidecar needs to be configured:
  • We indicate what needs to be configured - the kind of policy - and implicitly the correct filter (in our case the jwt-auth filter) will be configured.
  • It needs to know where to retrieve Keycloak's public key in order to verify the JWT signature.
  • The issuer: who has generated the token? In this case it's also the Keycloak server.
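Purely as an illustration of these three points, here is roughly how such a policy is expressed declaratively. This sketch assumes the authentication.istio.io/v1alpha1 Policy resource from later Istio releases rather than the exact filter configuration used in the demo, and the names and URLs are placeholders:

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: cars-api-jwt          # hypothetical name
spec:
  targets:
  - name: cars-api            # the service behind the Envoy sidecar
  origins:
  - jwt:
      # the issuer: who generated the token - here, the Keycloak realm
      issuer: https://<keycloak-host>/auth/realms/<realm>
      # where Envoy retrieves the public keys to verify the JWT signature
      jwksUri: https://<keycloak-host>/auth/realms/<realm>/protocol/openid-connect/certs
  principalBinding: USE_ORIGIN
```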

Now each incoming request to the API service will be checked by the Envoy sidecar to see if the JWT token contained in the header is valid. If it's valid, the request will be authorized; otherwise an error message will be returned.

The full instructions for the demo (including setting up Minishift with Istio) can be found here. Again, thanks to the awesome Kamesh for the work he delivered for this demo.

Friday, 9 February 2018

Keycloak and Angular CLI

So I made a schematic that installs and configures Keycloak in any Angular CLI application.

If you want to try it out, do this from the command line:
> npm install -g @ssilvert/keycloak-schematic
> ng new myApp
> cd myApp
> ng generate keycloak --collection @ssilvert/keycloak-schematic --clientId=myApp

Now Keycloak is integrated into your app.  Of course, you can do this with any existing Angular CLI application.  It doesn't have to be a new one.

Then, go to the Keycloak Admin console (master realm) and go to Clients --> Add Client --> Select File.

Select the client-import.json file that the "ng generate keycloak" command created in /myApp.

Assuming your Keycloak server is running on localhost:8080, you are ready to go.  Start your application:
> ng serve

Go to your browser to start the app and see this:
Oh joy! myApp is protected with Keycloak!

The keycloak-schematic installs a KeycloakService and a KeycloakGuard.  So you can easily:
  • Add login/logout buttons
  • Access user self service (account management)
  • Guard protected routes instead of the whole app
  • Work with roles
  • Lots more
Click here for a comprehensive getting started guide, full documentation, and sample code.

Note that this stuff is early alpha right now, and it will move from @ssilvert to @keycloak before long. In the meantime, I'd love to get feedback. There is a lot to do to make Keycloak/Angular integration even better, but I think the keycloak-schematic is a big step forward.

So long, and thanks for all the fish.


Monday, 15 January 2018

Keycloak Cross Data Center Setup in AWS

Sample Keycloak Cross Data Center Setup in AWS Environment

With Keycloak 3.3.0, support for large-scale deployment across multiple data centers (also called cross-site, X-site, cross data-center, or cross-DC) became available. The natural question arose of how this support can be utilized in a cloud environment. This blog post follows up on a previous blog post on setting up cross-DC locally, and enhances it with an example of how to set up this type of deployment in Amazon Web Services (AWS).
It is strongly recommended to use version 3.4.3.Final at minimum, as several important fixes around cross-DC support have been made since the first cross-DC-capable version.


The general architecture of a cross-DC deployment is described in detail in the Keycloak documentation and briefly shown in the following diagram. There are several data centers (site1 and site2 in the picture, which can be found in full scale in the documentation). The sites have a replicated database, ideally set up in multimaster synchronous replication mode. Each site has a cluster of Keycloak nodes and a cluster of Infinispan nodes. The clusters of Keycloak nodes are hidden behind a load balancer in a private subnet; the Infinispan nodes form a cluster within the corresponding data center, and in addition utilize the RELAY protocol to back each other up across data centers.


This post is based on three CloudFormation templates that gradually build two data centers with Keycloak instances, each data center in a separate AWS availability zone sharing the same virtual private cloud (VPC). Note that the templates are intended for trying/testing purposes only, not for production. The templates are described below:
  1. VPC stack. This stack creates a new VPC with four subnets: two of them in one availability zone, the other two in another availability zone. One of the subnets in each availability zone is private, intended for Keycloak instances; the other subnet in each availability zone is intended for the load balancer and Infinispan (so that these can communicate over the internet).

    The only parameter in this stack is the number B in VPC IP address range 10.B.0.0/16.

    Click the button below to launch this stack:
  2. Database and AMI stack. This stack creates an RDS Aurora MySQL-compatible database instance, builds Keycloak from source, creates the S3 buckets necessary for dynamic node discovery via the S3_PING protocol, and produces an AMI image that contains both Keycloak and Infinispan preconfigured to form the appropriate clusters. It relies on AWS Lambda-backed custom resources, so in order to create them, this template creates a role for these Lambdas. To launch this template, the user is hence required to grant the CAPABILITY_IAM capability.

    Both the Keycloak and Infinispan servers are prepared just the same way as for running the cross-DC tests, and are then placed into the /opt/tests path, with the relevant parts of their configuration updated to suit the AWS deployment.

    This template has several parameters, most of them self-describing:
    - VPC stack name: Name of the stack created in the previous step
    - Instance type for building image
    - Database instance type: Type of the database as available in RDS
    - Install diagnostic tools: Flag signalling whether the diagnostic tools should be installed
    - URL to Maven repository for build: To speed up the build, instead of downloading each Maven artifact, a URL to a .zip file containing the whole $HOME/.m2 directory can be provided; it will be unpacked prior to the actual build to provide the artifacts, thus speeding the build up.
    - Keycloak Git repository and Git tag/branch/commit: Git repository and tag from which the build should start.

    Click the button below to launch this stack:
  3. Keycloak deployment stack. This stack instantiates one Infinispan node in the public subnet per data center, the given number of Keycloak servers in the private subnet joined in a cluster in each data center, and an AWS Application Load Balancer to spread the load between the actual Keycloak servers. If not restoring the database from backup, it also creates an initial user admin with password admin in the master realm, and configures the master realm to permit insecure http access to the admin console (remember, it is only a test instance, don't do this in production!).

    This template has several parameters, most of them self-describing:
    - AMI stack name: Name of the stack created in the previous step
    - Keycloak instances per data centre: Number of Keycloak nodes per data center
    - Instance type for Keycloak servers
    - Instance type for Infinispan servers
    - SSH key name: Name of the EC2 ssh key used for instance initialization
    - Load balancer scheme: This setting determines whether the load balancer would be assigned a public or a private IP only. See the AWS documentation for further information.
    - Database backup URL: In case you have a dump of a Keycloak MySQL/MariaDB database, you can initialize the database with it by providing a URL to that dump. The dump might optionally be gzipped; the .gz suffix of the dump is then mandatory.

    Click the button below to launch this stack:
Once you launch the last stack, Keycloak will be available at the load balancer address that will be shown in Outputs tab of the third stack under LoadBalancerUrl key.

Connecting to nodes

Since the Infinispan nodes are assigned public IPs and the security group is set to permit SSH traffic, you can use the standard way to access the Infinispan nodes.

Accessing the Keycloak nodes is only a bit more complicated, since these are spawned in private subnets and can only be accessed via the Infinispan nodes. You can either copy the private key to the intermediate Infinispan node and use it from there, or (easier) use SSH agent forwarding as follows:
  1. On your local host, add your AWS ssh key to agent:
    ssh-add /path/to/my/aws_ssh_key
  2. Now ssh to the Infinispan host, adding the ForwardAgent option:
    ssh -oForwardAgent=yes \
  3. From the Infinispan host, you can now ssh to the Keycloak node:
    ssh ec2-user@${KeycloakServerDcX.PrivateDnsName}

Connecting to Infinispan JConsole

As you would find out from the cross-DC guide, many of the DC-wide operations require running JConsole and invoking operations on Infinispan JMX MBeans. For example, to take a DC offline, one first has to disable backups from the other DCs into the DC about to be shut down, and that is performed by invoking the takeSiteOffline operation on the CacheManager's GlobalXSiteAdminOperations MBean.

To connect, it is easiest to create a tunnel to the Infinispan node via an SSH command. To simplify the situation a bit, the ssh command for connecting to the Infinispan server and creating the tunnel is shown in the Outputs tab of the third stack under the SshToInfinispanDcX key, and it takes the following form:

ssh -L 19990: \
 -oStrictHostKeyChecking=no \
 -oUserKnownHostsFile=/dev/null \
 -oForwardAgent=yes \

In the command above, host key checking is effectively disabled, as this is only a test run - do not do this in production!
Now it is necessary to add an Infinispan management user so that it is possible to fill in the JConsole credentials:

/opt/tests/cache-server-infinispan/bin/add-user.sh -u admin -p pwd

The last thing is to run the actual JConsole. Since JConsole does not support the service:jmx:remote+http protocol used by both Infinispan and Keycloak, it is necessary to modify the JConsole classpath. Fortunately, this work has already been done in WildFly, so we can use a script already prepared there. On your local host, extract either WildFly 10+ or Infinispan to a path WF_ROOT, and run the following command:

WF_ROOT/bin/jconsole.sh
In the New Connection window, specify the Remote Process properties as follows (note that we're using port 19990 on localhost, forwarded securely by ssh to the actual management port above; this requires the ssh command above to be running for the whole time JConsole is used):
  • Remote Process: service:jmx:remote+http://localhost:19990
  • Username: admin
  • Password: pwd

Now you can connect to the running instance, navigate to any bean you need and perform operations as needed. The backup site names are configured by the AMI stack to values dc-1 and dc-2.

For further details, please inspect the configuration files in /opt/tests/auth-server-wildfly/standalone/configuration/standalone-ha-DC.xml and /opt/tests/cache-server-infinispan/standalone/configuration/clustered-DC.xml.


This blog was written at the time Keycloak 3.4.3.Final was released. There may be incompatible changes in the future, but you should still be able to run the templates with this version.

Troubleshooting AWS specifics

  • Node discovery for both the Keycloak and Infinispan clusters in AWS is handled by the S3_PING protocol. This protocol, however, can operate only in regions that support Version 2 signatures, due to this JGroups bug. See the Amazon documentation on S3 endpoints for the regions that support Version 2 signatures. Note that it might be possible to use the new NATIVE_S3_PING protocol, but this one has not yet been incorporated into Keycloak due to this WildFly issue. As a workaround, you might be able to use another discovery protocol, e.g. JDBC_PING.
  • The recommended database products for cross-DC deployments are only those listed in the documentation (currently Oracle Database 12c RAC and Galera cluster for MariaDB). It is nevertheless possible to use the ones available from the Amazon RDS service. The templates from this blog are only prepared for MySQL/MariaDB databases.
  • It is possible to use an Amazon ALB for load balancing when the related target group is set to support load balancer stickiness. The ALB uses a proprietary load balancer cookie and ignores the routes set in Keycloak cookies, hence adding the route to the cookie should be disabled in the Keycloak configuration.
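Disabling the route in the cookie is done through the stickySessionEncoder SPI in the Keycloak server configuration (standalone-ha.xml), as described in the sticky sessions section of the server installation documentation:

```xml
<spi name="stickySessionEncoder">
    <provider name="infinispan" enabled="true">
        <properties>
            <!-- do not append the node route to the AUTH_SESSION_ID cookie;
                 the ALB handles stickiness with its own cookie instead -->
            <property name="shouldAttachRoute" value="false"/>
        </properties>
    </provider>
</spi>
```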

Thursday, 4 January 2018

Keycloak, Apache and OpenID Connect

mod_auth_openidc makes it easy to secure your applications running in Apache, or when Apache is used as a reverse proxy. It can be used both for enabling SSO for web applications and for securing RESTful services. For more details check out our documentation as well as the guides from mod_auth_openidc.

Keycloak 3.4.3.Final released

We've just released Keycloak 3.4.3.Final.

To download the release go to the Keycloak homepage.

The full list of resolved issues is available in JIRA.


Before you upgrade remember to backup your database and check the upgrade guide for anything that may have changed.