Thursday, 21 May 2015

Securing Microservices

In a traditional multi-tiered architecture like the one shown in the picture below, a server-side web tier deals with authenticating the user by calling out to a relational database or an LDAP server. An HTTP session is then created containing the required authentication and user details. The security context is propagated between the tiers within the application server, so there's no need to re-authenticate the user.

With a microservice architecture the server-side web application is gone; instead we have HTML5 and mobile applications on the client side. The applications invoke multiple services, which may in turn call other services. In this architecture there's no longer a single layer that can deal with authentication, and the services are usually stateless as well, so there's no HTTP session.

As microservices are all about having many smaller services, each dealing with one distinct task, the obvious solution to security is a dedicated authentication and authorization service. This is where Keycloak and OpenID Connect come to the rescue. Keycloak provides the services you need to secure microservices.

The first step to securing microservices is authenticating the user. This is done by adding the Keycloak JavaScript adapter to your HTML5 application. For mobile applications there's a Keycloak Cordova adapter, but there's also native support through the AeroGear project.

In an HTML5 application you just need to add a login button and that's pretty much it. When the user clicks the login button, the user's browser is redirected to the login screen on the Keycloak server, where the user authenticates. As the authentication is done by the Keycloak server and not your application, it's easy to add support for multi-factor authentication or social logins without having to change anything in your application.
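
For example, with the JavaScript adapter the client-side setup can look roughly like this. This is a minimal sketch: the adapter script is served by the Keycloak server (the localhost URL is a placeholder), keycloak.json is the adapter configuration you download from the Keycloak admin console, and the exact adapter API may differ slightly between versions:

 <script src="http://localhost:8080/auth/js/keycloak.js"></script>
 <script>
   // Create the adapter from the configuration in keycloak.json (realm, client id, server URL)
   var keycloak = Keycloak('keycloak.json');

   // On page load, check whether the user is already logged in without forcing a login
   keycloak.init({ onLoad: 'check-sso' });

   // The login button simply redirects the browser to the Keycloak login screen
   document.getElementById('login').onclick = function() {
     keycloak.login();
   };
 </script>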

Once the user is authenticated, Keycloak gives the application a token. The token contains details about the user as well as permissions the user has.

The second step in securing microservices is to secure the services themselves. Again, with Keycloak this is easy to do with our adapters. We've got JavaEE adapters and NodeJS adapters, and we're planning to add more in the future. If we don't have an adapter, it's also relatively easy to verify the tokens yourself. A token is basically just a signed JSON document, and can be verified either by the service itself or by invoking the Keycloak server.
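
For example, a NodeJS service without an adapter could check the token signature against the realm's public key with a standard JWT library. A minimal sketch, assuming the third-party jsonwebtoken module is available and that you've copied the realm's PEM-encoded public key from the Keycloak admin console:

 var jwt = require('jsonwebtoken'); // third-party JWT library (assumed available)

 // Placeholder: the realm public key copied from the Keycloak admin console
 var realmPublicKey = '-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----';

 function verifyToken(accessToken, callback) {
   // Checks the signature and expiry; on success the decoded claims
   // (username, roles, and so on) are passed to the callback
   jwt.verify(accessToken, realmPublicKey, function(err, claims) {
     callback(err, claims);
   });
 }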

If a service needs to invoke another service, it can pass on the token it received, which will invoke the other service with the user's permissions. Soon we'll add support for services to authenticate directly with Keycloak so they can invoke other services with their own permissions, not just on behalf of users.
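
On the wire this just means forwarding the token in the standard OAuth2 bearer header. A minimal NodeJS sketch, where the host, port and path of the downstream service are placeholders:

 var http = require('http');

 // Invoke a downstream service with the token we received, so the call
 // is made with the user's permissions
 function callOtherService(accessToken, callback) {
   http.get({
     host: 'other-service',
     port: 8080,
     path: '/api/orders',
     headers: { 'Authorization': 'Bearer ' + accessToken }
   }, callback);
 }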

For more details about OpenID Connect you can look at the OpenID Connect website, but the nice thing is that with Keycloak you don't really need to know the low-level details, so that's completely optional. As an alternative, just go straight to the Keycloak website, download the server and adapters, and check out our documentation and many examples.

Tuesday, 19 May 2015

Keycloak 1.2.0.Final released

No major changes since 1.2.0.CR1, only some minor bug fixes.

For full details see JIRA and to download go to SourceForge.

Monday, 11 May 2015

Distribution changes

We're making some changes to how Keycloak is distributed. Some changes already arrived in 1.2.0, while others will come in 1.3.0. In this post I'll explain what changes we're making and why we're doing it.

Standalone will be the default download of the Keycloak server and will be the recommended option for production and also for non-JavaEE developers. While the appliance was built on the full WildFly application server, the new standalone download will be based on WildFly Core. This will provide a much smaller download and also reduce the footprint of the server.

For JavaEE developers we'll also provide an option to install the Keycloak server into an existing WildFly application server. This download is not recommended for production. It will only be possible to install a given release of the Keycloak server onto one specific WildFly version.

These changes do not affect adapters. Adapters will still be available for JBoss AS7, several versions of WildFly, EAP, Tomcat, Jetty and more.

There are a few reasons why we're going away from the WAR option:

  • Compatibility - We can focus on providing features instead of spending time trying to get things to work on multiple containers.
  • Testing - Again, we can focus on testing features rather than testing multiple containers.
  • WildFly features - We believe WildFly is the best container and it provides a lot of features other containers don't (for example modules, JBoss CLI, patching).
  • Integration - By only integrating with one container we can provide a single way of doing things. Some features are not feasible to add to multiple containers, for example authenticating clients with SSL certificates.

Wednesday, 6 May 2015

Persistent grants in Keycloak

In Keycloak versions before 1.2.0.CR1, we had two separate entities representing client applications: Applications, which didn't require consent from the user after authentication, and OAuth Clients, where consent was always required.

In Keycloak 1.2.0.CR1 we simplified this and merged Applications and OAuth Clients into a single Client entity, which has an on/off configuration switch in the Keycloak admin console specifying whether consent is required.

When consent is not required, then after the user authenticates, the client application automatically receives an access token with all the personal info and permissions of the user. However, if consent is required, Keycloak displays a consent screen where the user needs to confirm all the permissions required by the particular client.

From this release, granted consents are persistent. This means the user doesn't need to confirm consent for a particular client more than once. Each user can also view all the available client applications in Keycloak Account Management and see:

  • The permissions they have for a particular application
  • The personal info granted to a particular application
  • The permissions granted to a particular application

Users can also revoke a previously granted consent, so the next time they authenticate to the application, the consent screen will be shown again.

It's also possible to view and manage consents from the Keycloak admin console. An admin can see the consents for every user and revoke them.

Tuesday, 5 May 2015

Keycloak 1.2.0.CR1 Released

We've just released Keycloak 1.2.0.CR1. As usual this release brings some great new features. This time around there are also some fairly big changes.

Distribution changes

Keycloak is now available in 3 downloads: standalone, overlay and demo bundle. The standalone download is aimed at production and non-JavaEE developers and provides a standalone Keycloak server. The overlay is mainly aimed at JavaEE developers that want to add Keycloak to an existing WildFly 8.2.0.Final (or EAP 6.4.0.GA) installation. Finally, the demo bundle is aimed at early adopters and contains WildFly with Keycloak as well as all documentation and examples.

Theme changes

We've updated the look and feel of the admin console, login pages and account management to better match PatternFly. This provides a better integration between Keycloak and other JBoss projects.

Client changes

In previous versions Keycloak had applications and oauth clients. The main difference between the two was that oauth clients required consent from the user. These have been combined into a single client with an option to require consent or not.

New features

  • Token mapping - Through token mapping it's possible to pull in additional information from brokered identity providers
  • Store and retrieve external token - It's now possible to store the token retrieved from brokered identity providers. Clients can retrieve this if they need to invoke services secured by the external identity provider.
  • Persist and manage consents - When a user consents access to a client the consents are now saved. Users can also view and manage consents given to clients through the account management console.
  • Password Policies - Through password policies it's now possible to prevent re-use of previous passwords, require users to regularly update their password and also provide a regular expression for required password format.
  • HttpClient SPI - The introduction of a HttpClient SPI makes it possible to configure the HTTP connections initiated by Keycloak. For example to provide a trust store.
  • KeycloakContext - KeycloakContext is exposed through KeycloakSession and gives providers access to HTTP headers, cookies, query parameters, etc.
  • Logging Updates - The JBoss Logging event listener is now enabled by default for new realms. This makes it easier to view debug log information for login events.
  • Spring Security Adapter preview - We now have a Spring Security adapter. There's no documentation and we haven't tested it thoroughly, so consider this a preview.

Monday, 20 April 2015

Kerberos support in Keycloak

From version 1.2.0.Beta1, Keycloak supports login with a Kerberos ticket through SPNEGO. SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism) is used to authenticate transparently through the web browser after the user has been authenticated with Kerberos when logging in to their desktop session. For non-web cases, or when a Kerberos ticket is not available during login, Keycloak also supports login with a Kerberos username and password.

Flow

A typical use case for web authentication is the following:

  • The user logs into their desktop (such as a Windows machine in an Active Directory domain, or a Linux machine with Kerberos integration enabled).
  • The user then uses their browser (IE/Firefox/Chrome) to access a web application secured by Keycloak.
  • The application redirects to the Keycloak login.
  • Keycloak sends the HTML login screen together with status 401 and the HTTP header WWW-Authenticate: Negotiate.
  • If the browser has a Kerberos ticket from the desktop login, it transfers the desktop sign-on information to Keycloak in the header Authorization: Negotiate 'spnego-kerberos-token' (a simplified wire-level sketch follows this list). Otherwise it just displays the login screen.
  • Keycloak validates the token from the browser and authenticates the user. It provisions user data from LDAP (in the case of the LDAPFederationProvider with Kerberos authentication support) or lets the user update their profile and prefill data (in the case of the KerberosFederationProvider).
  • Keycloak returns back to the application. Communication between Keycloak and the application happens through OpenID Connect or SAML messages, and the fact that the user was authenticated through Kerberos is hidden from the application. So Keycloak acts as a broker for the Kerberos/SPNEGO login.
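
At the HTTP level, the Negotiate exchange in the middle of this flow looks roughly like the following. This is a simplified sketch: the realm name and host are placeholders, and the real 401 response also carries the HTML login screen:

 GET /auth/realms/demo/... HTTP/1.1
 Host: keycloak.example.com

 HTTP/1.1 401 Unauthorized
 WWW-Authenticate: Negotiate

 GET /auth/realms/demo/... HTTP/1.1
 Host: keycloak.example.com
 Authorization: Negotiate <spnego-kerberos-token>

 HTTP/1.1 302 Found
 Location: <redirect URI back to the application>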

Keycloak also supports credential delegation. In this case, the web application might be able to reuse the Kerberos ticket and forward it to another service secured by Kerberos (for example an LDAP server or an IMAP server). The tricky part is that the SPNEGO authentication happens on the Keycloak server side, but you want to use the ticket on the application side. For this scenario, we serialize the GSS credential with the underlying ticket and send it to the application in the OpenID Connect access token. There are two more steps in this scenario:

  • The application deserializes the GSS credential with the Kerberos ticket sent to it in the access token from Keycloak
  • The application uses the Kerberos ticket for sending requests to another service secured by Kerberos

The whole flow may look complicated, but from the user's perspective it's the opposite! A user with a Kerberos ticket just visits the URL of the web application and is logged in automatically, without even seeing the Keycloak login screen.

Setup

For the Kerberos setup, you need to install and configure the Kerberos server and Kerberos client, and you also need to set up your web browser. You also need to export a keytab file from your Kerberos server and make it available to Keycloak. These steps are not specific to Keycloak (you always need them for SPNEGO login support in any kind of web application) and they depend on your OS and Kerberos vendor, so they are not described in detail here.

The Keycloak-specific steps are just:

  • Set up a federation provider in the Keycloak admin console. You can set up either:
    • The Kerberos federation provider - useful if you want to authenticate with Kerberos NOT backed by an LDAP server.
    • The LDAP federation provider - useful if you want to authenticate with Kerberos backed by an LDAP server. Kerberos backed by an LDAP server is very often the case in production, for environments like FreeIPA or an MSAD Windows domain.
  • Enable the GSS credential protocol mapper for your application (this is mandatory only if you need credential delegation).

For more details, take a look at the Keycloak documentation, which describes everything in detail and also points you to the Kerberos credential delegation example and the FreeIPA Keycloak docker image.

Tuesday, 14 April 2015

Keycloak on Kubernetes with OpenShift 3

This is the second of two articles about clustered Keycloak running with Docker and Kubernetes. In the first article we manually started one PostgreSQL docker container and a cluster of two Keycloak docker containers. In this article we’ll use the same docker images from DockerHub, and configure Kubernetes pods to run them on OpenShift 3.


See the first article for more detailed instructions on how to set up Docker with VirtualBox.

Let’s get started with Kubernetes.


Installing OpenShift 3


OpenShift 3 is based on Kubernetes, so we'll use it as our Kubernetes provider. If you have a ready-made Linux system with Docker daemon running, then the easiest way to install OpenShift 3 is by installing Fabric8.


Open a native shell, and make sure your Docker client can be used without sudo - you either have to have your environment variables set up properly, or you have to be root.


To set up the shell environment, determine your Docker host’s IP and set the following:


 export DOCKER_IP=192.168.56.101
 export DOCKER_HOST=tcp://$DOCKER_IP:2375


Make sure to replace the IP with the one of your Docker host.


Now, simply run the following one-liner, which will download and execute the Fabric8 installation script. Among other things this installs OpenShift 3 as a Docker container, and properly sets up your networking using iptables / route ...


 bash <(curl -sSL https://bit.ly/get-fabric8) -f


It will take a few minutes for the various Docker images to be downloaded and started. At the end a browser window may open up if you are in a desktop environment. You can safely close it as we won’t need it.


The next thing to do is to set up an alias for executing an OpenShift client tool:


 alias osc="docker run --rm -i -e KUBERNETES_MASTER=https://$DOCKER_IP:8443 --entrypoint=osc --net=host openshift/origin:v0.3.4 --insecure-skip-tls-verify"


Note: OpenShift development moves fast. By the time you're reading this the version may not be v0.3.4 any more. You can use docker ps to identify the current version used.

Every time we execute the osc command in the shell, a new Docker container is created for one-time use from the OpenShift image, and its local copy of osc is executed.


Let’s make sure that it works:

 osc get pods


We should get back a list of several pods created by OpenShift and Fabric8.


Kubernetes basics



Kubernetes is a technology for provisioning and managing Docker containers.


While the scope of Docker is one host running one Docker daemon, Kubernetes works at the level of many hosts, each running a Docker daemon and a Kubernetes agent called the Kubelet. There is also a Kubernetes master node running a Kubernetes daemon, providing central management, monitoring, and provisioning of components.


There are three basic types of components in Kubernetes:

Pods

A pod is like a virtual server composed of one or more Docker containers, which are like processes in this virtual server. Each pod gets a newly allocated IP address, hostname, port space, and process space, which are all shared by the Docker containers of that pod (they can even communicate via SystemV IPC or POSIX message queues).


Services

A service is a front-end portal to a set of pods providing the same service. Each service gets a newly allocated IP, where it listens on a specific port and tunnels established connections to backend pods in round-robin fashion.


Replication controllers

These are agents that monitor pods. Each agent enforces that a specified number of instances of its monitored pod is available at any one time. If there are more, it will randomly delete some; if there are fewer, it will create new ones.



In Kubernetes, every requested action occurs at the level of pods. While you can still directly interact with Docker containers using the docker client tool, the idea of Kubernetes is that you shouldn’t: any operation at the Docker container level is supposed to be performed automatically by Kubernetes as necessary. If a Docker container started and monitored by Kubernetes dies, Kubernetes will create a new one.


When one pod needs to connect to another - as in our case, where Keycloak needs to connect to PostgreSQL - that should be done through a service. While individual pods come and go, constantly changing their IP addresses, the service is a more permanent component, and its IP address is thus more stable.


Armed with that knowledge we can now define and create a new Keycloak cluster that uses PostgreSQL.


Creating a cluster using Kubernetes


There is an example container definition file available on GitHub that makes use of the same Docker images we used in the previous article.


In this example configuration we define three services (a sketch of one such definition follows these lists):
  • postgres-service … listens on port 5432 and tunnels to postgres pods on port 5432
  • keycloak-http-service … listens on port 80 and tunnels to keycloak pods on port 8080
  • keycloak-https-service … listens on port 443 and also tunnels to keycloak pods, but on port 8443


We then define two replication controllers:
  • postgres-controller … monitors postgres pods, and makes sure exactly one pod is available at any one time
  • keycloak-controller … monitors keycloak pods, and makes sure exactly two pods are available at any one time


And we define two pods:
  • postgres-pod … contains one docker container based on the latest official ‘postgres’ image
  • keycloak-pod … contains one docker container based on the latest jboss/keycloak-ha-postgres image
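
For illustration, a service entry in such a file might look roughly like the following. This is only a sketch in the v1beta1-style JSON that OpenShift 3 betas used; field names changed between Kubernetes API versions, so refer to the actual file on GitHub for the exact format:

 {
   "id": "keycloak-http-service",
   "kind": "Service",
   "apiVersion": "v1beta1",
   "port": 80,
   "containerPort": 8080,
   "selector": { "name": "keycloak-pod" }
 }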


With this file we can now create and start up our whole cluster with one line:


 osc create -f - < keycloak-kube.json


We can monitor the progress of new pods coming up by first listing the pods:


$ osc get pods

POD                         IP            CONTAINER(S)                
keycloak-controller-559a8   172.17.0.12   keycloak-container
keycloak-controller-zorqg   172.17.0.13   keycloak-container
postgres-controller-exkqq   172.17.0.11   postgres-container          


(there are more columns, but I did not include them here)


What we are interested in here are the exact pod ids so we can attach to their output.


We can check how PostgreSQL is doing:


 osc log -f postgres-controller-exkqq


And then make sure each of the Keycloak containers started up properly, and established a cluster:

 osc log -f keycloak-controller-559a8


In my case the first container started up without error, and I can see the line in the log that tells us a cluster of two Keycloak instances has been established:


2015-04-02T09:26:18.683827888Z 09:26:18,678 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,shared=udp) ISPN000094: Received new cluster view: [keycloak-controller-559a8/keycloak|1] (2) [keycloak-controller-559a8/keycloak, keycloak-controller-zorqg/keycloak]


When things go wrong


Let’s also check the other instance:


 osc log -f keycloak-controller-zorqg



In my case I see a problem with the second instance - there is a nasty error:


2015-04-02T09:26:37.124344660Z 09:26:37,074 ERROR [org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider] (MSC service thread 1-1) Change Set META-INF/jpa-changelog-1.1.0.Final.xml::1.1.0.Final::sthorger@redhat.com failed.  Error: Error executing SQL ALTER TABLE public.EVENT_ENTITY RENAME COLUMN TIME TO EVENT_TIME: ERROR: column "time" does not exist: liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE public.EVENT_ENTITY RENAME COLUMN TIME TO EVENT_TIME: ERROR: column "time" does not exist

...


What’s going on?


It turns out that the Keycloak version we used for the Docker image at the time of this writing contains a bug that appears when multiple Keycloak instances connecting to the same PostgreSQL database start up at the same time. The bug can be tracked in the project’s JIRA.


In my case all I have to do is kill the problematic instance, and Kubernetes will create a new one. 

The proper handling would be for Kubernetes to detect that one pod has failed to start up properly, and kill it. But then Kubernetes would have to understand how to detect a fatal startup condition in a still-running Keycloak process. As an alternative, we could have Keycloak exit the JVM with an error code when it detects an improper start-up. In that case Kubernetes would create another pod instance automatically.

 osc delete pod keycloak-controller-zorqg


Kubernetes will immediately determine that it should create another keycloak-pod to bring the count back up to two.


$ osc get pods

POD                         IP            CONTAINER(S)                
keycloak-controller-559a8   172.17.0.12   keycloak-container
keycloak-controller-xkq43   172.17.0.14   keycloak-container
postgres-controller-exkqq   172.17.0.11   postgres-container          


We can see another pod instance: keycloak-controller-xkq43 with a new IP address.


Let’s make sure it starts up:


 osc log -f keycloak-controller-xkq43


This time the instance starts up without errors, and we can also see that a new JGroups cluster is established:


2015-04-02T10:09:32.615783260Z 10:09:32,615 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view: [keycloak-controller-559a8/keycloak|3] (2) [keycloak-controller-559a8/keycloak, keycloak-controller-xkq43/keycloak]


Making sure things work


We can now try to access Keycloak through each of the pods - just to make sure - even though the proper way to access Keycloak is through the Keycloak services.


In my case the following two pod URLs work properly (they can be accessed from the Docker host): http://172.17.0.12:8080 and http://172.17.0.14:8080


The ultimate test is to use the keycloak-http-service IP address.


Let’s list the running services:


$ osc get services


NAME                    SELECTOR           IP              PORT
keycloak-http-service   name=keycloak-pod  172.30.17.192   80
keycloak-https-service  name=keycloak-pod  172.30.17.62    443
postgres-service        name=postgres-pod  172.30.17.246   5432


(there are more columns, but I did not include them here)


We can see all our services listed, along with their IP addresses. Here we’re interested in keycloak-http-service, so let’s try to access Keycloak through it from the Docker host: http://172.30.17.192
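
A quick check from the Docker host, assuming the default /auth context path of the Keycloak server:

 curl -v http://172.30.17.192/auth/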


Note that if you want to access this IP address from another host (not the one hosting the Docker daemon) you have to set up routing or port forwarding.


For example, using boot2docker on OS X, and accessing a VirtualBox instance running the Docker daemon, I have to go to a native Terminal on OS X and type:


sudo route -n add 172.30.0.0/16 $DOCKER_IP


When the browser establishes a TCP connection to port 80 of the Keycloak service’s IP address, a tunneling proxy there creates another connection to one of the Keycloak pods (chosen in round-robin fashion) and tunnels all the traffic through to it. Each Keycloak instance will therefore see the client IP as equal to the service IP. Also, during our browser session many connections will be established - half of them will be tunneled to one pod, the other half to the other pod. Since we have set up Keycloak in clustered mode, it doesn’t matter which pod a request hits - both pods use the same distributed cache, and consequently always generate the same response - without a need for sticky sessions.


Conclusion


We used an example Kubernetes configuration to start up PostgreSQL and a cluster of two Keycloak instances on OpenShift 3 through Kubernetes, using the same Docker images available on DockerHub that we used in the previous article, where we created the same kind of cluster using Docker directly.

For a production-quality, scalable cloud we still need to provide a monitoring mechanism that detects when a Keycloak instance isn't operational.

Also worth noting is that the Keycloak clustered setup used in our image requires multicast, and will only work when the Keycloak pods are deployed on the same Docker host - the same Kubernetes worker node. Multicast is generally not available in production cloud environments; the fact that it works here is a side-effect of the current implementation of OpenShift 3, and may change in the future. For a more proper cloud setup, a Kubernetes-aware direct TCP discovery mechanism should be configured for JGroups. One candidate solution for that is the kubeping project.

Also, for real high availability we should make sure the database is highly available. In this example we used PostgreSQL, for which there are multiple ways to achieve high availability, with different tradeoffs between data consistency and performance. Maybe a topic for another post.