
Lagom: From DEV Mode to Kubernetes (Azure Edition)

Overview

In a previous post, we talked about using Lagom for Event Sourcing and CQRS. That's exactly what we did recently for one of our projects with Reaktika.

But sooner or later, it’s time to bring this baby to production.

It’s easy to get started with Lagom based on the examples provided by Lightbend. A lot of effort has also been put into making it easy to get up and running and to try out Lagom… in development mode.

As soon as we want to bring a set of applications (because Lagom is a microservices framework) to production, there are some additional steps to take care of. And that is exactly what we will discuss in this post.

Let’s jump in.


The Cookie Monster Microservices Application

We won’t go into too much detail about the application internals since we covered those in our previous blog. However, it’s important to show how the external components integrate with the Lagom application, especially when deploying to Kubernetes. That is why we will demonstrate a set of three Lagom applications that together illustrate an actual use case (ahum: we‘re talking about actual integrations, we could have a discussion about the business value of the content itself).

The example application is the cookie monster™ application*:

[Diagram: the cookie monster microservices application]

The Cookie application is a clustered Lagom application that runs in multiple Kubernetes pods. For simplicity, we will use a managed Postgres database for both the read side and the write side. We will also publish all journal events to a Kafka topic (indicated by the dotted line).

This Kafka topic is read by another Lagom application, Analytics, which reports on and analyzes the events from the Cookie app, for example when new cookies are added.
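
For context, subscribing to such a topic in Lagom looks roughly like this (a minimal sketch; the topic and event names are assumptions, not the actual code from the repo):

import akka.Done
import akka.stream.scaladsl.Flow

// Hypothetical sketch: the Analytics service subscribes to the Cookie topic.
// `cookieService` is the injected Lagom client for the Cookie service;
// `cookieEventsTopic` and `CookieEvent` are assumed names.
cookieService.cookieEventsTopic.subscribe.atLeastOnce(
  Flow[CookieEvent].map { event =>
    log.info(s"Received cookie event: $event")
    Done
  }
)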

The Monster service will consume the Cookie app (pun intended) with a synchronous call, adapting the information and passing it along. This is an important use case because it illustrates the automatic service discovery provided by Lagom, which makes integrating different microservices easy, also on Kubernetes. A minimal sketch of such a call follows below.
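
In Lagom, such a synchronous call is simply an invocation on the injected service client; a sketch with hypothetical names:

import scala.concurrent.ExecutionContext
import com.lightbend.lagom.scaladsl.api.ServiceCall

// Hypothetical sketch: the Monster service wraps the Cookie service.
class MonsterServiceImpl(cookieService: CookieService)
                        (implicit ec: ExecutionContext) extends MonsterService {

  // Fetch the cookies from the Cookie service and adapt the result;
  // eatSome is a hypothetical helper that nibbles a few cookies along the way.
  override def monsterCookies = ServiceCall { _ =>
    cookieService.cookies.invoke().map(cookies => eatSome(cookies))
  }
}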

Quite a few things to set up, so let’s get to work!

The Azure Flavors of Postgres and Kafka

We will demonstrate deploying this application on Azure, so we will use some managed services.

Azure has a managed Postgres service, which we will use for both the read and the write side of the Cookie application. As an aside, Cosmos DB is a good Azure alternative for a write-side Cassandra database, since it exposes a Cassandra API that Lagom supports out of the box.

We will also use Event Hub as a managed Kafka-compatible service. As we will illustrate, Event Hub makes it possible to connect with standard Kafka clients, cool stuff!

And most importantly, for the Kubernetes cluster we use the Azure Kubernetes Service (AKS) to get a fully managed cluster.

The Goal

Before we dive in, let’s start with an overview of the solution we want to setup. This will give you some perspective on the parts we are working on and how all pieces fit together.

[Diagram: overview of the target setup on AKS]

We’ll go over the steps one by one: setting up the Azure infrastructure, defining all application credentials, defining the external services that expose all components, and setting up the Ingress that exposes the applications to the outside world. Most of the focus will go to configuring the Lagom applications to integrate with all the components shown above, resulting in a fully functional distributed application.

Infrastructure Setup

First things first, we need to set up some infrastructure in our Azure account.
We won’t spend too much time going over these steps, but the code repository includes a list of all Azure CLI commands to set up the full infrastructure. Be sure to check that out when following along.

Set up your Azure account if you don’t have one yet and install the Azure CLI tool. Next up (a sketch of the main commands follows the list):

• Create a resource group
• Create a container registry (ACR) for our Docker containers (cookiemonsterrepository) [2]
• Create a Kubernetes cluster that is linked to the ACR (CookieCluster) [3]
• Link the cluster to our local kubectl configuration to interact with the cluster
• Set up a managed Postgres cluster [4]
• Set up an Event Hub namespace and one Event Hub (which is similar to a Kafka topic) [1]
• Create an ingress controller with a public IP and DNS name for the cluster
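
As an indication, the main commands look roughly like this (a hedged sketch: the resource group name and the exact flags are assumptions, the repo contains the exact commands):

## RESOURCE GROUP AND CONTAINER REGISTRY
az group create --name cookie-rg --location westeurope
az acr create --resource-group cookie-rg --name cookiemonsterrepository --sku Basic

## KUBERNETES CLUSTER LINKED TO THE ACR, PLUS LOCAL KUBECTL ACCESS
az aks create --resource-group cookie-rg --name CookieCluster --attach-acr cookiemonsterrepository
az aks get-credentials --resource-group cookie-rg --name CookieCluster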

Poof. Magic**.
Now we’re ready to start integrating Lagom with this infrastructure!

In the end, we should have a running Kubernetes cluster, a container registry for our Docker images, an Event Hub and a Postgres database.

Credentials Setup

Before we start the Lagom setup, we need to get some credentials for our external services into the cluster.

Postgres uses a username and password to connect to the database, which we created when setting up the database in the previous section.
We will configure these credentials as Kubernetes secrets and inject them into the Lagom applications later on:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials 
type: Opaque
data:
  username: Y29va2llYWRtaW5AY29va2llcG9zdGdyZXM=
  password: Q29va2llMTIz

Note that the data in the secret needs to be Base64 encoded.
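
The values above can be generated on the command line (-n makes sure no trailing newline ends up in the secret):

> echo -n 'cookieadmin@cookiepostgres' | base64
Y29va2llYWRtaW5AY29va2llcG9zdGdyZXM=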

The endpoint for the database can be configured using an ExternalName service:

kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: cookiepostgres.postgres.database.azure.com

Event Hub uses SASL as the security protocol.
A Java Authentication and Authorization Service (JAAS) login configuration file contains all the details our Kafka client needs to connect to our Event Hub, i.e. our Kafka topic.
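
For reference, such a jaas.conf for Event Hub typically looks as follows (the namespace and keys are placeholders to replace with your own values from the Azure portal):

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="$ConnectionString"
  password="Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>";
};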

# Create Kafka security details
> kubectl create secret generic kafka-credentials --from-file=jaas.conf

For Event Hub, we also need to configure the endpoint to connect to. In this case, we will configure the IP address of the Azure service. Later on, we will reference this service by the name and port defined below.

apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  ports:
    - protocol: TCP
      name: "broker"
      port: 9093
      targetPort: 9093
      nodePort: 0
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kafka
subsets:
  - addresses:
      - ip: 52.236.186.6
    ports:
      - port: 9093
        name: "broker"

Ingress Setup

A last step before we can deploy our Lagom applications is to deploy our Ingress, which manages external access to our cluster and decides which traffic is routed to which application.

We will configure both the Cookie service and the Monster service to be exposed to the outside world:

apiVersion: "extensions/v1beta1"
kind: Ingress
metadata:
  name: "cookie-ingress"
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
  namespace: "default"
spec:
  rules:
    - host: cookiemonster.westeurope.cloudapp.azure.com
      http:
        paths:
          - path: "/api/cookie-service"
            backend:
              serviceName: "cookie-service"
              servicePort: 9000
          - path: "/api/monster-service"
            backend:
              serviceName: "monster-service"
              servicePort: 9000

Configuring SSL is out of scope for this blog, but it can also be set up in this Ingress in combination with a certificate manager like cert-manager.

If we test our API at this point, we notice that our services are not reachable yet.

> curl -X GET cookiemonster.westeurope.cloudapp.azure.com/api/cookie-service/cookies

<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>

Finally, time to change that by deploying our Lagom services!

Lagom Setup

In DEV mode, there is not much to configure, since by default Lagom starts a local Kafka cluster and a Cassandra database. A mechanism is also provided to resolve the location of your Lagom services locally. This is achieved by mixing LagomDevModeComponents into the application.
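
For reference, the application loader then looks roughly like this (a sketch based on the Cookie example; class names are assumed):

import com.lightbend.lagom.scaladsl.akka.discovery.AkkaDiscoveryComponents
import com.lightbend.lagom.scaladsl.devmode.LagomDevModeComponents
import com.lightbend.lagom.scaladsl.server._

class CookieServiceLoader extends LagomApplicationLoader {
  // DEV mode: use the local service locator provided by Lagom's dev environment
  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new CookieServiceApplication(context) with LagomDevModeComponents

  // Production: use Akka Discovery for service location (discussed next)
  override def load(context: LagomApplicationContext): LagomApplication =
    new CookieServiceApplication(context) with AkkaDiscoveryComponents
}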

In production, we’ll need to shift gears and make sure a service locator is used that can handle service discovery in Kubernetes.
For this, we mix in the AkkaDiscoveryComponents:

override def load(context: LagomApplicationContext): LagomApplication =
  new CookieServiceApplication(context) with AkkaDiscoveryComponents

For the production image, we will create a production.conf configuration file where we can override all necessary configurations for production.

We’ll configure the discovery method to use the kubernetes-api:

discovery-method = kubernetes-api

We will use the name postgres in the connection string, which will resolve to the external service we defined before. We will also reference environment variables for the username and password.

db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://postgres/cookie_db?sslmode=require"
  username = ${?POSTGRES_USERNAME}
  password = ${?POSTGRES_PASSWORD}
  hikaricp {
    maximumPoolSize = 5
  }
}

For Kafka, we need to configure the security protocol to be SASL_SSL:

akka.kafka.producer {
  kafka-clients {
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    ssl.endpoint.identification.algorithm=""
  }
}
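
Services that consume from the topic (the Analytics app in our case) need the equivalent settings on the consumer side; a minimal sketch, assuming the same JAAS setup:

akka.kafka.consumer {
  kafka-clients {
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    ssl.endpoint.identification.algorithm=""
  }
}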

And as discussed, we’ll configure the discovery mechanism in Kubernetes.

akka.management {
  cluster.bootstrap {
    contact-point-discovery {
      discovery-method = kubernetes-api
      required-contact-point-nr = ${REQUIRED_CONTACT_POINT_NR}
    }
  }
}

akka.discovery {
  kubernetes-api {
    pod-label-selector = "app=%s"
  }
}

Note that we can also configure the minimum number of contact points that need to be discovered before the application will form a cluster.

If we want to deploy our application to Kubernetes, we first need to containerize it. We use Docker to build the Lagom application image and then push it to our container registry on Azure.

## BUILD
docker build -t cookie-app --build-arg APP_NAME=cookie-impl .

## TAG
docker tag cookie-app cookieregistry.azurecr.io/cookie-app:1.0.0

## PUSH TO REGISTRY
docker push cookieregistry.azurecr.io/cookie-app:1.0.0

We can build all three services by changing the APP_NAME build argument and the image name. An alternative is to build the Docker image using the sbt-native-packager plugin.
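
With sbt-native-packager, building and pushing can then be as simple as this (assuming the Docker plugin is enabled for the project and dockerRepository points to the ACR):

> sbt "cookie-impl/docker:publish"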

Lagom Deployment

Money time: we can finally deploy our application to the cluster.

Let’s define a deployment for our service where we link everything together:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cookie-app
  name: cookie-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cookie-app
  template:
    metadata:
      labels:
        app: cookie-app
        actorSystemName: cookie-app
    spec:
      volumes:
        - name: kafka-secret
          secret:
            secretName: kafka-credentials
      containers:
        - name: cookie-app
          image: cookieregistry.azurecr.io/cookie-app:1.0.0
          ports:
            - name: remoting
              containerPort: 2552
              protocol: TCP
            - name: management
              containerPort: 8558
              protocol: TCP
            - name: http
              containerPort: 9000
              protocol: TCP
          volumeMounts:
            - name: kafka-secret
              mountPath: /etc/kafka/secrets
              readOnly: true
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: AKKA_CLUSTER_BOOTSTRAP_SERVICE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: "metadata.labels['app']"
            - name: POSTGRES_USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: REQUIRED_CONTACT_POINT_NR
              value: "2"
            - name: "JAVA_OPTS"
              value: "-Dlagom.akka.discovery.service-name-mappings.kafka_native.lookup=_broker._tcp.kafka.default.svc.cluster.local -Dplay.http.secret.key='mySuperSecretApplicationSessionToken' -Djava.security.auth.login.config=/etc/kafka/secrets/jaas.conf -Dconfig.resource=production.conf"

In this deployment file, we set up the environment variables for the Postgres connection, link the application to the production.conf we made earlier, configure the Kafka credentials and service mapping, and configure the minimum number of required contact points in the cluster.

To expose a stable interface to the different application pods, we also define a service for each application:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: "cookie-service"
  name: "cookie-service"
  namespace: "default"
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9000
      protocol: TCP
      targetPort: 9000
    - name: remoting
      port: 2552
      protocol: TCP
      targetPort: 2552
    - name: management
      port: 8558
      protocol: TCP
      targetPort: 8558
  selector:
    app: "cookie-app"

An important thing to notice is that the name of the Kubernetes service should match the service name in the Lagom service descriptor. This is how the services locate each other; in our case, the Monster service needs to contact the Cookie app.
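
As an illustration, a descriptor along these lines would match the cookie-service defined above (a sketch; the actual calls live in the repo):

import akka.NotUsed
import com.lightbend.lagom.scaladsl.api.{Descriptor, Service, ServiceCall}

trait CookieService extends Service {
  def cookies: ServiceCall[NotUsed, String]

  override def descriptor: Descriptor = {
    import Service._
    // "cookie-service" must match the name of the Kubernetes Service
    named("cookie-service")
      .withCalls(pathCall("/api/cookie-service/cookies", cookies))
      .withAutoAcl(true)
  }
}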

🎊 TADAAA 🎉, we have configured the full application stack! Congratulations.


The full code project can be found on GitHub.

Let’s Eat Them Cookies

To test the cookie monster application, we’ll list the cookies on both the Cookie service and the Monster service, and we’ll also open the logs of the Analytics service.

As soon as we start adding cookies to the Cookie service, we can see the cookies being added, as well as being logged by the Analytics service.

The Monster service integrates with the Cookie service and shows the adapted results (i.e. eating a couple of cookies along the way).
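
Hypothetically, a test run looks like this (the POST payload and the exact endpoints are assumptions, check the service descriptors in the repo for the real API):

# Add a cookie, then list it via both services
> curl -X POST cookiemonster.westeurope.cloudapp.azure.com/api/cookie-service/cookies -d '{"name": "chocolate chip"}'
> curl cookiemonster.westeurope.cloudapp.azure.com/api/cookie-service/cookies
> curl cookiemonster.westeurope.cloudapp.azure.com/api/monster-service/cookies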

Conclusion

In this blog post, we have built on top of our previous post, where we showed how the Lagom framework makes it easy to get started in DEV mode by configuring a lot of the integrating components out of the box.

Here, we went over all the steps necessary to get our set of Lagom applications running on Kubernetes in production mode. Once deployed to the cluster, the true power of Lagom can be observed: multiple instances of an application dynamically form a cluster, and other services are automatically discovered when they come up and removed from the cluster when they go down.

Deploying a Lagom application in this way enables you to build much more robust applications that can scale with demand and withstand application failures.

Stay tuned for more reactive architecture blogs.
If you have a question, great ideas and/or opportunities, feel free to get in touch and we’ll be happy to discuss any of them.

Notes:
(*) It’s called ‘the’ cookie monster™ application since the DNS name for our Kubernetes cookie cluster was still available, patent pending.
(**) In the project repository, the full code example includes a list of Azure CLI commands to set up all these infrastructure components. Check out the azure_setup.md file.

Links:
[1] https://docs.microsoft.com/en-us/cli/azure/eventhubs/eventhub?view=azure-cli-latest
[2] https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-acr
[3] https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster
[4] https://docs.microsoft.com/en-us/cli/azure/postgres?view=azure-cli-latest