Deploying via Kubernetes

Last updated on 19th April 2024

Note: the following example uses Kubernetes on Azure; however, if you follow the instructions below, the process is as standard as creating a Docker image and using that image for deployment.

Deploying using ACS (Different from ACS-Engine)

Before you Begin

While all of this can be done via the Azure Portal, the setup below uses PowerShell. Make sure that you have the Azure CLI and the Kubernetes CLI installed.

Azure CLI - https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest

Kubernetes CLI - Run "az acs kubernetes install-cli"

From PowerShell, make sure you are logged into both your Azure subscription and Docker Hub:

Login-AzureRmAccount

docker login

Then you will need to publish your Docker image to a repository. You can do this either directly through Docker Hub, or by following the steps below.
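
If you go the Docker Hub route, the usual tag-and-push flow looks roughly like the following; the local image name, Docker Hub user and tag here are placeholders rather than values from this guide.

docker tag <localImageName> <yourDockerHubUser>/simudyne-demos:latest
docker push <yourDockerHubUser>/simudyne-demos:latest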

Working with Docker

After running mvn -s settings.xml clean package docker:build, the target/docker folder will contain your properties file and the built jar. (Update the pom.xml to include the webapp directory for the splash/demo console, or just update the generated Dockerfile directly.)

The Dockerfile will look like this.

FROM simudyne/scala-sbt:2.12.12.1.0.4
ADD /simudyneSDK.properties /
ADD /simudyne-demos-1.0-SNAPSHOT-all.jar /
ENTRYPOINT ["java", "-jar", "/simudyne-demos-1.0-SNAPSHOT-all.jar"]
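
If you choose to update the generated Dockerfile directly rather than the pom.xml, the webapp directory for the splash/demo console can be copied in with one extra ADD line; the source path below is an assumption about where your webapp folder sits relative to the Dockerfile, so adjust it to your build layout.

ADD /webapp /webapp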

(Optional) Working with a Container Registry

Below we'll first create an Azure Container Registry (ACR). Again, substitute your resource group name, and choose an ACR name. Once logged in, query for the ACR login server name, which is where you will tag and push your repository.

Then, similar to pushing to Docker Hub, you can tag and push your image.

az acr create --resource-group <myResourceGroup> --name <acrName> --sku Basic
az acr login --name <acrName>
az acr list --resource-group <myResourceGroup> --query "[].{acrLoginServer:loginServer}" --output table
docker tag simudyne-demos-2.0.0-beta.2 <acrLoginServer>/simu-repo:demos
docker push <acrLoginServer>/simu-repo:demos

Setup Resource and Cluster

  1. First create a resource group, changing myResourceGroup and possibly the location as needed. You can also do this via the portal if you prefer.

  2. Then we create the cluster, which automatically creates 3 worker/agent nodes. It will take a bit of time, so go grab a coffee.

  3. We'll then merge the cluster credentials into our context so we can run commands via kubectl. (As an FYI, you can create an alias or batch file to shorten this to 'kc'.)

  4. We'll first check the nodes using the --watch or -w flag. This takes over the screen (Ctrl-C to exit) but will automatically update with changes.

  5. We then create the deployment as specified in the yml (a sketch of what dockerdemo.yml might look like follows the commands below).

  6. Check the pod to make sure the deployment is live.

  7. Then we'll expose the deployment as a service. (This is different from creating a service via a yml.)

az group create --name simudyne-demos --location westeurope
az acs create --orchestrator-type kubernetes --resource-group simudyne-demos --name myK8SCluster --generate-ssh-keys
az acs kubernetes get-credentials --resource-group simudyne-demos --name myK8SCluster
kc get nodes -w
kc create -f .\dockerdemo.yml
kc get pods -w
kc expose deployments/docker-demo --port=8080
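
The dockerdemo.yml itself isn't reproduced in this guide. As a rough sketch only, a minimal deployment consistent with the commands above might look like the following; the labels and image tag are assumptions, not the actual file.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: docker-demo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: docker-demo
    spec:
      containers:
        - name: docker-demo
          image: simudyne/docker-demo:latest
          ports:
            - containerPort: 8080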

Working with Ingress/Nginx

  1. Once Helm is installed, you'll first want to initialize it.

  2. Then you'll install the controller.

  3. Check that it's installed by using the list command.

  4. (Optional) If you are not using a cert provided by an App Service Certificate (exporting the key/crt from the pfx), you will want to install and work with a Let's Encrypt cert, which will be generated for you automatically.

  5. For our purposes, though, we already have a wildcard certificate and just need to create the secret that will be used by the TLS section of the yml.

  6. Finally we check the services and wait for the public IP to show (it will start as <pending>). Note that you should ONLY see a single IP exposed, through the Nginx controller. Because we exposed the application as we did above, rather than via its own load balancer, TLS is handled by Nginx instead of by our application.

helm init --upgrade
helm install stable/nginx-ingress
helm list
helm install --set config.LEGO_EMAIL=yourname@email.com --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory stable/kube-lego
kc create secret tls custom-tls-cert --key .\Certs\certificate.key --cert .\Certs\certificate.crt
kc create -f .\demosimudyne.yml
kc get svc -w

Then update DNS to point to the IP address exposed by the Ingress Controller. Note that ONLY the Ingress Controller (as a Load Balancer) should show a public IP. This is because, rather than deploying the application under its own Load Balancer (which would require our application to serve SSL traffic), we are using Ingress to terminate TLS and route traffic to the app at port 8080.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "route"
      nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    name: ingress
spec:
    rules:
      - host: demo.simudyne.com
        http:
          paths:
            - path: /
              backend:
                serviceName: docker-demo
                servicePort: 8080
               
    tls:
        - hosts:
            - demo.simudyne.com
          secretName: custom-tls-cert

Setting up Auto-scaling

The deployment above creates a single pod across the 3 nodes, but if you want more replicas there are 2 commands you can employ.

The first, "kubectl scale", lets you choose a deployment and assign how many replicas you want. That's not ideal, though, because the replica count stays fixed.

Instead you should use autoscale: set a target (in this case CPU % = 50) and a minimum and maximum number of pods. 3 pods is the common minimum, and really you should keep it to that. Note this autoscales the pods only, NOT THE ACTUAL MACHINES. To scale the machines you'd need to go into the Container Service on the Azure Portal and increase the node count.

Alternatively you can do this from PowerShell, following the step below to specify a new agent count.

kubectl scale --replicas=5 deployment/docker-demo

kubectl autoscale deployment docker-demo --cpu-percent=50 --min=3 --max=10

az acs scale --resource-group=myResourceGroup --name=myK8SCluster --new-agent-count 4

Note: If you are autoscaling, you will need to configure the load balancer to use Session Persistence. Do this by going to the Load Balancer created as part of your deployment and selecting "Client IP". This also means that if you change the setup, or have to redeploy the configuration, you will need to configure the Load Balancer again.
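
If you would rather script this than use the portal, something along the lines of the following may work; the resource group, load balancer and rule names are placeholders you would need to look up for your own deployment, and you should check az network lb rule update --help for the exact parameters in your CLI version.

az network lb rule update --resource-group <myResourceGroup> --lb-name <myLoadBalancer> --name <myLbRule> --load-distribution SourceIP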

Also make sure that you have the nginx-ingress affinity annotations in your YAML, as otherwise you won't get a sticky session when working with our simulations.

Adding Operations Management Suite (OMS)

This is an additional tool mainly for monitoring your K8s cluster, and can be quite helpful for determining whether you're getting a lot of errors, whether there's undue load, etc.

First you'll want to create a Log Analytics workspace on Azure; this will give you the workspace ID and key that you'll need below.

Create the workspace and give it a name; once deployment is complete, open it in the portal.

From there click 'Advanced Settings' and go to the Linux Servers connection settings.

Use these values to replace myWorkspaceID and myWorkspaceKey in the YAML below.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
 name: omsagent
spec:
 template:
  metadata:
   labels:
    app: omsagent
    agentVersion: v1.3.4-127
    dockerProviderVersion: 10.0.0-25
  spec:
   containers:
     - name: omsagent
       image: "microsoft/oms"
       imagePullPolicy: Always
       env:
       - name: WSID
         value: myWorkspaceID
       - name: KEY
         value: myWorkspaceKey
       - name: DOMAIN
         value: opinsights.azure.com
       securityContext:
         privileged: true
       ports:
       - containerPort: 25225
         protocol: TCP
       - containerPort: 25224
         protocol: UDP
       volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
        - mountPath: /var/log
          name: host-log
       livenessProbe:
        exec:
         command:
         - /bin/bash
         - -c
         - ps -ef | grep omsagent | grep -v "grep"
        initialDelaySeconds: 60
        periodSeconds: 60
   volumes:
    - name: docker-sock
      hostPath:
       path: /var/run/docker.sock
    - name: host-log
      hostPath:
       path: /var/log

From here, create the DaemonSet by running the commands below, watching until it's available. To view the portal, click "OMS Portal" from the Log Analytics workspace.

kubectl create -f oms-daemonset.yaml
kubectl get daemonset

After a few minutes this portal should populate with a wealth of data.

Steps to Update with New Release

az acs kubernetes get-credentials --resource-group simudyne-demos --name myK8SCluster
kc set image deployment docker-demo docker-demo=simudyne/docker-demo:alpha1
kc get pods -w

Working with AKS (Preview)

There are a few changes you will need to make in order to work with AKS (Preview) instead.

  1. First we need to register the resource providers so we can use the Preview.

  2. We create a resource group same as before.

  3. Very little changes here for creating the cluster and merging it into the context; the commands simply switch from acs to aks.

  4. Then (especially for the custom metrics below) we'll need to upgrade Kubernetes. To do this we first get the possible upgrades.

  5. Then we'll apply the upgrade (this will take a while)

  6. Finally we can confirm that the cluster upgrade has completed.

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService

az group create --name myResourceGroup --location westeurope
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

az aks get-upgrades --name myAKSCluster --resource-group myResourceGroup --output table
az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.8.7
az aks show --name myAKSCluster --resource-group myResourceGroup --output table

The other change you'll now be able to make is custom metrics for HPA, including built-in memory utilization. You'll need to add resource requests and limits to your application YAML, and instead of using the autoscale feature you'll deploy the HPA via a new YAML.

resources:
  requests:
    memory: "128Mi"
    cpu: 250m
  limits:
    memory: "2Gi"
    cpu: 750m
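
For context, here is a sketch of how that resources block might sit inside the container spec of your deployment; the container name and image are assumptions carried over from the earlier deployment sketch, not values from this guide.

spec:
  containers:
    - name: docker-demo
      image: simudyne/docker-demo:latest
      ports:
        - containerPort: 8080
      resources:
        requests:
          memory: "128Mi"
          cpu: 250m
        limits:
          memory: "2Gi"
          cpu: 750m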

Below is the hpa.yml

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: docker-demo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: docker-demo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 1Gi
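
Once the hpa.yml is in place, creating and checking the HPA follows the same pattern as the other resources; the filename is simply the one used above.

kc create -f .\hpa.yml
kc get hpa -w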