{"data":{"markdownRemark":{"html":"<p>Note the following example makes usage of the Azure instance of Kubernetes; however if you follow the instructions below the process is as standard as creating a Docker image <a href=\":version/reference/run_deploy/docker\">more here</a> and using that for deployment.</p>\n<h2 id=\"deploying-using-acs-different-from-acs-engine\"><a href=\"#deploying-using-acs-different-from-acs-engine\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Deploying using ACS (Different from ACS-Engine)</h2>\n<h3 id=\"before-you-begin\"><a href=\"#before-you-begin\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Before you Begin</h3>\n<p>While the methods to do this are possible via the Azure Portal - the below setup makes usage of Powershell. 
Make sure that you have the Azure CLI and the Kubernetes CLI installed.</p>\n<p>Azure CLI - <a href=\"https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest\">https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest</a></p>\n<p>Kubernetes CLI - run <code class=\"language-text\">az acs kubernetes install-cli</code></p>\n<p>From PowerShell, make sure you are logged into both your Azure subscription and Docker Hub:</p>\n<p><code class=\"language-text\">Login-AzureRmAccount</code></p>\n<p><code class=\"language-text\">docker login</code></p>\n<p>Then you will need to publish your Docker image to a repository (see here for <a href=\":version/reference/run_deploy/docker\">more info</a>). You can do this either directly through Docker Hub, or via the steps below.</p>\n<h2 id=\"working-with-docker\"><a href=\"#working-with-docker\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Working with Docker</h2>\n<p>After running <code class=\"language-text\">mvn -s settings.xml clean package docker:build</code>, the target/docker folder will contain your properties file and the computed jar.
(Update the pom.xml to include the webapp directory for the splash/demo console, or just update the generated Dockerfile directly.)</p>\n<p>The Dockerfile will look like this:</p>\n<div class=\"gatsby-highlight\" data-language=\"dockerfile\"><pre class=\"language-dockerfile\"><code class=\"language-dockerfile\">FROM simudyne/scala-sbt:2.12.12.1.0.4\nADD /simudyneSDK.properties //\nADD /simudyne-demos-1.0-SNAPSHOT-all.jar //\nENTRYPOINT [&quot;java&quot;, &quot;-jar&quot;, &quot;/simudyne-demos-1.0-SNAPSHOT-all.jar&quot;]</code></pre></div>\n<h2 id=\"optional-working-with-a-container-registry\"><a href=\"#optional-working-with-a-container-registry\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>(Optional) Working with a Container Registry</h2>\n<p>Below we'll first create the Azure Container Registry (ACR). Again, substitute the resource group name, and you'll also want to choose an ACR name.
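To make the tag-and-push step concrete, here is how the image reference is composed. Every name below is a placeholder of ours (not from the Simudyne setup); substitute your own registry, image, and repository:

```shell
# Illustration only: how the <acrLoginServer>/<repository>:<tag> reference
# used in the ACR commands is composed. All names are placeholders.
ACR_LOGIN_SERVER="myacr.azurecr.io"   # value returned by the `az acr list` query
IMAGE="simudyne-demos-2.0.0-beta.2"   # the locally built image
TARGET="$ACR_LOGIN_SERVER/simu-repo:demos"
echo "docker tag $IMAGE $TARGET"
echo "docker push $TARGET"
```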
Once logged in, you will query to find the ACR login server name, which is where you will tag and push your repo.</p>\n<p>Then, similar to pushing to Docker Hub, you can tag and then push your image.</p>\n<p><code class=\"language-text\">az acr create --resource-group &lt;myResourceGroup&gt; --name &lt;acrName&gt; --sku Basic</code>\n<code class=\"language-text\">az acr login --name &lt;acrName&gt;</code>\n<code class=\"language-text\">az acr list --resource-group &lt;myResourceGroup&gt; --query &quot;[].{acrLoginServer:loginServer}&quot; --output table</code>\n<code class=\"language-text\">docker tag simudyne-demos-2.0.0-beta.2 &lt;acrLoginServer&gt;/simu-repo:demos</code>\n<code class=\"language-text\">docker push &lt;acrLoginServer&gt;/simu-repo:demos</code></p>\n<h2 id=\"setup-resource-and-cluster\"><a href=\"#setup-resource-and-cluster\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Setup Resource and Cluster</h2>\n<ol>\n<li>\n<p>First create a resource group, changing myResourceGroup, and possibly the location, as needed. You can freely do this via the portal if you want.</p>\n</li>\n<li>\n<p>Then we actually create the cluster; this automatically creates 3 worker/agent nodes. It will take a bit of time, so go grab a coffee or something.</p>\n</li>\n<li>\n<p>We'll then merge the cluster into our context so we can run commands via kubectl. (As an FYI, you can create an alias/batch file to shorten this to 'kc'.)</p>\n</li>\n<li>\n<p>We'll first check the nodes using the --watch (-w) flag.
This will take over the screen (Ctrl-C to exit) but will automatically update with changes.</p>\n</li>\n<li>\n<p>We then create the deployment as specified in the yml.</p>\n</li>\n<li>\n<p>Check the pod to make sure the deployment is live.</p>\n</li>\n<li>\n<p>Then we'll expose the deployment as a service. (This is different from creating a service via yml.)</p>\n</li>\n</ol>\n<p><code class=\"language-text\">az group create --name simudyne-demos --location westeurope</code>\n<code class=\"language-text\">az acs create --orchestrator-type kubernetes --resource-group simudyne-demos --name myK8SCluster --generate-ssh-keys</code>\n<code class=\"language-text\">az acs kubernetes get-credentials --resource-group simudyne-demos --name myK8SCluster</code>\n<code class=\"language-text\">kc get nodes -w</code>\n<code class=\"language-text\">kc create -f .\\dockerdemo.yml</code>\n<code class=\"language-text\">kc get pods -w</code>\n<code class=\"language-text\">kc expose deployments/docker-demo --port=8080</code></p>\n<h2 id=\"working-with-ingressnginx\"><a href=\"#working-with-ingressnginx\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Working with Ingress/Nginx</h2>\n<ol>\n<li>\n<p>Once installed, you'll first want to initialize Helm.</p>\n</li>\n<li>\n<p>Then you'll install the controller.</p>\n</li>\n<li>\n<p>Check that it's installed by using the list command.</p>\n</li>\n<li>\n<p>(Optional) If you are not using a cert as provided by an App Service Certificate (exporting the key/crt from the pfx), you will want to install and work with a Let's Encrypt
cert, which will be automatically generated for you.</p>\n</li>\n<li>\n<p>For our purposes, though, we already have a wildcard certificate and just need to create the secret that will be used by the TLS section of the yml.</p>\n</li>\n<li>\n<p>Finally, we check the services and wait for the public IP to show (it will start as &lt;pending&gt;). Take note: you should ONLY see a single IP exposed, through the Nginx controller. This is because, by exposing the application as we did above rather than via a separate load balancer or server, TLS is handled by Nginx instead of by our application.</p>\n</li>\n</ol>\n<p><code class=\"language-text\">helm init --upgrade</code>\n<code class=\"language-text\">helm install stable/nginx-ingress</code>\n<code class=\"language-text\">helm list</code>\n<code class=\"language-text\">helm install --set config.LEGO_EMAIL=yourname@email.com --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory stable/kube-lego</code>\n<code class=\"language-text\">kc create secret tls custom-tls-cert --key .\\Certs\\certificate.key --cert .\\Certs\\certificate.crt</code>\n<code class=\"language-text\">kc create -f .\\demosimudyne.yml</code>\n<code class=\"language-text\">kc get svc -w</code></p>\n<p>Then update DNS to point to the IP address exposed by the Ingress Controller. Note that ONLY the Ingress Controller (as a Load Balancer) should show a public IP.
This is because, rather than deploying the application under its own Load Balancer (which would require us to serve SSL traffic from the application), we are using Ingress to route that traffic to the app on port 8080 instead.</p>\n<div class=\"gatsby-highlight\" data-language=\"yml\"><pre class=\"language-yml\"><code class=\"language-yml\">apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n    annotations:\n      kubernetes.io/ingress.class: nginx\n      kubernetes.io/tls-acme: &quot;true&quot;\n      nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot;\n      nginx.ingress.kubernetes.io/session-cookie-name: &quot;route&quot;\n      nginx.ingress.kubernetes.io/session-cookie-hash: &quot;sha1&quot;\n    name: ingress\nspec:\n    rules:\n      - host: demo.simudyne.com\n        http:\n          paths:\n            - path: /\n              backend:\n                serviceName: docker-demo\n                servicePort: 8080\n               \n    tls:\n        - hosts:\n            - demo.simudyne.com\n          secretName: custom-tls-cert</code></pre></div>\n<h2 id=\"setting-up-auto-scaling\"><a href=\"#setting-up-auto-scaling\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Setting up Auto-scaling</h2>\n<p>That will create a pod across the 3 nodes, but if you want more replicas there are two commands you can employ.</p>\n<p>The first, \"kubectl scale\", lets you choose a deployment and assign how many replicas you want.
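The same fixed count can also be set declaratively in the deployment yml itself. A minimal sketch (the full dockerdemo.yml is not shown in this guide, so every field here other than the deployment name and image is an assumption of ours):

```yml
# Sketch only: declarative equivalent of `kubectl scale --replicas=5`.
# Field values are assumed, not taken from the project's actual dockerdemo.yml.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: docker-demo
spec:
  replicas: 5        # fixed replica count
  template:
    metadata:
      labels:
        app: docker-demo
    spec:
      containers:
        - name: docker-demo
          image: simudyne/docker-demo:alpha1
          ports:
            - containerPort: 8080
```

Either way you are pinning a fixed number of replicas.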
That's not ideal, though.</p>\n<p>Instead you should use autoscale: set a parameter (in this case CPU % = 50) and a minimum and maximum number of pods. 3 pods is the common minimum, and you should generally keep it at that. Note this autoscales the pods only, NOT the actual machines. To scale the machines you'd need to go into the Container Service on the Azure Portal and increase the node count.</p>\n<p>Alternatively, you can do this via PowerShell, following the step below to specify a new agent count.</p>\n<p><code class=\"language-text\">kubectl scale --replicas=5 deployment/docker-demo</code></p>\n<p><code class=\"language-text\">kubectl autoscale deployment docker-demo --cpu-percent=50 --min=3 --max=10</code></p>\n<p><code class=\"language-text\">az acs scale --resource-group=myResourceGroup --name=myK8SCluster --new-agent-count 4</code></p>\n<p>Note: if you are autoscaling, you will need to configure the load balancer to have session persistence. This is done by going to the Load Balancer you created as part of your deployment. It also means that if you change the setup, or have to redeploy the configuration, you will need to configure the Load Balancer again.
Select \"Client IP\"</p>\n<p>Also make sure that you have the nginx-ingress affinity sections in your YAML as otherwise when working with our simulations you won't get a sticky session.</p>\n<h2 id=\"adding-operations-management-suite-oms\"><a href=\"#adding-operations-management-suite-oms\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Adding Operations Management Suite (OMS)</h2>\n<p>This is an additional tool that is mainly for monitoring you K8s cluster, and can be quite helpful to determine if you're getting a lot of errors, whether there's an undue load, etc.</p>\n<p>First you'll actually want to create a service for analytics on Azure, this will give you the workspace ID and Key that you'll need to update below.</p>\n<p>Create and give it a name, once complete you'll see a screen like the below.</p>\n<p>From there click 'Advanced Settings' and go to the Linux Server connections</p>\n<p>Use these values to update the below myWorkspaceID and myWorkspaceKey</p>\n<div class=\"gatsby-highlight\" data-language=\"yml\"><pre class=\"language-yml\"><code class=\"language-yml\">apiVersion: extensions/v1beta1\nkind: DaemonSet\nmetadata:\n name: omsagent\nspec:\n template:\n  metadata:\n   labels:\n    app: omsagent\n    agentVersion: v1.3.4-127\n    dockerProviderVersion: 10.0.0-25\n  spec:\n   containers:\n     - name: omsagent\n       image: &quot;microsoft/oms&quot;\n       imagePullPolicy: Always\n       env:\n       - name: WSID\n         value: myWorkspaceID\n       - name: KEY\n         value: 
myWorkspaceKey\n       - name: DOMAIN\n         value: opinsights.azure.com\n       securityContext:\n         privileged: true\n       ports:\n       - containerPort: 25225\n         protocol: TCP\n       - containerPort: 25224\n         protocol: UDP\n       volumeMounts:\n        - mountPath: /var/run/docker.sock\n          name: docker-sock\n        - mountPath: /var/log\n          name: host-log\n       livenessProbe:\n        exec:\n         command:\n         - /bin/bash\n         - -c\n         - ps -ef | grep omsagent | grep -v &quot;grep&quot;\n        initialDelaySeconds: 60\n        periodSeconds: 60\n   volumes:\n    - name: docker-sock\n      hostPath:\n       path: /var/run/docker.sock\n    - name: host-log\n      hostPath:\n       path: /var/log</code></pre></div>\n<p>From here, create the daemonset by running the command below, then use the get command with watch to wait until it's available. To view the portal, click \"OMS Portal\" from Log Analytics.</p>\n<p><code class=\"language-text\">kubectl create -f oms-daemonset.yaml</code>\n<code class=\"language-text\">kubectl get daemonset</code></p>\n<p>After a few minutes this portal should populate with a wealth of data.</p>\n<h2 id=\"steps-to-update-with-new-release\"><a href=\"#steps-to-update-with-new-release\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Steps to Update with New Release</h2>\n<p><code class=\"language-text\">az acs kubernetes get-credentials --resource-group simudyne-demos --name myK8SCluster</code>\n<code class=\"language-text\">kc set image deployment docker-demo docker-demo=simudyne/docker-demo:alpha1</code>\n<code class=\"language-text\">kc get pods -w</code></p>\n<h2 id=\"working-with-aks-preview\"><a href=\"#working-with-aks-preview\" aria-hidden=\"true\" class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Working with AKS (Preview)</h2>\n<p>There are a few changes you will need to make in order to work with AKS (Preview) instead.</p>\n<ol>\n<li>\n<p>First we need to register with the providers so we can use the Preview.</p>\n</li>\n<li>\n<p>We create a resource group, same as before.</p>\n</li>\n<li>\n<p>Very little changes here for creating the cluster and merging it into the context; it's simply a change from acs to aks.</p>\n</li>\n<li>\n<p>Then (especially for the custom metrics below) we'll need to upgrade Kubernetes.
To do this we first get the possible upgrades.</p>\n</li>\n<li>\n<p>Then we'll apply the upgrade (this will take a while).</p>\n</li>\n<li>\n<p>Finally we can show that the upgrade to the cluster has processed.</p>\n</li>\n</ol>\n<p><code class=\"language-text\">az provider register -n Microsoft.Network</code>\n<code class=\"language-text\">az provider register -n Microsoft.Storage</code>\n<code class=\"language-text\">az provider register -n Microsoft.Compute</code>\n<code class=\"language-text\">az provider register -n Microsoft.ContainerService</code></p>\n<p><code class=\"language-text\">az group create --name myResourceGroup --location westeurope</code>\n<code class=\"language-text\">az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --generate-ssh-keys</code>\n<code class=\"language-text\">az aks get-credentials --resource-group myResourceGroup --name myAKSCluster</code></p>\n<p><code class=\"language-text\">az aks get-upgrades --name myAKSCluster --resource-group myResourceGroup --output table</code>\n<code class=\"language-text\">az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.8.7</code>\n<code class=\"language-text\">az aks show --name myAKSCluster --resource-group myResourceGroup --output table</code></p>\n<p>The other change you'll now be able to make is custom metrics for the HPA, including built-in memory utilization.
You'll need to make changes to the resources in your application YAML, and instead of using the autoscale feature you'll deploy the HPA via a new YAML.</p>\n<div class=\"gatsby-highlight\" data-language=\"yml\"><pre class=\"language-yml\"><code class=\"language-yml\">resources:\n  requests:\n    memory: &quot;128Mi&quot;\n    cpu: 250m\n  limits:\n    memory: &quot;2Gi&quot;\n    cpu: 750m</code></pre></div>\n<p>Below is the hpa.yml:</p>\n<div class=\"gatsby-highlight\" data-language=\"yml\"><pre class=\"language-yml\"><code class=\"language-yml\">apiVersion: autoscaling/v2beta1\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: docker-demo\nspec:\n  scaleTargetRef:\n    apiVersion: extensions/v1beta1\n    kind: Deployment\n    name: docker-demo\n  minReplicas: 2\n  maxReplicas: 10\n  metrics:\n  - type: Resource\n    resource:\n      name: cpu\n      targetAverageUtilization: 80\n  - type: Resource\n    resource:\n      name: memory\n      targetAverageValue: 1Gi</code></pre></div>","headings":[{"value":"Deploying using ACS (Different from ACS-Engine)","depth":2},{"value":"Before you Begin","depth":3},{"value":"Working with Docker","depth":2},{"value":"(Optional) Working with a Container Registry","depth":2},{"value":"Setup Resource and Cluster","depth":2},{"value":"Working with Ingress/Nginx","depth":2},{"value":"Setting up Auto-scaling","depth":2},{"value":"Adding Operations Management Suite (OMS)","depth":2},{"value":"Steps to Update with New Release","depth":2},{"value":"Working with AKS (Preview)","depth":2}],"frontmatter":{"title":"Deploying via Kubernetes","toc":null,"experimental":null}},"site":{"siteMetadata":{"title":"Simudyne Docs","latestVersion":"2.6"}}},"pageContext":{"absolutePath":"/home/vsts/work/1/s/content/2.5/reference/run_deploy/kubernetes.md","versioned":true,"version":"2.5","kind":"reference","pagePath":"/reference/run_deploy/kubernetes","chronology":{"prev":{"name":"Deploying via 
Docker","path":"/reference/run_deploy/docker"},"next":{"name":"Deploying on Azure Web Services","path":"/reference/run_deploy/azure"}},"lastUpdated":"2026-04-21T13:56:54.862Z"}}