In April 2018, I played with the Bot Framework for the first time, and I got the idea of leveraging these technologies and features for my monthly “Azure News & Updates” blog article. It was a good opportunity for me to make my app a great conversationalist. At that time, I built it with Azure Functions v1, Bot Framework v3, and static, non-compiled code on the .NET Framework; I ran into cold start issues, and none of it was cross-platform… Since then I have learned a lot about OSS, Docker, and Kubernetes. So, why not modernize my Bot with more Cloud Native Computing practices?!

Before trying to reinvent the wheel, I found the following insightful and inspirational resources:

  • In 2017-01, Sertaç Özercan used Docker, ACR and ACS to deploy his Bot in NodeJS
  • In 2018-02, Jonathan Harrison used Docker, ARM Templates, AKS, Helm 2, Nginx, TravisCI to deploy his Bot in NodeJS
  • In 2018-09, Ali Mazaheri used Docker, Draft, ACR and AKS to deploy his Bot in .NET Core 2
  • In 2019-03, Roberto Cervantes used Docker, ACR, AKS, Helm 2, Nginx, Cert-Manager, Azure DevOps to deploy his Bot in .NET Core 2
  • In 2019-05, Luis Antonio Beltran Prieto and Humberto Jaimes used Docker to deploy their Bot in .NET Core 2

Huge kudos to them for taking the time to share and document their knowledge and learnings!

Building on this, as of December 2019, I have been able to leverage the latest and greatest features and technologies to modernize and containerize my Bot. Below are a few highlights and concepts you will find in my own implementation:

Architecture diagram showing the app and its interactions with the different technologies and services such as Terraform, Application Insights, Helm, AKS and Azure Search.

.NET Core 3.1

I was able to implement my Bot with .NET Core 3.1, announced in early December 2019. It is the new LTS version: performance is greatly improved, the garbage collector uses less memory, and the runtime has been hardened for Docker.

Docker base image

I’m using the mcr.microsoft.com/dotnet/core/aspnet base image; you can find the entire list of available tags here: https://mcr.microsoft.com/v2/dotnet/core/aspnet/tags/list. In case you didn’t know, base images of Microsoft products are now published in the Microsoft Container Registry; read the story here. Furthermore, I’m using the alpine flavor of this base image to reduce both the size of the image and its attack surface, thanks to the small Alpine distribution. Notice also that I’m not using latest, 3, nor 3.1, but explicitly the aspnet:3.1.0 and sdk:3.1.100 versions, so I can update them deliberately as new versions arrive.
The size of my image is now 112 MB.
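As an illustration, a multi-stage Dockerfile pinning these exact tags could look like the sketch below (the project and assembly names are placeholders, not the actual ones from my repository):

```dockerfile
# Build stage: pinned .NET Core SDK version
FROM mcr.microsoft.com/dotnet/core/sdk:3.1.100 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: pinned, Alpine-based ASP.NET Core runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1.0-alpine
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyBot.dll"]
```

Because only the published output is copied into the small Alpine-based runtime image, the SDK never ships in the final image, which is what keeps the size down.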

Helm 3 and Helm chart

Helm 3 came out in November 2019; this major version got rid of Tiller, but that’s not all! With this implementation I was able to build the associated Helm 3 chart, push it to Azure Container Registry (ACR), and deploy it to a Tiller-less Kubernetes cluster. Furthermore, this Helm chart contains all the Kubernetes objects the application needs to successfully run on any Kubernetes cluster: Deployment, Service, Ingress, Issuer, NetworkPolicies, and Secrets, as well as its dependency on the Nginx Ingress Controller chart (see inside the Chart.yaml).
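For illustration, a Helm 3 Chart.yaml declaring the Nginx Ingress Controller chart as a dependency could look like this sketch (chart name, versions, and repository URL are placeholders to adapt to your own setup):

```yaml
apiVersion: v2          # Helm 3 chart API version; dependencies now live in Chart.yaml
name: mybot
description: Helm chart for my Bot
version: 0.1.0          # chart version
appVersion: "1.0.0"     # application version
dependencies:
  - name: nginx-ingress
    version: "1.26.2"
    repository: "https://kubernetes-charts.storage.googleapis.com"
```

With Helm 2, dependencies were declared in a separate requirements.yaml; moving them into Chart.yaml (apiVersion: v2) is one of the Helm 3 changes.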

Azure Pipelines

Inspired by my blog article Tutorial: Using Azure DevOps to setup a CI/CD pipeline and deploy to Kubernetes, I was able to implement both CI/Build and CD/Release in YAML, to build the Docker image and the Helm chart, push them to ACR, and then trigger the deployment to AKS via Helm. In addition to that, I was able to leverage my blog article A recipe to deploy your Azure resources with Terraform via Azure DevOps to combine my CI/CD with Terraform within the same application pipeline. I also added an Approval/Check point between the Build/CI and Release/CD Stages (note: it’s a manual process for now).
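As a rough sketch (stage, variable, and task names below are illustrative, not my actual pipeline), a multi-stage YAML pipeline combining Build/CI and Release/CD could be shaped like this:

```yaml
trigger:
  - master

stages:
  - stage: Build
    jobs:
      - job: BuildAndPush
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: docker build -t $(acrName).azurecr.io/mybot:$(Build.BuildId) .
            displayName: Build Docker image
          - script: docker push $(acrName).azurecr.io/mybot:$(Build.BuildId)
            displayName: Push image to ACR

  - stage: Release
    dependsOn: Build
    jobs:
      - deployment: DeployToAks
        pool:
          vmImage: ubuntu-latest
        environment: production   # approvals/checks are configured on this environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: helm upgrade --install mybot ./chart --set image.tag=$(Build.BuildId)
                  displayName: Deploy to AKS via Helm
```

A manual Approval/Check between the stages maps naturally to an approval configured on the target environment in Azure DevOps.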

Screenshot of the summary of a successful run of the Build and Release phases in Azure DevOps.

Kubernetes Ingress Controller

To register the backend of your Azure Bot Service, even though it could (literally) be hosted anywhere, it must expose an HTTPS endpoint. In my case, I’m exposing my Bot via Nginx as an Ingress Controller, by deploying the ingress-nginx Helm chart. I’m also using an Azure-specific annotation to add a DNS label to my public Azure IP address.
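For reference, the annotation in question is, to the best of my knowledge, azure-dns-label-name, set on the ingress controller’s LoadBalancer Service; expressed as a values override for the Nginx Ingress Controller chart it would look like this (the DNS label itself is a placeholder):

```yaml
controller:
  service:
    annotations:
      # Produces a FQDN like <label>.<azure-region>.cloudapp.azure.com
      service.beta.kubernetes.io/azure-dns-label-name: mybot
```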
Furthermore, I’m leveraging the jetstack/cert-manager Helm chart to generate my certificate and configure my TLS termination.
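As a sketch, with cert-manager 0.11+ an ACME Issuer using Let’s Encrypt and an HTTP-01 solver through Nginx could look like this (the name and email are placeholders):

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: me@example.com               # placeholder, used for expiry notices
    privateKeySecretRef:
      name: letsencrypt-account-key     # Secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx
```

The Ingress then references this Issuer (for example via the cert-manager.io/issuer annotation) and points its tls section at the Secret where the generated certificate is stored.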

You can find my GitHub Pull Request showing in detail the implementation of the 5 topics mentioned above: https://github.com/mathieu-benoit/MyMonthlyBlogArticle.Bot/pull/10. Furthermore, in this other GitHub PR my Helm chart was enriched with more Kubernetes objects and Helm dependencies: https://github.com/mathieu-benoit/MyMonthlyBlogArticle.Bot/pull/18.

Application Insights

To get telemetry from this Bot, such as requests, exceptions, response times, and the search terms used, I’m leveraging Application Insights embedded in my Bot.
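With the ASP.NET Core Application Insights SDK, wiring this up is mostly configuration; a minimal appsettings.json fragment could look like this (the key is of course a placeholder):

```json
{
  "ApplicationInsights": {
    "InstrumentationKey": "<your-instrumentation-key>"
  }
}
```

In a Kubernetes deployment, the instrumentation key is a good candidate for a Secret injected as an environment variable, rather than a value baked into the image.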

Unit Tests

Unit tests run within the Dockerfile, and the results are published as part of the CI/Build in Azure Pipelines. This allows running unit tests consistently whenever and wherever a docker build command runs. You can have a look at the associated GitHub PR I did for this.
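As an illustrative sketch (the project path and stage name are placeholders), a dedicated test stage in the Dockerfile makes docker build fail whenever a test fails, and writes TRX results for the pipeline to publish:

```dockerfile
# Test stage: any failing test breaks the docker build
FROM mcr.microsoft.com/dotnet/core/sdk:3.1.100 AS test
WORKDIR /src
COPY . .
RUN dotnet test MyBot.Tests/MyBot.Tests.csproj \
      --logger "trx;LogFileName=testresults.trx" \
      --results-directory /testresults
```

The CI can then extract /testresults from this stage (for example by building with --target test and copying the files out of a container) and publish them in Azure Pipelines.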


Terraform

At first, when I implemented this Bot, I leveraged ARM Templates, but my preference now is Terraform to accomplish Infrastructure-as-Code (IaC); here is my GitHub PR for this implementation. In my Azure Pipeline, I run terraform plan in the Build/CI Stage and terraform apply in the Release/CD Stage.
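For illustration only, here is what a Terraform sketch of some of the Azure resources involved could look like; the resource names, location, SKUs, VM size, and service principal variables are all placeholders, not my actual configuration:

```hcl
resource "azurerm_resource_group" "rg" {
  name     = "mybot-rg"
  location = "canadacentral"
}

resource "azurerm_container_registry" "acr" {
  name                = "mybotacr"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Basic"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "mybot-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "mybot"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_DS2_v2"
  }

  service_principal {
    client_id     = var.sp_client_id
    client_secret = var.sp_client_secret
  }
}
```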

Interesting stuff, isn’t it!?

Regarding the price: before this new implementation it was almost free, with Azure Functions in the backend, because I don’t have a lot of traffic; and actually I could still leverage Azure Functions if I wanted to. But now I have made the decision to do it with Kubernetes, to learn more about it, so it will cost me 3 Kubernetes nodes (VMs). That said, I have other workloads running on that Kubernetes cluster, so this cost is shared across multiple workloads. Furthermore, I now have common practices to deploy any workload consistently via the Kubernetes APIs, so I’m saving on deployments, automation, maintenance, etc.: those are the invisible/implicit costs to take into account when comparing the real, concrete cost…

Great learnings for me! Feel free to leverage all of this for your own context and needs!

Cheers! ;)