
PART 2.1: WEB API PIPELINE – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.

image by author

This is one part of a series. So if you have not read PART 0: OVERVIEW yet, you can go there first to get an overview of what we will actually be doing here …

Introduction

In the last part, PART 2: WEB API, we created our simple web api in .net core, which provides endpoints for getting all messages and for posting single messages. The post endpoint sends the message to an event hub, from where it can be consumed by our workers (which we will build in the next parts!).

Now we will create a build and release pipeline (CI/CD) for our web api. The build pipeline will build the artifacts and push the container image to our azure container registry.

We will create a release pipeline with two stages (acc and prd). The release pipeline will use our key vaults and apply the secrets as environment variables to the app. We will write k8s deployment, service and ingress files to deploy the web api in our aks.

You can download the code from the git repository.

Deployment Files

Let’s start with writing the needed deployment files…

Docker

Start by creating a file named “Dockerfile” in the “./.deploy” folder inside the webapi repository. Then put the following code into it.
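As a minimal sketch, such a dockerfile could look like the following. The folder and assembly names (WebApi, Common), the listening port and the exact image tags are assumptions and have to be adjusted to your solution; note that the final stage is based on the full sdk image instead of the runtime image, as explained below.

# build stage – assumes the build context contains the WebApi and Common sources side by side
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY Common/ Common/
COPY WebApi/ WebApi/

# restore, build and publish the application
WORKDIR /src/WebApi
RUN dotnet restore
RUN dotnet build -c Release --no-restore
RUN dotnet publish -c Release -o /app/publish --no-restore

# final stage – the full sdk image is used instead of runtime:3.1-buster-slim (see the note below)
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "WebApi.dll"]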

This dockerfile is quite simple. It copies the required sources (WebApi and Common) and restores, builds and publishes the application. After that we define the container’s entry point. But one thing is important! I noticed that the default dockerfile generated by visual studio does not work here. That is because “runtime:3.1-buster-slim” is used as the runtime image, but when running in aks it has to be the full sdk. Otherwise you will get an error on the aks like: “It was not possible to find any installed .NET Core SDKs”.

Kubernetes

To deploy our web api to our aks we need some deployment files. First we create a folder inside the “.deploy” folder and name it “k8s”.

We start by creating a file inside the k8s folder and name it “config.yaml”. Then put the following code into it.
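A sketch of what this config.yaml could look like; the ConfigMap name and the CORS value are assumptions and have to be adjusted to your setup.

apiVersion: v1
kind: ConfigMap
metadata:
  name: webapi-config      # hypothetical name, referenced again in deployment.yaml
data:
  # "CORS" ends up as an environment variable that overrides the corresponding appsettings value.
  # The value is only a placeholder here – it could also be a "#{...}#" token replaced by the release pipeline.
  CORS: "*"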

We use this config file to push non-secret environment variables to the application. At the moment we only have one entry here, for “CORS”. Keep in mind that our web api automatically overrides values in “appsettings” with values from these environment variables.

Now we do the same with the secrets… Create a file named “secrets.yaml” next to the config.yaml. Then put the following code into it.
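A sketch of this secrets.yaml, assuming the Secret name webapi-secrets; the #{…}# tokens are replaced later by the “Replace Tokens” task in the release pipeline with the values coming from the key vault.

apiVersion: v1
kind: Secret
metadata:
  name: webapi-secrets     # hypothetical name, referenced again in deployment.yaml
type: Opaque
stringData:                # stringData avoids base64 encoding the replaced values by hand
  ApplicationInsights__InstrumentationKey: "#{ApplicationInsights__InstrumentationKey}#"
  StorageSettings__ConnectionString: "#{StorageSettings__ConnectionString}#"
  EventHubSettings__ConnectionString: "#{EventHubSettings__ConnectionString}#"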

So here it is basically the same as with the config, only that these variables will be pushed to the application as secrets. We have here the app insights key, the storage connection string and the event hub connection string. All these secrets will be set inside the release pipeline later with the values from the azure key vault.

Now we come to the deployment file for the web api. Please create a file “deployment.yaml” and put the following code into it.
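A sketch of such a deployment.yaml; the image name, the labels and the ConfigMap/Secret names are assumptions, and the #{…}# tokens are filled in by the release pipeline.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapi
  template:
    metadata:
      labels:
        app: webapi
    spec:
      containers:
        - name: webapi
          # hypothetical image name – registry and repository names depend on your infrastructure part
          image: "notifiercontainerregistry#{ENVIRONMENT}#.azurecr.io/webapi:#{RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID}#"
          ports:
            - containerPort: 8080
          env:
            - name: ENVIRONMENT
              value: "#{ENVIRONMENT}#"
            - name: RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID
              value: "#{RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID}#"
          envFrom:
            - configMapRef:
                name: webapi-config
            - secretRef:
                name: webapi-secrets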

This deployment file pulls the docker image from the azure container registry and passes our config and secret environment variables to the created pod. The values for the “ENVIRONMENT” and “RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID” variables come from the release pipeline.

Then we need a k8s service, which lets us connect to the application. To do that create a file, name it “service.yaml” and put the following…
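A sketch of this service.yaml, assuming the service name webapi-service and the pod label from the deployment sketch above:

apiVersion: v1
kind: Service
metadata:
  name: webapi-service     # hypothetical name, referenced by the ingress below
spec:
  type: ClusterIP
  selector:
    app: webapi
  ports:
    - port: 8080
      targetPort: 8080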

We provide a service to connect to the pod on port 8080. Now we have to provide an ingress which defines the routes for the web api in the aks. Create a file “ingress.yaml” and put the following…
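A sketch of such an ingress.yaml for the nginx ingress controller installed in the infrastructure part; the names are assumptions, and depending on your kubernetes version you may need the newer networking.k8s.io/v1 schema instead.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapi-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # the helm ingress controller from the infrastructure part
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: webapi-service
              servicePort: 8080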

This ingress will receive traffic from our public IP which was created in the infrastructure part by installing the helm ingress controller. We could define a lot here, but to keep this simple, we only have one rule which routes all the traffic to our web api application on port 8080.

All these k8s files will be applied by the release pipeline, which we will see later in this post…

Build Pipeline

First we create the build pipeline, which builds the docker image and pushes it to our azure container registry. Furthermore we create an artifact with the k8s files, which are later applied in the release pipeline.

Let’s go to azure devops now and navigate to your notifier project pipelines. Then press “create pipeline” and choose Azure Repos Git for your source. Select the web api repository. Then choose “Starter pipeline” and click “Save and run” and yes, commit it directly into the master branch. After the job has finished successfully, pull the repository changes from the origin. After that you should see an “azure-pipelines.yml” file in your web api repository. From here we start to add our needed build stuff. So open the file in your editor and let’s go…

First thing to do is delete all the generated content in the file.

(I have moved the pipeline file into a “.deploy” folder – if you want to do that too, you have to change the YAML file path in the pipeline settings!) Let’s start by setting the base configuration of the pipeline by putting the following code into the empty yaml file.
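A sketch of this base configuration; the alias, path and ref of the common lib repository and the variable names are assumptions.

trigger:
- master

resources:
  repositories:
  - repository: self
  - repository: common          # hypothetical alias/name for the shared common lib repository
    type: git
    name: Notifier/Common
    ref: master

pool:
  vmImage: 'ubuntu-latest'

variables:
  imageRepository: 'webapi'     # hypothetical variable names
  tag: '$(Build.BuildId)'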

Here we set the trigger (the branch on which the pipeline is triggered automatically), the resources, the pool (latest ubuntu) and some variables. Pay attention to the resources: besides “self” (the repository which triggered the build) we include the needed common lib repository here. In the last part I described the common lib. It is a library which is used by the web api and the workers… So it is an extra repository which you also need to create in your azure devops (if you have not already done this!). You can download the code for the common lib here.

Now we come to the steps and tasks section… Put the following code directly under the variables section.
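The following is only a sketch of these steps; the Docker registry service connection names, the checkout folder names and the paths are assumptions and depend on how your repositories and service connections are named.

steps:
  - checkout: self
  - checkout: common

  # with multiple checkouts every repository lands in its own subfolder of $(Build.SourcesDirectory)
  - task: Docker@2
    displayName: Build WebApi Image (acc)
    inputs:
      containerRegistry: 'Notifier Container Registry ACC'   # hypothetical Docker registry service connection
      repository: '$(imageRepository)'
      command: 'build'
      Dockerfile: 'WebApi/.deploy/Dockerfile'
      buildContext: '$(Build.SourcesDirectory)'
      tags: '$(tag)'

  - task: Docker@2
    displayName: Push WebApi Image (acc)
    inputs:
      containerRegistry: 'Notifier Container Registry ACC'
      repository: '$(imageRepository)'
      command: 'push'
      tags: '$(tag)'

  - task: Docker@2
    displayName: Build WebApi Image (prd)
    inputs:
      containerRegistry: 'Notifier Container Registry PRD'   # hypothetical Docker registry service connection
      repository: '$(imageRepository)'
      command: 'build'
      Dockerfile: 'WebApi/.deploy/Dockerfile'
      buildContext: '$(Build.SourcesDirectory)'
      tags: '$(tag)'

  - task: Docker@2
    displayName: Push WebApi Image (prd)
    inputs:
      containerRegistry: 'Notifier Container Registry PRD'
      repository: '$(imageRepository)'
      command: 'push'
      tags: '$(tag)'

  - task: CopyFiles@2
    displayName: Copy K8s Files
    inputs:
      SourceFolder: 'WebApi/.deploy/k8s'
      Contents: '**'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'
      CleanTargetFolder: true

  - task: PublishBuildArtifacts@1
    displayName: Publish K8s Artifacts
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'k8s'
      publishLocation: 'Container'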

It begins with checking out the relevant repositories (self and the common lib). In the first task we build the docker image for the acc environment from the dockerfile we created.

In the second task we push the image to our container registry. The next two tasks do the same for prd. You might be wondering why we have to do the same for different environments… This is only because we wanted to have every resource per environment. It can also be a good idea to share some resources like the container registry, but this way has the advantage that we have truly independent resources for every stage…

The next task copies our k8s files to the artifact staging directory, so they can be uploaded as an artifact in the last step.

When you push your next changes to the master branch, the pipeline will be triggered automatically and should run successfully.

Release Pipeline

For this release pipeline a (hopefully already configured) Azure ARM service connection is needed. We already used it in PART 1.1 Infrastructure Pipeline. I will not go into much detail here about creating the stages, adding artifacts and adding pre-deployment conditions. (How to do this in detail is described in PART 1.1 Infrastructure Pipeline and can easily be adapted.)

Now let’s go to azure devops and create a release pipeline (azure devops -> pipelines -> releases -> new) and name it “WebApi Release”. Then open the pipeline in edit mode and start by adding the web api build artifact. At the end we want to have two stages (“acc” and “prd”). Set the pre-deployment condition of “acc” to “After Release” and of “prd” to “After Stage -> acc”.

Before we add the release tasks, we go to the “Variables” tab and set some pipeline variables.

Name                                       Value                                        Scope
DOMAIN_URL_BASE_NAME                       notifier.com                                 Release
DOMAIN_URL_SUFFIX                          -acc                                         acc
DOMAIN_URL_SUFFIX                          -prd                                         prd
ENVIRONMENT                                acc                                          acc
ENVIRONMENT                                prd                                          prd
RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID   $(Build.BuildId)                             Release
ApplicationInsights__InstrumentationKey    $(ApplicationInsights--InstrumentationKey)   Release
EventHubSettings__ConnectionString         $(EventHubSettings--ConnectionString)        Release
StorageSettings__ConnectionString          $(StorageSettings--ConnectionString)         Release

The values for the app insights key, the event hub connection string and the storage connection string come from our key vault, which we make available through a task in the next step.

Now we start adding tasks for the release. Please go to the “Tasks” tab and select the “acc” stage. Start by adding the first task, “Azure Key Vault”. Set the ARM service connection and select the web api key vault. This task makes it possible to use the azure key vault secrets inside the pipeline. The really cool thing here is that we have no password anywhere: it is all hidden in the key vault, and the key vault is accessed via the service connection.

image by author

The second task is for replacing the variables in our kubernetes deployment files with the ones from our release pipeline. We choose the “Replace Tokens” task here. Set the kubernetes folder as root directory. The default values for a replacement match are the prefix “#{” and the suffix “}#”. So we can leave the defaults, because our kubernetes files use exactly this token format.

The last task is for deploying our web api app to the aks! For this we need another service connection. So go to “Project Settings” and select “Service Connections”. Then create a new service connection of type “Kubernetes”, select your azure subscription and wait for the login window. After that you should see your clusters. Select the acc cluster and give it the name “AKS Notifier ACC”. When you have done this you can go back to your release pipeline tasks and add the last task, “Kubectl”.

image by author

Set the Kubernetes service connection here (in my case AKS Notifier – but in your case AKS Notifier ACC – because we have a k8s cluster for every stage). Then select the “apply” command and set the path to the kubernetes deployment files.

Then you should add the same tasks for the “prd” stage which I will not repeat here…

Verify

So, if we now release the web api and the pipeline runs successfully, we can check whether everything is working. To do that please first connect to the acc aks…

az aks get-credentials --resource-group notifier-resource-group-acc --name notifier-aks-acc

and then get the current services…

kubectl get services

You should now see the ingress controller which was created by the infrastructure pipeline. This one should have an external ip. You should also see the webapi service listening on port 8080.

Then you could check the pods…

kubectl get pods

When the pod is running, all is good. If you get an ImagePullBackOff, please run the following command to get more information.

kubectl describe pod POD_NAME

My guess is that in our case the aks has no authorization to pull from the container registry. If that is the case, please allow it by typing the following and then create a new release.

az aks update -n notifier-aks-acc -g notifier-resource-group-acc --attach-acr notifiercontainerregistryacc

If everything works up to here, please call the ping (GET) endpoint (which does not use other resources like event hubs or anything else).

GET http://YOUR_PUBLIC_IP/api/notifications/ping

If you get back “PING”, then the ingress routing to your web api is working. Let’s go a step further and try to get all notifications.

GET http://YOUR_PUBLIC_IP/api/notifications

If you now get an empty array, we know that the connection to our azure storage is working. If it is not working, your connection string is probably wrong or was not set/replaced correctly in the pipeline.

And by posting a message to the endpoint, it should be sent to event hubs and saved in the azure table.

POST http://YOUR_PUBLIC_IP/api/notifications?message=hello this is my message

Conclusion

We created here a CI/CD pipeline for our web api, which is now running on an aks. The web api can be reached from outside the cluster and can be used for getting and sending notifications.

Preview

In the next chapter we are going to create the Notifier Workers (app insights, email) which can consume our notifications sent from the web api over the event hub.


PART 1.1: INFRASTRUCTURE PIPELINE – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.
image by author

This is one part of a series. So if you have not read PART 0: OVERVIEW yet, you can go there first to get an overview of what we will actually be doing here …

Introduction

In the last part, PART 1: INFRASTRUCTURE, we created the resources we need for our scalable web application. In this part we want to automate the infrastructure by creating a pipeline for it. The goal is the same as with every other pipeline: we can deliver our infrastructure via continuous integration and an approval process for the different stages.

We need to download terraform onto our build agent. For this we have different possibilities. We could do this “manually” (bash script) or use the terraform build/release tasks. I wanted to use the build tasks, because they take a lot of work away from you, but they cannot deal with workspaces and that sucks. The first “problem” was selecting a workspace. This is not supported directly, but you can do it with bash, and yes, that works. The real problem is that the task does not use the selected workspace. It took me hours to identify what was going wrong there, but there is a bug in that task.

So I decided to use the build task to download terraform to the agents and do the rest (authentication and terraform actions) via bash commands.

Furthermore I have to say that I had some issues to track down and fix while building these pipelines. This may differ totally from environment to environment. But the good thing is that while fixing your personal issues you will learn a lot about how terraform and azure work.

Prerequisites

There are a few things to do before we can really start with the pipelines… First you need to work through the first part (PART 1), where we created our terraform configurations. These can also be downloaded/cloned from https://dev.azure.com/sternschleuder/Notifier/_git/Infrastructure?version=GBfeature%2Fpart1.

Terraform Build/Release Task

Go to the terraform build/release tasks and install the task into your azure devops environment.

Azure

First delete all resources that we created in PART 1 of the series (the best way is to delete the resource group we used there), because we want to create everything anew with our pipelines and avoid diverging states and other problems.

Then we create a resource group which is used explicitly for the terraform state. In other words, we persist our terraform state remotely, so that pipelines, local runs and other developers can share the same state.

So for this please create a resource group named “notifier-resource-group”. We also need a storage account which is used by terraform to save the states. So create one, name it “notifiertfstore” and choose the just created resource group as the resource group for the storage. Then go into the storage, choose “Storage Explorer”, right click on “BLOB CONTAINERS”, choose “Create blob container” and name it “notifiertfstate”. This is the location where terraform will create the states.
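If you prefer the azure cli over the portal, the same setup could look roughly like this (the location is an assumption):

# create the resource group, storage account and blob container for the remote terraform state
az group create --name notifier-resource-group --location westeurope
az storage account create --name notifiertfstore --resource-group notifier-resource-group --sku Standard_LRS
az storage container create --name notifiertfstate --account-name notifiertfstore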

It is also a good idea to add a group in azure and add your user account to it (the account you used to log in to the azure cli with “az login”). This helps when we want to run terraform plan/apply locally or when other devs want to do this; otherwise there can be auth errors when accessing the key vault resources. So go to “Azure Active Directory”, click “Groups” and create one named “notifier-devs”.
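This can also be done from the cli; a sketch (the property name of the signed-in user’s object id differs between cli versions):

# create the azure ad group and add your own user account to it
az ad group create --display-name notifier-devs --mail-nickname notifier-devs
# newer cli versions return the object id as "id" instead of "objectId"
az ad group member add --group notifier-devs --member-id $(az ad signed-in-user show --query objectId -o tsv)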

Code Changes

First open the “backend.tf” file and replace that code with the following:

terraform {
  backend "azurerm" {
    tenant_id            = "YOUR_TENANT_ID"
    subscription_id      = "YOUR_SUBSCRIPTION_ID"
    resource_group_name  = "notifier-resource-group"
    storage_account_name = "notifiertfstore"
    container_name       = "notifiertfstate"
    key                  = "terraform-notifier.tfstate"
  }
}

Here we are going to use our newly created remote backend for persisting the terraform state. Please set your tenant and subscription id here (you can get this information from “az account list” – the id field is the subscription id!).

Now we need to change our “common.yaml” to the following. We add our service connection here so that it also has access to our key vaults. Do not forget to replace the placeholders in capitals with your information! You will get the “YOUR_SERVICE_CONNECTION_OBJECT_ID” from the azure resource manager service connection created later in the build pipeline section. After the service connection is created you will find the service connection name and object id in azure ad under app registrations.

tenant_id: YOUR_TENANT_ID

kv_allow:
    YOUR_SERVICE_CONNECTION_NAME: # Service connection (the principal used in the azure devops pipeline)
        object_id: YOUR_SERVICE_CONNECTION_OBJECT_ID
        secret_permissions: ["get", "list", "delete", "set"]
    notifier-devs: # Allow group
        object_id: YOUR_NOTIFIER_DEVS_OBJECT_ID
        secret_permissions: ["get", "list", "delete", "set", "recover", "backup", "restore"]

Build Pipeline

First we create the build pipeline and initialize terraform for our workspaces and validate them. Furthermore we create the artifact with the terraform files to later create the resources in the release pipeline.

Let’s go to azure devops now and navigate to your notifier project pipelines. Then press “create pipeline” and choose Azure Repos Git for your source. Select the “Infrastructure” repository. Then choose “Starter pipeline” and click “Save and run” and yes, commit it directly into the master branch. After the job has finished successfully, pull the repository changes from the origin. After that you should see an “azure-pipelines.yml” file in your “Infrastructure” repository. From here we start to add our needed build stuff. So open the file in your editor and let’s go…

First thing to do is delete all stuff in there and then copy the following.

trigger:
- master

resources:
  repositories:
  - repository: self

pool:
  vmImage: 'ubuntu-latest'

The trigger sets the branch on which the pipeline will be triggered automatically. The repositories resource includes the repositories needed in the build; in our case this is only “self” (the repository which triggered the build). And in the pool we define our vmImage; we set it to the latest ubuntu.

If we have this done, we can go to the steps. First delete all the code inside the steps section. As the first action we install terraform with the “Terraform Installer” task.

steps:
  - task: TerraformInstaller@0
    displayName: Install Terraform Latest 
    inputs:
      terraformVersion: 'latest'

Then we need to authenticate to azure. We do this with an “Azure CLI” task. But before we can do this we have to create a service connection in azure devops to connect azure with azure devops. For this do the following:

  1. Click on “Project settings”
  2. Click on “Service connections” in the “Project Settings” sidebar
  3. Click on “Create service connection”
  4. Choose “Azure Resource Manager” and click “Next”
  5. Click “Service principal (automatic)” and “Next”
  6. Select Scope Level “Subscription” and choose your azure subscription
  7. Leave the resource group empty, so you have access to all resource groups from your subscription. (maybe there will pop up an auth window where you have to login with your azure credentials)
  8. And enter a name for the Azure Resource Manager connection (In my case “ARM Notifier”)

After we have created the service connection, we can add the task. Leave a blank line after the last step and put the following code into it… And take care with the indentation! (yaml is very sensitive about this)

  - task: AzureCLI@1
    displayName: Authorize Azure
    inputs:
      azureSubscription: 'ARM Notifier'
      scriptLocation: inlineScript
      inlineScript: |
        echo "##vso[task.setvariable variable=AZURE_CLIENT_ID;issecret=false]${servicePrincipalId}"
        echo "##vso[task.setvariable variable=AZURE_CLIENT_SECRET;issecret=true]${servicePrincipalKey}"
        echo "##vso[task.setvariable variable=AZURE_SUBSCRIPTION_ID;issecret=false]$(az account show --query 'id' -o tsv)"
        echo "##vso[task.setvariable variable=AZURE_TENANT_ID;issecret=false]${tenantId}"
      addSpnToEnvironment: true

This task authorizes us with the “ARM Notifier” service connection we created. Now we can initialize and validate our configurations. We do this with the following bash task, which uses the “ARM” environment variables. Put it under the azure cli task with a blank line in between.

  - bash: |
      terraform init
      for ENV in "acc" "prd"
      do
        terraform workspace select $ENV || terraform workspace new $ENV
        terraform validate
      done
    workingDirectory: '$(System.DefaultWorkingDirectory)'
    displayName: 'Terraform Init/Validate configuration'
    env:
      ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
      ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
      ARM_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
      ARM_TENANT_ID: $(AZURE_TENANT_ID)

We go directly further by adding another task which copies our terraform configs to the artifact staging directory. So put…

  - task: CopyFiles@2
    displayName: Copy Terraform Configs
    inputs:
      SourceFolder: '.'
      Contents: '**'
      TargetFolder: '$(build.ArtifactStagingDirectory)'
      CleanTargetFolder: true
      OverWrite: true

And the last task is to publish the artifact…

  - task: PublishBuildArtifacts@1
    displayName: Publish Terraform Artifacts
    inputs:
      PathtoPublish: '$(build.ArtifactStagingDirectory)'
      ArtifactName: 'tf'
      publishLocation: 'Container'

So – if we have done all this, our build pipeline should work! Please trigger your build pipeline to see if it is working. It is best to push the code changes; the pipeline should then be triggered automatically.

Release Pipeline

The release pipeline is responsible for applying our terraform changes in azure. First we are going to create a new release pipeline. For this go again to azure devops -> pipelines -> releases and create a pipeline. Select the “Empty job” template and press apply. Then name the stage “acc plan” and close the sidebar window. Release pipelines (at the moment) cannot be edited via yaml, so we have to use the user interface. After we have closed the sidebar our pipeline looks like this:

image by author

I have already renamed the pipeline (at the top) to “Infrastructure Release”. We need to choose the artifact now (in the graphic above it is already done). Press the add button in the artifacts section and choose build as source type. Choose the notifier project and the Infrastructure build pipeline as source, like in the graphic below, and press add.

image by author

Now we add some tasks here. For this please click on the job link in the “acc plan” stage. Then we are in the task view. Click the plus button to add the “Terraform Installer” task. The task here is almost the same as before in the build pipeline, but we have to use the ui. There is nothing more to fill out here.

Next we add a further task to authorize to azure. Please select an Azure CLI task and choose our “ARM Notifier” for the “Azure Resource Manager connection”. The script type is “Shell”; choose inline script and put the following into it:

echo "##vso[task.setvariable variable=AZURE_CLIENT_ID;issecret=false]${servicePrincipalId}"
echo "##vso[task.setvariable variable=AZURE_CLIENT_SECRET;issecret=true]${servicePrincipalKey}"
SUBSCRIPTION_ID=`az account show --query 'id' -o tsv`
echo "selected subscription ${SUBSCRIPTION_ID}"
echo "##vso[task.setvariable variable=AZURE_SUBSCRIPTION_ID;issecret=false]${SUBSCRIPTION_ID}"
echo "##vso[task.setvariable variable=AZURE_TENANT_ID;issecret=false]${tenantId}"

Please open the “Advanced” tab and enter “$(working_dir)” as the “Working Directory”. This is a variable which will be set later when we create a task group from our tasks…

Now we come to the last task, which initializes terraform, selects the workspace and calls the terraform plan or apply action. Add a new bash task, choose “Inline” and put in the following:

terraform init
terraform workspace select $(env) || terraform workspace new $(env)

if [ "$(tf_action)" = "apply" ]; then
       terraform $(tf_action) -auto-approve
else
       terraform $(tf_action)
fi

This script uses two variables (“$(env)” and “$(tf_action)”), which will also be set later… Note the auto-approve flag when applying the changes. We also need to add the environment variables which come from the azure cli task. So please set these in the “Environment Variables” tab. Add the following:

Name                  Value
ARM_CLIENT_ID         $(AZURE_CLIENT_ID)
ARM_CLIENT_SECRET     $(AZURE_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID   $(AZURE_SUBSCRIPTION_ID)
ARM_TENANT_ID         $(AZURE_TENANT_ID)
Environment Variables

Now it is time to create a task group from our tasks, which gives us a lot of advantages (a new deployment stage is created faster, changes are applied globally wherever the task group is used, etc.). So select these three tasks like in the image below, right click and choose “Create task group”. Name it “Terraform Plan/Apply”.

image by author

If this is done we see our task group and have to fill out the variables which we have defined in the tasks. Enter “acc” for environment, “plan” for terraform action and choose the working directory with the configuration files, like in the image below.

image by author

Go back to the pipeline view and click clone on the “acc plan” stage (hover over the stage and an icon will appear!). Click on the new cloned stage and rename it to “acc apply”. This stage is where terraform will create the resources. It should be triggered after the “plan” stage was successful.

image by author

And I would create an approval for this. So click the thunder/user symbol to open the pre deployment conditions. Trigger should be “After Stage” and pre deployment approvals enabled with a selected user.

image by author

For the production stages we can clone the “acc plan” and “acc apply” stages once more. Let’s do this… After we have done this we change the names to “prd plan” and “prd apply”. Then we have to choose the correct pre-deployment conditions for each stage. Click on the thunder/user symbol and change the conditions analogous to the acc stages. (In the prd apply stage we may need to deselect the acc stages and select the “prd plan” stage.)

The pipeline view should look like this now:

image by author

But now we need to change the variables for each stage. Our “acc plan” is ok, but the other ones are cloned, so they need some adjustments. Let’s jump into the task view by clicking the task link in the “acc apply” stage. Select the task group and change the “tf_action” to “apply”. That’s all!

Then choose the “prd plan” stage under the “Task” dropdown, select the task group again, set the “env” to “prd” and make sure the action is “plan”. Do the same with the “prd apply” stage and check that the action is “apply” there. Then save the pipeline.

One more thing would be cool – let’s enable the continuous deployment trigger. Click on the thunder icon near the artifact and then do the following:

image by author

Save the pipeline again! Now the build pipeline and the release pipeline should be triggered automatically and release the plan. Then we only need to approve to roll it out!

Enable Access AKS to use ACR

To give the k8s cluster access to the container registry we need to allow it explicitly. Otherwise we will get an authorization error when we want to deploy something from our registry to k8s. To do this please run the following line with the azure cli.

az aks update -n notifier-aks-acc -g notifier-resource-group-acc --attach-acr notifiercontainerregistryacc

You may ask why we do this here and not with terraform. You need admin rights in your azure portal for this and the pipeline does not have them. It is possible to configure this with terraform and azure active directory etc., but I do not want to make things more complex here, which is why we go this way.

Problems I had to face

In principle this is all very easy to configure and set up, but there are so many things which can drive you crazy… For example I wanted to use the terraform cli task for everything, but there was a strange bug with selecting and then using the right workspace. This was not easy to understand, because I thought I had missed something. But the task was the problem… However, here are some other problems which I encountered.

Terraform – Resource Already Imported

First I got some errors when I wanted to apply my resources. Some resources were already there and I had to import them, although it was the first time I had created them. For this reason I deleted the complete resource group which we used in PART 1 of the tutorial. Then I read that there is a cache issue with terraform azurerm version 2.* which can produce this kind of problem. So if this happens you have to get your state up to date by importing these resources. You can do it like this (example for a key vault secret):

terraform import azurerm_key_vault_secret.kvs_webapi_appinsights https://kv-webapi-acc.vault.azure.net/secrets/ApplicationInsights--InstrumentationKey/d9bff6b232d0412fb3aa2d9e9a07961

Terraform – State Container Locked

This occurred when I wanted to apply terraform changes first locally and then via the pipeline. I had an error in my local changes and the apply did not work; then I pushed the fixed code to the repo and the pipeline failed. It said that the container is in a lease state. So I took a look in azure at that state file in the container, but the container info said that there is no lease state. Hm, I thought… I acquired a lease on that state file by right clicking on it and choosing the action, and afterwards I removed that lease state again. After that there was no more blocking in the pipeline. Maybe terraform could not remove the lease after the error.

Key Vault – Access Policies

At first I used the current azurerm client information for creating resources, access policies, etc. But in that case you have to keep in mind that these values differ depending on whether you apply the resources from your local machine or via the pipeline. And if only one access policy is created, you will run into auth errors when accessing the key vault secrets, because only the creator of the resource has access. To avoid this, as described in this post, I added access policies for an azure group and for the azurerm service connection.

Conclusion

I have learned here that it is very easy to set up such a pipeline, but not so easy to get it running. A deeper understanding of azure and terraform is needed for this. But if you work through it you will learn a lot.

So we have created a build and release pipeline for our infrastructure which is working quite well. And it is easy to extend with more stages/environments!

The updated code for this part can be downloaded from https://dev.azure.com/sternschleuder/Notifier/_git/Infrastructure?path=%2F&version=GBfeature%2Fpart1_1&_a=contents.

Preview

In the next post PART 2 we will create the web api for our notifier web application. Here we will learn to use our created resources inside a .net webapi application.


PART 0: OVERVIEW – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.
image by author

This post will be the first of a series, because it makes no sense to put all that stuff into one post. So this post will outline the overall demo application and infrastructure and how we will start to develop it.

What do we learn here?

We will build a modern style web application environment which uses a lot of technologies to bring it up and running. We learn how to wire all these pieces together and automate them. We start by creating the infrastructure in azure using terraform and integrating it with azure devops pipelines. Then we are going to create a simple web api in .net core which uses azure tables to store data and an event hub for posting messages into our system. After that we need to create multiple workers which can consume our messages. And finally a small functional user interface in angular which uses the web api. We will talk a lot about configuration, keeping secrets secret and other things which can cause problems when connecting all those parts.

What we will not do!

This demo application/environment will be far away from a complete production-ready application. There will be no authentication or other security measures, which are extremely important, nor sufficient error handling, unit tests or great design patterns inside each software piece, etc. The focus here is more on the overall environment with pipelines, message broker, small services, etc. The code logic will be very simple, so we can concentrate on the things we want to learn here.

Which technologies/tools we will use for coding, deploying and hosting?

For programming we will use C#/.NET Core/WebAPI for the backend and Angular/TypeScript for the frontend. We use Azure DevOps for the build/release pipelines and source control (git). The complete infrastructure will be created in Azure, with Terraform for defining the infrastructure in code. In Azure we will use Event Hubs as our message broker, Azure Tables to store the notifications, Application Insights as one of our notification receivers, Key Vault to keep our secrets secret, Container Registry for our docker images and the Azure Kubernetes Service (AKS) for hosting and managing our docker containers.

What kind of functionality are we developing?

I think a very small “notifier” application makes sense here. With this, we get to explore all the parts. The functionality is very simple: the app provides an interface for creating, listing and resending notifications to their consumers.

I start explaining the flow at the top of the diagram below. First the user should be able to create a notification via the user interface (made here with angular). The ui calls the web api to create a notification. The web api stores the notification in the table and sends a notification message to the event hub. Finally the two consumers (application insights worker and email worker) receive it and do their job. The web api provides an additional “get notifications” endpoint by which the ui can read them. The user can then select one or the other and resend the notification(s).

image by author

Actually we do not need this “complex” setup to realize such simple functionality, but it has the well-known advantages of a microservice architecture and a scalable system, which I will not explain here to keep this as short as possible.

What are the next steps?

In each part I will explain one “brick” needed to get this all to work. I explain in every specific post what we need and what we achieve there. In a real world project it would make more sense not to put all the infrastructure tasks into one part and (for example) the web api into another.

Before we start we should prepare a little bit. We need an Azure DevOps account, and there we create a project named “Notifier”. Make sure that you choose git for source control! The work item template does not matter to us, because we will not use it. Then we need an Azure account. When we have this done we can start following the steps. So then let’s go … (But I will spare myself the saying “Let’s get your hands dirty.”)