
JS/TS – add and remove event listeners properly without losing the “this” context

image by author

Before I go deeper into the topic, I want to let you know: if you are only searching for the solution, you can jump directly to the “Solution” section of this article. Otherwise, read on…

JS Event Listeners

So when you want to listen, for example, for a “click” event on any html element, you have to use “element.addEventListener(‘click’, myhandler);”. Everything is cool so far. But often you register these events more than once, so it is common and good practice to remove the listener before you register a new one. This can be done with “element.removeEventListener(‘click’, myhandler);”. And here the problem starts…

Problem

You have two main problems when you want your “this” context to stay the same (inside a handler) and you really want to clean up the event registration. I will demonstrate the problems in the following examples. First I will set up a base test class which helps us to reduce the code shared by all concrete test cases.

Test Setting Class

The following class is the base class for all our coming test cases, and it only exists to avoid writing redundant code. In the constructor it creates a div element with an id attribute. It also provides a method which simulates a click on this element. (This is only for testing purposes, because I do not want to create an html file…)
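A minimal version of such a base class could look like this (the class and method names are just placeholders for these examples):

class TestBase {
  constructor() {
    // create a div element with an id attribute
    this.element = document.createElement('div');
    this.element.id = 'test-element';
    document.body.appendChild(this.element);
  }

  simulateClick() {
    // dispatch a real "click" event so all registered listeners fire
    this.element.dispatchEvent(new Event('click'));
  }
}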

Using a defined Event Handler

Here is our first try. We have an “addEventListener” method which first calls “removeEventListener” and then registers a defined method as the event handler. After that we create an instance of the class, call “addEventListener” multiple times, and finally invoke the simulated click.
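A version of this attempt, building on the base class from above, could look like this:

class DefinedHandlerTest extends TestBase {
  addEventListener() {
    // remove first, so the handler is never registered twice
    this.element.removeEventListener('click', this.onClickEventHandler);
    this.element.addEventListener('click', this.onClickEventHandler);
  }

  onClickEventHandler() {
    // "this" is the clicked element here, not the class instance!
    console.log('clicked, this is:', this);
  }
}

const definedTest = new DefinedHandlerTest();
definedTest.addEventListener();
definedTest.addEventListener();
definedTest.addEventListener();
definedTest.simulateClick(); // logs exactly once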

So this approach calls our defined method “onClickEventHandler” only once, which is good. But we also lose our class instance context. We are no longer able to call another method of the class instance, because the “this” scope is now the element which was clicked…

Using an anonymous Event Handler

So you could use an anonymous function as the event handler instead, which we do in the following example. We invoke the methods in the same order and the same number of times as in the previous example.
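A version of this attempt could look like this:

class AnonymousHandlerTest extends TestBase {
  addEventListener() {
    // every call creates a NEW function, so this remove matches nothing
    this.element.removeEventListener('click', () => this.onClick());
    this.element.addEventListener('click', () => this.onClick());
  }

  onClick() {
    console.log('clicked, instance intact:', this instanceof AnonymousHandlerTest);
  }
}

const anonymousTest = new AnonymousHandlerTest();
anonymousTest.addEventListener();
anonymousTest.addEventListener();
anonymousTest.simulateClick(); // the handler fires twice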

Here we keep our class instance scope inside the anonymous event handler, but the problem is that we are not able to remove the event listener. To remove a listener we always need the exact same function instance, and with an anonymous function we create a new function every time we add or remove the listener. So in the end our handler will be called twice – which sucks…

Using a defined Method which returns the Event Handler

Then you might come to the point where you try to combine these two things and call a defined method which returns an event handler.
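Sketched, that idea looks like this:

class GetterHandlerTest extends TestBase {
  addEventListener() {
    // getClickHandler() returns a NEW function on every call,
    // so the remove call never matches the handler that was added before
    this.element.removeEventListener('click', this.getClickHandler());
    this.element.addEventListener('click', this.getClickHandler());
  }

  getClickHandler() {
    return () => console.log('clicked, instance intact:', this instanceof GetterHandlerTest);
  }
}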

But the problem here is the same as with the anonymous functions, because in the end you still pass a new anonymous function to “addEventListener” – we have only wrapped it in a getter.

Solution

Now it is time for the solution. First we define a “that” variable which holds the current “this” context. Then we assign our event handler function to a variable. Now we have a single function instance which we can use to remove the listener, and inside it we can use “that” to reach our class instance’s “this” context.
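A version of the solution, again based on the base class from above, could look like this:

class SolutionTest extends TestBase {
  constructor() {
    super();
    const that = this; // capture the class instance
    // one single function instance stored on the object,
    // so add and remove always refer to the same handler
    this.clickHandler = function () {
      that.onClick(); // "that" gives us the class instance back
    };
  }

  addEventListener() {
    this.element.removeEventListener('click', this.clickHandler);
    this.element.addEventListener('click', this.clickHandler);
  }

  onClick() {
    console.log('clicked, instance intact:', this instanceof SolutionTest);
  }
}

const solutionTest = new SolutionTest();
solutionTest.addEventListener();
solutionTest.addEventListener();
solutionTest.simulateClick(); // fires exactly once, with the right context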

Conclusion

So as you can see, scoping and removing listeners can be annoying, but we can solve it. That does not change the fact that it is a little hacky – but that is javascript…


PART 2.1: WEB API PIPELINE – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.

image by author

This is one part of a series. So if you have not read the PART 0: OVERVIEW yet, you can go there and read it first to get an overview of what we are actually doing here…

Introduction

In the last part, PART 2: WEB API, we created our simple web api in .net core, which provides a simple api for getting all messages or posting single messages. The post endpoint sends the message to an event hub, and from there it can be consumed by our workers (which we build in the next parts!).

Now we will create a build and release pipeline (CI/CD) for our web api. The build pipeline will build the artifacts and upload the container image to our azure container registry.

We will create a release pipeline with two stages (acc and prd). The release pipeline will use our key vaults and apply the secrets as environment variables to the app. We will write k8s deployment, service and ingress files to deploy the web api to our aks.

You can download the code from the git repository.

Deployment Files

Let’s start by writing the needed deployment files…

Docker

Start by creating a file named “Dockerfile” in the “./.deploy” folder inside the webapi repository. Then put the following code into it.
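A minimal sketch of such a dockerfile could look like this (the project paths and image tags are placeholders – adjust them to your repository layout):

# build stage: restore, build and publish with the full .NET Core SDK
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY WebApi/ WebApi/
COPY Common/ Common/
RUN dotnet publish WebApi/Notifier.WebApi.csproj -c Release -o /app/publish

# final stage: note that the full sdk image is used here on purpose
# instead of runtime:3.1-buster-slim (see the explanation below)
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Notifier.WebApi.dll"]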

This dockerfile is quite simple. It copies the required sources (WebApi and Common) and restores, builds and publishes the application. After that we define the container’s entry point. But one thing is important! I noticed that the default dockerfile generated by visual studio does not work here. That is because it uses “runtime:3.1-buster-slim” as the runtime image, but when running in aks it has to be the full sdk. If not, you will get an error on the aks like: “It was not possible to find any installed .NET Core SDKs”.

Kubernetes

To deploy our web api to our aks we need some deployment files. First we create a folder inside the “.deploy” folder and name it “k8s”.

We start by creating a file inside the k8s folder and naming it “config.yaml”. Then put the following code into it.
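A minimal sketch of such a config map could look like this (the resource name and the exact key for the CORS entry are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: webapi-config
data:
  # non-secret environment variables for the web api
  CorsSettings__Origins: "http://localhost:4200"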

We use this config file to push non-secret environment variables to the application. At the moment we only have one entry here, for “CORS”. Keep in mind that our web api automatically overrides values in “appsettings” with values from these environment variables.

Now we do the same with the secrets… Create a file named “secrets.yaml” next to the config.yaml. Then put the following code into it.
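A sketch of the secrets file could look like this (the resource name is a placeholder); the #{...}# placeholders will be replaced later by the “Replace Tokens” task in the release pipeline:

apiVersion: v1
kind: Secret
metadata:
  name: webapi-secrets
type: Opaque
stringData:
  ApplicationInsights__InstrumentationKey: "#{ApplicationInsights__InstrumentationKey}#"
  StorageSettings__ConnectionString: "#{StorageSettings__ConnectionString}#"
  EventHubSettings__ConnectionString: "#{EventHubSettings__ConnectionString}#"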

So here it is the same principle as with the config, only that these variables are pushed to the application as secrets. We have the application insights key, the storage connection string and the event hub connection string here. All of these secrets will be set later in the release pipeline with the values from the azure key vault.

Now we come to the deployment file for the web api. Please create a file “deployment.yaml” and put the following code into it.
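A sketch of such a deployment could look like this (names, the image path and the tokenized values are placeholders; in a real setup the environment-specific values would also come from the release pipeline):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapi
  template:
    metadata:
      labels:
        app: webapi
    spec:
      containers:
        - name: webapi
          image: notifiercontainerregistryacc.azurecr.io/webapi:#{RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID}#
          ports:
            - containerPort: 8080
          env:
            - name: ENVIRONMENT
              value: "#{ENVIRONMENT}#"
          envFrom:
            - configMapRef:
                name: webapi-config
            - secretRef:
                name: webapi-secrets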

Finally, this deployment file pulls the docker image from the azure container registry and passes our config and secret environment variables to the created pod. The values for the “ENVIRONMENT” and the “RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID” variables come from the release pipeline.

Then we need a k8s service which lets us connect to the application. To do that, create a file named “service.yaml” and put the following…
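A sketch of the service (names are placeholders) could be:

apiVersion: v1
kind: Service
metadata:
  name: webapi-service
spec:
  selector:
    app: webapi
  ports:
    - port: 8080
      targetPort: 8080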

We provide a service to connect to the pod over port 8080. Now we have to provide an ingress which represents the routes for the web api in the aks. Create a file “ingress.yaml” and put the following…
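A sketch of the ingress could look like this (names are placeholders; depending on your cluster version you may need the newer networking.k8s.io/v1 schema instead):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapi-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: webapi-service
              servicePort: 8080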

This ingress will receive traffic from our public ip which was created in the infrastructure part by installing the helm ingress controller. We could define a lot here, but to keep this simple we only have one rule which routes all the traffic to our web api application on port 8080.

All these k8s files will be applied by the release pipeline, which we will see later in this post…

Build Pipeline

First we create the build pipeline. It builds the docker images, pushes them to our container registry and publishes the k8s files as an artifact, which the release pipeline will use later to deploy the web api.

Let’s go to azure devops now and navigate to your notifier project pipelines. Then press “create pipeline” and choose Azure Repos Git as your source. Select the “WebApi” repository. Then choose “Starter pipeline”, click “Save and run” and yes, commit it directly into the master branch. After the job has finished successfully, check out the repository changes from the origin. You should then see an “azure-pipelines.yml” file in your “WebApi” folder. From here we start to add the build stuff we need. So open the file in your editor and let’s go…

First thing to do is to delete all the generated starter content. (I have moved the pipeline file into a “.deploy” folder – if you want to do that too, you have to change the path to the yaml file in the pipeline settings!) Then we set the base configuration of the pipeline by putting the following code into the now empty yaml file.

Here we set the trigger, the resources, the pool and some variables. Pay attention to the resources: we include the common lib repository which is needed in the build. I described the common lib in the last part – it is a lib which is used by the web api and the workers… So it is an extra repository which you also need to create in your azure devops (if you have not already done this!). You can download the code for the common lib here.
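A sketch of such a base configuration could look like this (the repository reference and the variable names are placeholders):

trigger:
- master

resources:
  repositories:
  - repository: self
  - repository: common          # the common lib repository
    type: git
    name: Notifier/Common

pool:
  vmImage: 'ubuntu-latest'

variables:
  imageRepository: 'webapi'
  dockerfilePath: '.deploy/Dockerfile'
  tag: '$(Build.BuildId)'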

Now we come to the steps and tasks section… Put the following code directly under the variables section.
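A sketch of these steps could look like this (the docker registry service connections, paths and variable names are placeholders, paths may need adjusting once both repositories are checked out, and the two prd tasks are only indicated):

steps:
  - checkout: self
  - checkout: common            # check out the common lib next to the web api sources

  # build and push the image for the acc environment
  - task: Docker@2
    displayName: Build WebApi Image (acc)
    inputs:
      command: build
      repository: '$(imageRepository)'
      dockerfile: '$(dockerfilePath)'
      containerRegistry: 'Notifier Container Registry ACC'
      tags: '$(tag)'

  - task: Docker@2
    displayName: Push WebApi Image (acc)
    inputs:
      command: push
      repository: '$(imageRepository)'
      containerRegistry: 'Notifier Container Registry ACC'
      tags: '$(tag)'

  # ...the same two tasks again for prd, using the prd registry service connection...

  - task: CopyFiles@2
    displayName: Copy K8s Files
    inputs:
      SourceFolder: '.deploy/k8s'
      Contents: '**'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

  - task: PublishBuildArtifacts@1
    displayName: Publish K8s Artifacts
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'k8s'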

It begins by checking out the relevant repositories (self and the common lib). In the first task we build the docker image for the acc environment with the dockerfile we created earlier.

In the second task we push the image to our container registry. The next two tasks do the same for prd. You might be wondering why we have to do the same for different environments… And yes, this is only because we wanted to have every resource for every environment. It would also be a reasonable idea to share some resources like the container registry, but this way we have truly independent resources for every stage…

The next task copies our k8s files to the artifacts directory, so they can be uploaded in the last step as an artifact.

When you push your next changes to the master branch, the pipeline will be triggered automatically and should run successfully.

Release Pipeline

For this release pipeline a hopefully already configured Azure ARM service connection is needed. We already used it in PART 1.1 Infrastructure Pipeline. I will not go into much detail here about creating the stages, adding artifacts and adding pre-deployment conditions. (How to do this in detail is described in PART 1.1 Infrastructure Pipeline and can easily be adopted.)

Now let’s go to azure devops and create a release pipeline (azure devops -> pipelines -> releases -> new) and name it “WebApi Release”. Then open the pipeline in edit mode and start by adding the web api build artifact. In the end we want to have two stages (“acc” and “prd”). Set the pre-deployment condition of “acc” to “After Release” and of “prd” to “After Stage -> acc”.

Before we add the release tasks, we go to the “Variables” tab and set some pipeline variables.

Name | Value | Scope
DOMAIN_URL_BASE_NAME | notifier.com | Release
DOMAIN_URL_SUFFIX | -acc | acc
DOMAIN_URL_SUFFIX | -prd | prd
ENVIRONMENT | acc | acc
ENVIRONMENT | prd | prd
RELEASE_ARTIFACTS_BUILD_NOTIFIER_BUILDID | $(Build.BuildId) | Release
ApplicationInsights__InstrumentationKey | $(ApplicationInsights--InstrumentationKey) | Release
EventHubSettings__ConnectionString | $(EventHubSettings--ConnectionString) | Release
StorageSettings__ConnectionString | $(StorageSettings--ConnectionString) | Release

The values for the app insights key, the event hub connection string and the storage connection string come from our key vault, which we make available through a task in the next step.

Now we start adding the tasks for the release. Please go to the “Tasks” tab and select the “acc” stage. Start by adding the first task, “Azure Key Vault”. Set the ARM service connection and select the web api key vault. This task makes it possible to use the azure key vault inside the pipeline. The really cool thing here is that there are no passwords in the pipeline – everything is hidden in the key vault, and the key vault is accessed via the service connection.

image by author

The second task replaces the variables in our kubernetes deployment files with the ones from our release pipeline. We choose the “Replace Tokens” task for this and set the kubernetes folder as the root directory. The default token pattern is the prefix “#{” and the suffix “}#”, so we can leave the defaults, because that is exactly the token format used in our kubernetes files.

The last task deploys our web api app to the aks! For this we need a further service connection. So go to “Project Settings” and select “Service Connections”. Then create a new service connection of type “Kubernetes”, select your azure subscription and wait for the login window. After that you should see your clusters. Select the acc cluster and give it the name “AKS Notifier ACC”. When you have done this you can go back to your pipeline release tasks and add the last task, “Kubectl”.

image by author

Set the Kubernetes service connection here (in my case AKS Notifier – but in your case AKS Notifier ACC – because we have a k8s cluster for every stage). Then select the “apply” command and set the path where the kubernetes deployment files are located.

Then you should add the same tasks to the “prd” stage, which I will not repeat here…

Verify

So, if we now release the web api and the pipeline runs successfully, we can check if everything is working. To do that, please first connect to the acc aks…

az aks get-credentials --resource-group notifier-resource-group-acc --name notifier-aks-acc

and then get the current services…

kubectl get services

You should now see the ingress controller which was created by the infrastructure. It should have an external ip. You should also see the webapi service running on port 8080.

Then you could check the pods…

kubectl get pods

When the pod is running, then all is good. If you get an ImagePullBackOff, please run the following command to get more information.

kubectl describe pod POD_NAME

My guess is that in our case the aks has no authorization to access the container registry. If this is the case, please allow it by running the following command and then create a new release.

az aks update -n notifier-aks-acc -g notifier-resource-group-acc --attach-acr notifiercontainerregistryacc

If everything works up to here, please call the ping (GET) endpoint (which does not use other resources like event hubs or anything else).

GET http://YOUR_PUBLIC_IP/api/notifications/ping

If you get back “PING”, then your ingress routing to your web api is working. Let’s go a step further and try to get all notifications.

GET http://YOUR_PUBLIC_IP/api/notifications

If you now get an empty array, we know that our connection to our azure storage is working. If it is not working, then probably your connection string is wrong or was not set/replaced correctly in the pipeline.

And by posting a message to the endpoint, it should be sent to event hubs and saved in the azure table.

POST http://YOUR_PUBLIC_IP/api/notifications?message=hello this is my message

Conclusion

We have created a CI/CD pipeline for our web api, which is now running on an aks. The web api can be reached from outside the cluster and can be used for getting and sending notifications.

Preview

In the next chapter we are going to create the Notifier Workers (app insights, email) which can consume our notifications sent from the web api over the event hub.


Why I completely switched to VS Code

…also for C#/.NET development…

image by author

Foreword

First I have to say that I like Visual Studio very much. It is a good working IDE for .NET development. I have used it for over ten years now for mobile, desktop and web development. Yes, of course, sometimes visual studio can be a bit annoying because of its performance and the “magic” behind the scenes. But all in all it is a good program…

However, some weeks ago I needed to reinstall windows and began installing all the programs I needed. I stopped after installing VS Code and said to myself: “Hm, my msdn professional subscription has ended and the next Visual Studio version (2020) will cost me some money. Let’s try rider? Hm, no – first I want to give VS Code a real try!” This is because I already use VS Code a lot for all my other development like angular, go, flutter, etc., and I knew this should work with .NET as well.

So my first expectation was that I would get some syntax coloring and that compiling, tests, etc. would have to run in the terminal or be done in some not so convenient way.

My Experience

So I knew that I could also use VS Code, but I always thought that I would have to do without a lot of features. Most of the devs I know work with Rider or Visual Studio Professional/Enterprise, because they thought the same as me. But I was totally surprised how well it worked. So far I really miss nothing. Quite the opposite: I am enjoying the fast editor and being able to customize it for my individual needs without manipulating the source of the project.

I started by installing the “C# (powered by OmniSharp)” plugin. That plugin provides syntax highlighting, reference recognition, debugging, etc. I tried it on an existing solution (the root folder where the solution is located needs to be opened). If you want full support from the C# plugin you have to work with solution files. When you want to debug your project for the first time, you have to set up a launch file. Here you can specify the start project etc. With this installed you can debug your code, set breakpoints, view variables, add watch expressions and evaluate code (like in the immediate window in Visual Studio) within the “Debug Console” window.
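For a .net core 3.1 project such a “launch.json” looks roughly like this (the project name and output path are placeholders):

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Core Launch (console)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/MyApp/bin/Debug/netcoreapp3.1/MyApp.dll",
      "args": [],
      "cwd": "${workspaceFolder}/MyApp",
      "stopAtEntry": false,
      "console": "internalConsole"
    }
  ]
}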

With the “Visual Studio IntelliCode” plugin you get the same AI IntelliSense experience as in Visual Studio. To get a more powerful namespace importing experience, use the “Auto-Using for C#” plugin. It adds using directives while you type known types. And for a better overview you can install “vscode-icons”.

If you need a kind of “gui” for nuget packages you can install the “NuGet Package Manager” plugin, but of course you can also use the dotnet cli for that.

Then I thought: ok, that is really cool, but what about unit tests? Can I run tests inside VS Code or do I have to use the cli? And the answer is – yes, it is easily possible. You need the “Test Explorer UI” plugin and then the “.NET Core Test Explorer” plugin. After installing these you have a new icon on the left which opens the test explorer. You might have to edit the settings to specify in which locations the plugin should search for unit tests. And then you are good to go.
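In my case that meant pointing the test explorer to the test projects via a workspace setting, roughly like this (the glob pattern depends on how you name your test projects):

{
  "dotnet-test-explorer.testProjectPath": "**/*Tests.csproj"
}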

For creating new solutions or adding projects to an existing solution I thought you have to use the dotnet cli. But while writing this I did some quick research and came across the “vscode-solution-explorer” plugin. With this installed you have nearly the same experience as in Visual Studio. On the left side (activity bar) you get an additional Visual Studio icon. By pressing it you get a view similar to the one in Visual Studio: you can create projects, add existing projects, view all references, add nuget packages, etc.

There are thousands of plugins with which you can optimize your dev environment for your needs, but I am happy with the plugins I have mentioned here.

Plugin List

  • C# OmniSharp
  • Test Explorer UI
  • .NET Core Test Explorer
  • Auto-Using for C#
  • NuGet Package Manager
  • vscode-solution-explorer
  • Visual Studio IntelliCode

There are many more useful C# helper plugins, for code generation etc. But to get started comfortably, this is my recommendation.

Missing

Maybe there are some tools which you will not get. For profiling, code quality and that kind of stuff I use other tools anyway, which are part of the pipeline… I really found nothing that I would miss.

Conclusion

So if I work on projects which run on .NET Core or .NET 5, then for now I will definitely choose VS Code. It feels good to work with. I do not get why some people say that VS Code is not powerful enough. I tried it and I think I will not install Visual Studio again, except for the moment when I have to change some old webforms code :).

Refactoring, debugging, testing and writing code feels great to me and the setup was very easy. Maybe the entry is a little more difficult for new, inexperienced users, but in Visual Studio you also have to know what you are doing! So give it a try and tell me about your experience!


Skeleton for Vertical Layered Web API in .NET CORE

image by Vecteezy.com

…with fluent validation and automapping…

Today I just want to share a basic skeleton web api written in .net core. It is a kind of basic setup which works fine for microservices as well as for some bigger services. I am a very big fan of not only horizontal layering (see the post Horizontal and Vertical Layers in Software Development). The example also shows how application layer validation can be applied and how to easily map items through the layers. So maybe this helps someone…

First the link to the source: layered-net-core-app

Some Description

So this skeleton app shows how an application can be split into vertical layers. This has multiple advantages (again, see Horizontal and Vertical Layers in Software Development). Furthermore it shows, with one example, the flow of data and the connections from top (controller/application layer) to bottom (data layer). The data layer is only pseudo code and does not actually persist anything, to keep the focus on the layering and communication.

The application has three layers:

  • Application Layer
  • Business Layer
  • Data Layer

…and several projects:

  • WebApi (Contains startup, adding middleware, setups, controller routings, forming the output and finally hosting the application)
  • WebApi.Configuration (DI bootstrapping / connecting the layers together)
  • WebApi.Common (Common stuff for the web api)
  • Application (The implementation of the application layer)
  • Application.Contract (The access to the application layer from outside the application layer)
  • Business (The implementation of the business layer)
  • Business.Contract (The access to the business layer from outside the business layer)
  • DataLayer (The implementation of the data layer)
  • DataLayer.Contract (The access to the data layer from outside the data layer)

Application Layer

The application layer is used in this example by the web api project (the controllers). But the controllers only have access to the application layer contracts. This prevents unwanted direct use of the implementation and prevents access to business entities, which the application layer implementation knows about but which should not be exposed by the api to the “outer world”. The application layer implementation itself only knows the business layer contracts to communicate with it.

What this layer does depends on the call and its needs. But from my perspective it is responsible for providing services which get data from the business/domain side and/or trigger domain/business logic. Furthermore it orchestrates all the needed calls to the business logic and, for example, third party services to get/set the wanted result.
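As a rough illustration of this layering (all type names here are made up for the example and are not from the linked repository):

using System.Threading.Tasks;

// Business.Contract (stubs for the example)
public class Order { public int Id { get; set; } public decimal Total { get; set; } }
public interface IOrderBusinessService { Task<Order> LoadOrderAsync(int id); }

// Application.Contract – the only thing the WebApi controllers get to see
public class OrderModel { public int Id { get; set; } public decimal Total { get; set; } }
public interface IOrderAppService { Task<OrderModel> GetOrderAsync(int id); }

// Application – the implementation only knows the business layer contracts
public class OrderAppService : IOrderAppService
{
    private readonly IOrderBusinessService _business;

    public OrderAppService(IOrderBusinessService business) => _business = business;

    public async Task<OrderModel> GetOrderAsync(int id)
    {
        // orchestrate the business call and map the business entity
        // to a model that is safe to expose to the "outer world"
        var order = await _business.LoadOrderAsync(id);
        return new OrderModel { Id = order.Id, Total = order.Total };
    }
}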

Business Layer

The business layer only knows about the data layer contracts. The business layer itself exposes only interfaces for using the layer, plus the business entities.

This layer should contain all the business/domain logic. All business relevant calculations and manipulations should occur here. No manipulation of business entities should be done outside of this layer.

Data Layer

The data layer knows only about the database or storage where all the data is persisted. The contracts of the data layer should only be used by the business logic, to make sure that persisting data follows the rules inside the business layer. It is the connection to the database and/or storage and provides repositories to store and get items from the database/storage.

Comments

This splitting is too much for most services, but it demonstrates very well how the layers can communicate with each other in a controlled way. The pattern fits best in situations where no object relational mapper is used, because with entity framework the business entities usually are the tables of the data layer, so a separate data layer is often a little overdone.

Personally I think that layering a microservice into an application layer and a domain layer (where all the domain logic and data storage is handled) is the best compromise: you can work safely with domain logic without the total overhead of bubbling through countless layers. But as always, this depends on the needs of the application.


Horizontal and Vertical Layers in Software Development

image by author

…and the ideas behind it

In this post I will share some ideas about structuring a software application. I will describe what horizontal and vertical layering means to me and show how they can be applied.

Why

Layering an application is useful because you get a better overview of the application: it is clear what to expect in each specific layer, and you have fewer complicated and non-transparent dependencies. This applies to both vertical and horizontal layers, but takes effect in different ways. In the end you get a much more modular app which can be tested/unit tested more easily and is open for extension and different deployments.

Overview

In short, vertical layering is about the application layers inside a microservice, monolith, etc. – that is, inside a single executable application. Horizontal layering is about splitting an application into different domains/services/components (e.g. a microservice architecture). See the diagram below.

image by author

Vertical

Layering helps a lot to get a better overview of the application. It is much easier to find things, because you know what to expect in each layer. And it is always clear what I am allowed to do in a specific layer (because of the contracts of that layer). It prevents unwanted use of code in the wrong layer – for example (in a .net webapi) exposing business entities to the outer world of the web api, or accessing the data layer directly from the controller, and so on… (When you can manipulate entities wherever you are in the program, it is dangerous: you lose control over them and invalid manipulations can happen.)

Furthermore you automatically avoid cyclic dependencies. For example, imagine you have two application services. Each service uses a business or domain service for creating and persisting the same business entity (besides doing its other specific service stuff…). Because the shared business/domain logic lives in the business layer, the chance of a cyclic dependency is lower than when all services live in one layer, where it is not clear what may use what.

It is not really about how many layers you have (that depends on your needs), but you should have them and only expose access to the next upper layer through a contract. The diagram shows how this is applied.

image by author

For example in C# you can do this kind of separation in two ways. Imagine we have data, business and application layer.

First Method

You can create a project for every layer plus a contract project for each one (like in the image). The data layer only knows its own contracts. The business layer knows the data contract project and its own contract project. And the application layer knows only the business layer contracts and its own contract project. With this you avoid directly accessing entities from the application layer.

Second Method

You can create a project for each layer (application, business and data) and put the contracts directly into a folder of that specific project. The business layer knows the data layer, and the application layer knows the business layer. To avoid direct access to the implementation, we mark the implementation classes as internal and only make the contracts public.
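A tiny example of the second method (the names are made up): the contract is public, the implementation stays internal to the business project.

// Business/Contracts/ICustomerService.cs – public contract, usable by the application layer
public interface ICustomerService
{
    string GetDisplayName(int customerId);
}

// Business/Services/CustomerService.cs – internal implementation,
// not reachable from outside the business project
internal class CustomerService : ICustomerService
{
    public string GetDisplayName(int customerId) => $"Customer {customerId}";
}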

I like the first approach a little more, because it is immediately clear and I can easily share the contracts as a library without the implementations. But both do their job.

Horizontal

Horizontal layering for me is building and using components, or sometimes libraries, but above all it is separating an application into different services.

Microservices are a good example of horizontal layers, and/or splitting an application into multiple domains. These domains are self-contained and ideally have no need to communicate with the others. You then have multiple services which are scalable and whose responsibilities are clear. Of course, sometimes/often services (when using DDD in subdomains) need to communicate, but this is mostly done via messages (a message broker in microservices) and/or service endpoints. So there is no direct use of domain logic outside the service that owns the domain.

Conclusion

So today I would say the most important thing is horizontal layering, because it gives you better possibilities to scale the application plus the other things mentioned above. But I think vertical layering is important too. The more code you have in one application, the more important vertical layering becomes. A monolith without vertical layers, for example, is absolutely horrible to maintain, and I would bet the dependencies are very confusing and it is not really clear what is used where… If you had built your monolith/big service with vertical layers, it would be much easier to create horizontal layers from it. So for me both layering styles are important, and each has partly similar and partly different advantages.

If you have questions or suggestions, or if you see this completely differently, please leave a comment and we can talk about it. I am always happy to hear perspectives from other devs.


PART 2: WEB API – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.
image by author

This is one part of a series. So if you have not read the PART 0: OVERVIEW yet, you can go there and read it first to get an overview of what we are actually doing here…

Introduction

In the last part, PART 1.1: INFRASTRUCTURE PIPELINE, we finalized the infrastructure part by building our pipeline for it.

Now we go further and start with the web api for our notification application environment. We will build the web api in .net core. The web api provides two endpoints: one for creating a message and another for getting all messages. This is sufficient for our example. Furthermore we implement the use of azure storage tables, event hubs, application insights and key vaults.

Yes – I repeat myself, but the code here is not really production ready (error handling, retries, tests, etc.). Maybe I will add some more features (if anyone is interested) to all parts of this application once we are done with our base version. But our base version already has a lot of stuff in it, so let’s start…

Prerequisites

We only need an editor or development environment for .net and the .net core 3.1 framework for creating a .net core application (but I think this is obvious!). And we need the “Common lib” for our .net projects, which is described below.

Common lib

We need the common lib for this part as well as for our workers (next parts!), which will handle the sent notifications. The common lib source can be downloaded/cloned from the feature/part2 branch; put it next to the infrastructure folder. I will not list all the code here, but I will give a short overview of the content and describe the folders below.

Data

This folder contains the notification entity which is saved in an azure storage table. There is also a repository for it, which does the job of communicating with the storage table. (Please read the code to get more information on how this works in detail and/or visit https://docs.microsoft.com/en-us/azure/cosmos-db/tutorial-develop-table-dotnet – and do not be confused that we are using the cosmos db api; this api also works with azure storage tables.)

Extensions

One extension registers application insights and the other one wires up the key vault, from which we later get our secrets. The key vault credentials will be pushed via environment variables (but this is part of the next part – 🙂).

Protobuf/Messages

The protobuf folder contains the notification message which we will use to send/receive to/from our event hub. I chose to send the message in this binary format. The messages folder contains the C# version of the message (which we will use in our code). If you take a look at the “Notifier.Common.csproj” file you will find an item group which takes care of this generation (the gRPC tools are used for this).

Settings

Here are all settings defined as objects, which we will use in our needed .net projects.

WebApi

Again, we should first create a repository for the web api where we can push our code, for creating the pipelines etc. in the next part. So let’s create an azure devops repository named “WebApi” in our “Notifier” project and clone it next to the “Infrastructure” and “Common” folders. The complete source code can also be downloaded/cloned here.

Base Setup

Now create a .net core console project and name it (including the solution) “Notifier.WebApi”. We start by editing the “csproj” file, so replace/edit its contents with the following.

One interesting thing in the first property group is the “UserSecretsId”. Yes, we use the user secrets feature for local development, so no secrets have to be in the repository. (The user secrets file is normally created when you right click the project -> “Manage User Secrets” for the first time. Here we define the id directly in the project file, and it is a good idea to add the application name to the secret id – otherwise it is very hard to find it on your computer.)

The next item group is obvious: the “appsettings.json”, which we will create very soon!

Then we have some nuget packages which we need for our web api, and last but not least we include the common lib as a project reference (make sure the path is right in your environment). A better approach would be to add the common lib as a git submodule or nuget package, but for now this is ok.

It is time to create the “appsettings.json” file in the project root. Please put the content below into that file.
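Based on the settings used throughout this series, the file could look roughly like this (the cors section name is a placeholder; the secret values stay empty and come from the user secrets or the key vault):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  },
  "ApplicationInsights": {
    "InstrumentationKey": ""
  },
  "StorageSettings": {
    "ConnectionString": ""
  },
  "EventHubSettings": {
    "ConnectionString": ""
  },
  "CorsSettings": {
    "Origins": "http://localhost:4200"
  }
}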

First we configure the log level for our logging. Then we see the settings for our azure resources. These are secrets, so we will define them in our user secrets file in a moment. And at the end there are the cors settings, which allow our later angular frontend to access the web api without cors issues.

Let’s go to our user secrets file. For this, right click the project and select “Manage User Secrets”. If the file does not open (sometimes I had this issue with older .net core versions), you can create it via the .net core cli or simply open the file located at “C:\Users\{YOUR_USER}\AppData\Roaming\Microsoft\UserSecrets\notifier-webapi-6fd34aeb-1b78-4492-86dd-a5aa00ce38cd”. Then put the following in there and find your secrets in the azure portal.

We use the secrets from our acceptance environment here. This is where you can find them…

  • Application Insights – Instrumentation Key: select notifier-application-insights-acc resource -> Overview and find the key on the top right.
  • Storage Table – Connection String: select notifierstoreacc resource -> Access keys and then copy the primary connection string.
  • Event Hubs – Connection String: select eventhubs-acc -> Event Hubs -> notifications -> Shared access policies -> send and copy the primary connection string.

Implementation

We start by directly implementing the functionality of the api. Later we wire everything together when we set up “Program.cs” and “Startup.cs”. We do it in this order because otherwise we would have to jump between files and/or deal with errors, because the things we want to wire up do not exist yet…

So let’s start by creating the model for the notification response. Please create a folder “Models” in the root project directory, create a class named “NotificationModel.cs” and put the following into it.
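A sketch of that model (the property names are placeholders) could simply be:

using System;

namespace Notifier.WebApi.Models
{
    // response model: just the message and when it was created
    public class NotificationModel
    {
        public string Message { get; set; }
        public DateTime Timestamp { get; set; }
    }
}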

The model only contains a message and a timestamp. Let’s go further with the services. For this, create a folder named “Services” and create the following files in there:

First the service interface (“INotificationService”) with two simple methods in it, and then the implementation (“NotificationService”):

The constructor takes the logger, the event hub settings (the settings object comes from the common lib) and the repository, which is also located in the common lib. With this we have everything we need. “CreateAndSendAsync” does exactly what its name says: first it creates the entity and saves it into the table, and second it sends the message to the event hub. The functionality is split into two private methods, which makes it cleaner and easier to read. Please check the private methods and the common lib functionality for further information on how the event hubs and the storage table are used here. (This is a very simple implementation, without retries etc.)

Now we are ready to create the controller which defines our endpoints. We start again by creating a folder “Controllers”, create a class named “NotificationController.cs” in there and put the following code into it.
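A sketch of such a controller could look like this (the service method names are assumptions; the full version is in the repository):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Notifier.WebApi.Services;

namespace Notifier.WebApi.Controllers
{
    [ApiController]
    [Route("api/notifications")]
    public class NotificationController : ControllerBase
    {
        private readonly INotificationService _notificationService;

        public NotificationController(INotificationService notificationService)
        {
            _notificationService = notificationService;
        }

        // GET /api/notifications – returns all notifications
        [HttpGet]
        public async Task<IActionResult> GetAllAsync()
        {
            return Ok(await _notificationService.GetAllAsync());
        }

        // POST /api/notifications?message=... – create, save and send the notification
        [HttpPost]
        public async Task<IActionResult> PostAsync([FromQuery] string message)
        {
            await _notificationService.CreateAndSendAsync(message);
            return Ok();
        }
    }
}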

We inject the notification service here and let it do the work. The controller defines the endpoints and formats the responses for our two methods. So we created the following endpoints:

  • GET /api/notifications – returns all notifications
  • POST /api/notifications – create, save and send the notification

This is all we need for the logic! Now we need to wire this together…

Wiring the Parts

We start with “Program.cs”, where the entry point of the application is. Please open that file and replace its content with the following.
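A sketch of the described setup could look roughly like this (the key vault and application insights wiring are only indicated, because they come from the common lib extensions):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace Notifier.WebApi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((context, config) =>
                {
                    config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
                    config.AddEnvironmentVariables();
                    config.AddCommandLine(args);
                    // local development: user secrets
                    config.AddUserSecrets<Program>(optional: true);
                    // deployed environments: the key vault is added here via the
                    // common lib extension, using the credentials from the environment
                })
                .ConfigureLogging(logging =>
                {
                    logging.AddConsole();
                    // application insights logging is added via the common lib extension
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseKestrel();
                    webBuilder.UseStartup<Startup>();
                });
    }
}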

We start creating the host when the main function is called. First we configure the app by calling “ConfigureAppConfiguration”: we add the appsettings.json, the environment variables and the commandline arguments to our configuration. Then we add (or try to add) the user secrets (which is the case in local development). Then we add (or try to add) the key vault (which is the case when we pass the credentials for it via environment variables – but we will discuss this in the next chapter). All secret placeholders are replaced by the user secrets or the key vault. Next we configure our logging: first we register our console logger and second we log to application insights (which results in traces there). And in the last step we configure our web server, where we use “Kestrel” and point to our web server startup class “Startup.cs”, which contains the following.
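A sketch of such a startup class could be (the settings and repository registrations are only indicated, since those types come from the common lib):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Notifier.WebApi.Services;

namespace Notifier.WebApi
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            // bind the settings sections so they can be injected via IOptions<T>
            // services.Configure<EventHubSettings>(Configuration.GetSection("EventHubSettings"));
            // services.Configure<StorageSettings>(Configuration.GetSection("StorageSettings"));

            // register the repository from the common lib
            // services.AddSingleton<INotificationRepository, NotificationRepository>();

            services.AddScoped<INotificationService, NotificationService>();
            services.AddControllers();
            services.AddCors();
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseRouting();
            // allow our angular frontend (in the real setup the origin comes from the cors settings)
            app.UseCors(policy => policy.AllowAnyHeader().AllowAnyMethod().AllowAnyOrigin());
            app.UseEndpoints(endpoints => endpoints.MapControllers());
        }
    }
}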

Here we do some startup work, and the main focus is how we configure our services. We start by configuring our app settings, so we can inject them into our services etc. via the IOptions&lt;T&gt; interface. Then we add the repository and the notification service to our di container. You will also find some other basic configuration here, which I will not describe in detail.

Test

If we have done everything correctly, you can run the application locally and test the endpoints. You can use postman or, like me, the REST Client plugin for visual studio code, which is quite cool because I can code my requests, version them, etc. These are the requests:
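Roughly, the requests look like this (the local port depends on your Kestrel configuration):

### get all notifications
GET http://localhost:8080/api/notifications

### create and send a notification
POST http://localhost:8080/api/notifications?message=hello this is my message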

And if you have created some messages and fetched them, you can check application insights in the azure portal and view, for example, the logs (notifier-application-insights-acc -> Logs -> traces). Or check the application map, which should show the connection between the components. At this point we can see that our web api sends to the event hub and calls the azure table.

Conclusion

We have created a .net core web api which takes care of its secrets and uses diverse azure resources like event hubs, application insights, key vaults and azure tables. We can now create, persist and read notifications.

Preview

In the next part, PART 2.1, we will bring the web api to our acceptance stage, running in docker and the kubernetes cluster. And we want to integrate and automate this with azure pipelines.


PART 1.1: INFRASTRUCTURE PIPELINE – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.
image by author

This is one part of a series. So if you have not read the PART 0: OVERVIEW yet, you can go there and read it first to get an overview of what we are actually doing here…

Introduction

In the last part, PART 1: INFRASTRUCTURE, we created the resources we need for our scalable web application. In this part we want to automate the infrastructure by creating a pipeline for it. The goal is the same as with every other pipeline: we can deliver our infrastructure via continuous integration with an approval process for the different stages.

We need to download terraform inside our build agent. For this we have different possibilities. We could do it “manually” (bash script) or use the terraform build/release tasks. I wanted to use the build tasks, because they take a lot of work away from you, but they cannot deal with workspaces, and that sucks. The first “problem” was selecting a workspace – this is simply not supported, but you can do it with bash, and that works. The real problem is that the task then does not use the selected workspace. It took me hours to identify what was going wrong there, but there is a bug in that task.

So I decided to use the build task only to download terraform to the agents and to do the rest (authentication and terraform actions) via bash commands.

Furthermore I have to say that I had some issues to figure out while building these pipelines. This may differ totally from environment to environment. But the good thing is: while fixing your personal issues you will learn a lot about how terraform and azure work.

Prerequisites

You need to do a few things before we can really start with the pipelines… First you need to work through the first part (PART 1), where we created our terraform configurations. They can also be downloaded/cloned from https://dev.azure.com/sternschleuder/Notifier/_git/Infrastructure?version=GBfeature%2Fpart1.

Terraform Build/Release Task

Go to the terraform build/release tasks and install the extension into your azure devops environment.

Azure

First delete all resources that we created in PART 1 of the series (the best way is to delete the resource group we used there), because we want to create everything fresh with our pipelines and avoid diverging states and other problems.

Then we create a resource group which is used explicitly for the terraform state – or in other words, to persist our terraform state remotely, so we can work with pipelines, locally, and with other developers sharing the same state.

So please create a resource group named “notifier-resource-group”. We also need a storage account which terraform uses to save the states, so create one, name it “notifiertfstore” and choose the resource group we just created as its resource group. Then go into the storage account, choose “Storage Explorer”, right click on “BLOB CONTAINERS”, select “Create blob container” and name it “notifiertfstate”. This is the location where terraform will store the states.

It is also a good idea to create a group in azure and add your user account (the account you use for the azure cli login, “az login“) to it. This is useful when we want to run terraform plan/apply locally, or if other devs want to do this – otherwise there can be auth errors when accessing the key vault resources. So go to “Azure Active Directory”, click “Groups” and create one named “notifier-devs”.

Code Changes

First open the “backend.tf” file and replace that code with the following:

terraform {
  backend "azurerm" {
    tenant_id            = "YOUR_TENANT_ID"
    subscription_id      = "YOUR_SUBSCRIPTION_ID"
    resource_group_name  = "notifier-resource-group"
    storage_account_name = "notifiertfstore"
    container_name       = "notifiertfstate"
    key                  = "terraform-notifier.tfstate"
  }
}

Here we use our newly created remote backend for persisting the terraform state. Please set your tenant and subscription id here (you can get this information via “az account list” – the id field is the subscription id!).

Now we need to change our “common.yaml” to the following. We add our service connection here so it also has access to our key vaults. Do not forget to replace the placeholders in capitals with your information! You will get the “YOUR_SERVICE_CONNECTION_OBJECT_ID” from the azure resource manager service connection created later in the build pipeline section. After the service connection is created, you will find its name and object id in azure ad under app registrations.

tenant_id: YOUR_TENANT_ID

kv_allow:
    YOUR_SERVICE_CONNECTION_NAME: # Service Connection (the principal used in the azuredevops pipeline)
        object_id: YOUR_SERVICE_CONNECTION_OBJECT_ID
        secret_permissions: ["get", "list", "delete", "set"]
    notifier-devs: # Allow group
        object_id: YOUR_NOTIFIER_DEVS_OBJECT_ID
        secret_permissions: ["get", "list", "delete", "set", "recover", "backup", "restore"]

Build Pipeline

First we create the build pipeline and initialize terraform for our workspaces and validate them. Furthermore we create the artifact with the terraform files to later create the resources in the release pipeline.

Let’s go to azure devops now and navigate to your notifier project pipelines. Then press “create pipeline” and choose Azure Repos Git as your source. Select the “Infrastructure” repository. Then choose “Starter pipeline”, click “Save and run” and yes, commit it directly into the master branch. After the job has finished successfully, check out the repository changes from the origin. You should then see an “azure-pipelines.yml” file in your “Infrastructure” folder. From here we start to add the build stuff we need. So open the file in your editor and let’s go…

First thing to do is delete all stuff in there and then copy the following.

trigger:
- master

resources:
  repositories:
  - repository: self

pool:
  vmImage: 'ubuntu-latest'

The trigger sets the branch on which the pipeline is automatically triggered. The repositories resource includes the repositories which are needed in the build – in our case only “self” (the repository which triggered the build). And in the pool section we define our vmImage; we set it to the latest ubuntu.

If we have this done, we can go to the steps. First delete all the code inside the steps section. As the first action we install terraform with the “Terraform Installer” task.

steps:
  - task: TerraformInstaller@0
    displayName: Install Terraform Latest 
    inputs:
      terraformVersion: 'latest'

Then we need to authorize against azure. We do this with an “Azure CLI” task. But before we can do that, we have to create a service connection in azure devops to connect azure to azure devops. For this do the following:

  1. Click on “Project settings”
  2. Click on “Service connections” in the “Project Settings” sidebar
  3. Click on “Create service connection”
  4. Choose “Azure Resource Manager” and click “Next”
  5. Click “Service principal (automatic)” and “Next”
  6. Select Scope Level “Subscription” and choose your azure subscription
  7. Leave the resource group empty, so you have access to all resource groups from your subscription. (maybe there will pop up an auth window where you have to login with your azure credentials)
  8. And enter a name for the Azure Resource Manager connection (In my case “ARM Notifier”)

After we have created the service connection, we can add the task. Leave a blank line after the last step and put the following code into it… And take care with the indentation! (yaml is very sensitive about this)

  - task: AzureCLI@1
    displayName: Authorize Azure
    inputs:
      azureSubscription: 'ARM Notifier'
      scriptLocation: inlineScript
      inlineScript: |
        echo "##vso[task.setvariable variable=AZURE_CLIENT_ID;issecret=false]${servicePrincipalId}"
        echo "##vso[task.setvariable variable=AZURE_CLIENT_SECRET;issecret=true]${servicePrincipalKey}"
        echo "##vso[task.setvariable variable=AZURE_SUBSCRIPTION_ID;issecret=false]$(az account show --query 'id' -o tsv)"
        echo "##vso[task.setvariable variable=AZURE_TENANT_ID;issecret=false]${tenantId}"
      addSpnToEnvironment: true

This task authorizes us with our created “ARM Notifier” service connection. Now we can initialize and validate our configurations. We do this with the following bash task, where we use the “ARM” environment variables. Put it under the azure cli task with a line break in between.

  - bash: |
      terraform init
      for ENV in "acc" "prd"
      do
        terraform workspace select $ENV || terraform workspace new $ENV
        terraform validate
      done
    workingDirectory: '$(System.DefaultWorkingDirectory)'
    displayName: 'Terraform Init/Validate configuration'
    env:
      ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
      ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
      ARM_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
      ARM_TENANT_ID: $(AZURE_TENANT_ID)

We go directly further by adding another task which copies our terraform configs to the artifact staging directory. So put…

  - task: CopyFiles@2
    displayName: Copy Terraform Configs
    inputs:
      SourceFolder: '.'
      Contents: '**'
      TargetFolder: '$(build.ArtifactStagingDirectory)'
      CleanTargetFolder: true
      OverWrite: true

And the last task is to publish the artifact…

  - task: PublishBuildArtifacts@1
    displayName: Publish Terraform Artifacts
    inputs:
      PathtoPublish: '$(build.ArtifactStagingDirectory)'
      ArtifactName: 'tf'
      publishLocation: 'Container'

So – if we have done all this, our build pipeline should work! Please trigger your build pipeline to see if it is working. The best way is to push the code changes; then the pipeline should be triggered automatically.

Release Pipeline

The release pipeline is responsible for applying our terraform changes in azure. First we create a new release pipeline. For this go again to azure devops -> pipelines -> releases and create a pipeline. Select the “empty job” template and press apply. Then name the stage “acc plan” and close the sidebar. The release pipelines (at the moment) cannot be edited via yaml, so we have to use the user interface. After we have closed the sidebar, our pipeline looks like this:

image by author

I have already renamed the pipeline (at the top) to “Infrastructure Release”. We need to choose the artifact now (in the graphic above this is already done). Press the add button in the artifacts section and choose build as the source type. Choose the notifier project and the Infrastructure build pipeline as the source, like in the graphic below, and press add.

image by author

Now we add some tasks. For this please click on the job link in the “acc plan” stage. Then we are in the task view. Click the plus button to add the “Terraform Installer” task. The task is almost the same as before in the build pipeline, but we have to use the ui here. There is nothing more to fill out.

Next we add a further task to authorize against azure. Please select an Azure CLI task and choose our “ARM Notifier” for the “Azure Resource Manager connection”. The script type is “Shell”; choose inline script and put the following into it:

echo "##vso[task.setvariable variable=AZURE_CLIENT_ID;issecret=false]${servicePrincipalId}"
echo "##vso[task.setvariable variable=AZURE_CLIENT_SECRET;issecret=true]${servicePrincipalKey}"
SUBSCRIPTION_ID=`az account show --query 'id' -o tsv`
echo "selected subscription ${SUBSCRIPTION_ID}"
echo "##vso[task.setvariable variable=AZURE_SUBSCRIPTION_ID;issecret=false]${SUBSCRIPTION_ID}"
echo "##vso[task.setvariable variable=AZURE_TENANT_ID;issecret=false]${tenantId}"

Please open the “Advanced” tab and enter “$(working_dir)” as the “Working Directory”. This is a variable which will be set later, when we have created a task group from our tasks…

Now we come to the last task, which initializes terraform, selects the workspace and calls the terraform plan or apply action. Choose a new bash task, choose “Inline” and put in the following:

terraform init
terraform workspace select $(env) || terraform workspace new $(env)

if [ "$(tf_action)" = "apply" ]; then
       terraform $(tf_action) -auto-approve
else
       terraform $(tf_action)
fi

This script uses two variables (“$(env)” and “$(tf_action)”). These vars will also be set later… Note the auto approve flag when applying the changes. And we need to add the environment variables which come from the azure cli task. So please set these in the “Environment Variables” tab. Add the following:

Name | Value
ARM_CLIENT_ID | $(AZURE_CLIENT_ID)
ARM_CLIENT_SECRET | $(AZURE_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID | $(AZURE_SUBSCRIPTION_ID)
ARM_TENANT_ID | $(AZURE_TENANT_ID)

Environment Variables

Now it is time to create a task group from our tasks, which gives us a lot of advantages (creating a new deployment stage is faster, changes are applied globally wherever the task group is used, etc.). So select these three tasks like in the image below, right click and choose “Create task group”. Name it “Terraform Plan/Apply”.

image by author

If this is done we see our task group and have to fill out the variables which we have defined in the tasks. Enter “acc” for environment, “plan” for terraform action and choose the working directory with the configuration files, like in the image below.

image by author

Go back to the pipeline view and click clone on the “acc plan” stage (hover over the stage and an icon will appear!). Click on the new cloned stage and rename it to “acc apply”. This stage is where terraform will create the resources. It should be triggered after the “acc plan” stage was successful.

image by author

And I would create an approval for this. So click the thunder/user symbol to open the pre-deployment conditions. The trigger should be “After Stage”, with pre-deployment approvals enabled and a user selected.

image by author

For the production stages we clone the “acc plan” and “acc apply” stages once more. Let’s do that… Afterwards we rename them to “prd plan” and “prd apply”. Then we have to choose the correct pre-deployment conditions for each stage: click on the thunder/user symbol and change the conditions analogously to the acc stages. (In the “prd apply” stage we may need to deselect the acc stages and select the “prd plan” stage.)

The pipeline view should look like this now:

image by author

But now we need to adjust the variables for each stage. Our “acc plan” is ok, but the other ones are clones, so they need some adjustments. Let’s jump into the task view by clicking the task link in the “acc apply” stage. Select the task group and change the “tf_action” to “apply”. That’s all!

Then choose the “prd plan” stage in the “Tasks” dropdown, select the task group again, set the “env” to “prd” and make sure the action is “plan”. Do the same with the “prd apply” stage and check that the action there is “apply”. Then save the pipeline.

Oh, one more thing would be cool – let’s use a continuous deployment trigger. Click on the thunder icon near the artifact and then do the following:

image by author

Save the pipeline again! Now the build pipeline and the release pipeline should be triggered automatically and release the plan. Then we only need to approve to roll it out!

Enable AKS access to ACR

To give the k8s cluster access to the container registry we need to grant it explicitly. Otherwise, when we want to deploy something from our registry to k8s, we will get an authorization error. To do this, please run the following line with the Azure CLI.

az aks update -n notifier-aks-acc -g notifier-resource-group-acc --attach-acr notifiercontainerregistryacc

You may ask why we do this here and not with terraform. The reason is that you need admin rights in Azure for this, and the pipeline does not have them. It is possible to configure this with terraform and Azure Active Directory role assignments, but I do not want to make things much more complex here, which is why we go this way.
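For completeness: if the identity running terraform were allowed to create role assignments, the same grant could be expressed in terraform roughly like this. This is only a minimal sketch, assuming the azurerm provider and the aks/acr resources defined in PART 1 of this series; it is not used in this setup.

# Grant the AKS kubelet identity pull access on the container registry (sketch only).
resource "azurerm_role_assignment" "aks_acr_pull" {
  scope                            = azurerm_container_registry.acr.id
  role_definition_name             = "AcrPull"
  principal_id                     = azurerm_kubernetes_cluster.aks.kubelet_identity.0.object_id
  skip_service_principal_aad_check = true
}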

Problems I had to face with

In principle this is all very easy to configure and set up, but there are many things that can drive you crazy… For example, I wanted to use the Terraform CLI task for everything, but there was a strange bug with selecting and then using the right workspace. This was not easy to understand, because I thought I had missed something. But the task itself was the problem… Anyway, here are some other problems I encountered.

Terraform – Resource Already Imported

First I got some errors when I wanted to apply my resources. Terraform claimed some resources already existed and had to be imported, although it was the first time I had created them. For this reason I deleted the complete resource group which we used in PART 1 of the tutorial. Then I read that there is a caching issue with the terraform azurerm provider version 2.* which can produce this kind of problem. So if this happens you have to bring your state up to date by importing these resources. You can do it like this (example for a key vault secret):

terraform import azurerm_key_vault_secret.kvs_webapi_appinsights https://kv-webapi-acc.vault.azure.net/secrets/ApplicationInsights--InstrumentationKey/d9bff6b232d0412fb3aa2d9e9a07961

Terraform – State Container Locked

This occurred when I wanted to apply terraform changes first locally and then via the pipeline. I had an error in my local changes and the apply did not finish; then I pushed the fixed code to the repo and the pipeline failed, saying that the state container is in a lease state. I took a look at the state file in the Azure storage container, but the blob info said there was no lease. Hm, I thought… I marked the state blob as leased by right clicking on the state file and choosing the action, and then removed the lease again. After that the pipeline was no longer blocked. Maybe terraform could not remove the lease after the failed apply.
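If you run into the same situation, the lease can also be released from the command line instead of via the portal. A sketch, assuming you fill in the names of your state storage account, container and state blob (the placeholders are not from this setup):

# Break the lease on the state blob.
az storage blob lease break --account-name <state_storage_account> --container-name <state_container> --blob-name <state_blob_name>

# Or let terraform release its own lock, using the lock ID printed in the error message.
terraform force-unlock <LOCK_ID>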

Key Vault – Access Policies

First I used the “current azurerm client config” information (data.azurerm_client_config) for creating resources, access policies, etc. But in that case you have to keep in mind that these values logically differ depending on whether you apply the resources from your local machine or via the pipeline. And if only one policy is created, we run into authorization errors when accessing the key vault secrets, because only the creator of the resource has access. To avoid this, as described in this post, I added access policies for an Azure AD group and for the azurerm service connection.

Conclusion

I have learned here that it is very easy to set up such a pipeline, but not so easy to get it running. A deeper understanding of Azure and terraform is needed for that. But if you work through it, you will learn a lot.

So we have created a build and release pipeline for our infrastructure which is working quite well. And it is easy to extend with more stages/environments!

The updated code for this part can be downloaded from https://dev.azure.com/sternschleuder/Notifier/_git/Infrastructure?path=%2F&version=GBfeature%2Fpart1_1&_a=contents.

Preview

In the next post, PART 2, we will create the web API for our notifier web application. There we will learn how to use our created resources inside a .NET Web API application.

Categories
architecture azure devops Uncategorised

PART 1: INFRASTRUCTURE – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.

This is one part of a series. So if you have not read PART 0: OVERVIEW yet, you can go there first to get an overview of what we are actually doing here …

Introduction

Welcome to the first part! Now we are taking our first steps to build our notifier web application. Here we learn how to create the infrastructure we need in Azure with the help of terraform. In this part we only use a local backend for terraform, which is not ideal, because we run into problems when working in teams or when we want to use pipelines. We will cover this in the next part, where we create build/release pipelines for the infrastructure. We will not go too deep into each configuration of the resources we create – there are many things which are important to know when working with all this, so I strongly recommend going deeper by building your own projects and researching your specific needs.

Please keep in mind that when you run this terraform code you have to pay for the created Azure resources. So it is best to delete the created resource groups after testing!

What resources do we need?

If we go back to the overview part, then we see that we need the following:

Terraform

Use Infrastructure as Code to provision and manage any cloud, infrastructure, or service

https://www.terraform.io/

For me this means that it is possible to code the infrastructure no matter which cloud you use. But it does not mean that you can take your infrastructure code and change the cloud provider without changes; terraform supports most cloud providers, but the resources are obviously different. However, coding the infrastructure has enormous advantages, which for me are:

  • Put the infrastructure (as code) in a repo (versioned infrastructure)
  • Easily create complete new stages with same infrastructure (dev, tst, acc, prd, etc.)

Prerequisites

Terraform

First download the terraform CLI for your OS (I will use Windows here for all samples, but this should not really matter). Then we need to be able to access the CLI from any location in our command line. For this I put the downloaded executable in C:\terraform and add a new entry to the PATH environment variable. If everything is correct, you should get the terraform help output by typing “terraform” in your bash/terminal/command line.
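If you prefer doing this from the command line instead of the Windows settings dialog, one way is an elevated PowerShell. This is only a sketch and assumes the executable really lives in C:\terraform:

# Append C:\terraform to the machine-wide PATH (requires an elevated PowerShell, then reopen your terminal).
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", $machinePath + ";C:\terraform", "Machine")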

Azure CLI

Furthermore we need the Azure CLI to authenticate against our Azure account. I will install it with a PowerShell command (admin mode is required for this!).

Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; rm .\AzureCLI.msi

After installing the Azure CLI, please reopen your command line tool and type “az”. You should then see the commands provided by the Azure CLI. Now it is time to authenticate against your Azure account by typing:

az login

The browser opens automatically and you need to enter the credentials of the Microsoft account which is connected with the Azure portal. Afterwards you will see your Azure subscriptions in your command line.

Start creating

So now we will finally start to create something. First let’s create a git repository in our Azure DevOps project “Notifier” and name it “Infrastructure” (how to). Clone it into a folder – in my case “C:\Repos\Notifier”.

Then we need an editor for writing our terraform code. I will use VS Code with the Azure Terraform plugin, but this does not really matter; we could use any simple editor.

Initial Terraform configuration

Everything starts with the main.tf file. (When calling “terraform plan”, terraform uses all “.tf” files in the folder where it is executed.) Here we can define some base settings, the resource group, etc. So let’s create a main.tf file in our infrastructure root directory.

# Define the required provider by terraform.
provider "azurerm" {
  features {
  }
  version = "=2.33.0"
  skip_provider_registration = "true"
}

provider "helm" {
  version = "= 2.0.2"
  kubernetes {
    host                    = azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_key              = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    client_certificate      = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    cluster_ca_certificate  = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}

# Defines our first resource - the resource group in which we create other resources.
resource "azurerm_resource_group" "rg" {
  name     = "rg_notifier_example" // The resource group name in azure.
  location = "West Europe"
}

The provider we are using is “azurerm”, which handles resources in Azure. (We set “skip_provider_registration” to true so terraform does not try to register all Azure resource providers itself, which would require additional permissions – we will not cover that here.) Second, we define a helm provider for creating the nginx ingress controller later. Then we define the resource group which we want to create.

Then we need one additional file, “backend.tf”. Technically we could paste all the code into the main.tf file, but that would not be well organized. So let’s create it in the same directory.

terraform {
  backend "local" {
  }
}

In this part we work only with a local backend, so nothing more is required here at the moment. This means the terraform state will be created locally. (In the next part we change the local backend to a remote one.)

Now we can initialize terraform and create the resource group in Azure. We trigger the initialization process by calling:

terraform init

Terraform will respond with a successful initialization message. Now we can “plan” our terraform configuration. The plan lists all changes, additions and deletions terraform would make; terraform always uses the current state to determine them. This will not create any resource in Azure! It is only a preview of what will change when calling “apply”. So let’s check out the plan:

terraform plan

You should see the resource group as an “add”; there should be no change or destroy in the plan. OK, then it is time to really create our first resource. This is done with “apply”. Apply also produces a plan which has to be confirmed – if you answer “yes”, the change is really made in Azure. OK then:

terraform apply

When we go to the azure portal, we should see our created resource group there! (Sometimes azure needs some time to finish the creation process, but this should not take long – no more than one minute.) With “terraform show” we can always take a look at our current state.

You should never change terraform-managed resources manually in the portal. If you do, your terraform state no longer matches reality and you can break a lot this way. So everything should be done through terraform.

Now everything is initialized except the workspaces. In terraform we can create different workspaces and thereby manage resources for different environments/stages. In our example we limit ourselves to two stages, which is enough to demonstrate the concept. Our application will have an “acceptance” and a “production” stage. So let’s create terraform workspaces for these two stages.

# Creates our acceptance workspace
terraform workspace new acc

# Creates our production workspace
terraform workspace new prd

# Show all workspaces (the star in the list marks the current workspace)
terraform workspace list
# Select a workspace which will be used inside the terraform code
terraform workspace select acc

Now we can take advantage of the workspaces in our terraform code. But first let’s create a folder named “settings” in our Infrastructure root folder. Inside the settings folder we create three files: “prd.yaml”, “acc.yaml” and “common.yaml”. Then we add a line for the environment-specific resource group name. One for “acc”…

resource_group_name: notifier-resource-group-acc

… and one for “prd”…

resource_group_name: notifier-resource-group-prd

… the common.yaml file we will need later to specify properties shared by all environments.

Then we reference the settings files in the main terraform script and merge the common and workspace-specific settings into one settings object. We use the “terraform.workspace” variable to load the settings file for the currently selected workspace. Then we use our first settings variable for the resource group name, so we get a resource group for every environment. Here is the edited main.tf file:

# Define the required provider by terraform.
provider "azurerm" {
  features {
  }
  version = "=2.33.0"
  skip_provider_registration = "true"
}

provider "helm" {
  version = "= 2.0.2"
  kubernetes {
    host                    = azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_key              = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    client_certificate      = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    cluster_ca_certificate  = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}

# Here we define settings which will be used when creating the resources.
locals {
  default_tfsettings = {

  }

  commonSettingsFile = "./settings/common.yaml"
  commonSettingsFileContent = fileexists(local.commonSettingsFile) ? file(local.commonSettingsFile) : "NoTFCommonSettingsFileFound: true"
  commonSettings = yamldecode(local.commonSettingsFileContent)

  workspaceSettingsFile = "./settings/${terraform.workspace}.yaml"
  workspaceSettingsFileContent = fileexists(local.workspaceSettingsFile) ? file(local.workspaceSettingsFile) : "NoTFWorkspaceSettingsFileFound: true"
  workspaceSettings = yamldecode(local.workspaceSettingsFileContent)

  settings = merge(local.default_tfsettings, local.commonSettings, local.workspaceSettings)
}

# Defines our first resource - the resource group in which we create other resources.
resource "azurerm_resource_group" "rg" {
  name     = local.settings.resource_group_name // The resource group name in azure.
  location = "West Europe"
}

Adding further resources

Now we add the resources we need for the application …

Application Insights

First we create a file named “application-insights.tf” and put the following code into it to create the resource. (Terraform will automatically pick up the new .tf files.)

resource "azurerm_application_insights" "ai" {
  name                = local.settings.application_insights_name // Name of the resource defined in the settings file. 
  location            = azurerm_resource_group.rg.location // Use resource group location.
  resource_group_name = azurerm_resource_group.rg.name // Use our resource group from the current workspace.
  application_type    = "other" // The application type. We use "other" instead of something more specific like "web" or "java".
  retention_in_days   = 90 // The default retention is used here.
  sampling_percentage = 100 // Sample 100% so we do not lose any telemetry data.
}

This is very simple, right? We define the resource and set its name from the settings file, depending on the workspace we have selected. The location and resource group name come directly from the resource group we created in main.tf before. But we still need to add the application insights name to the settings files.

application_insights_name: notifier-application-insights-acc
application_insights_name: notifier-application-insights-prd

After this is done we can call terraform plan to verify our changes and then apply to create the resource. Please make sure that you have selected the “acc” workspace. And do not be surprised when “plan” wants to add the resource group again – that is because we applied the earlier plan in the default workspace and not in acc! In this part we will apply only in acc. In the next part, when we create a pipeline for the infrastructure, this will be done by the release!

terraform plan
terraform apply

Container Registry

Creating a container registry is just as easy as the application insights resource. Create a new file called “container-registry.tf” and put the following code into it. Please read the comments for more information.

resource "azurerm_container_registry" "acr" {
  name                = local.settings.container_registry_name // Name of the resource defined in the settings file.
  location            = azurerm_resource_group.rg.location // Use resource group location.
  resource_group_name = azurerm_resource_group.rg.name // Use our resource group from the current workspace.
  sku                 = "Basic" // We will use the not so expensive one for this demo.
}

Then, like always, add entries for the name in the settings files. For this resource only lowercase alphanumeric characters are allowed, so we cannot use “-” to separate words.

container_registry_name: notifiercontainerregistryacc # Some resources can only use alphanumeric names.
container_registry_name: notifiercontainerregistryprd # Some resources can only use alphanumeric names.

And apply our new stuff …

terraform plan
terraform apply

Kubernetes Service (AKS)

Time for adding our k8s cluster. Create a file in the “Infrastructure” root folder (like with the others) and name it “kubernetes-cluster.tf” and put the following code into it for a basic managed k8s cluster.

resource "azurerm_kubernetes_cluster" "aks" {
  name                = local.settings.aks_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = local.settings.aks_dns_prefix

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_A2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = local.settings.aks_tag_environment
  }
}

output "client_certificate" {
  value = azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate
}

output "kube_config" {
  value = azurerm_kubernetes_cluster.aks.kube_config_raw
}

resource "helm_release" "ingress" {
    name      = local.settings.ingress_name
    repository = "https://charts.bitnami.com/bitnami"
    chart      = "nginx-ingress-controller"
    set {
        name  = "rbac.create"
        value = "true"
    }
}

This is a very basic config for the cluster; please take a look at the terraform docs for more options. The vm_size defined in the default_node_pool should be at least “Standard_A2_v2” – you need 2 CPUs and 4 GB RAM. I took the “Standard_A” series because it is sufficient for testing purposes and therefore a little cheaper.

Finally we define a helm release for the ingress. This is our ingress controller, which creates a public IP and makes it possible to reach the AKS services from outside the cluster. We use some workspace-dependent settings here which we have to add to our settings files (see also the note on the ingress name after the list below).

aks_name: notifier-aks-acc
aks_dns_prefix: notifieraksacc
aks_tag_environment: Acceptance

aks_name: notifier-aks-prd
aks_dns_prefix: notifieraksprd
aks_tag_environment: Production
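Note that the helm_release above also reads local.settings.ingress_name, which is not part of the lists above. Add an entry for it to both workspace files as well; the values below are only made-up examples:

ingress_name: nginx-ingress-acc
ingress_name: nginx-ingress-prd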

And again, apply our new stuff … This can take some time (about 5 minutes). If everything worked correctly you get an output with a client certificate.

terraform plan
terraform apply

Event Hubs

To create our event hub, we first need an event hub namespace in which our notifications event hub will run. Please create a file named “eventhub-namespace.tf” and put the following code into it.

resource "azurerm_eventhub_namespace" "ehns" {
  name                      = local.settings.eventhub_namespace.name
  location                  = azurerm_resource_group.rg.location
  resource_group_name       = azurerm_resource_group.rg.name
  sku                       = "Standard"
  capacity                  = local.settings.eventhub_namespace.capacity
  auto_inflate_enabled      = true
  maximum_throughput_units  = local.settings.eventhub_namespace.maximum_throughput_units
  network_rulesets          = [{
    default_action       = "Deny"
    ip_rule              = []
    virtual_network_rule = []      
  }]

  tags = {
    "creator"     = "markus herkommer"
    "environment" = terraform.workspace
  }
}

So nothing special here … We create a namespace for each workspace and use the workspace settings to apply different configurations. The “sku” must be at least “Standard” because we enable auto-inflate – but terraform will inform you about that if you try “Basic” :).

Before we declare the needed settings, let’s create the actual event hub, because there we will need some more settings as well. Create a file named “eventhub-notification.tf” and put the following code into it.

# Define the eventhub
resource "azurerm_eventhub" "notifications" {
  name                = "notifications"
  namespace_name      = azurerm_eventhub_namespace.ehns.name
  resource_group_name = azurerm_resource_group.rg.name
  partition_count     = local.settings.eventhub.notifications.partition_count
  message_retention   = local.settings.eventhub.notifications.message_retention
}

# Define eventhub consumers
resource "azurerm_eventhub_consumer_group" "notifications_notifier_appinsights" {
  name                = "appinsights"
  namespace_name      = azurerm_eventhub_namespace.ehns.name
  eventhub_name       = azurerm_eventhub.notifications.name
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_eventhub_consumer_group" "notifications_notifier_email" {
  name                = "email"
  namespace_name      = azurerm_eventhub_namespace.ehns.name
  eventhub_name       = azurerm_eventhub.notifications.name
  resource_group_name = azurerm_resource_group.rg.name
}

# Define eventhub authorization rules
resource "azurerm_eventhub_authorization_rule" "notifications_notifier_send" {
  name                = "send"
  namespace_name      = azurerm_eventhub_namespace.ehns.name
  eventhub_name       = azurerm_eventhub.notifications.name
  resource_group_name = azurerm_resource_group.rg.name
  listen              = false
  send                = true
  manage              = false
}

resource "azurerm_eventhub_authorization_rule" "notifications_notifier_listen" {
  name                = "listen"
  namespace_name      = azurerm_eventhub_namespace.ehns.name
  eventhub_name       = azurerm_eventhub.notifications.name
  resource_group_name = azurerm_resource_group.rg.name
  listen              = true
  send                = false
  manage              = false
}

In the first section we create the notifications event hub. In the next section we create our two consumer groups (for the two notifier workers – app insights and email). In the last part we set authorization rules for this event hub: one rule for sending and one for listening. And as with every resource, at the end we need to add the setting variables we used to the workspace files.

eventhub_namespace:
    name: eventhubs-acc
    capacity: 1
    maximum_throughput_units: 10

eventhub:
    notifications:
        partition_count: 2
        message_retention: 7

eventhub_namespace:
    name: eventhubs-prd
    capacity: 1
    maximum_throughput_units: 10

eventhub:
    notifications:
        partition_count: 4
        message_retention: 7

And I am sure you can guess what comes next …

terraform plan
terraform apply

Table Storage

To save the notifications we need a storage, and we want to store the data in Azure Table Storage. For this we need a storage account resource. We could create the table directly in terraform or from our services; this time we will create it in the service later.

Create a file named “storage-account.tf” in the familiar “Infrastructure” directory and put the following code in there:

resource "azurerm_storage_account" "sa" {
  name                     = local.settings.storage_account_name
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_kind             = "StorageV2"
  account_replication_type = "LRS"
}

Here we define the storage account and use version 2 of the storage account, but V1 should also work. Our replication type is LRS, which means our data is only replicated within one region – totally fine for our use case. (As mentioned above, the table itself will be created later from the service; a terraform alternative is sketched after the settings below.) Now we need to define the storage account names. (Only lowercase alphanumeric characters are allowed here.)

storage_account_name: notifierstoreacc
storage_account_name: notifierstoreprd
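For reference, if you would rather create the table directly in terraform instead of from the service later, a minimal sketch could look like this (the table name “notifications” is only an assumed example):

resource "azurerm_storage_table" "notifications" {
  name                 = "notifications"
  storage_account_name = azurerm_storage_account.sa.name
}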

and again …

terraform plan
terraform apply

Key Vaults

To access our resources we need connection strings, passwords, etc. The best place for them are key vaults. Key vaults can be used in pipelines and in our .NET Core services. We create one key vault per service. This does not matter much here, because the secrets are nearly the same, but we want to do it in a microservice manner and keep them separated, so we have access control over these entries per service.

Before we start to create the key vault configurations we need to add some common settings.

tenant_id: YOUR_TENANT_ID

kv_allow:
    notifier-devs:
        object_id: CURRENT_LOGGED_IN_USER_OBJECT_ID
        secret_permissions: ["get", "list", "delete", "set", "recover", "backup", "restore"]

You can find your tenant id with “az account list”. The object id of your current user (the user you are logged in with via az login) can be found in the Azure portal under Azure Active Directory -> Users -> YOUR USER. This is needed for the key vault access policy, as we will see very soon…
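Both values can also be read directly with the Azure CLI, which saves a trip to the portal. A sketch – depending on your CLI version the user object id field is called “objectId” or “id”:

# Tenant id of the account you are currently logged in with.
az account show --query tenantId -o tsv

# Object id of the currently signed-in user.
az ad signed-in-user show --query id -o tsv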

Now we need to add some workspace-specific settings for the key vault names. (Keep in mind that key vault names are limited to 24 characters, so they have to stay short.)

keyvault_webapi_name: kv-webapi-acc
keyvault_worker_appinsights_name: kv-worker-insights-acc
keyvault_worker_email_name: kv-worker-email-acc

keyvault_webapi_name: kv-webapi-prd
keyvault_worker_appinsights_name: kv-worker-insights-prd
keyvault_worker_email_name: kv-worker-email-prd

After adding the settings we create a file named “keyvault-webapi.tf” and put the following code into it:

# Key vault definition
resource "azurerm_key_vault" "kv_webapi" {
  name                        = local.settings.keyvault_webapi_name
  location                    = azurerm_resource_group.rg.location
  resource_group_name         = azurerm_resource_group.rg.name
  enabled_for_disk_encryption = false
  enabled_for_template_deployment = true
  tenant_id                   = local.settings.tenant_id
  soft_delete_enabled         = true
  soft_delete_retention_days  = 7
  purge_protection_enabled    = false
  sku_name = "standard"
}

# Access policy
resource "azurerm_key_vault_access_policy" "ap_webapi_admin" {
  for_each     = local.settings.kv_allow
  key_vault_id = azurerm_key_vault.kv_webapi.id

  tenant_id = local.settings.tenant_id
  object_id = each.value.object_id

  secret_permissions = each.value.secret_permissions
}

# Key vault entries
resource "azurerm_key_vault_secret" "kvs_webapi_appinsights" {
  name         = "ApplicationInsights--InstrumentationKey"
  value        = azurerm_application_insights.ai.instrumentation_key
  key_vault_id = azurerm_key_vault.kv_webapi.id
  depends_on = [azurerm_key_vault_access_policy.ap_webapi_admin]
}

resource "azurerm_key_vault_secret" "kvs_webapi_storage" {
  name         = "StorageSettings--ConnectionString"
  value        = azurerm_storage_account.sa.primary_connection_string
  key_vault_id = azurerm_key_vault.kv_webapi.id
  depends_on = [azurerm_key_vault_access_policy.ap_webapi_admin]
}

resource "azurerm_key_vault_secret" "kvs_webapi_eventhub" {
  name         = "EventHubSettings--ConnectionString"
  value        = azurerm_eventhub_authorization_rule.notifications_notifier_send.primary_connection_string
  key_vault_id = azurerm_key_vault.kv_webapi.id
  depends_on = [azurerm_key_vault_access_policy.ap_webapi_admin]
}

First we define the key vault. The second block is very important – the access policy. Here we define who has access to the key vault; we already defined this in our common settings file, remember? In this definition a for_each loop iterates over these settings and grants access to the defined object ids, which can be groups, users or service connections. In the last section we create our secrets: the application insights instrumentation key, the connection string for the storage account (storage tables) and the connection string for sending messages to our notifications event hub.

Let’s go to the next key vault by creating a file named “keyvault-worker-appinsights.tf” and putting the following code into it:

# Key vault definition
resource "azurerm_key_vault" "kv_worker_appinsights" {
  name                        = local.settings.keyvault_worker_appinsights_name
  location                    = azurerm_resource_group.rg.location
  resource_group_name         = azurerm_resource_group.rg.name
  enabled_for_disk_encryption = false
  tenant_id                   = data.azurerm_client_config.cc.tenant_id
  sku_name = "standard"
}

resource "azurerm_key_vault_access_policy" "ap_worker_appinsights_admin" {
  key_vault_id = azurerm_key_vault.kv_worker_appinsights.id
  tenant_id = data.azurerm_client_config.cc.tenant_id
  object_id = data.azurerm_client_config.cc.object_id
  secret_permissions = [
    "get",
    "list",
    "set",
    "delete",
    "recover",
    "backup",
    "restore"
  ]
}

# Key vault entries
resource "azurerm_key_vault_secret" "kvs_worker_appinsights_appinsights" {
  name         = "ApplicationInsights--InstrumentationKey"
  value        = azurerm_application_insights.ai.instrumentation_key
  key_vault_id = azurerm_key_vault.kv_worker_appinsights.id
  depends_on = [azurerm_key_vault_access_policy.ap_worker_appinsights_admin]
}

resource "azurerm_key_vault_secret" "kvs_worker_appinsights_eventhub" {
  name         = "EventHubSettings--ConnectionString"
  value        = azurerm_eventhub_authorization_rule.notifications_notifier_listen.primary_connection_string
  key_vault_id = azurerm_key_vault.kv_worker_appinsights.id
  depends_on = [azurerm_key_vault_access_policy.ap_worker_appinsights_admin]
}

The definition is analogous to the previous key vault. Here we create an application insights secret and an event hub secret with the listen connection string. Note that this key vault uses the current client config for its access policy instead of the settings-driven policies.
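One thing to watch out for: this key vault (and the next one) references a data.azurerm_client_config.cc data source that is not declared in any of the snippets above. If you do not already have it, declare it once, for example in main.tf – a minimal sketch:

# Exposes tenant_id and object_id of the identity terraform is currently running as.
data "azurerm_client_config" "cc" {
}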

Next the last one which is nearly the same except the name of the key vault. So create a file named “keyvault-worker-email.tf” and put the following code into that file:

# Key vault definition
resource "azurerm_key_vault" "kv_worker_email" {
  name                        = local.settings.keyvault_worker_email_name
  location                    = azurerm_resource_group.rg.location
  resource_group_name         = azurerm_resource_group.rg.name
  enabled_for_disk_encryption = false
  tenant_id                   = data.azurerm_client_config.cc.tenant_id
  sku_name = "standard"
}

resource "azurerm_key_vault_access_policy" "ap_worker_email_admin" {
  key_vault_id = azurerm_key_vault.kv_worker_email.id
  tenant_id = data.azurerm_client_config.cc.tenant_id
  object_id = data.azurerm_client_config.cc.object_id
  secret_permissions = [
    "get",
    "list",
    "set",
    "delete",
    "recover",
    "backup",
    "restore"
  ]
}

# Key vault entries
resource "azurerm_key_vault_secret" "kvs_worker_email_appinsights" {
  name         = "ApplicationInsights--InstrumentationKey"
  value        = azurerm_application_insights.ai.instrumentation_key
  key_vault_id = azurerm_key_vault.kv_worker_email.id
  depends_on = [azurerm_key_vault_access_policy.ap_worker_email_admin]
}

resource "azurerm_key_vault_secret" "kvs_worker_email_eventhub" {
  name         = "EventHubSettings--ConnectionString"
  value        = azurerm_eventhub_authorization_rule.notifications_notifier_listen.primary_connection_string
  key_vault_id = azurerm_key_vault.kv_worker_email.id
  depends_on = [azurerm_key_vault_access_policy.ap_worker_email_admin]
}

and for the last time in this post …

terraform plan
terraform apply

If everything worked correctly, the notifier-resource-group-acc in the Azure portal should look like this:

image by author

Now you can check the event hub and verify that our two consumer groups are there, as well as shared access policies for “send” and “listen”. Also make sure all secrets were written to the key vaults – this should all be fine, otherwise terraform would have reported an error.

Please also note the two additional resource groups which were created by helm for the ingress controller.

Conclusion

So you have “learned” how to start with terraform and create a bunch of resources which we need for our scalable notifier web application. There are many more options for every resource we have defined – please take a look at the terraform documentation.

All that stuff we have written here is downloadable from the public repository: https://dev.azure.com/sternschleuder/Notifier/_git/Infrastructure?version=GBfeature%2Fpart1. Please let me know if you have any suggestions or questions.

Preview

In the next part, PART 1.1, we will create pipelines for our infrastructure. This is very helpful: we then no longer have to run “plan” and “apply” from our command line. It becomes part of the overall publishing process with all its benefits like CI, approvals, stages, etc.

Categories
.net architecture azure devops Uncategorised

PART 0: OVERVIEW – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.
image by author

This post is the first of a series, because it makes no sense to put all that stuff into one post. It outlines the overall demo application and infrastructure and how we will start developing it.

What do we learn here?

We will build a modern web application environment which uses a lot of technologies to get it up and running, and we will learn how to wire all those pieces together and automate them. We start by creating the infrastructure in Azure using terraform and integrating it with Azure DevOps pipelines. Then we create a simple web API in .NET Core which uses Azure Tables to store data and an event hub for posting messages to our system. After that we create multiple workers which consume our messages. Finally, there is a small functional user interface in Angular which uses the web API. We will talk a lot about configuration, keeping secrets secret and other things that can cause problems when connecting all those parts.

What we will not do!

This demo application/environment is far from a complete, production-ready application. There will be no authentication or other security measures, which are extremely important, nor sufficient error handling, unit tests or elaborate design patterns inside each software piece. The focus here is the overall environment with pipelines, message broker, small services, etc. The code logic will be very simple, so we can concentrate on the things we want to learn here.

Which technologies/tools we will use for coding, deploying and hosting?

For programming, the backend will use C#/.NET Core/Web API and the frontend Angular/TypeScript. We use Azure DevOps for the build/release pipelines and source control (git). The complete infrastructure will be created in Azure with Terraform, defining the infrastructure in code. In Azure we will use Event Hubs as our message broker, Azure Tables to store the notifications, Application Insights as one of our notification receivers, Key Vault to keep our secrets secret, Container Registry for our Docker images and a Kubernetes Service (AKS) for hosting and managing our Docker containers.

What kind of functionality are we developing?

I think a very small “notifier” application makes sense here; with it we get to explore all the parts. The functionality is very simple: the app provides an interface for creating, listing and resending notifications to their consumers.

I start explaining the flow at the top of the diagram below. First, the user creates a notification via the user interface (built with Angular). The UI calls the web API to create the notification. The web API stores the notification in the table and sends a notification message to the event hub. Finally, the two consumers (application insights worker and email worker) receive it and do their job. The web API provides an additional “get notifications” endpoint through which the UI can read the notifications, so the user can select one or more of them and resend them.

image by author

Actually we do not need such a “complex construction” to realize this simple functionality, but it brings the known advantages of a microservice architecture and a scalable system, which I will not explain here to keep this as short as possible.

What are the next steps?

In each part I will explain one “brick” needed to get this all working, and in every post I describe what we need and what we achieve there. In a real-world project it would make more sense not to split the work this way, with all the infrastructure tasks in one part and, for example, the web API in another.

Before we start we should prepare a little bit. We need an Azure DevOps account and a project in it named “Notifier” – make sure you choose git for source control! The work item template does not matter, because we will not use it. We also need an Azure account. When this is done we can start with the following steps. So let’s go … (But I will spare you the saying “Let’s get your hands dirty.”)

Categories
go

go – sharing private code in module based apps


In this article I want to describe the possibilities for sharing private common code between multiple Go services. This shared code should not be visible to a public audience, so public repositories do not fit this requirement – they are out.

I have found the following possibilities:

1. Plain Source Files Via Git Submodules

For this we do not create a module from the shared code. We leave the code as plain source files and push them to a git repository. In our “main” Go service apps we can then add the repository as a git submodule in the corresponding folder. The code then becomes part of the app’s module.

I do not like this approach very much, because I prefer to work with modules so I can simply use my preferred project location, and so on. But it can be done! Which brings us to the next solution.

2. Shared Module

I tried this solution first, because it sounds very simple and like a good option to start with. But it has a tricky detail which I did not recognize the first time. Here is the example project structure:

project
├── service_1
|   └── go.mod (project.com/service_1)
├── service_2
|   └── go.mod (project.com/service_2)
└── shared
    ├── go.mod (project.com/shared)
    └── util
        └── my-util.go

I have two service apps which should use the shared module. I created a module for each service app and one for the shared code. So far so good. But this does not work out of the box: when I tried to use the shared module code inside a service, the import could not be resolved. I tried to import “project.com/shared/mypackage” (in the structure above that would be project.com/shared/util). So I need to reference the shared module in the service module where I want to use the shared code. OK, I thought, then I require the shared module in the service’s go.mod with:

module project.com/service_1
go 1.15

require (
    project.com/shared v0.0.0
)

But this obviously cannot work, because there is no repository at that address, yet go wants to download the code from there like from every other required module. This is the error message I got:

cannot find module providing package project.com/shared/mypackage: unrecognized import path "project.com/shared/mypackage": reading https://project.com/shared/mypackage?go-get=1: 404 Not Found

However, there is an option to replace the remote location of a required module with a local path. With that it works totally fine:

module project.com/service_1
go 1.15

require (
    project.com/shared v0.0.0
)

replace project.com/shared => ../shared
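To make the wiring concrete, here is a minimal sketch of a shared package and its use from service_1. The util package matches the structure above, while the Greeting function is only a made-up example:

// shared/util/my-util.go
package util

// Greeting builds a simple greeting string (example code only).
func Greeting(name string) string {
	return "Hello " + name
}

// service_1/main.go
package main

import (
	"fmt"

	"project.com/shared/util"
)

func main() {
	fmt.Println(util.Greeting("notifier"))
}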

I have to say that I like this approach very much for private shared module code, because I can use a module for the shared code and combine it with git submodules to get updates into the services. I can use it in one project with different app modules or in different projects. And the import path does not have to be a real repository, which I think is good, because it is then independent of the hosting location – which is not the case when using private repositories, as in the next approach.

3. Private Repository

The last possibility I want to describe is using a private repository on github or azure devops. I like to use azure devops for my private repos, pipelines and the other cool stuff which azure devops provides. So the example will be based on this.

Creating the shared Module

When creating the shared module you have to keep in mind that the module name has to match the location of your origin repository. It should look something like this:

~/shared
$ go mod init dev.azure.com/{company_or_account_name}/{project}/_git/{repository_name_of_my_shared_code}.git

Using the shared module

After you have initialized the module in this way and pushed it together with your shared code to origin, it can be fetched with “go get”. Then you can go to the service module where you want to use the shared module and run “go get”.

But before that we need to make sure we have access to it! This can be done over SSH or HTTPS; I prefer the HTTPS method. (If you want to use the SSH method or need further explanation -> see “go get in Azure Repos”.) Since I use HTTPS, I need to create a PAT (personal access token) in Azure DevOps and then add the following lines to my git config.

[url "https://{user}:{pat}@dev.azure.com/{company}/{project}/_git/{repository_name_of_my_shared_code}.git"]
    insteadOf = https://dev.azure.com/{company}/{project}/_git/{repository_name_of_my_shared_code}.git

The user can be anything but not empty! The pat is your generated personal access token and the other information should be clear.

~/service_1
$ go get dev.azure.com/{company_or_account_name}/{project}/_git/{repository_name_of_my_shared_code}.git

You might be wondering which branch is used by the “go get” process. It is the default branch. But this can be changed by appending the branch name to the repository path when you call “go get”.

~/service_1
$ go get dev.azure.com/{company_or_account_name}/{project}/_git/{repository_name_of_my_shared_code}.git@{branch_name}

Using with Docker

Using the shared module in Docker will initially produce an error, because the Docker build cannot use the credentials added to your local git config. Before you run the “go get” command in Docker you have to provide them in the image’s global git config.

RUN git config --global url."https://{user}:{pat}@dev.azure.com/{company}/{project}/_git/{repository_name_of_my_shared_code}.git".insteadOf "https://dev.azure.com"

RUN go get ./...

Conclusion

I do not like the first approach, because I am not dealing with modules there, and that has the known disadvantages. I think it can also get more complicated because of the Go root paths etc. So I would not recommend it.

The local shared module method is very nice, because it works without a real (remote) Go module repository and lets you start fast. In combination with git submodules it is very flexible and modular to use. And you can use a local module name which is independent of the location where the code is hosted – something that is not possible with the repository-based approach.

And then the private repositories. This is my preferred solution, because it works a bit like “npm” or “nuget” package management. The shared module has better version control than with submodules (at least for me). The only thing I do not like is that the import paths contain information about where the code is hosted, which in my view is not good – but I have read posts from people who like exactly that, so …