
PART 0: OVERVIEW – Building a Scalable App Environment with Infrastructure and Deployment

Using .NET, Angular, Kubernetes, Azure/Devops, Terraform, Eventhubs and other Azure resources.

This post is the first of a series, because it makes no sense to cram all of that into a single post. It outlines the overall demo application and infrastructure and how we will start developing it.

What do we learn here?

We will build a modern web application environment that uses a lot of technologies to get up and running, and we will learn how to wire all of those pieces together and automate them. We start by creating the infrastructure in Azure using Terraform and integrating it with Azure DevOps pipelines. Then we create a simple web API in .NET Core that uses Azure Tables to store data and an Event Hub for posting messages into our system. After that we create multiple workers that consume our messages, and finally a small functional user interface in Angular that uses the web API. Along the way we will talk a lot about configuration, keeping secrets secret, and other things that can cause problems when connecting all of these parts.

What we will not do!

This demo application/environment will be far from a complete, production-ready application. There will be no authentication or other security measures, which are extremely important, nor sufficient error handling, unit tests, or elaborate design patterns inside each software piece. The focus here is the overall environment with pipelines, a message broker, small services, and so on. The code logic will be very simple, so we can concentrate on the things we actually want to learn.

Which technologies/tools will we use for coding, deploying and hosting?

For programming we will use C#/.NET Core/Web API for the backend and Angular/TypeScript for the frontend. We use Azure DevOps for the build/release pipelines and for source control (Git). The complete infrastructure will be created in Azure, with Terraform defining the infrastructure as code. In Azure we will use Event Hubs as our message broker, Azure Tables to store the notifications, Application Insights as one of our notification receivers, Key Vault to keep our secrets secret, Container Registry for our Docker images, and Azure Kubernetes Service (AKS) for hosting and managing our Docker containers.

What kind of functionality are we developing?

I think a very small “notifier” application makes sense here, because with it we get to explore all the parts. The functionality is very simple: the app provides an interface for creating, listing, and resending notifications to their consumers.

I will start explaining the flow at the top of the diagram below. First, the user creates a notification via the user interface (built here with Angular). The UI calls the web API to create the notification. The web API stores the notification in the table and sends a notification message to the Event Hub. Finally, the two consumers (the Application Insights worker and the email worker) receive the message and do their job. The web API also provides a “get notifications” endpoint through which the UI can read the notifications, so the user can then select one or more of them and resend them.

[Diagram: notification flow (image by author)]

Strictly speaking, we do not need this complex setup to realize such simple functionality, but it gives us the well-known advantages of a microservice architecture and a scalable system, which I will not explain here in order to keep this as short as possible.

What are the next steps?

In each part I will explain one “brick” needed to get all of this working, and in every post I will describe what we need and what we achieve there. In a real-world project it would make more sense not to split the work this way, with all the infrastructure tasks in one part and, for example, the web API in another.

Before we start we should prepare a little. We need an Azure DevOps account and a project named “Notifier”. Make sure that you choose Git for source control! The work item template does not matter to us, because we will not use it. We also need an Azure account. Once that is done, we can start following the steps. So then let's go … (but I will spare you the saying “Let's get your hands dirty”).


Go – sharing private code in module-based apps


In this article I want to describe the options for sharing private common code between multiple Go services. This private shared code should not be visible to a public audience, so public repositories do not fit this requirement – they are out.

I have found the following possibilities:

1. Plain Source Files Via Git Submodules

For this approach we do not create a module from the shared code. We leave the code as plain source files and push them to a Git repository. In our “main” Go service apps we can then clone the submodule into the corresponding folder, and the code becomes part of the app's own module.

I do not like this approach very much, because I prefer to work with modules so that I can simply use my preferred project location, and so on. But it can be done! Which brings us to the next solution.
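For reference, the submodule workflow can be sketched end-to-end with purely local repositories. Everything below (the /tmp/subdemo path, repo and package names, the Hello helper) is made up for illustration; in practice the submodule URL would point at your private remote:

```shell
set -e
rm -rf /tmp/subdemo && mkdir -p /tmp/subdemo && cd /tmp/subdemo

# A bare repo standing in for the private "shared" remote.
git init --bare shared.git
git clone shared.git shared-work
cd shared-work
mkdir util
cat > util/my-util.go <<'EOF'
// Package util holds plain shared source files (no go.mod of its own).
package util

func Hello() string { return "hello" }
EOF
git add .
git -c user.name=demo -c user.email=demo@example.com commit -m "add shared util"
git push origin HEAD
cd ..

# The service pulls the shared sources in as a submodule; the files then
# compile as part of the service's own module tree.
git init service_1
cd service_1
git -c protocol.file.allow=always submodule add ../shared.git shared
ls shared/util
```

(The `protocol.file.allow=always` override is only needed for the local file-based demo; recent Git versions block `file://` submodule clones by default.)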

2. Shared Module

I tried this solution first, because it sounds very simple and like a good option to start with. But it has one tricky detail that I did not recognize the first time around. Here is the example project structure:

project
├── service_1
|   └── go.mod (project.com/service_1)
├── service_2
|   └── go.mod (project.com/service_2)
└── shared
    ├── go.mod (project.com/shared)
    └── util
        └── my-util.go

I have two service apps that should use the shared module, so I created a module for each service app and one for the shared code. So far so good. But this does not work out of the box: when I tried to import “project.com/shared/mypackage” inside a service, the import could not be resolved. I needed to link the shared module into the service module where I wanted to use the shared code. OK, I thought, then I will require the shared module in the service's go.mod:

module project.com/service_1
go 1.15

require (
    project.com/shared v0.0.0
)

But this obviously cannot work, because there is no repository at that address, yet Go tries to download the code from it just like from every other required repository. This is the error message I got:

cannot find module providing package project.com/shared/mypackage: unrecognized import path "project.com/shared/mypackage": reading https://project.com/shared/mypackage?go-get=1: 404 Not Found

However, when requiring modules there is an option to replace the online location with a local one, and then it works perfectly fine:

module project.com/service_1
go 1.15

require (
    project.com/shared v0.0.0
)

replace project.com/shared => ../shared

I have to say that I like this approach very much for private shared module code, because I can use modules for the shared code and combine it with Git submodules to get updates across the services. I can use it in one project with different app modules or across different projects. And the import path does not have to be a real repository, which I think is good, because it is then independent of the location where the code is hosted – which is not the case when using private repositories, as in the next approach.

3. Private Repository

The last possibility I want to describe is using a private repository on GitHub or Azure DevOps. I like to use Azure DevOps for my private repos, pipelines, and the other cool stuff it provides, so the example will be based on it.

Creating the shared Module

When creating the shared module, you have to keep in mind that the module name has to match the location of your origin repository. It should look something like this:

~/shared
$ go mod init dev.azure.com/{company_or_account_name}/{project}/_git/{repository_name_of_my_shared_code}.git

Using the shared module

After you have initialized the module this way and pushed it with your shared code to origin, it is available via “go get”. You can then go to the service module where you want to use the shared module and run “go get”.

But first we need to make sure we have access to it! This can be done over SSH or HTTPS; I prefer the HTTPS method. (If you want to use the SSH method or need further explanation, search for “go get in Azure Repos”.) Since I decided on HTTPS, I need to create a PAT (personal access token) in Azure DevOps and then add the following lines to my .gitconfig:

[url "https://{user}:{pat}@dev.azure.com/{company}/{project}/_git/{repository_name_of_my_shared_code}.git"]
    insteadOf = https://dev.azure.com/{company}/{project}/_git/{repository_name_of_my_shared_code}.git

The user can be anything but must not be empty! The PAT is your generated personal access token, and the other placeholders should be self-explanatory.

~/service_1
$ go get dev.azure.com/{company_or_account_name}/{project}/_git/{repository_name_of_my_shared_code}.git

You might be wondering which branch the “go get” process uses: it is the default branch. But you can change this by appending the branch name to the repository path when calling “go get”:

~/service_1
$ go get dev.azure.com/{company_or_account_name}/{project}/_git/{repository_name_of_my_shared_code}.git@{branch_name}

Using with Docker

Using the shared module in Docker will at first produce an error, because the Docker build cannot see the credentials you added to your local .gitconfig. Before you run the “go get” command in Docker, you have to provide them in the image's global Git config:

RUN git config --global url."https://{user}:{pat}@dev.azure.com".insteadOf "https://dev.azure.com"

RUN go get ./...
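Putting it together, a Dockerfile for such a service might look roughly like the sketch below. Note that a PAT passed as a build argument still ends up in the image history, so for production builds BuildKit secret mounts (`--mount=type=secret`) are the safer option. All names and the AZURE_PAT argument are illustrative assumptions, not from the original post:

```dockerfile
# Illustrative multi-stage build; names and layout are assumptions.
FROM golang:1.15 AS build

# Passed at build time: docker build --build-arg AZURE_PAT=... .
ARG AZURE_PAT

# Rewrite dev.azure.com URLs to carry the credentials, and keep the go tool
# away from the public module proxy for the private host.
RUN git config --global \
      url."https://user:${AZURE_PAT}@dev.azure.com".insteadOf \
      "https://dev.azure.com"
ENV GOPRIVATE=dev.azure.com

WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /service .

# Slim runtime image that does not contain the build-stage credentials.
FROM gcr.io/distroless/base
COPY --from=build /service /service
ENTRYPOINT ["/service"]
```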

Conclusion

I do not like the first approach, because with it I am not dealing with modules, which has well-known disadvantages, and I think it could get more complicated because of the Go root paths and so on. So I would not recommend it.

The local shared module method, without a real Go repository behind it, is very nice: it works without any repository at all and is a good way to start fast. In combination with Git submodules it is very flexible and modular to use, and you can choose a local module name that is independent of where the code is hosted – which is not possible with real Go repositories.

And then the private repositories. This was my preferred solution, because it works somewhat like “npm” or “NuGet” package management, and the shared module gets better version control than with submodules (at least for me). The only thing I do not like is that the import paths contain information about the location of the code, which in my view is not good – but I have read posts from people who like exactly that, so …