Containerizing ASP.NET Core API Gateways
In the previous article, we learned how to utilize rate limiting to establish a basic defense mechanism against DDoS attacks. In this article, we will explore how to containerize ASP.NET Core web APIs and the value that this adds when working with API gateways.
Articles in the Series
This article belongs to a series that explains the importance of API gateways and how to build them using ASP.NET Core. If you're interested in learning more about API gateways, it might be a good idea to spend some time reading the articles listed below.
- Part 1: API Gateway in a Nutshell.
- Part 2: Building Simple API Gateways with Ocelot.
- Part 3: API Response Aggregation using Ocelot.
- Part 4: API Defense using Rate Limiting and Ocelot.
- Part 5: Containerizing API Gateways.
- Part 6: Containerizing API Gateways using Alpine Base Image.
Abstract
Building huge software platforms using a microservice architecture sounds cool, but it actually poses a lot of challenges. It often requires software teams to perform the following:
- Define Business Requirements
- Identify Service Boundaries
- Design Technical Services
- Select Technology Stack
- Develop Individual Services
- Build Infrastructure
- Deploy Application
However, as a software developer, I personally find infrastructure building and application deployment the most annoying of all the phases, because they're where you're most often confronted with environment differences and configuration problems.
When working with API gateways, you'll experience many of these environment and configuration challenges first-hand. To give more concrete examples, I've listed below the common problems that I've personally encountered while working with microservices.
- Custom Fonts - Working on document processing pipelines often requires the use of custom fonts. Document manipulation tutorials often ask developers to access fonts via magic strings, which becomes troublesome when you ship your application to staging and production environments without those fonts installed.
- Environment Variables - To avoid building environment-specific deployment packages, you might want to store non-sensitive configuration values (temporary file paths, retry counts, etc.) in environment variables. This becomes troublesome once you use too many of them, because you start forgetting to include some of them in your deployments.
- Sidecar Services - Sidecar services provide an awesome mechanism for adding new features by installing them side by side with your applications (more often legacy ones). They are often used to avoid or reduce the effort required to alter existing code bases (Open/Closed Principle). They become troublesome once you start forgetting to install them on staging and production environments. What makes things worse is when separate teams deploy your application and are totally clueless about what needs to be included because you forgot to document it.
Infrastructure as Code
Infrastructure as code is an approach to automate infrastructure provisioning based on practices from the software development world. It emphasizes repeatable and consistent routines for spinning up and updating existing software systems and their configurations. Changes are often made to definition files and then rolled out to systems through automated processes that include thorough validations. These definition files can then be stored on version control systems which gives you the capability to roll back your infrastructure.
In the context of building API gateways and microservices, Docker is one of the tools that we can use to implement infrastructure as code. Docker provides software developers a nice and terse way to express environment conditions required by their applications to run in the form of Dockerfiles.
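As a taste of what this looks like, here is a minimal, hypothetical Dockerfile expressing the runtime environment of a published .NET app (the image tag, folder, and DLL name are placeholders):

```dockerfile
# Start from a base image that already contains the ASP.NET Core runtime
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
# Copy the published application output into the image
COPY ./publish /app
# Declare how the application is started
ENTRYPOINT ["dotnet", "/app/aspnetapp.dll"]
```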
Containerization Basics
Application containerization is the process of packaging software together with the environment it requires in order to run (including the OS, application dependencies, sidecar services, environment variables, etc.).
An application can be containerized with the help of the docker build command, which takes a Dockerfile containing all the terminal commands required to package your software. The result of the build process is called a Docker image, which is used to instantiate an application container (an instance of an application and its environment) in a couple of seconds.
The capability to instantiate an application from a container image is extremely helpful, as it enables you to launch, destroy, and re-create multiple independent instances of your application (APIs in our case) depending on demand.
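As a minimal sketch of this build/run cycle (the image name myapi and the ports are illustrative):

```bash
# Package the application and its environment into an image,
# using the Dockerfile in the current directory
docker build -t myapi .

# Instantiate a container from that image in seconds,
# mapping host port 8080 to container port 80
docker run -d -p 8080:80 myapi
```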
Containerization (aka dockerizing) alone is a huge topic that we can't tackle in a single article. Instead, we'll focus on the minimal yet important concepts associated with containerizing ASP.NET Core apps and API gateways.
Key Terms
Below is a list of terms you'll want to understand before containerizing an API gateway.
- Application - An application, in the context of containerization, is a piece of software that you want to ship to your production environment to produce business value.
- Application Environment - An application environment can be defined as the ecosystem that your application needs in order to survive and perform its responsibilities.
- Docker Build - Docker build is an operation that aims to produce an immutable snapshot of your application and its environment.
- Docker Image - Docker images are the output of a docker build operation. Docker images are immutable because they are physically frozen; if stored in a reliable registry like Azure Container Registry, they provide immense value to your business by giving you the capability to spin up, tear down, and re-create your application and its environment in a couple of seconds.
- Docker Container - A container in the context of Docker is an instance of a Docker Image (your application and its environment snapshot).
- Docker Run - An operation that enables you to instantiate Docker containers out of Docker images. If you are a software developer with an OOP background, you can compare a Docker image to a class and a Docker container to an object.
- Dockerfile - A text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
Key Benefits
Containerizing API gateways brings the following benefits:
- Platform Independence - Since your application is shipped within its own sandbox, you have the capability to deploy and migrate to different cloud providers (Azure, AWS, GCP, etc.) or on-prem infrastructure based on your application's use cases and organizational constraints.
- Ease of Scalability - Thanks to the immutability and speed of instantiation of application containers, you can easily scale the number of your web API instances up and down to handle the load that your application requires.
- Dependency Diversity - Since your application runs in its own sandbox, your web APIs can be developed with different dependency sets, which is cool since newer applications can be written on newer technology stacks without being tied to constraints related to your existing services.
- Efficient Use of Computing Resources - Application containers help you maximize your computing resources by giving you the capability to set and adjust the memory and CPU that each container instance can utilize (see the sketch after this list).
- Instant Downtime Recovery - Since application images are immutable, you can easily recover from hardware or OS-related failures by simply spinning up fresh container instances.
- Identical Development and Production Ecosystems - Thanks to the infrastructure as code approach, you can say goodbye to the days when you had to say "It works on my machine, boss" whenever a production-specific issue occurred.
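Regarding resource limits, docker run exposes flags for capping CPU and memory per container. A minimal sketch (the image name and limit values are illustrative):

```bash
# Cap this container at half a CPU core and 256 MB of memory
docker run -d --cpus="0.5" --memory="256m" myapi
```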
Containerizing API Gateways
The image above shows the architecture of the API gateway that we'll containerize. The application contains three sub-domains (Authentication, Ledger, and Catalog) aggregated by an API gateway. If you're interested in checking out the whole source code, the demo application is available on GitHub. Please feel free to clone it and work on it as you wish.
DISCLAIMER: This article focuses purely on performing a basic containerization of each of the downstream services and the API gateway itself. We'll cover more advanced containerization techniques (use of Docker Compose, Alpine images, orchestration via Kubernetes, etc.) in upcoming articles.
Step 0: Clone Previous Article's Repository
Download a copy of the previous article's repository from GitHub. You can use it as a starting point for replicating the steps in this article.
Step 1: Creating a DNS record for your computer
Add an entry to your local machine's hosts file that maps the name demo.api.gateway to your machine. This is the URL that the API gateway container will use to route incoming HTTP requests to the other containers on your computer. Later in this series, we will explore the service discovery options that we can utilize to avoid this dirty approach.
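On Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts; on Linux/macOS it's /etc/hosts. The entry looks like the one below, where 192.168.1.100 is a placeholder for your machine's actual network IP (avoid 127.0.0.1, since inside a container the loopback address refers to the container itself):

```
192.168.1.100    demo.api.gateway
```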
Step 2: Utilizing Kestrel and enabling external access to endpoints over the network
In order for each service (enclosed in its own sandbox) to communicate over the network, we have to configure Kestrel to accept external traffic originating from the network, as shown in the code below:
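A minimal sketch of what this looks like in Program.cs, assuming the ASP.NET Core 2.x hosting model used throughout this series (shown here for the Ledger service on port 52790; the other services would use their own ports):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            // Bind to 0.0.0.0 so the service accepts traffic from outside
            // its own sandbox, not just from localhost.
            .UseUrls("http://0.0.0.0:52790")
            .UseStartup<Startup>();
}
```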
Hard coding the URL is considered bad practice and is done here purely for the sake of brevity.
You'll also need to adjust your firewall settings to accept incoming HTTP traffic over the network on the ports used by the downstream services and the API gateway. You can do this using the command below:
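On Windows, one way to do this is with netsh (the rule name is arbitrary, and the port range matches the ports used later in this article):

```powershell
netsh advfirewall firewall add rule name="API Gateway Demo" dir=in action=allow protocol=TCP localport=52790-52793
```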
Step 3: Updating the Ocelot Configuration File
Since we're going to run our applications inside containers, we have to update the ocelot.json file, replacing the configured downstream hosts with the DNS name that we configured in Step 1. We also have to update the BaseUrl in the global configuration object, pointing it to the same DNS name.
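A trimmed-down sketch of what the updated ocelot.json could look like (the path templates are illustrative; the host, ports, and BaseUrl follow the values used in this article):

```json
{
  "ReRoutes": [
    {
      "DownstreamPathTemplate": "/api/transactions/{userId}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "demo.api.gateway", "Port": 52790 }
      ],
      "UpstreamPathTemplate": "/api/user-transactions/{userId}",
      "UpstreamHttpMethod": [ "Get" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "http://demo.api.gateway:52793"
  }
}
```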
Step 4: Writing Dockerfiles
In order to containerize our API gateway and downstream services, copy the Dockerfile below, replace the template value (aspnetapp.dll) with the appropriate value for each of the downstream services, and place it under the root folder of each service project. If you've previously worked with Java or NodeJS, you'll notice that this Dockerfile is considerably bigger (for good reasons) than the ones used for containerizing NodeJS and Java apps. This is because Microsoft provides separate Docker images for development, compilation, and publishing, which allows the final image to ship with only the lightweight runtime and thus stay smaller.
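A sketch of the multi-stage Dockerfile, assuming the .NET Core 2.2 images (adjust the tags to your target framework, and replace aspnetapp.dll per service):

```dockerfile
# Build stage: the full SDK image restores, builds, and publishes the app
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /app
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: only the published output lands in the much smaller runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
```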
Step 5: Building Docker Images
Copy the script below into your solution directory and execute it. The script will build a Docker image for each of the downstream services plus one for your API gateway. These images will then be stored in your local image repository. You can verify that all of your images were built using the command below:
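A sketch of such a script, assuming one project folder per service (the folder and image names below are illustrative):

```bash
#!/bin/bash
# Build one image per downstream service plus one for the API gateway
docker build -t ledger.api ./Ledger.Api
docker build -t catalog.api ./Catalog.Api
docker build -t authentication.api ./Authentication.Api
docker build -t api.gateway ./ApiGateway
```

To verify the builds, list the contents of your local image repository:

```bash
docker images
```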
Step 6: Running the API Gateway
To run the API gateway and its downstream services, we need to instantiate our application container instances. You can use the script below to bootstrap all of them at the same time.
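A sketch of the bootstrap script, assuming the image names from Step 5 and mapping each host port to the port the service listens on. Note that the gateway container itself also needs to resolve demo.api.gateway; one way to achieve that is Docker's --add-host flag (192.168.1.100 is again a placeholder for your machine's IP):

```bash
#!/bin/bash
# Start the downstream services
docker run -d -p 52790:52790 --name ledger.api ledger.api
docker run -d -p 52791:52791 --name catalog.api catalog.api
docker run -d -p 52792:52792 --name authentication.api authentication.api

# Start the gateway, teaching it where demo.api.gateway lives
docker run -d -p 52793:52793 --add-host demo.api.gateway:192.168.1.100 \
    --name api.gateway api.gateway
```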
Step 7: Testing Container Endpoints
To verify that all endpoints are working on your machine, you can open the endpoints listed below:
- Ledger @ http://demo.api.gateway:52790
- Catalog @ http://demo.api.gateway:52791
- Authentication @ http://demo.api.gateway:52792
- API Gateway @ http://demo.api.gateway:52793/api/user-transactions/539bf338-e5de-4fc4-ac65-4a91324d8111
Step 8: Cleaning up the Artifacts
If you want to clean up the artifacts produced by the containerization test we've performed, I've set up some bash scripts to remove the containers and images that we built for this POC.
Cleaning your containers
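A sketch, assuming the container names used in Step 6:

```bash
#!/bin/bash
# Stop and remove every container created for this POC
docker stop ledger.api catalog.api authentication.api api.gateway
docker rm ledger.api catalog.api authentication.api api.gateway
```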
Cleaning your images
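And its counterpart for the images built in Step 5:

```bash
#!/bin/bash
# Remove every image built for this POC
docker rmi ledger.api catalog.api authentication.api api.gateway
```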
Areas of Improvement
Our API gateway is up and running. However, there are still some areas of our containerization approach that can be improved:
- Hard-coded port configurations.
- We're manually configuring DNS through the hosts file. Manually maintaining the hosts file is a bit painful, especially when you're working with more projects and dealing with multiple networks (home, office, multiple office networks).
- Our API gateway can be considered a fat client because it contains knowledge about where our downstream services are located. Our API gateway implementation can improve if we use Kubernetes or other service discovery tools.
- Our Docker images exceed 248 MB in size, which is quite horrible. We'll address this by utilizing the Alpine base image (4.15 MB in size) to produce smaller container images.
Conclusion
In this article, we've learned about the importance of the infrastructure as code approach and application containerization. We've also learned how to perform a basic containerization of an API gateway and its downstream services. In the next article, we'll explore how to use the Alpine base image to produce smaller container image footprints.
Related Articles
- Part 1: API Gateway in a Nutshell.
- Part 2: Building Simple API Gateways with Ocelot.
- Part 3: API Response Aggregation using Ocelot.
- Part 4: API Defense using Rate Limiting and Ocelot.
- Part 5: Containerizing API Gateways.
- Part 6: Containerizing API Gateways using Alpine Base Image.