How to deploy Python microservices on Kubernetes: a hands-on tutorial!

This article gives you a basic, practical understanding of how to deploy Python microservices on Kubernetes

Senhaji Rhazi hamza
15 min read · Apr 25, 2022

Introduction :

Deploying and shipping software reliably is a necessary constraint that has evolved through many patterns. With the advent of cloud technologies such as Docker and Kubernetes, stateless and microservice architectures have been encouraged at the expense of monolithic architectures.

Among the reasons that explain this trend are the following. We no longer tend to scale vertically, so it is harder to persist state locally: you don't know which node will process your request, hence apps are designed to be stateless, and other systems are used to persist state (a database, a memory store, blob storage, etc.). Another reason is that when you have people with different tech stacks, it is easier to let them work on their own stack and make their work available as a microservice for other teams. Finally, when a monolithic app becomes bigger and bigger, it is more likely to accumulate rigidity and strong coupling in its design, which can make maintaining or changing it hard.

This is why, nowadays, if you don't want to be left behind in the tech world, it is more than recommended to master microservice patterns, which more often than not provide a lot of flexibility.

What you will learn here :

At the end of this article, you will know how to deploy a Python microservice architecture inside a Kubernetes cluster.

In order to achieve that, we will

  1. Introduce and provide an example of a Python application with a monolithic architecture (as one service), then show how to break it into microservices
  2. Give a small introduction to Kubernetes and the objects of it used in this tutorial (no Kubernetes prerequisite is needed, but some knowledge of Docker is helpful)
  3. Install a mini cluster locally (with Minikube)
  4. Deploy the first architecture (monolithic), then the second (microservices), inside the Kubernetes cluster
  5. Provide all the code inside a GitHub repo

What this article is not :

This article is neither an exhaustive or rigorous description of how Kubernetes works, nor a best-practice reference on how to structure your microservices. For the sake of simplicity, only the Kubernetes parts used in this tutorial's example are explained, and the explanations aim to give a conceptual model for understanding rather than a rigorous description.

Python app monolithic and microservices architectures

Introduction :

In this section, we will see an example of a Python application that can be deployed either as a single monolithic Flask application or as multiple Flask service applications.

These Flask applications, once dockerized, will play the role of the different microservices we deploy into our Kubernetes cluster.

Section’s steps :

In this section we will

  • Describe the functionalities of our Python Flask application
  • Show the monolithic and microservices architecture schemas
  • Show how to use our CLI app to start both the monolithic and microservices architectures, then test them
  • Show how to dockerize the multiple services as a pre-requisite for the next Kubernetes section

Application description :

Our application contains a package called arithmetic. This package contains 2 modules: mul.py, containing the multiplication functionality (basically a function for multiplying the elements of a list), and sum.py, containing the summing functionality (basically a function for summing the elements of a list).

┣ app/
...
┃ ┣ lib/
┃ ┃ ┣ arithmetic/
┃ ┃ ┃ ┣ __init__.py
┃ ┃ ┃ ┣ mul.py
┃ ┃ ┃ ┗ sum.py
...
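To make the package concrete, here is a hedged sketch of what the two modules could contain (the function names are assumptions; the actual repo code may differ):

```python
# app/lib/arithmetic/sum.py -- sketch of the summing module
def sum_list(numbers):
    """Return the sum of the elements of a list."""
    return sum(numbers)


# app/lib/arithmetic/mul.py -- sketch of the multiplying module
def mul_list(numbers):
    """Return the product of the elements of a list."""
    result = 1
    for n in numbers:
        result *= n
    return result
```

For example, sum_list([1, 3, 5]) returns 9 and mul_list([1, 3, 5]) returns 15, which match the curl results shown later in this article.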

Our 2 functionalities can be used either directly from the CLI or from the Python Flask web app service (as shown in the next section); when used as a web application, each functionality is reached through a corresponding URL.

In the web application mode, these functionalities (summing, multiplying) are, in the monolithic architecture, called inside the same service, but in the microservice architecture each functionality is separated into an independent service. This mimics a real-life situation where you start with a monolithic app, then functionalities become too complex to be handled by one team, so you divide them into semantically coherent microservices, allowing teams to work on them independently.

Take a look at the next section to see the architecture schemas

Application architecture schemas

Monolithic schema architecture

This schema represents an architecture where we run only one backend service that delivers our functionalities (summing, multiplying)

Microservices schema architecture

This schema represents an architecture where we run 3 backend services: one master that the user interacts with, and this master backend plays the role of a proxy to the 2 other microservices (mul, sum)

Let’s see below how to use our app from the CLI

How to use the app’s CLI

Displaying list of commands :

At the root of our repo we run the cmd

# as a pre-requisite you should have poetry installed,
# otherwise check poetry installation --> https://python-poetry.org/docs/#installation
# then install dependencies with : poetry install
sh bin/cli.sh

Result :

Usage: run.py [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...

Command line entry, choose from the commands listed below

Options:
--help Show this message and exit.
Commands:
ms_service_master Start the microservice master (proxy for other ms)
ms_service_mul Start the micro service multiplying
ms_service_sum Start the micro service summing
mul_cmd Multiply multiple numbers and print the result
service_monolithic Start the monolithic service (summing, multiplying)
sum_cmd Sum multiple numbers and print the result

Our app contains 6 commands :

mul_cmd, sum_cmd : to use the multiplying and summing functionalities directly

service_monolithic : to start the server with the monolithic architecture

ms_service_master, ms_service_mul, ms_service_sum : to start the different services of the microservice architecture
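The usage string shown above matches what the click library produces. As a hedged sketch (the decorator layout and argument handling are assumptions, not the repo's actual run.py), the two arithmetic commands could be defined like this:

```python
# hypothetical sketch of app/cli/run.py built with click
import click


@click.group()
def cli():
    """Command line entry, choose from the commands listed below"""


@cli.command("sum_cmd")
@click.argument("numbers", nargs=-1, type=float)
def sum_cmd(numbers):
    """Sum multiple numbers and print the result"""
    click.echo(sum(numbers))


@cli.command("mul_cmd")
@click.argument("numbers", nargs=-1, type=float)
def mul_cmd(numbers):
    """Multiply multiple numbers and print the result"""
    result = 1.0
    for n in numbers:
        result *= n
    click.echo(result)


if __name__ == "__main__":
    cli()
```

Running `python run.py sum_cmd 1 3 5` with this sketch would print the sum of the three numbers.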

Starting services (monolithic & microservices)

Introduction :

Here we will show, using the app's CLI, how to start the monolithic service, test its functionalities with curl requests, shut down the monolithic service, then start the 3 microservices and repeat the same process with curl.

PS : the app's tree layout is displayed at the end of this section, to show how the code is structured

Monolithic service (start & test)

start service :

sh bin/cli.sh service_monolithic
# result : server listening on localhost, port 8000
>>...
2022-04-04 19:18:05,400 - (werkzeug) - INFO - * Running on http://192.168.1.135:8000/ (Press CTRL+C to quit)
>>...

testing service :

Do the following in another terminal

# ping the service 
curl http://0.0.0.0:8000/
>>
{
"message": "welcome to the example service",
"status": "success"
}
# testing sum's functionality route
curl -X POST http://0.0.0.0:8000/sum -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
>>
{
"message": "my sum function",
"result": 9,
"status": "success"
}
# testing multiplication's functionality route
curl -X POST http://0.0.0.0:8000/mul -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
>>
{
"message": "my mul function",
"result": 15,
"status": "success"
}

Conclusion :

We have started our service and tested the /sum & /mul routes with curl; everything worked as expected. Let's do the same with multiple services this time.

Microservices (start & test)

We will be using 3 terminals (terminal1, terminal2, terminal3) to start our services, and another one (terminal4) to test them with curl

start sum microservice :

# started in terminal 2
sh bin/cli.sh ms_service_sum
# result : server listening on localhost, port 8002
>>...
2022-04-04 19:50:19,950 - (werkzeug) - INFO - * Running on http://192.168.1.135:8002
>>...

testing sum microservice :

# ping the service (in terminal 4)
curl http://0.0.0.0:8002/
>>
{
"message": "welcome to the example microservice sum",
"status": "success"
}
# testing sum's functionality microservice route
curl -X POST http://0.0.0.0:8002/sum -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
>>
{
"message": "my sum function from microservice",
"result": 9,
"status": "success"
}

start mul microservice :

# started in terminal 1
sh bin/cli.sh ms_service_mul
# result : server listening on localhost, port 8001
>>...
2022-04-04 20:17:15,730 - (werkzeug) - INFO - * Running on http://192.168.1.135:8001/ (Press CTRL+C to quit)
>>...

testing mul microservice :

# ping the service (in terminal 4)
curl http://0.0.0.0:8001/
>>
{
"message": "welcome to the microservice master",
"status": "success"
}
# testing multiplication's functionality microservice route
curl -X POST http://0.0.0.0:8001/mul -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
>>
{
"message": "my mul function from microservice",
"result": 15,
"status": "success"
}

start master microservice :

Notice here that the master microservice plays the role of a proxy to the sum and mul microservices

# started in terminal 3
sh bin/cli.sh ms_service_master
# result : server listening on localhost, port 8000
>>...
2022-04-04 20:39:46,397 - (werkzeug) - INFO - * Running on http://192.168.1.135:8000/ (Press CTRL+C to quit)
>>...
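To make the proxy role concrete, here is a minimal hedged sketch of what the master service could look like (the environment variable names and service addresses are assumptions for illustration, not the repo's actual code):

```python
import os

import requests  # forwards the incoming payload to the target microservice
from flask import Flask, jsonify, request

# assumed addresses; inside Kubernetes these would be injected via the ConfigMap
MUL_URL = os.environ.get("MUL_SERVICE_URL", "http://localhost:8001")
SUM_URL = os.environ.get("SUM_SERVICE_URL", "http://localhost:8002")

app = Flask(__name__)


@app.route("/")
def index():
    return jsonify(message="welcome to the microservice master", status="success")


@app.route("/sum", methods=["POST"])
def proxy_sum():
    # relay the JSON body to the sum microservice and return its answer
    resp = requests.post(f"{SUM_URL}/sum", json=request.get_json())
    return jsonify(resp.json())


@app.route("/mul", methods=["POST"])
def proxy_mul():
    # relay the JSON body to the mul microservice and return its answer
    resp = requests.post(f"{MUL_URL}/mul", json=request.get_json())
    return jsonify(resp.json())


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

This is why, in the tests below, a request sent to the master on port 8000 shows up in the terminal of the mul or sum service.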

testing master microservice :

# ping the service (in terminal 4)
curl http://0.0.0.0:8000/
>>
{
"message": "welcome to the microservice master",
"status": "success"
}
# testing multiplication's functionality; notice that the microservice mul
# listening on port 8001 (in terminal 1) received the request
curl -X POST http://0.0.0.0:8000/mul -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
>>
{
"message": "my mul function from microservice",
"result": 15,
"status": "success"
}
# testing sum's functionality; notice that the microservice sum
# listening on port 8002 (in terminal 2) received the request
curl -X POST http://0.0.0.0:8000/sum -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
>>
{
"message": "my sum function from microservice",
"result": 9,
"status": "success"
}

Conclusion :

We have started 3 microservices (sum, mul, master), tested mul and sum independently, then tested master, which forwards requests to the mul and sum services, gets the results, and returns them.

Dockerizing our different services

To dockerize our multiple services, we only have to dockerize our app once and use the CLI as the entrypoint; then, depending on the parameters passed, we can start the desired service.

Take a look at the Dockerfile (entrypoint : python app/cli/run.py)
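The Dockerfile itself is not reproduced in this article; as a hedged sketch (the base image and dependency steps are assumptions, only the entrypoint comes from the repo), it could look like:

```dockerfile
# Hypothetical sketch of the repo's Dockerfile (the real file may differ)
FROM python:3.9-slim

WORKDIR /app

# install poetry and the project dependencies
RUN pip install --no-cache-dir poetry
COPY pyproject.toml poetry.lock ./
RUN poetry config virtualenvs.create false && poetry install --no-root

# copy the source code
COPY . .

# the CLI is the entrypoint: the command passed to `docker run`
# selects which service to start (e.g. service_monolithic, ms_service_mul)
ENTRYPOINT ["python", "app/cli/run.py"]
```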

Build docker image

# Build docker image
docker image build --rm -t mskube:0.0.0 -f Dockerfile .

Run the docker image

# Run docker image (use the tag we built above)
docker run mskube:0.0.0
>>...
Usage: run.py [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
Command line entry, choose from the commands listed below

Options:
--help Show this message and exit.
Commands:
ms_service_master Start the microservice master (proxy for other ms)
ms_service_mul Start the micro service multiplying
ms_service_sum Start the micro service summing
mul_cmd Multiply multiple numbers and print the result
service_monolithic Start the monolithic service (summing, multiplying)
sum_cmd Sum multiple numbers and print the result

Conclusion :

All the functionalities seen in the previous sections (sum, mul, starting services) can now be run simply from the docker image. This image will serve as the basis for our Kubernetes YAML manifest files to start the different services. In the next section we will introduce Kubernetes, then in the deployment section we will use our docker image with different parameters each time.

Introduction to Kubernetes :

The best metaphor explaining Kubernetes that I have seen is the one given by Kelsey Hightower in the excellent documentary made by Honeypot, where he compared Kubernetes to the post office. When you go to the post office, you ship what you want inside a container; the post office promises to deliver your shipment, and it does, without needing to tell you how: whether there was a traffic jam, or the plane couldn't take off because of the weather, or anything else. They just do it.

Another way to give you insights about Kubernetes would be the following :

Kubernetes is a complex system that takes several machines and adds an abstraction layer to turn these machines into the resources of a coherent entity that we call a cluster.

Its responsibility can be compared to what an operating system traditionally does on one machine: the OS manages your hardware resources and how your software runs on them; Kubernetes manages how software runs reliably across multiple machines.

Kubernetes primitives

To achieve its mission, Kubernetes runs several programs in coordination with each other; we will call these programs components. When a user wants to communicate with Kubernetes to deploy software, Kubernetes offers primitive objects that the user declares in YAML format. This is how Kubernetes takes the user's declaration and realizes the desired state (for example, a running application with 3 replicas).

So let's check some of these primitives that we will be using in our example.

Let’s start by the following need :

“You would like to deploy a service API with 3 replicas using a configuration.”

To answer this need, we have to choose certain primitive objects, configure them, and declare them to Kubernetes, so that it can use these declarations and deliver your desired state through its components.

As an example, for the need expressed just previously, we will need the following primitive objects : Pod, Deployment, Service, ConfigMap

Pod : A pod can be seen as a unit of run environment. It's like a container, but a little more: it contains at least one container, and is supposed to contain only one, unless the containers are tightly coupled (for example, your app container plus a sidecar container for logging)
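As a hedged illustration (the names here are hypothetical, and we will not deploy bare pods directly in this tutorial), a minimal Pod declaration looks like this:

```yaml
# minimal Pod manifest; the pod name is hypothetical,
# the image is the one we built earlier
apiVersion: v1
kind: Pod
metadata:
  name: mskube-pod
spec:
  containers:
    - name: mskube
      image: mskube:0.0.0
```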

Deployment : When you want to deploy your application, you don't use a pod directly, you use a deployment. The deployment deploys one or more pods for you, according to the number of replicas you have declared to Kubernetes. More than that, the deployment handles update strategies for you; the default is the rolling deployment, meaning it replaces pods one by one without downtime. There are many deployment strategies; for more information, check this article

Services : When you deploy your pod or deployment in Kubernetes, it is reachable by an internal IP address, but if the pod/deployment is destroyed and regenerated, it gets a new IP address that you didn't keep track of. So take the service object as a network abstraction that helps you reach your pod/deployment without knowing its IP address. In our example, we will be interested in 2 types: NodePort (exposes the pod/deployment externally) and ClusterIP (exposes the pod/deployment internally)

Configmaps : Each application needs its configuration, and it's a good practice to keep the code separated from the configuration. The configmap object helps you declare configuration, mostly in a key-value form, that can be used later by your app (pod/deployment)
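As a hedged sketch of a key-value ConfigMap (the keys and service addresses are assumptions matching this tutorial's naming, not the repo's actual file):

```yaml
# hypothetical ConfigMap injecting the addresses of the mul and sum
# services into the master's environment
apiVersion: v1
kind: ConfigMap
metadata:
  name: mskube-config
data:
  MUL_SERVICE_URL: "http://svc-mul:8001"
  SUM_SERVICE_URL: "http://svc-sum:8002"
```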

Kubernetes deployment

Minikube cluster setup

Minikube plays the role of a mini Kubernetes cluster you can set up on your local machine; check the minikube install guide

For Mac M1 users (like me), use these steps instead

# Download arm64 version
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.25.2/minikube-darwin-arm64
# Install minikube
sudo install minikube-darwin-arm64 /usr/local/bin/minikube
# Start minikube
minikube start --driver=docker --alsologtostderr

Check that minikube is working with the cmd

kubectl 
>>...
Basic Commands (Beginner):
create Create a resource from a file or from stdin.
expose Take a replication controller, service, deployment or pod and
expose it as a new Kubernetes Service
run Run a particular image on the cluster
>>...

Make minikube aware of local docker images

This is a really important step: when you build your docker images, minikube does not know how to look into your local docker registry and pull the images from there.

In order to fix that, you should run a special command (that exports some environment variables) and then rebuild your docker images, see below :

# command to export environment variables
>> eval $(minikube docker-env)
# rebuild your docker image
>> docker image build --rm -t mskube:0.0.0 -f Dockerfile .
# now minikube will be able to pull mskube

Monolithic deployment

Here we will deploy our monolithic architecture into kubernetes: a deployment with 3 pod replicas, exposed with a NodePort service.

To deploy a manifest file to kubernetes we use the cmd :

kubectl apply -f <manifest.yml>

Deployment yml file :
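The full manifest is in the repo; as a hedged sketch (the labels and the deployment name are assumptions, while the image tag, the service name svc-monolithic, the 3 replicas and port 8000 come from this article), it could look like:

```yaml
# Deployment: 3 replicas of the monolithic service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-monolithic
spec:
  replicas: 3
  selector:
    matchLabels:
      app: monolithic
  template:
    metadata:
      labels:
        app: monolithic
    spec:
      containers:
        - name: monolithic
          image: mskube:0.0.0
          imagePullPolicy: Never   # use the image built inside minikube's docker daemon
          args: ["service_monolithic"]   # passed to the CLI entrypoint (assumption)
          ports:
            - containerPort: 8000
---
# Service: expose the deployment outside the cluster via a node port
apiVersion: v1
kind: Service
metadata:
  name: svc-monolithic
spec:
  type: NodePort
  selector:
    app: monolithic
  ports:
    - port: 8000
      targetPort: 8000
```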

Check that your deployment went well :

# To check your pods are running, run cmd
kubectl get pods
# you should have an output as the following, make sure that your pods have status Running
# To check your service is deployed
kubectl get svc
# you should have an output as the following

PS : Notice that in the deployment file we have 2 kubernetes objects, a Deployment object and a Service object; we could have written the Service object in a separate file if we wanted to

Schema of monolithic architecture deployment inside minikube:

Ping your service :

Using a NodePort service allows you to expose a port for access from outside the cluster. On Linux it's pretty simple: you just request <localhost:nodePort>. On Mac OS, if your machine supports VirtualBox, minikube by default starts a VM (virtual machine), and you should create a tunnel and request the VM host <vmhost:nodePort> to access your service (Apple silicon M1 still doesn't support VirtualBox; in that case minikube simulates a node with the docker driver); check more details here

# These commands were tested on a Mac OS M1 silicon device
# forwarding requests from localhost:8000 to our service svc-monolithic on port 8000
kubectl port-forward service/svc-monolithic 8000:8000
>>
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
>>
#Then request your service on another terminal
curl http://127.0.0.1:8000
>>
{
"message": "welcome to the example service",
"status": "success"
}

Conclusion

Now our app is deployed inside minikube with the monolithic architecture and can be used through curl requests the same way as shown in the section Starting services (monolithic & microservices)

Microservices deployment

In the previous section we deployed one deployment object with 3 pod replicas and one NodePort service; here we will deploy 3 deployment objects with one pod replica each, 3 services (one NodePort and 2 ClusterIPs: master_service, mul_service, sum_service), and a configmap object. See the schema in the following section (the schema omits the configmap object).

NB : before deploying the new kubernetes objects, we delete the old ones with the following commands :

# Delete all service objects in the default namespace
kubectl delete svc --all
# Delete all deployment objects in the default namespace
kubectl delete deploy --all

We deploy a configmap object used to inject environment variables inside our containers

# Deploy config map with the following command
kubectl apply -f <configmap.yml>

Schema of microservices architecture deployment inside minikube:

Deployment and exposition file for master microservice

# Deploy master microservice with the following command
kubectl apply -f <master_deploy.yml>

Deployment and exposition file for mul microservice

# Deploy mul microservice with the following command
kubectl apply -f <mul_deploy.yml>
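As a hedged sketch of what <mul_deploy.yml> could contain (the names and labels are assumptions; only the ClusterIP type, the single replica and port 8001 come from this article), the mul microservice pairs a Deployment with an internal-only service:

```yaml
# hypothetical mul deployment, exposed internally with a ClusterIP service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-mul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mul
  template:
    metadata:
      labels:
        app: mul
    spec:
      containers:
        - name: mul
          image: mskube:0.0.0
          imagePullPolicy: Never
          args: ["ms_service_mul"]   # passed to the CLI entrypoint (assumption)
          ports:
            - containerPort: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: svc-mul
spec:
  type: ClusterIP   # internal-only: the master reaches it by service name
  selector:
    app: mul
  ports:
    - port: 8001
      targetPort: 8001
```

The sum manifest follows the same shape with its own name, label, and port 8002.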

Deployment and exposition file for sum microservice

# Deploy sum microservice with the following command
kubectl apply -f <sum_deploy.yml>
# Once you have deployed all the files
# check the services status
kubectl get svc
# check expected result in image below
# check the deployments status
kubectl get deploy
# check expected result in image below

Ping services :

According to the architecture schema, we should forward the traffic from our localhost to the master service with the command :

# forward traffic to the new master service
kubectl port-forward service/svc-master 8000:8000
>>
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
>>

Then we can test our microservices :

# ping healthcheck
curl http://localhost:8000
# check expected result in image below
# ping mul microservice through master microservice
curl -X POST http://localhost:8000/mul -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
# check expected result in image below
# ping sum microservice through master microservice
curl -X POST http://localhost:8000/sum -H "Content-Type: application/json" -d '{"list": [1, 3, 5]}'
# check expected result in image below

Conclusion and repo source :

In this article we have seen how a monolithic Python app can be architected, how to consume it either as a CLI or as an HTTP service, how to break it into multiple services, how to dockerize each service, how to set up a minikube cluster locally, and how to deploy first the monolithic architecture and then the microservices architecture.

You can find all the source code for this tutorial here
