Full infra stack: provision your virtual machines with Pulumi, deploy a Kubernetes cluster on them with pyinfra, then deploy your Python application

Senhaji Rhazi Hamza
20 min read · Feb 28, 2023

Introduction :

Your mother (or father :D) is probably not a cook, not because she or he doesn't know how to cook, but because cooking for one person or one family has little to do with cooking at scale in a restaurant or a hostel. The same goes for developing software: someone who is able to write an algorithm or a script is not yet a software engineer. Writing software at scale is hard, so hard that it requires different people with different skills to meet the set of constraints that software at scale imposes.

These statements have been getting less and less true over the last five to ten years. People and companies have developed tools and abstractions that let us express deployment states, infrastructure constraints and security policies in a simple, declarative manner, and (nearly) out of thin air we can pop a whole infrastructure that serves our apps reliably and handles security, flexibility, resilience, and so on.

In this article, we try to show the reader how to use pyinfra to deploy a Kubernetes cluster on bare metal, but the more general goal is to show a full pattern of:

  • Provisioning infrastructure
  • Managing security
  • Setting up a Kubernetes cluster
  • Deploying a Python application

The power of a value proposition like that, if met, is that it makes you independent, free from vendor lock-in solutions, to serve your customers reliably: by yourself, as a software developer, you are able to handle infrastructure, configuration management, application deployment and application development.

Steps

We will first start with a target architecture schema, so the reader can have in mind what we want to achieve.

We will show the project layout structure for informative purposes; the reader is encouraged not to pay too much attention to it, since we will zoom in on its parts throughout the article.

We will introduce and use Pulumi to provision Linux machines on Google Cloud Platform.

We will introduce and use pyinfra to install the Kubernetes cluster on the machines provisioned with Pulumi, and expose the cluster's apps to the outside world through a load balancer.

Finally, we will build and deploy a Python application inside the cluster and consume it from the outside world.

A GitHub repo of all these steps is provided so you can reproduce them.

Target Architecture & Layout :

Let's start by visualizing a target architecture schema, then discuss the steps we're following to set it up.

Schema

Here you’re seeing :

  • 4 Linux machines
  • 3 of the Linux machines compose a Kubernetes cluster (1 control plane, 2 workers)
  • 1 of the Linux machines serves as a load balancer in front of the Kubernetes workers (this is what makes apps available to the outside world)
  • A user who administers the cluster via a CLI (kubectl) that is configured to target the control plane only
  • The same user consumes the apps deployed in the cluster through HTTP calls toward the load balancer

Note that many components are not represented in the schema to ease comprehension; it's still a good approximation of what is happening.

Let's have a look at the repo's project layout structure that will allow us to meet this target architecture.

Project layout

kube-pyinfra-pulumi-all-in-one/
┣ **app/**
┃ ┣ cli/
┃ ┣ lib/
┃ ┣ __init__.py
┃ ┗ server.py
┣ **deploy/**
┃ ┣ assets/
┃ ┣ facts/
┃ ┣ tasks/
┃ ┣ __init__.py
┃ ┣ helper.py
┃ ┣ inventory.py
┃ ┣ join_workers.py
┃ ┣ configure_loadbalancer.py
┃ ┣ deploy_ingress_controller.py
┃ ┗ main.py
┣ **infra/**
┃ ┣ Pulumi.dev.yaml
┃ ┣ Pulumi.yaml
┃ ┣ __main__.py
┃ ┗ helper.py
┣ **k8s**/
┃ ┣ hello-world/
┃ ┗ sanic-app/
┣ **secrets**/
┃ ┣ iac/
┃ ┣ k8s/
┃ ┗ Article.md
┣ .env
┣ .gitattributes
┣ .gitignore
┣ Dockerfile
┣ README.md
┣ justfile
┣ poetry.lock
┣ pyproject.toml
┗ test.http

As a brief commentary on the project layout structure: the app folder hosts our Python application; the infra folder provisions our VMs (virtual machines) on Google Cloud Platform; the deploy folder contains our configuration management Python scripts for the VMs; the k8s folder hosts our Kubernetes manifests; the secrets folder, combined with git-crypt, stores our secrets in an encrypted manner; and we use Just as a more modern alternative to Makefile for running commands.

Now that we have shed some light on the target architecture and seen the project layout structure, let's zoom in on its parts.

Node provisioning

First, we need to provision the 4 Linux machines we talked about in the schema; we will do that with Pulumi on GCP.

If you already have 4 Linux machines that satisfy these requirements, you can skip this step.

What is Pulumi ? :

Pulumi is an "infrastructure as code" tool to provision infrastructure on the cloud. This kind of tool has emerged as a way to standardize and snapshot the infrastructure you provision on a cloud provider, in order to evolve it, reuse it, etc.

Pulumi, like Terraform, handles a state file. The state file records, or snapshots, the current state of your infrastructure; when you modify the description of your infrastructure, Pulumi uses this file to compute the different actions it needs to perform in order to match your state description, then updates the state file after performing the actions.

Pulumi takes a slightly different approach than Terraform. Terraform should more accurately be called an "infrastructure as data" tool rather than "infrastructure as code", since it is more declarative with its HCL language (something like a YAML++), whereas Pulumi is more dynamic, offering you the ability to express your infrastructure literally with a programming language, which means with code (Terraform has started to do that too, but code is not its first-class citizen). That said, both have pros and cons, but comparing them is not the purpose of this article.

Setup Pulumi :

Pre-requisites GCP :

  • A GCP project : since we will provision the machines on GCP, you should have a GCP project. If you don't, don't worry: Google offers you $300 of free-trial credit simply with your Gmail account (and if you don't use a Gmail account, create a new one); check the articles here and here on how to use the free credit to create your first project
  • The gcloud CLI : this is the CLI you use to interact with Google Cloud Platform; it allows you to authenticate yourself, point it at one of your projects, and provision or delete resources. You can download it here, then initialize it by running gcloud init, following the prompt and authenticating yourself
  • Authorize APIs : Pulumi uses some Python libraries to interact with GCP; having an authenticated and authorized gcloud CLI is not enough for these libraries to interact with your project, so you should in addition run the cmd "gcloud auth application-default login"
gcloud auth application-default login  #(https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login)
# do not confuse it with the cmd "gcloud auth login" #(https://cloud.google.com/sdk/gcloud/reference/auth/login)

Install Pulumi CLI :

A Pulumi infra project in Python is a folder composed of two files: Pulumi.yaml, which contains the Pulumi config, and __main__.py, which contains the infrastructure description.

The Pulumi CLI relies on these two files to know how to target your GCP project, which language to use, and which infrastructure components to provision.

You can install the Pulumi CLI from here

For example for Mac users the command would be :

brew install pulumi/tap/pulumi

How do we use Pulumi in our project ?

Download the github repo :

Clone the GitHub repo and let's focus on just the part concerning Pulumi (also delete the files in the secrets folder, you will generate new ones; delete just the files, keep the folder structure)

kube-pyinfra-pulumi-all-in-one/
┣ infra/
┃ ┣ Pulumi.dev.yaml
┃ ┣ Pulumi.yaml
┃ ┣ __main__.py
┃ ┗ helper.py
┣ secrets/
┃ ┣ iac/
┃ ┣ k8s/
┣ .env
┣ justfile
┣ pyproject.toml

Set file state to local

The state file that Pulumi uses can be stored locally, on remote storage, or fully managed for you if you use Pulumi's paid cloud service.

We will store the state file locally in the folder secrets/iac, as it can be encrypted with git-crypt and then pushed safely to the GitHub repo. You don't want to share your state file publicly, because it might contain sensitive data.

In order to configure where pulumi should store the state file, run the cmd :

  pulumi login file:///$(pwd)/secrets/iac # this tells pulumi where to store your state file locally

Install python dependencies :

As suggested earlier, Pulumi uses Python libraries to interact with GCP, and these libraries must be installed. We use Poetry as our dependency management tool for Python; Poetry does a lot of things for us:

curl -sSL https://install.python-poetry.org | python3 - # install poetry
poetry config virtualenvs.in-project true --local # configure poetry to generate .venv at root of your project
poetry install --only infra # install dependencies concerning Pulumi

This will create a virtual environment .venv at your root folder and install 2 Python libraries: pulumi and pulumi-gcp.

Configure Pulumi to interact with your gcp project :

To make Pulumi point to your GCP project, you can either use the cmd pulumi config or environment variables as described here. I prefer environment variables: just fill a .env file like the one below with your own values and run the cmd: export $(cat .env | envsubst | xargs)

# create a .env with theses values and put your own values
GOOGLE_PROJECT=<YOUR_GCP_PROJECT: example -> gifted-cooler-370220>
GOOGLE_REGION=<YOUR_GCP_REGION: example -> europe-west9>
GOOGLE_ZONE=<YOUR_GCP_ZONE: example -> europe-west9-a>
KUBECONFIG=${PWD}/secrets/k8s/kubeconfig
DOCKER_IMAGE=<DOCKER_IMG_NAME: example -> my-app>
DOCKER_TAG=0.0.0
DOCKER_USERNAMESPACE=<DOCKER_USER_NAME: example -> senhajirhazi>
# once you have created and filled the .env with your own value, run the cmd
export $(cat .env | envsubst | xargs)

Now when you run the Pulumi CLI, it will know how to target your GCP project.

Provision the nodes :

You can provision your 4 Linux machines by executing the cmd :

pulumi --cwd infra up --stack dev
# This will instruct pulumi to look into the file infra/__main__.py and provision the infrastructure described there
# Note that the implementation details are hidden in infra/helper.py

We have 4 functions we use to provision resources here: prov_instance, prov_network, prov_firewall and prov_address; their implementation details are hidden in infra/helper.py.

The VMs have their own internal network; when we provision them, we associate with each instance a network interface on that shared network and an external address, in order to reach them from the outside world.

Then, with the pulumi.export function, their IPs get printed after the provisioning.
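
For the curious, here is a minimal sketch of what such a __main__.py can look like with the pulumi-gcp library. It is not the repo's exact code (the real logic lives in infra/helper.py, and the firewall rules created by prov_firewall are omitted), and the machine type and image are assumptions:

# a minimal sketch, not the repo's exact infra/__main__.py
import pulumi
import pulumi_gcp as gcp

# one shared network for all the machines
network = gcp.compute.Network("default-network", auto_create_subnetworks=True)

for name in ["controle-plane", "worker-1", "worker-2", "load-balancer"]:
    # a static external address so the vm is reachable from the internet
    address = gcp.compute.Address(f"{name}-address")

    gcp.compute.Instance(
        f"{name}-instance",
        machine_type="e2-medium",  # assumption, pick the size you need
        boot_disk=gcp.compute.InstanceBootDiskArgs(
            initialize_params=gcp.compute.InstanceBootDiskInitializeParamsArgs(
                image="ubuntu-os-cloud/ubuntu-2004-lts",  # assumption
            ),
        ),
        network_interfaces=[
            gcp.compute.InstanceNetworkInterfaceArgs(
                network=network.id,
                access_configs=[
                    gcp.compute.InstanceNetworkInterfaceAccessConfigArgs(
                        nat_ip=address.address,
                    )
                ],
            )
        ],
    )

    # printed as an output after `pulumi up`
    pulumi.export(f"{name}-instance_ip", address.address)

pulumi.export("network", network.name)

The project, region and zone come from the GOOGLE_* environment variables we exported from the .env file, so they don't need to appear in the code.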

Record these IPs, because they will be used in the next section to provision the Kubernetes cluster; the output should look like:

# after running pulumi --cwd infra up --stack dev and provisioning the stack
# your ip values will be different
Outputs:
controle-plane-instance_ip: "34.155.173.159"
load-balancer-instance_ip : "34.163.165.22"
network : "default-network"
worker-1-instance_ip : "34.155.230.196"
worker-2-instance_ip : "34.155.150.176"

Install Kubernetes cluster

Introduction on Kubernetes components :

The architecture schema seen earlier doesn't display all the components required for Kubernetes to work; let's see some additional Kubernetes components that need to be installed:

On the control-plane :

  • The etcd distributed persistent storage
  • The API server 
  • The Scheduler 
  • The Controller Manager

On the worker level :

  • The Kubelet 
  • The Kubernetes Service Proxy (kube-proxy) 
  • The Container Runtime (Docker, rkt, or others)

If you want to install these components manually yourself, the hard way, you can check Mumshad's repo or Kelsey Hightower's repo.

If you want more details on how these components work together, I advise you to read chapter 11 of the book Kubernetes in Action.

In our tutorial we won't install these components ourselves; we get some help from the kubeadm binary.

Nowadays, clusters are installed with kubeadm: we install the kubelet, a container runtime, and the kubeadm binary, and kubeadm uses the kubelet and the container runtime to run the other components as containers instead of installing them directly onto the nodes; check the Kubernetes docs on kubeadm.

So to summarize, here is what we will install on our nodes with the help of pyinfra:

Workers and control-plane :

  • Kubelet
  • Container runtime (docker here)
  • Kubeadm

Control-plane only :

  • Kubectl (the CLI that communicates with the API server)

Introduction Pyinfra :

Pyinfra is a configuration management tool; it serves the same purpose as Ansible for automation and deployment. Pyinfra looks a lot like Ansible, but instead of YAML describing the automation steps, it uses Python code, which I find more flexible and powerful than Ansible because you benefit from all the flexibility of a programming language in your description files, plus Python's ecosystem, although it is not as well known as Ansible.

Pyinfra can be used as a library, but it's more intended to be used as a CLI; a basic usage looks like:

pyinfra inventory.py deploy.py
# or for a simpler, more direct usage, for example if you want to target
# one server and have one package installed, you can be more explicit with the CLI
pyinfra ssh@<server_ip_address> apt.packages iftop update=true _sudo=true
# you're telling the cli: connect over ssh and install the package iftop with
# sudo on the server <server_ip_address>

The inventory.py contains the list of targets (hosts) and the deploy.py contains the list of operations (an operation could be something like: install the nginx package and ensure it's in a running state).
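
To make that concrete, here is a minimal, self-contained inventory/deploy pair; the IPs and the nginx example are placeholders, not part of our project:

# inventory.py -- a group of hosts (placeholder IPs)
web_servers = ["203.0.113.10", "203.0.113.11"]


# deploy.py -- two operations applied to every host in the inventory
from pyinfra.operations import apt, systemd

apt.packages(
    name="Ensure nginx is installed",
    packages=["nginx"],
    update=True,
    _sudo=True,
)

systemd.service(
    name="Ensure nginx is running",
    service="nginx",
    running=True,
    enabled=True,
    _sudo=True,
)

Running pyinfra inventory.py deploy.py then connects to both hosts over SSH and brings them to the described state.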

Installing kubernetes with pyinfra :

Steps :

In this part we will focus on :

  • Installing pyinfra with poetry
  • Describing the layout & adding the nodes' ips into inventory.py
  • Installing docker, kubelet, kubeadm on both workers & control-plane, and installing kubectl on the control-plane with pyinfra
  • Initializing the control-plane
  • Joining the workers to the cluster
  • Testing that we are able to reach the workers from the control-plane
  • Reaching the cluster from the local machine by configuring kubectl

Install pyinfra dependencies :

poetry install --only config-management  
# poetry will take care to install pyinfra for you

Describe infra layout and add node ips into inventory.py :

deploy/
┣ facts/
┃ ┣ __init__.py
┃ ┣ k8s_facts.py
┣ tasks/
┃ ┣ __init__.py
┃ ┗ install_docker.py
┣ __init__.py
┣ inventory.py
┣ join_workers.py
┗ main.py

This is what we need in the deploy folder to install our cluster; let's detail the folders and files:

  • facts : Contains helpers to get info about hosts (for example, k8s_facts contains functions that tell you whether the cluster is installed or not); see the sketch after this list
  • tasks : Contains tasks that can be included in main.py (for example, here we have an install-docker task)
  • main.py : Installs kubelet, kubeadm, kubectl and initializes the control-plane
  • join_workers.py : Joins the workers to the cluster
  • inventory.py : Contains the IP addresses of the nodes
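
As an illustration of what such a fact can look like, here is a sketch of a custom pyinfra fact in the spirit of facts/k8s_facts.py; the class name and command are assumptions, the real repo may differ:

# deploy/facts/k8s_facts.py -- illustrative sketch, not the repo's exact code
from pyinfra.api import FactBase


class KubeadmAlreadyInitialized(FactBase):
    """True if kubeadm has already initialized this node."""

    # command executed on the remote host
    command = "test -f /etc/kubernetes/admin.conf && echo yes || echo no"

    def process(self, output):
        # output is the list of lines printed by the command
        return output[0].strip() == "yes"

A deploy file can then call host.get_fact(KubeadmAlreadyInitialized) to decide whether the initialization step should run at all.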

You can then add your node ip addresses to inventory.py, see below

# inventory.py 
workers = ["<ip_address_worker1>", "<ip_address_worker2>"]
controlplanes = ["<ip_address_control-plane>"]
# to be added for the next section
loadbalancers=["<ip_address_load-balancer>"]

Install cluster & initialize control-plane

Once you have filled in inventory.py, pyinfra recognizes the variables "workers", "controlplanes" and "loadbalancers" as groups. You can then, in your deploy file (here main.py), target each group with specific operations; for example, we install kubectl only on the controlplanes group, see the snippet below:

if "controlplanes" in host.groups:
apt.packages(
update=True,
force=True,
name="Ensure packages installed (kubectl)",
packages=["kubectl=1.22.4-00"],
_sudo=True, # useudo when installing the packages
)
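
In the same deploy file, initializing the control-plane essentially comes down to running kubeadm init. Here is a rough sketch of how that can be expressed with pyinfra's server.shell operation; the exact flags are assumptions, and the real main.py wraps this with facts so it only runs once:

# rough sketch of the control-plane initialization step in main.py
from pyinfra import host
from pyinfra.operations import server

if "controlplanes" in host.groups:
    server.shell(
        name="Initialize the control-plane with kubeadm",
        commands=["kubeadm init --pod-network-cidr=10.244.0.0/16"],  # flags are an assumption
        _sudo=True,
    )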

So to install & initialize the cluster just run :

poetry run pyinfra deploy/inventory.py deploy/main.py
# this will install kubelet, docker, kubeadm,
# kubectl (kubectl on control-plane only)

Join the workers to the cluster

poetry run pyinfra deploy/inventory.py deploy/join_workers.py
# this will run the kubeadm join command on the worker nodes
# so that they join the cluster

Test your cluster

SSH to your control-plane and test kubectl:

# ssh to the control-plane and run kubectl
ssh <username>@<control-plane-ip>
>>
# run kubectl get nodes
kubectl get nodes
# you should see your three nodes listed (the control-plane and the two workers)

Reaching cluster from local machine

This part is optional, but what if you would like to reach your cluster from outside, from your local machine for example? You will notice that in the deploy main.py there is an operation that copies the kubeconfig from the control-plane to your local machine at secrets/k8s/kubeconfig (an excerpt of its content appears further below).

Normally, all you would have to do is export the environment variable KUBECONFIG with the path of the kubeconfig file; then when you run kubectl, it takes the information it needs to interact with the cluster from there.

export KUBECONFIG=${PWD}/secrets/k8s/kubeconfig

But since the VMs on GCP don't talk to each other over the public internet, the IPs they use to reach each other are internal and different from the ones we use to reach them from the internet (remember the network interface). So before doing this export we have to:

  • Change the IP in the kubeconfig to the IP of the control-plane
  • Regenerate the certificate with kubeadm & restart the kube-apiserver pod so it accepts traffic coming from this IP

Change the IP in the kubeconfig to the IP of the control-plane

To display the IP of the control-plane again, run the cmd

pulumi stack -s dev output
>>>
Current stack outputs (5):
    OUTPUT                      VALUE
    controle-plane-instance_ip  34.155.173.159  # <- Select the ip of the control-plane
    load-balancer-instance_ip   34.163.165.22
    network                     default-network
    worker-1-instance_ip        34.155.230.196
    worker-2-instance_ip        34.155.150.176

Once you have selected the IP of the control-plane, replace it in the kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://<ip_control_plane>:6443
  name: kubernetes

Regenerate the certificate with kubeadm

It's not enough to replace the IP; you also have to regenerate the certificate with kubeadm. We will follow these steps:

# 1 -> ssh to your control-plane
ssh <user>@<ip_control-plane>
# 2 -> dump the kubeadm config to modify it
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
# 3 -> edit the output file kubeadm.yaml by adding the key certSANs:
#      apiServer:
#        certSANs:
#        - "<ip_of_your_control_plane>"
#        extraArgs:
#        .......
# 4 -> move the current certificates out of the way
sudo mv /etc/kubernetes/pki/apiserver.{crt,key} ~
# 5 -> regenerate the certificate with the edited file kubeadm.yaml
sudo kubeadm init phase certs apiserver --config kubeadm.yaml
# 6 -> restart the kube-apiserver pod
kubectl delete pod kube-apiserver-control-plane -n kube-system

Exit the SSH session from the control-plane; you should now be able to reach your cluster from your local machine. Test with the cmd:

kubectl get nodes 

This section wasn't the easiest to understand. Kubernetes is not easy in general, it has a steep learning curve, but once you understand it, everything becomes easier. You can check some extra refs on the steps followed here and here.

Expose kubernetes to the outside world

Introduction

If you have reached this point in the tutorial, congratulations: you were able to install a Kubernetes cluster from scratch on bare-metal infrastructure. Now you just have to add an extra step to be able to consume the apps you deploy on it from the internet.

If you go back to the architecture schema, you see that the user makes a call to the load balancer, the load balancer forwards this call to the nodes, and the nodes forward the call to the nginx-controller service, which forwards it to the relevant app in your cluster.

We will make things a little less blurry by explaining the roles of nginx, Ingress, and the Ingress controller.

Nginx, Ingress & Ingress controller :

Nginx : Nginx is a battle-tested web server; it's what you might put in front of your back-end app in production. But it's also a proxy server, meaning it can read a configuration file to redirect your HTTP requests: you can put rules in the configuration file to redirect or rewrite your requests and forward them somewhere else. Check an example config for nginx proxying here.

Ingress : An Ingress is a Kubernetes object that standardizes configuration for proxy servers.

Ingress controller : To simplify, we can say it's a proxy server that knows how to read its configuration from an Ingress Kubernetes object. For example, the nginx-controller is a special packaging of nginx that, instead of reading its conf from a classic config file, reads it from an Ingress Kubernetes object; here is an example of an Ingress yaml.

So if you go back to the architecture schema, the flow of a request from the user to the cluster goes like this:

  • The user makes a request to the loadbalancer
  • The loadbalancer forwards it to one of the worker nodes
  • From there it gets forwarded to the nginx-controller service
  • The nginx-controller checks the URL received against the rules it knows from the Kubernetes Ingress objects
  • The nginx-controller finds the matching rule and forwards the request to the corresponding internal app inside your cluster

These explanations are not 100% accurate, but they are good enough to give a sufficient understanding of what is happening.

So now that we have this understanding, we will :

  • Install the nginx controller in Kubernetes
  • Configure the loadbalancer to forward requests to the nginx-controller (through the nodes)
  • Install a simple hello-world app inside Kubernetes

Install the nginx controller

There are 2 implementation flavours of nginx as an ingress controller for Kubernetes; you can check the details and differences here. We will be choosing this implementation.

A complete yaml installation file is provided in the documentation, in the bare-metal considerations section.

We will take this yaml file and, instead of using it as is, make a small modification to control the nodePorts assigned during the nginx-controller service installation (if we don't, we get random ports, and it will be hard to maintain a coherent configuration with the load balancer).

📦deploy
┣ 📂assets
┃ ┣ 📂k8s
┃ ┃ ┗ 📜nginx-controller-bare-metal.yaml.j2 # <-- File to modify
. . .
. . .

If you paid attention, the file is a Jinja template, so it's going to be rendered before being installed; to do that we just run the deploy_ingress_controller.py deploy:

# run the deploy deploy_ingress_controller.py against master node
poetry run pyinfra deploy/inventory.py deploy/deploy_ingress_controller.py --limit controlplanes
# This will render, copy and install the manifest nginx-controller-bare-metal.yaml.j2
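
Under the hood, a deploy like deploy_ingress_controller.py essentially renders the template and applies it with kubectl. Here is a rough sketch of how that can look; the paths and variable names are assumptions, not the repo's exact code:

# rough sketch of deploy/deploy_ingress_controller.py, details are assumptions
from pyinfra import host
from pyinfra.operations import files, server

if "controlplanes" in host.groups:
    # render the manifest with fixed nodePorts so the load balancer config stays stable
    files.template(
        name="Render the nginx-controller bare-metal manifest",
        src="deploy/assets/k8s/nginx-controller-bare-metal.yaml.j2",
        dest="/tmp/nginx-controller-bare-metal.yaml",
        http_node_port=30080,
        https_node_port=30443,
    )

    server.shell(
        name="Apply the manifest on the cluster",
        commands=["kubectl apply -f /tmp/nginx-controller-bare-metal.yaml"],
    )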

Check that the ingress controller is installed

# run the cmd : 
kubectl get deploy -n ingress-nginx
>> # check screenshot

You're seeing here the nginx ingress controller, which is responsible for reading the Kubernetes Ingress objects and applying the routing rules they contain.

The ingress-nginx-controller is exposed with a NodePort service: each of our workers has ports open (30080 for http and 30443 for https) that listen for incoming requests before they get forwarded to the ingress-nginx-controller proxy.

You can also check how the service exposes the ingress-nginx-controller by running the cmd

kubectl get svc -n ingress-nginx
>> # check the screenshot

Configure the Load balancer

Introduction :

Now that we have installed the nginx ingress-controller inside the cluster, we have to configure the loadbalancer so it can forward the requests it receives toward the nginx ingress-controller.

Configuring the loadbalancer consists of installing an nginx server on the VM and configuring it as a proxy server to load balance traffic across the worker nodes, on ports 30080 for http and 30443 for https.

Layout :

deploy/
┣ assets/
┃ ┗ nginx/
┃ ┗ default.conf.j2 # <-- Template nginx conf
.. ..
.. ..
┣ configure_loadbalancer.py # <-- Deploy file that installs nginx on the loadbalancer, renders default.conf.j2 with the workers' ip addresses and configures nginx with it
.. ..

This is the part of the deploy layout needed to install and configure the load balancer.

What is happening in this part is that configure_loadbalancer.py installs nginx on the remote VM (destined to be the loadbalancer, since it belongs to the loadbalancers group in the inventory), then copies & renders default.conf.j2 with the workers' IPs to load balance traffic across the workers; check the template for the nginx conf below:


http {
    upstream controller {
        {% for worker_ip in workers_ips -%} # <-- looping on the worker's ips
        server {{worker_ip}}:{{ http_node_port | default(30080) }};
        {% endfor %}
    }
    server {
        listen 80;
        location ~ /(.*) {
            proxy_set_header Host $host;
            proxy_pass http://controller/$1;
        }
    }
}
# Note we are not handling https traffic in this tutorial, we are just handling http traffic
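
For completeness, here is a rough sketch of what configure_loadbalancer.py can look like with pyinfra; the destination path and variable names are assumptions, not the repo's exact code:

# rough sketch of deploy/configure_loadbalancer.py, details are assumptions
from pyinfra import host, inventory
from pyinfra.operations import apt, files, systemd

if "loadbalancers" in host.groups:
    apt.packages(
        name="Install nginx on the load balancer",
        packages=["nginx"],
        update=True,
        _sudo=True,
    )

    files.template(
        name="Render the nginx conf with the workers' ips",
        src="deploy/assets/nginx/default.conf.j2",
        dest="/etc/nginx/nginx.conf",  # destination path is an assumption
        workers_ips=[h.name for h in inventory.get_group("workers")],
        http_node_port=30080,
        _sudo=True,
    )

    systemd.service(
        name="Restart nginx to load the new conf",
        service="nginx",
        restarted=True,
        _sudo=True,
    )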

Configure & test the Load balancer :

# 1 - Just run the following command to configure the loadbalancer  
poetry run pyinfra deploy/inventory.py deploy/configure_loadbalancer.py --limit loadbalancers
# 2 - Deploy an nginx hello-world app (we will explain a similar layout with the python app)
kubectl apply -f k8s/hello-world
# 3 - Reach your nginx hello-world app
curl -H "Host:trustme.com" http://34.163.165.22:80/hello-world
>> 'Hello world'

Note that this architecture is not the only one possible, nor even the best; on bare metal it's usually recommended to use MetalLB, but it doesn't work on GCP (see refs).

Deploy your python application :

Introduction

Congratulations if you made it this far; this is the last step of our tutorial: we will deploy a small hello-world Python application into our cluster.

Our application is a small web server using the Sanic framework, which is quite similar to the better-known Flask framework, but more modern and richer in features.

We present the app, dockerize it & make it available on Docker Hub, then deploy it using our manifests.

Python app :

# App layout
# app/
# ┣ __init__.py
# ┗ server.py


# server.py content
from sanic import Sanic
from sanic.response import text


app = Sanic("MyHelloWorldApp")

@app.get("/")
async def hello_world(request):
    return text("Hello world from Sanic")

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)

The app is pretty simple, as you can see: just a hello_world function associated with the route "/".

Run it & test it locally :

# 1 start the app
poetry run python app/server.py
# 2 test the app
curl http://0.0.0.0:8000/
>>
'Hello world from Sanic'

Dockerize the application :

To build the docker image yourself :

# At the root of the project
export DOCKER_IMAGE="myapp"
export DOCKER_TAG="0.0.0"
docker image build --rm -t ${DOCKER_IMAGE}:${DOCKER_TAG} -f Dockerfile .

Tag & Push your image to dockerhub

export DOCKER_IMAGE="myapp"
export DOCKER_TAG="0.0.0"
export DOCKER_USERNAMESPACE=<YOUR_DOCKER_USERNAMESPACE> # for example mine is senhajirhazi
docker tag ${DOCKER_IMAGE}:${DOCKER_TAG} ${DOCKER_USERNAMESPACE}/${DOCKER_IMAGE}:${DOCKER_TAG}
docker push ${DOCKER_USERNAMESPACE}/${DOCKER_IMAGE}:${DOCKER_TAG}
# Get more details on how to push a docker image to dockerhub here : https://medium.com/codex/push-docker-image-to-docker-hub-acc978c76ad

Deploy with your kubernetes manifest

You will see in our k8s/sanic-app layout that we have 3 Kubernetes manifests: deployment.yaml, svc.yaml and ingress.yaml.

k8s/
.. ...
┗ sanic-app/
┣ deployment.yaml
┣ ingress.yaml
┗ svc.yaml

The deployment.yaml contains the reference to the container image (by default it will be pulled from Docker Hub; you can configure private registries, but that is outside our scope).

You can either use the deployment.yaml as it is (it will point to the image I have built), or change the reference to the image you have built:

#...
#...
spec:
  containers:
  - image: senhajirhazi/sanic-app:0.0.1 # <-- you can replace this with your own ${DOCKER_USERNAMESPACE}/${DOCKER_IMAGE}:${DOCKER_TAG}
    name: sanic-app
#...
#...

The svc.yaml deploys a Service that receives requests from the nginx-controller, looks up the internal IP of the corresponding pod, and forwards the request to that IP address.

The ingress.yaml, as stated earlier, contains the configuration that the ingress controller (here the nginx-controller) uses to route traffic:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: trustme.com
    http:
      paths:
      - path: /sanic-app
        pathType: Prefix
        backend:
          service:
            name: sanic-app
            port:
              number: 8000

So here it just says: when the ingress-controller is reached on the path /sanic-app with the host trustme.com, forward the traffic to the service sanic-app on port 8000.

Test your python app

curl -H "Host:trustme.com" http://34.163.165.22:80/sanic-app
>>
'Hello world from Sanic'

Congratulations for making it this far!! You have provisioned your infra as code, deployed your Kubernetes cluster on it, and deployed your Python application on it; you're a whole team on your own.

Bonus video

References :
