How to build a hybrid (Python/JavaScript) asynchronous task queue system for your web application's server with Celery

A step-by-step guide on how to set up an architecture pattern for your asynchronous workloads (JavaScript and Python interchangeably)

Senhaji Rhazi hamza
8 min read · Nov 23, 2021

Introduction:

As a quick reminder, asynchronous task systems allow your server to delegate workloads elsewhere and get the result back later. A common use case is improving the user experience, since the user doesn't have to wait for the server to process their request (with the browser in freezing mode). It is also a way to decouple your server application from tasks that are not its main responsibility (3rd-party tasks), or that can be implemented by other teams, or simply to avoid overloading your server with heavy processing when you can do better.

What you will get from this article?

  • Why you might consider a hybrid (Python/JavaScript) asynchronous task queue system
  • Brief introduction to Celery
  • A step by step code organization layout to have this hybrid pattern
  • A GitHub repo of the architecture that you can use as a boilerplate

What you will not get from this article?

  • This article is not a reference on how Celery works internally; for that, you might consider the documentation or this series of Medium articles: part1, part2, part3

Why might the hybrid (Python/JavaScript) pattern be useful to you?

A typical use case where you might consider using a hybrid (Python/JavaScript) asynchronous task queue system would be the following.

Suppose you are developing an AI application for your customer, and you have two teams: a data science team and a dev team.

The data science team doesn't know any language other than Python. When your application is requested, your DS team faces at least two kinds of tasks:

  • Launch a training run for a model (considered a heavy load for the server)
  • Infer a model's prediction result (might be considered a heavy load for the server)

Now your full-stack dev team doesn't know any language other than Node; it faces at least two kinds of tasks:

  • Generate a PDF report for the customer (might be considered a heavy load for the server)
  • Send an email to the customer (might involve waiting on 3rd-party systems)

The problem faced here is how to process these tasks in an external system (decoupled from our server), without forcing each team into an area where it doesn't feel comfortable and provides little business value.

Having a hybrid pattern lets each team work with its own tools, within the area where it provides the most value to your company.

Brief introduction to the Celery architecture

As said previously, in some (many) use cases you would like an independent system that you can delegate tasks to for execution: a system that scales easily out of the box, that can supervise the state of the tasks you have submitted, and that is robust thanks to parametrized retry logic when tasks fail.

This is mainly what Celery is: a framework/protocol for an asynchronous task queue system. Let's look at a Celery architecture setup schema (the one we are using in our pattern) and describe the flow and its components.

The flow:

The client sends a message to the message broker with the task's information to order its execution. The message broker reads the task's information and checks whether a queue is specified; if not, the default queue is used. The message broker then pushes the task's information (arguments, how many times to retry on failure, etc.) to the corresponding queue, and takes responsibility for delivering the task execution order to a worker listening on the specified queue.

In our example, we use one queue for JavaScript workers and another for Python workers, so the client (usually the server application) orders a task execution in the same manner whether the task is written in Python (more likely by the data science team) or JavaScript (more likely by the dev team). All it has to specify to the message broker is the targeted queue (in our example, the Python queue or the JavaScript queue).
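For instance, here is a minimal sketch of that idea in Python (the broker URL, the task name example.add, and the queue names py-queue/js-queue are illustrative, not necessarily the repo's exact ones):

from celery import Celery

# The client only needs the broker/backend location, a task *name*, and a queue;
# it never imports the task's code, so the task can be implemented in any language.
app = Celery(broker="redis://localhost:6379/0", backend="redis://localhost:6379/0")

# Same call shape, different target queue: one task is handled by a Python worker,
# the other by a JavaScript worker listening on its own queue.
app.send_task("example.add", args=[3, 2], queue="py-queue")
app.send_task("example.add", args=[1, 2], queue="js-queue")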

Client:

Celery is written in Python, but clients can be written in any language: as long as you implement the Celery protocol, you can submit tasks to Celery to dispatch them through your workers (which can also be written in any language, as long as they implement the Celery protocol).

Worker:

The worker is a unit of work: a server-like process waiting for the message broker to deliver a message telling it which task to execute. In our example we have Python and JavaScript workers, respectively responsible for executing Python tasks and JavaScript tasks. When you want to scale your application, you basically just add more workers (this can be more complicated in some cases, but let's assume we are not in edge cases).

Message broker:

It's a queue system that ensures that the messages emitted by your clients will be delivered to your workers. Note that you can define multiple queues and dedicate a set of workers to each queue.

Celery is flexible and allows you to choose your message broker; you can choose RabbitMQ or Redis, for example.

Result backend:

When tasks perform their job, they store their results in a result backend.

Again, Celery is flexible and lets you choose your result backend. For example, you can use Redis (more likely when you're not interested in persisting the states/results of your tasks), or Postgres (more likely when you want to keep a history of the states/results of the tasks you have executed).
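As a rough sketch, switching from one result backend to the other is only a matter of configuration; the URLs and credentials below are placeholders, not the repo's actual settings:

from celery import Celery

# Redis as both message broker and result backend (volatile results)
app = Celery(
    "py_workers",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
)

# Or keep Redis as the broker but persist task states/results in Postgres,
# via Celery's SQLAlchemy-based database backend (requires sqlalchemy + a DB driver)
app_persistent = Celery(
    "py_workers",
    broker="redis://localhost:6379/0",
    backend="db+postgresql://user:password@localhost:5432/celery_results",
)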

NB: for complementary information on how these building blocks fit together, check out the Celery introduction guide.

Code layout:

Let us check the directory tree structure, then zoom in on the workers.

celery-node-python-example/
┣ .vscode/
┃ ┗ settings.json
┣ configmaps/
┃ ┗ pyenvs/
┣ node/
┃ ┣ js_workers/
┃ ┃ ┣ tasks/
┃ ┃ ┃ ┣ example/
┃ ┃ ┃ ┃ ┣ arithmetic.js
┃ ┃ ┃ ┃ ┗ index.js
┃ ┃ ┃ ┗ index.js
┃ ┃ ┣ client_example.js
┃ ┃ ┣ config.js
┃ ┃ ┗ worker.js
┃ ┣ Dockerfile
┃ ┣ Makefile
┃ ┣ package-lock.json
┃ ┣ package.json
┃ ┗ yarn.lock
┣ python/
┃ ┣ .vscode/
┃ ┃ ┗ settings.json
┃ ┣ py_workers/
┃ ┃ ┣ tasks/
┃ ┃ ┃ ┣ example/
┃ ┃ ┃ ┃ ┣ __init__.py
┃ ┃ ┃ ┃ ┗ arithmetic.py
┃ ┃ ┃ ┗ __init__.py
┃ ┃ ┣ utils/
┃ ┃ ┃ ┣ __init__.py
┃ ┃ ┃ ┗ decorator.py
┃ ┃ ┣ __init__.py
┃ ┃ ┣ client_example.py
┃ ┃ ┣ config.py
┃ ┃ ┗ worker.py
┃ ┣ scripts/
┃ ┃ ┗ entrypoints/
┃ ┃ ┃ ┗ start_worker_pyqueue.sh
┃ ┣ tests/
┃ ┃ ┣ __init__.py
┃ ┃ ┗ test_py_worker.py
┃ ┣ Dockerfile
┃ ┣ Makefile
┃ ┣ poetry.lock
┃ ┗ pyproject.toml
┣ .gitignore
┣ LICENSE
┣ Makefile
┣ README.md
┗ docker-compose.yml

When we zoom in on the workers:

celery-node-python-example/
┣ node/
┃ ┣ js_workers/
┃ ┃ ┣ tasks/
┃ ┃ ┃ ┣ example/
┃ ┃ ┃ ┃ ┣ arithmetic.js
┃ ┃ ┃ ┃ ┗ index.js
┃ ┃ ┃ ┗ index.js
┃ ┃ ┣ client_example.js
┃ ┃ ┣ config.js
┃ ┃ ┗ worker.js
┣ python/
┃ ┣ py_workers/
┃ ┃ ┣ tasks/
┃ ┃ ┃ ┣ example/
┃ ┃ ┃ ┃ ┣ __init__.py
┃ ┃ ┃ ┃ ┗ arithmetic.py
┃ ┃ ┃ ┗ __init__.py
┃ ┃ ┣ __init__.py
┃ ┃ ┣ client_example.py
┃ ┃ ┣ config.py
┃ ┃ ┗ worker.py

Notice that the directory structure for Python workers is similar to the directory structure for JavaScript workers.

We have a main file, worker.js or worker.py, containing the worker's initialization.

worker.py:
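The original gist isn't reproduced here, but worker.py essentially boils down to creating the Celery app and telling it which task modules to load; the broker URL and module path below are assumptions rather than the repo's exact code. worker.js plays the same role on the JavaScript side, presumably through a Node Celery library such as celery-node.

# py_workers/worker.py -- minimal sketch of the worker entry point
from celery import Celery

app = Celery(
    "py_workers",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
    include=["py_workers.tasks.example.arithmetic"],  # task modules registered at start-up
)

# started from the command line with:
#   celery -A py_workers.worker worker --loglevel=INFO -Q py-queue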

worker.js

Tasks are grouped into package modules, and the package modules are registered at the worker's initialization. Here, for example, in both Python and JavaScript we have the package example, which contains an arithmetic module defining a function add; both are registered in worker.js and worker.py, as you can see above.

arithmetic.js

arithmetic.py
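The gists aren't embedded here either; a hedged sketch of the Python side follows (the task name example.add is illustrative, and arithmetic.js would register a function under a similar name on the JavaScript worker):

# py_workers/tasks/example/arithmetic.py -- minimal sketch of a task module
from celery import shared_task


@shared_task(name="example.add")
def add(x: int, y: int) -> int:
    # executed by whichever worker consumes the queue the client targeted
    return x + y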

Tutorial:

Starting from this point, you can follow the steps presented below in a Linux or Mac environment, using the GitHub repo:

Steps:

  • Initialize the message broker and the result backend
  • Initialize the Python worker
  • Initialize the JavaScript worker
  • Test both workers with Python client and JavaScript client
  • Launch everything in docker-compose

Prerequisites:

  • Docker
  • Docker-compose
  • Poetry (only if you don’t want to use docker for Python worker)
  • Yarn (only if you don’t want to use docker for JavaScript worker)

Initializing the message broker and the result backend

Initialize Redis as a result backend and a message broker using Docker:

docker run -it -p 6379:6379 redis

Initializing the Python worker

In the <root>/python/ folder:

Using Docker:

# build the docker image 
make build-py-worker
# start the python worker
docker run -it --network="host" py_worker

Using Poetry:

# install dependencies 
poetry install
# start the python worker
poetry run celery -A py_workers.worker worker --loglevel=INFO -Q py-queue

Initializing the JavaScript worker

In the <root>/node/ folder:

Using Docker:

# build the docker image 
make build-js-worker
# start the javascript worker
docker run -it --network="host" js_worker

Using yarn:

# install dependencies 
yarn install
# start the javascript worker
node js_workers/worker.js

Testing both workers with Python client and JavaScript client

If you have executed the steps above, you have your message broker and result backend (Redis) listening on port 6379, and your Python and JavaScript workers started and able to communicate with Redis.

Here we are going to test both workers from a Python client and a JavaScript client.

JavaScript client:

In the <root>/node/ folder:

node js_workers/client_example.js
>>Result executed by python worker from js client : 3 + 2 = 5
>>Result executed by js worker from js client : 1 + 2 = 3

Python client:

In the <root>/python/ folder:

poetry run python py_workers/client_example.py
# if you don't want to use poetry use this instead
# pip3 install celery && python3 py_workers/client_example.py
>>Result executed by python worker from python client: 4 + 2 = 6
>>Result executed by js worker from python client: 2 + 2 = 4
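For reference, a client along these lines is enough to produce that kind of output; this is a minimal sketch, not the repo's actual client_example.py, and the task name and queue names are illustrative:

# minimal sketch of a Python client exercising both workers
from celery import Celery

app = Celery(broker="redis://localhost:6379/0", backend="redis://localhost:6379/0")

py_result = app.send_task("example.add", args=[4, 2], queue="py-queue")
js_result = app.send_task("example.add", args=[2, 2], queue="js-queue")

# .get() blocks until the worker has stored the result in the result backend
print(f"Result executed by python worker from python client: 4 + 2 = {py_result.get(timeout=10)}")
print(f"Result executed by js worker from python client: 2 + 2 = {js_result.get(timeout=10)}")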

Launching everything in docker-compose

Instead of launching Redis, the Python worker, and the JavaScript worker separately, we can express everything in a docker-compose.yml file:

So, by running the following command at the root folder:

# start the docker-compose
make workers_up

Then you can retest your clients, but now the containers are started by docker-compose.

NB:

If something runs under docker-compose, it can run under Kubernetes, and it can scale much better on Kubernetes than with docker-compose.

I hope you liked this pattern. It is also useful for event-driven architectures, as an alternative to Kafka; check the reference "Celery vs Kafka".

Bonus video:

References:
