Published: 2026-03-13
8 min read

Tools I Use as a Backend Engineer (2026)

Tags: Backend Tools, Django, Python, Go, DevOps, AI

Here are the tools I actually use day to day.

This is the current stack as of March 2026. I'll explain what I use, why, and where each tool shows up in my actual projects.


Languages

Python is my primary language. Django for web APIs, Celery for async work, FastAPI when I need a lightweight async server. Most of my projects are Python-first.

Go gets used when I need a single binary with strong concurrency. My HealthLab project — a pathology lab management platform — is a Go/Gin backend that handles concurrent Telegram bot conversations with goroutines.

TypeScript for frontend work and some serverless functions. My portfolio site runs on Astro + React + TypeScript. The Cloudflare Worker proxy that handles AI chatbot streaming is TypeScript too.

SQL constantly. I write raw SQL when ORMs generate bad queries. EXPLAIN ANALYZE is a regular part of my workflow — I covered this in depth in How to Optimise Backend Performance.
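As a sketch of that workflow, here is a small helper that runs EXPLAIN ANALYZE through any DB-API cursor (Django's connection.cursor() included). The table and column names are illustrative, not from a real project.

```python
# Minimal sketch: fetch a PostgreSQL execution plan through a DB-API cursor.
# Table/column names below are hypothetical.

def explain_analyze(cursor, sql: str, params=None) -> str:
    """Run EXPLAIN ANALYZE on a query and return the plan as text."""
    cursor.execute("EXPLAIN ANALYZE " + sql, params or [])
    return "\n".join(row[0] for row in cursor.fetchall())

# Usage from a Django shell (hypothetical table):
#   from django.db import connection
#   with connection.cursor() as cur:
#       print(explain_analyze(
#           cur, "SELECT * FROM app_message WHERE status = %s", ["pending"]))
```

A sequential scan on a large table in that output is usually the cue to add an index.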


Frameworks

Django + Django REST Framework

My default for Python web APIs. The ORM handles 90% of queries well. The admin panel saves hours when I need to inspect data. DRF's serialization is verbose but predictable.

I use Django for Message Scheduler (scheduling platform with Celery/Redis) and SalesGPT (AI sales assistant).

FastAPI

When I need async and don't need Django's batteries. The Telegram Chat Manager started on Flask, hit async issues with Telethon's event loop, and switched to FastAPI. The portfolio's AI chatbot backend also runs FastAPI for SSE streaming.
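To illustrate the streaming piece: sse_format below is plain Python showing the Server-Sent Events wire format, and the commented FastAPI wiring is a hedged sketch that assumes a hypothetical llm_tokens generator, not the production code.

```python
# Sketch of SSE framing for a streaming chat endpoint.

def sse_format(data: str) -> str:
    """Wrap one chunk of text in Server-Sent Events wire format."""
    return f"data: {data}\n\n"

# In FastAPI it plugs in roughly like this (llm_tokens is hypothetical):
#   from fastapi.responses import StreamingResponse
#
#   @app.post("/chat")
#   async def chat(question: str):
#       async def stream():
#           async for token in llm_tokens(question):
#               yield sse_format(token)
#       return StreamingResponse(stream(), media_type="text/event-stream")
```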

Gin (Go)

Clean routing, good middleware ecosystem. The HealthLab API uses Gin with a repository pattern — handlers don't touch GORM directly, which makes unit testing straightforward.

Astro + React

For content-heavy sites that need interactivity in specific places. My portfolio uses Astro for static generation (100/100 Lighthouse) with React islands (client:load) for the AI chat widget and interactive components.


Databases

PostgreSQL

Default for everything persistent. Partial indexes, JSON columns, full-text search — PostgreSQL does it all without adding another service.

I use CONN_MAX_AGE for connection reuse in Django, and PgBouncer when running multiple application workers. For complex queries, I go straight to EXPLAIN ANALYZE to check execution plans. I wrote about reading EXPLAIN output, identifying missing indexes, and fixing slow queries in my performance post.
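The connection-reuse part is just a settings change. A sketch, with illustrative values:

```python
# Django settings sketch: persistent connections via CONN_MAX_AGE.
# With PgBouncer in front, CONN_MAX_AGE is often set back to 0 so the
# pooler owns connection lifetimes instead.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app",
        "CONN_MAX_AGE": 60,  # reuse a connection for up to 60s, not per-request
    }
}
```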

Redis

Celery broker, caching layer, rate limiting backend. It's in almost every project. For the Message Scheduler, Redis holds the Celery task queue. Tasks sit there with ETA parameters until their scheduled delivery time — no polling needed.

Qdrant (Vector Database)

For the portfolio's RAG-powered AI chatbot. Qdrant stores vector embeddings of my portfolio content (projects, case studies, blog posts). On each question, the user's query gets embedded with sentence-transformers, searched against Qdrant, and the top results form context for the LLM.
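A hedged sketch of that retrieval step. build_context is plain Python; the collection name, model name, and client calls in the comments are assumptions, not the production code.

```python
# Assemble retrieved chunks into LLM prompt context, capped by length.

def build_context(results, max_chars: int = 2000) -> str:
    """results: iterable of (score, text) pairs, best match first."""
    parts, used = [], 0
    for _score, text in results:
        if used + len(text) > max_chars:
            break
        parts.append(text)
        used += len(text)
    return "\n---\n".join(parts)

# With a running Qdrant instance it wires up roughly like this
# (collection and model names are illustrative):
#   from qdrant_client import QdrantClient
#   from sentence_transformers import SentenceTransformer
#
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   hits = QdrantClient("localhost").search(
#       collection_name="portfolio", query_vector=model.encode(q), limit=5)
#   context = build_context((h.score, h.payload["text"]) for h in hits)
```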

I chose Qdrant over Pinecone because it's open-source, runs in Docker, and doesn't charge per vector. For smaller projects or prototyping, Zvec is worth looking at — it's an in-memory vector database that's fast to set up when you don't need persistence or distributed deployment.


Task Queues & Async Processing

Celery + Redis

Handles background work in my Python projects. Email sending, message delivery, scheduled tasks. The Message Scheduler uses Celery's apply_async(eta=...) for precise delivery timing and exponential backoff retries for transient failures.
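As an illustration of the ETA-plus-backoff pattern: backoff_delay is plain Python, and the Celery wiring in the comments is a sketch with hypothetical task names, not code lifted from the project.

```python
# Exponential backoff helper for transient-failure retries.

def backoff_delay(attempt: int, base: int = 30, cap: int = 3600) -> int:
    """Seconds to wait before retry N: 30, 60, 120, ... capped at an hour."""
    return min(base * (2 ** attempt), cap)

# In a Celery task module it looks roughly like this (names hypothetical):
#   @app.task(bind=True, max_retries=5)
#   def deliver_message(self, message_id):
#       try:
#           send(message_id)
#       except TransientError as exc:
#           raise self.retry(exc=exc,
#                            countdown=backoff_delay(self.request.retries))
#
# And scheduling for an exact delivery time:
#   deliver_message.apply_async(args=[message_id], eta=scheduled_at)
```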

Celery Beat

Once-a-day cron that schedules all of that day's messages. Combined with ETA scheduling, this means one cron check per day instead of polling every minute.

Celery Flower

Monitoring dashboard for Celery workers. Shows task success/failure rates, queue depths, worker health. I use this as a lightweight alternative to full APM on smaller projects.


Observability

Datadog APM

For production monitoring when it's available. Trace waterfalls, flame graphs, span-level breakdowns. I wrote extensively about reading Datadog traces and identifying N+1 queries in How to Optimise Backend Performance.

django-debug-toolbar

Essential for local development. The SQL panel catches N+1 queries immediately — it shows duplicate query badges and lets you click through to the stack trace that triggered each query. I run this on every Django project in development.

structlog

Structured logging that actually helps when debugging. Key-value pairs (order_id, duration_ms, cache_hit) instead of concatenated strings. When something fails in production, I can reconstruct the exact request path from the logs.
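To show what that buys you, here is a tiny helper that mimics structlog's key-value rendering; a real project would call structlog.get_logger() instead.

```python
# Render an event plus key-value pairs, roughly the way structlog's
# console renderer does.

def kv(event: str, **fields) -> str:
    pairs = " ".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{event} {pairs}" if pairs else event

# kv("order_charged", order_id=812, duration_ms=142, cache_hit=False)
# produces a line you can grep and parse later, unlike a concatenated string.
#
# The structlog equivalent:
#   log = structlog.get_logger()
#   log.info("order_charged", order_id=812, duration_ms=142, cache_hit=False)
```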

cProfile + snakeviz

Python's built-in profiler plus a visual flame graph. When an endpoint is slow and I need to see exactly where time goes, cProfile output sorted by cumulative time points directly to the bottleneck.
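Since both are stdlib (snakeviz aside), the whole loop fits in a few lines. slow_view here is a stand-in for a real request handler.

```python
import cProfile
import io
import pstats

def slow_view():
    """Stand-in for a slow endpoint."""
    return sum(i * i for i in range(100_000))

def profile(func) -> str:
    """Run func under cProfile, return stats sorted by cumulative time."""
    pr = cProfile.Profile()
    pr.enable()
    func()
    pr.disable()
    out = io.StringIO()
    pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
    return out.getvalue()

# print(profile(slow_view))
# For the visual flame graph, dump to a file and open it in snakeviz:
#   python -m cProfile -o out.prof myscript.py && snakeviz out.prof
```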


Development Environment

Linux (Ubuntu) + i3 Window Manager

I use a tiling window manager with semantic workspaces — terminal on 1, browser on 2, editor on 3, chat on 4. Everything keyboard-driven. I built a portable backup and migration system for my entire environment so I can replicate it on any machine in 30 minutes.

Zsh + Oh My Zsh

Hundreds of custom aliases. gs for git status, dc for docker-compose, mkdjango to scaffold a full Django project with DRF + Celery + Docker in one command.

Modern CLI Tools

eza (ls replacement with git integration), bat (cat with syntax highlighting), fd (fast find), rg (ripgrep), btop (process viewer with graphs), fzf (fuzzy finder for everything).

Docker + Docker Compose

Local development and production deployment. Every project has a docker-compose.yml that brings up the full stack. The HealthLab project runs Go API + PostgreSQL + React dashboard from a single compose file.


AI Coding & Research Tools

AI tools have changed how I write and debug code. These are the ones that stuck.

Antigravity

My primary AI coding assistant inside the editor. Handles multi-file edits, runs terminal commands, and understands project context. I use it for refactoring, writing tests, scaffolding new features, and debugging issues across files. It works well for tasks that need awareness of the full codebase rather than single-file completions.

Kilo Code / OpenCode / Pi

Terminal-based AI coding tools. They're quickly becoming my go-to development tools.

Claude / ChatGPT / Perplexity

General-purpose assistants for quick lookups, explaining error messages, and brainstorming. I use these alongside code tools rather than as a replacement. Claude is what I reach for on longer reasoning tasks: architecture decisions, tradeoff analysis, reviewing complex code. When I need to think through a design before writing anything, it handles multi-step reasoning well. Perplexity is my search replacement for technical research. When I need to understand a library's behaviour, compare tools, or find recent documentation, it gives sourced answers instead of making me sift through SEO-optimized blog posts.

The way I think about it: each tool fills a different slot. Editor-integrated tools for code, Claude for deep reasoning, Perplexity for research, ChatGPT for quick answers. Using the right tool for the right task matters more than picking a single favourite.


AI / ML Tools

Gemini API / Cerebras (Llama)

The portfolio chatbot runs on open-weight Llama models via Cerebras for $0 inference cost. I built an LLMClient class that acts as a universal adapter — Cerebras, Gemini, OpenAI, OpenRouter all work through the same interface. Swapping providers is a config change.
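A hedged sketch of the adapter idea, not the production class: a single chat() interface with the provider-specific transport injected, so real transports would wrap each vendor SDK or an OpenAI-compatible HTTP endpoint.

```python
# Provider-agnostic LLM client sketch (names illustrative).

class LLMClient:
    def __init__(self, provider: str, transport):
        # transport: callable(provider, messages) -> reply text.
        self.provider = provider
        self.transport = transport

    def chat(self, messages: list) -> str:
        return self.transport(self.provider, messages)

# Swapping Cerebras for Gemini is then a config change, e.g.:
#   client = LLMClient(provider=settings.LLM_PROVIDER, transport=http_transport)
#   reply = client.chat([{"role": "user", "content": "hi"}])
```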

LangChain (selectively)

I use parts of LangChain for text chunking (RecursiveCharacterTextSplitter) but skip the heavier abstractions. The core RAG pipeline is custom code.
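For a sense of what chunking does, here is a naive fixed-window splitter; RecursiveCharacterTextSplitter adds separator-aware splitting (paragraphs, then sentences, then characters) on top of this idea. The sizes are illustrative.

```python
# Naive overlapping-window chunker, the baseline idea behind smarter splitters.

def chunk(text: str, size: int = 500, overlap: int = 50) -> list:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# The LangChain call it approximates (parameters illustrative):
#   from langchain_text_splitters import RecursiveCharacterTextSplitter
#   splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
#   chunks = splitter.split_text(document_text)
```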


Deployment & Infrastructure

Cloudflare (Pages, Workers, R2)

The portfolio deploys to Cloudflare Pages. A Cloudflare Worker proxies SSE streams from the AI backend. Cloudflare R2 stores generated resume PDFs (S3-compatible, zero egress fees).

GitHub Actions

CI/CD for resume PDF generation. A matrix build compiles 3 LaTeX resume variants in parallel and uploads them to R2. Triggered on any change to data files. The full setup is in my portfolio case study.

Nginx

Reverse proxy for Django and FastAPI deployments. Handles static files, SSL termination, and request buffering.

systemd

Process management on VPS deployments. Auto-restart on crashes, log management through journald.


What I'd Change

A few tools I'm watching or planning to adopt:

  • Pyinstrument over cProfile for profiling — less noise, better defaults for web request profiling
  • k6 for load testing — I've been doing manual testing but should automate it
  • Proper dead-letter queues — Currently monitoring failed Celery tasks through Flower, but a DLQ with alerting would be more reliable

The Full Picture

If you want to see how these tools work together in actual projects — architecture diagrams, code samples, tradeoff analysis — check out my case studies. Each one walks through the decisions, the failure modes, and what I'd do differently.

All my projects are listed at ankitjang.one/projects.


About me: I'm Ankit Jangwan, a Senior Software Engineer working with Python, Django, Go, and cloud infrastructure. Currently building backend systems at Instahyre.

See my case studies at ankitjang.one/case-studies.