Celery Flower is a real-time web-based monitoring and administration tool for Celery task queues. It gives you a live dashboard to inspect workers, tasks, queues, and broker stats — essential for debugging and operating background task systems in production.
In this tutorial you will learn how to:
- Run Celery Flower with Docker Compose for local development
- Add basic authentication to protect the interface
- Set up an nginx reverse proxy with WebSocket support
- Enable Prometheus metrics for production alerting
- Deploy Flower to a production/staging environment via Appliku
You can also check out our Django Celery Tutorial, or read about Celery Shared Tasks and Celery rate limiting.
Celery Flower Dashboard Overview¶
Before diving into setup, here is what Celery Flower shows you:
Workers tab — Lists all connected workers with status (online/offline), tasks processed, active tasks, and reserved tasks. Click a worker to see detailed stats including pool type, concurrency, and per-task counters.
Tasks tab — Full task history with name, UUID, state (SUCCESS / FAILURE / RETRY / REVOKED), runtime, and timestamps. Filterable by state, worker, and task name — invaluable when debugging why a task failed.
Broker tab — Queue-level view: queue name, message count, and which workers are consuming from each queue. Useful for spotting backlogs.
Monitor tab — Real-time charts: tasks per second, active workers, and task completion time. Good for capacity planning.
API — Flower exposes a REST API at /api/ so you can query worker and task state programmatically.
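As a rough sketch of how you might consume that API (assuming the default `GET /api/tasks` response shape, a JSON object keyed by task UUID where each entry carries a `state` field; the host and helper names here are illustrative):

```python
import json
from urllib.request import urlopen

FLOWER_URL = "http://127.0.0.1:9090"  # assumed local Flower instance


def count_task_states(tasks: dict) -> dict:
    """Tally tasks by state from a decoded /api/tasks response."""
    counts: dict = {}
    for info in tasks.values():
        state = info.get("state", "UNKNOWN")
        counts[state] = counts.get(state, 0) + 1
    return counts


def fetch_tasks() -> dict:
    """Query Flower's REST API for recorded tasks."""
    with urlopen(f"{FLOWER_URL}/api/tasks") as resp:
        return json.load(resp)
```

For example, `count_task_states(fetch_tasks()).get("FAILURE", 0)` gives a quick failure count without opening the dashboard.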
Celery Flower in Docker¶
For local development, assume you are running your Django project in Docker Compose with the following services in docker-compose.yml:
```yaml
version: '3.3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=djangito
      - RABBITMQ_DEFAULT_PASS=djangito
      - RABBITMQ_DEFAULT_VHOST=djangito
    ports:
      - "21001:5672"
      - "21002:15672"
  db:
    image: postgres
    environment:
      - POSTGRES_USER=djangito
      - POSTGRES_PASSWORD=djangito
      - POSTGRES_DB=djangito
    ports:
      - "21003:5432"
  web:
    build: .
    restart: always
    command: python manage.py runserver 0.0.0.0:8060
    env_file:
      - .env
    ports:
      - "127.0.0.1:8060:8060"
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
  celery:
    build: .
    restart: always
    command: celery -A project.celeryapp:app worker -Q default -n djangitos.%%h --loglevel=INFO --max-memory-per-child=512000 --concurrency=1
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
  celery-beat:
    build: .
    restart: always
    command: celery -A project.celeryapp:app beat -S redbeat.RedBeatScheduler --loglevel=DEBUG --pidfile /tmp/celerybeat.pid
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
```
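The worker and beat commands above assume the Celery app reads its broker settings from `.env`. With the RabbitMQ service defined as above, the broker URL inside the Compose network would be `amqp://djangito:djangito@rabbitmq:5672/djangito`. A small sketch that assembles it from environment-style settings (the variable names here are illustrative, match them to your own `.env`):

```python
def broker_url(env: dict) -> str:
    """Assemble the AMQP broker URL for the Compose setup above.

    Inside the Compose network the broker host is the service name
    "rabbitmq", not localhost. The variable names are illustrative;
    the defaults mirror the docker-compose.yml values.
    """
    user = env.get("RABBITMQ_USER", "djangito")
    password = env.get("RABBITMQ_PASSWORD", "djangito")
    host = env.get("RABBITMQ_HOST", "rabbitmq")
    vhost = env.get("RABBITMQ_VHOST", "djangito")
    return f"amqp://{user}:{password}@{host}:5672/{vhost}"
```

In your settings module you would call this with `dict(os.environ)` and hand the result to Celery as the broker URL.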
Now add the celery-flower service:
```yaml
celery-flower:
  build: .
  restart: always
  command: celery -A project.celeryapp:app flower --loglevel=DEBUG --port=9090
  ports:
    - "127.0.0.1:9090:9090"
  env_file:
    - .env
  volumes:
    - .:/code
  links:
    - db
    - redis
    - rabbitmq
  depends_on:
    - db
    - redis
    - rabbitmq
```
This starts Flower on port 9090, bound to localhost only.
Start your project:
```shell
docker-compose up
```
When all services are up, open http://127.0.0.1:9090/ and you will see the Celery Flower interface.

Celery Flower Authentication¶
By default, Flower is open to anyone — a serious security risk in any non-local environment. Anyone could view sensitive task arguments and results, or manipulate your Celery cluster.
Add the --basic_auth= flag to the Flower command:
```yaml
celery-flower:
  build: .
  restart: always
  command: celery -A project.celeryapp:app flower --loglevel=DEBUG --port=9090 --basic_auth=djangitos:testpassword
  ports:
    - "127.0.0.1:9090:9090"
  env_file:
    - .env
  volumes:
    - .:/code
  links:
    - db
    - redis
    - rabbitmq
  depends_on:
    - db
    - redis
    - rabbitmq
```
Restart Docker Compose (CTRL-C, then docker-compose up) and you will see a login prompt.

For production, pass credentials via environment variables instead of hardcoding them:
```yaml
command: celery -A project.celeryapp:app flower --port=9090 --basic_auth=${FLOWER_BASIC_AUTH}
```
Then set FLOWER_BASIC_AUTH=username:password in your .env file.
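To avoid weak credentials in that file, you can generate the password, for example with Python's `secrets` module (a small sketch; the username and length are up to you):

```python
import secrets
import string


def make_basic_auth(username: str, length: int = 24) -> str:
    """Build a USERNAME:PASSWORD pair with a random alphanumeric password."""
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"{username}:{password}"


print(make_basic_auth("djangitos"))
```

Paste the printed value into `.env` as `FLOWER_BASIC_AUTH=...`.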
Nginx Reverse Proxy for Celery Flower¶
When running Flower behind nginx you need to proxy WebSocket connections — Flower uses WebSockets to push real-time updates to the browser.
Here is an nginx server block for Flower with HTTPS and WebSocket support:
```nginx
server {
    listen 80;
    server_name flower.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name flower.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/flower.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/flower.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support — required for real-time task updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
Without the Upgrade and Connection WebSocket headers, the Flower dashboard will load but real-time updates will not work.
Also add the --url_prefix flag if you want to serve Flower at a sub-path (e.g. /flower/):
```yaml
command: celery -A project.celeryapp:app flower --port=9090 --url_prefix=flower
```
Then set the nginx location block to location /flower/.
Celery Flower with Prometheus¶
Flower exposes a /metrics endpoint that returns Prometheus-compatible metrics. This lets you build production dashboards and alerts in Grafana.
No extra configuration is needed — the endpoint is available by default at http://your-flower-host/metrics.
Add Flower as a scrape target in your prometheus.yml:
```yaml
scrape_configs:
  - job_name: 'celery'
    static_configs:
      - targets: ['flower.yourdomain.com']
    metrics_path: '/metrics'
    scheme: https
    basic_auth:
      username: 'your_username'
      password: 'your_password'
```
Key metrics Flower exposes:
| Metric | Description |
|---|---|
| `flower_worker_online` | Worker availability — 1 online, 0 offline |
| `flower_task_prefetch_time_seconds` | Time tasks spend waiting in worker prefetch |
| `flower_task_runtime_seconds` | Histogram of task execution duration |
| `flower_events_total` | Total Celery events processed by Flower |
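If you want to consume these metrics outside Prometheus, `/metrics` is plain text in the standard exposition format, with lines such as `flower_worker_online{worker="celery@host"} 1.0`. A minimal sketch that pulls the `flower_worker_online` samples out of that text (the label parsing is deliberately simplified and assumes a single `worker` label):

```python
def parse_worker_online(metrics_text: str) -> dict:
    """Extract flower_worker_online samples from Prometheus text output.

    Returns a mapping of worker name -> online flag (True/False).
    Assumes each sample has exactly one label, worker="...".
    """
    workers = {}
    for line in metrics_text.splitlines():
        if not line.startswith("flower_worker_online{"):
            continue
        labels, _, value = line.rpartition(" ")
        # Pull the worker="..." label value out of the label set.
        start = labels.find('worker="') + len('worker="')
        end = labels.find('"', start)
        workers[labels[start:end]] = float(value) == 1.0
    return workers
```

For anything beyond a quick script, a proper exposition-format parser is the safer choice.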
Useful Prometheus alert examples:
```yaml
groups:
  - name: celery
    rules:
      - alert: CeleryWorkerDown
        expr: flower_worker_online == 0
        for: 2m
        annotations:
          summary: "Celery worker offline for 2 minutes"
      - alert: CeleryTaskHighRuntime
        expr: histogram_quantile(0.95, rate(flower_task_runtime_seconds_bucket[5m])) > 60
        for: 5m
        annotations:
          summary: "95th percentile task runtime exceeds 60s"
```

Note that `histogram_quantile` is applied to `rate()` of the histogram buckets, so the alert reflects recent runtimes rather than the all-time distribution.
Celery Flower in Production¶
Running Flower locally is straightforward — production requires a few extra steps: HTTPS, authentication, and a stable deployment.
Appliku handles SSL certificates automatically for every app with a web worker, so deploying Flower is easy.
Because each Appliku app can have only one web worker, create a dedicated app for Flower:
- Fork this GitHub repo: https://github.com/appliku/flowermonitor
- In the Appliku dashboard, create a new application pointing to your forked repository
- Set the following environment variables:
  - `BROKER_URL` — your RabbitMQ or Redis broker URL
  - `RESULT_BACKEND` — your Redis result backend URL
  - `FLOWER_BASIC_AUTH` — credentials in `USERNAME:PASSWORD` format
- On the Processes tab, enable the `web` worker
- Hit Deploy
When deployment finishes, click "Open App" — you will see the password prompt. Use the credentials from FLOWER_BASIC_AUTH.
Appliku provisions HTTPS automatically, so your Flower instance is secured with SSL out of the box — no nginx config needed.