Last updated January 06, 2026

Originally published on January 05, 2026

Achieving storage parity in Django with local object storage

In modern Django projects, files uploaded in production are typically stored in an S3-compatible object storage service. Maintaining separate file storage classes for local and production environments often complicates the codebase and can introduce subtle issues that only surface in production. A practical way to avoid this category of problems is to run an object storage service locally as well.

The open source community, through django-storages, has greatly simplified interactions with a wide range of storage backends. The main issue arises when more advanced or fine-grained file operations are required: at a lower level, the django-storages backends and Django's native file storage API do not behave uniformly.

And yes — I’m looking at you: paths, signed URLs, ACLs, overwrites. Both the bane and the boon.
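To make the mismatch concrete, here is a minimal sketch, assuming a configured Django project with django-storages' S3 backend installed alongside the default FileSystemStorage; the behaviors noted in the comments reflect each backend's defaults.

from django.core.files.base import ContentFile
from django.core.files.storage import FileSystemStorage
from storages.backends.s3boto3 import S3Boto3Storage

local = FileSystemStorage()
remote = S3Boto3Storage()

# Overwrites: the local backend renames on collision, the S3 backend overwrites.
local.save("report.txt", ContentFile(b"v1"))
local.save("report.txt", ContentFile(b"v2"))   # stored as "report_<suffix>.txt"
remote.save("report.txt", ContentFile(b"v1"))
remote.save("report.txt", ContentFile(b"v2"))  # replaced in place (AWS_S3_FILE_OVERWRITE defaults to True)

# Paths: only the local backend has a filesystem path.
local.path("report.txt")                       # absolute path under MEDIA_ROOT
# remote.path("report.txt")                    # would raise NotImplementedError

# URLs: plain MEDIA_URL locally, time-limited signed URL on S3.
local.url("report.txt")                        # MEDIA_URL + "report.txt"
remote.url("report.txt")                       # presigned URL (AWS_QUERYSTRING_AUTH defaults to True)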

A local object storage service is a practical way to achieve behavioral parity across environments and to write reliable automated tests. Specifically, we're going to see how to set up a dockerized local MinIO instance.

Adding MinIO to your Compose file

First, let’s add the MinIO service to our Compose file with its health check:

services:

  ...

  minio:
    command: minio server /var/lib/minio/data --console-address ":9001"
    environment:
      - MINIO_ENDPOINT=http://minio:9000
      - MINIO_BUCKET_NAME=my-bucket
      - MINIO_ROOT_USER=minio-admin
      - MINIO_ROOT_PASSWORD=minio-admin
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 3s
      timeout: 3s
      retries: 30
    image: minio/minio:latest
    ports:
      - 9000:9000
      - 9001:9001
    volumes:
      - minio_data:/var/lib/minio/data

volumes:
  minio_data:

If your Django application does not run inside Docker, it cannot resolve the minio hostname and should use the published port instead (http://localhost:9000); if it runs in a separate Compose project, attach both projects to a shared network so the service stays reachable.

Once the service is running, the MinIO web console will be available at http://localhost:9001, while the S3 API will be exposed on port 9000.
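As a quick sanity check from the host, you can list the buckets through the S3 API with boto3 (the library django-storages' S3 backend relies on), using the ports and root credentials from the Compose file above:

import boto3

# Point an S3 client at the MinIO API published on localhost:9000.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minio-admin",
    aws_secret_access_key="minio-admin",
    region_name="us-east-1",  # arbitrary; MinIO accepts any region
)

print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])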

Installing and configuring our MinIO instance via its client

It is also helpful to install the MinIO client so you can automate bucket creation and ACL configuration. If your Django project runs in a container, you can download the client in its Dockerfile:

RUN curl -fsSL "https://dl.min.io/client/mc/release/linux-amd64/mc" -o /usr/bin/minio-client \
    && chmod +x /usr/bin/minio-client

The configuration commands themselves need the MinIO service to be reachable, so they cannot run at image build time; run them once the containers are up, for example from your entrypoint script:

minio-client alias set minio http://minio:9000 minio-admin minio-admin
minio-client mb --quiet --ignore-existing minio/my-bucket
minio-client anonymous set download minio/my-bucket

With that, the client registers an alias for the MinIO server, creates the bucket using the mb ("make bucket") command and configures its permissions to be public read-only (anonymous download).

Integrating MinIO with Django

Finally, close the functional gap by making the django-storages S3 backend your default storage backend. In your Django settings, map the following parameters to the MinIO environment variables defined above (a settings sketch follows the list):

  • AWS_ACCESS_KEY_ID → MINIO_ROOT_USER

  • AWS_SECRET_ACCESS_KEY → MINIO_ROOT_PASSWORD

  • AWS_STORAGE_BUCKET_NAME → MINIO_BUCKET_NAME

  • AWS_S3_ENDPOINT_URL → MINIO_ENDPOINT

  • AWS_S3_REGION_NAME → your preferred region

  • AWS_LOCATION → optional subfolder within the bucket
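Putting it together, a minimal settings sketch could look like the following. The environment variable names mirror the Compose file above, and the backend path assumes a recent django-storages release (storages.backends.s3.S3Storage); older releases expose it as storages.backends.s3boto3.S3Boto3Storage.

import os

# The same variables the Compose file defines, so production only needs
# to point them at the real S3 endpoint and credentials.
AWS_ACCESS_KEY_ID = os.environ["MINIO_ROOT_USER"]
AWS_SECRET_ACCESS_KEY = os.environ["MINIO_ROOT_PASSWORD"]
AWS_STORAGE_BUCKET_NAME = os.environ["MINIO_BUCKET_NAME"]
AWS_S3_ENDPOINT_URL = os.environ["MINIO_ENDPOINT"]
AWS_S3_REGION_NAME = "us-east-1"      # any region name works for MinIO
AWS_LOCATION = "media"                # optional subfolder within the bucket
AWS_S3_ADDRESSING_STYLE = "path"      # path-style addressing is the safe choice for MinIO

# Django 4.2+; on older versions set
# DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage" instead.
STORAGES = {
    "default": {"BACKEND": "storages.backends.s3.S3Storage"},
    "staticfiles": {"BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage"},
}

Reading everything from the environment keeps a single settings module valid everywhere, which is exactly the parity we are after.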

This configuration ensures that your local MinIO setup behaves consistently with your production storage, enabling reliable development and testing.
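As a final smoke test, for example from python manage.py shell or an automated test, the default storage should now round-trip files through MinIO (the file name below is arbitrary):

from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

# Write, read back, build a URL and clean up through the configured backend.
name = default_storage.save("hello.txt", ContentFile(b"hello from MinIO"))
assert default_storage.exists(name)

with default_storage.open(name) as fh:
    assert fh.read() == b"hello from MinIO"

print(default_storage.url(name))  # a signed URL against the MinIO endpoint

default_storage.delete(name)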

Copyright © 2026 Niccolò Mineo
Some rights reserved: CC BY-NC 4.0