How to containerize your Django application with Docker and compose on OSX

Posted 4 December 2014

I’ve been hearing good things about Docker for a while now – and certainly the premise made a lot of sense. I develop on a Mac and deploy to Heroku so having my development environment match production is ideal.

Getting your development environment “close enough” is not so much of a problem these days, but I still carry the scars earned in hard lost battles compiling binaries against incompatible BSD libraries and the accompanying inexplicable segfaults and premature jubilance.

Recently we’ve been spoiled with the first OSX package manager to actually get it right, Homebrew.

Still, the dream lives on … mirroring a production environment on your dev machine without going the full monty and devoting half your expensive SSD to Vagrant and some Chef/Puppet/Ansible recipes.

Suffice to say, I was keen to make it work. But Docker did not make this easy: it was a whole new world, the documentation lacked accessibility, the tutorials contradicted each other, and although I parsed individual concepts a holistic solution seemed elusive. What was this “compose” thing? What’s boot2docker? Why can’t I use --volumes on OSX? What’s Dockerhub? And why, when someone talks about “dockerizing” Django, is it completely unhelpful?

So in an attempt to avoid being overly loquacious, here’s the quick answers:

  • compose is a tool you can use to describe all the containers your app needs, as well as start and stop them. This functionality may in the future be integrated into Docker as "container groups".
  • boot2docker is a lightweight virtual machine running Tiny Core Linux via VirtualBox. This is necessary because OSX doesn't support something that Docker needs (Linux Containers). Once you've installed it, you don't need to know much more about it.
  • You'll want your container (your "vm") to be able to access stuff on your host (your laptop), for example your app's source code. The --volumes argument didn't work on OSX, which made this difficult, but Docker 1.3 fixed it for anything inside the /Users folder.
  • Dockerhub is a place to put your compiled containers to make it easier to share with people. You can also put your Dockerfile up (put it into a GitHub repository and link it via an "automated build" on Dockerhub).
  • "Dockerizing" tutorials mostly build the containers up from scratch, which is not useful for us. We'll use the library of "official" pre-built images on Dockerhub and extend them to "containerize" our app.

We’ll now look at what it takes to get a simple hello world application (with Redis and Postgres w/ hstore) up and running.

If you want to jump straight to the source, it’s all on github.

I’m going to assume you’ve installed Docker, if not do that first.

Setting up your database container

This part is pretty simple as we could use the stock standard Postgres docker from Dockerhub (library/postgres).

However, I needed the hstore extension in Postgres and I couldn’t find a Dockerfile that enabled that on Dockerhub, so I created one: a simple Dockerfile that enables hstore just before the server is launched. You can simply reference it from your docker-compose.yml or Dockerfile.
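I won’t reproduce that Dockerfile here, but a sketch of the idea might look like the following. The filename and the init-hook mechanism are assumptions about how the official postgres image works, not the contents of my actual file:

```dockerfile
# Sketch only -- see aidanlister/postgres-hstore on Dockerhub for the real thing.
FROM postgres:latest

# Drop a SQL snippet where the image's entrypoint can run it when the
# data directory is first initialised. enable_hstore.sql would contain:
#   CREATE EXTENSION IF NOT EXISTS hstore;
ADD enable_hstore.sql /docker-entrypoint-initdb.d/enable_hstore.sql
```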

We create a docker-compose.yml, a document that describes our infrastructure holistically, in our project root. It should contain the following:

  dbdata:
    image: postgres:latest
    volumes:
      - /var/lib/postgresql
    command: true

  db:
    image: aidanlister/postgres-hstore
    volumes_from:
      - dbdata
    ports:
      - "5432"

This is creating two containers, one for the data and one for the server. This is the recommended approach, so that you can trash your server container but keep your data persisted. This has other benefits, for example easily distributing a ready-to-go container to colleagues.

We’ll step through loading data into your database for the first time later in the article.

Aside: Why use “postgres:latest” as the base for our data volume when we could use Tiny Linux or something else? Because Docker caches changes to the filesystem, it would actually use more disk space to use anything other than the same base image.

Type “docker-compose up” to watch everything download and build, and you should see a message that your server is up and running. Ctrl-C to close.

Setting up your redis server

This is exactly the same. We add this to our docker-compose.yml:

  redisdata:
    image: redis:latest
    volumes:
      - /var/lib/redis
    command: true

  redis:
    image: redis:latest
    volumes_from:
      - redisdata
    ports:
      - "6379"

Type “docker-compose up” to watch postgres spin up instantly without building (the image has been cached in your VM), and redis build and spin up. You’re now running four containers with a single command.

Setting up the python container

Next we want to get our python container up and running. This is where it gets trickier … create a Dockerfile in your project root (the same folder as your docker-compose.yml).

FROM python:2.7.8

RUN mkdir -p /usr/src/app
COPY requirements.txt /usr/src/requirements.txt

RUN pip install -r /usr/src/requirements.txt

ENV DJANGO_SETTINGS_MODULE myproject.settings.local
ENV DATABASE_URL postgres://postgres@db/postgres
ENV REDISTOGO_URL redis://redis:6379

WORKDIR /usr/src/app
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]

This is pretty simple: we’ve extended the official Python docker with FROM, copied our requirements.txt into the container image, run pip install on it, set up our environment variables and then runserver’d.
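For the curious, here’s a stdlib-only sketch of how a settings file might consume those environment variables. This isn’t the article’s actual settings module (real projects often just use the dj-database-url package); the parsing shown is the general idea:

```python
# Sketch: turn the DATABASE_URL env var baked into the image above
# into a Django DATABASES dict. Assumes the URL shape used in the
# Dockerfile: postgres://postgres@db/postgres
import os

try:
    from urlparse import urlparse       # Python 2 (the python:2.7.8 base image)
except ImportError:
    from urllib.parse import urlparse   # Python 3

url = urlparse(os.environ.get("DATABASE_URL", "postgres://postgres@db/postgres"))

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": url.path.lstrip("/"),    # "postgres"
        "USER": url.username,            # "postgres"
        "PASSWORD": url.password or "",
        "HOST": url.hostname,            # "db" -- the linked container's alias
        "PORT": url.port or 5432,
    }
}
```

Note that the hostname is simply “db”: inside the containers, the link alias resolves to the database container, so the same settings file works anywhere the environment variable is set.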

You can test that it builds correctly with docker build -t yourapp . which will download the base images, then run it with docker run -it --rm yourapp. If you want to open it in your browser, we’ll need to link the three containers together. In our docker-compose.yml, add:

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/app/
    ports:
      - "8000:8000"
    links:
      - db
      - redis
    environment:
      - DEBUG=1
      - DJANGO_SETTINGS_MODULE=yourapp.settings.local
      - DATABASE_URL=postgres://postgres@db/postgres
You’ll note that we’ve specified a “volumes” key. This will map the source code in the “.” folder on the host (the current working folder, which should also be the location of the Dockerfile and docker-compose.yml) to the /usr/src/app folder on the container. This, combined with using “python manage.py runserver” as the “command”, means that when you edit and re-save a source file on the host, runserver will automatically reload your code changes. In fact, it’s even faster than you’d normally be used to on OSX because the Linux container has inotify support.

Type boot2docker ip into a fresh terminal window to get the IP address of the Docker VM.

We can now type “docker-compose up” again, which will build and launch your containers. Open the IP of your VM (on port 8000) in your browser. If everything has gone well, you’ll see a Hello World showing that we have connected to both Postgres and Redis.
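The view behind that Hello World page lives in the GitHub repo; a rough sketch of its shape is below. The names are hypothetical, and the live service checks are left as comments so the snippet stands alone:

```python
# Sketch of a "hello world" view helper that reports on backing services.
def format_status(services):
    """Render a one-line status string from {name: reachable} results."""
    return ", ".join(
        "%s %s" % (name, "OK" if ok else "DOWN")
        for name, ok in sorted(services.items())
    )

# In the real Django view you'd populate the dict with live checks, e.g.:
#   import os, redis
#   from django.db import connection
#   services = {
#       "postgres": connection.ensure_connection() is None,
#       "redis": redis.from_url(os.environ["REDISTOGO_URL"]).ping(),
#   }
print(format_status({"postgres": True, "redis": True}))
```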

To connect up some python rqworkers to the redis server, you would add:

  worker:
    build: .
    command: python manage.py rqworker
    links:
      - db
      - redis
    volumes:
      - .:/usr/src/app/
    environment:
      - INSTANCE_TYPE=worker
      - DEBUG=1
      - DJANGO_SETTINGS_MODULE=abas.settings.local
      - DATABASE_URL=postgres://postgres@db/postgres

We’re essentially done. We’ve got all our containers talking and our app is running … there are just a few extra things worth discussing:

Running scripts that interact directly with the database

You could connect to your Postgres database inside the container directly from your host, but that’s un-docker and you might not want postgres installed on your host. Instead, we’re going to create a Docker image for these types of tasks.

Create a new folder to hold your dockers in your project root, e.g. “dockers”. In that folder create a folder for your “database job” docker, “dbjob”. In that folder add a Dockerfile that looks like:

FROM postgres:latest
RUN mkdir /usr/scripts
ADD scripts/ /usr/scripts/
WORKDIR /usr/scripts

You’ll then need to create a folder “scripts”, where you’ll put scripts that could be executed against the database. For example, here’s a script which drops and restores the database:

#!/bin/bash
if [ ! -e /tmp/hostvar/database.dump ]; then
    echo " /tmp/hostvar/database.dump does not exist!"
    exit 1
fi

# "postgres" here is the link alias of the database container
dropdb \
  -h postgres \
  -U postgres \
  postgres \
  || { echo ' unable to drop DB'; exit 1; }

createdb \
  -h postgres \
  -U postgres \
  postgres \
  || { echo ' unable to recreate DB'; exit 1; }

pg_restore \
  -h postgres \
  -U postgres \
  --no-acl --no-owner --verbose \
  -d postgres < /tmp/hostvar/database.dump

echo " success"

To run this, we make sure only the database is running with docker-compose up db, and then execute docker run -it --link=abas_db_1:postgres --volume=$PWD/var/:/tmp/hostvar/ dbjob

This builds our docker, mounts the “var” folder on our host (which should contain your database.dump), and then runs the restore script from the scripts folder on the container.

Running management commands

You’re probably thinking “I hope I don’t need to create a dockerfile just to run management commands”. Nope, luckily you can just use docker-compose run web python manage.py shell_plus.

Using python debugger

You can now use pdb with docker (see this and this about why you couldn’t). To start your project in a way that exposes a TTY for docker-compose:

$ docker-compose run --service-ports web

Other useful commands

Docker one-liners:

  • Delete all containers: docker rm -f $(docker ps -aq)
  • Delete all images: docker rmi -f $(docker images -q)
  • Delete dangling images: docker rmi $(docker images -q -f dangling=true)
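These one-liners all lean on the same shell trick: $(...) command substitution splices one command’s output into another’s argument list. A stand-in demo, with fake container ids in place of real docker ps output so no daemon is needed:

```shell
# docker rm -f $(docker ps -aq) expands to: docker rm -f <id> <id> ...
# Simulate the expansion with canned ids standing in for `docker ps -aq`:
ids=$(printf '%s\n' abc123 def456)
echo docker rm -f $ids    # prints: docker rm -f abc123 def456
```

If the inner command prints nothing (no containers), the outer command runs with no arguments, which is why these are safe to re-run.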

So this is a summation of my knowledge thus far; I hope it is useful. Remember, all of the code is available on GitHub. Please leave any feedback in the comments below.