Goal
To use Docker and Docker Compose to develop a Django app locally, synchronizing data with an active Platform.sh project.
Note:
These instructions demonstrate how to use Docker with Platform.sh for local development. However, DDEV with the Platform.sh integration is the recommended local development tool for Platform.sh projects.
See the Django DDEV documentation for more information.
Assumptions
You will need:
- A local copy of the project repository on your computer. You can get one by running the following command:
  platform get PROJECT_ID
  Alternatively, clone an integrated source repository and set the Platform.sh remote by running the following command:
  platform project:set-remote PROJECT_ID
- The Platform.sh CLI.
This example makes a few assumptions, which you may need to adjust for your own circumstances.
It assumes that you’ve already deployed a Django project on Platform.sh that has production data in a PostgreSQL database, and that the database has the following service definition:
db:
  type: postgresql:12
  disk: 1024
It also assumes that your app has the following relationship definition:
relationships:
  database: "db:postgresql"
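On Platform.sh itself, the credentials for this database relationship reach the app through the PLATFORM_RELATIONSHIPS environment variable, which holds base64-encoded JSON. As a minimal sketch of reading it in settings.py — the helper name here is illustrative, not part of any API:

```python
import base64
import json
import os


def platform_db_settings(relationship="database"):
    """Decode one relationship from PLATFORM_RELATIONSHIPS (base64-encoded JSON)
    and map it onto a Django DATABASES entry."""
    relationships = json.loads(base64.b64decode(os.environ["PLATFORM_RELATIONSHIPS"]))
    # Each relationship is a list of endpoints; take the first.
    db = relationships[relationship][0]
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": db["path"],
        "USER": db["username"],
        "PASSWORD": db["password"],
        "HOST": db["host"],
        "PORT": db["port"],
    }
```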
It’s assumed that you want to run Django’s built-in lightweight development server with manage.py runserver. To match a production web server (such as Gunicorn or Daphne) instead, modify the commands below accordingly.
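For example, to mirror a Gunicorn-based production setup, the web service command in the Compose file below could be changed as follows — myproject is a placeholder for your project’s WSGI module, and this assumes gunicorn is listed in requirements.txt:

```yaml
command: >
  sh -c "python manage.py collectstatic --no-input &&
         python manage.py migrate &&
         gunicorn myproject.wsgi:application --bind 0.0.0.0:8000"
```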
Finally, this example assumes that Platform.sh is the primary remote for the project. When using source integrations, most steps are identical; the differences are called out where they apply.
Steps
Starting with a Dockerfile
for Python 3.10:
FROM python:3.10
WORKDIR /code
COPY . /code/
RUN python -m pip install --upgrade pip
RUN pip install -r requirements.txt
and a corresponding docker-compose.yaml
file:
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: >
      sh -c "python manage.py collectstatic --no-input &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
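These environment variables only take effect if settings.py reads them. A minimal sketch of a matching DATABASES block, assuming the POSTGRES_NAME, POSTGRES_USER, and POSTGRES_PASSWORD names from the Compose file above — the defaults and the POSTGRES_HOST fallback are illustrative:

```python
import os

# Sketch of a settings.py DATABASES block matching the Compose file above.
# "db" is the Compose service name, which containers on the same
# Compose network can resolve as a hostname.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_NAME", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "postgres"),
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": int(os.environ.get("POSTGRES_PORT", "5432")),
    }
}
```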
1. Create a new environment off of production.
platform branch new-feature main
If working from a source integration, a merge/pull request environment must be opened beforehand. Otherwise, work from an existing staging environment and sync to the active merge/pull request environment once it has been activated.
2. Retrieve the inherited production data.
platform db:dump -e new-feature
This creates a database dump file named with the format: PROJECT_ID--PLATFORM_BRANCH--SERVICE_NAME--dump.sql
For merge/pull request environments, replace new-feature with the name of the environment (for example, pr-42).
3. Ignore this file in future commits.
echo "*--dump.sql" >> .gitignore
4. Build the image.
docker-compose build
5. Start the containers.
docker-compose up -d
Verify that the containers are running:
$ docker-compose images
Container Repository Tag Image Id Size
-----------------------------------------------------------
django_db_1 postgres latest 9f3ec01f884d 378.6 MB
django_web_1 django_web latest 0065da03c239 1.063 GB
6. Import the dump into the database.
docker exec -i $(docker-compose ps -q db) psql -U postgres < DATABASE--DUMP--NAME--dump.sql
Note:
The -i $(docker-compose ps -q db) portion retrieves the container ID for the db service.
7. Shut down the containers.
When finished with your work, shut down the containers.
docker-compose down
Conclusion
You now have a local development environment that’s in sync with the new-feature environment on Platform.sh. All of the steps above can be included in a setup-local.sh script and committed to the repository, ensuring that each new developer contributing to the project can quickly set up an identical environment to work from.
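As a sketch, such a setup-local.sh might look like the following. The environment name, the db service, and the postgres user are taken from the examples above; the RUN=1 dry-run guard is an illustrative convention, not a Platform.sh feature:

```shell
#!/usr/bin/env bash
# setup-local.sh -- bundles the steps above into one script.
set -euo pipefail

ENVIRONMENT="${1:-new-feature}"

# Commands only execute when RUN=1; by default they are printed,
# so the script can be reviewed without side effects.
run() {
    if [ "${RUN:-0}" = "1" ]; then
        "$@"
    else
        printf '+ %s\n' "$*"
    fi
}

run platform branch "$ENVIRONMENT" main
run platform db:dump -e "$ENVIRONMENT"
run bash -c 'echo "*--dump.sql" >> .gitignore'
run docker-compose build
run docker-compose up -d
# Import the most recent dump; the db container ID is resolved at run time.
run bash -c 'docker exec -i "$(docker-compose ps -q db)" psql -U postgres < "$(ls -t -- *--dump.sql | head -n1)"'
```

Run ./setup-local.sh to preview the commands, then RUN=1 ./setup-local.sh ENVIRONMENT_NAME to execute them.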