Docker images¶
Pull images¶
Docker images are stored in GitHub Container Registry (GHCR), a Docker registry like Docker Hub. Public Docker images can be pulled anonymously from `ghcr.io`. The inboard images are based on the official Python Docker images.

Simply running `docker pull ghcr.io/br3ndonland/inboard` will pull the latest FastAPI image (Docker uses the `latest` tag by default). If specific versions of inboard or Python are desired, append the version numbers to the Docker tags as shown below (new in inboard version 0.6.0). All the available images are also provided as Alpine Linux builds, available by appending `-alpine`, and Debian "slim" builds, available by appending `-slim` (new in inboard version 0.11.0). Alpine and Debian slim users should be aware of their limitations.
Available Docker tags
```sh
# Pull latest FastAPI image (Docker automatically appends the `latest` tag)
docker pull ghcr.io/br3ndonland/inboard
# Pull latest version of each image
docker pull ghcr.io/br3ndonland/inboard:base
docker pull ghcr.io/br3ndonland/inboard:fastapi
docker pull ghcr.io/br3ndonland/inboard:starlette
# Pull image from specific release (new in inboard 0.6.0)
docker pull ghcr.io/br3ndonland/inboard:base-0.22.0
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.22.0
docker pull ghcr.io/br3ndonland/inboard:starlette-0.22.0
# Pull image from latest minor version release (new in inboard 0.22.0)
docker pull ghcr.io/br3ndonland/inboard:base-0.22
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.22
docker pull ghcr.io/br3ndonland/inboard:starlette-0.22
# Pull image with specific Python version
docker pull ghcr.io/br3ndonland/inboard:base-python3.8
docker pull ghcr.io/br3ndonland/inboard:fastapi-python3.8
docker pull ghcr.io/br3ndonland/inboard:starlette-python3.8
# Pull image from specific release and with specific Python version
docker pull ghcr.io/br3ndonland/inboard:base-0.22-python3.8
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.22-python3.8
docker pull ghcr.io/br3ndonland/inboard:starlette-0.22-python3.8
# Append `-alpine` to any of the above for Alpine Linux (new in inboard 0.11.0)
docker pull ghcr.io/br3ndonland/inboard:latest-alpine
docker pull ghcr.io/br3ndonland/inboard:fastapi-alpine
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.22-alpine
docker pull ghcr.io/br3ndonland/inboard:fastapi-python3.8-alpine
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.22-python3.8-alpine
# Append `-slim` to any of the above for Debian slim (new in inboard 0.11.0)
docker pull ghcr.io/br3ndonland/inboard:latest-slim
docker pull ghcr.io/br3ndonland/inboard:fastapi-slim
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.22-slim
docker pull ghcr.io/br3ndonland/inboard:fastapi-python3.8-slim
docker pull ghcr.io/br3ndonland/inboard:fastapi-0.22-python3.8-slim
```
Use images in a Dockerfile¶
For a Poetry project with the following directory structure:
```
repo/
    package/
        __init__.py
        main.py
        prestart.py
    Dockerfile
    poetry.lock
    pyproject.toml
```
The Dockerfile could look like this:
Example Dockerfile for Poetry project
```dockerfile
FROM ghcr.io/br3ndonland/inboard:fastapi
# Install Python requirements
COPY poetry.lock pyproject.toml /app/
WORKDIR /app
RUN poetry install --no-dev --no-interaction --no-root
# Install Python app
COPY package /app/package
ENV APP_MODULE=package.main:app
# RUN command already included in base image
```
Organizing the Dockerfile this way helps leverage the Docker build cache. Files and commands that change most frequently are added last to the Dockerfile. Next time the image is built, Docker will skip any layers that didn't change, speeding up builds.
For a standard `pip` install:
```
repo/
    package/
        __init__.py
        main.py
        prestart.py
    Dockerfile
    requirements.txt
```
Example Dockerfile for project with pip and requirements.txt
```dockerfile
FROM ghcr.io/br3ndonland/inboard:fastapi
# Install Python requirements
COPY requirements.txt /app/
WORKDIR /app
RUN python -m pip install -r requirements.txt
# Install Python app
COPY package /app/package
ENV APP_MODULE=package.main:app
# RUN command already included in base image
```
The image could then be built with:
```sh
cd /path/to/repo
docker build . -t imagename:latest
```
The final argument is the Docker image name (`imagename` in this example). Replace it with your own image name.
Run containers¶
Run container:
```sh
docker run -d -p 80:80 imagename
```
Run container with mounted volume and Uvicorn reloading for development:
```sh
cd /path/to/repo
docker run -d -p 80:80 \
  -e "LOG_LEVEL=debug" -e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true" \
  -v $(pwd)/package:/app/package imagename
```
Details on the `docker run` command:

- `-e "PROCESS_MANAGER=uvicorn" -e "WITH_RELOAD=true"` will instruct `start.py` to run Uvicorn with reloading and without Gunicorn. The Gunicorn configuration won't apply, but these environment variables will still work as described:
    - `APP_MODULE`
    - `HOST`
    - `PORT`
    - `LOG_COLORS`
    - `LOG_FORMAT`
    - `LOG_LEVEL`
    - `RELOAD_DIRS`
    - `WITH_RELOAD`
- `-v $(pwd)/package:/app/package`: the specified directory (`/path/to/repo/package` in this example) will be mounted as a volume inside the container at `/app/package`. When files in the working directory change, Docker and Uvicorn will sync the files to the running Docker container.
Docker and Poetry¶
As explained in python-poetry/poetry#1879, there are two conflicting conventions to consider when working with Poetry in Docker:
- Docker's convention is to not use virtualenvs, because containers themselves provide sufficient isolation.
- Poetry's convention is to always use virtualenvs, because of the reasons given in python-poetry/poetry#3209.
Docker and Poetry - the previous approach
This project previously preferred the Docker convention:
- Poetry itself was installed with the get-poetry.py script, with the environment variable `POETRY_HOME=/opt/poetry` used to specify a consistent location for Poetry.
- `poetry install` was used with `POETRY_VIRTUALENVS_CREATE=false` to install the project's packages into the system Python directory.
The conventional Docker approach no longer works because:
- The old install script get-poetry.py is deprecated and not compatible with Python 3.10.
- The new install script install-poetry.py has been problematic so far, and Poetry doesn't really test it, so it will likely continue to be problematic.
Docker and Poetry - the updated approach
The updated approach uses `pipx` to install Poetry.

In the updated approach:

- `ENV PATH=/opt/pipx/bin:/app/.venv/bin:$PATH` is set first to prepare the `$PATH`.
- `pip` is used to install a pinned version of `pipx`.
- `pipx` is used to install a pinned version of Poetry, with `PIPX_BIN_DIR=/opt/pipx/bin` used to specify the location where `pipx` installs the Poetry command-line application, and `PIPX_HOME=/opt/pipx/home` used to specify the location for `pipx` itself.
- `poetry install` is used with `POETRY_VIRTUALENVS_CREATE=true`, `POETRY_VIRTUALENVS_IN_PROJECT=true`, and `WORKDIR /app` to install the project's packages into the virtualenv at `/app/.venv`.
With this approach:
- Subsequent `python` commands will use the executable at `/app/.venv/bin/python`.
- As long as `POETRY_VIRTUALENVS_IN_PROJECT=true` and `WORKDIR /app` are retained, subsequent Poetry commands will use the same virtual environment at `/app/.venv`.
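The steps above can be sketched as a Dockerfile fragment. This is a simplified illustration, not inboard's exact Dockerfile: the base image and the `pipx` and Poetry version pins shown here are placeholders.

```dockerfile
# Sketch of the updated pipx + Poetry approach (illustrative versions)
FROM python:3.10-slim
# Prepare the PATH and pipx/Poetry locations first
ENV PATH=/opt/pipx/bin:/app/.venv/bin:$PATH \
    PIPX_BIN_DIR=/opt/pipx/bin \
    PIPX_HOME=/opt/pipx/home \
    POETRY_VIRTUALENVS_CREATE=true \
    POETRY_VIRTUALENVS_IN_PROJECT=true
# pip installs a pinned pipx; pipx installs a pinned Poetry
RUN python -m pip install --no-cache-dir "pipx==1.2.0" && \
    pipx install "poetry==1.2.2"
WORKDIR /app
COPY poetry.lock pyproject.toml /app/
# Poetry creates the virtualenv at /app/.venv (WORKDIR + in-project venvs)
RUN poetry install --no-dev --no-interaction --no-root
```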
Linux distributions¶
Alpine¶
The official Python Docker image is built on Debian Linux by default, with Alpine Linux builds also provided. Alpine is known for its security and small Docker image sizes.
Runtime determination of the Linux distribution
To determine the Linux distribution at runtime, it can be helpful to source `/etc/os-release`, which contains an `ID` variable specifying the distribution (`alpine`, `debian`, etc.).
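For example, a POSIX shell snippet along these lines can branch on the `ID` variable. This sketch writes a sample file so it runs anywhere; in a container, source `/etc/os-release` directly.

```shell
#!/bin/sh
# Sample os-release file so this sketch runs outside a container;
# in a container, replace /tmp/os-release with /etc/os-release.
cat > /tmp/os-release <<'EOF'
ID=alpine
EOF
. /tmp/os-release
if [ "$ID" = "alpine" ]; then
  echo "Alpine: use apk"      # prints "Alpine: use apk" for this sample
elif [ "$ID" = "debian" ]; then
  echo "Debian: use apt-get"
fi
```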
Alpine differs from Debian in some important ways, including:
- Shell (Alpine does not use Bash by default)
- Packages (Alpine uses `apk` as its package manager, and does not include some common packages, like `curl`, by default)
- C standard library (Alpine uses `musl` instead of `glibc`)
The different C standard library is of particular note for Python packages, because binary package distributions may not be available for Alpine Linux. To work with these packages, their build dependencies must be installed, then the packages must be built from source. Users will typically then delete the build dependencies to keep the final Docker image size small.
The basic build dependencies used by inboard include `gcc`, `libc-dev`, and `make`. These may not be adequate to build all packages. For example, to install `psycopg`, it may be necessary to add more build dependencies, build the package, optionally delete the build dependencies, and then include its `libpq` runtime dependency in the final image. A set of build dependencies for this scenario might look like the following:
Example Alpine Linux Dockerfile for PostgreSQL project
```dockerfile
ARG INBOARD_DOCKER_TAG=fastapi-alpine
FROM ghcr.io/br3ndonland/inboard:${INBOARD_DOCKER_TAG}
ENV APP_MODULE=mypackage.main:app
COPY poetry.lock pyproject.toml /app/
WORKDIR /app
RUN sh -c '. /etc/os-release; if [ "$ID" = "alpine" ]; then apk add --no-cache --virtual .build-project build-base freetype-dev gcc libc-dev libpng-dev make openblas-dev postgresql-dev; fi' && \
    poetry install --no-dev --no-interaction --no-root && \
    sh -c '. /etc/os-release; if [ "$ID" = "alpine" ]; then apk del .build-project && apk add --no-cache libpq; fi'
COPY mypackage /app/mypackage
```
Alpine Linux virtual packages
Adding `--virtual .build-project` creates a "virtual package" named `.build-project` that groups the rest of the dependencies listed. All of the dependencies can then be deleted as a set by simply referencing the name of the virtual package, as in `apk del .build-project`.
Python packages with Rust extensions on Alpine Linux
As described above, Python packages can have C extensions. In addition, an increasing number of packages also feature Rust extensions. Building Python packages with Rust extensions will typically require installing Rust and Cargo (`apk add --no-cache rust cargo`), as well as a Python build plugin like `maturin` or `setuptools-rust` (`python3 -m pip install --no-cache-dir setuptools-rust`). Remember to uninstall the plugin afterward (`python3 -m pip uninstall -y setuptools-rust`). The installed `rust` package should be retained.
In addition to build dependencies, Rust also has runtime dependencies, which are satisfied by the `rust` package installed with `apk`. These runtime dependencies bloat Docker image sizes, and may make it impractical to work with Python packages that have Rust extensions on Alpine Linux. For related discussion, see rust-lang/rust#88221 and rust-lang/rustup#2213.
The good news: Python now supports binary package distributions built for `musl`-based Linux distributions like Alpine Linux. See PEP 656 and `cibuildwheel` for details.
Debian slim¶
The official Python Docker image provides "slim" variants of the Debian base images. These images are built on Debian, but then have the build dependencies removed after Python is installed. As with Alpine Linux, there are some caveats:
- Commonly-used packages are removed, requiring reinstallation in downstream images.
- The overall number of security vulnerabilities will be reduced as compared to the Debian base images, but vulnerabilities inherent to Debian will still remain.
- If `/etc/os-release` is sourced, the `$ID` will still be `debian`, so custom environment variables or other methods must be used to identify images as "slim" variants.
A Dockerfile equivalent to the Alpine Linux example might look like the following:
Example Debian Linux slim Dockerfile for PostgreSQL project
```dockerfile
ARG INBOARD_DOCKER_TAG=fastapi-slim
FROM ghcr.io/br3ndonland/inboard:${INBOARD_DOCKER_TAG}
ENV APP_MODULE=mypackage.main:app
COPY poetry.lock pyproject.toml /app/
WORKDIR /app
ARG INBOARD_DOCKER_TAG
RUN sh -c '. /etc/os-release; if [ "$ID" = "debian" ] && echo "$INBOARD_DOCKER_TAG" | grep -q "slim"; then apt-get update -qy && apt-get install -qy --no-install-recommends gcc libc-dev libpq-dev make wget; fi' && \
    poetry install --no-dev --no-interaction --no-root && \
    sh -c '. /etc/os-release; if [ "$ID" = "debian" ] && echo "$INBOARD_DOCKER_TAG" | grep -q "slim"; then apt-get purge --auto-remove -qy gcc libc-dev make wget; fi'
COPY mypackage /app/mypackage
```
Redeclaring Docker build arguments
Why is `ARG INBOARD_DOCKER_TAG` repeated in the example above? To understand this, it is necessary to understand how `ARG` and `FROM` interact. Any `ARG` instructions before the first `FROM` are outside the build stage. In order to use them again inside the build stage, they must be redeclared after `FROM`.
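A minimal sketch of the pattern (the `BASE_TAG` name here is illustrative):

```dockerfile
# ARG before FROM: only usable in FROM instructions
ARG BASE_TAG=fastapi-slim
FROM ghcr.io/br3ndonland/inboard:${BASE_TAG}
# Redeclare to bring the value into this build stage;
# without this line, $BASE_TAG would be empty in the RUN below
ARG BASE_TAG
RUN echo "Building from tag: $BASE_TAG"
```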