If you're waiting more than 10 seconds for your Docker rebuilds, you're wasting hours of your life!
To keep you reading, here are the results up front!
Results
- Fresh builds: 50s → 18s (2.7x faster)
- Rebuilds: 47s → 0.4s (117x faster)
- Adding dependencies: 50s → 6s (8x faster)
Overview
We work with Python development teams of all sizes and levels of sophistication. One area where many teams struggle is optimizing their Docker image build times.
I was reminded of this yesterday while replying to a recent r/django Reddit thread where the author assumed they needed to break their Django monolith up into a few services to reduce their build time. They don't, but it's a common mistaken assumption.
There are a few small things you can do to dramatically reduce the amount of time it takes to build (and more importantly REBUILD) a Python or Django Docker image.
The TL;DR is you need to:
- Only rebuild the dependency layer when your dependencies actually change
- Cache your PyPI downloads locally and in CI
- Switch to using uv which is stupid stupid fast
- Use a multi-stage build
If you're already doing all of those things, you can safely skip the rest of this post.
Naive Approach
The most common and most impactful issue is not ordering the lines in your Dockerfile so that you only rebuild the layers that actually need to be rebuilt.
Not having to do anything at all is the fastest thing there is! It's a one million percent improvement! 🤣
Here is what many people start with:
FROM python:3.13-slim
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
# ... rest of lines ending in a `CMD` to run
So what's wrong with this? Every time you change ANYTHING in your git repository, you're re-installing all of your pip dependencies.
This is thankfully easy to resolve. Instead of the above, you should be doing this:
FROM python:3.13-slim
RUN mkdir /code
WORKDIR /code
# Copy just the requirements first
COPY ./requirements.txt /code/requirements.txt
# Install dependencies
RUN pip install -r requirements.txt
# Copy in everything else
COPY . /code/
# ... rest of lines ending in a `CMD` to run
Now when you rebuild this image, it will only need to perform the `pip install` step when there has actually been a change to your `requirements.txt`!
Dependencies change somewhat frequently, but nowhere near as frequently as your code, docs, tests, and README. This stops you from wasting time rebuilding that particular Docker layer on every single change.
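If you want to sanity-check the caching behavior yourself, a quick experiment is to build twice, touching a non-dependency file in between (the `myapp` tag here is just a placeholder). The second build should report the dependency layer as CACHED:
time docker build -t myapp .   # first build runs pip install
touch README.md                # change something other than requirements.txt
time docker build -t myapp .   # the pip install layer is reused from cache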
Caching the PyPI dependencies
Ok, so now we're only doing this work when there's really something new to do. The next step is to not bother re-downloading all of these dependencies every single time we build our Docker image. By default pip caches your downloads when you use it locally, so this little optimization is easy to overlook. Python developers either assume it IS happening inside Docker or that it's hard or impossible to make it do so.
Where does pip cache things?
You can manage your pip cache, but the most useful thing is to simply know where this cache exists. So run `pip cache dir` (or `uv cache dir` if you're already using uv; we'll talk about it more later). If you look into that directory, you should see a bunch of files.
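For reference, these commands just print a path. The exact location varies by OS and install, but on typical setups it looks something like:
pip cache dir   # e.g. ~/.cache/pip on Linux, ~/Library/Caches/pip on macOS
uv cache dir    # e.g. ~/.cache/uv on Linux, ~/Library/Caches/uv on macOS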
Now, this is the cache on your HOST OS, not inside of Docker. There are a couple of ways to expose this to your Docker builds, but it's much easier to just have the Docker daemon cache it for you.
If you're using a default Python Docker image, you're running on Debian, and by default everything runs as the root user. FYI, there are security implications to this and you should look into running your code as a non-root user, but that's a topic for another post.
So for the `root` user on a Debian system, the pip and uv cache locations are going to be in `/root/.cache/`, so we need to make a small change to the `RUN` line that installs everything.
Instead of:
RUN pip install -r requirements.txt
We need to use:
RUN --mount=type=cache,target=/root/.cache,id=pip \
python -m pip install -r /code/requirements.txt
This is instructing the Docker daemon to cache this folder with the id `pip`, and it will then be reused across builds.
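One caveat: `--mount=type=cache` requires BuildKit, which is the default builder in any recent Docker release. On an older setup you may need to opt in explicitly (`myapp` is a placeholder tag):
DOCKER_BUILDKIT=1 docker build -t myapp .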
What about in CI?
Things are a bit harder in CI. Depending on which CI system you're using, this is sometimes built in, and sometimes you need to make configuration adjustments. In any case, the goal you're after is that the `/root/.cache/` folder is preserved and reused across builds so that the downloads are cached between CI runs.
You can read up on all of the details of how to optimize Docker cache usage in the Docker docs.
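As a rough sketch of the idea, with buildx you can export the layer cache to a directory that your CI provider saves and restores between runs. The `/tmp/.buildx-cache` path is just a placeholder, and note that exported layer cache doesn't include the contents of `RUN` cache mounts themselves, so those may need extra handling depending on your CI system:
docker buildx build \
  --cache-from type=local,src=/tmp/.buildx-cache \
  --cache-to type=local,dest=/tmp/.buildx-cache,mode=max \
  -t myapp .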
Use uv
If you're not familiar with uv, it's a near drop-in replacement for pip from the folks at Astral, who also brought us the great ruff linting and formatting tool and the soon-to-be-beta ty type checker. For most things you just prefix your normal pip command with `uv` and it works as expected, just a HELL OF A LOT faster.
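For example, the switch is usually as simple as:
pip install -r requirements.txt      # before
uv pip install -r requirements.txt   # after: same flags, dramatically faster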
Switching to `uv` and adding in the cache mount makes our example Dockerfile now look like this:
FROM python:3.13-slim
RUN mkdir /code
WORKDIR /code
# Install uv
RUN --mount=type=cache,target=/root/.cache,id=pip \
python -m pip install uv
# Copy just the requirements first
COPY ./requirements.txt /code/requirements.txt
# Run uv pip install with caching!
RUN --mount=type=cache,target=/root/.cache,id=pip \
uv pip install --system -r /code/requirements.txt
# Copy in everything else
COPY . /code/
# ... rest of lines ending in a `CMD` to run
So how fast is it now?
Things are quite a bit faster, at the small expense of a slightly more complicated `Dockerfile`.
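If you want to reproduce the fresh vs. rebuild cases yourself, something like this works (`myapp` is just a placeholder tag). Note that `--no-cache` skips the layer cache but BuildKit cache mounts still persist, which is why the cached "fresh" builds below still benefit from the download cache:
time docker build --no-cache -t myapp .   # "fresh": ignore the layer cache
time docker build -t myapp .              # "rebuild": reuse cached layers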
- Naive Fresh: 50 seconds
- Naive Rebuild: 47 seconds
The difference here is just the speed of downloading the pip dependencies between runs.
Fixing things so we only re-run `pip install` when the requirements actually change gives us the biggest benefit.
- Naive Fixed Fresh: 50 seconds
- Naive Fixed Rebuild: 10 seconds
With Caching
Caching our downloads improves our situation even further!
- Cached Fresh: 44 seconds
- Cached Rebuild: 0.4 seconds
With Caching and uv
- UV Fresh: 18.5 seconds
- UV Rebuild: 0.4 seconds
Why isn't uv faster here? Well, it IS faster at downloading the files initially; I'm guessing it parallelizes better, or just being written in Rust makes that aspect about twice as fast as normal pip. But for these last two numbers we're really just testing how fast Docker is able to create the layer, since there are no calls to pip or uv going on at all.
Adding a new pip dependency into the mix
The real speedup is when you need to add a new dependency. In our original requirements.txt we neglected to add the very useful `django-debug-toolbar` package. So I added it and re-ran all of these.
Naive
- Naive Fresh: 50 seconds
- Naive Rebuild: 47 seconds
- Naive Rebuild w/ new dependency: 50 seconds
- Naive Fixed Fresh: 50 seconds
- Naive Fixed Rebuild: 10 seconds
- Naive Fixed Rebuild w/ new dependency: 51 seconds
With Caching
- Cached Fresh: 44 seconds
- Cached Rebuild: 0.4 seconds
- Cached Rebuild w/ new dependency: 24 seconds
With Caching and uv
- UV Fresh: 18.5 seconds
- UV Rebuild: 0.4 seconds
- UV Rebuild w/ new dependency: 6 seconds
So we went from a consistent 50ish seconds per build to 18 seconds for a fresh build, 6 seconds when adding a new dependency, and nearly instant rebuilds when there are no dependency changes.
Bonus info
Multi-stage Docker Builds with Python
What are multi-stage builds? In short, they are `Dockerfile`s with multiple `FROM` lines.
Why would I want to do that? Well, size and security, mainly.
On the security front, using a multi-stage build allows you to deploy an image that does not include any compilers or build tools, while still using those tools to build the dependencies you need. In terms of size, your final image only includes the runtime environment and your built dependencies, but none of the tools or dev packages needed to build those dependencies.
So you get a smaller and more secure image, which are good things, and it adds just a BIT more complexity to your Dockerfile. Once you've been walked through it, it should be fairly clear.
FROM python:3.13-slim AS builder-py
RUN mkdir /code
WORKDIR /code
# Install uv
RUN --mount=type=cache,target=/root/.cache,id=pip \
python -m pip install uv
# Copy just the requirements first
COPY ./requirements.txt /code/requirements.txt
# Run uv pip install with caching!
RUN --mount=type=cache,target=/root/.cache,id=pip \
uv pip install --system -r /code/requirements.txt
FROM python:3.13-slim AS release
WORKDIR /code
# Copy our system-wide installed pip dependencies from builder-py
COPY --from=builder-py /usr/local /usr/local
# Copy in everything else
COPY . /code/
# ... rest of lines ending in a `CMD` to run
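A plain `docker build` will produce the final release stage, but you can also build an individual stage with `--target` if you want to poke around in it (the tags here are placeholders):
docker build -t myapp .                              # builds the final release stage
docker build --target builder-py -t myapp-builder .  # builds just the builder stage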
Benchmarks / Testing
You can find the exact Docker files and bits I used to do this testing here in this repo.
I did this testing on an M4 Max MacBook Pro with 128GB of RAM on a 1.2 Gbps fiber internet connection, while catching up on some PyCon 2025 talks. I'm also using OrbStack, which improves the overall performance of Docker on macOS. Your results will almost certainly vary, but doing any of these steps will save you and your team time in your CI pipelines and when building images locally. The small differences in download speed or available CPU don't really matter; we aren't doing a CPU-heavy micro-benchmark here.
Our time on this planet is short, too short to spend it waiting for Docker to needlessly rebuild images.
Do yourself a favor and start using these tips now!