
Is Docker Eating Your Disk Space? How to Find and Fix Runaway Container Logs
It’s a scenario familiar to many developers and system administrators: you receive a dreaded “disk space low” alert from a server that should have plenty of room. After a frantic search, you discover the culprit isn’t your application data or database backups, but a hidden giant—massive log files generated by a Docker container.
Docker is an incredibly powerful tool, but its default logging behavior can quickly become a liability if left unchecked. A single, overly verbose container can generate gigabytes of logs in a short period, consuming valuable disk space and potentially bringing your services to a halt.
Fortunately, diagnosing and fixing this common issue is straightforward. This guide will walk you through how to reclaim your disk space and implement a permanent solution to prevent it from ever happening again.
Step 1: Diagnose the Problem and Find the Offender
Before you can fix the problem, you need to confirm that Docker is indeed the cause. The first command to run is a simple health check of Docker’s own disk usage.
docker system df
This command provides a breakdown of the space used by images, containers, local volumes, and the build cache. While useful, it often doesn’t reveal the full picture: container log files are stored on the host alongside each container’s metadata rather than in its writable layer, so they can grow far beyond anything this summary shows.
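If you want a more detailed breakdown before digging into the filesystem, the verbose flag lists usage per image and per container. Note that the container sizes shown reflect the writable layer only, not the logs:
# Per-image and per-container disk usage breakdown
docker system df -v
# Show each container’s writable layer size (log files are not counted here)
docker ps --size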
To find the actual log files, you need to look inside Docker’s data directory, typically located at /var/lib/docker/containers. Each container has its own subdirectory, and within it, a JSON file that stores its log output.
You can use a command like find or du to pinpoint the largest files. To find all log files larger than 1GB, you can use the following command:
find /var/lib/docker/containers/ -type f -name "*.log" -size +1G
This will quickly identify which container is generating excessive logs, allowing you to focus your efforts.
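The long directory name in that path is the container’s full ID. To rank every log file by size, or to confirm which container a given ID belongs to, something like the following works (the container ID is a placeholder you would replace with one taken from the find output):
# Rank all container log files by size, largest last (usually requires root)
sudo du -h /var/lib/docker/containers/*/*-json.log | sort -h
# Map a container ID from the path back to its container name
docker inspect --format '{{.Name}}' <container-id>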
Step 2: Safely Reclaim Your Disk Space (The Immediate Fix)
Once you’ve identified the massive log file, your first instinct might be to delete it with rm. Do not simply delete the log file while the container is running. Docker maintains an open file handle to this log, so deleting the file can lead to unexpected behavior and won’t actually free up the disk space until the container is stopped and the file handle is released.
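You can observe this for yourself: a deleted-but-still-open log file keeps consuming space until its handle is closed. Assuming lsof is installed, a quick check looks roughly like this:
# List files that were deleted but are still held open by a process
sudo lsof +L1 | grep -i docker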
The correct and immediate solution is to truncate the file, which empties its contents without deleting the file itself.
To safely clear a container’s log file, use the truncate command:
truncate -s 0 /var/lib/docker/containers/[container-id]/[container-id]-json.log
Replace the path with the actual path to the log file you identified in the previous step. This command reduces the file’s size to zero instantly, freeing up your disk space without requiring a container restart.
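If you would rather not assemble the path by hand, Docker can report a container’s log file location for you. A minimal sketch, assuming a container named my-container (a placeholder) and root privileges for writing under /var/lib/docker:
# Look up where Docker stores this container’s log file
LOG_PATH=$(docker inspect --format '{{.LogPath}}' my-container)
# Empty that file without deleting it
sudo truncate -s 0 "$LOG_PATH"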
Step 3: Implement a Permanent Solution with Log Rotation
Clearing the log file is a temporary fix. To prevent the problem from recurring, you need to tell Docker to automatically manage its log files. This is done by configuring the logging driver.
By default, Docker uses the json-file logging driver without any size or rotation limits. You can configure this globally for all containers or specify limits for individual containers.
Global Configuration (Recommended)
The best approach is to set a global logging policy for the Docker daemon. You can do this by editing or creating the daemon’s configuration file at /etc/docker/daemon.json.
Add the following configuration to limit the size and number of log files per container:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
max-size: "50m" sets the maximum size of a single log file to 50 megabytes.
max-file: "3" tells Docker to keep a maximum of 3 log files. When the primary log file hits 50MB, it is rotated and a new one is created; after 3 files, the oldest one is deleted.
This configuration ensures that any single container will use a maximum of 150MB (50MB x 3) for logs.
Important: After saving the daemon.json file, you must restart the Docker daemon for the changes to take effect. Note that the new defaults only apply to containers created after the restart; existing containers keep the logging configuration they were started with. Be aware that restarting the Docker daemon will stop all running containers unless live restore is enabled.
sudo systemctl restart docker
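Once the daemon is back up, it is worth confirming that the new defaults are active. The container name below is a placeholder for any container started after the restart:
# Confirm which logging driver the daemon uses by default
docker info --format '{{.LoggingDriver}}'
# Inspect the log settings applied to a container created after the restart
docker inspect --format '{{.HostConfig.LogConfig}}' my-new-container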
Per-Container Configuration
If you only want to apply limits to specific, noisy containers, you can add flags to your docker run command:
docker run --log-opt max-size=50m --log-opt max-file=3 my-image
If you are using Docker Compose, you can add the logging configuration directly to your docker-compose.yml file:
version: '3.8'
services:
  myapp:
    image: my-image
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "3"
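If you run many services and do not want to repeat the logging block in each one, a standard YAML anchor inside a Compose extension field keeps the policy in one place. A minimal sketch, with service and image names as placeholders:
version: '3.8'

x-default-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "50m"
    max-file: "3"

services:
  myapp:
    image: my-image
    logging: *default-logging
  worker:
    image: my-worker-image
    logging: *default-logging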
Final Housekeeping: The docker system prune Command
While you’re managing disk space, it’s also a good practice to perform general Docker cleanup. Over time, your system can accumulate stopped containers, unused networks, and dangling images.
The docker system prune command is a powerful tool for this. Use it with caution, as it permanently removes data.
docker system prune -a --volumes
-a: This flag tells Docker to remove all unused images, not just dangling ones.
--volumes: This flag will also remove all unused volumes. Be absolutely sure you do not have important data in an unused volume before running this; a more cautious, step-by-step alternative is sketched below.
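If the all-in-one command feels too aggressive, you can prune in smaller, reviewable steps instead. One conservative sequence might look like this:
# Review what exists before deleting anything
docker ps -a
docker volume ls
# Remove only stopped containers, then only dangling images
docker container prune
docker image prune
# Remove unused volumes separately, once you are sure nothing still needs them
docker volume prune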
Take Control of Your Disk Space
Unmanaged container logs are a silent threat to server stability. By regularly monitoring disk usage and implementing a sensible log rotation policy, you can prevent unexpected outages and ensure your Docker environment remains efficient and reliable. Taking these proactive steps turns a potential crisis into a simple, automated maintenance task.
Source: https://linuxhandbook.com/docker-log-space-issue/