Docker-compose not reading logging config in /etc/docker/daemon.json

I've got a daemon.json file, stored in /etc/docker/daemon.json, to configure the docker daemon, with the following contents:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-facility": "local1",
    "tag": "{{.Name}}"
  },
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.fs=xfs",
    "dm.thinpooldev=/dev/mapper/vg00-docker--pool",
    "dm.use_deferred_removal=true"
  ]
}
None of the docker-compose services have logging options configured, nor are any of the docker containers configured to start with --log-driver in their cmd or entrypoint.
Inspecting the output of the docker info command, I can verify that the logging driver is set to syslog.
However, when running a docker-compose stack, all of the containers still show json-file when inspected with docker inspect --format='{{.HostConfig.LogConfig.Type}}'. It seems as if docker-compose is not respecting the /etc/docker/daemon.json config file, but only for the logging config, since the storage driver is set correctly.
The docker version I used to run this is 17.12.0; docker-compose is at 1.19.0.

/etc/docker/daemon.json is the default config file, and the docker daemon should read it on startup if it exists. Maybe there's something wrong with one of the configuration values in your file (the syntax itself looks ok).
Let's try forcing a config-file read with debug enabled and see which error it shows:
systemctl stop docker
/usr/bin/dockerd -D -l debug --config-file /etc/docker/daemon.json
After that, you can see logs with journalctl -u docker
Alternatively, you can easily test each config param by passing them one by one via the CLI instead of the json config file, in order to figure out which of them is causing the configuration not to load:
systemctl stop docker
/usr/bin/dockerd -D -l debug --log-driver syslog --storage-driver devicemapper ...
Adding them one by one, you will be able to check whether, for example, it fails on the storage opts because /dev/mapper/vg00-docker--pool is not mounted, or something similar.
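It is also worth double-checking that the daemon really reports syslog as its default and that the containers were created after the daemon restart, since containers keep the log driver they were created with. For example:
docker info --format '{{.LoggingDriver}}'   # should print "syslog"
docker-compose up -d --force-recreate       # recreate the stack so containers pick up the new default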

Moving docker data directory

I want to store docker data on an external disk following the documentation on daemon.json configuration.
Error:
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives don't match any configuration option: data-root
According to the Docker documentation, the "data-root" configuration option is used to move the data directory.
Also per the documentation, the new disk satisfies the prerequisite of xfs format with ftype=1, as per the output of xfs_info /path/to/disk | grep ftype.
I am using CentOS 7.9.
Stop docker
systemctl stop docker
Edit /etc/docker/daemon.json to include:
{
  "data-root": "/var/lib/test"
}
Note: I found a source suggesting adding "storage-driver": "overlay2" to daemon.json; this results in the same error as at the top of this post.
Sync the old and new directory
sudo rsync -axPSv /var/lib/docker/ /var/lib/test/
Start docker
systemctl start docker
Observe the error
systemctl status docker.service -l
localhost.localdomain dockerd-current[3615]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives don't match any configuration option: data-root
I also tried using a bind mount as suggested elsewhere, and the guidance for CentOS and SELinux, all resulting in the error posted here.
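Since the message comes from dockerd-current (the older RHEL-packaged build), it is worth checking whether the installed daemon even knows the data-root directive, and which directory the running daemon actually uses. A rough sketch:
dockerd --version                            # very old builds (e.g. 1.13.x "dockerd-current") predate data-root
dockerd --help 2>&1 | grep data-root         # no output means this binary does not support the directive
docker info --format '{{.DockerRootDir}}'    # once the daemon is up, confirm which data directory it is using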

Hyperledger Fabric: How to trim logs?

We have been running a fabric network for a while and the docker containers ran out of disk space because of the logs. How can we trim the logs so that they don't take up more than e.g., 1GB of disk space? Older messages should be discarded.
Since it sounds like you are running Fabric in Docker, you should just use Docker's native logging options. It sounds like you are using the default logging, which means the json-file driver. You can either specify Docker-wide settings or per container settings.
Here's an example of a daemon.json file to set global options to limit log file size to 10m and limit the number of log files to keep to 3:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}
If you are using docker-compose to run your containers, you can set per-container logging options in your YAML config file.
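For example, a minimal sketch of a compose service with such options (peer0.org1.example.com is just a placeholder for one of your Fabric service names):
version: '2'
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"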
If you are starting containers using docker run ...., you can use the --log-opt flag, e.g. docker run --log-opt max-file=3 --log-opt max-size=10m ....

Increase Docker container size from default 10GB on rhel7

When I launch a container from a rhel7.3 image, the default container size is 10GB. I want to increase it to 20GB. I tried the ways below but had no luck.
1) Added "DOCKER_STORAGE_OPTIONS": "--storage-opt dm.basesize=20G" to the /etc/docker/daemon.json file. The /etc/docker/daemon.json file is not there by default, so I had to create it, then tried restarting docker. The restart fails with the error below:
"unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives don't match any configuration option: DOCKER_STORAGE_OPTIONS\n"
2) Added the "dm.basesize=20G" parameter when launching the container:
docker run --privileged --storage-opt "dm.basesize=20G" -d IMAGE_ID
but it fails to launch with the error:
"docker: Error response from daemon: Unknown option dm.basesize."
Any help on how I can launch a container with 20GB instead of the default 10GB?
Thanks,
Premchand
I changed the storage driver to "overlay" with the following steps:
1) Added {"storage-driver": "overlay"} to the /etc/docker/daemon.json file. This file was not there in RHEL 7.3, so I added it manually.
2) Restarted docker
My issue of increasing the container volume is resolved, as each container now gets the total amount of space available on the host.
I had the same issue as you; after a lot of research I found a simple solution:
stop the docker service:
sudo systemctl stop docker
edit your docker service file, located at:
/usr/lib/systemd/system/docker.service
find the execution line:
ExecStart=/usr/bin/dockerd
and change it to: ExecStart=/usr/bin/dockerd --storage-opt dm.basesize=20G
start docker service again:
sudo systemctl start docker
all done.
You have the correct flag, --storage-opt dm.basesize=some_size, however this is an argument that should be given to dockerd, not docker.
Try reformatting your daemon.json file to contain:
"storage-opt": [ "dm.basesize=20G" ]

Unable to start docker after configuring hosts in daemon.json

I'm trying to configure docker (version 17.03.1-ce) on Ubuntu 16.04 using the configuration file /etc/docker/daemon.json to add a host:
{
  "debug": true,
  "hosts": ["tcp://0.0.0.0:1234", "unix:///var/run/docker.sock"],
  "dns": ["8.8.8.8", "8.8.4.4"]
}
When I try to restart docker, it fails:
#service docker restart
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Looking at systemctl status docker.service:
Starting Docker Application Container Engine...
docker-slave-ubuntu-build dockerd[24806]: unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file:
hosts: (from flag: [fd://], from file: [tcp://0.0.0.0:4243 unix:///var/run/docker.sock])
Where can I remove the mentioned flag? Do I have to modify the maintainer's script?
For systemd, my preferred method is to deploy a simple override file (you may need to first create the directory):
$ cat /etc/systemd/system/docker.service.d/override.conf
# Disable flags to dockerd, all settings are done in /etc/docker/daemon.json
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
This removes the default -H ... flag from dockerd, along with any other options, and lets you manage docker from the daemon.json file. It also allows Docker to keep changing their startup scripts: as long as those changes don't touch ExecStart, you'll continue to receive them without maintaining your own copy of docker.service.
After creating this file, run systemctl daemon-reload; systemctl restart docker.
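If the drop-in directory doesn't exist yet, creating the override can look roughly like this:
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart docker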
It looks like this is an issue merging configuration from both the command line and configuration file. The default systemd unit file is specifying -H fd:// and it conflicts with your tcp://0.0.0.0:1234 and unix:///var/run/docker.sock.
There are a number of GitHub issues on the subject:
https://github.com/moby/moby/issues/22339
https://github.com/moby/moby/issues/21559
https://github.com/moby/moby/issues/25471
https://github.com/moby/moby/pull/27473
They don't seem to consider this a bug. But it is definitely an annoyance. A workaround is to copy the default unit file and remove the -H fd:// from it:
$ sudo cp /lib/systemd/system/docker.service /etc/systemd/system/
$ sudo sed -i 's/\ -H\ fd:\/\///g' /etc/systemd/system/docker.service
$ sudo systemctl daemon-reload
$ sudo service docker restart
I found this on the Docker docs and it worked on Docker 18.09.1 and CentOS 8:
To work around this problem, create a new file /etc/systemd/system/docker.service.d/docker.conf with the following contents, to remove the -H argument that is used when starting the daemon by default.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
Then reload
systemctl daemon-reload
The reason is:
Docker listens on a socket by default. On Debian and Ubuntu systems using systemd, this means that a host flag -H is always used when starting dockerd. If you specify a hosts entry in the daemon.json, this causes a configuration conflict (as in the above message) and Docker fails to start.
Here is the link: https://docs.docker.com/config/daemon/#troubleshoot-conflicts-between-the-daemonjson-and-startup-scripts
In my case I tried to add both the daemon.json under /etc/docker and a *.conf file under /etc/systemd/system/docker.service.d.
It turned out it was enough to have a .conf file only (in my case called override.conf):
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375
This way I could expose the docker socket over TCP.
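To confirm the daemon is actually reachable on that TCP endpoint after the restart, something like:
docker -H tcp://127.0.0.1:2375 version   # should print both client and server versions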
I had copied the daemon.json from a website. After running
sudo systemctl stop docker
/usr/sbin/dockerd
it showed me a better error message stating that I had a strange invisible character in the daemon.json file.
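A quick way to spot that kind of problem is to run the file through a JSON parser and make non-printing characters visible (a sketch, assuming python3 and GNU cat are available):
python3 -m json.tool /etc/docker/daemon.json   # fails loudly on invalid JSON
cat -A /etc/docker/daemon.json                 # shows invisible/control characters explicitly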

Docker: How to clear the logs properly for a Docker container?

I use docker logs [container-name] to see the logs of a specific container.
Is there an elegant way to clear these logs?
First the bad answer. From this question there's a one-liner that you can run:
echo "" > $(docker inspect --format='{{.LogPath}}' <container_name_or_id>)
instead of echo, there's the simpler:
: > $(docker inspect --format='{{.LogPath}}' <container_name_or_id>)
or there's the truncate command:
truncate -s 0 $(docker inspect --format='{{.LogPath}}' <container_name_or_id>)
I'm not a big fan of either of those since they modify Docker's files directly. The external log deletion could happen while docker is writing json formatted data to the file, resulting in a partial line, and breaking the ability to read any logs from the docker logs cli. For an example of that happening, see this comment on duketwo's answer:
after emptying the logfile, I get this error: error from daemon in stream: Error grabbing logs: invalid character '\x00' looking for beginning of value
Instead, you can have Docker automatically rotate the logs for you. This is done with additional flags to dockerd if you are using the default JSON logging driver:
dockerd ... --log-opt max-size=10m --log-opt max-file=3
You can also set this as part of your daemon.json file instead of modifying your startup scripts:
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
These options need to be configured with root access. Make sure to run a systemctl reload docker after changing this file to have the settings applied. This setting will then be the default for any newly created containers. Note, existing containers need to be deleted and recreated to receive the new log limits.
Similar log options can be passed to individual containers to override these defaults, allowing you to save more or fewer logs on individual containers. From docker run this looks like:
docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 ...
or in a compose file:
version: '3.7'
services:
  app:
    image: ...
    logging:
      options:
        max-size: "10m"
        max-file: "3"
For additional space savings, you can switch from the json log driver to the "local" log driver. It takes the same max-size and max-file options, but instead of storing in json it uses a binary syntax that is faster and smaller. This allows you to store more logs in the same sized file. The daemon.json entry for that looks like:
{
  "log-driver": "local",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
The downside of the local driver is that external log parsers/forwarders that depend on direct access to the json logs will no longer work. So if you use a tool like filebeat to send to Elastic, or Splunk's universal forwarder, I'd avoid the "local" driver.
I've got a bit more on this in my Tips and Tricks presentation.
Use:
truncate -s 0 /var/lib/docker/containers/**/*-json.log
You may need sudo
sudo sh -c "truncate -s 0 /var/lib/docker/containers/**/*-json.log"
ref. Jeff S. Docker: How to clear the logs properly for a Docker container?
Reference: Truncating a file while it's being used (Linux)
On Docker for Windows and Mac, and probably others too, it is possible to use the tail option. For example:
docker logs -f --tail 100
This way, only the last 100 lines are shown, and you don't have to first scroll through 1M lines...
(And thus, deleting the log is probably unnecessary)
sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
You can set up logrotate to clear the logs periodically.
Example file in /etc/logrotate.d/docker-logs
/var/lib/docker/containers/*/*.log {
    rotate 7
    daily
    compress
    size=50M
    missingok
    delaycompress
    copytruncate
}
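You can test the rotation rules without waiting for the schedule by doing a dry run against that file, for example:
sudo logrotate -d /etc/logrotate.d/docker-logs   # -d is a debug/dry run; it only prints what would happen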
You can also supply the log-opts parameters on the docker run command line, like this:
docker run --log-opt max-size=10m --log-opt max-file=5 my-app:latest
or in a docker-compose.yml like this
my-app:
  image: my-app:latest
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "5"
Credits: https://medium.com/@Quigley_Ja/rotating-docker-logs-keeping-your-overlay-folder-small-40cfa2155412 (James Quigley)
Docker4Mac, a 2018 solution:
LOGPATH=$(docker inspect --format='{{.LogPath}}' <container_name_or_id>)
docker run -it --rm --privileged --pid=host alpine:latest nsenter -t 1 -m -u -n -i -- truncate -s0 $LOGPATH
The first line gets the log file path, similar to the accepted answer.
The second line uses nsenter, which allows you to run commands in the xhyve VM that serves as the host for all the docker containers under Docker4Mac. The command we run is the familiar truncate -s0 $LOGPATH from the non-Mac answers.
If you're using docker-compose, the first line becomes:
local LOGPATH=$(docker inspect --format='{{.LogPath}}' $(docker-compose ps -q <service>))
and <service> is the service name from your docker-compose.yml file.
Thanks to https://github.com/justincormack/nsenter1 for the nsenter trick.
You can't do this directly through a Docker command.
You can either limit the log's size, or use a script to delete logs related to a container. You can find scripts examples here (read from the bottom): Feature: Ability to clear log history #1083
Check out the logging section of the docker-compose file reference, where you can specify options (such as log rotation and log size limit) for some logging drivers.
Here is a cross platform solution to clearing docker container logs:
docker run --rm -v /var/lib/docker:/var/lib/docker alpine sh -c "echo '' > $(docker inspect --format='{{.LogPath}}' CONTAINER_NAME)"
Paste this into your terminal and change CONTAINER_NAME to the desired container name or id.
As a root user, try to run the following:
> /var/lib/docker/containers/*/*-json.log
or
cat /dev/null > /var/lib/docker/containers/*/*-json.log
or
echo "" > /var/lib/docker/containers/*/*-json.log
On my Ubuntu servers even as sudo I would get Cannot open ‘/var/lib/docker/containers/*/*-json.log’ for writing: No such file or directory
But combining the docker inspect and truncate answers worked:
sudo truncate -s 0 `docker inspect --format='{{.LogPath}}' <container>`
I prefer this one (from the solutions above):
truncate -s 0 /var/lib/docker/containers/*/*-json.log
However I'm running several systems (Ubuntu 18.x Bionic for example), where this path does not work as expected. Docker is installed through Snap, so the path to containers is more like:
truncate -s 0 /var/snap/docker/common/var-lib-docker/containers/*/*-json.log
This will delete all logfiles for all containers:
sudo find /var/lib/docker/containers/ -type f -name "*.log" -delete
Thanks to BMitch's answer, I've written a shell script to clean the logs of all the containers:
#!/bin/bash
ids=$(docker ps -a --format='{{.ID}}')
for id in $ids
do
    echo $(docker ps -a --format='{{.ID}} ### {{.Names}} ### {{.Image}}' | fgrep $id)
    truncate -s 0 $(docker inspect --format='{{.LogPath}}' $id)
    ls -llh $(docker inspect --format='{{.LogPath}}' $id)
done
Not sure if this is helpful for you, but removing the container always helps.
So, if you use docker-compose for your setup, you can simply use docker-compose down && docker-compose up -d instead of docker-compose restart. With a proper setup (make sure to use volume mounts for persistent data), you don't lose any data this way.
Sure, this is more than the OP requested. But there are various situations where the other answers cannot help, for example when using a remote docker server, or when working on a Windows machine where accessing the underlying filesystem is difficult.
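For the "proper setup" part, the key is that any state lives in a named volume (or bind mount) rather than in the container filesystem. A minimal sketch, with placeholder service and volume names:
version: '3.7'
services:
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
With this, docker-compose down && docker-compose up -d recreates the container (and therefore a fresh, empty log file) while the dbdata volume is kept.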
Linux/Ubuntu:
If you have several containers and you want to remove just one log but not the others:
(If you have issues like "Permission denied", first run sudo su.)
List all containers: docker ps -a
Look for the container you desire and copy the CONTAINER ID. Example: E1X2A3M4P5L6.
Container folder names (and full container IDs) are longer than E1X2A3M4P5L6, but the first 12 characters are the ones shown by docker ps -a.
Remove just that log:
> /var/lib/docker/containers/E1X2A3M4P5L6*/E1X2A3M4P5L6*-json.log (Replace E1X2A3M4P5L6 with your result!)
As you can see, the containers live inside /containers, and each log has the same name as its container but with -json.log at the end. You just need to know the first 12 characters, because * means "anything".
Docker for Mac users, here is the solution:
Find log file path by:
$ docker inspect <container_name_or_id> | grep log
SSH into the docker machine (suppose the name is default; if not, run docker-machine ls to find out):
$ docker-machine ssh default
Change to the root user:
$ sudo -i
Delete the log file content:
$ echo "" > log_file_path_from_step1
I needed something I could run as one command, instead of having to run docker ps, copy over each Container ID, and run the command multiple times. I've adapted BMitch's answer and thought I'd share it in case someone else finds it useful.
Mixing xargs seems to pull off what I need here:
docker ps --format='{{.ID}}' | \
xargs -I {} sh -c 'echo > $(docker inspect --format="{{.LogPath}}" {})'
This grabs each Container ID listed by docker ps (this will erase the logs of every container on that list!), pipes them into xargs, and then echoes a blank string into each container's log file.
To remove/clear docker container logs, we can use the command below:
echo "" > $(docker inspect container_id | grep "LogPath" | cut -d '"' -f4)
or
echo "" > $(docker inspect container_name | grep "LogPath" | cut -d '"' -f4)
If you need to store a backup of the log files before deleting them, I have created a script that performs the following actions (you have to run it with sudo) for a specified container:
Creates a folder to store compressed log files as backup.
Looks for the running container's id (specified by the container's name).
Copies the container's log file to a new location (the folder from step 1) using a random name.
Compresses the copied log file (to save space).
Truncates the container's log file to a certain size that you can define.
Notes:
It uses the shuf command. Make sure your Linux distribution has it, or change it to another bash-supported random generator.
Before use, change the variable CONTAINER_NAME to match your running container; it can be a partial name (doesn't have to be the exact matching name).
By default it truncates the log file to 10M (10 megabytes), but you can change this size by modifying the variable SIZE_TO_TRUNCATE.
It creates a folder in the path: /opt/your-container-name/logs, if you want to store the compressed logs somewhere else, just change the variable LOG_FOLDER.
Run some tests before running it in production.
#!/bin/bash
set -ex
############################# Main Variables Definition:
CONTAINER_NAME="your-container-name"
SIZE_TO_TRUNCATE="10M"
############################# Other Variables Definition:
CURRENT_DATE=$(date "+%d-%b-%Y-%H-%M-%S")
RANDOM_VALUE=$(shuf -i 1-1000000 -n 1)
LOG_FOLDER="/opt/${CONTAINER_NAME}/logs"
CN=$(docker ps --no-trunc -f name=${CONTAINER_NAME} | awk '{print $1}' | tail -n +2)
LOG_DOCKER_FILE="$(docker inspect --format='{{.LogPath}}' ${CN})"
LOG_FILE_NAME="${CURRENT_DATE}-${RANDOM_VALUE}"
############################# Procedure:
mkdir -p "${LOG_FOLDER}"
cp ${LOG_DOCKER_FILE} "${LOG_FOLDER}/${LOG_FILE_NAME}.log"
cd ${LOG_FOLDER}
tar -cvzf "${LOG_FILE_NAME}.tar.gz" "${LOG_FILE_NAME}.log"
rm -rf "${LOG_FILE_NAME}.log"
truncate -s ${SIZE_TO_TRUNCATE} ${LOG_DOCKER_FILE}
You can create a cronjob to run the previous script every month. First run:
sudo crontab -e
Press a on your keyboard to enter edit mode. Then add the following line:
0 0 1 * * /your-script-path/script.sh
Hit the escape key to exit edit mode. Save the file by typing :wq and hitting enter. Make sure the script.sh file has execute permissions.
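Setting the execute permission, and optionally adding the crontab entry non-interactively instead of through the editor, can look like this:
sudo chmod +x /your-script-path/script.sh
(sudo crontab -l 2>/dev/null; echo "0 0 1 * * /your-script-path/script.sh") | sudo crontab -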
On computers with docker desktop we use:
truncate -s 0 //wsl.localhost/docker-desktop-data/data/docker/containers/*/*-json.log
For Linux distributions you can use this; it works for me with this path:
truncate -s 0 /var/lib/docker/containers/*/*-json.log
docker system prune
Run this command in a command prompt.
