docker logs --details flag showing nothing more - docker

I'm trying to view docker logs with the --details flag.
I read the docs but I see no difference with or without the flag: https://docs.docker.com/engine/reference/commandline/logs/
For example, this command echoes the date every second.
$ docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"
e9d836000532
This command shows logs :
$ docker logs e9d836000532
Sun Jan 26 16:01:55 UTC 2020
...
This command adds nothing more than a "space on the left":
$ docker logs --details e9d836000532
...
Sun Jan 26 16:01:55 UTC 2020

From docker documentation:
The docker logs --details command will add on extra attributes, such
as environment variables and labels, provided to --log-opt when
creating the container.
Currently you only get an extra space on the left when you use docker logs --details because you probably did not pass --log-opt when you created your container.
For your interest, --log-opt passes options to the logging driver; together with --log-driver it also lets you use a driver other than Docker's default json-file one.
Try out this one :
https://docs.docker.com/config/containers/logging/fluentd/
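If you just want docker logs --details to show something without switching drivers, the default json-file driver also accepts labels and env log options. For example, something along these lines (the container name, label, and output line here are made up for illustration):
$ docker run --name test-attrs -d \
    --label app=demo --log-opt labels=app \
    busybox sh -c "while true; do date; sleep 1; done"
$ docker logs --details test-attrs
app=demo Sun Jan 26 16:05:12 UTC 2020
...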

Related

How to disable clock sync in docker-desktop VM for MACOS

Using macOS Catalina and Docker Desktop.
The time of the containers syncs perfectly with the time in the Docker Desktop VM.
But I need to test one container with a date in the future.
I don't want to advance the clock of my Mac because of iCloud services.
So I can achieve this just by changing the time in the Docker Desktop VM.
I run:
docker run --privileged --rm alpine date -s "2023-02-19 11:27"
It changes the time fine, but it only lasts a few seconds. Clearly there is some kind of "synchronizer" that keeps changing the time back.
How do I disable this "synchronizer"?
There's only one clock in Linux; time is not namespaced. So when Docker runs NTP on the VM to keep it synchronized (in the past it would get out of sync, especially after the parent laptop was put to sleep), that sync applies to the Linux kernel, which applies to every container, since it's the same kernel value for everything. Therefore it's impossible to set this on just one container in the Linux kernel.
Instead, I'd recommend going with something like libfaketime, which can be used to alter the response applications see when they query that time value. It basically sits as a layer between the kernel and the application, and injects an offset based on an environment variable you set.
FROM debian
ARG DEBIAN_FRONTEND=noninteractive
# Install libfaketime and clean up the apt lists to keep the image small.
RUN apt-get update \
    && apt-get install -y libfaketime \
    && rm -rf /var/lib/apt/lists/*
# Preload libfaketime so every process in the container sees the faked time.
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1
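Build the image first (the name test-faketime is simply what the commands below assume):
$ docker build -t test-faketime .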
And then to run it, set FAKETIME:
$ docker run --rm test-faketime date
Thu Feb 17 14:59:48 UTC 2022
$ docker run -e FAKETIME="+7d" --rm test-faketime date
Thu Feb 24 14:59:55 UTC 2022
$ date
Thu 17 Feb 2022 09:59:57 AM EST
I found that you can kill the NTP service that syncs the VM time to the host's time.
First, use this guide to get a shell inside the VM.
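One common way to get such a shell (an assumption on my part, since the linked guide isn't reproduced here) is the publicly available nsenter helper image:
$ docker run -it --rm --privileged --pid=host justincormack/nsenter1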
Then, find the sntpc service:
/ # ps a | grep sntpc
1356 root 0:00 /usr/bin/containerd-shim-runc-v2 -namespace services.linuxkit -id sntpc -address /run/containerd/containerd.sock
1425 root 0:00 /usr/sbin/sntpc -v -i 30 127.0.0.1
3465 root 0:00 grep sntpc
Take the number at the beginning of the /usr/sbin/sntpc line, and use kill to stop the process.
/ # kill 1425
I have found that Docker Desktop does not seem to restart this process if it dies, and you can change the VM time without SNTPC changing it back.

Why does Loki's Docker Driver Client stop logging after some time?

I want to send logs of my Docker containers to Grafana Loki. Therefore, I installed Loki's Docker Driver Client and started my containers with it. First I can see logs, but after some time I see no more logs.
Installation
I installed Loki's Docker Driver Client as a Docker plugin on my Docker Engine (version 20.10.2):
$ docker plugin install grafana/loki-docker-driver:master-54d1d3b --alias loki --grant-all-permissions
I didn't use the tag latest, because of the bug Unable to connect to logging plugin in Swarm
Configuration
I started my Docker containers with Loki's Docker Driver Client as log driver:
$ docker container run \
    --log-driver=loki \
    --log-opt loki-url="$LOKI_URL" \
    --log-opt loki-retries=5 \
    --log-opt loki-batch-size=400 \
    --log-opt max-size="10m" \
    --log-opt max-file=5 \
    --detach \
    --name $CONTAINER_NAME \
    --restart unless-stopped \
    $IMAGE:$TAG
I also added the json-file driver's max-size and max-file options to limit disk space, see Configuring the Docker Driver.
Problem
First I could see logs in Grafana and on the command line with docker container logs, but after some time no more logs were shown. When I tried to look into the logs on the Docker host, I saw an error:
$ docker container logs 75d4b13eb3e8
error from daemon in stream: Error grabbing logs: error getting log reader: LogDriver.ReadLogs: logger does not exist for 75d4b13eb3e8203b9247ecdeb41fdf495cc8fea7dcfc4775fd8261263b1dcd32
Research
I looked into the directories of the containers (see Where is a log file with logs from a container?), but I couldn't see any log files:
$ sudo ls /var/lib/docker/containers/75d4b13eb3e8203b9247ecdeb41fdf495cc8fea7dcfc4775fd8261263b1dcd32
checkpoints config.v2.json hostconfig.json hostname hosts mounts resolv.conf resolv.conf.hash
I also checked the log path (see Get an instance’s log path), but it was empty:
$ docker inspect --format='{{.LogPath}}' 75d4b13eb3e8
I found the container's logs in the plugin's directory (see Loki log driver not storing logs as files on disk, even with keep-file: true), but the log files no longer change:
$ sudo ls -la /var/lib/docker/plugins/eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288/rootfs/var/log/docker/75d4b13eb3e8203b9247ecdeb41fdf495cc8fea7dcfc4775fd8261263b1dcd32
total 912
drwxr-xr-x 2 root root 4096 Jan 22 12:59 .
drwxr-xr-x 17 root root 4096 Jan 22 15:46 ..
-rw-r----- 1 root root 923177 Jan 22 13:34 json.log
I looked into Docker daemon's logs (see Read the logs) and found errors and a warning (at the same time logging stopped):
$ sudo journalctl -u docker.service | grep eac33cc9913c
[...]
[...]level=error msg="panic: send on closed channel" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="goroutine 153 [running]:" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="main.(*loki).Log(0xc0000c5e00, 0xc0001d81c0, 0xc0000c5e80, 0x0)" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="\t/src/loki/cmd/docker-driver/loki.go:69 +0x2fb" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="main.consumeLog(0xc0002c0480)" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="\t/src/loki/cmd/docker-driver/driver.go:165 +0x4c2" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="created by main.(*driver).StartLogging" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="\t/src/loki/cmd/docker-driver/driver.go:116 +0xa75" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=warning msg="Unable to connect to plugin: /run/docker/plugins/eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288/loki.sock/LogDriver.StopLogging: Post http://%2Frun%2Fdocker%2Fplugins%2Feac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288%2Floki.sock/LogDriver.StopLogging: EOF, retrying in 1s"
[...]
What did I do wrong?
I was experiencing the same issue.
My only differences in configuration are that I'm trialing the latest Enterprise Edition (19.03), since it brings dual logging capability (although this is also supported in the latest CE versions), and that I'm using the latest Loki Docker driver client, now that the GitHub issue mentioned above has been resolved.
I ended up setting the log-opts properties no-file and keep-file in docker-compose.yml:
logging:
  driver: "loki"
  options:
    loki-url: "http://${LOKI_URL}:3100/loki/api/v1/push"
    loki-batch-size: "400"
    no-file: "false"
    keep-file: "true"
    max-size: "5m"
    max-file: "3"
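For context, this logging block goes under a service definition in docker-compose.yml; a minimal sketch (the service and image names are placeholders):
services:
  my-service:
    image: nginx:alpine
    logging:
      driver: "loki"
      options:
        loki-url: "http://${LOKI_URL}:3100/loki/api/v1/push"
        no-file: "false"
        keep-file: "true"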
Since making this change I am receiving logs in Loki and can still use docker container logs and docker service logs on my Docker hosts.
no-file: "false" tells the driver to keep creating log files on disk, and keep-file: "true" tells it to keep the JSON log files when a container is stopped (by default the files are removed).
Note: originally I added these settings to /etc/docker/daemon.json on the host but would still see the error getting log reader issue; I had to switch to specifying the log driver per container/swarm service.
Regarding this issue:
First I could see logs in Grafana and in command line with docker container logs, but after some time no more logs were shown.
In Grafana, please select Query type: Range, not Instant, and you will see all the logs for the selected period of time, if they exist in Loki.

Saving docker container logs with container names instead of container IDs

With the default json-file logging driver, is there a way to rotate docker container logs with container names instead of container IDs?
The container IDs in the log file names are not very readable, which is why I thought of saving the logs under container names instead.
It's possible to configure the engine with log options to include labels in the logs:
# cat /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "com.docker.stack.namespace,com.docker.swarm.service.name,environment"
  }
}
# docker run --label environment=dev busybox echo hello logs
hello logs
# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9615c898c2d2 busybox "echo hello logs" 8 seconds ago Exited (0) 7 seconds ago eloquent_germain
# docker logs --details 961
environment=dev hello logs
# more /var/lib/docker/containers/9615c898c2d2aa7439581e08c2e685f154e4bf2bb9fd5ded0c384da3242c6c9e/9615c898c2d2aa7439581e08c2e685f154e4bf2bb9fd5ded0c384da3242c6c9e-json.log
{"log":"hello logs\n","stream":"stdout","attrs":{"environment":"dev"},"time":"2020-09-22T11:12:41.279155826Z"}
You need to reload the docker engine after making changes to the daemon.json, and changes only apply to newly created containers. For systemd, reloading is done with systemctl reload docker.
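To confirm that a newly created container picked up the options, you can inspect its log configuration (the container ID here is the one from the example above); it should print json-file along with the max-size, max-file, and labels settings:
# docker inspect --format '{{.HostConfig.LogConfig}}' 9615c898c2d2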
To specifically pass the container name, which isn't a label, you can pass a "tag" setting:
# docker run --name test-log-opts --log-opt tag="{{.Name}}/{{.ID}}" busybox echo hello log opts
hello log opts
# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c201d0a2504a busybox "echo hello log opts" 6 seconds ago Exited (0) 5 seconds ago test-log-opts
# docker logs --details c20
tag=test-log-opts%2Fc201d0a2504a hello log opts
# more /var/lib/docker/containers/c201d0a2504addedb2b6785850a83e8931052d0d9778438e9dcc27391f45fec2/c201d0a2504addedb2b6785850a83e8931052d0d9778438e9dcc27391f45fec2-json.log
{"log":"hello log opts\n","stream":"stdout","attrs":{"tag":"test-log-opts/c201d0a2504a"},"time":"2020-09-22T11:15:26.998956544Z"}
For more details:
JSON log driver options: https://docs.docker.com/config/containers/logging/json-file/#options
Container logging tags: https://docs.docker.com/config/containers/logging/log_tags/

SCADA LTS - HTTP Status 404

After starting a SCADA LTS Docker container as suggested on https://github.com/SCADA-LTS/Scada-LTS with the following command:
docker run -it -e DOCKER_HOST_IP=$(docker-machine ip) -p 81:8080 scadalts/scadalts /root/start.sh
the container works well for some time, and then suddenly an "HTTP Status 404" error is shown, like the following:
http://[IP]/ScadaBR/
HTTP Status 404 - /ScadaBR/
type Status report
message /ScadaBR/
description The requested resource is not available.
Apache Tomcat/7.0.85
Where [IP] is the default Docker IP address and port; most of the time it is localhost:81.
Any idea how to solve it?
Thank you in advance!
TL;DR
After running for some time, the MySQL service dies. It is necessary to restart it manually with this:
docker exec scada service mysql restart
docker exec scada killall tail
DETAILED REPORT
When the error is shown, you can check whether all the services are running in the container (in this case named 'scada'):
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
790 ? 01:00:22 java
791 ? 00:01:27 tail
858 ? 00:00:00 ps
As can be seen, no MySQL service is running. This explains why Tomcat is running but SCADA-LTS isn't.
You can restart MySQL service inside the container with:
docker exec scada service mysql restart
After that, SCADA-LTS is still down and you have to restart Tomcat, which can be done this way:
docker exec scada killall tail
After a minute or less, all the services are running:
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
43 ? 00:00:00 mysqld_safe
398 ? 00:00:00 mysqld
481 ? 00:00:31 java
482 ? 00:00:00 sleep
618 ? 00:00:00 ps
Now SCADA-LTS is running!
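If you don't want to run these commands by hand every time, a small watchdog loop on the Docker host can do it for you. This is only a sketch built from the docker exec commands above, and it assumes the container is named 'scada':
#!/bin/sh
# Restart MySQL (and then Tomcat) whenever mysqld disappears from the container.
while true; do
  if ! docker exec scada ps -A | grep -q '[m]ysqld'; then
    docker exec scada service mysql restart
    docker exec scada killall tail   # per the steps above, this makes start.sh bring Tomcat back
  fi
  sleep 60
done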

How to limit `docker run` execution time?

I want to run a command inside a docker container. If the command takes more than 3 seconds to finish, the container should be deleted.
I thought I could achieve this goal by using the --stop-timeout option of docker run.
But it looks like something goes wrong with my command.
For example, the command docker run -d --stop-timeout 3 ubuntu:14.04 sleep 100 creates a docker container that lasts for more than 3 seconds; the container is not stopped or deleted after the 3rd second.
Do I misunderstand the meaning of --stop-timeout?
The document says
--stop-timeout Timeout (in seconds) to stop a container
Here's my docker version:
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:03:51 2017
OS/Arch: darwin/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: true
The API version is newer than 1.25.
You can try
timeout 3 docker run...
there is an open issue on that subject:
https://github.com/moby/moby/issues/1905
See also
Docker timeout for container?
The --stop-timeout option is the maximum amount of time docker should wait for your container to stop when using the docker stop command.
A container stops when it's told to or when the command it is running finishes, so if you change your sleep from 100 to 1, you'll see that the container is stopped after a second.
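You can see what --stop-timeout actually controls with something like this (an illustrative sketch; timings are approximate). sleep running as PID 1 ignores SIGTERM, so docker stop waits for the configured timeout and then sends SIGKILL:
$ docker run -d --name demo --stop-timeout 3 ubuntu:14.04 sleep 100
$ time docker stop demo    # SIGTERM is ignored; SIGKILL arrives after ~3s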
What I'd advise you to do is change the ENTRYPOINT of your container to a script that you create, which executes what you want, keeps track of the execution time from within, and exits on timeout.
After that you can start your container using the --rm option that will delete it once the script finishes.
A small example.
Dockerfile:
FROM ubuntu:16.04
COPY ./script.sh /script.sh
RUN chmod +x /script.sh
ENTRYPOINT ["/script.sh"]
script.sh:
#!/bin/bash
# Hard limit in seconds and the polling interval.
timeout=5
sleep_for=1
# The actual workload, started in the background.
sleep 100 &
find_process=$(ps aux | grep -v "grep" | grep "sleep")
while [ -n "$find_process" ]; do
  find_process=$(ps aux | grep -v "grep" | grep "sleep")
  if [ "$timeout" -le "0" ]; then
    echo "Timeout"
    exit 1
  fi
  timeout=$(($timeout - $sleep_for))
  sleep $sleep_for
done
exit 0
Run it using:
docker build -t testing .
docker run --rm testing
This script executes sleep 100 in the background, checks whether it's still running, and exits if the timeout of 5 seconds is reached.
This might not be the best way to do it, but if you want to do something simple it may help.
docker run --rm ubuntu timeout 2 sh -c 'echo start && sleep 30 && echo finish'
will terminate after 2 seconds, and finish will never be printed.
Depending on what exactly you want to achieve, the --ulimit parameter to docker run may do what you need. For example:
docker run --rm -it --ulimit cpu=1 debian:buster bash -c '(while true; do true; done)'
After about 1s, this will print Killed and return. Without the --ulimit option, it would run forever.
However, note that this only limits the CPU time, not the wall clock time. You can happily run sleep 24h with a --ulimit cpu=1 because sleep does not consume CPU time.
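To illustrate: this returns normally after the full five seconds of wall-clock time, because sleeping consumes almost no CPU time:
$ docker run --rm --ulimit cpu=1 debian:buster sleep 5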
In my case, I had a docker container that started an Express server, and then remained running, and I wanted a simple test on CI to check that the container can start without any immediate error (such as configuration errors).
I made sure my code returned a non-zero exit code if something failed during start, and then ended up with this:
timeout 10 docker run [ container params ]; test $? -eq 124 && echo "Container ran for 10 seconds without failing"
This will send SIGTERM to the docker container after 10 seconds if it has not already died. If it's alive long enough for the timeout to occur, it will return 124, which is what the test is for. In other words, this verifies that the docker ran long enough to reach a timeout, any error (or early exit with code 0!) will be considered an error.
