Saving docker container logs with container names instead of container IDs - docker

With the default json-file logging driver, is there a way to rotate Docker container logs using the container names instead of the container IDs?
The container IDs in the log file names are not very readable, which is why I thought of saving the logs under container names instead.

It's possible to configure the engine with log options to include labels in the logs:
# cat /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "com.docker.stack.namespace,com.docker.swarm.service.name,environment"
  }
}
# docker run --label environment=dev busybox echo hello logs
hello logs
# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9615c898c2d2 busybox "echo hello logs" 8 seconds ago Exited (0) 7 seconds ago eloquent_germain
# docker logs --details 961
environment=dev hello logs
# more /var/lib/docker/containers/9615c898c2d2aa7439581e08c2e685f154e4bf2bb9fd5ded0c384da3242c6c9e/9615c898c2d2aa7439581e08c2e685f154e4bf2bb9fd5ded0c384da3242c6c9e-json.log
{"log":"hello logs\n","stream":"stdout","attrs":{"environment":"dev"},"time":"2020-09-22T11:12:41.279155826Z"}
You need to reload the docker engine after making changes to the daemon.json, and changes only apply to newly created containers. For systemd, reloading is done with systemctl reload docker.
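To double-check that the new settings are in effect (a quick sketch; the container ID is just the one from the example above), you can ask both the daemon and an individual container which driver they use:
# docker info --format '{{.LoggingDriver}}'
json-file
# docker inspect --format '{{.HostConfig.LogConfig.Type}}' 9615c898c2d2
json-file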
To specifically pass the container name, which isn't a label, you can pass a "tag" setting:
# docker run --name test-log-opts --log-opt tag="{{.Name}}/{{.ID}}" busybox echo hello log opts
hello log opts
# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c201d0a2504a busybox "echo hello log opts" 6 seconds ago Exited (0) 5 seconds ago test-log-opts
# docker logs --details c20
tag=test-log-opts%2Fc201d0a2504a hello log opts
# more /var/lib/docker/containers/c201d0a2504addedb2b6785850a83e8931052d0d9778438e9dcc27391f45fec2/c201d0a2504addedb2b6785850a83e8931052d0d9778438e9dcc27391f45fec2-json.log
{"log":"hello log opts\n","stream":"stdout","attrs":{"tag":"test-log-opts/c201d0a2504a"},"time":"2020-09-22T11:15:26.998956544Z"}
For more details:
JSON log driver options: https://docs.docker.com/config/containers/logging/json-file/#options
Container logging tags: https://docs.docker.com/config/containers/logging/log_tags/
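If you start containers from a Compose file instead of docker run, the same options can be set per service under the logging key. A minimal sketch (the service name, label, and tag template are only examples):
version: "2.4"
services:
  web:
    image: busybox
    labels:
      environment: dev
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
        labels: "environment"
        tag: "{{.Name}}/{{.ID}}"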

Related

How to find how a container was started: "docker run" or "docker-compose up"?

When accessing a remote machine, I'd like to know whether a container was started via docker run, docker-compose, or some other means.
Is that even possible?
EDIT: the main reason for this was to find out where these containers are being orchestrated, i.e. if a container goes down, will it be started again? Where would that configuration be?
For investigation purposes I created the simplest possible docker-compose.yml:
version: "2.4"
services:
  hello:
    image: "hello-world"
Then I ran it with docker-compose up.
And lastly, the normal way: docker run -it --name cli hello-world
So I had two stopped containers:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6a8d53ff45a4 hello-world "/hello" 9 minutes ago Exited (0) 9 minutes ago cli
d54f7a2ae8b2 hello-world "/hello" 9 minutes ago Exited (0) 9 minutes ago compose_hello_1
Then I compared the inspect output of both:
diff <(docker inspect cli) <(docker inspect compose_hello_1)
I found out that there are labels which compose creates:
"Labels": {}
---
"Labels": {
"com.docker.compose.config-hash": "251ebf43e00417fde81d3c53b9f3d8cd877e1beec00ebbffbc4a06c4db9c7b00",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "compose",
"com.docker.compose.service": "hello",
"com.docker.compose.version": "1.24.1"
}
Compose also uses a different network:
"NetworkMode": "default",
---
"NetworkMode": "compose_default",
You should do the same in your environment and look for differences that let you reliably distinguish between the two ways of launching containers.
Unfortunately, there is no direct way to tell. You can only infer it from secondary properties by running docker inspect container-name.
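If you want to script that check, a quick sketch based on the Compose labels shown above (the container name is just a placeholder) is to ask docker inspect for the Compose project label directly; it prints an empty string for containers not started by docker-compose:
$ docker inspect --format '{{ index .Config.Labels "com.docker.compose.project" }}' compose_hello_1
compose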

docker logs --details flag showing nothing more

I'm trying to view docker logs with the --details flag.
I read the docs, but I see no difference with or without the flag: https://docs.docker.com/engine/reference/commandline/logs/
For example, this command echoes the date every second.
$ docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"
e9d836000532
This command shows the logs:
$ docker logs e9d836000532
Sun Jan 26 16:01:55 UTC 2020
...
This command adds nothing more than a "space on the left":
$ docker logs --details e9d836000532
...
Sun Jan 26 16:01:55 UTC 2020
From docker documentation:
The docker logs --details command will add on extra attributes, such
as environment variables and labels, provided to --log-opt when
creating the container.
Currently you only get an extra space on the left when you use docker logs --details because you probably did not pass any --log-opt when you created your container.
For reference, --log-opt passes options to the logging driver; it can also be used to configure a log driver other than Docker's default one.
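For example, the default json-file driver accepts a labels log option, so something like this (a sketch; the environment label is arbitrary) makes docker logs --details show more than just a space:
$ docker run --name details-demo -d --label environment=dev --log-opt labels=environment busybox sh -c "while true; do date; sleep 1; done"
$ docker logs --details details-demo
Each line is then prefixed with environment=dev, matching the attrs shown in the first answer above.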
Try out this one:
https://docs.docker.com/config/containers/logging/fluentd/

Docker View Historical Logs

Background:
For Development purposes I do a lot of docker-compose up -d and docker-compose stop.
To view logs of a container I do either
- docker logs --details --since=1m -t -f container_name
or
- docker inspect --format='{{.LogPath}}' container_name
cat path-from-previous
The problem is that when I want to view logs older than 10 days, there are none; the logs only contain today's entries.
When I do a docker inspect container_name I get the following:
"Created": "todays-timestamp"
My logging is the default config:
"LogConfig": {
"Type": "json-file",
"Config": {}
},
The reason behind this is that there is no rotation of your Docker logs.
If you are using a Linux system, go to:
/etc/logrotate.d/
and create the file docker-container there, i.e. /etc/logrotate.d/docker-container.
Write this into the file:
/var/lib/docker/containers/*/*.log {
  rotate 7
  daily
  compress
  missingok
  delaycompress
  copytruncate
}
It takes the log files of all containers and rotates and compresses them daily.
You can test this with:
logrotate -fv /etc/logrotate.d/docker-container
Enter your Docker folder /var/lib/docker/containers/[CONTAINER ID]/ and you can see the rotation.
reference: https://sandro-keil.de/blog/logrotate-for-docker-container/
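Alternatively, the json-file driver has built-in rotation, so you can get the same effect without logrotate by setting max-size and max-file in /etc/docker/daemon.json, as shown in the first answer above (a sketch; the values are examples, and it only applies to containers created after the daemon has been reloaded):
{
  "log-opts": {
    "max-size": "10m",
    "max-file": "7"
  }
}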

How do I tail the logs of ALL my docker containers?

I can tail the logs of a single docker container by doing:
docker logs -f container1
But, how can I tail the logs of multiple containers on the same screen?
docker logs container1 container2
doesn’t work. It gives an error:
“docker logs” requires exactly 1 argument(s).
Thank you.
If you are using docker-compose, this will show all logs from the different containers:
docker-compose logs -f
If you have access and root to the docker server:
tail -f /var/lib/docker/containers/*/*.log
The docker logs command can't stream multiple logs files.
Logging Drivers
You could use one of the logging drivers other than the default json to ship the logs to a common point. The systemd journald or syslog drivers would readily work on most systems. Any of the other centralised log systems would work too.
Note that configuring syslog on the Docker daemon means the docker logs command can no longer query the logs; they will only be stored wherever your syslog puts them.
A simple daemon.json for syslog:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://10.8.8.8:514",
    "syslog-format": "rfc5424"
  }
}
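Because every container then ends up in the same syslog stream, it helps to add a tag option so each message carries the container name. A sketch, using the same {{.Name}} template as in the first answer:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://10.8.8.8:514",
    "syslog-format": "rfc5424",
    "tag": "{{.Name}}/{{.ID}}"
  }
}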
Compose
docker-compose is capable of streaming the logs for all containers it controls under a project.
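For example, from within the project directory (a sketch; the service names are placeholders):
docker-compose logs --follow --tail=10
docker-compose logs --follow web worker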
API
You could write a tool that attaches to each container via the API and streams the logs over a websocket. Two of the Java libraries are docker-client and docker-java.
Hack
Or run multiple docker logs processes and munge the output, in Node.js:
const { spawn } = require('child_process')

function run(id){
  let dkr = spawn('docker', [ 'logs', '--tail', '1', '-t', '--follow', id ])
  dkr.stdout.on('data', data => console.log('%s: stdout', id, data.toString().replace(/\r?\n$/, '')))
  dkr.stderr.on('data', data => console.error('%s: stderr', id, data.toString().replace(/\r?\n$/, '')))
  dkr.on('close', exit_code => {
    if ( exit_code !== 0 ) throw new Error(`Docker logs ${id} exited with ${exit_code}`)
  })
}

let args = process.argv.splice(2)
args.forEach(arg => run(arg))
Which dumps data as docker logs writes it.
○→ node docker-logs.js 958cc8b41cd9 1dad69882b3d db4b844d9478
958cc8b41cd9: stdout 2018-03-01T06:37:45.152010823Z hello2
1dad69882b3d: stdout 2018-03-01T06:37:49.392475996Z hello
db4b844d9478: stderr 2018-03-01T06:37:47.336367247Z hello2
958cc8b41cd9: stdout 2018-03-01T06:37:55.155137606Z hello2
db4b844d9478: stderr 2018-03-01T06:37:57.339710598Z hello2
1dad69882b3d: stdout 2018-03-01T06:37:59.393960369Z hello

How do I check that a docker host is in swarm mode?

After executing this:
eval $(docker-machine env mymachine)
How do I check if the docker daemon on mymachine is a swarm manager?
To check general swarm membership, my preferred method is to use the formatted output from docker info. The possible values of this are currently inactive, pending, active, locked, and error:
case "$(docker info --format '{{.Swarm.LocalNodeState}}')" in
inactive)
echo "Node is not in a swarm cluster";;
pending)
echo "Node is not in a swarm cluster";;
active)
echo "Node is in a swarm cluster";;
locked)
echo "Node is in a locked swarm cluster";;
error)
echo "Node is in an error state";;
*)
echo "Unknown state $(docker info --format '{{.Swarm.LocalNodeState}}')";;
esac
To check for manager status, rather than just a node in a cluster, the field you want is .Swarm.ControlAvailable:
docker info --format '{{.Swarm.ControlAvailable}}'
That will output "true" for managers, and "false" for any node that is a worker or not in a swarm.
To identify worker nodes, you can combine the two:
if [ "$(docker info --format '{{.Swarm.LocalNodeState}}')" = "active" \
-a "$(docker info --format '{{.Swarm.ControlAvailable}}')" = "false" ]; then
echo "node is a worker"
else
echo "node is not a worker"
fi
You could also use docker info to check the value of the Swarm property (inactive or active).
For example:
function isSwarmNode(){
  # awk strips the leading whitespace that newer docker info output puts before "Swarm:"
  if [ "$(docker info | grep Swarm | awk '{print $2}')" == "inactive" ]; then
    echo false;
  else
    echo true;
  fi
}
I don't have a swarm node handy at the moment, but it looks as if you could simply run something like docker node ls. When targeting a docker daemon that is not in swarm mode, that results in:
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
And it returns a nonzero exit code:
$ echo $?
1
So the test would look something like:
if docker node ls > /dev/null 2>&1; then
echo this is a swarm node
else
echo this is a standalone node
fi
In addition to larsks' answer, if you run docker node ls when pointing to a worker node, you'll get the following message:
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
You can use this to differentiate between worker nodes and nodes not in a swarm at all.
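Putting the two error messages together, a sketch of a combined check (it relies on the wording of those messages, which may change between Docker versions):
output=$(docker node ls 2>&1)
if [ $? -eq 0 ]; then
  echo "manager node"
elif echo "$output" | grep -q "promote the current node"; then
  echo "worker node"
else
  echo "not part of a swarm"
fi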
Make sure your Docker environment variables are set properly:
$ env | grep DOCKER_
Compare the URL and port values with the output from:
$ docker-machine ls
Select the swarm master machine name; you can reset the environment variables using:
$ eval $(docker-machine env your_master_machine_name)
Once the environment variables are set properly, your command
$ docker info | egrep '^Swarm: ' | cut -d ' ' -f2
should give the correct result.
To get the IP address of a manager from any node (either worker or manager) using bash you can do:
read manager_ip _ <<<$(IFS=':'; echo $(docker info --format "{{ (index .Swarm.RemoteManagers 0).Addr }}"))
echo "${manager_ip}"
As mentioned above, the most direct way to identify if the current node is a manager is by using:
docker info --format '{{.Swarm.ControlAvailable}}'
