Is "docker logs container-id | tail -10" a valid command? - docker

I'm running the command docker logs <container-id> | tail -10, and docker still shows the entire log history. I know docker logs --tail 10 <container-id> is a valid command and serves the purpose, but why doesn't the former command work the way it does with a file?

If you want everything your program writes to either stdout or stderr to go through the pipeline to tail, redirect stderr to stdout:
docker logs "$container_id" 2>&1 | tail -10

In case someone wants to tail -f the docker logs, you can try this:
docker logs -f --tail 0 "$container_id"

Related

How to follow a log until a string is matched in PowerShell

I would like to be able to do something similar to this in PowerShell:
docker logs -f my-container | grep -q "string-to-match"
i.e. to follow a log and stop following it once a string is matched. My idea was to try something like this:
docker logs -f my-container | Select-String "string-to-match".
I know it's not complete, but I can't figure out how to make it work. I also tried to use WSL2 like this:
docker logs -f my-container | wsl grep -q "string-to-match".
but it keeps following the log even after a match has been found!
A WSL-solution would solve my problem but a native PowerShell-solution would be preferable!
I tried with ugrep.exe in PowerShell:
docker logs -f my-container | ugrep -q "string-to-match"
and that appears to work fine as it stops at the first match. This can also be done with ugrep -m1 "string-to-match" to report the line that matched, since -m1 stops searching further after the first hit.

docker logs affected by -it parameter in docker run

I see strange behavior when grepping docker logs which I don't understand.
As an example, I start a Jupyter notebook container with the following command:
docker run -it -d --rm --name test -p 8888:8888 jupyter/minimal-notebook
With that container running I can now display the container's log and grep for the part of interest (e.g. the URL with the token of the running Jupyter server). This is done with the following command and shows me the expected result:
docker logs test --tail 5 | grep -ozE "http://127.*"
But when the container is started without the -it option this does not work and it just prints the whole log output.
Can someone explain this behavior?
The pipe between commands only pipes stdout. If the application inside the container writes to stderr, that output is displayed directly, bypassing the pipe and grep. You can adjust this by redirecting stderr to stdout before the pipe with 2>&1:
docker logs test --tail 5 2>&1 | grep -ozE "http://127.*"
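The -it flag matters here because -t allocates a pseudo-terminal: with a TTY the container's stdout and stderr are merged into a single stream, so docker logs returns everything on stdout, while without it the two streams stay separate. A minimal demonstration of the difference (a sketch, assuming the debian image is available and the hypothetical container names no_tty and with_tty are free):
# Without a TTY: the streams stay separate, so suppressing stderr hides "err"
docker run -d --name no_tty debian sh -c 'echo out; echo err >&2'
docker logs no_tty 2> /dev/null      # prints only "out"
# With -t: both streams go to the pseudo-terminal and come back on stdout
docker run -d -t --name with_tty debian sh -c 'echo out; echo err >&2'
docker logs with_tty 2> /dev/null    # prints "out" and "err"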

Tail stdout from multiple Docker containers

I have a script that starts 10 containers in background mode (fig up -d option). I want to aggregate the stdout or the log in /var/log from all of them. How can I do this?
The containers are started using different docker-compose files, so I cannot do docker-compose up target1 target2 target3.
docker logs only accepts one container as a parameter.
I was considering creating a volume from /var/log on all containers, mapping them to a directory outside of Docker, making sure the logs do not have colliding names, and then using bash tail -f *. But I would appreciate a more elegant solution.
This bash script will do what you want:
docker-logs
#!/bin/bash
if [ $# -eq 0 ]; then
    echo "Usage: $(basename "$0") containerid ..."
    exit 1
fi
pids=()
cleanup()
{
    kill "${pids[@]}"
}
trap cleanup EXIT
while [ $# -ne 0 ]
do
    (docker logs -f -t --tail=10 "$1" | sed -e "s/^/$1: /") &
    pids+=($!)
    shift
done
wait
Usage:
$ docker-logs containerid1 containerid2 ... containeridN
The output of this script has each line from the tracked logs prepended with the container id.
The script works in --follow mode and must be interrupted with Ctrl-C.
Note that the options of docker logs are hardcoded in the script. If you need to be able to control the options of docker logs from the command line then you will need to parse the command line arguments (for example with getopts).
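Such option parsing might look roughly like this (a minimal sketch, not part of the original answer; the -n and -F flags are hypothetical choices):
#!/bin/bash
# Hypothetical flags: -n <lines> sets the tail length, -F disables --follow
tail_lines=10
follow="-f"
while getopts "n:F" opt; do
    case "$opt" in
        n) tail_lines="$OPTARG" ;;
        F) follow="" ;;
        *) echo "Usage: $(basename "$0") [-n lines] [-F] containerid ..." >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
# $follow is left unquoted on purpose so it expands to nothing when empty
for id in "$@"; do
    (docker logs $follow -t --tail="$tail_lines" "$id" | sed -e "s/^/$id: /") &
done
wait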
Docker does not support this as of 1.12 yet, but I have a workaround via bash:
docker ps | grep -w <filter-text> | for i in `awk '{ print $1 }'`; do docker logs -f --tail=30 $i & done
I am using docker swarm mode, which comes with 1.12, and deploying many replicas, so all of my containers contain a common piece of text, which is the same as the service name. To tail all of their logs on a docker node, I use the command above on each node; filter-text matches only my containers.
If you want to stop tailing, this works for me:
pkill -f 'docker logs'
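A slightly more explicit variant of the same workaround, using docker ps filters instead of grep and awk (a sketch; <filter-text> is the same placeholder as above):
# Tail every container whose name contains <filter-text>
for id in $(docker ps --filter "name=<filter-text>" --format '{{.ID}}'); do
    docker logs -f --tail=30 "$id" &
done
wait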

Docker: look at the log of an exited container

Is there any way I can see the log of a container that has exited?
I can get the container id of the exited container using docker ps -a but I want to know what happened when it was running.
Use docker logs. It also works for stopped containers and captures the entire STDOUT and STDERR streams of the container's main process:
$ docker run -d --name test debian echo "Hello World"
02a279c37d5533ecde76976d7f9d1ca986b5e3ec03fac31a38e3dbed5ea65def
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
49daa9d41a24 debian "echo test" 2 minutes ago Exited (0) 2 minutes ago test
$ docker logs -t test
2016-04-16T15:47:58.988748693Z Hello World
docker logs --tail=50 <container id> for the last fifty lines - useful when your container has been running for a long time.
You can use the command below to copy logs even from an exited container:
docker cp container_name:path_of_file_in_container destination_path_locally
E.g.:
docker cp sample_container:/tmp/report /root/mylog
To directly view the logfile of an exited container in less, scrolled to the end of the file, I use:
docker inspect $1 | grep 'LogPath' | sed -n "s/^.*\(\/var.*\)\",$/\1/p" | xargs sudo less +G
Run it as ./viewLogs.sh CONTAINERNAME.
This method has the benefit over docker logs based approaches that the file is opened directly instead of streamed.
sudo is necessary, as the file under LogPath is usually owned by root.
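The grep/sed part can also be replaced with docker inspect's --format option, which prints the log path directly (a sketch of the same idea):
# Print the container's log path via a Go template, then open it in less
sudo less +G "$(docker inspect --format '{{.LogPath}}' "$1")"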
@icyerasor's comment above actually helped me solve the issue. In my particular situation, the container that had stopped running had no container name, only a container id.
Steps that found the logs, also listed in this post:
Find the stopped container via docker ps -a
grab the container id of the failed container
Substitute it into this command: cat /var/lib/docker/containers/<container id>/<container id>-json.log

How to use docker logs

This may be a bit of a newbie question.
I run docker exec -it mycontainer bash to enter a daemonized container (PostgreSQL),
and echo something.
Now I exit it and use docker logs mycontainer so as to see my echoes.
According to the documentation:
The docker logs command batch-retrieves logs present at the time of execution.
The docker logs --follow command will continue streaming the new output from the container's STDOUT and STDERR.
Since docker logs listens to the container's STDOUT, why don't I see the string I just echoed inside it?
The Docker engine only stores the stdout of the container's main process, PID 1 (i.e. the process launched by the CMD directive of your Dockerfile).
By the way, on your Docker host, you can inspect the content of container's logs by viewing the file /var/lib/docker/containers/<ID of your container>/<ID of your container>-json.log.
This file stores logs in JSON format.
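Each line of that file is a JSON object, so a tool like jq can pull out just the message text (a sketch; assumes jq is installed and you have read access to the file, e.g. via sudo):
# Extract only the "log" field from every JSON line of the container's log file
sudo jq -r '.log' "/var/lib/docker/containers/<ID of your container>/<ID of your container>-json.log"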
I assume logging only occurs for the main process in a container. As exec creates a new process, it won't get logged.
Note that docker logs works for processes given in the run command, e.g.:
$ ID=$(docker run -d debian sh -c "while true; do echo "hello"; sleep 1; done;")
$ docker logs $ID
hello
hello
hello
hello
hello
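If you do want something echoed from a docker exec session to show up in docker logs, one common workaround (not from the answers above, and assuming /proc is available inside the container) is to write to the main process's stdout file descriptor:
# Write to PID 1's stdout, which is the stream the Docker engine captures
docker exec mycontainer sh -c 'echo "hello from exec" > /proc/1/fd/1'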
