View stdout and logs of an ephemeral docker container - docker

I understand there are many questions about how to read docker logs that are answered by:
$ docker logs containername
However, I am working with an ephemeral container, one created with --rm, so I do not have time to call logs after creating it. But I am still interested in seeing the logs of how it ran.
My command is:
docker run --name myname --rm python-my-script:3.7.4 - --myflags "myargs"
Now, I'd like to see how my script runs with these arguments. My entrypoint has a script that should effectively be reading in and printing "myargs" to the console.
But when I do:
docker logs myname
Error: No such container: myname
Or if I'm really quick:
Error response from daemon: can not get logs from container which is dead or marked for removal
How can I see the logs of a container that is no longer running? I'd prefer not to install something heavyweight like syslog.

The default logging driver for Docker is json-file, which is what docker logs reads from. But if you delete the container, or run it with --rm, the logs are deleted along with the container.
For your case, you need to change the logging driver so the logs can still be seen even after the container has been deleted.
There are lots of logging drivers that could meet your requirements, see this. E.g. fluentd, splunk, etc.
The simplest way to preserve the logs is to use journald; here is a minimal example for your reference:
Start the container with the journald log driver, and set a container name, which will be used later to retrieve the log:
$ docker run --log-driver=journald --rm --name=trial alpine echo "hello world"
After the container finishes printing "hello world", it is deleted because of --rm. Check whether docker logs still works:
$ docker logs trial
Error: No such container: trial
Use journalctl to see if we can get the log:
$ journalctl CONTAINER_NAME=trial --all
-- Logs begin at Mon 2018-12-17 21:35:55 CST, end at Mon 2019-08-05 14:21:19 CST. --
Aug 05 14:18:26 shubuntu1 a475febe91c1[1975]: hello world
You can see that journalctl retrieves the log content "hello world" even though the container was removed.
BTW, if you do not want to specify --log-driver every time you start a container, you can also set it as the default log driver in daemon.json, see this:
{
  "log-driver": "journald"
}
Meanwhile, you can still use docker logs to get the logs as long as the container has not been deleted.
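A quick way to check which driver the daemon is currently using (a minimal sketch, assuming a standard Docker install; after editing daemon.json you also need to restart the Docker daemon for the change to take effect):
$ docker info --format '{{.LoggingDriver}}'
journald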

Related

How to stop Docker from clearing logs for dead containers?

I use Dokku to run my app, and for some reason, the container is dying every few hours and recreates itself.
In order to investigate the issue, I want to read the error logs of this container and understand why it's crashing. Since Docker clears logs of dead containers, this is impossible.
I turned on docker events and it shows many events (like container update, container kill, container die, etc.), but no sign of what triggered the kill.
How can I investigate the issue?
Versions:
Docker version 19.03.13, build 4484c46d9d
dokku version 0.25.1
Logs are deleted when the container is deleted. If you want the logs to persist, then you need to avoid deleting the container. Make sure you aren't running the container with an option like --rm that automatically deletes it on exit. And check for the obvious issues like running out of disk space.
There are several things you can do to investigate the issue:
You can run the container in the foreground and allow it to log to your console.
If you were previously starting the container in the background with docker run -d (or docker-compose up -d), just remove the -d from the command line and allow the container to log to your terminal. When it crashes, you'll be able to see the most recent logs and scroll back to the limits of your terminal's history buffer.
You can even capture this output to a file using e.g. the script tool:
script -c 'docker run ...'
This will dump all the output to a file named typescript, although you can of course provide a different output name on the command line.
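For instance, a minimal sketch of that (myimage is a placeholder image name, and run.log is just an arbitrary file name):
# run the container in the foreground and record everything it prints to run.log
script -c 'docker run --rm myimage' run.log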
You can change the log driver.
You can configure your container to use a different logging driver. If you select something like syslog or journald, your container logs will be sent to the corresponding service, and will continue to be available even after the container has been deleted.
I like to use the journald logging driver because it allows searching for output by container id. For example, if I start a container like this:
docker run --log-driver journald --name web -p 8080:8080 -d docker.io/alpinelinux/darkhttpd
I can see logs from that container by running:
$ journalctl CONTAINER_NAME=web
Feb 25 20:50:04 docker 0bff1aec9b65[660]: darkhttpd/1.13, copyright (c) 2003-2021 Emil Mikulic.
These logs will persist even after the container exits.
(You can also search by container id instead of name by using CONTAINER_ID_FULL (the full id) or CONTAINER_ID (the short id), or even by image name with IMAGE_NAME.)
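For instance, a couple of sketches of those alternative queries (the short id and image name here are just the ones from the darkhttpd example above):
$ journalctl CONTAINER_ID=0bff1aec9b65                      # short container id
$ journalctl IMAGE_NAME=docker.io/alpinelinux/darkhttpd     # everything logged by that image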

How to see the logs of a docker container

I have a simple script for which I have created a docker container, and the status shows it running fine. Inside the code I have used some print() commands to print the data. I want to see that print output.
For this I have looked at docker logs, but it seems not to be working as it shows no logs. How do I check the logs?
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3b3fd261b94 myfirstdocker "python3 ./my_script…" 22 minutes ago Up 22 minutes elegant_darwin
$ sudo docker logs a3b3fd261b94
<shows nothing>
The first point: you need to print your logs to stdout.
To check docker logs just use the following command:
docker logs --help
Usage: docker logs [OPTIONS] CONTAINER
Fetch the logs of a container
Options:
--details Show extra details provided to logs
-f, --follow Follow log output
--help Print usage
--since string Show logs since timestamp
--tail string Number of lines to show from the end of the logs (default "all")
-t, --timestamps Show timestamps
For example:
docker logs --since=1h <container_id>
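A couple more combinations of the flags listed above (container_id is a placeholder):
docker logs -f <container_id>               # follow the log output live
docker logs -t --tail=100 <container_id>    # last 100 lines, with timestamps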
Let's try the docker create, start, and then logs commands again and see what happens.
sudo docker create busybox echo hi there
The command prints the new container's ID.
Now I take that ID and run docker start with it. That starts up the container, which executes echo hi there inside of it and then immediately exits.
Now I want to go back to that stopped container and get all the logs that have been emitted inside of it.
To do so I can run docker logs and paste the ID in, and I will see that while the container was running it printed out the string hi there.
One thing to be really clear about: by running docker logs I am not re-running or restarting the container in any way, shape, or form; I am just getting a record of all the logs that have been emitted from that container.
docker logs container_id
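Putting the whole sequence together, roughly (the id is a placeholder for whatever docker create prints on your machine):
$ sudo docker create busybox echo hi there
<container_id>                        # docker create prints the new container's id
$ sudo docker start <container_id>    # the container runs echo and exits immediately
$ sudo docker logs <container_id>     # prints: hi there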
If there isn't much expected output (e.g. the script only tries to print a few bytes), I'd suspect Python is buffering it.
Try adding more data to the output to be sure the buffer is flushed, and also setting PYTHONUNBUFFERED=1 (although python3 may still do some buffering despite this setting).
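Two hedged ways to try that, assuming the image from the question (myfirstdocker) runs a Python script as its command:
# pass the environment variable when starting the container
docker run -e PYTHONUNBUFFERED=1 myfirstdocker
# or run the interpreter unbuffered inside the image, e.g. python3 -u your_script.py
# (your_script.py is a placeholder), or call print(..., flush=True) in the script itself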

saving docker log files with volumes produces permission denied

I am trying to test saving log files of docker containers while playing on this site, which gives you a Linux root shell with docker installed. I've used the solution provided here:
docker run -ti -v /dev/log:/root/data --name zizimongodb mongo
This is what I got in the console:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/dev/log\\\" to rootfs \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged\\\" at \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged/root/data\\\" caused \\\"permission denied\\\"\"".
But the container has started:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8adaa75ba6f7 mongo "docker-entrypoint..." 2 minutes ago Created zizimongodb
docker logs -f zizimongodb returns nothing. When I stop the container, nothing is saved in /root/data. Any idea how I can correctly save all logs?
Since you are using the official mongo image from Docker Hub, it is worth pointing out that this official image (like many, or perhaps all, of the official images) does not send log output to the default log locations you might expect if you install the same software from a Linux distribution.
Instead, software that can be told where to log is configured to log to stdout/stderr, so that docker log drivers and the docker logs command itself work properly.
For the mongodb case, you can see the somewhat complicated code here that tells the mongodb process to use the /proc filesystem file descriptor that maps to "stdout", as long as it is writable when the container is started. Because of some bugs this is more complicated than other Dockerfile customizations of log output (you can read more, if interested, at the links in the comments).
I think a more reasonable way to try and do some form of log consolidation or collection is to read about docker log drivers and see if any of those options works for you. For example, if you like journald there is a driver which will take all container logs and pass them to journald on the host.
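A rough sketch of that last option, reusing the container name from the question (untested on the playground site you mentioned):
# send the container's stdout/stderr to journald instead of the default json-file driver
docker run -d --log-driver journald --name zizimongodb mongo
# read the logs on the host, even after the container has been removed
journalctl CONTAINER_NAME=zizimongodb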

docker run vs create+start: why are created containers different?

Related to
docker container started in Detached mode stopped after process execution
https://serverfault.com/questions/661909/the-right-way-to-keep-docker-container-started-when-it-used-for-periodic-tasks
I do understand the difference between docker run and create + start, but don't understand how the actual containers created in these two ways differ.
Say I create and run a container with
docker run -dit debian:testing-slim
and then stop it. The created container can later be started with
docker start silly_docker_name
and it'll run in the background, because the entry command for the image is bash.
But when a container is first created
docker create --name silly_name debian:testing-slim
and then started with
docker start silly_name
then it'll exit immediately. Why isn't bash started, or how come it exits in this case?
The difference for a container process that is a shell (like bash in your debian example) is that a shell started without a terminal+interactive "mode" exits without doing anything.
You can test this by changing the command of a create'd container to something that doesn't require a terminal:
$ docker create --name thedate debian date
Now if I run the thedate container, each time I run it it outputs the date (in the logs) and exits. docker logs thedate will show this; one entry for each run.
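A minimal sketch of that, using the same thedate container created above:
$ docker start thedate    # first run: a date line goes to the container's log
$ docker start thedate    # second run: another entry
$ docker logs thedate     # shows one date per run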
To be explicit, your docker run command has the flags -dit: detached, interactive (connect STDIN), and tty, all enabled.
If you want a similar approach with create & start, then you need to allocate a tty for the created container:
$ docker create -it --name ashell debian
Now when I start it, I ask to attach to it interactively and I get the same behavior as run:
$ docker start -ai ashell
root@6e44e2ae8817:/#
NOTE: [25 Jan 2018] Edited to add the -i flag on create as a commenter noted that as originally written this did not work, as the container metadata did not have stdin connected at the create stage

View logs for all docker containers simultaneously

I currently use docker for my backend, and when I first start them up with
docker-compose up
I get log output of all 4 containers at once, so I can see how they are interacting with each other when a request comes in (for example, one request going from nginx to couchdb).
The issue is that now I am running on GCE with load balancing. When a new VM spins up, it auto-starts the containers and runs normally. I would like to be able to access a load-balanced VM and view the live logs, but I cannot get docker to give me this style of output: when I use docker logs, it gives me plain all-white text with no label of which container it came from.
Using
docker events
does nothing; it won't return any info.
tldr; what is the best way to obtain a view, same as the log output you get when running "docker-compose up"
If you are using docker-compose, use
docker-compose logs --tail=0 --follow
instead of
docker logs --tail=0 --follow
This will get the output I was originally looking for.
You can see the logs for all running containers with
docker ps -q | xargs -L 1 docker logs
In theory this might work for --follow too, if xargs is run with -P <count>, where the count is higher than the number of running containers.
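An untested sketch of that idea (the 10 is just an arbitrary cap on parallel invocations; pick something above your container count):
docker ps -q | xargs -P 10 -L 1 docker logs --follow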
I use a variation of this to live-tail (--follow) all logs and indicate which log is being tailed at the time. This bash snippet includes both stdout and stderr. Note you may need to purge the /tmp dir of *.{log,err} afterwards.
for c in $(docker ps -a --format="{{.Names}}")
do
docker logs -f $c > /tmp/$c.log 2> /tmp/$c.err &
done
tail -f /tmp/*.{log,err}
Hope this helps. Logging has become so problematic these days, and other get-off-my-lawn old man rants...
Try "watch"
Here's a quick and dirty multitail/xtail for docker containers.
watch 'docker ps --format "{{.Names}}" | sort | xargs --verbose --max-args=1 -- docker logs --tail=8 --timestamps'
How this works:
watch to run every few seconds
docker ps --format "{{.Names}}" to get the names of all running containers
sort to sort them
xargs to give these names to docker logs:
docker logs to print the actual logs
Adjust parameter "--tail=8" as needed so that everything still fits on one screen.
The "xargs" methods listed above (in another user's answer) will stop working as containers are stopped and restarted. This "watch" method here does not have that problem. (But it's not great either.)
If you are using Docker Swarm, you can find your services by
docker service ls
Grab the id, and then run
docker service logs $ID -f
If the service is defined with tty: true, then you must run with the --raw flag. Notice that this won't tell you which container produced a given log entry.
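For example, a sketch of the two steps together (myservice is a placeholder service name):
docker service ls                         # find the service name or id
docker service logs -f --raw myservice    # --raw only needed when the service uses tty: true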
