How to get the log of killed containers - docker

I'm running Docker on Linux for a specific application. I start multiple containers, run an application in each, and exit a container if its application fails for some reason. Now I would like to debug why a container exited.
Many posts suggest using docker logs <container-id>, but it works only with running containers.
The solution given in the post Access logs of a killed docker container doesn't work; the log output shows a date followed by -- No entries --.
So how do I get the logs even after a container exits, without installing any external application to manage logs?
PS: the container is killed and destroyed.

If you didn't remove that particular stopped container (killed but not destroyed), you can access its logs with the docker command
docker logs <container_id>
You can get the ID of a stopped container with
docker ps -f "status=exited"
or just with docker ps -a (which lists all containers, including stopped ones).
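For example, to fetch the logs of the most recently exited container in one step (a sketch; docker ps lists the newest containers first, and this assumes at least one exited container still exists):
docker logs $(docker ps -aq -f "status=exited" | head -n 1)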

Related

How to stop Docker from clearing logs for dead containers?

I use Dokku to run my app, and for some reason the container dies every few hours and recreates itself.
In order to investigate the issue, I want to read this container's error logs and understand why it's crashing. Since Docker clears the logs of dead containers, this is impossible.
I turned on docker events and it shows many events (container update, container kill, container die, etc.), but no sign of what triggered the kill.
How can I investigate the issue?
Versions:
Docker version 19.03.13, build 4484c46d9d
dokku version 0.25.1
Logs are deleted when the container is deleted. If you want the logs to persist, then you need to avoid deleting the container. Make sure you aren't running the container with an option like --rm that automatically deletes it on exit. And check for the obvious issues like running out of disk space.
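For the disk-space check specifically, a quick sketch (assuming Docker's data root is the default /var/lib/docker; docker system df requires a reasonably recent Docker version):
df -h /var/lib/docker   # free space on the filesystem backing Docker
docker system df        # Docker's own usage: images, containers, volumes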
There are several things you can do to investigate the issue:
You can run the container in the foreground and allow it to log to your console.
If you were previously starting the container in the background with docker run -d (or docker-compose up -d), just remove the -d from the command line and allow the container to log to your terminal. When it crashes, you'll be able to see the most recent logs and scroll back to the limits of your terminal's history buffer.
You can even capture this output to a file using e.g. the script tool:
script -c 'docker run ...'
This will dump all the output to a file named typescript, although you can of course provide a different output name on the command line.
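For example, using the util-linux script tool (a sketch; the image and the output filename are arbitrary):
script -c 'docker run hello-world' container-output.log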
You can change the log driver.
You can configure your container to use a different logging driver. If you select something like syslog or journald, your container logs will be sent to the corresponding service and will continue to be available even after the container has been deleted.
I like to use the journald logging driver because it allows searching for output by container ID. For example, if I start a container like this:
docker run --log-driver journald --name web -p 8080:8080 -d docker.io/alpinelinux/darkhttpd
I can see logs from that container by running:
$ journalctl CONTAINER_NAME=web
Feb 25 20:50:04 docker 0bff1aec9b65[660]: darkhttpd/1.13, copyright (c) 2003-2021 Emil Mikulic.
These logs will persist even after the container exits.
(You can also search by container id instead of name by using CONTAINER_ID_FULL (the full id) or CONTAINER_ID (the short id), or even by image name with IMAGE_NAME.)
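If you want this behavior for all containers rather than per docker run, you can set journald as the default log driver in the daemon configuration (a sketch; the Docker daemon must be restarted for this to take effect):
# /etc/docker/daemon.json
{
  "log-driver": "journald"
}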

Differences between detached mode and background in docker

Running docker run with a -d option is described as running the container in the background. This is what most tutorials do when they don't want to interact with the container. In another tutorial I saw the bash-style & used to send the process to the background instead of adding the -d option.
Running docker run -d hello_world only outputs the container ID. On the other hand, docker run hello_world & still gives me the same output as if I had run docker run hello_world.
If I do both experiments with docker run nginx I get the same behavior in both cases (at least as far as I can see), and both show up if I run docker ps.
Is the process the same in both cases (apart from the printing of the ID and output not being redirected with &)? If not, what is going on behind the scenes in each?
Docker has a client-server architecture: the docker client talks to the Docker daemon (which in turn hands work off to containerd, shim processes, runc, etc.).
When you execute docker run, the docker client just sends the request to the daemon, and the daemon has runc etc. actually start the container.
So:
docker run -d: the container is run detached on the server side; nothing is attached to your terminal, and you can use docker logs $container_name to see all of its output later. The backgrounding happens on the server side.
docker run &: this puts the Linux command itself, i.e. the docker run client process, in the background; the backgrounding happens on the client side. The client remains attached to the container, so you still see stdout etc. in your terminal. However, once you leave the terminal (even if bash was set to nohup it), you will no longer see the output there, and you will need docker logs to see it.
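A minimal sketch to observe the difference yourself (assumes the alpine image is available; the container names are arbitrary):
docker run -d --name detached alpine sh -c 'echo hello; sleep 60'     # prints only a container ID
docker run --name backgrounded alpine sh -c 'echo hello; sleep 60' &  # client backgrounded; "hello" still appears in your terminal
docker logs detached                                                  # retrieves the output either way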

Docker container gets killed

I am running a docker container which is trying to access a port on another docker container. Both of them are configured to run on the same network. But as soon as I start the first container it gets killed and doesn't throw any error. There are no error logs. I also tried docker inspect but couldn't find much.
PS: I am a newbie docker user.
Following from the OP's comment, the ENTRYPOINT is:
ENTRYPOINT /configure.sh && bash
Answer
Given your ENTRYPOINT, the container will always exit, because the final process is bash, and a bash started without a TTY attached (e.g. without docker run -it) hits EOF on stdin and exits immediately. You need a continuously running process in the foreground for the container to stay running, i.e. an application daemon.
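A sketch of the usual fix (configure.sh is from the question; my-app and its --foreground flag are hypothetical stand-ins for whatever service the container should actually run): perform the setup step, then exec a long-running foreground process so it replaces the shell as the container's main process:
# configure.sh runs first; exec replaces the shell with the long-running service
ENTRYPOINT ["/bin/sh", "-c", "/configure.sh && exec my-app --foreground"]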

reconnect to container as the original "docker run"

I have some containers running, and once in a while the connection is lost in the terminal. The container is still running perfectly. How do I reconnect to the same user process?
The problem is:
When I do docker exec -it name bash, I get a new shell as root. But to control the applications the original user started, I would have to stop and restart them inside this new bash.
How do you reconnect to the original running user process/bash?
Info: I'm using the macOS Terminal.
You would need to use docker attach <container ID>.
Refer to man docker-attach:
"
The docker attach command allows you to attach to a running
container using the container's ID or name, either to view its ongoing
output or to control it interactively. You can
attach to the same contained process multiple times simultaneously, screen sharing style, or quickly view the progress of
your daemonized process.
You can detach from the container (and leave it running) with CTRL-p CTRL-q (for a quiet exit) or CTRL-c which will send a SIGKILL
to the container. When you are attached to a con‐
tainer, and exit its main process, the process's exit code will be returned to the client.
"
docker ps -a # list all the containers and find your container
docker start <container ID> # start the exited container
docker attach <container ID> # attach to your container
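If you're worried about CTRL-c killing the container while you're attached, docker attach also accepts a --sig-proxy=false flag so that keyboard signals are not forwarded to the container's main process:
docker attach --sig-proxy=false <container ID>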

How to restart a container on docker restart (--restart=true doesn't work)?

I am using docker version 1.1.0, started by systemd using the command line /usr/bin/docker -d, and tried to:
run a container
stop the docker service
restart the docker service (either using systemd or manually, specifying --restart=true on the command line)
see if my container was still running
As I understand the docs, my container should be restarted. But it is not. Its public facing port doesn't respond, and docker ps doesn't show it.
docker ps -a shows my container with an empty status:
CONTAINER ID   IMAGE                   COMMAND                CREATED         STATUS   PORTS                    NAMES
cb0d05b4e0d9   mildred/p2pweb:latest   node server-cli.js -   7 minutes ago            0.0.0.0:8888->8888/tcp   jovial_ritchie
...
And when I try to docker restart cb0d05b4e0d9, I get an error:
Error response from daemon: Cannot restart container cb0d05b4e0d9: Unit docker-cb0d05b4e0d9be2aadd4276497e80f4ae56d96f8e2ab98ccdb26ef510e21d2cc.scope already exists.
2014/07/16 13:18:35 Error: failed to restart one or more containers
I can always recreate a container from the same base image using docker run ..., but how do I make sure that my running containers will be restarted if Docker is restarted? Is there a solution that works even when Docker is not stopped properly (imagine I pull the power plug on the server)?
Thank you
As mentioned in a comment, the container flag you're likely looking for is --restart=always, which will instruct Docker that unless you explicitly docker stop the container, Docker should start it back up any time either Docker dies or the container does.
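For example, using the image from the question (a sketch; the container name is arbitrary), and noting that on Docker versions much newer than the 1.1.0 in the question, docker update can change the restart policy of an existing container without recreating it:
docker run -d --restart=always --name p2pweb -p 8888:8888 mildred/p2pweb node server-cli.js
docker update --restart=always <container ID>   # apply the policy to an already-created container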