I use Dokku to run my app, and for some reason, the container is dying every few hours and recreates itself.
To investigate the issue, I would like to read this container's error logs and understand why it's crashing, but since Docker clears the logs of dead containers, this seems impossible.
I turned on docker events and it shows many events (container update, container kill, container die, etc.), but no sign of what triggered the kill.
How can I investigate the issue?
Versions:
Docker version 19.03.13, build 4484c46d9d
dokku version 0.25.1
Logs are deleted when the container is deleted. If you want the logs to persist, then you need to avoid deleting the container. Make sure you aren't running the container with an option like --rm that automatically deletes it on exit. And check for the obvious issues like running out of disk space.
There are several things you can do to investigate the issue:
You can run the container in the foreground and allow it to log to your console.
If you were previously starting the container in the background with docker run -d (or docker-compose up -d), just remove the -d from the command line and allow the container to log to your terminal. When it crashes, you'll be able to see the most recent logs and scroll back to the limits of your terminal's history buffer.
You can even capture this output to a file using e.g. the script tool:
script -c 'docker run ...'
This will dump all the output to a file named typescript, although you can of course provide a different output name on the command line.
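For instance, combining the two (a sketch: `echo` and `capture.log` stand in for your real `docker run` invocation and output file):

```shell
# Run a command under script(1) and save everything it prints to capture.log.
# `echo` stands in here for the real `docker run ...` command; -q suppresses
# script's own "Script started/done" messages.
script -q -c 'echo hello from the container' capture.log

# The captured output survives after the command exits:
cat capture.log
```

This is handy when the container crashes overnight and your terminal scrollback is gone.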
You can change the log driver.
You can configure your container to use a different logging driver. If you select something like syslog or journald, your container logs will be sent to the corresponding service, and will continue to be available even after the container has been deleted.
I like to use the journald logging driver because it allows searching for output by container id. For example, if I start a container like this:
docker run --log-driver journald --name web -p 8080:8080 -d docker.io/alpinelinux/darkhttpd
I can see logs from that container by running:
$ journalctl CONTAINER_NAME=web
Feb 25 20:50:04 docker 0bff1aec9b65[660]: darkhttpd/1.13, copyright (c) 2003-2021 Emil Mikulic.
These logs will persist even after the container exits.
(You can also search by container id instead of name by using CONTAINER_ID_FULL (the full id) or CONTAINER_ID (the short id), or even by image name with IMAGE_NAME.)
Related
On my CentOS server I created a container with Docker, and I opened two sessions connected to the container with:
docker attach container-name
The issue is that whatever I execute in one window is also displayed in the other window, so I cannot control the container while it is installing packages.
Is it possible to avoid this?
The docker attach command attaches to the currently running process as defined by CMD. You can attach as many times as you want, but they all connect to the same process.
If you want to access the container and have different sessions to it, use:
docker exec -it container-name bash
Or whatever shell is available. bash is common, but you may need to use sh or find out what's used, if any is there at all. Some containers are super stripped down.
The -i and -t flags (combined as -it) keep stdin open and allocate a pseudo-terminal, giving you an interactive session; without them, docker exec just runs the command and shows you the output.
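To find out which shell is available, you can probe for bash and fall back to sh. A docker-free sketch of that probing logic (inside a container you would run it via docker exec, e.g. `docker exec -it container-name sh -c "$probe"`):

```shell
# Probe for bash; fall back to sh if the image doesn't ship it.
# sh itself is present in almost every image, so it is a safe runner.
probe='command -v bash >/dev/null 2>&1 && echo bash || echo sh'
sh -c "$probe"
```

Whichever name it prints is the shell you can pass to docker exec.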
I'm running docker in linux for some specific application. I start multiple containers and run some application, and exit the container if application fails for xyz reasons. Now I would like to debug the reason for that container to exit.
Many posts suggest using docker logs <container-id>, but it works only with running containers.
Solution given in this post Access logs of a killed docker container doesn't work and log message shows date followed by -- No entries --
So how do I get log file even after exiting containers without installing any external application to manage log?
PS: the container is killed and destroyed.
If you didn't remove that particular stopped container (killed but not destroyed), you can access its logs using the docker command:
docker logs <container_id>
You can get the stopped container Id by using
docker ps -f "status=exited"
or just docker ps -a (which lists all containers, including stopped ones)
I understand there are many questions about how to read docker logs that are answered by:
$ docker logs containername
However, I am working with an ephemeral container, one created with --rm, so I do not have time to call docker logs after creating it. But I am still interested in seeing the logs of how it ran.
My command is:
docker run --name myname --rm python-my-script:3.7.4 - --myflags "myargs"
Now, I'd like to see how my script runs with these arguments. My entrypoint has a script that should effectively be reading in and printing "myargs" to the console.
But when I do:
docker logs myname
Error: No such container: myname
Or if I'm really quick:
Error response from daemon: can not get logs from container which is dead or marked for removal
How can I see the logs of a container that is no longer running? I'd prefer not to install something heavyweight like syslog.
The default logging driver for Docker is json-file, whose output you can read with docker logs. But if you delete the container, or run it with --rm, the logs are deleted when the container is removed.
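For context, the json-file driver writes each log line as one JSON object to /var/lib/docker/containers/&lt;id&gt;/&lt;id&gt;-json.log (path assumes the default data root). A sketch of what such a line looks like and how to pull the message field out with sed (the sample line is synthetic):

```shell
# A sample line in the format the json-file driver writes (synthetic example):
line='{"log":"hello world\n","stream":"stdout","time":"2019-08-05T06:18:26.000000000Z"}'

# Extract the raw "log" field with sed:
printf '%s\n' "$line" | sed -n 's/.*"log":"\([^"]*\)".*/\1/p'
```

This is what docker logs parses under the hood, which is why the messages vanish with the container's directory.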
For your case, you need to change the logging driver so that the logs can still be seen after the container is deleted.
There are lots of logging drivers that could meet your requirements, see this, e.g. fluentd, splunk, etc.
Here is a simple way to preserve the logs using journald; a minimal example for your reference:
Start the container with the journald log driver, and set a container name, which will be used later to retrieve the log:
$ docker run --log-driver=journald --rm --name=trial alpine echo "hello world"
After the container finishes printing "hello world", it is deleted because --rm was specified. Check whether docker logs still works:
$ docker logs trial
Error: No such container: trial
Use journalctl to see if we can get the log:
$ journalctl CONTAINER_NAME=trial --all
-- Logs begin at Mon 2018-12-17 21:35:55 CST, end at Mon 2019-08-05 14:21:19 CST. --
Aug 05 14:18:26 shubuntu1 a475febe91c1[1975]: hello world
As you can see, journalctl can retrieve the log content "hello world" even though the container was removed.
BTW, if you do not want to specify --log-driver every time you start a container, you can also set it as the default log driver in daemon.json (see this), then restart the Docker daemon for it to take effect:
{
"log-driver": "journald"
}
Meanwhile, you can still use docker logs to get the logs as long as the container has not been deleted.
Running docker run with a -d option is described as running the container in the background; this is what most tutorials do when they don't want to interact with the container. In another tutorial I saw the bash-style & used to send the process to the background instead of adding the -d option.
Running docker run -d hello_world only outputs the container ID. On the other hand, docker run hello_world & still gives me the same output as if I had run docker run hello_world.
If I do the both experiments with docker run nginx I get the same behavior on both (at least as far as I can see), and both show up if I run docker ps.
Is the process the same in both cases(apart from the printing of the ID and output not being redirected with &)? If not, what is going on behind the scenes in each?
Docker has a client-server architecture: the docker client and the docker daemon (which itself can be further divided into containerd, shim, runc, etc.).
When you execute docker run, the docker client just sends the request to the docker daemon, and the daemon calls runc etc. to start the container.
So:
docker run -d: runc runs the container in the background; the backgrounding happens on the server side. You can use docker logs $container_name to see all the logs later.
docker run &: this puts the docker run client command itself into the background, so the backgrounding happens on the client side, and you still see stdout etc. in your terminal. But if you leave the terminal (even if the command was started with nohup), you will no longer see the output there, and you still need docker logs to read it.
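The client-side effect of & can be seen with any command, not just docker (a docker-free sketch):

```shell
# `&` backgrounds the client process in your shell; its stdout still goes
# to your terminal, just like `docker run hello_world &` still prints output.
(echo "output from backgrounded client") &
wait    # wait for the background job so its output appears before the prompt
```

With docker run -d, by contrast, nothing runs in your shell at all after the client returns the container ID; the container's process lives under the daemon.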
I am using the tomcat:9.0-jre8-alpine image to deploy my application. When I run the command below, it works perfectly and displays logs.
docker logs -f <containername>
but after a few hours the logs get stuck, and whatever operation we perform on the application, no new logs are displayed. The container is running as expected and there is enough RAM and disk space on the VM.
Note: I run the same container on 3 different VMs. Only 1 VM has this problem.
How can I debug/resolve the issue?
Check your Docker version: if it is too old, you may be hitting
https://github.com/moby/moby/issues/35332, a deadlock caused by the github.com/fsnotify/fsnotify package (fsnotify PR).
Check the daemon config in /etc/docker/daemon.json for the Docker log configuration.
You also need to check the container configuration with docker inspect to see the log options.
Sometimes I look into /var/lib/docker/containers/<container-id>/<container-id>-json.log to see the log directly, if you use the json-file log format.
If you use journald, you may find the logs in /var/log/messages or by querying journalctl.
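While checking daemon.json, it can also be worth setting explicit log options; the json-file driver supports rotation via max-size/max-file, which keeps one runaway container from filling the disk (the values below are only illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Remember that daemon.json changes only apply to containers created after the daemon is restarted.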