Tomcat docker container logs hang after a few hours - docker

I am using the tomcat:9.0-jre8-alpine image to deploy my application. When I run the command below, it works perfectly and displays the logs.
docker logs -f <containername>
But after a few hours the logs get stuck, and whatever operation we perform on the application, no new logs are displayed. The container is running as expected and there is enough RAM and disk space on the VM.
Note: I run the same container on 3 different VMs. Only 1 VM has this problem.
How can I debug/resolve the issue?

Check your Docker version; if it is old enough you may be hitting https://github.com/moby/moby/issues/35332, a deadlock caused by the github.com/fsnotify/fsnotify package (there is an fsnotify PR that addresses it).
Check the daemon config in /etc/docker/daemon.json for the Docker log configuration, and check the container's configuration with docker inspect to see its log options.
If you use the json-file log driver, you can also look at /var/lib/docker/containers/<container-id>/<container-id>-json.log to read the log directly.
If you use journald, you may find the log in /var/log/messages
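To make those checks concrete, here is a quick sketch (the container name is a placeholder) for seeing which log driver and options are actually in effect:
docker info --format '{{.LoggingDriver}}'                                   # default driver configured on the daemon
docker inspect --format '{{json .HostConfig.LogConfig}}' <containername>    # driver and options for this container
docker inspect --format '{{.LogPath}}' <containername>                      # json-file log path, if that driver is used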

Related

How to stop Docker from clearing logs for dead containers?

I use Dokku to run my app, and for some reason, the container is dying every few hours and recreates itself.
In order to investigate the issue, I want to read the error logs of this container and understand why it's crashing. Since Docker clears the logs of dead containers, this is impossible.
I turned on docker events and it shows many events (like container update, container kill, container die, etc.), but there is no sign of what triggered the kill.
How can I investigate the issue?
Versions:
Docker version 19.03.13, build 4484c46d9d
dokku version 0.25.1
Logs are deleted when the container is deleted. If you want the logs to persist, then you need to avoid deleting the container. Make sure you aren't running the container with an option like --rm that automatically deletes it on exit. And check for the obvious issues like running out of disk space.
There are several things you can do to investigate the issue:
You can run the container in the foreground and allow it to log to your console.
If you were previously starting the container in the background with docker run -d (or docker-compose up -d), just remove the -d from the command line and allow the container to log to your terminal. When it crashes, you'll be able to see the most recent logs and scroll back to the limits of your terminal's history buffer.
You can even capture this output to a file using e.g. the script tool:
script -c 'docker run ...'
This will dump all the output to a file named typescript, although you can of course provide a different output name on the command line.
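If you'd rather not use script, an alternative sketch is to pipe both output streams through tee so you can watch the logs and keep a copy on disk (the file name here is just an example):
docker run ... 2>&1 | tee container-output.log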
You can change the log driver.
You can configure your container to use a different logging driver. If you select something like syslog or journald, your container logs will be sent to the corresponding service, and will continue to be available even after the container has been deleted.
I like to use the journald logging driver because it allows searching for output by container id. For example, if I start a container like this:
docker run --log-driver journald --name web -p 8080:8080 -d docker.io/alpinelinux/darkhttpd
I can see logs from that container by running:
$ journalctl CONTAINER_NAME=web
Feb 25 20:50:04 docker 0bff1aec9b65[660]: darkhttpd/1.13, copyright (c) 2003-2021 Emil Mikulic.
These logs will persist even after the container exits.
(You can also search by container id instead of name by using CONTAINER_ID_FULL (the full id) or CONTAINER_ID (the short id), or even by image name with IMAGE_NAME.)
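If you want journald to be the default for every container instead of passing --log-driver each time, one possible sketch (assuming a systemd-based host) is to set it in /etc/docker/daemon.json:
{ "log-driver": "journald" }
and then restart the daemon (for example with sudo systemctl restart docker) so the setting applies to newly created containers.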

How to clear the logs of a docker container when there is no space left because of docker logs

I have an AWS instance where I am running a docker container (it could be any process). The log file at /var/log/docker.log is exceeding the storage of the VM. How can I clear the logs and free up the storage in a clean way?
Use the truncate command to empty the logs
truncate -s 0 /var/log/docker.log
There is mention of an old issue with AWS Linux instances not rotating Docker logs; a workaround is to use logrotate and update the user data script.
https://github.com/aws/amazon-ecs-init/issues/119
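If you go the logrotate route, a minimal sketch of a policy for /etc/logrotate.d/docker could look like this (the size and retention values are only examples; copytruncate matters because the daemon keeps the file open):
/var/log/docker.log {
    size 100M
    rotate 5
    compress
    missingok
    copytruncate
}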
You should empty the log file.
cat /dev/null > /var/log/docker.log

docker service logs <service-name> & docker logs <image-name> not printing logs

I have 2 nodes, 1 manager and 1 worker. The worker runs around 15 Spring Boot services. All the services seem to be running fine when we use the UI to test the application. However, when I try to check the logs using docker service logs, nothing is printed. When I ssh into the worker and try to check the logs for individual containers, nothing is printed either. However, when I ssh into a running container I can see a log folder with the generated logs.
This was working a couple of days back. We just redeployed some images and had to scale the services from 1 to 0 and back to 1 instance. Not sure why the logs are not printed. Any hints on how we can debug or fix this?
The docker logs <container> command only shows what the container writes to stdout and stderr. You need to configure your application to log to one of those streams. Read more about it here: https://stackoverflow.com/a/36790613/10632970
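If your Spring Boot services are currently writing to a log file inside the container, one common workaround (sketched here with a purely hypothetical path; adjust it to wherever your services actually write) is to symlink that file to stdout in the image:
# in the Dockerfile; /app/logs/app.log is an assumed path
RUN mkdir -p /app/logs && ln -sf /dev/stdout /app/logs/app.log
Alternatively, configure the logging framework to use a console appender so the output reaches stdout without a symlink.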

saving docker log files with volumes produces permission denied

I am trying to test saving the log files of Docker containers by playing on this site, which gives you a Linux root shell with Docker installed. I've used the solution provided here:
docker run -ti -v /dev/log:/root/data --name zizimongodb mongo
This is what I got in the console:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/dev/log\\\" to rootfs \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged\\\" at \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged/root/data\\\" caused \\\"permission denied\\\"\"".
But the container has started:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8adaa75ba6f7 mongo "docker-entrypoint..." 2 minutes ago Created zizimongodb
docker logs -f zizimongodb returns nothing. When I stop the container, nothing is saved in /root/data. Any idea how I can correctly save all the logs?
Since you are using the official mongo image from Docker Hub, it is worth pointing out that this official image (like many, if not all, of the official images) does not send log output to the default log locations you might expect if you installed the same software from a Linux distribution.
Instead, most software that can be told where to log is configured to log to stdout/stderr, so that Docker log drivers and the docker logs command itself work properly.
For the mongodb case you can see the somewhat complicated code here that tells the mongodb process to use the /proc filesystem file descriptor that maps to "stdout", as long as it is writable when the container is started. Because of some bugs this is more complicated than other Dockerfile customizations of log output (you can read more at the links in the comments).
I think a more reasonable way to do some form of log consolidation or collection is to read about Docker log drivers and see whether any of those options works for you. For example, if you like journald, there is a driver which will take all container logs and pass them to journald on the host.
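Following that suggestion for the container in this question, a possible sketch (mirroring the journald example shown earlier on this page) would be:
docker run --log-driver journald --name zizimongodb -d mongo
journalctl CONTAINER_NAME=zizimongodb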

how get logs for docker service tasks on "preparing" state

I'm playing around with Docker 1.12; I created a service and noticed there is a "preparing" stage when I ran docker service tasks xxx.
I can only guess that on this stage the images are being pulled or updated.
My question is: how can I see the logs for this stage? Or more generally: how can I see the logs for docker service tasks?
I have been using docker-machine for emulating different "hosts" in my development environment.
This is what I did to figure out what was going on during this "Preparing" phase for my services:
docker service ps <serviceName>
You should see the nodes (machines) where your service was scheduled to run. Here you'll see the "Preparing" message.
Use docker-machine ssh to connect to a particular machine:
docker-machine ssh <nameOfNode/Machine>
Your prompt will change. You are now inside another machine.
Inside this other machine do this:
tail -f /var/log/docker.log
You'll see the "daemon" log for that machine.
There you'll see whether that particular daemon is doing the pull, or what it is doing as part of the service preparation.
In my case, I found something like this:
time="2016-09-05T19:04:07.881790998Z" level=debug msg="pull progress map[progress:[===========================================> ] 112.4 MB/130.2 MB status:Downloading
Which made me realise that it was just downloading some images from my docker account.
Your assumption (about pulling during preparation) is correct.
There is no log command yet for tasks, but you could certainly connect to that daemon and do docker logs in the regular way.
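For what it's worth, later Docker releases did add a log command for services, so on a newer engine you can stream a service's logs directly from a manager node:
docker service logs -f <service-name>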
