I have 2 nodes, 1 manager and 1 worker. The worker runs around 15 Spring Boot services. All the services seem to be running fine when we use the UI to test the application. However, when I try to check the logs using docker service logs, nothing is printed. When I ssh into the worker and check the logs of the individual containers, nothing is printed there either. However, when I ssh into a running container I can see a log folder with the generated logs.
This was working a couple of days back. We just redeployed some images and had to scale the services from 1 to 0 and back to 1 instance. I'm not sure why the logs are no longer printed. Any hints on how we can debug or fix this?
The command docker logs <container> only outputs STDOUT and STDERR. You need to configure your application to log to one of these streams. Read more about it here: https://stackoverflow.com/a/36790613/10632970
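For example, if each Spring Boot service only writes to a log file inside the image, one common workaround (the same trick the official nginx image uses) is to symlink that file to the container's stdout in your Dockerfile. A minimal sketch; the path is an assumption, point it at whatever file your services actually write:
# Hypothetical log path; adjust to the file your service writes
RUN ln -sf /dev/stdout /app/logs/application.log
Alternatively, add a console appender to the services' Logback/Log4j configuration so they log to stdout directly; docker service logs will then pick that output up through the default json-file driver.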
Related
I use Dokku to run my app, and for some reason the container is dying every few hours and recreating itself.
To investigate the issue, I want to read the error logs of this container and understand why it's crashing. Since Docker clears the logs of dead containers, this seems impossible.
I turned on docker events and it shows many events (container update, container kill, container die, etc.), but there is no sign of what triggered the kill.
How can I investigate the issue?
Versions:
Docker version 19.03.13, build 4484c46d9d
dokku version 0.25.1
Logs are deleted when the container is deleted. If you want the logs to persist, then you need to avoid deleting the container. Make sure you aren't running the container with an option like --rm that automatically deletes it on exit. And check for the obvious issues like running out of disk space.
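A quick way to check both points on the host (only the container name is a placeholder):
# "true" means the container, and its logs, are removed as soon as it exits
docker inspect --format '{{.HostConfig.AutoRemove}}' <container>
# rule out the obvious: free space where Docker keeps container data and logs
df -h /var/lib/docker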
There are several things you can do to investigate the issue:
You can run the container in the foreground and allow it to log to your console.
If you were previously starting the container in the background with docker run -d (or docker-compose up -d), just remove the -d from the command line and allow the container to log to your terminal. When it crashes, you'll be able to see the most recent logs and scroll back to the limits of your terminal's history buffer.
You can even capture this output to a file using e.g. the script tool:
script -c 'docker run ...'
This will dump all the output to a file named typescript, although you can of course provide a different output name on the command line.
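For example, a sketch that writes the session to a file called container-output.log instead (myapp is a placeholder image name):
script -c 'docker run myapp' container-output.log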
You can change the log driver.
You can configure your container to use a different logging driver. If you select something like syslog or journald, your container logs will be sent to the corresponding service, and will continue to be available even after the container has been deleted.
I like to use the journald logging driver because it allows searching for output by container id. For example, if I start a container like this:
docker run --log-driver journald --name web -p 8080:8080 -d docker.io/alpinelinux/darkhttpd
I can see logs from that container by running:
$ journalctl CONTAINER_NAME=web
Feb 25 20:50:04 docker 0bff1aec9b65[660]: darkhttpd/1.13, copyright (c) 2003-2021 Emil Mikulic.
These logs will persist even after the container exits.
(You can also search by container id instead of name by using CONTAINER_ID_FULL (the full id) or CONTAINER_ID (the short id), or even by image name with IMAGE_NAME.)
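For example, using the image and the short id from the sample output above:
journalctl IMAGE_NAME=docker.io/alpinelinux/darkhttpd
journalctl CONTAINER_ID=0bff1aec9b65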
This is the command to check the docker container logs (info level by default) live:
docker logs -f CONTAINER_ID
But what if I want to check the live debug logs which I have logged in my code at debug level?
This is the command to check the docker container logs (info level by default) live:
docker logs -f CONTAINER_ID
Not really; docker logs CONTAINER_ID doesn't deal with verbosity levels.
It simply outputs the container's STDOUT and STDERR.
But what if I want to check the live debug logs which I have logged in my code at debug level?
That is a very good question.
You could statically (via a configuration file) configure your logger appender to write everything (debug and above) to stdout.
But as a side effect, it will always log at that level. For a simple test that is fine, but for a long-running container it may be annoying.
In that case, a dynamic way to set the logger level is better (a basic REST controller may very well do the job).
That way, docker logs -f CONTAINER_ID will output more or fewer logs according to the current logger level.
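If the application happens to be Spring Boot with Actuator and the loggers endpoint exposed (an assumption about your setup, not something Docker provides), you don't even need to write that controller yourself; the logger name below is hypothetical:
# change a single logger to DEBUG at runtime via Spring Boot Actuator
curl -X POST http://localhost:8080/actuator/loggers/com.example.myservice \
  -H 'Content-Type: application/json' \
  -d '{"configuredLevel": "DEBUG"}'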
You can run the container in foreground mode, so you will be able to see the logs.
docker run -it --rm my_node_app
-it keeps the container running in the foreground, and as a result you will be able to see your container's logs.
You will be able to see live logs, just like running the application in a terminal.
But what if I want to check the live debug logs which I have logged in my code at debug level?
The container's log output depends entirely on the stdout/stderr of the main process defined in CMD.
You can filter debug logs from that output; Docker does not know your log format, it just prints whatever arrives on stdout/stderr.
You can try
docker logs -f container_id | grep "Debug"
if your log format contains "Debug" or a similar pattern.
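One detail worth noting: docker logs sends the container's stderr to your terminal's stderr, which a plain pipe does not capture, so merge the streams first (the "debug" pattern is an assumption about your log format):
docker logs -f container_id 2>&1 | grep -i "debug"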
I am using the tomcat:9.0-jre8-alpine image to deploy my application. When I run the command below, it works perfectly and displays the logs.
docker logs -f <containername>
But after a few hours the logs get stuck, and whatever operation we perform on the application, no new logs are displayed. The container is running as expected and there is enough RAM and disk space on the VM.
Note: I run the same container on 3 different VMs. Only 1 VM has this problem.
How can I debug/resolve the issue?
Check your docker version; if it is too old you may be hitting https://github.com/moby/moby/issues/35332, a deadlock caused by the github.com/fsnotify/fsnotify package (fsnotify PR).
Check the daemon config in /etc/docker/daemon.json for the Docker log configuration.
You also need to check the container configuration with docker inspect to see the log options.
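For example, to print only the logging configuration of a container (works with any container name or id):
docker inspect --format '{{json .HostConfig.LogConfig}}' <containername>
This shows the driver and its options, e.g. {"Type":"json-file","Config":{}} for the default setup.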
Sometimes I look into /var/lib/docker/containers/<container-id>/<container-id>-json.log to see the logs, if you use the json-file log driver.
If you use journald, you may find the logs in /var/log/messages.
My Docker container produces multiple application logs.
The docker logs command only shows the app startup log. Is there a way to redirect the other log files so they are shown by the docker logs command?
EDIT:
I'm using the WebSphere traditional Docker image. docker logs only shows startServer.log, but there are other logs like SystemOut.log, ...
We need more information, like which application, your Dockerfile, etc.
As a general answer, you need an entrypoint script that sends your application log to stdout:
runapp &          # start your application in the background (placeholder command)
tail -f logfile   # keep streaming its log file so it becomes the container's stdout
Something like that.
Regards
docker logs will display all the stdout and stderr from the application running in the container. Configure the application to log to stderr instead of a log file and the logs will be visible from docker logs.
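If changing where the application logs is not practical, the entrypoint approach from the earlier answer can be made concrete for the WebSphere case. A sketch only: the path below assumes the default WebSphere traditional profile layout, so verify it inside your container first:
# after the server start command in your entrypoint, stream the extra log to stdout
tail -F /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log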
I'm playing around with docker 1.12. I created a service and noticed there is a "preparing" stage when I ran docker service tasks xxx.
I can only guess that on this stage the images are being pulled or updated.
My question is: how can I see the logs for this stage? Or more generally: how can I see the logs for docker service tasks?
I have been using docker-machine for emulating different "hosts" in my development environment.
This is what I did to figure out what was going on during this "Preparing" phase for my services:
docker service ps <serviceName>
You should see the nodes (machines) where your service was scheduled to run. Here you'll see the "Preparing" message.
Use docker-machine ssh to connect to a particular machine:
docker-machine ssh <nameOfNode/Machine>
Your prompt will change. You are now inside another machine.
Inside this other machine do this:
tail -f /var/log/docker.log
You'll see the "daemon" log for that machine.
There you'll see whether that particular daemon is doing the "pull", or whatever else it is doing as part of the service preparation.
In my case, I found something like this:
time="2016-09-05T19:04:07.881790998Z" level=debug msg="pull progress map[progress:[===========================================> ] 112.4 MB/130.2 MB status:Downloading
Which made me realise that it was just downloading some images from my docker account.
Your assumption (about pulling during preparation) is correct.
There is no log command yet for tasks, but you could certainly connect to that daemon and do docker logs in the regular way.
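Once the task leaves "Preparing" and its container is actually running on that node, you can follow it the regular way from the same docker-machine session (the container id is a placeholder):
docker ps                      # find the container created for the task
docker logs -f <containerId>   # follow its stdout/stderr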