Make Docker Logs Persistent

I have been using docker-compose to set up some Docker containers.
I am aware that the logs can be viewed using docker logs <container-name>.
All logs are printed to STDOUT and STDERR when the containers run; no log 'file' is generated inside the containers.
But these logs (obtained from the docker logs command) are removed when their respective containers are removed by commands like docker-compose down or docker-compose rm.
When the containers are created and started again, there is a fresh set of logs; no logs from the previous 'run' are present.
I am curious if there is a way to prevent the logs from being removed along with their containers.
Ideally, I would like to keep all my previous logs even when a container is removed.

I believe you have two ways you can go:
Make containers log into files
You can reconfigure the applications inside the container to write to log files rather than stdout/stderr. As you put it, you'd like to keep the logs even when the container is removed, so make sure the files are stored in a (bind-)mounted volume.
Reconfigure Docker to store logs
Reconfigure Docker to use a different logging driver. This can be especially helpful, as it saves you from changing each and every container.
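With docker-compose, the bind-mounted volume could look like the following sketch (the service name, image, and paths are placeholders, not taken from your setup):

```yaml
services:
  app:
    image: my-app:latest        # placeholder image
    volumes:
      # Bind-mount a host directory onto the container's log path;
      # files written here survive docker-compose down / rm.
      - ./logs:/var/log/app
```

Anything the application writes under /var/log/app then lands in ./logs on the host and outlives the container.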
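As one sketch of such a reconfiguration, assuming the host runs systemd-journald: switching the default driver in /etc/docker/daemon.json to journald means container output is kept by the journal even after the container is removed.

```json
{
  "log-driver": "journald"
}
```

After restarting the Docker daemon, logs for new containers are available via journalctl CONTAINER_NAME=<name>, independent of the container's lifetime. Note that docker logs only keeps working with drivers that support log reading (journald and json-file do).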

Related

What will happen if I delete a Docker container log file?

I have a simple question: what will happen if I delete a Docker container log file?
As we know, Docker stores container logs under /var/lib/docker/containers/*/*-json.log.
If by chance I deleted that <container_name>-json.log, then what would happen? Will Docker create a new log file, or will it stop writing logs?

How to auto-remove Docker container while persisting logs?

Is there any way to use Docker's --rm option that auto-removes the container once it exits but allow the container's logs to persist?
I have an application that creates containers to process jobs, and then once all jobs are complete, the container exits and is deleted to conserve space. However, in case a bug caused the container's process to exit prematurely, I'd like to persist the log files so I can confirm it exited cleanly or diagnose a faulty exit.
However, the --rm option appears to remove the container's logs along with the container.
Log to somewhere outside of the container.
You could mount a host directory in your container, so logs are written to the host directory and kept after rm.
Or you can mount a volume on your container, which will persist after rm.
Or you can set up rsyslog (or some similar log-collection agent) to export your logs to a remote service. See https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/ for more on this solution.
The first two are hacks but easier to get up and running on your workstation/server. If this is all cloud-hosted, there may be a decent log-offloading option (CloudWatch on AWS) that saves you the hassle of configuring rsyslog.
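A minimal sketch of the bind-mount approach, assuming the application inside the container writes its log files to /var/log/app (the image name and paths are placeholders):

```shell
# Run with auto-remove, but keep the app's log files on the host:
# the bind-mounted directory survives after --rm deletes the container.
docker run --rm \
  -v "$PWD/job-logs:/var/log/app" \
  my-job-image:latest
```

This only helps for logs the application writes to files; anything that goes solely to stdout/stderr still disappears with the container unless a logging driver ships it elsewhere.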

How to organize container logs. Can the default container log location be changed?

In Linux, the Docker container log files are located at:
/var/lib/docker/containers/<container-id>/<container-id>-json.log
Can this default path "/var/lib/docker/containers/" be changed, and how?
The default container logs are organised by container ID; can this be changed to container name? In my project's case, every time the Docker image for a particular container changes (upgrades to a newer version), a new container is spun up and the log name changes, but the container name remains the same, so logging by container name helps. Is my understanding correct? I know that with a logging driver we can append the container name to the logs and then segregate them later.
docker container logs gives out logs that are written to STDOUT. If my container app doesn't put its logs out to STDOUT and instead uses a logging solution like log4j to log to a different location, then
docker logs <container_id>
might not return the actual container/app log? Is my understanding correct?
A better solution would be to use Fluent Bit and push the logs to Elasticsearch.
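On the container-name point: several logging drivers accept a tag option with a Go template, so each log line can be stamped with the container name instead of the ID. A sketch (the syslog driver is just one example; the image name is a placeholder):

```shell
# Tag syslog entries with the container name rather than the default ID prefix.
docker run \
  --log-driver=syslog \
  --log-opt tag="{{.Name}}" \
  my-app:latest
```

Because the tag follows the stable container name, logs from successive containers of an upgraded image stay grouped together.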

docker logs multiple containers

I have a requirement where data gets loaded into the system using load-balanced Docker containers. We have multiple Docker containers running on different instances, any of which may pick up the job and load data. The data-loading process shows up in docker logs -f, but I currently have to check all the container logs manually to figure out which container is active and loading data before I can tail its logs.
Is there a way to check which container is active and tail it for logs? Or maybe merge all the container logs into a single stream and then tail that.
I have a good understanding of shell, so any pointers would be helpful.
Please let me know if further details are needed.
Thanks
Assuming you deploy with Docker Swarm services, you should be able to do docker service logs <service> to see logs from every container making up that service.
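If you are not running Swarm services, a small shell sketch can merge and follow the logs of every running container on one host, prefixing each line with the container name so you can see which one is active (assumes docker is on PATH):

```shell
#!/bin/sh
# Follow logs of all running containers, each line prefixed with its name.
for name in $(docker ps --format '{{.Names}}'); do
  docker logs -f "$name" 2>&1 | sed "s/^/[$name] /" &
done
wait   # keep the script alive while the background tails run
```

To cover containers spread across several instances, you would run this per host or, more robustly, ship logs to a central service as the rsyslog/Fluent Bit answers above suggest.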

custom logs in docker

I need to get the logs from the running container persisted on the host, so we don't lose the logs when the container restarts.
The logs that get put in the standard apache logs are handled fine with the --log-driver=syslog --log-opt syslog-tag="app_name" run options. However, each application also has a custom debug.log output.
I tried using the --log-opt syslog-address=unix://infra/py/appinstance/app/log/debug.log run parameter, but that doesn't work. I would like to plug the debug logs into the standard syslog, but I don't see how to do it. Any ideas?
The docker run --log-driver option specifies where to store your Docker container's log. The log we are talking about here is the one you get from the docker logs command.
The content of that log is gathered from the container process's standard output and error output.
The debug.log file you are mentioning isn't sent to either the standard or error output, and as such won't be handled by Docker.
You have at least two options to persist those debug messages:
writing to stdout or stderr
You can make your application write its debug messages to the standard or error output instead of to the debug.log file. This way those debug messages will be handled by Docker and, given the --log-driver=syslog option, will be persisted by your host's syslog service.
mount a volume
You can also use the docker run -v option to mount a directory from your Docker host into your container.
Then configure your application so that it writes the debug.log file on that mount point.
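A sketch of that volume option, assuming the application writes debug.log under /app/log inside the container (the host path, container path, and image name are placeholders):

```shell
# Mount the host directory ./applogs at the container's log directory;
# debug.log written there persists across container restarts and removal.
docker run \
  -v "$PWD/applogs:/app/log" \
  --log-driver=syslog --log-opt syslog-tag="app_name" \
  my-app:latest
```

Standard output still flows to syslog via the driver, while the file-based debug log survives on the host.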
