Where does Docker save logs?

Docker seems to allow specifying any log driver of choice, either through /etc/docker/daemon.json or through options when running a container. It also allows specifying driver options, but is it possible to specify the location where the logs themselves get stored? Or at least, can I find out where Docker is saving the logs, even if the location is not customizable?
Reference: for example, consider the default driver, the JSON File logging driver.
Environments to consider: Ubuntu/CentOS/Windows etc., but I'm looking for a generic solution.

If you want to check the Docker daemon logs, the location depends on the host: on systemd-based Linux distributions (recent Ubuntu/CentOS) they go to the system journal and can be read with journalctl -u docker.service; older distributions write to files such as /var/log/upstart/docker.log or /var/log/messages.
To check the logs of containers:
With the default json-file logging driver, you can get the logs using the command:
docker logs container-id
Or get the location of a specific container's log file using docker inspect:
docker inspect --format='{{.LogPath}}' container-id
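For example, on a Linux host you can follow that file directly (a sketch; container-id is a placeholder, and reading under /var/lib/docker typically requires root):
# Tail the container's json-file log on the host
sudo tail -f "$(docker inspect --format='{{.LogPath}}' container-id)"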
Hope this helps.

Related

Make Docker Logs Persistent

I have been using docker-compose to set up some docker containers.
I am aware that the logs can be viewed using docker logs <container-name>.
All logs are printed to STDOUT and STDERR when the containers run; there is no log 'file' generated inside the containers.
But these logs (obtained from the docker logs command) are removed when their respective containers are removed by commands like docker-compose down or docker-compose rm.
When the containers are created and started again, there is a fresh set of logs; no logs from the previous 'run' are present.
I am curious if there is a way to prevent the logs from being removed along with their containers.
Ideally I would like to keep all my previous logs even when the container is removed.
I believe you have two ways you can go:
Make containers log to files
You can reconfigure the applications inside the container to write to log files rather than stdout/stderr. As you put it, you'd like to keep the logs even when the container is removed, so make sure the files are stored in a (bind-)mounted volume.
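A minimal sketch, assuming the application writes its logs to /var/log/app inside the container (my-app and both paths are placeholders):
# Bind-mount a host directory over the app's log directory;
# the files in ./logs survive docker-compose down / docker rm
docker run -d -v "$(pwd)/logs:/var/log/app" my-app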
Reconfigure docker to store logs
Reconfigure docker to use a different logging driver. This can be especially helpful as it saves you from changing each and every container.
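For example, you could switch the daemon-wide default to the journald driver, whose messages are kept by the host's journal even after the container is removed. A sketch, assuming a systemd-based host (requires root, restarts the daemon, and overwrites any existing /etc/docker/daemon.json):
# Set the default logging driver for all newly created containers
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "journald"
}
EOF
sudo systemctl restart docker
Afterwards, the logs of a removed container can still be read with journalctl CONTAINER_NAME=<name>.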

How to get docker system ID inside container?

As far as I understand, each docker installation has some kind of unique ID. I can see it by executing docker system info:
$ docker system info
// ... a lot of output
ID: UJ6H:T6KC:YRIL:SIDL:5DUW:M66Y:L65K:FI2F:MUE4:75WX:BS3N:ASVK
// ... a lot of output
The question is whether it's possible to get this ID from inside the container (by executing code in the container), without mapping any volumes, etc.
Edit:
Just to clarify the use case (based on the comments): we're sending telemetry data from docker containers to our backend. We need to identify which containers share the same host. This ID would help us achieve that (it's a kind of machine ID). If there's any other way to identify the host, that would solve the issue as well.
No. Unless you explicitly inject that information into the container (volumes, COPY, an environment variable, an ARG passed at build time and persisted in a file, etc.), or you fetch it, for example via a GET request, that information is not available inside docker containers.
You may open a shell inside a container and search all files for that ID with grep -rnw '/' -e 'the-ID', but nothing will match.
On the other hand, any breakout from the container to the host would be a real security concern.
Edit to answer the update on your question:
The docker host has visibility into the containers that are running. A much better approach would be to send the information you need from the host level rather than the container level.
You could still send data directly from the containers and use the container ID, which is known inside the container, and correlate this telemetry information with the data sent from the docker host.
Yet another option, which is even better in my opinion, is to send that telemetry data to the stdout of the container. This info can easily be collected and sent to the telemetry backend on the docker host via the logging driver.
Note that the hostname of a container defaults to its container ID. That's not the ID you're asking about, but it is the ID you would use for e.g. docker container exec, so it's a fine identifier.
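For instance, from inside the container (a sketch; assumes the hostname wasn't overridden with --hostname at run time):
# The default hostname inside a container is its short container ID
cat /etc/hostname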

How to organize container logs. Can the default container log location be changed?

On Linux, the Docker container log files are at:
/var/lib/docker/containers/<container-id>/<container-id>-json.log
Can this default path /var/lib/docker/containers/ be changed, and how?
The default container logs are organised by container ID; can this be changed to the container name? In my project, every time the docker image for a particular container changes (upgrades to a newer version), a new container is spun up and the log name changes, but the container name remains the same, so logging by container name would help. Is my understanding correct? I know that with a logging driver we can append the container name to the logs and then segregate them later.
docker container logs gives out logs that are written to STDOUT. If my container app doesn't write its logs to STDOUT but instead uses a logging solution like log4j and logs to a different location, then
docker logs <container_id>
might not return the actual container/app logs? Is my understanding correct?
A better solution would be to use Fluent Bit and push the logs to Elasticsearch.
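As for the first part of the question: the json-file log path isn't separately configurable, but it can be moved by relocating Docker's data root, which moves all of /var/lib/docker (images, volumes, container state), not just the logs. A minimal sketch, assuming a systemd host and a hypothetical target directory; existing data is not migrated automatically:
# Move Docker's data root (and with it containers/<id>/<id>-json.log)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "data-root": "/mnt/docker-data"
}
EOF
sudo systemctl restart docker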

Redirect docker daemon logs to elasticsearch

I have a docker swarm cluster and am able to get all docker "container" logs into the ELK stack.
But I am unable to get the docker daemon logs. Can someone please guide me on how to achieve this?
FYI: My stack is on Linux.
You can use the Filebeat plugin to send the logs from the daemon log file to your ELK stack (see the plugin presentation page).
There is an article on this point on the elastic.co blog. Your configuration will be different since you don't want container logs but Docker daemon logs, found at the path /var/log/docker.log or /var/log/daemon.log.
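A minimal sketch of such a Filebeat configuration, assuming the daemon writes to /var/log/docker.log and Elasticsearch listens on localhost:9200 (adjust both to your environment):
# Ship the daemon log file to Elasticsearch
cat <<'EOF' | sudo tee /etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/docker.log
output.elasticsearch:
  hosts: ["localhost:9200"]
EOF
sudo systemctl restart filebeat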
EDIT 1:
Since in your environment the logs are readable with journalctl, I dug around the internet and found an ELK plugin that allows you to send the logs from journald: https://github.com/logstash-plugins/logstash-input-journald
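You can first confirm that the daemon logs really are in journald before wiring up the plugin:
# Show the last 20 daemon log lines from the journal
journalctl -u docker.service --no-pager -n 20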
I hope it'll help.
1st: you'd need to find out where your docker daemon is saving the logs, which depends on the Linux distribution. See this response with a list of possible places:
https://stackoverflow.com/a/30970134/3165889
2nd: you can follow Paul Rey's suggestion and use Filebeat. As an alternative, I also suggest Fluentd, which you can usually use in place of Logstash, giving you EFK instead of ELK, or simply as an extra tool in your ELK environment.
It can read from a file using the tail input plugin.
It can insert data into Elasticsearch using the elasticsearch output plugin.
This tutorial teaches how to log containers, but then you'd need to change your input plugin to tail from that file: Docker logging via EFK
I'd also like to add that, if you're interested in logging the daemon, you probably want logs even when docker fails to start, so I'd install Fluentd directly on the host, NOT in a container.
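A minimal sketch of such a Fluentd setup, assuming the daemon log file is /var/log/docker.log, Elasticsearch is on localhost:9200, and the fluent-plugin-elasticsearch gem is installed (the config path below matches the td-agent packaging; adjust it for other installs):
# Tail the daemon log and ship each line to Elasticsearch
cat <<'EOF' | sudo tee /etc/td-agent/td-agent.conf
# Tail the daemon log file (path is an assumption, see above)
<source>
  @type tail
  path /var/log/docker.log
  pos_file /var/log/td-agent/docker.log.pos
  tag docker.daemon
  <parse>
    @type none
  </parse>
</source>

# Ship the tailed lines to Elasticsearch (fluent-plugin-elasticsearch)
<match docker.daemon>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
EOF
sudo systemctl restart td-agent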

custom logs in docker

I need to get the logs from the running container persisted on the host, so we don't lose the logs when the container restarts.
The logs that get put in the standard apache logs are handled fine with the --log-driver=syslog --log-opt syslog-tag="app_name" run options. However, each application also has a custom debug.log output.
I tried using the --log-opt syslog-address=unix://infra/py/appinstance/app/log/debug.log run parameter, but that doesn't work. I would like to plug the debug logs into the standard syslog, but I don't see how to do it. Any ideas?
The docker run --log-driver option specifies where to store your docker container logs. The log we are talking about here is the one you get from the docker logs command.
The content of that log is gathered from the container process's standard output and error output.
The debug.log file you are mentioning isn't sent to the standard or error output, and as such won't be handled by docker.
You have at least two options to persist those debug messages:
writing to stdout or stderr
You can make your application write its debug messages to the standard or error output instead of to the debug.log file. This way, those debug messages will be handled by docker and, given the --log-driver=syslog option, will be persisted by your host's syslog service.
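One common trick (the official nginx image uses it for its access and error logs) is to symlink the file to the container's stdout instead of changing the application; the path below is the one from your question:
# Run this in your image's Dockerfile or entrypoint: writes to debug.log
# then land on the container's stdout, where the syslog driver picks them up
ln -sf /dev/stdout /infra/py/appinstance/app/log/debug.log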
mount a volume
You can also use the docker run -v option to mount a directory from your docker host into your container.
Then configure your application so that it writes the debug.log file under that mount point.
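A minimal sketch, reusing the log path and syslog options from your question (app_image is a placeholder):
# The host directory /var/log/appinstance persists independently of the container
docker run -d \
  --log-driver=syslog --log-opt syslog-tag="app_name" \
  -v /var/log/appinstance:/infra/py/appinstance/app/log \
  app_image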
