docker-compose: Log to persistent file

I know that docker-compose by default logs to the file reported by docker inspect --format='{{.LogPath}}' my_container. This file is gone as soon as I kill the container. As I deploy a new version of the image frequently, I lose a lot of log entries.
What I'd like to do is have my container's log entries stored in a persistent log file, just like regular Linux processes use. I can have my deployment script do something like this, but I'm thinking there's a less hackish way of doing it:
docker-compose logs -t -f >> output-`date +"%Y-%m-%d_%H%M"`.log
One option would be to configure docker-compose to log to syslog, but for the time being I'd like to log to a dedicated file.
How have others dealt with the issue of persistent logging?

Docker has a concept called logging drivers: https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers
The default is the json-file driver, which writes the file you mentioned. The ideal way to do this is to pass --log-driver <driver-name> to your run command, then have another process on the same machine pick these logs up and push them to your central logging system.
The most popular of these are probably fluentd and splunk, but you can also choose to write to json-file or journald.
The Docker manuals for these are below:
Splunk - https://docs.docker.com/config/containers/logging/splunk/
Fluentd - https://docs.docker.com/config/containers/logging/fluentd/
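With docker-compose you can also set the driver per service in the compose file. A minimal sketch, assuming a hypothetical web service, using the syslog driver so the entries land in the host's syslog (which outlives the container):

services:
  web:                      # hypothetical service name
    image: my_app:latest    # hypothetical image
    logging:
      driver: syslog
      options:
        tag: my_app         # tag attached to each syslog entry

With rsyslog on the host you can then match that tag and route it to a dedicated file, which gets you the persistent, dedicated log file you asked about.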

Related

Make Docker Logs Persistent

I have been using docker-compose to set up some docker containers.
I am aware that the logs can be viewed using docker logs <container-name>.
All logs are printed to STDOUT and STDERR when the containers run; there is no log 'file' being generated inside the containers.
But these logs (obtained from the docker logs command) are removed when their respective containers are removed by commands like docker-compose down or docker-compose rm.
When the containers are created and started again, there is a fresh set of logs; no logs from the previous 'run' are present.
I am curious if there is a way to somehow prevent the logs from being removed along with their containers.
Ideally I would like to keep all my previous logs even when the container is removed.
I believe you have two ways you can go:
Make containers log into file
You can reconfigure the applications inside the container to write into logfiles rather than stdout/stderr. As you put it, you'd like to keep the logs even when the container is removed; therefore, ensure the files are stored in a (bind) mounted volume, for example:
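A minimal sketch of the compose side, assuming the application writes its files to /var/log/my_app inside the container (the paths and names here are assumptions):

services:
  web:                            # hypothetical service name
    image: my_app:latest          # hypothetical image
    volumes:
      - ./logs:/var/log/my_app    # bind mount: log files land in ./logs on the host

Because ./logs lives on the host filesystem, the files survive docker-compose down and docker-compose rm.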
Reconfigure docker to store logs
Reconfigure docker to use a different logging driver. This can be especially helpful as it saves you from changing each and every container.
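For example, to make journald the default driver for all containers (journald keeps the entries in the host journal, which outlives any container), you can put this in /etc/docker/daemon.json and restart the docker daemon:

{
  "log-driver": "journald"
}

You can then read a container's entries back with journalctl CONTAINER_NAME=<name>, even after the container has been removed.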

Parse Server Logs in Docker?

So I need to dockerize my Parse Server for a project and I'm a bit new to Docker.
I'm used to Heroku, where I could just use heroku logs or connect Papertrail to see the Parse logs to help debug things, but I have no clue how to see my Parse-specific logs when it's running in a docker container.
I'm able to do a test curl and get data back, so I know it's working, but I have no idea how to find the log data.
Searching around really doesn't lead to any results that are specific to Docker. I also tried to figure out how to write to the logs folder, but the folder always seems empty.
Docker collects the logs of each container from that container's stdout and stderr. As described in the twelve-factor app methodology, an application should send its logs to stdout (a convention standardized by Heroku):
A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout. During local development, the developer will view this stream in the foreground of their terminal to observe the app’s behavior.
That being said, everything your container streams to stdout is stored in the file /var/lib/docker/containers/<container id>/<container id>-json.log. You can see the container id using the docker ps command. You don't have to read that file directly: docker logs <container-id> shows you the same stored logs.
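For example, both of these show the same stream (reading the file directly usually requires root):

docker logs -f <container-id>
sudo tail -f /var/lib/docker/containers/<container id>/<container id>-json.log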
If you need the logs as files in your own filesystem, you can have the application write them inside the container and mount that directory onto your host machine. You can do something like this:
docker run -v <host directory>:<log directory in container> ...
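As a concrete sketch for Parse Server, where the image name is hypothetical and the container-side path is an assumption (it depends on where your Parse Server is configured to write its log files):

docker run -v $(pwd)/parse-logs:/parse-server/logs my-parse-server-image

The log output should then show up in ./parse-logs on the host.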
Hope it helps.

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes; some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar and load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which will create some docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
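The inspect output is one large JSON document; the --format flag lets you pull out individual settings, e.g.:

docker container inspect --format '{{json .Config.Env}}' $container_id
docker container inspect --format '{{json .HostConfig.Binds}}' $container_id
docker container inspect --format '{{json .HostConfig.PortBindings}}' $container_id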
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
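A minimal sketch of such a docker-compose.yml (every name and value here is a placeholder for your own settings):

services:
  myapp:
    image: myapp:1.0
    ports:
      - "8080:80"
    volumes:
      - ./data:/data
    restart: unless-stopped

With that file in version control, docker-compose up -d on the other server recreates the container 1:1, with no CLI flags to remember.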
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really do want to go down the road of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes anyway; for those, refer to https://docs.docker.com/engine/admin/volumes/volumes/
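A rough sketch of that route, with hypothetical container, image and volume names; the last command archives a named volume via a throwaway container, since docker export skips volumes:

docker export my_container > my_container.tar
docker import my_container.tar my_image:restored
docker run --rm -v my_volume:/data -v $(pwd):/backup alpine tar czf /backup/my_volume.tgz -C /data .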

Getting docker logs to graylog2 AND locally

I've been working on this problem for a couple days now and have had no luck finding answers.
The setup is that I have several hosts running groups of docker containers, and I need to have local copies of all container logs AND on a centralized graylog2 server.
The graylog2 server is up and running and has inputs for GELF on port 9000 and syslog UDP on 514.
Originally, containers were logging to JSON files on the docker hosts. I then changed the log-driver in docker to use GELF and send to the graylog2 server. This worked fine; however, it turns out we need local copies of the logs as well. The best part about this setup was that the logs on graylog had the fields filled out nicely, including things like "container_name" and "source" (which was the hostname of the docker host).
So I reverted to making containers log to JSON and then just sent the JSON logs to graylog2; however, when I did that, graylog2 just showed one long string for the message, which included the source, container_name, container_id and so on, but not in any of the corresponding fields.
Then I tried changing the log driver to syslog and having rsyslog or syslog-ng send a copy of the logs to graylog2, but that created entries on graylog2 that were missing a bunch of data; this time the message only contained the actual message, with no info at all about the container_name or container_id.
Then I tried changing the log driver back to GELF and having syslog-ng listen on each docker host, with docker sending the logs to the host it was sitting on. This had a similar result to #2 above: I see a very long "message" field that contains host, container_name, container_id, timestamp, etc., but graylog2 doesn't seem to want to populate the corresponding fields. Additionally, in this method all logs have the log source set to the IP address of the docker host even though I use hostnames for everything.
I tried installing graylog-collector-sidecar but that didn't seem to work at all.
Does anyone have any suggestions on where to go from here? I'm thinking option #3 would be good if I could somehow get at least the container_name to show up (maybe with a tag). The other solution that I think would be good is if I can get #4 to actually populate the fields, meaning instead of having this:
source: $IP
message: $IP {"version","1.1","host":"$HOSTNAME","message":"$MESSAGE","container_name":"$CONTAINER_NAME","container_id":"$CONTAINER_ID",etc etc}
it should display this:
source: $HOSTNAME
container_name: $CONTAINER_NAME
container_id: $CONTAINER_ID
message: $MESSAGE
Anyone know how I can get graylog2 (or syslog-ng) to format the log data so that it looks like the lower example? Note that when I was sending log data directly from docker to graylog2 using the GELF log-driver the data did appear in that format and it was great, we just need to also keep a copy of the logs locally.
Thanks in advance for any input!
-Martin
It looks like Docker Enterprise allows you to use dual logging; see https://docs.docker.com/config/containers/logging/configure/#limitations-of-logging-drivers
Another option could be to use fluentd to send the logs to Graylog (instead of using Docker's logging driver), see https://jsblog.insiderattack.net/bunyan-json-logs-with-fluentd-and-graylog-187a23b49540
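For reference, the GELF driver setup that populated your fields correctly looks like this per container (the Graylog hostname is a placeholder; port 9000 matches the GELF input you described):

docker run --log-driver gelf --log-opt gelf-address=udp://graylog.example.com:9000 my_image

With dual logging available, docker logs keeps working locally even while the gelf driver ships each entry to Graylog.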

custom logs in docker

I need to get the logs from the running container persisted on the host, so we don't lose the logs when the container restarts.
The logs that get written to the standard Apache logs are handled fine with the --log-driver=syslog --log-opt syslog-tag="app_name" run options. However, each application also has a custom debug.log output.
I tried using the --log-opt syslog-address=unix://infra/py/appinstance/app/log/debug.log run parameter, but that doesn't work. I would like to plug the debug logs into the standard syslog, but I don't see how to do it. Any ideas?
The docker run --log-driver option specifies where to store your docker container log. The log we are talking about here is the one that you get from the docker logs command.
The content of that log is gathered from the container's process standard output and error output.
The debug.log file you are mentioning isn't sent to either the standard or error output, and as such it won't be handled by docker.
You have at least two options to persist those debug messages:
writing to stdout or stderr
You can make your application write its debug messages to the standard or error output instead of to the debug.log file. That way those debug messages will be handled by docker and, given the --log-driver=syslog option, will persist in your host's syslog service.
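If you can't change the application itself, a common trick (the official nginx image does this for its access and error logs) is to symlink the file to stdout in your Dockerfile; the path below is taken from your syslog-address attempt and may differ in your image:

RUN ln -sf /dev/stdout /infra/py/appinstance/app/log/debug.log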
mount a volume
You can also use the docker run -v option to mount a directory from your docker host into your container.
Then configure your application so that it writes the debug.log file under that mount point.
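A sketch combining both the syslog driver you already use and a bind mount for debug.log (the host path and image name are hypothetical; the container path is taken from your question):

docker run --log-driver=syslog --log-opt syslog-tag="app_name" -v /var/log/app_name:/infra/py/appinstance/app/log my_image

debug.log then lands at /var/log/app_name/debug.log on the host and survives container restarts and removal.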
