Getting docker logs to graylog2 AND locally - docker

I've been working on this problem for a couple days now and have had no luck finding answers.
The setup is that I have several hosts running groups of docker containers, and I need local copies of all container logs AND a copy on a centralized graylog2 server.
The graylog2 server is up and running and has inputs for GELF on port 9000 and syslog UDP on 514.
Originally, containers were logging to JSON files on the docker hosts. I then changed the log-driver in docker to use GELF and send to the graylog2 server. This worked fine, but it turns out we need local copies of the logs as well. The best part about this setup was that the logs on graylog had the fields filled out nicely, including things like "container_name" and "source" (which was the hostname of the docker host).
So I reverted to making containers log to JSON and then just shipped the JSON logs to graylog2. However, when I did that, graylog2 just showed one long string for the message, which included the source, container_name, container_id, and so on, but none of it in the corresponding fields.
Then I tried changing the log driver to syslog and having rsyslog or syslog-ng send a copy of the logs to graylog2, but that created entries on graylog2 that were missing a bunch of data; this time the message only contained the actual message, with no info at all about the container_name or container_id.
Then I tried changing the log driver back to GELF and having syslog-ng listen on each docker host, with docker sending the logs to the host it was sitting on. This had a similar result to solution #2 above: I see a very long "message" field that contains host, container_name, container_id, timestamp, etc., but graylog2 doesn't seem to want to populate the corresponding fields. Additionally, with this method all logs have the log source set to the IP address of the docker host even though I use hostnames for everything.
I tried installing graylog-collector-sidecar but that didn't seem to work at all.
Does anyone have any suggestions on where to go from here? I'm thinking option #3 would be good if I could somehow get at least the container_name to show up (maybe with a tag). The other solution that I think would be good is if I can get #4 to actually populate the fields, meaning instead of having this:
source: $IP
message: $IP {"version","1.1","host":"$HOSTNAME","message":"$MESSAGE","container_name":"$CONTAINER_NAME","container_id":"$CONTAINER_ID",etc etc}
it should display this:
source: $HOSTNAME
container_name: $CONTAINER_NAME
container_id: $CONTAINER_ID
message: $MESSAGE
Anyone know how I can get graylog2 (or syslog-ng) to format the log data so that it looks like the lower example? Note that when I was sending log data directly from docker to graylog2 using the GELF log-driver, the data did appear in that format and it was great; we just need to also keep a copy of the logs locally.
Thanks in advance for any input!
-Martin

It looks like Docker Enterprise allows you to use dual logging, see https://docs.docker.com/config/containers/logging/configure/#limitations-of-logging-drivers
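With dual logging, the engine keeps a local cache of container logs even when the primary driver ships them elsewhere, so docker logs keeps working. A minimal sketch of /etc/docker/daemon.json, assuming an engine version that supports dual logging; the hostname and cache sizes here are placeholder assumptions, and the port matches the GELF input from the question:

cat <<'EOF' > /etc/docker/daemon.json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:9000",
    "cache-disabled": "false",
    "cache-max-size": "20m",
    "cache-max-file": "5"
  }
}
EOF
# restart so new containers pick up the defaults
systemctl restart docker
# the gelf driver still ships to graylog2, but `docker logs <container>`
# now reads from the local dual-logging cache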
Another option could be to use fluentd to send the logs to Graylog (instead of using Docker's logging driver), see https://jsblog.insiderattack.net/bunyan-json-logs-with-fluentd-and-graylog-187a23b49540
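The rough idea with fluentd: point docker's fluentd log-driver at a fluentd instance on each host and have it fan every event out to both a local file and Graylog's GELF input. A sketch, assuming the third-party fluent-plugin-gelf output is installed (option names can vary by plugin version); hostnames, ports, and paths are placeholders:

cat <<'EOF' > /etc/fluent/fluent.conf
<source>
  @type forward            # receives events from docker's fluentd log-driver
  port 24224
</source>

<match docker.**>
  @type copy               # duplicate each event to both stores
  <store>
    @type file             # local copy on the docker host
    path /var/log/fluent/docker
  </store>
  <store>
    @type gelf             # third-party fluent-plugin-gelf
    host graylog.example.com
    port 9000
    protocol udp
  </store>
</match>
EOF
# run containers with the fluentd driver and a per-container tag
docker run --log-driver=fluentd --log-opt tag="docker.{{.Name}}" my-image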

Related

Output log file per container by using fluentd

Is it possible to output container logs to a file per container using fluentd?
I installed fluentd (by running the official fluentd image) and am running multiple application containers on the host.
I was able to output all of the containers' logs to one file, but I'd like to create a log file per container.
I'm thinking about using the "match" directive, but have no idea how.
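Not from the original thread, but one hedged sketch: assuming the containers log via docker's fluentd log-driver with a per-container tag (e.g. --log-opt tag="docker.{{.Name}}") and a fluentd v1 out_file, the ${tag} placeholder plus a tag-keyed buffer gives one file tree per container; the paths are placeholders:

cat <<'EOF' >> /etc/fluent/fluent.conf
<match docker.**>
  @type file
  # ${tag} expands per buffer chunk, so each container tag gets its own files
  path /var/log/fluent/${tag}
  <buffer tag>             # chunk by tag so the ${tag} placeholder resolves
    @type file
    path /var/log/fluent/buffer
  </buffer>
</match>
EOF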

Where does Docker save logs?

Docker seems to allow you to specify any log driver of choice, either through /etc/docker/daemon.json or through options while running a container. Further, it allows specifying driver options too, but is it possible to set the location where the logs themselves get stored? Or at least, can I find out where docker is saving the logs, even if the location is not customizable?
Reference: For example consider the default driver - JSON File logging driver
Environments to consider: Ubuntu/CentOS/Windows etc., but I'm looking for a generic solution.
If you want to check the docker daemon logs: on systemd-based systems you can find them with journalctl -u docker.service.
To check the logs of containers: with the default JSON File logging driver, you can get the logs using the command:
docker logs container-id
Or get the location of a specific container's log file using docker inspect:
docker inspect --format='{{.LogPath}}' container-id
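For example, to follow the underlying JSON log file directly (root is usually needed, since it lives under /var/lib/docker):

sudo tail -f "$(docker inspect --format='{{.LogPath}}' container-id)"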
Hope this helps.

docker-compose: Log to persistent file

I know that docker-compose by default logs to a file defined by docker inspect --format='{{.LogPath}}' my_container. This file is gone as soon as I kill the container. As I deploy a new version of the image frequently, I lose a lot of log entries.
What I'd like to do is to have my container's log entries stored in a persistent log file, just like regular linux processes use. I can have my deployment script do something like this, but I'm thinking there's a less hack-ish way of doing this:
docker-compose logs -t -f >> output-`date +"%Y-%m-%d_%H%M"`.log
One option would be to configure docker-compose to log to syslog, but for the time being I'd like to log to a dedicated file.
How have others dealt with the issue of persistent logging?
So docker has a concept called logging-drivers. https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers
The default is the file that you mentioned. The ideal way to do this is to pass the --log-driver <driver-name> to your run command. Then have another process on the same machine picking these up and pushing to your central logging system.
The most popular of these are fluentd and splunk, I guess. But you can also choose to write to json or journald (see the journald sketch after the links below).
The docker manual for these are below
Splunk - https://docs.docker.com/config/containers/logging/splunk/
Fluentd - https://docs.docker.com/config/containers/logging/fluentd/
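As a quick sketch of the journald route: on a systemd host the entries outlive the container, and the driver records the container name as a field you can filter on (the names here are placeholders):

# run (or declare in compose) with the journald driver
docker run --log-driver=journald --name myapp my-image
# entries survive container removal and can be filtered per container
journalctl CONTAINER_NAME=myapp
# or dumped into the dedicated file the question asks for
journalctl CONTAINER_NAME=myapp > myapp.log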

Parse Server Logs in Docker?

So I need to dockerize my parse server for a project and I'm a bit new to Docker.
I'm used to Heroku, where I could just use heroku logs or connect Papertrail to see the parse logs to help debug things, but I have no clue how to see my parse-specific logs when it's running in a docker container.
I'm able to do a test curl and get data back, so I know it's working, but I have no idea how to find the log data.
Searching around really doesn't lead to any results that are specific to Docker. I also tried to figure out how to write to the logs folder, but the folder always seems empty.
Docker collects the logs of each container from the container's stdout and stderr. As described in the 12-factor app methodology, an application should send its logs to stdout (this convention is standardized by Heroku):
A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout. During local development, the developer will view this stream in the foreground of their terminal to observe the app’s behavior.
That being said, everything you stream to stdout of a container will be stored in the file /var/lib/docker/containers/<container id>/<container id>-json.log. You can see the container id using the docker ps command. You don't have to read that file directly, though; you can do docker logs <container-id> to see the logs stored there.
If you have to have logs in the filesystem, you can store logs inside the container and mount that directory on your host machine to see log files. You can do something like this:
docker run -v <host directory>:<log directory in container> ...
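For instance (the image name and host path are placeholders; the container-side path should match wherever parse-server is configured to write its log files):

docker run -v /srv/parse-logs:/parse-server/logs my-parse-image
# the log files now show up on the host
ls /srv/parse-logs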
Hope it helps.

custom logs in docker

I need to get the logs from the running container persisted on the host, so we don't lose the logs when the container restarts.
The logs that go to the standard apache logs are handled fine with the --log-driver=syslog --log-opt syslog-tag="app_name" run options. However, each application also has a custom debug.log output.
I tried using the --log-opt syslog-address=unix://infra/py/appinstance/app/log/debug.log run parameter, but that doesn't work. I would like to plug the debug logs into the standard syslog, but I don't see how to do it. Any ideas?
The docker run --log-driver option specifies where to store your docker container log. The log we are talking about here is the one that you get from the docker logs command.
The content of that log is gathered from the container's process standard output and error output.
The debug.log file you are mentioning isn't sent to the standard or error output, and as such it won't be handled by docker.
You have at least two options to persist those debug messages:
writing to stdout or stderr
You can make your application write its debug messages to the standard or error output instead of to the debug.log file. This way those debug messages will be handled by docker, and given the --log-driver=syslog option will persist in your host syslog service.
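A common trick for this, the same one the official nginx image uses for its access and error logs, is to symlink the file to the container's stdout at build time (the path here is adapted from the question's syslog-address):

# in the Dockerfile: anything the app writes to debug.log goes to stdout
RUN ln -sf /dev/stdout /infra/py/appinstance/app/log/debug.log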
mount a volume
You can also use the docker run -v option to create a volume in your container that will mount a directory from your docker host in your container.
Then configure your application so that it writes the debug.log file on that mount point.
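Putting the two together with the syslog driver from the question, a sketch could look like this (the host path is a placeholder):

docker run \
  --log-driver=syslog --log-opt syslog-tag="app_name" \
  -v /var/log/appinstance:/infra/py/appinstance/app/log \
  my-app-image
# apache logs still go to syslog; debug.log now persists on the host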
