I've configured a filebeat instance, and although it runs without errors, I've discovered that it does nothing.
I've found the following line in the log:
INFO log/input.go:138 Configured paths: [/var/lib/docker/containers/*/*.log]
A quick check revealed that the difference between OpenShift and plain Docker is that under Docker the directories in /var/lib/docker/containers contain log files, while under OpenShift they don't.
How should I configure filebeat to work under openshift?
AFAIK OpenShift also writes container logs in the /var/lib/docker/containers/<hash>/*-json.log format; refer to Viewing available container logs
for more details. If you cannot find them in that directory, your Docker log driver might be configured as journald, which you can check in /etc/sysconfig/docker.
OPTIONS=' --selinux-enabled --log-driver=journald --signature-verification=False'
Then change journald to json-file so that Docker logs into /var/lib/docker/containers/<hash>/*-json.log.
OPTIONS=' --selinux-enabled --log-driver=json-file --signature-verification=False'
You need to restart docker.service for the change to take effect.
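For example, a minimal sketch of the whole change, assuming the /etc/sysconfig/docker layout shown above (adjust for your distribution):

# Switch the log driver from journald to json-file in the OPTIONS line above
sudo sed -i 's/--log-driver=journald/--log-driver=json-file/' /etc/sysconfig/docker

# Restart Docker so the new driver takes effect
sudo systemctl restart docker

# Verify which logging driver is now active
docker info --format '{{.LoggingDriver}}'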
Related
The logs are created under the path /var/lib/docker/containers/~/* and linked under the path /var/log/container/*.
I wonder how the log of each Pod ends up under the /var/lib/docker/containers/~/* path.
Also, I am wondering whether it is right to use the json-file driver in an environment that collects logs with fluentd.
json-file is a logging driver supplied with Docker (and usually the default in a Docker daemon setup).
For any container (CID), Docker will create a file at /var/lib/docker/containers/CID/CID-json.log containing its stdout and stderr. You can see this when you docker run something.
This logging is completely independent of Kubernetes.
Kubernetes
When Pod containers start or stop, Kubernetes manages the symlinks in /var/log/container/* so that they point to the log file of the underlying container runtime.
When using Docker, Kubernetes relies on the specific json-file Docker log path setup to create functional symlinks. If you use other custom logging solutions in Docker, those Kubernetes symlinks won't be functional.
The recommended setup in the Kubernetes logging architecture is to have Docker rotate log files at 10MB.
kube-up.sh's GCE config is the de facto recommended setup for container runtime configuration: json-file is used, logs are rotated at 10MB, and 5 old files are kept.
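As a reference point, a minimal sketch of an equivalent daemon-level setup, assuming your host reads /etc/docker/daemon.json (the 10MB/5-file values mirror the kube-up.sh defaults mentioned above):

# Write a daemon.json that uses json-file with the rotation settings above
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
EOF

# Restart Docker to apply the logging configuration
sudo systemctl restart docker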
CRI-O
The alternate container runtime to Docker is cri-o.
cri-o also logs to a local json file, in a similar format to Docker.
kubelet will rotate cri-o log files in a similar manner to Docker.
Log Collection
Any Kubernetes log collector will rely on the Kubernetes symlinks to the json files. It should expect those files to be rotated underneath it while collecting; fluentd supports this.
If you're having an issue with your fluentd setup, I would recommend adding the specific details of the issue you are seeing (with examples of the data you see in the log files and the data being received on the log collection end) to your other question, or filing the same details as an issue against the fluentd project you used to set up your k8s log collection.
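For reference, a minimal sketch of such a collector input for a host-installed fluentd, assuming the kubelet symlink directory is /var/log/containers on your node and that your config lives at /etc/fluent/fluent.conf (both paths are assumptions; adjust to your environment):

# Tail the kubelet-managed container log symlinks with fluentd's in_tail plugin
cat >> /etc/fluent/fluent.conf <<'EOF'
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>
EOF

in_tail keeps following files across rotations, which is why rotation happening underneath the collector is not a problem.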
Docker seems to allow you to specify any log driver of your choice, either through /etc/docker/daemon.json or through options when running a container. It also allows specifying driver options, but is it possible to set the location where the logs themselves get stored? Or, at the least, can I find out where Docker is saving the logs, even if the location is not customizable?
Reference: for example, consider the default driver - the JSON File logging driver.
Environments to consider: Ubuntu/CentOS/Windows etc., but I am looking for a generic solution.
If you want to check the Docker daemon logs, the location where you can find them depends on your operating system.
To check the logs of containers:
In the case of the default json-file logging driver, you can get the logs using the command
docker logs container-id
Or get the location of a specific container's logs using docker inspect:
docker inspect --format='{{.LogPath}}' container-id
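For example, assuming the default json-file driver, you can follow that file directly (container-id is a placeholder here):

# Resolve the on-disk log path and follow it; with json-file it usually looks like
# /var/lib/docker/containers/<id>/<id>-json.log (reading it typically needs root)
sudo tail -f "$(docker inspect --format='{{.LogPath}}' container-id)"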
Hope this helps.
I have a Docker Swarm cluster and am able to get all Docker "container" logs into the ELK stack.
But I am unable to get the Docker daemon logs. Can someone please guide me on how to achieve this?
FYI: my stack is on Linux.
You can use the Filebeat plugin to send the logs from the daemon log file to your ELK stack (plugin presentation page).
There is an article on this point on the elastic.co blog. Your configuration will be different since you don't want container logs but Docker daemon logs, found at the path /var/log/docker.log or /var/log/daemon.log.
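A minimal sketch of such a Filebeat setup, assuming Filebeat 6+ syntax, the /var/log/docker.log path mentioned above, and a placeholder Logstash host (adjust all three to your environment):

# A minimal standalone /etc/filebeat/filebeat.yml for shipping the daemon log
sudo tee /etc/filebeat/filebeat.yml > /dev/null <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/docker.log
output.logstash:
  hosts: ["logstash.example.com:5044"]
EOF

# Restart Filebeat to pick up the new configuration
sudo systemctl restart filebeat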
EDIT 1:
Since in your environment the logs are readable with journalctl, I dug around the internet and found a Logstash input plugin that allows you to send the logs from journald: https://github.com/logstash-plugins/logstash-input-journald
I hope it'll help.
1st: you'd need to find out where your Docker daemon is saving its logs, which depends on the Linux distribution. See this answer with a list of possible places:
https://stackoverflow.com/a/30970134/3165889
2nd: you can follow Paul Rey's suggestion and use Filebeat. As an alternative, I also suggest Fluentd, which you can usually use in place of Logstash (giving you EFK instead of ELK), or simply as an extra tool in your ELK environment.
It can read from a file using the tail input plugin.
It can insert data into Elasticsearch using the elasticsearch output plugin.
This tutorial shows how to log containers, but you'd then need to change your input plugin to tail from that file: Docker logging via EFK.
I'd also like to add that, if you're interested in logging the daemon, you probably want logging to keep working even when Docker fails to start, so I'd install Fluentd directly on the host, NOT in a container.
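A minimal sketch of that host-level setup, assuming the daemon log is at /var/log/docker.log, the config lives at /etc/fluent/fluent.conf, and a placeholder Elasticsearch host (all three are assumptions):

# Tail the Docker daemon log with fluentd and forward it to Elasticsearch
cat > /etc/fluent/fluent.conf <<'EOF'
<source>
  @type tail
  path /var/log/docker.log
  pos_file /var/log/fluentd-docker-daemon.log.pos
  tag docker.daemon
  <parse>
    @type none
  </parse>
</source>

<match docker.daemon>
  @type elasticsearch
  host elasticsearch.example.com
  port 9200
  logstash_format true
</match>
EOF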
I'm using the Graylog server image from Docker Hub, but I can't find the log files. In the Graylog VM they're located under /var/log/graylog, but that location doesn't exist in the Graylog server Docker image. Where are the log files located in the Docker container?
The graylog2/server Docker image sends logs to stdout and doesn't write them into a log file.
You can use a Docker logging driver to configure where these logs should be written.
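For example (the container name graylog is a placeholder):

# Read the stdout logs collected by the default json-file driver
docker logs graylog

# Or start the container with a different logging driver, e.g. syslog
docker run -d --log-driver=syslog graylog2/server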
It looks like log4j is configured to log only to stdout. Inspecting the Dockerfile, I would suggest checking this directory:
/usr/share/graylog/data/log
Also check:
/usr/share/graylog/data/journal
Regards
I need the logs from the running container to be persisted on the host, so we don't lose them when the container restarts.
The logs that get put in the standard apache logs are handled fine with the --log-driver=syslog --log-opt syslog-tag="app_name" run options. However, each application also has a custom debug.log output.
I tried using the --log-opt syslog-address=unix://infra/py/appinstance/app/log/debug.log run parameter, but that doesn't work. I would like to plug the debug logs into the standard syslog, but I don't see how to do it. Any ideas?
The docker run --log-driver option specifies where to store your Docker container logs. The log we are talking about here is the one that you get from the docker logs command.
The content of that log is gathered from the container's process standard output and error output.
The debug.log file you are mentioning isn't sent to either the standard or error output, and as such won't be handled by Docker.
You have at least two options to persist those debug messages:
writing to stdout or stderr
You can make your application write its debug messages to the standard or error output instead of to the debug.log file. This way those debug messages will be handled by Docker and, given the --log-driver=syslog option, will be persisted by your host's syslog service.
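One common way to do that without touching the application's own logging configuration is to symlink the file to the container's stdout; a sketch, assuming the debug.log path implied by the question:

# In the image build (e.g. a Dockerfile RUN step), point debug.log at stdout
# so the Docker log driver picks up whatever the app writes there
ln -sf /dev/stdout /infra/py/appinstance/app/log/debug.log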
mount a volume
You can also use the docker run -v option to mount a directory from your Docker host into your container.
Then configure your application so that it writes the debug.log file to that mount point.
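A minimal sketch of that approach, reusing the syslog options from the question; the host directory, container log directory, and image name are placeholders:

# Mount a host directory over the app's log directory so debug.log
# survives container restarts
docker run -d \
  --log-driver=syslog --log-opt syslog-tag="app_name" \
  -v /var/log/app_name:/infra/py/appinstance/app/log \
  app_image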