Redirect docker daemon logs to elasticsearch - docker

I have a docker swarm cluster and am able to get all docker "container" logs to ELK stack.
But I am unable to get the docker daemon logs. Can someone please guide me to achieve this?
FYI: My stack is on Linux.

You can use the Filebeat plugin to send the logs from the daemon log file to your ELK (plugin presentation page).
There is an article on this point on the blog. Your configuration will be different since you don't want container logs but Docker daemon logs, found at the path /var/log/docker.log or /var/log/daemon.log.
Since in your environment the logs are readable with journalctl, I dug around the internet and found an ELK plugin that allows you to send the logs from journald:
I hope it'll help.

1st: you'd need to find out where your docker daemon is saving the logs, which depends on the Linux distribution. See this response with a list of possible places:
2nd: you can use the suggestion of Paul Rey and use Filebeat. As an alternative, I also suggest the use of Fluentd, which you can usually use in place of Logstash, giving you EFK instead of ELK, or simply as an extra tool in your ELK environment.
It can also read from a file using the tail input plugin.
It can also insert data into Elasticsearch using the elasticsearch out plugin.
This tutorial teaches how to log containers, but then you'd need to change your input plugin to tail from that file: Docker logging via EFK
I'd also like to add that, if you're interested in logging the daemon, you probably want to log even if docker is failing to start. So I'd install Fluentd directly on the host. NOT in a container.


Best client for loki-grafana with docker applications in all OS

I am implementing the loki-grafana log management system and I have several questions.
First of all I want to put you in the context of my environment:
Applications in java which log to different files / daemons
They are in docker linux containers
These containers can run on a linux/windows/debian OS ....
I guess the right option is to run both Loki and grafana in docker containers on the machine together with the rest of the containers.
My question comes with: Which client do I use to join the logs of my services/applications to loki-grafana? Grafana gives us the following alternatives
Promtail: This is the default one used by the loki-grafana guide, but I haven't been able or haven't seen yet the way to make it read the log of other applications in docker. I was thinking about doing it sharing volumes with the host, but it seems to me that there may be clients that make this easier for me ...
AWS: I don't use the cloud, discarded too.
Docker driver: It is the one that recommends you with docker, but not being able to run plugins on windows is discarded. (Which is a problem)
Fluentbit: It is a very powerful metrics processor, but in principle I only want to pass the logs to grafana and manage it from loki/grafana. Would I be interested in this option for my case?
Fluentd: I find it very similar to logstash, but it seems that you can configure the pass/user which puts it above logstash.
Logstash: in principle it is linked to Loki and runs the same image, seems like a very good option.
Here is the info on the clients.
Any contributions are welcome.
You can get logs from docker to loki with promtail, you only need to bind logs dir from docker to promtail container an.
Fluent stack works well too, but promtail is more ready to use.

Where does Docker save logs?

Docker seems to allow specifying any log driver of choice, either through /etc/docker/daemon.json or through options while running a container. Further, it allows specifying driver options too, but is it possible to mention the location where the logs themselves get stored? Or at least, can I know where docker is saving the logs even if the location is not customizable?
Reference: For example consider the default driver - JSON File logging driver
Environments to consider: Ubuntu/CentOS/Windows etc... but looking for generic solution.
If you want to check the docker daemon logs, then here is the location where you can find them.
To check the logs of containers:
In the case of the default logging driver, json-file, you can get the logs using the command:
docker logs container-id
Or get the location of a specific container's logs using docker inspect:
docker inspect --format='{{.LogPath}}' container-id
Hope this helps.

Consuming pod logs on Openshift with Filebeat

I've configured a Filebeat instance, and while it was running without errors, I figured out that it does nothing.
I found the following line in the log:
INFO log/input.go:138 Configured paths: [/var/lib/docker/containers/*/*.log]
After a quick check I found out that the difference between OpenShift and pure docker is that in docker the directories under /var/lib/docker/containers contain log files, and under OpenShift they don't.
How should I configure filebeat to work under openshift?
AFAIK OpenShift also logs container output in the /var/lib/docker/containers/<hash>/*-json.log format; refer to Viewing available container logs for more details. If you cannot find them in that directory, your docker log driver might be configured as journald; you can check this in /etc/sysconfig/docker.
OPTIONS=' --selinux-enabled --log-driver=journald --signature-verification=False'
Then you should change journald to json-file for logging into /var/lib/docker/containers/<hash>/*-json.log.
OPTIONS=' --selinux-enabled --log-driver=json-file --signature-verification=False'
You need to restart docker.service for the change to take effect.
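The check described in this answer, as an added example that is not part of the original post: it reads the `OPTIONS=` line from `/etc/sysconfig/docker` (and, as a fallback, the `log-driver` key of `/etc/docker/daemon.json`) and reports whether Filebeat can expect `*-json.log` files. The function names are hypothetical; the sample line is the one quoted above.

```python
import json
import shlex

DEFAULT_DRIVER = "json-file"  # Docker's default when no driver is configured

# The OPTIONS line quoted in the answer above.
SAMPLE_SYSCONFIG = "OPTIONS=' --selinux-enabled --log-driver=journald --signature-verification=False'\n"

def driver_from_options(sysconfig_text):
    """Return the --log-driver value from an OPTIONS='...' line, or None."""
    for line in sysconfig_text.splitlines():
        line = line.strip()
        if not line.startswith("OPTIONS="):
            continue  # skips comments and unrelated variables
        # First split strips the shell quoting, second split separates the flags.
        quoted = shlex.split(line[len("OPTIONS="):])
        args = shlex.split(quoted[0]) if quoted else []
        for i, arg in enumerate(args):
            if arg.startswith("--log-driver="):
                return arg.split("=", 1)[1]
            if arg == "--log-driver" and i + 1 < len(args):
                return args[i + 1]
    return None

def effective_driver(sysconfig_text="", daemon_json_text=""):
    """Flag from sysconfig first, then daemon.json, then Docker's default."""
    flag = driver_from_options(sysconfig_text)
    if flag:
        return flag
    if daemon_json_text.strip():
        return json.loads(daemon_json_text).get("log-driver", DEFAULT_DRIVER)
    return DEFAULT_DRIVER

def filebeat_hint(driver):
    """Explain what a file-based Filebeat input will find for this driver."""
    if driver == "json-file":
        return "files expected under /var/lib/docker/containers/<hash>/*-json.log"
    if driver == "journald":
        return "no *-json.log files; container output goes to the journal"
    return "no *-json.log files; driver %s does not write them" % driver

if __name__ == "__main__":
    driver = effective_driver(SAMPLE_SYSCONFIG)
    print(driver, "->", filebeat_hint(driver))
```

On a real host, pass the contents of `/etc/sysconfig/docker` and `/etc/docker/daemon.json` instead of the sample string.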

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here: does not work in my case.
Now I'm thinking about writing my own web application which will create some docker compose files for me based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file of a running container? So that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error prone user entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
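One way to turn the `docker container inspect` output mentioned in this answer into a Compose service definition, as an added example that is not part of the original answer. The inspect record is a constructed sample, the function names are hypothetical, and the secret-name heuristic is an assumption; note that `Config.Env` and `Config.Cmd` also contain values inherited from the image.

```python
import json

# Constructed sample: trimmed `docker container inspect` output for one container.
SAMPLE_INSPECT = json.loads("""
[{"Name": "/web",
  "Config": {"Image": "nginx:1.25",
             "Env": ["PATH=/usr/local/sbin:/usr/bin", "API_TOKEN=example-not-real"],
             "Cmd": ["nginx", "-g", "daemon off;"]},
  "HostConfig": {"Binds": ["/srv/web/html:/usr/share/nginx/html:ro"],
                 "PortBindings": {"80/tcp": [{"HostIp": "", "HostPort": "8080"}]},
                 "RestartPolicy": {"Name": "unless-stopped", "MaximumRetryCount": 0}}}]
""")

# Assumption: variable names containing these words hold secrets.
SECRET_HINTS = ("TOKEN", "SECRET", "PASSWORD", "KEY")

def to_service(container):
    """Map one inspect record to (service name, Compose service dict)."""
    cfg, host = container["Config"], container["HostConfig"]
    ports = []
    for container_port, bindings in (host.get("PortBindings") or {}).items():
        for b in bindings or []:
            prefix = b["HostIp"] + ":" if b.get("HostIp") else ""
            ports.append("%s%s:%s" % (prefix, b["HostPort"], container_port))
    env = {}
    for item in cfg.get("Env") or []:
        name, _, value = item.partition("=")
        # Keep secret values out of a file meant for version control:
        # Compose substitutes ${NAME} from the deploying shell's environment.
        if any(h in name.upper() for h in SECRET_HINTS):
            value = "${%s}" % name
        env[name] = value
    service = {"image": cfg["Image"], "command": cfg.get("Cmd"),
               "environment": env, "volumes": host.get("Binds") or [], "ports": ports}
    restart = (host.get("RestartPolicy") or {}).get("Name")
    if restart and restart != "no":
        service["restart"] = restart
    return container["Name"].lstrip("/"), service

def to_compose(inspect_output):
    """Build a Compose document (as a dict) from a list of inspect records."""
    return {"services": dict(to_service(c) for c in inspect_output)}

if __name__ == "__main__":
    # JSON is also valid YAML, so the output can be read as a Compose file.
    print(json.dumps(to_compose(SAMPLE_INSPECT), indent=2))
```

Bind-mounted host directories listed under `volumes` still have to be copied to the new server separately; only the container's settings are captured here.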
You would like to back up the instance, but the commands you're providing are for backing up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the route of saving the instance's current status, you should use the docker export and docker import commands.
NOTE: docker export does not export the content of the volumes anyway; I suggest you should refer to

Attached docker stack deploy

The docker-compose utility is attached to the terminal by default, allowing you to see what's happening with all of your containers, which is very convenient for development. Does the docker stack deploy command support something like this, where the activity of the running containers gets rendered in one terminal in real time?
According to Docker website the only log displayed is:
docker stack deploy --compose-file docker-compose.yml vossibility
Ignoring unsupported options: links
Creating network vossibility_vossibility
Creating network vossibility_default
Creating service vossibility_nsqd
Creating service vossibility_logstash
Creating service vossibility_elasticsearch
Creating service vossibility_kibana
Creating service vossibility_ghollector
Creating service vossibility_lookupd
However, there's a command which displays the logs:
docker service logs --follow
Therefore, on a Linux system you could combine both commands and you will get the desired output.
What you're looking for is a merged output of the logs ("attached" for a stack deploy is a different thing with progress bars).
You can't get the logs for the full stack just yet (see issue #31458 to track the progress of this request), but you can get the logs for all of the containers in a service with docker service logs.
