I am using gelf as the log driver for my Docker container. In the log options I provided a UDP endpoint.
Now when I start the container, everything is working as expected.
My question is: is it possible to see the container logs on the host where it is running (not at the UDP endpoint)?
This depends on Docker version.
Docker 20.10 and up introduces “dual logging”, which uses a local buffer that allows you to use the docker logs command for any logging driver.
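For example, on 20.10 or newer something like this should still let you read the logs locally, even with the gelf driver (the address and image name here are just placeholders):
# start a container that logs via gelf
docker run -d --name myapp --log-driver=gelf --log-opt gelf-address=udp://graylog.example.com:12201 myimage
# thanks to dual logging, this still works on the host
docker logs -f myapp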
On older versions, if you are talking about seeing the logs via the docker logs command on the machine running the containers, it's not possible to do so when using other logging drivers.
See limitations of logging drivers.
If you know where the log file is inside the container, a workaround would be to write a script which copies the log file out of the container and displays it, or maybe just execs into the container and displays it. But I really wouldn't recommend that.
Something like:
#!/bin/bash
# copy the log file out of the container, then follow it locally
mkdir -p "$(pwd)/logs"
docker cp mycontainer:/var/log/mylog.log "$(pwd)/logs/mylog.log"
tail -f "$(pwd)/logs/mylog.log"
Related
I'm using an Nginx Docker container as the base image for an application. I'm redirecting Nginx logs to syslog, but I'm not sure of the best way to have the busybox syslogd started. It all works if I start it manually; I just need it to run as a daemon automatically when the container runs.
Seeing that nginx is in init.d I tried this in my Dockerfile:
RUN ln -s /bin/busybox syslogd /etc/init.d/syslogd || :
But syslogd still didn't run on start-up. Since the documentation says that only one CMD is allowed, I have the following hack:
FROM nginx:mainline-alpine
CMD nginx & busybox syslogd -n
This works, locally at least, but I'm wondering what is the proper solution. By default the container already symlinks log files to stdout and stderr but I don't want to use docker's syslog logging driver, because the application will be deployed to Kubernetes so I need a self-contained solution, that will work in the pod. Thank you!
Have your container log to stdout, but collect the logs elsewhere.
One option is to configure Docker itself to send container logs to syslog:
docker run --log-driver=syslog --log-opt syslog-address=udp://... ... nginx
Since the Docker daemon itself is configuring this, the syslog-address needs to be something that can be reached from the host. If you're running syslogd in a separate container, this option needs to point at a published port.
Another option is to use the standard Docker JSON-format logging, but use another tool to forward the logs to somewhere else. This has the downside of needing an additional tool, but the upside of docker logs working unmodified. Fluentd is a prominent open-source option. (Logstash is another, but doesn't seem to directly have a Docker integration.)
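As a rough sketch of the second approach, you could run Fluentd in its own container with the Docker log directory mounted read-only (the image tag and config path are assumptions on my part):
docker run -d --name fluentd \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v "$(pwd)/fluent.conf:/fluentd/etc/fluent.conf" \
  fluent/fluentd
Your fluent.conf would then tail /var/lib/docker/containers/*/*-json.log and forward the events wherever you need.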
I have a server that is the host OS for multiple docker containers. Each of the containers contains an application that is creating logs. I want these logs to be sent to a single place by using the syslog daemon, and then I want filebeat to transmit this data to another server. Is it possible to install filebeat on the HOST OS (without making another container for filebeat), and make the containers applications' log data be collected by the syslog daemon and then consolidated in /var/log on the host OS? Thanks.
You need to share a volume with every container in order to get your logs in the host filesystem.
Then, you can install filebeat on the host and forward the logs where you want, as they were "standard" log files.
Please be aware that Docker containers usually do not write their logs to real log files, but to stdout. That means you'll probably need custom images in order to fix this logging problem.
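A rough sketch of that setup, assuming the application writes its log to /var/log/app inside the container (all paths and the image name here are just examples):
# bind-mount a host directory over the container's log directory
docker run -d --name app1 -v /var/log/containers/app1:/var/log/app my-app-image
# filebeat on the host can then be pointed at /var/log/containers/*/*.log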
I want to see the logs from my Docker Swarm service. Not only because I want all my logs to be collected for the usual reason, but also because I want to work out why the service is crashing with "task: non-zero exit (1)".
I see that there is work to implement docker logs in the pipeline, but is there a way to access logs for production services? Or is Docker Swarm not ready for production wrt logging?
With Docker Swarm 17.03 you can now access the logs of a multi-instance service via the command line.
docker service logs -f {NAME_OF_THE_SERVICE}
You can get the name of the service with:
docker service ls
Note that this is an experimental feature (not production ready), and in order to use it you must enable experimental mode on the Docker daemon.
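At the time, enabling it meant setting the experimental flag on the daemon, for example in /etc/docker/daemon.json, and then restarting Docker:
{
  "experimental": true
}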
Update: docker service logs is now a standard feature of docker >= 17.06. https://docs.docker.com/engine/reference/commandline/service_logs/#parent-command
similar question: How to log container in docker swarm mode
What we've done successfully is utilize Graylog. If you look at the docker run documentation, you can specify a log driver and log options that allow you to send all console messages to a Graylog cluster.
docker run... --log-driver=gelf --log-opt gelf-address=udp://your.gelf.ip.address:port --log-opt tag="YourIdentifier"
You can also technically configure it at the global level for the docker daemon, but I would advise against that. It won't let you add the "Tag" option, which is exceptionally useful for filtering down your results.
Docker service definitions also support log driver and log options so you can use docker service update to adjust your services without destroying them.
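For example, something along these lines (the service name is a placeholder):
docker service update \
  --log-driver gelf \
  --log-opt gelf-address=udp://your.gelf.ip.address:port \
  --log-opt tag="YourIdentifier" \
  your_service_name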
As the documentation says:
docker service logs [OPTIONS] SERVICE|TASK
resource: https://docs.docker.com/engine/reference/commandline/service_logs/
I am trying to run syslog inside Docker so that it has access to DNS configuration for the container. Is it possible run syslog inside Docker and expose that to the host as host's syslog daemon?
Yes. I'm doing this at the moment, because I've got a containerised ELK (Elasticsearch/Logstash/Kibana).
My logstash runs a listener on port 514 for syslog traffic* which it forwards to ELK.
Well, more correctly, I'm running an haproxy instance that I redirect using confd and etcd to wherever my syslog container is, but the principle stands.
My hosts have
*.* @@localhost
in their rsyslog.conf
And it works nicely (and I can also log from my containers to this syslogd).
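For reference, the Logstash side of that listener can be a plain syslog input; a minimal sketch (the port and the output are up to you):
input {
  syslog {
    port => 514
  }
}
# send the events on to Elasticsearch (or stdout while testing) in the output section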
I think this is a bit old, but it can help others!
I think the best way is to use the Docker syslog logging driver to send logs to syslog instead of running syslog inside the container.
One of the best practices in Docker is to run only one process inside a container.
If you would like to have one container running syslog and forward all logs from the other containers to it, that would be a good idea, because you separate concerns and can also scale the log container.
There are ready-made syslog containers that do exactly that.
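A rough sketch of that pattern (image names, port and address are placeholders):
# run a dedicated syslog container and publish its UDP port
docker run -d --name syslog -p 514:514/udp some/syslog-image
# point the application containers at it via the syslog log driver
docker run -d --log-driver=syslog --log-opt syslog-address=udp://<host-ip>:514 my-app-image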
I created a customized Docker image based on Ubuntu 14.04 with the Sensu-Client package inside.
Everything went fine, but now I'm wondering how I can trigger the checks to run from the host machine.
For example, I want to be able to check the processes that are running on the host machine and not only the ones running inside the container.
Thanks
It depends on what checks you want to run. A lot of system-level checks work fine if you run the Sensu container with the --net=host and --privileged flags.
--net=host not only lets you see the same hostname and IP as the host system; all TCP connections and interface metrics will also match between container and host.
--privileged gives the container full access to system metrics like disk, memory and CPU.
The tricky part is checking external process metrics, as Docker isolates them even from a privileged container, but you can share the host's root filesystem as a Docker volume (-v /:/host) and patch the check to use chroot or read /host/proc instead of /proc.
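Putting those flags together, a typical invocation might look like this (the image name is a placeholder, and read-only is usually enough for checks):
docker run -d --name sensu-client \
  --net=host \
  --privileged \
  -v /:/host:ro \
  my-sensu-client-image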
Long story short, some checks will just work, for others you will need to patch them or develop your own approach, but Sensu in Docker is a viable option.
An unprivileged Docker container cannot check processes outside of its own container because Docker uses kernel namespaces to isolate it from all other processes running on the host. This is by design: docker security documentation
If you would like to run a super privileged docker container that has this namespace disabled you can run:
docker run -it --rm --privileged --pid=host alpine /bin/sh
Doing so removes an important security layer that docker provides and should be avoided if possible. Once in the container, try running ps auxf and you will see all processes on the host.
I don't think this is possible right now.
If the processes on the host are themselves running inside Docker, you can mount the Docker socket and get their status from the Sensu container.
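If you go that route, a sketch would be to mount the Docker socket into the Sensu container (the image name is a placeholder):
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-sensu-client-image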
Add a sensu-client to the host machine? You might want to split it out so you have a clear separation between problems in the containers vs. problems with your hosts.
Else, you would have to set up some way to report from the inside, either using something low-level (system calls etc.) or set up something from outside to catch the calls and report back status.
HTHs
Most if not all sensu plugins hardcode the path to the proc files. One option is to mount the host proc files to a different path inside of the docker container and modify the sensu plugins to support this other location.
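For example, something like this, assuming the plugins have been patched to read /host/proc instead of /proc (the image name is a placeholder):
docker run -d -v /proc:/host/proc:ro my-sensu-client-image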
This is my base docker container that supports modifying the sensu plugins proc file location.
https://github.com/sstarcher/docker-sensu