I need to forward Docker logs to an ELK stack.
The administrator of the stack filters my logs according to the type parameter of the message. Right now I use Filebeat and have to set the document_type parameter so the Logstash configuration filters my messages properly.
I am now trying to avoid using Filebeat, because I am going to instantiate my EC2 machines on demand and do not want to have to install Filebeat on each of them at runtime.
I already saw that there is a syslog driver available, among others. I set the syslog driver and the messages reach Logstash, but I cannot find a way to set a value for the document_type as I do in Filebeat. How can I send this metadata to Logstash using the syslog driver, or any other native Docker logging driver?
Thanks!
Can't you give your syslog output a tag like so:
docker run -d --name nginx --log-driver=syslog --log-opt syslog-address=udp://LOGSTASH_IP_ADDRESS:5000 --log-opt syslog-tag="nginx" -p 80:80 nginx
And then in your logstash rules:
filter {
  if "nginx" in [tags] {
    mutate {
      add_field => { "type" => "nginx" }
    }
  }
}
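For the messages to arrive at all, Logstash also needs an input listening on the port used in syslog-address above. A minimal sketch, assuming the stock syslog input plugin and port 5000 (use whatever port and protocol your administrator expects):

input {
  syslog {
    port => 5000   # the syslog input listens on both TCP and UDP
  }
}

Note that, depending on how the syslog input parses the message, the value of syslog-tag may land in a field such as program rather than in [tags], so inspect one event before relying on the conditional above.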
I am trying to set up Filebeat on Docker. The rest of the stack (Elastic, Logstash, Kibana) is already set up.
I want to forward syslog files from /var/log/ to Logstash with Filebeat. I created a new filebeat.yml file on the host system under /etc/filebeat/ (I created this filebeat directory myself; not sure if that's correct):
output:
  logstash:
    enabled: true
    hosts: ["localhost:5044"]
filebeat:
  inputs:
    -
      paths:
        - /var/log/syslog
        - /var/log/auth.log
      document_type: syslog
Then I ran the Filebeat container: sudo docker run -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:7.4.2
It is able to run, but no files are actually being forwarded to logstash. I am thinking the issue is with the filebeat.yml configuration...
Any thoughts?
As David Maze intimated, the reason Filebeat isn't forwarding any logs is that it only has access to the files inside its own container. You can share the host's logs with another bind mount, as sketched below. My preferred option when combining Docker and Filebeat, though, is to have Filebeat listen on a TCP port and have the log source forward logs to that port.
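A minimal sketch of the bind-mount approach, assuming the filebeat.yml from the question lives at /etc/filebeat/filebeat.yml on the host; the host's /var/log is mounted read-only at /var/log/host inside the container, so the paths in filebeat.yml would need to change to /var/log/host/syslog and /var/log/host/auth.log:

# mount both the config and the host's logs into the Filebeat container
sudo docker run \
  -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro \
  -v /var/log:/var/log/host:ro \
  docker.elastic.co/beats/filebeat:7.4.2

Also note that localhost inside the Filebeat container is not the host, so hosts: ["localhost:5044"] would need to point at the address where Logstash is actually reachable from the container.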
I used the command below to start the Splunk server using Docker.
docker run -d -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_USER=root" -p "8000:8000" splunk/splunk
But when I open the URL localhost:8000, I get a "Server can't be reached" message.
What am I missing here?
I followed this tutorial: https://medium.com/@caysever/docker-splunk-logging-driver-c70dd78ad56a
Depending on your Docker version and host OS, you could be missing a mapping of port 8000 from the VirtualBox VM to your host.
This should not be needed if you are using Hyper-V (Windows host) or xhyve (Mac host), but it can still be needed with VirtualBox.
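If VirtualBox is indeed in the picture, a sketch of adding that forwarding with VBoxManage, assuming the default VM name boot2docker-vm (adjust to your setup):

# while the VM is stopped:
VBoxManage modifyvm "boot2docker-vm" --natpf1 "splunk-web,tcp,127.0.0.1,8000,,8000"
# or while it is running:
VBoxManage controlvm "boot2docker-vm" natpf1 "splunk-web,tcp,127.0.0.1,8000,,8000"

Alternatively, skip the forwarding and browse to the VM's IP directly (e.g. http://192.168.59.103:8000, the address that boot2docker ip reports).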
The link to the Docker Image is https://hub.docker.com/r/splunk/splunk/. Referring to this we can see some details related to pulling and running the image. According to the link the right command is:
docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=<password>" --name splunk splunk/splunk:latest
This works correctly for me. The image uses Ansible to do the configuration once the container has been created. If you do not specify a password, the respective Ansible task will fail and your container will not be configured.
To follow the progress of the container configuration, you may run this command after running the above command:
docker logs -f splunk
This assumes the name of your container is splunk. Here you will be able to see the progress of Ansible configuring Splunk.
In case you are looking to create a clustered Splunk deployment then, you might want to have a look at this: https://github.com/splunk/docker-splunk
Hope this helps!
I am trying to use the syslog driver to collect logs,
using the following command:
docker run --log-driver syslog --log-opt syslog-address=udp://localhost:514 --log-opt tag=expconf app-1
Where does syslog write these logs?
I want the logs to be written to a specific file/location. How can I achieve this?
I am trying to get JMX monitoring working to monitor a test kafka instance.
I have kafka (ches/kafka) running in docker via boot2docker, but I cannot get JMX monitoring configured properly. I have done a bunch of troubleshooting and I know the kafka instance is running properly (consumers and producers work). The issue arises when I try simple JMX tools (jconsole and jvisualvm): neither can connect (insecure connection error, connection failed).
Configuration items of note: I connect to 192.168.59.103 (the VirtualBox IP reported by running 'boot2docker ip'), and the ches/kafka instance uses port 7203 as the JMX_PORT (confirmed in the kafka startup logs). Using jconsole, I connect to 192.168.59.103:7203, and that is when the errors occur.
Any help is appreciated.
For completeness, here is the solution that works:
I ran the ches/kafka Docker image as follows -- note that the JMX_PORT (7203) is now published appropriately:
$ docker run --hostname localhost --name kafka --publish 9092:9092 --publish 7203:7203 --env EXPOSED_HOST=192.168.59.103 --env ZOOKEEPER_IP=192.168.59.103 ches/kafka
Also, the following JMX options are set in kafka-run-class.sh (.bat on Windows):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
But I needed to add one additional item (thanks to one of the commenters for pointing this out):
-Dcom.sun.management.jmxremote.rmi.port=7203
Now, to run the ches/kafka image in boot2docker you just need to set one of the recognized environment variables (KAFKA_JMX_OPTS or KAFKA_OPTS) to add that additional option, and it works; a sketch of the full command is below.
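A minimal sketch of what that could look like, assuming the image's start script honors KAFKA_JMX_OPTS (the IPs mirror the boot2docker setup above and should be adjusted to your environment):

# publish the JMX port and pass the full set of JMX options, including rmi.port
docker run --hostname localhost --name kafka \
  --publish 9092:9092 --publish 7203:7203 \
  --env EXPOSED_HOST=192.168.59.103 --env ZOOKEEPER_IP=192.168.59.103 \
  --env KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=7203" \
  ches/kafka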
Thanks for the help!
There's no reason the Kafka container's port would be bound to the same port on the boot2docker VM unless you specify it.
Try running it with -p 7203:7203 to force the 1:1 forwarding of the port.
Can logs in a Docker container (say, logs located in /var/log/syslog) get shipped to Logstash without using any additional components such as Lumberjack or Logspout?
I ask because I set up an environment and tried to make it work with syslog (so that syslog ships the logs from the Docker container to Logstash), but so far it's not working, and I'm wondering if there's something wrong with my logic.
There's no way for messages in /var/log/syslog to magically route to logstash without something configured to forward messages. Something must send the logs to logstash. You have a few options:
Configure your app to send log messages to stdout rather than to /var/log/syslog, and run logspout to collect stdout from all the running containers and send the messages to your logstash endpoint (there is a sketch of this after the list of options).
Run a syslog daemon such as rsyslog inside your container and configure it to send messages to your logstash endpoint.
Bind mount /dev/log from the host to your container by passing -v /dev/log:/dev/log to docker run when starting your container. On the host, configure your syslog daemon to send messages to logstash.
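A minimal sketch of the logspout option from the first bullet, assuming Logstash listens for syslog on udp://LOGSTASH_IP:5000 (substitute your real endpoint); logspout needs the Docker socket mounted so it can read every container's stdout:

docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+udp://LOGSTASH_IP:5000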
You could use the Docker syslog log driver to send logs straight from your containers to logstash; you just have to add a couple of parameters when you run your container (sketched below).
https://docs.docker.com/engine/admin/logging/overview/#supported-logging-drivers
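A sketch of those parameters, assuming Logstash exposes a syslog input on udp://LOGSTASH_IP:5000 and that my-app is a placeholder for your image:

docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://LOGSTASH_IP:5000 \
  --log-opt tag="my-app" \
  my-app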