I am trying to set up Filebeat on Docker. The rest of the stack (Elastic, Logstash, Kibana) is already set up.
I want to forward syslog files from /var/log/ to Logstash with Filebeat. I created a new filebeat.yml file on the host system under /etc/filebeat/ (I created this filebeat directory myself; not sure if that's correct?):
output:
  logstash:
    enabled: true
    hosts: ["localhost:5044"]
filebeat:
  inputs:
    -
      paths:
        - /var/log/syslog
        - /var/log/auth.log
      document_type: syslog
Then I ran the Filebeat container:
sudo docker run -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:7.4.2
It runs, but no files are actually being forwarded to Logstash. I am thinking the issue is with the filebeat.yml configuration...
Any thoughts?
As David Maze intimated, the reason Filebeat isn't forwarding any logs is that it only has access to logs inside its own container. You can share the host's logs with the container using another bind mount. My preferred option when using Docker + Filebeat, though, is to have Filebeat listen on a TCP/IP port and have the log source forward logs to that port.
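For the bind-mount route, here is a minimal sketch based on the original command, adding a read-only mount of the host's /var/log so the container can actually see those files (mounting to /var/log inside the container is an assumption; the target must match the paths in filebeat.yml):
sudo docker run \
  -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
  -v /var/log:/var/log:ro \
  docker.elastic.co/beats/filebeat:7.4.2
Note that with this setup, localhost in hosts: ["localhost:5044"] refers to the Filebeat container itself, so the Logstash address will likely also need to point at the host's IP or a shared Docker network alias.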
Related
I am trying to run mqtt as a container using Docker Desktop, but per the logs I am getting an Address in use error when running the command:
mosquitto -c mosquitto.conf
Below is my docker-compose file:
version: '3.5'
services:
  mosquitto:
    image: eclipse-mosquitto
    container_name: mosquitto_container
    ports:
      - 1883:1883
    volumes:
      - ./config:/mosquitto/config
And the mosquitto.conf file:
listener 1883 127.0.0.1
allow_anonymous true
All the articles I found online, as well as the mqtt docs, mention that a listener must be specified in the conf file, which I have added.
My goal is to run mqtt as a container and then publish messages to the broker from a .net utility.
I am using Docker v20.10.10 on Windows 10, and the mqtt image version is 2.0.14.
Please guide me.
The problem is most likely that you are already running mosquitto on your Docker host machine (Windows) on port 1883, so when Docker tries to bind the container's port to the same host port, it clashes.
Either stop the version running on the Windows host machine, or change the host port the Docker version is mapped to in the ports section of the docker-compose file.
It is also possible that you already have a running container bound to that port; running docker ps -a will show which containers exist and which ports they are bound to.
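For example, remapping the host side of the binding avoids the clash (1884 here is just an arbitrary free port, an assumption; the container side stays 1883):
ports:
  - 1884:1883
Your .net utility would then connect to the broker on localhost:1884.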
I am forwarding my Docker containers' logs to Fluentd at a remote location. I do not want the containers to generate logs locally on the same machine anymore (just to avoid storage space consumption).
Following is my configuration in docker-compose file for my services.
logging:
  driver: fluentd
  options:
    fluentd-address: dev-fluentd_ip.com:2224
Is there any way to stop Docker containers from generating logs locally?
I need to forward docker logs to a ELK stack.
The administrator of the stack filters my logs according to the type parameter of the message. Right now I use Filebeat and have to set the document_type parameter so the Logstash configuration filters my messages properly.
I am now trying to avoid using Filebeat, because I am going to instantiate my EC2 machines on demand and did not want to have to install Filebeat on each of them at runtime.
I already saw that there is a syslog driver, among others, available. I set the syslog driver and the messages reach Logstash, but I cannot find how to set a value for document_type as in Filebeat. How can I send this metadata to Logstash using the syslog driver, or any other Docker-native driver?
Thanks!
Can't you give your syslog output a tag like so:
docker run -d --name nginx --log-driver=syslog --log-opt syslog-address=udp://LOGSTASH_IP_ADDRESS:5000 --log-opt syslog-tag="nginx" -p 80:80 nginx
And then in your logstash rules:
filter {
  if "nginx" in [tags] {
    # add_field is an option of filter plugins, so it has to live inside one, e.g. mutate
    mutate {
      add_field => { "type" => "nginx" }
    }
  }
}
To use traefik as a reverse proxy in front of a Docker container whose dynamic IP address might change over time, traefik comes with a Docker backend. All the examples I could find for setting this up follow the same pattern:
First, start traefik in docker mode without an extra configuration file, activate host network mode (optional, so that traefik can see all Docker networks on the host if required) and mount the Docker unix socket so that traefik can listen to container starts and stops.
docker run --rm -p 80:80 --net=host --name traefik-reverse-proxy -v /dev/null/traefik.toml:/etc/traefik/traefik.toml -v /var/run/docker.sock:/var/run/docker.sock traefik --docker --loglevel debug
Then, start another container and set at least the following labels:
traefik.backend: "some-backend-name"
traefik.frontend.rule: "Host: localhost; Method: GET" (or whatever your rules are)
traefik.port: 80 (or whatever port your container exposes internally)
Example:
docker run --rm --name nginx -l traefik.backend="some-backend-name" -l traefik.frontend.rule="Host: localhost; Method: GET" -l traefik.port="80" nginx
Then, doing a curl localhost, one can see in the logs of the traefik container that it took the request and routed it to the NGINX container.
So far, so good... however, I do not like that I have to configure my reverse-proxy forwarding rules (e.g. forward Host: some.host.name to container xxx) within the application itself, where my docker-compose files setting up the containers, labels, etc. usually live. Rather, I would like to separate this from the application and configure it as part of traefik's configuration instead.
Is this possible somehow? What I tried is leaving out the traefik.frontend.rule label from the example nginx container and instead mounting the following configuration file for traefik:
[frontends]
  [frontends.frontend1]
  backend = "some-backend-name"
    [frontends.frontend1.routes.test_1]
    rule = "Host: localhost; Method: GET"
The startup command for traefik thus becomes:
docker run --rm -p 80:80 --net=host --name traefik-reverse-proxy -v $PWD/traefik.toml:/etc/traefik/traefik.toml -v /var/run/docker.sock:/var/run/docker.sock traefik --docker --loglevel debug
However, this does not seem to attach the frontend rule from the config file to the backend labeled on the nginx container: curl localhost now returns a 404 Not found error.
The watch flag only seems to take effect once rules.toml has been changed for the first time.
In your case, I suggest you write a service that maintains your rules in a KV store such as etcd or ZooKeeper: the service writes rule changes into etcd, and traefik picks its configuration updates up from there.
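A sketch of what pushing such a rule into etcd might look like, using traefik v1's documented KV layout (the /traefik prefix is the default; the frontend and backend names are carried over from the example above, and a reachable etcd endpoint is assumed):
# write the frontend definition into the KV store traefik watches
etcdctl put /traefik/frontends/frontend1/backend some-backend-name
etcdctl put /traefik/frontends/frontend1/routes/test_1/rule "Host: localhost; Method: GET"
traefik would then be started with its etcd backend enabled instead of the file backend.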
This is likely an order-of-operations issue. Enabling debug logging in the config (debug = true) shows that traefik parses the config-file frontend rules first, and only later generates frontends and backends based on what's running in Docker.
This means that the Docker backends don't exist yet when the frontends from the config are created, and traefik throws an error.
One solution is to put your rules config in a separate file (e.g. rules.toml, as shown in the docs) and add the watch = true directive to your config. That way, the frontend rules you define there are (re)applied after the backends from Docker have been generated.
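A minimal sketch of that layout, assuming traefik v1's file backend (the [file] section with filename and watch is the documented mechanism; the file names and the rule itself are carried over from the example above):
# traefik.toml
debug = true
[file]
filename = "rules.toml"
watch = true
# rules.toml
[frontends]
  [frontends.frontend1]
  backend = "some-backend-name"
    [frontends.frontend1.routes.test_1]
    rule = "Host: localhost; Method: GET"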
We should probably submit a bug for this, because it's not exactly desirable functionality.
Can logs in a Docker container, say logs located in /var/log/syslog, get shipped to Logstash without using any additional components such as lumberjack or logspout?
Just wondering, because I set up an environment and tried to make it work with syslog (so that syslog ships the logs from the Docker container to Logstash), but so far it's not working; I wonder if there's something wrong with my logic.
There's no way for messages in /var/log/syslog to magically route to logstash without something configured to forward messages. Something must send the logs to logstash. You have a few options:
Configure your app to send log messages to stdout rather than to /var/log/syslog, and run logspout to collect stdout from all the running containers and send messages to your logstash endpoint.
Run a syslog daemon such as rsyslog inside your container and configure it to send messages to your logstash endpoint (see the sketch after this list)
Bind mount /dev/log from the host to your container by passing -v /dev/log:/dev/log to docker run when starting your container. On the host, configure your syslog daemon to send messages to logstash.
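For the rsyslog-based options, the forwarding rule itself is a one-liner. A sketch, assuming rsyslog and a Logstash input listening for syslog over TCP on port 5000 (address and port are placeholders):
# /etc/rsyslog.d/90-logstash.conf -- @@ forwards via TCP, a single @ would use UDP
*.* @@LOGSTASH_IP_ADDRESS:5000
After restarting rsyslog, every syslog message is forwarded to that endpoint.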
You could use the Docker syslog driver to send logs straight from Docker containers to Logstash; you just have to add some parameters when you run your container:
https://docs.docker.com/engine/admin/logging/overview/#supported-logging-drivers
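For instance, a sketch mirroring the nginx example earlier in this thread (the Logstash address and port are placeholders, and it assumes Logstash has a syslog input listening there):
docker run -d --log-driver=syslog --log-opt syslog-address=udp://LOGSTASH_IP_ADDRESS:5000 nginx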