I have the following setup in docker:
Application (httpd)
Fluentd
ElasticSearch
Kibana
The application's log driver is configured to point at the fluentd container. The logs are then stored in Elasticsearch and shown in Kibana.
When the log driver is configured like this, it works:
web:
  image: httpd
  container_name: httpd
  ports:
    - "80:80"
  links:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:24224
      tag: httpd.access
And fluentd maps its exposed port 24224 onto port 24224 of the host.
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
  ports:
    - "24224:24224"
But I don't want to expose fluentd on the host network. I want to keep it 'private' inside the Docker network (I only want to map the app and Kibana onto the host network), like this:
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
Port 24224 is still exposed (in the Dockerfile), but it is not mapped onto the host network. Now I want to change the log driver config of my app:
logging:
  driver: "fluentd"
  options:
    fluentd-address: fluentd:24224
    tag: httpd.access
Here fluentd is the name of the fluentd container, and both containers are on the same network, but the app is not able to connect to it:
failed to initialize logging driver: dial tcp: lookup fluentd
Is this maybe because the logging option is evaluated before the 'link' option in the compose file?
Is there a way to make this work?
This is not currently possible. The Docker daemon, which handles the log drivers, is a process running on the host machine. It is not a service in your network and is therefore unable to resolve service names to IPs. See this GitHub issue for a more detailed explanation.
You will have to publish a port for this to work.
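If the worry is exposing fluentd to the outside world, one mitigation (a sketch, not part of the original answer) is to publish the port bound to the host's loopback interface only; the log driver dials from the daemon on the host, so localhost:24224 still works:

fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
  ports:
    # bind only on the host loopback interface: reachable by the
    # Docker daemon (and thus the log driver) via localhost:24224,
    # but not from other machines
    - "127.0.0.1:24224:24224"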
Related
I have my ELK deployed on an EC2 instance and a dockerized application running on a different instance. I am trying to use gelf to collect the logs of the different services and send them to Logstash, but my current configuration doesn't work.
Here's my docker.yaml file and my Logstash conf file. For the gelf address I used the private IP of the instance where Logstash is running; is that what I should be using in this case? What am I missing?
version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    links:
      - redis:redis
    depends_on:
      - redis
    logging:
      driver: gelf
      options:
        gelf-address: "udp://10.0.1.98:12201"
        tag: "dockerlogs"
  redis:
    image: "redis:alpine"
    expose:
      - "6379"
    logging:
      driver: gelf
      options:
        gelf-address: "udp://10.0.1.98:12201"
        tag: "redislogs"
This is my logstash conf:
input {
  beats {
    port => 5044
  }
  gelf {
    port:12201
    type=> "dockerLogs"
  }
}
output {
  elasticsearch {
    hosts => ["${ELK_IP}:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
Verify your Docker version, and check that your Logstash syntax is correct: options inside a Logstash input use =>, so the gelf block should read port => 12201 and type => "dockerLogs".
Also, Docker resolves the gelf address through the host's network, so the address needs to be an address of the Logstash server that is reachable from the application host.
Finally, why not write directly to Elasticsearch? You are only sending application logs and are not using the filtering that Logstash would add.
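For reference, a corrected gelf input block (only the syntax changes; the port and type values are taken from the question above):

input {
  gelf {
    port => 12201
    type => "dockerLogs"
  }
}

Note that the gelf input listens on UDP by default, which matches the udp:// scheme used in the gelf-address above.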
see also: Using docker-compose with GELF log driver
I want to bring up a (very basic) container that reads data from a file and emits it to stdout, to be picked up by the fluentd log driver (the container's log configuration).
I started with the below service in docker-compose:
image: httpd
ports:
  - "8010:80"
depends_on:
  - fluentd
logging:
  driver: "fluentd"
  options:
    fluentd-address: 127.0.0.1:24224
    fluentd-async: 'true'
When I do a curl http://localhost:8010/, I can see the logs routed to the fluentd container. Now I want to route data from a file to stdout, and from there to fluentd.
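A minimal sketch of such a file-reading service, assuming the data file is mounted at /data/input.log (the service name, image, and paths are my own illustration, not from the question): tailing the file to stdout lets the fluentd log driver pick up each line.

filereader:
  image: busybox
  # tail -F keeps following the file across rotation and truncation
  command: ["tail", "-F", "/data/input.log"]
  volumes:
    - ./data:/data
  depends_on:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: 127.0.0.1:24224
      fluentd-async: 'true'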
I have the following configuration in my docker-compose file:
fluentd:
  build: ./fluentd
  container_name: fluentd
  expose:
    - 24224
    - 24224/udp
  depends_on:
    - "elasticsearch"
  networks:
    - internal

public-site:
  build: ./public-site
  container_name: public-site
  depends_on:
    - fluentd
  logging:
    driver: fluentd
    options:
      tag: public-site
  networks:
    - internal

networks:
  internal:
When I start the app using docker-compose up, the webserver exits with the error message ERROR: for public-site Cannot start service public-site: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused.
On the other hand, when I publish the ports from fluentd (ports: 24224:24224), it works. The problem is that I don't want to publish those ports on the host, since that bypasses the Linux firewall (i.e. it exposes the fluentd port to everyone, see here).
This is confusing, since exposing a port should make it available to every container on the network. I am using an internal network between fluentd and the webserver, so I would expect the exposed ports of fluentd to be enough (which isn't the case).
When I connect to the webserver container, I can ping and resolve the fluentd container, so there is a connection. For some reason, however, at startup the app won't accept a fluentd address with no published ports.
Communicating with 127.0.0.1 is always problematic from inside a container. I found this explanation in the docs, which puts it better than I could:
To use the fluentd driver as the default logging driver, set the log-driver and log-opt keys to appropriate values in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\daemon.json on Windows Server. For more about configuring Docker using daemon.json, see daemon.json.
The following example sets the log driver to fluentd and sets the fluentd-address option.
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "fluentd:24224"
  }
}
src: https://docs.docker.com/config/containers/logging/fluentd/
EDIT: this works until you want an application on the host to communicate with the dockerized fluentd (then it's a pain).
I was facing the same issue and solved it by using a static IP address.
logging:
  driver: fluentd
  options:
    fluentd-address: 172.24.0.5:24224
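For the static address to survive restarts, the fluentd container can be pinned to a fixed IP on a user-defined network. A sketch, where the network name and subnet are assumptions:

networks:
  logging:
    ipam:
      config:
        - subnet: 172.24.0.0/16

services:
  fluentd:
    build: ./fluentd
    networks:
      logging:
        # fixed address, so fluentd-address: 172.24.0.5:24224 stays valid
        ipv4_address: 172.24.0.5

This works on Linux because the host (and therefore the Docker daemon running the log driver) can reach container IPs on a local bridge network directly.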
I was facing the same error. After checking the example config on the Fluentd official site, I was able to connect to fluentd through links.
Below is my configuration that works:
version: "3.5"
networks:
test:
services:
flog:
container_name: flog
image: mingrammer/flog:0.4.3
command: -t stdout -f apache_common -d 1s -l
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
links:
- fluentd
networks:
- test
fluentd:
container_name: fluentd
image: moonape1226/fluentd-with-loki-plugin:v1.13-1
ports:
- "24224:24224"
- "24224:24224/udp"
volumes:
- ./config/fluentd/fluent.conf:/fluentd/etc/fluent.conf
networks:
- test
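For completeness, a minimal fluent.conf for the mounted ./config/fluentd/fluent.conf could look like the following sketch (the question does not show the original file); it accepts forwarded records on 24224 and prints them:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type stdout
</match>

Note that this example still publishes 24224 on the host, which is why fluentd-address: localhost:24224 works here.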
I am trying to run services (mongo) in swarm mode, with logs collected to Elasticsearch via fluentd. It worked(!) with:
docker-compose up
But when I deploy via stack, the services start but the logs are not collected, and I don't know how to find out the reason.
docker stack deploy -c docker-compose.yml env_staging
docker-compose.yml:
version: "3"
services:
mongo:
image: mongo:3.6.3
depends_on:
- fluentd
command: mongod
networks:
- webnet
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: mongo
fluentd:
image: zella/fluentd-es
depends_on:
- elasticsearch
ports:
- 24224:24224
- 24224:24224/udp
networks:
- webnet
elasticsearch:
image: elasticsearch
ports:
- 9200:9200
networks:
- webnet
kibana:
image: kibana
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
- webnet
networks:
webnet:
UPD: I removed fluentd-address: localhost:24224 and the problem is solved. But I don't understand what "localhost" is here. Why can't we set the host to "fluentd"? If someone explains what fluentd-address is, I will accept the answer.
fluentd-address is the address where the fluentd daemon resides (the default is localhost, so in that case you don't need to specify it).
In your case (using stack) your fluentd daemon will run on a node; you should reach that service using the service name (in your case fluentd, have you tried?).
Remember to also add fluentd-async-connect: "true" to your options.
Reference is at:
https://docs.docker.com/config/containers/logging/fluentd/#usage
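Putting the two suggestions together, the mongo service's logging section would look something like this sketch (whether the daemon can resolve the service name fluentd is exactly what the rest of this page disputes, so treat it as an experiment):

logging:
  driver: "fluentd"
  options:
    fluentd-address: fluentd:24224
    # don't block container startup while fluentd is unreachable
    fluentd-async-connect: "true"
    tag: mongo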
You don't need to specify fluentd-address at all. When you set the logging driver to fluentd, Swarm automatically discovers the nearest fluentd instance and sends all stdout of the desired container there.
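Under that interpretation, the logging section of the mongo service shrinks to the following sketch (the default address, localhost:24224, likely works because the published port 24224 is available on every node through Swarm's ingress routing mesh):

logging:
  driver: "fluentd"
  options:
    tag: mongo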
I want to send logs from one container running my_service to another running the ELK stack with the syslog driver (so I will need the logstash-input-syslog plugin installed).
I am tweaking this elk image (and tagging it as elk-custom) via the following Dockerfile-elk
(using port 514 because this seems to be the default port)
FROM sebp/elk
# install the syslog input plugin into the bundled Logstash
WORKDIR /opt/logstash/bin
RUN ./logstash-plugin install logstash-input-syslog
# document the syslog port (EXPOSE alone does not start a listener)
EXPOSE 514
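One thing worth noting (and consistent with the UPDATE further down): installing logstash-input-syslog and adding EXPOSE 514 does not by itself make Logstash listen on 514; an input block is still needed. A minimal sketch (where exactly sebp/elk expects extra pipeline config may differ):

input {
  syslog {
    port => 514
  }
}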
I am running my services via docker-compose, more or less as follows:
elk-custom:
  # image: elk-custom
  build:
    context: .
    dockerfile: Dockerfile-elk
  ports:
    - 5601:5601
    - 9200:9200
    - 5044:5044
    - 514:514
my_service:
  image: some_image_from_my_local_registry
  depends_on:
    - elk-custom
  logging:
    driver: syslog
    options:
      syslog-address: "tcp://elk-custom:514"
However:
ERROR: for b4cd17dc1142_namespace_my_service_1 Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving

ERROR: for api Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving

ERROR: Encountered errors while bringing up the project.
Any suggestions?
UPDATE: Apparently nothing is listening on port 514, because from within the container netstat -a shows nothing on this port... no idea why...
You need to use tcp://127.0.0.1:514 instead of tcp://elk-custom:514. The reason is that this address is used by the Docker daemon on the host, not by the container; that is why elk-custom is not resolvable.
So this will only work when you map the port (which you have done), the elk-custom service is started first (which you have done), and the IP is reachable from the Docker host, for which you would use tcp://127.0.0.1:514.
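Applied to the compose file above, the answer's suggestion looks like this sketch:

my_service:
  image: some_image_from_my_local_registry
  depends_on:
    - elk-custom
  logging:
    driver: syslog
    options:
      # the syslog driver dials from the Docker daemon on the host,
      # so use an address reachable from the host, not a service name
      syslog-address: "tcp://127.0.0.1:514"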