I want to bring up a container (very basic) that reads data from a file and emits them to stdout to be picked up by fluentd logdriver (log configuration of the container).
I started with the below service in docker-compose
image: httpd
ports:
  - "8010:80"
depends_on:
  - fluentd
logging:
  driver: "fluentd"
  options:
    fluentd-address: 127.0.0.1:24224
    fluentd-async: 'true'
When I do a curl http://localhost:8010/, I can see the logs routed to the fluentd container. Now I want to route data from a file to stdout, and from there to fluentd.
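Since the httpd image only emits its own access/error output, one way to stream an arbitrary file to stdout is a small sidecar-style service whose command is simply tail. This is a minimal sketch; the busybox image, the ./data mount, and the /data/input.log path are assumptions, not part of the original setup:

```yaml
filereader:
  image: busybox
  # Follow the file and emit every line to stdout; -F keeps following
  # across rotations and waits for the file to appear.
  command: tail -F /data/input.log
  volumes:
    - ./data:/data
  depends_on:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: 127.0.0.1:24224
      fluentd-async: 'true'
```

Everything the container writes to stdout/stderr is then picked up by the fluentd log driver, just like the httpd logs.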
I have my ELK stack deployed on an EC2 instance and a dockerized application running on a different instance. I am trying to use gelf to collect the different service logs and send them to Logstash, but my current configuration doesn't work.
Here are my docker.yaml file and my Logstash conf file. For the gelf address I used the private IP of the instance where Logstash is running; is that what I should be using in this case? What am I missing?
version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    links:
      - redis:redis
    depends_on:
      - redis
    logging:
      driver: gelf
      options:
        gelf-address: "udp://10.0.1.98:12201"
        tag: "dockerlogs"
  redis:
    image: "redis:alpine"
    expose:
      - "6379"
    logging:
      driver: gelf
      options:
        gelf-address: "udp://10.0.1.98:12201"
        tag: "redislogs"
This is my logstash conf:
input {
  beats {
    port => 5044
  }
  gelf {
    port:12201
    type=> "dockerLogs"
  }
}
output {
  elasticsearch {
    hosts => ["${ELK_IP}:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
Verify your Docker version and check that the syntax is correct.
Docker resolves the gelf address through the host's network, so the address needs to be the externally reachable address of the server.
Also, why not write directly to Elasticsearch? You are only sending application logs and are not taking advantage of any Logstash filters.
see also: Using docker-compose with GELF log driver
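Regarding the syntax hint: the gelf input in the question uses `port:12201`, which is not valid Logstash option syntax (options use `=>`). A corrected sketch of that input block might look like this:

```
gelf {
  port => 12201
  type => "dockerLogs"
}
```

The gelf input listens on UDP by default, which matches the udp:// scheme used in the gelf-address options above.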
I have a docker swarm running across 4 Raspberry Pis (1 manager, 3 workers). I was a little surprised today, while diagnosing a crash on the manager node, to discover that the container processes running on that host were writing their logs to /var/log on the host machine.
I'd thought that by default (my swarm uses the default/basic config from the Docker instructions here: https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/), Docker writes container output to json-file logs as part of Docker's own logging structure on the host. Is what I'm seeing expected behaviour, or have I badly misconfigured/misunderstood something?
For example, the letsencrypt image, which runs an nginx ingress node for my swarm, is writing its logs to /var/log/letsencrypt on my host machine. I wouldn't have thought this possible without explicitly mounting the /var/log directory in my container spec.
It seems to be writing these certbot debug logs to /var/log/letsencrypt/letsencrypt.log on the host:
2020-07-19 07:11:46,615:DEBUG:certbot.main:certbot version: 0.31.0
2020-07-19 07:11:46,616:DEBUG:certbot.main:Arguments: ['-q']
2020-07-19 07:11:46,616:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#manual,PluginEntryPoint#null,PluginEntryPoint#standalone,PluginEntryPoint#webroot)
2020-07-19 07:11:46,638:DEBUG:certbot.log:Root logging level set at 30
2020-07-19 07:11:46,639:INFO:certbot.log:Saving debug log to /var/log/letsencrypt/letsencrypt.log
Here's my nginx docker-compose file:
version: '3'
services:
  nginx:
    image: linuxserver/letsencrypt
    volumes:
      - /share/data/nginx/:/config
    deploy:
      mode: replicated
      placement:
        constraints:
          - "node.role==manager"
    ports:
      - 80:80
      - 443:443
    environment:
      - PUID=1001
      - PGID=1001
      - URL=mydomain.com
      - SUBDOMAINS=www,mysite1,mysite2
      - VALIDATION=http
      - EMAIL=myemail#myprovider.com
      - TZ=Europe/London
    networks:
      - internal
      - monitoring_front-tier
networks:
  internal:
    external: true
  monitoring_front-tier:
    external: true
You can check which logging driver is configured on that container:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>
You can compare the result against the expected behaviour described in the official documentation: https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers
You may also check whether you have overridden the default json-file logging driver in /etc/docker/daemon.json. If that file does not exist, the json-file driver should be the one in use.
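If /etc/docker/daemon.json exists and you want the json-file default to be explicit, it could look like the sketch below; the rotation options are an optional addition, not something required for the default behaviour:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Note that changes to daemon.json require a daemon restart and only affect newly created containers.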
I have the following configuration in my docker-compose file:
fluentd:
  build: ./fluentd
  container_name: fluentd
  expose:
    - 24224
    - 24224/udp
  depends_on:
    - "elasticsearch"
  networks:
    - internal
public-site:
  build: ./public-site
  container_name: public-site
  depends_on:
    - fluentd
  logging:
    driver: fluentd
    options:
      tag: public-site
  networks:
    - internal
networks:
  internal:
When I start the app using docker-compose up, the webserver exits with the error message ERROR: for public-site Cannot start service public-site: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused.
On the other hand, when I publish the ports from fluentd (ports: 24224:24224), it works. The problem is that I don't want to publish those ports on the host, since that bypasses the Linux firewall (i.e. it exposes the fluentd port to everyone, see here).
This is confusing, since exposing a port should make it available to every container in the network. I am using an internal network between fluentd and the webserver, so I would expect the exposed ports of fluentd to be enough (which isn't the case).
When I connect to the webserver container, I can ping and resolve the fluentd container, so there is a connection. For some reason, however, at startup it won't accept a fluentd config with no published ports.
Communication with 127.0.0.1 is always problematic from inside a container. I found this explanation in the docs, which puts it better than I could:
To use the fluentd driver as the default logging driver, set the
log-driver and log-opt keys to appropriate values in the daemon.json
file, which is located in /etc/docker/ on Linux hosts or
C:\ProgramData\docker\config\daemon.json on Windows Server. For more
about configuring Docker using daemon.json, see daemon.json.
The following example sets the log driver to fluentd and sets the
fluentd-address option.
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "fluentd:24224"
  }
}
src: https://docs.docker.com/config/containers/logging/fluentd/
EDIT: this works until you want an application on the host to communicate with the dockerized fluentd (then it's a pain)
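If the concern is only that publishing 24224 exposes fluentd to the outside world, a common compromise (a sketch, not part of the original answer) is to bind the published port to the loopback interface, so it is reachable from the host and by the log driver, but not from other machines:

```yaml
fluentd:
  build: ./fluentd
  ports:
    # Bind only on 127.0.0.1 so the port is not reachable externally
    - "127.0.0.1:24224:24224"
```

The other services can then use fluentd-address: localhost:24224 in their logging options. This works because the log driver connects from the host, where 127.0.0.1:24224 is now bound, and it also lets host applications reach fluentd.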
I was facing this issue too, and I solved it by using a static IP address.
logging:
  driver: fluentd
  options:
    fluentd-address: 172.24.0.5:24224
I was facing the same error as you. After checking the example config on the fluentd official site, I was able to connect to fluentd through links.
Below is my configuration that works:
version: "3.5"
networks:
  test:
services:
  flog:
    container_name: flog
    image: mingrammer/flog:0.4.3
    command: -t stdout -f apache_common -d 1s -l
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
    links:
      - fluentd
    networks:
      - test
  fluentd:
    container_name: fluentd
    image: moonape1226/fluentd-with-loki-plugin:v1.13-1
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    volumes:
      - ./config/fluentd/fluent.conf:/fluentd/etc/fluent.conf
    networks:
      - test
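The compose file above mounts ./config/fluentd/fluent.conf into the container. A minimal sketch of such a file, assuming you only want to accept forwarded container logs and print them (ignoring the Loki plugin), could be:

```
# Accept logs forwarded by the Docker fluentd log driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Print everything received to fluentd's own stdout
<match **>
  @type stdout
</match>
```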
I want to send all logs from a dockerized PostgreSQL container to my host's journald service.
I.e. I want to be able to read the docker container logs on the host machine using tail -f /var/log/messages or journalctl -f.
here is my docker-compose config:
postgres:
  restart: unless-stopped
  image: postgres:9.6
  ports:
    - "5432:5432"
  logging:
    driver: syslog
    options:
      syslog-address: "udp://127.0.0.1:514"
I've been trying different solutions, but every time I get an error from docker:
postgres_1 | WARNING: no logs are available with the 'syslog' log driver
and I cannot see the logs on the host machine.
What am I doing wrong?
Thank you in advance
For now, docker-compose does not support any log driver except json-file and journald: https://github.com/docker/compose/blob/master/compose/container.py#L173
But journald still does not post anything to the host machine; without the -d option it just prints everything to the screen. I think that's built-in behavior you cannot change.
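If you switch the service to the journald driver instead of syslog (a sketch based on the compose fragment in the question), the logs become readable with journalctl on the host:

```yaml
postgres:
  restart: unless-stopped
  image: postgres:9.6
  logging:
    driver: journald
```

The journald driver tags each entry with journal fields such as CONTAINER_NAME, so on the host you can follow a single container's logs with `journalctl -f CONTAINER_NAME=<container name>`.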
I have the following setup in docker:
Application (httpd)
Fluentd
ElasticSearch
Kibana
The application's log driver configuration points to the fluentd container. The logs are saved in ES and shown in Kibana.
When the log driver is configured like this, it works:
web:
  image: httpd
  container_name: httpd
  ports:
    - "80:80"
  links:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:24224
      tag: httpd.access
And fluentd maps its exposed port 24224 onto port 24224 of the host:
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
  ports:
    - "24224:24224"
But I don't want to expose my fluentd on the host network. I want to keep it 'private' inside the Docker network (I only want to map the app and Kibana onto the host network), so like this:
fluentd:
  build: ./fluentd
  image: fluentd
  container_name: fluentd
  links:
    - "elasticsearch"
Port 24224 is still exposed (in the Dockerfile), but it's not mapped onto the host network. Now I want to change the log driver config of my app:
logging:
  driver: "fluentd"
  options:
    fluentd-address: fluentd:24224
    tag: httpd.access
So fluentd is the name of the fluentd container, and they are in the same network, but the app is not able to make a connection with it:
failed to initialize logging driver: dial tcp: lookup fluentd
Is this maybe because the logging option is evaluated before the 'links' option in the compose file?
Is there a way to make this work?
This is not possible currently. The Docker daemon, which handles the log drivers, is a process running on the host machine. It is not a service in your network and is therefore unable to resolve service names to IPs. See this GitHub issue for a more detailed explanation.
You will have to publish a port for this to work.