Docker: Ship log files being written inside containers to ELK stack

I am running a django application using docker, and using python logging in django settings to write api logs inside a logs folder. When I restart my container my log files are also removed (which is understandable).
I would like to ship my logs (e.g. /path/to/workdir/logs/django.log) to elasticsearch. I am confused since my searches tell me to ship this path /var/lib/docker/containers/*/*.log but I don't think this is what I want.
Any ideas on how I ship my logs inside the container to ELK Stack?

You can ship logs from Docker containers' stdout/stderr to Elasticsearch using the gelf logging driver.
Configure the services with the gelf logging driver (docker-compose.yml):
version: '3.7'

x-logging:
  &logstash
  options:
    gelf-address: "udp://localhost:12201"
  driver: gelf

services:
  nginx:
    image: 'nginx:1.17.3'
    hostname: 'nginx'
    domainname: 'example.com'
    depends_on:
      - 'logstash'
    ports:
      - '80:80'
    volumes:
      - '${PWD}/nginx/nginx.conf:/etc/nginx/nginx.conf:ro'
    logging: *logstash

  elasticsearch:
    image: 'elasticsearch:7.1.1'
    environment:
      - 'discovery.type=single-node'
    volumes:
      - 'elasticsearch:/usr/share/elasticsearch/data'
    expose:
      - '9200'
      - '9300'

  kibana:
    image: 'kibana:7.1.1'
    depends_on:
      - 'elasticsearch'
    ports:
      - '5601:5601'
    volumes:
      - '${PWD}/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml'

  logstash:
    build: 'logstash'
    depends_on:
      - 'elasticsearch'
    volumes:
      - 'logstash:/usr/share/logstash/data'
    ports:
      - '12201:12201/udp'
      - '10514:10514/udp'

volumes:
  elasticsearch:
  logstash:
Note: the above example configures the logging using extension fields.
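For comparison, the same gelf driver can also be configured inline on a single service instead of through the extension field; a minimal sketch (not part of the original compose file):
services:
  nginx:
    image: 'nginx:1.17.3'
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"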
The minimal nginx.conf used for this example:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name _;

        location / {
            return 200 'OK';
        }
    }
}
The logstash image is a custom build using the below Dockerfile:
FROM logstash:7.1.1
USER 0
COPY pipeline/gelf.cfg /usr/share/logstash/pipeline
COPY pipeline/pipelines.yml /usr/share/logstash/config
COPY settings/logstash.yml /usr/share/logstash/config
COPY patterns /usr/share/logstash/patterns
RUN rm /usr/share/logstash/pipeline/logstash.conf
RUN chown -R 1000:0 /usr/share/logstash/pipeline /usr/share/logstash/patterns /usr/share/logstash/config
USER 1000
... the relevant logstash gelf plugin config:
input {
  gelf {
    type => docker
    port => 12201
  }
}

filter { }

output {
  if [type] == "docker" {
    elasticsearch { hosts => ["elasticsearch:9200"] }
    stdout { codec => rubydebug }
  }
}
... and pipelines.yml:
- pipeline.id: "gelf"
  path.config: "/usr/share/logstash/pipeline/gelf.cfg"
... and logstash.yml to persist the data:
queue:
  type: persisted
  drain: true
The process running in the container logs to stdout/stderr, Docker pushes the logs to Logstash using the gelf logging driver, and Logstash outputs them to Elasticsearch, where you can index and explore them in Kibana. Note that the gelf address is localhost because the logging driver is configured by the Docker daemon on the host, where compose service discovery is not available to resolve the service name; the Logstash ports must therefore be mapped to the host and the logging driver configured with localhost.
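For the Django application in the question, the remaining step is to have Django log to stdout instead of a file inside the container, so the gelf driver can pick the messages up. A minimal sketch for settings.py (handler and logger names are illustrative, not taken from the original project):
# settings.py -- send application logs to stdout so the container's
# gelf logging driver can ship them to Logstash / Elasticsearch.
import sys

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "stream": sys.stdout,
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "INFO",
    },
}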

Related

ElasticSearch + LogStash + docker-compose: simple logstash.conf without volume mount

A small question regarding ElasticSearch + LogStash + docker-compose: is it possible to start LogStash with a logstash.conf configuration without a volume mount?
For testing purposes, I would like to start an ElasticSearch and a LogStash. The LogStash instance's purpose is to forward logs to the ElasticSearch cluster.
Popular logstash.conf examples found on the web look like this:
input {
  tcp {
    port => 5000
    codec => json
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch.mynetwork:9200"]
    index => "myapp"
  }
}
If I save this as a file and mount it into the container, it works like a charm, very happy.
However, it requires mounting a file from the local system into the container.
Since this configuration file is super simple, I was wondering if I could just pass the content of the config file as an environment variable in the docker-compose file.
Here is what I tried:
version: "3.9"

services:
  elasticsearch:
    networks: ["mynetwork"]
    container_name: elasticsearch.mynetwork
    hostname: elasticsearch.mynetwork
    image: elasticsearch:8.6.0
    ports:
      - 9200:9200
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1

  logstash:
    networks: [ "mynetwork" ]
    container_name: logstash.mynetwork
    hostname: logstash.mynetwork
    image: logstash:8.6.0
    ports:
      - 9600:9600
      - 5044:5044
      - 5000:5000/tcp
      - 5000:5000/udp
    environment:
      LS_SETTINGS_DIR: input { tcp { port => 5000 codec => json }}output { elasticsearch { hosts => ["http://elasticsearch.mynetwork:9200"] index => "myapp" }}
However, LogStash is not able to forward to Elastic.
May I ask what the proper way is to configure LogStash, so that it can take the config into account without mounting it as a volume, please?
Thank you
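One possible volume-free approach (an untested sketch, not from the original post): LS_SETTINGS_DIR only points Logstash at a settings directory and does not accept pipeline text, but Logstash's -e / --config.string flag accepts the pipeline definition on the command line, which can be set via command in the compose file, e.g.:
logstash:
  networks: [ "mynetwork" ]
  image: logstash:8.6.0
  ports:
    - 5000:5000/tcp
  command:
    - logstash
    - "-e"
    - >-
      input { tcp { port => 5000 codec => json } }
      output { elasticsearch { hosts => ["http://elasticsearch.mynetwork:9200"] index => "myapp" } }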

Configuration for sending Docker logs to a locally installed ELK using Gelf

I have my ELK deployed on an EC2 instance and a dockerized application running on a different instance. I am trying to use gelf to collect the different service logs and send them to logstash, but my current configuration doesn't work.
Here's my docker.yaml file and my logstash conf file. For the gelf address I used the private ip of the instance where I have logstash running - is that what I should be using in this use case? What am I missing?
version: '3'

services:
  app:
    build: .
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    links:
      - redis:redis
    depends_on:
      - redis
    logging:
      driver: gelf
      options:
        gelf-address: "udp://10.0.1.98:12201"
        tag: "dockerlogs"

  redis:
    image: "redis:alpine"
    expose:
      - "6379"
    logging:
      driver: gelf
      options:
        gelf-address: "udp://10.0.1.98:12201"
        tag: "redislogs"
This is my logstash conf:
input {
  beats {
    port => 5044
  }
  gelf {
    port:12201
    type=> "dockerLogs"
  }
}

output {
  elasticsearch {
    hosts => ["${ELK_IP}:9200"]
    index =>"logs-%{+YYYY.MM.dd}"
  }
}
Verify the version of Docker you are running and check that the syntax of the logstash config is correct (as shown below).
Docker resolves the gelf address through the host's network, so the address needs to be the external address of the server, i.e. one that is reachable from the Docker host, not an address internal to a container network.
Since you are only sending application logs and not using any Logstash filter benefits, why not write directly to Elasticsearch?
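For reference, a gelf input with the syntax corrected would look like this (port and type taken from the question; the rest of the pipeline is unchanged):
input {
  gelf {
    port => 12201
    type => "dockerLogs"
  }
}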
See also: Using docker-compose with GELF log driver

How to connect two docker containers together

I have a reactjs front-end application and a simple Python Flask back end, and I am using a docker-compose.yml to spin up both containers, like this:
version: "3.2"

services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    ports:
      - 80:80
    links:
      - "backend:backend"
    depends_on:
      - backend

  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
    ports:
      - 8083:8083
I have used links so the frontend service can talk to the backend service using axios as below:
axios.get("http://backend:8083/monitors").then(res => {
  this.setState({
    status: res.data
  });
});
I used docker-compose up --build -d to build and start the two containers, and they started without any issue and are running fine.
But now the frontend cannot talk to the backend.
I am using an AWS EC2 instance. When the page loads, I checked the browser console for errors and I get this:
VM167:1 GET http://backend:8083/monitors net::ERR_NAME_NOT_RESOLVED
Can someone please help me?
The backend service is up and running.
The service name backend is only resolvable inside the Docker network; the axios call runs in the user's browser, outside that network, which is why it fails with ERR_NAME_NOT_RESOLVED. You can use nginx as a reverse proxy in front of both services.
The compose file:
version: "3.2"

services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - backend

  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1

  proxy:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/example.conf
    ports:
      - 80:80
minimal nginx config (nginx.conf):
server {
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://frontend:80;
    }
}

server {
    server_name api.example.com;
    server_tokens off;

    location / {
        proxy_pass http://backend:8083;
    }
}
The request hits the nginx container and is routed according to the domain to the right container. The frontend's axios calls should then target http://api.example.com/monitors instead of http://backend:8083/monitors, so the browser resolves a real hostname and nginx forwards the request to the backend.
To use example.com and api.example.com you need to edit your hosts file:
Linux: /etc/hosts
Windows: c:\windows\system32\drivers\etc\hosts
Mac: /private/etc/hosts
127.0.0.1 example.com api.example.com
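Once the stack is up and the hosts entries are in place, a quick check from the same machine (assuming the proxy is published on port 80 as above) could look like:
curl http://example.com/               # routed to the frontend container
curl http://api.example.com/monitors   # routed to the flask backend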

docker-compose warning when scaling containers

I have this docker-compose file:
version: "3.8"

services:
  web:
    image: apachephp:v1
    ports:
      - "80-85:80"
    volumes:
      - volume:/var/www/html
    network_mode: bridge

  ddbb:
    image: mariadb:v1
    ports:
      - "3306:3306"
    volumes:
      - volume2:/var/lib/mysql
    network_mode: bridge
    environment:
      - MYSQL_ROOT_PASSWORD=*********
      - MYSQL_DATABASE=*********
      - MYSQL_USER=*********
      - MYSQL_PASSWORD=*********

volumes:
  volume:
    name: volume-data
  volume2:
    name: volume2-data
When I run this:
docker-compose up -d --scale web=2
it works, but I receive this warning:
WARNING: The "web" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Can somebody help me avoid this warning? Thank you in advance.
Best regards.
I suppose you want to access the web service without knowing the port of a specific container, and to have requests distributed across the containers. If that's right, you need a load-balancing mechanism in the configuration. In the following example, I'll use NGINX as the load balancer.
version: "3.8"

services:
  web:
    image: apachephp:v1
    expose: # change 'ports' to 'expose'
      - "7483" # <- this is web running port (Change to your web port)
    ....

  ddbb:
    ....

  ## New Start ##
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web # your web service name
    ports:
      - "4000:4000"
  ## New End ##

volumes:
  ...
So you don't need to map ports 80-85:80 from the web service to host machine ports if you want to scale the service; I removed that port mapping from your Docker Compose file and only exposed the port, as shown above.
In the nginx service I added a port mapping to the host for that server. In this example I configured NGINX to listen on port 4000, which is why the 4000:4000 mapping is added for that service.
nginx.conf file contents:
user nginx;

events {
    worker_connections 1000;
}

http {
    server {
        listen 4000;

        location / {
            proxy_pass http://web:7483;
        }
    }
}
You will find more details in Use Docker Compose to Run Multiple Instances of a Service in Development.
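As a quick usage check (assuming the nginx service publishes 4000:4000 as described above), scale the web service and hit the load balancer on the mapped port:
docker-compose up -d --build --scale web=2
curl -s http://localhost:4000/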

Errno::EADDRNOTAVAIL - LogStashLogger not connecting to Logstash - Ruby on Rails

I've been fighting to get logging working for a Dockerized Rails 5.1.5 application with the ELK (Elasticsearch Logstash Kibana) stack (which is Dockerized as well), and I've had problems setting it up. I haven't been able to send a single log to logstash from Rails.
The current problem is:
ERROR -- : [LogStashLogger::Device::TCP] Errno::EADDRNOTAVAIL - Cannot
assign requested address - connect(2) for "localhost" port 5228
At first, I thought there was a problem with my current ELK configuration. After days spent, I finally figured out that ELK was working correctly by sending a dummy .log file through the nc command using Ubuntu for Win10, and it worked πŸŽ‰πŸŽ‰πŸŽ‰πŸŽ‰ !!!
Now that I know the problem is with Rails, I've been trying a different set of combinations, but I still haven't gotten it to work:
I checked that the configuration from Logstash is correctly accepting TCP, and is the right port.
Changed to a different port, both in Logstash and Rails.
I'm currently using Docker Compose v3. I initialized ELK first, and then Rails (but the problem still crept in).
Switched between UDP and TCP.
Did not specify a codec in the logstash.conf.
Specified the json_lines codec in the logstash.conf.
I've tried specifying a link between logstash and rails in docker-compose.yml (even though it's deprecated for docker-compose v3).
I've tried bringing them together through a network in docker-compose.
I've tried specifying a depends_on logstash in the rails app in docker-compose.
I'm running out of ideas:
Here's the logging config (Right now it's in development.rb):
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5228)
The logstash conf:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
And last, but not least, here's the Docker-compose:
version: '3'

services:
  db:
    # This specifies a MySQL image that will be created with root privileges
    # Note that MYSQL_ROOT_PASSWORD is Mandatory!
    # We specify the 5.7.21'th version of MySQL from the docker repository.
    # We are using mariadb because there's an apparent problem with permissions.
    # See: https://github.com/docker-library/mysql/issues/69
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_.sql:/docker-entrypoint-initdb.d/rails_.sql

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk

  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:6.2.3
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5050:5050"
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  app:
    depends_on:
      - db
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    links:
      - db
      - logstash
    volumes:
      - "./:/var/www/rails"
    ports:
      - "3001:3001"
    expose:
      - "3001"
    networks:
      - elk

volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local

networks:
  elk:
    driver: bridge
Any help is greatly appreciated 😊
By default Compose sets up a single network for your app. Each
container for a service joins the default network and is both
reachable by other containers on that network, and discoverable by
them at a hostname identical to the container name.
According to the docker-compose.yaml file, the logstash container is reachable at logstash:5228 from other containers on the elk network, so the logging config in the app container should be changed to:
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'logstash', port: 5228, formatter: :json_lines, sync: true)
Check that logstash.conf is like this:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
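To rule out connectivity problems, the nc trick from the question can also be run from inside the app container against the logstash service name (assuming nc is installed in the app image; the JSON payload is just an illustrative test message):
docker-compose exec app sh -c 'echo "{\"message\": \"test from app\"}" | nc -w 1 logstash 5228'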
