I want to output from my Logstash, which is started via Docker Compose, into Amazon Elasticsearch, but the amazon_es plugin is never installed. I've also tried using the elasticsearch output, but it seems I'd have to open anonymous access for that.
docker-compose.yml
services:
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx2g -Xms2g"
    networks:
      - elk
networks:
  elk:
    driver: bridge
Pipeline (Output)
output {
  amazon_es {
    hosts => "https://es-url-es-url.com"
    document_id => "%{[@metadata][fingerprint]}"
    index => "docker-movies"
    region => "us-east-1"
  }
}
logstash/Dockerfile
FROM docker.elastic.co/logstash/logstash-oss:6.2.4
# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
RUN logstash-plugin install logstash-output-amazon_es
I'm trying to build my first ELK stack in Docker for a school project.
The goal is to have an Elasticsearch container along with its volume.
A Logstash container linked to Elasticsearch, along with its volume as well.
A Filebeat container to send data to Logstash.
And finally a Kibana container to visualise the logs.
The problem I'm having is that even though everything seems to be configured and working, Kibana isn't receiving anything from Elasticsearch. When I try to add a new index pattern I'm faced with this message: "No data streams, indices, or index aliases match your index pattern."
And not a thing is visible in Index Management.
Here's the docker-compose.yml:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.5.3
    container_name: elasticsearch
    restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      ES_JAVA_OPTS: "-Xmx750m -Xms750m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.5.3
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    environment:
      LS_JAVA_OPTS: "-Xmx750m -Xms750m"
    networks:
      - elk
  Kibana:
    image: kibana:8.5.3
    container_name: kibana
    restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
  Filebeat:
    container_name: filebeat
    image: "docker.elastic.co/beats/filebeat:8.5.3"
    restart: always
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - elk
volumes:
  elastic_data:
networks:
  elk:
    driver: bridge
logstash/logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch"
    codec => "json"
    index => "logstash-%{indexDay}"
  }
  stdout { codec => rubydebug }
}
And finally the filebeat/filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"
output.logstash:
  hosts: ["logstash:5044"]
Any idea what I'm doing wrong?
I have a Tomcat Docker container and a Filebeat Docker container, and both are up and running.
My objective: I need to collect Tomcat logs from the running Tomcat container into the Filebeat container.
Issue: I have no idea how to get the collected log files out of the Tomcat container.
What I have tried so far: I tried creating a Docker volume, adding the Tomcat logs to that volume, and accessing that volume from the Filebeat container, but ended up with no success.
Structure: I wrote a docker-compose.yml file under the project Logstash (root directory of the project) with the following project structure. (Here I want to bring up Elasticsearch, Logstash, Filebeat and Kibana containers from one configuration file.) There is also docker-containers (root directory of the other project) with the following structure (here I want to bring up Tomcat, Nginx and Postgres containers from one configuration file).
Logstash: contains 4 main subdirectories (Filebeat, Logstash, Elasticsearch and Kibana), an ENV file and a docker-compose.yml file. Each subdirectory contains a Dockerfile to pull the image and build the container.
docker-containers: contains 3 main subdirectories (Tomcat, Nginx and Postgres), an ENV file and a docker-compose.yml file. Each subdirectory contains a separate Dockerfile to pull the Docker image and build the container.
Note: I think this basic structure may be helpful for understanding my requirements.
docker-compose.yml files
Logstash.docker-compose.yml file
version: '2'
services:
  elasticsearch:
    container_name: OTP-Elasticsearch
    build:
      context: ./elasticsearch
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  filebeat:
    container_name: OTP-Filebeat
    command:
      - "-e"
      - "--strict.perms=false"
    user: root
    build:
      context: ./filebeat
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash
  logstash:
    container_name: OTP-Logstash
    build:
      context: ./logstash
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    expose:
      - 5044/tcp
    ports:
      - "9600:9600"
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    container_name: OTP-Kibana
    build:
      context: ./kibana
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - logstash
      - filebeat
networks:
  elk:
    driver: bridge
docker-containers.docker-compose.yml file
version: '2'
services:
  # Nginx
  nginx:
    container_name: OTP-Nginx
    restart: always
    build:
      context: ./nginx
      args:
        - comapanycode=${COMPANY_CODE}
        - dbtype=${DB_TYPE}
        - dbip=${DB_IP}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
        - webdirectory=${WEB_DIRECTORY}
    ports:
      - "80:80"
    links:
      - db:db
    volumes:
      - ./log/nginx:/var/log/nginx
    depends_on:
      - db
  # Postgres
  db:
    container_name: OTP-Postgres
    restart: always
    ports:
      - "5430:5430"
    build:
      context: ./postgres
      args:
        - food_db_version=${FOOD_DB_VERSION}
        - dbtype=${DB_TYPE}
        - retail_db_version=${RETAIL_DB_VERSION}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    volumes:
      - .data/db:/octopus_docker/postgresql/data
  # Tomcat
  tomcat:
    container_name: OTP-Tomcat
    restart: always
    build:
      context: ./tomcat
      args:
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    links:
      - db:db
    volumes:
      - ./tomcat/${WARNAME}.war:/usr/local/tomcat/webapps/${WARNAME}.war
    ports:
      - "8080:8080"
    depends_on:
      - db
      - nginx
Additional files:
filebeat.yml (configuration file inside Logstash/Filebeat/config/)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/local/tomcat/logs/*.log
output.logstash:
  hosts: ["logstash:5044"]
Additional Info:
The system I am using is Ubuntu 18.04.
My goal is to collect Tomcat logs from the running Tomcat container, forward them to Logstash for filtering, and then forward them to Elasticsearch and finally to Kibana for visualization.
For now I can collect local machine (host) logs (/var/log/) and visualize them in Kibana.
My problem:
I need to know the proper way to get the collected Tomcat logs out of the Tomcat container and forward them to the Logstash container via the Filebeat container.
Any discussion, answer or help towards a way to do this is highly appreciated.
Thanks.
So loooong... Create a shared volume among all the containers and set up your Tomcat to save its log files into that folder. If you can put all services into one docker-compose.yml, just set up the volume internally:
docker-compose.yml
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
If you need several docker-compose.yml files, create the volume globally in advance with docker volume create logs and map it into both compose files:
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
    external: true
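With the volume shared, the Filebeat side only needs to mount it and read from it. A minimal filebeat.yml sketch, assuming the Filebeat container mounts the shared volume at /var/log/shared (e.g. - logs:/var/log/shared:ro in its service) and Tomcat is pointed at that directory; the paths here are illustrative:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # any log file Tomcat drops into the shared folder
      - /var/log/shared/*.log
output.logstash:
  hosts: ["logstash:5044"]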
I'm trying to run services (mongo) in swarm mode with logs collected into Elasticsearch via fluentd. It worked(!) with:
docker-compose up
But when I deploy via stack, the services start, but logs are not collected, and I don't know how to find out the reason.
docker stack deploy -c docker-compose.yml env_staging
docker-compose.yml:
version: "3"
services:
mongo:
image: mongo:3.6.3
depends_on:
- fluentd
command: mongod
networks:
- webnet
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: mongo
fluentd:
image: zella/fluentd-es
depends_on:
- elasticsearch
ports:
- 24224:24224
- 24224:24224/udp
networks:
- webnet
elasticsearch:
image: elasticsearch
ports:
- 9200:9200
networks:
- webnet
kibana:
image: kibana
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
- webnet
networks:
webnet:
Update:
I removed fluentd-address: localhost:24224 and the problem was solved. But I don't understand what "localhost" means here. Why can't we set the host to "fluentd"? If someone explains what fluentd-address is, I will accept their answer.
fluentd-address is the address where the fluentd daemon resides (the default is localhost, in which case you don't need to specify it).
In your case (using stack), your fluentd daemon will run on a node; you should reach that service using the name of the service (in your case fluentd; have you tried that?).
Remember to add fluentd-async-connect: "true" to your options.
Reference is at:
https://docs.docker.com/config/containers/logging/fluentd/#usage
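Putting the suggestions above together, the logging section of the mongo service would look roughly like this (a sketch; fluentd is the service name from the compose file above):
logging:
  driver: "fluentd"
  options:
    # address of the fluentd service rather than localhost
    fluentd-address: fluentd:24224
    # don't block container startup while fluentd is still coming up
    fluentd-async-connect: "true"
    tag: mongo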
You don't need to specify fluentd-address. When you set the logging driver to fluentd, Swarm automatically discovers the nearest fluentd instance and sends all stdout of the desired container there.
I've been fighting to get logging to work on a Dockerized Rails 5.1.5 application with the ELK (Elasticsearch Logstash Kibana) stack (which is Dockerized as well). I've been having problems setting it up. I haven't been able to send a single log to Logstash from Rails.
The current problem is:
ERROR -- : [LogStashLogger::Device::TCP] Errno::EADDRNOTAVAIL - Cannot
assign requested address - connect(2) for "localhost" port 5228
At first, I thought there was a problem with my ELK configuration. After days spent on it, I finally figured out that ELK was working correctly, by sending a dummy .log file through the nc command using Ubuntu for Win10, and it worked 🎉🎉🎉🎉 !!!
Now that I know the problem is with Rails, I've been trying different combinations, but I still haven't gotten it to work:
I checked that the Logstash configuration correctly accepts TCP, and on the right port.
Changed to a different port, both in Logstash and Rails.
I'm currently using Docker Compose v3. I initialized ELK first, and then Rails (but the problem still crept in).
Switched between UDP and TCP.
Did not specify a codec in the logstash.conf.
Specified the json_lines codec in the logstash.conf.
I've tried specifying a link between logstash and rails in docker-compose.yml (even though links are deprecated in Compose v3).
I've tried bringing them together through a network in docker-compose.
I've tried specifying a depends_on logstash in the rails app in docker-compose.
I'm running out of ideas:
Here's the logging config (Right now it's in development.rb):
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5228)
The logstash conf:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
And last, but not least, here's the Docker-compose:
version: '3'
services:
  db:
    # This specifies a MySQL image that will be created with root privileges
    # Note that MYSQL_ROOT_PASSWORD is Mandatory!
    # We specify the 5.7.21'th version of MySQL from the docker repository.
    # We are using mariadb because there's an apparent problem with permissions.
    # See: https://github.com/docker-library/mysql/issues/69
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_.sql:/docker-entrypoint-initdb.d/rails_.sql
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk
  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:6.2.3
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5050:5050"
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  app:
    depends_on:
      - db
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    links:
      - db
      - logstash
    volumes:
      - "./:/var/www/rails"
    ports:
      - "3001:3001"
    expose:
      - "3001"
    networks:
      - elk
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local
networks:
  elk:
    driver: bridge
Any help is greatly appreciated 😊
By default Compose sets up a single network for your app. Each
container for a service joins the default network and is both
reachable by other containers on that network, and discoverable by
them at a hostname identical to the container name.
According to the docker-compose.yaml file, the logstash container is reachable at logstash:5228 from other containers, so the logging config in the app container should be changed to:
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'logstash', port: 5228, formatter: :json_lines, sync: true)
Check that logstash.conf is like this:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
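On the same reasoning, if the Rails logger is changed to read the host from ENV['LOGSTASH_HOST'], the app service in the compose file should point that variable at the service name rather than localhost. A sketch of just the relevant environment entry (the other variables stay as they are):
app:
  environment:
    RAILS_ENV: development
    # resolves to the logstash container on the elk network
    LOGSTASH_HOST: logstash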
I have an ELK docker-compose file that I think cannot run on my server because the docker-compose version is too old...
$ docker-compose -version
docker-compose version 1.6.2, build 4d72027
And here is my docker-compose file...
version: '2'
services:
  elasticsearch:
    image: elasticsearch:5
    command: elasticsearch
    environment:
      # This helps ES out with memory usage
      - ES_JAVA_OPTS=-Xmx1g -Xms1g
    volumes:
      # Persist elasticsearch data to a volume
      - elasticsearch:/usr/share/elasticsearch/data
      # Extra ES configuration options
      - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    image: logstash:5
    command: logstash -w 4 -f /etc/logstash/conf.d/logstash.conf
    environment:
      # This helps Logstash out if it gets too busy
      - LS_HEAP_SIZE=2048m
    volumes:
      # volume mount the logstash config
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf
      - /Users/rriviere/workspace/parks-dpe/activities-api-v1/app-logs/:/tmp/app-logs
    ports:
      # GELF port for Docker logs
      - "12201:12201/udp"
      # UDP port for syslogs
      - "5000:5000/udp"
      # Default TCP port
      - "5001:5001"
    links:
      - elasticsearch
  kibana:
    image: kibana:5
    environment:
      # Point Kibana to the elasticsearch container
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  kopf:
    image: rancher/kopf:v0.4.0
    ports:
      - "8080:80"
    environment:
      KOPF_ES_SERVERS: "elasticsearch:9200"
    links:
      - elasticsearch
volumes:
  elasticsearch:
Whilst I'm not looking for an exact answer here, can someone help me with what is required to create a docker compose v1 file that would do the same things?
Thanks.
You'd better check the docs for more details, but just looking at your file, I think if you remove the following lines (and de-indent everything under services by one level) then you are good to go:
version: '2'
services:
Version 2 has some additional features which I don't see used in your case, so you are pretty much compatible with v1.
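For illustration, a trimmed sketch of the first two services in v1 form; services sit at the top level, with no version or services keys. Note that v1 has no top-level volumes section either, so the named elasticsearch volume may need to be created beforehand with docker volume create elasticsearch:
elasticsearch:
  image: elasticsearch:5
  command: elasticsearch
  environment:
    - ES_JAVA_OPTS=-Xmx1g -Xms1g
  volumes:
    # pre-created named volume, since v1 cannot declare it here
    - elasticsearch:/usr/share/elasticsearch/data
    - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:5
  command: logstash -w 4 -f /etc/logstash/conf.d/logstash.conf
  links:
    - elasticsearch
# kibana and kopf follow the same pattern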