A small question regarding Elasticsearch + Logstash + docker-compose: is it possible to start Logstash with a logstash.conf configuration without a volume mount?
For testing purposes, I would like to start an Elasticsearch and a Logstash. The Logstash's purpose is to forward logs to the Elasticsearch cluster.
Popular logstash.conf files found on the web look like this:
input {
  tcp {
    port => 5000
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch.mynetwork:9200"]
    index => "myapp"
  }
}
If I save this as a file and mount it into the container, it works like a charm; very happy.
However, it requires mounting a file from the local system onto the container.
Since this configuration file is super simple, I was wondering if I could just pass its content as an environment variable in the docker-compose file.
Here is what I tried:
version: "3.9"
services:
elasticsearch:
networks: ["mynetwork"]
container_name: elasticsearch.mynetwork
hostname: elasticsearch.mynetwork
image: elasticsearch:8.6.0
ports:
- 9200:9200
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
logstash:
networks: [ "mynetwork" ]
container_name: logstash.mynetwork
hostname: logstash.mynetwork
image: logstash:8.6.0
ports:
- 9600:9600
- 5044:5044
- 5000:5000/tcp
- 5000:5000/udp
environment:
LS_SETTINGS_DIR: input { tcp { port => 5000 codec => json }}output { elasticsearch { hosts => ["http://elasticsearch.mynetwork:9200"] index => "myapp" }}
However, Logstash is not able to forward anything to Elasticsearch.
May I ask what the proper way is to configure Logstash so that it takes the conf file into account without mounting it as a volume?
Thank you
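For reference, the pipeline definition can also be passed to Logstash on the command line rather than through an environment variable; LS_SETTINGS_DIR is expected to name the settings directory, not to hold pipeline text. A minimal, untested sketch, assuming the stock logstash:8.6.0 image forwards its command arguments to the logstash binary (which supports -e / --config.string):
logstash:
  networks: [ "mynetwork" ]
  image: logstash:8.6.0
  ports:
    - 5000:5000
  command: >
    logstash -e 'input { tcp { port => 5000 codec => json } }
    output { elasticsearch { hosts => ["http://elasticsearch.mynetwork:9200"] index => "myapp" } }'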
Related
I'm having a strange problem I can't work out, because the problem other people describe when searching for this error is different: they seem to have experienced it when trying to connect Filebeat to Logstash.
However, I am trying to write logs directly to Elasticsearch, yet I am getting Logstash-related errors even though I am not even spinning up a Logstash container in Docker Compose.
Main Docker Compose File:
version: '2.2'
services:
  filebeat:
    container_name: filebeat
    build:
      context: .
      dockerfile: filebeat.Dockerfile
    volumes:
      - ./logs:/var/log
    networks:
      - esnet
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - cluster.name=docker-
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  elastichq:
    container_name: elastichq
    image: elastichq/elasticsearch-hq
    ports:
      - 8080:5000
    environment:
      - HQ_DEFAULT_URL=http://elasticsearch:9200
      - HQ_ENABLE_SSL=False
      - HQ_DEBUG=FALSE
    networks:
      - esnet
networks:
  esnet:
Dockerfile for Filebeat
FROM docker.elastic.co/beats/filebeat:7.5.2
COPY filebeat/filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN chmod 644 /usr/share/filebeat/filebeat.yml
USER filebeat
I am trying to read JSON logs that are already in Elasticsearch format, so after reading the docs I decided to try writing directly to Elasticsearch, which seems to be valid depending on the application.
My Sample.json file:
{"#timestamp":"2020-02-10T09:35:20.7793960+00:00","level":"Information","messageTemplate":"The value of i is {LoopCountValue}","message":"The value of i is 0","fields":{"LoopCountValue":0,"SourceContext":"WebAppLogger.Startup","Environment":"Development","ApplicationName":"ELK Logging Demo"}}
My Filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.json
    json.keys_under_root: true
    json.add_error_key: true
    json.message_key: log

#----------------------------- Elasticsearch output --------------------------------
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"
As stated in the title of this post, I get this message in the console:
filebeat | 2020-02-10T09:38:24.438Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash:5044)): lookup logstash on 127.0.0.11:53: no such host
Then when I eventually try to visualize the data in ElasticHQ, inevitably, nothing is there.
So far, I've tried using commands like docker prune, just in case there's something funny going on with Docker.
Is there something I'm missing?
You have misconfigured your filebeat.yml file. Look at this error:
Failed to connect to backoff(async(tcp://logstash:5044))
Filebeat tries to connect to Logstash because that is its default configuration. In fact, on one hand you show a filebeat.yml file, but on the other hand you haven't mounted it at /usr/share/filebeat/filebeat.yml - look at your volumes settings:
filebeat:
  container_name: filebeat
  build:
    context: .
    dockerfile: filebeat.Dockerfile
  volumes:
    - ./logs:/var/log
  networks:
    - esnet
You should mount it. If you instead copy it into the container with a Dockerfile - why reinvent the wheel and add complexity? - you should use the root user:
USER root
and add the root user to your service in docker-compose.yml:
user: root
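Putting it together, the filebeat service can mount the config directly instead of baking it into an image; a sketch based on the compose file above (the ./filebeat/filebeat.yml host path is an assumption, matching the path the Dockerfile copies from):
filebeat:
  container_name: filebeat
  image: docker.elastic.co/beats/filebeat:7.5.2
  user: root
  volumes:
    # mount the config read-only where Filebeat expects it
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - ./logs:/var/log
  networks:
    - esnet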
I've set up Elasticsearch and Kibana with docker-compose. Elasticsearch is deployed on localhost:9200, while Kibana is deployed on localhost:5601.
When trying to deploy Metricbeat with docker run, I got the following errors:
$ docker run docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://localhost:9200: Get http://localhost:9200: dial tcp [::1]:9200: connect: cannot assign requested address]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 192.168.65.1:53: no such host]
My docker-compose.yml:
# ./docker-compose.yml
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    environment:
      # - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elkdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    restart: always
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    volumes:
      - kibana:/usr/share/kibana/config
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    restart: always
volumes:
  elkdata:
  kibana:
First, edit your docker-compose file by adding a name for the default docker network:
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
environment:
# - cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- elkdata:/usr/share/elasticsearch/data
ports:
- "9200:9200"
networks:
- my-network
restart: always
kibana:
image: docker.elastic.co/kibana/kibana:6.3.2
volumes:
- kibana:/usr/share/kibana/config
ports:
- "5601:5601"
networks:
- my-network
depends_on:
- elasticsearch
restart: always
volumes:
elkdata:
kibana:
networks:
my-network:
name: awesome-name
Execute docker-compose up, and then start Metricbeat with the command below (note that --network, like any docker run option, must come before the image name):
$ docker run --network=awesome-name docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
Explanation:
When you try to deploy Metricbeat, you provide the settings below:
setup.kibana.host=kibana:5601
output.elasticsearch.hosts=["localhost:9200"]
I will start with the second one. With the docker run command, when you start Metricbeat, you are telling the container that it can access Elasticsearch on localhost:9200. So when the container starts, it will access localhost on port 9200, expecting to find Elasticsearch running there. But since the container is a process isolated from the host with its own network layer, localhost resolves to the container itself, not to your docker host machine as you were expecting.
Regarding the Kibana host setting, you should first understand how docker-compose works. By default, when you execute docker-compose up, a docker network is created and all services defined in the yml file are added to this network. Inside this network, and only inside it, services are reachable through their service names. In your case, as defined in the yml file, those names are elasticsearch and kibana.
So in order for the Metricbeat container to communicate with the elasticsearch and kibana containers, it must be attached to the same docker network. This can be achieved by setting the --network flag on the docker run command.
Another approach would be to share the docker host's network with your containers by using network mode host, but I would not recommend that.
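For completeness, the host-networking variant advised against above would look something like this (a sketch; with --network=host the container shares the host's network stack, so localhost works, at the cost of isolation):
$ docker run --network=host docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=localhost:5601 -E 'output.elasticsearch.hosts=["localhost:9200"]'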
References:
Docker compose
docker run
I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the host name elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers, so the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached via their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
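For example, if the application reads its Elasticsearch endpoint from configuration, pointing it at the service name is all that is needed; a sketch, where the ELASTICSEARCH_URL variable name is hypothetical:
serverapplication:
  environment:
    # resolved by Docker's embedded DNS to the elasticsearch container
    - ELASTICSEARCH_URL=http://elasticsearch:9200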
I want to output from my Logstash into Amazon Elasticsearch; my Logstash is started via docker-compose, but the amazon_es plugin is never installed. I've also tried using the elasticsearch output, but it seems I'd have to open up anonymous access for that.
docker-compose.yml
services:
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx2g -Xms2g"
    networks:
      - elk
networks:
  elk:
    driver: bridge
Pipeline (Output)
output {
  amazon_es {
    hosts => "https://es-url-es-url.com"
    document_id => "%{[@metadata][fingerprint]}"
    index => "docker-movies"
    region => "us-east-1"
  }
}
logstash/Dockerfile
FROM docker.elastic.co/logstash/logstash-oss:6.2.4
# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
RUN logstash-plugin install logstash-output-amazon_es
[Screenshot: Logstash output]
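One thing worth checking here: since the plugin is installed in a Dockerfile RUN step, the image must actually be rebuilt for the install to take effect. A quick verification sketch (the container name placeholder is whatever Compose assigned):
$ docker-compose build --no-cache logstash
$ docker exec -it <logstash_container> bin/logstash-plugin list | grep amazon_es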
I've been fighting to get logging working on a Dockerized Rails 5.1.5 application with the ELK (Elasticsearch, Logstash, Kibana) stack, which is Dockerized as well. I've been having problems setting it up, and I haven't been able to send a single log to Logstash from Rails.
The current problem is:
ERROR -- : [LogStashLogger::Device::TCP] Errno::EADDRNOTAVAIL - Cannot
assign requested address - connect(2) for "localhost" port 5228
At first, I thought there was a problem with my ELK configuration. After days spent on it, I finally figured out that ELK was working correctly, by sending a dummy .log file through the nc command using Ubuntu for Win10, and it worked 🎉🎉🎉🎉 !!!
Now that I know the problem is with Rails, I've been trying a different set of combinations, but I still haven't gotten it to work:
I checked that the configuration from Logstash is correctly accepting TCP, and is the right port.
Changed to a different port, both in Logstash and Rails.
I'm currently using Docker-Compose V3. I initialized ELK first, and then Rails (but the problem still creeped in)
Switched between UDP and TCP.
Did not specify a codec in the logstash.conf.
Specified the json_lines codec in the logstash.conf
I've tried specifying a link between logstash and rails in docker-compose.yml (Even though it's deprecated for docker-compose v3)
I've tried bringing them together through a network in docker-compose.
I've tried specifying a depends_on logstash in the rails app in docker-compose.
I'm running out of ideas.
Here's the logging config (Right now it's in development.rb):
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5228)
The logstash conf:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
And last, but not least, here's the Docker-compose:
version: '3'
services:
  db:
    # This specifies a MySQL image that will be created with root privileges
    # Note that MYSQL_ROOT_PASSWORD is Mandatory!
    # We specify the 5.7.21'th version of MySQL from the docker repository.
    # We are using mariadb because there's an apparent problem with permissions.
    # See: https://github.com/docker-library/mysql/issues/69
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_.sql:/docker-entrypoint-initdb.d/rails_.sql
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk
  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:6.2.3
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5050:5050"
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  app:
    depends_on:
      - db
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    links:
      - db
      - logstash
    volumes:
      - "./:/var/www/rails"
    ports:
      - "3001:3001"
    expose:
      - "3001"
    networks:
      - elk
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local
networks:
  elk:
    driver: bridge
Any help is greatly appreciated 😊
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
According to the docker-compose.yml file, the logstash container is reachable at logstash:5228 from the other containers, so the logging config on the app container should be changed to:
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'logstash', port: 5228, formatter: :json_lines, sync: true)
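Since docker-compose.yml already passes a LOGSTASH_HOST variable to the app, a slightly more flexible sketch reads the host from the environment (this assumes LOGSTASH_HOST is changed from localhost to logstash in the compose file):
# development.rb - resolve the Logstash host at runtime,
# falling back to the compose service name
config.logger = LogStashLogger.new(
  type: :tcp,
  host: ENV.fetch('LOGSTASH_HOST', 'logstash'),
  port: 5228
)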
Check that logstash.conf is like this:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
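To verify the pipeline end to end, a test event can be sent to the TCP input from inside the app container, much like the nc test that already worked from the host (a sketch, assuming nc is available in the app image):
$ echo '{"message": "hello from the rails container"}' | nc logstash 5228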