I am running logstash and filebeat from separate docker-compose.yml files, but filebeat cannot connect to logstash. From the host I can telnet into logstash (telnet 127.0.0.1 5044) once the logstash pipelines have started.
Filebeat, however, cannot establish a connection. I get this error:
ERROR pipeline/output.go:74 Failed to connect: dial tcp 127.0.0.1:5044: getsockopt: connection refused
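A detail worth checking first: the telnet test above runs on the host, but inside the filebeat container 127.0.0.1 is the container's own loopback interface, not the host's. Repeating the check from both places shows the difference (a sketch; it assumes bash is present in the filebeat image, since the image may not ship nc):

# on the host, succeeds once the logstash pipelines are up:
telnet 127.0.0.1 5044

# inside the filebeat container, the same address is refused;
# bash's built-in /dev/tcp redirection is used as a port probe:
docker exec -it filebeat bash -c \
  '(echo > /dev/tcp/127.0.0.1/5044) && echo open || echo refused'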
This is my docker-compose for filebeat.
version: '2'
services:
filebeat:
image: docker.elastic.co/beats/filebeat:6.2.3
container_name: filebeat
user: root
volumes:
- flask-sync:/home/flask/app/web:ro
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
volumes:
flask-sync:
external: true
This is my filebeat.yml
filebeat.prospectors:
- type: log
enabled: true
paths:
- /home/flask/app/web/tmp/log
output.logstash:
hosts: ["127.0.0.1:5044"]
This is my docker-compose for logstash
version: '2'
services:
logstash:
image: docker.elastic.co/logstash/logstash:6.2.4
container_name: logstash
ports:
- "5044:5044"
volumes:
- ./logstash/logstash.conf:/usr/share/logstash/logstash.conf:ro
- ./logstash/config/:/usr/share/logstash/config/:ro
command: bin/logstash -f logstash.conf --config.reload.automatic
This is my logstash.conf
input {
beats {
port => 5044
}
}
filter {
}
output {
stdout { codec => rubydebug }
}
I had the same problem. Check that port 5044 is published in the logstash docker-compose.yml:
build:
context: logstash/
volumes:
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
- ./logstash/pipeline:/usr/share/logstash/pipeline:ro
- /datos/sna/log:/datos/sna/log
ports:
- "5000:5000"
- "5044:5044"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
- elk
depends_on:
- elasticsearch
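Since the two services live in separate compose projects, publishing the port still leaves filebeat dialing its own loopback. One way to connect the projects is a shared external network, so filebeat can reach logstash by container name (a minimal sketch; the network name elk-net and the changed hosts entry are assumptions, not part of the original files):

# created once on the host: docker network create elk-net

# appended to the logstash docker-compose.yml:
networks:
  default:
    external:
      name: elk-net

# appended to the filebeat docker-compose.yml:
networks:
  default:
    external:
      name: elk-net

# filebeat.yml then targets the service name instead of loopback:
# output.logstash:
#   hosts: ["logstash:5044"]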
I'm trying to build my first ELK stack in Docker for a school project.
The goal is to have an Elasticsearch container along with its volume,
a Logstash container linked to Elasticsearch along with its volume as well,
a Filebeat container to send data to Logstash,
and finally a Kibana container to visualise the logs.
The problem I'm having is that even though everything seems to be configured and running, Kibana isn't receiving anything from Elasticsearch. When I try to add a new index pattern I'm faced with this message: "No data streams, indices, or index aliases match your index pattern."
And nothing is visible in Index Management.
Here's the docker-compose.yml:
version: '3.6'
services:
Elasticsearch:
image: elasticsearch:8.5.3
container_name: elasticsearch
restart: always
volumes:
- elastic_data:/usr/share/elasticsearch/data/
environment:
ES_JAVA_OPTS: "-Xmx750m -Xms750m"
discovery.type: single-node
xpack.security.enabled: false
ports:
- '9200:9200'
- '9300:9300'
networks:
- elk
Logstash:
image: logstash:8.5.3
container_name: logstash
restart: always
volumes:
- ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
command: logstash -f /etc/logstash/conf.d/logstash.conf
depends_on:
- Elasticsearch
ports:
- '9600:9600'
environment:
LS_JAVA_OPTS: "-Xmx750m -Xms750m"
networks:
- elk
Kibana:
image: kibana:8.5.3
container_name: kibana
restart: always
ports:
- '5601:5601'
environment:
- ELASTICSEARCH_URL=http://elasticsearch:9200
depends_on:
- Elasticsearch
networks:
- elk
Filebeat:
container_name: filebeat
image: "docker.elastic.co/beats/filebeat:8.5.3"
restart: always
user: root
volumes:
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- /var/lib/docker:/var/lib/docker:ro
- /var/run/docker.sock:/var/run/docker.sock
networks:
- elk
volumes:
elastic_data:
networks:
elk:
driver: bridge
logstash/logstash.conf:
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => "elasticsearch"
codec => "json"
index => "logstash-%{indexDay}"
}
stdout { codec => rubydebug }
}
And finally the filebeat/filebeat.yml:
filebeat.inputs:
- type: log
enabled: true
paths:
- '/var/lib/docker/containers/*/*.log'
processors:
- add_docker_metadata:
host: "unix:///var/run/docker.sock"
output.logstash:
hosts: ["logstash:5044"]
Any idea what I'm doing wrong?
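One way to narrow this down is to check whether any logstash-* index exists in Elasticsearch at all, since Kibana can only match index patterns against indices that were actually created (a diagnostic sketch, run on the host; port 9200 is published and security is disabled in the compose file above):

curl 'http://localhost:9200/_cat/indices?v'
# no logstash-* line here means the break is upstream of Kibana,
# i.e. somewhere in the Filebeat -> Logstash -> Elasticsearch chain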
I'm trying to figure out how to run a Redis cluster using Docker Compose.
However, I'm getting the following error:
Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:6380 -> 0.0.0.0:0: listen tcp 0.0.0.0:6380: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
version: '3.9'
services:
dynamodb-local:
container_name: dynamodb-local
image: amazon/dynamodb-local:latest
command: -jar DynamoDBLocal.jar -sharedDb -dbPath ./data
ports:
- 8000:8000
volumes:
- ./docker/dynamodb:/home/dynamodblocal/data
working_dir: /home/dynamodblocal
networks:
- webnet
redis-master: # Setting up master node
image: bitnami/redis:latest
ports:
- 6379:6379
environment:
- REDIS_REPLICATION_MODE=master
- REDIS_PASSWORD=my_master_password
volumes:
- ./docker/redis:/bitnami/redis/data # Redis master data volume
- ./docker/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf # Redis master configuration volume
networks:
- webnet
redis-replica:
image: bitnami/redis:latest
ports:
- 6380-6382:6379
depends_on:
- redis-master
environment:
- REDIS_REPLICATION_MODE=slave
- REDIS_MASTER_HOST=redis-master
- REDIS_MASTER_PORT_NUMBER=6379
- REDIS_MASTER_PASSWORD=my_master_password
- REDIS_PASSWORD=my_replica_password
deploy:
replicas: 3
networks:
- webnet
networks:
webnet:
driver: bridge
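"Only one usage of each socket address" is the Windows wording of "address already in use": something on the host already holds port 6380, often a leftover container from a previous run. A couple of checks (a sketch, assuming Docker Desktop on Windows, which the error text suggests):

:: anything already listening on the first replica port?
netstat -ano | findstr :6380

:: is an old container still publishing it?
docker ps --filter "publish=6380"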
I am trying to run a Spring application with Logstash, Elasticsearch, Kafka and Kibana.
[main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://kafka:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://kafka:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
The above error is repeated in the logstash container's logs.
docker-compose.yml
version: '3.7'
services:
zookeeper:
image: wurstmeister/zookeeper
container_name: zookeeper
ports:
- "2181:2181"
#restart: always
networks:
- tweetapp-network
kafka:
image: wurstmeister/kafka
container_name: kafka
#restart: always
ports:
- "9092:9092"
depends_on:
- zookeeper
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_CREATE_TOPICS: "tweetapp-logs:1:1, Tweets:1:1"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- tweetapp-network
mongodb:
image: mongo
container_name: mongodb
# restart: always
ports:
- "27017:27017"
# volumes:
# - mongodb-volume:/data/db
networks:
- tweetapp-network
springboot:
image: tweetapp
#restart: always
ports:
- "8080:8080"
depends_on:
- mongodb
- kafka
- elasticsearch
- logstash
- kibana
networks:
- tweetapp-network
logstash:
image: logstash:7.7.0
container_name: logstash
hostname: logstash
ports:
- "9600:9600"
volumes:
- .\logstash:/usr/share/logstash/pipeline/
links:
- elasticsearch:elasticsearch
depends_on:
- elasticsearch
networks:
- tweetapp-network
elasticsearch:
image: elasticsearch:7.7.0
container_name: elasticsearch
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- discovery.type=single-node
networks:
- tweetapp-network
kibana:
image: kibana:7.7.0
container_name: kibana
hostname: kibana
ports:
- "5601:5601"
links:
- elasticsearch:elasticsearch
depends_on:
- elasticsearch
networks:
- tweetapp-network
# Networks to be created to facilitate communication between containers
networks:
tweetapp-network:
I made sure that Elasticsearch is working: http://localhost:9200/ returns JSON output.
logstash.config
input {
kafka {
bootstrap_servers => "kafka:9092"
topics => ["tweetapp-logs"]
}
}
filter {
grok {
match => [ "message", "%{GREEDYDATA}" ]
}
}
output {
elasticsearch {
hosts => ["kafka:9200"]
index => "tweetapp"
workers => 1
}
}
I am new to docker-compose, Elasticsearch and Kafka. Any help will be appreciated.
Seems like a simple mix-up: in the elasticsearch output block, the hosts entry should point at elasticsearch, not kafka.
Try this:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["tweetapp-logs"]
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "tweetapp"
    workers => 1
  }
}
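To double-check the fix, the host name can be tested from inside the logstash container before restarting the pipeline (a sketch; assumes curl is available in the logstash:7.7.0 image):

docker exec -it logstash curl -s http://elasticsearch:9200
# a JSON cluster summary here confirms that "elasticsearch:9200"
# is reachable from logstash's network namespace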
I am trying to log my application into Grafana/Loki/Promtail using the same docker compose file, and I get the following error when connecting to Loki:
localhost:3100 -> 404 page not found
and when I try to hook it up in Grafana:
URL [http://loki:3100] -> Loki: Bad Gateway. 502. Bad Gateway
I have read that you have to give Grafana the container name for it to detect Loki, but I get the same error.
Both the promtail and loki containers show no errors in their logs.
version: "3.7"
services:
my-service-to-log:
image: example:latest
ports:
- "8080:8080"
- "8443:8443"
loki:
image: grafana/loki:2.4.1
ports:
- "3100:3100"
volumes:
- "C:/path/loki-config.yaml:/etc/loki/local-config.yaml"
command: -config.file=/etc/loki/local-config.yaml
promtail:
image: grafana/promtail:2.4.1
volumes:
- "C:/path/promtail-config.yaml:/etc/promtail/config.yml"
- /var/log:/var/log
command: -config.file=/etc/promtail/config.yml
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
My loki-config.yaml
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
common:
path_prefix: /tmp/loki
storage:
filesystem:
chunks_directory: /tmp/loki/chunks
rules_directory: /tmp/loki/rules
replication_factor: 1
ring:
instance_addr: 127.0.0.1
kvstore:
store: inmemory
schema_config:
configs:
- from: 2020-10-24
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
ruler:
alertmanager_url: http://localhost:9093
And my promtail-config.yaml
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: varlogs
__path__: /opt/app/logs/*.log
/ # nc -vz localhost 3100
localhost (127.0.0.1:3100) open
I have tried nc from the grafana container to the loki container and it seems to see it. Any ideas?
Loki needs to listen on all interfaces, not just localhost, when you want to access it from another container:
common:
path_prefix: /tmp/loki
storage:
filesystem:
chunks_directory: /tmp/loki/chunks
rules_directory: /tmp/loki/rules
replication_factor: 1
ring:
instance_addr: 0.0.0.0
kvstore:
store: inmemory
When adding Loki as a data source in Grafana, use an IP address your computer has instead of http://localhost:3100, for example http://LAN_IP_ADDR:3100.
You must place all containers on the same virtual network. Without this, they cannot see each other.
version: "3.7"
networks:
loki:
services:
my-service-to-log:
image: example:latest
ports:
- "8080:8080"
- "8443:8443"
networks:
- loki
loki:
image: grafana/loki:2.4.1
ports:
- "3100:3100"
volumes:
- "C:/path/loki-config.yaml:/etc/loki/local-config.yaml"
command: -config.file=/etc/loki/local-config.yaml
networks:
- loki
promtail:
image: grafana/promtail:2.4.1
volumes:
- "C:/path/promtail-config.yaml:/etc/promtail/config.yml"
- /var/log:/var/log
command: -config.file=/etc/promtail/config.yml
networks:
- loki
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
networks:
- loki
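Once everything shares the loki network, it is easy to verify from inside the Grafana container that the data source URL will resolve (a sketch; /ready is Loki's standard readiness endpoint, and the Alpine-based Grafana image ships busybox wget):

docker compose exec grafana wget -qO- http://loki:3100/ready
# prints "ready" when Loki is reachable under the same name
# used in the Grafana data source URL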
Hi guys, I'm having trouble sending my server container logs to my ELK stack. No input reaches logstash, so I'm unable to set up a Kibana index for collecting the logs. I think my problem is in the port settings.
Here is the docker-compose yml for the LAMP stack (only the server service):
version: '3'
services:
server:
build: ./docker/apache
links:
- fpm
ports:
- 80:80 # HTTP
- 443:443 # HTTPS
logging:
driver: "gelf"
options:
gelf-address: "udp://127.0.0.1:5000"
tag: "server"
And here is the docker-compose yml for the ELK stack, based on deviantony/docker-elk github project
version: '2'
services:
elasticsearch:
build: elasticsearch/
volumes:
- ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
- elk
logstash:
build: logstash/
volumes:
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./logstash/pipeline:/usr/share/logstash/pipeline
ports:
- "5000:5000"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
- elk
depends_on:
- elasticsearch
kibana:
build: kibana/
volumes:
- ./kibana/config/:/usr/share/kibana/config
ports:
- "5601:5601"
networks:
- elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
I've found the mistake: I have to specify the UDP protocol in the logstash service port definition, since the gelf logging driver sends over UDP.
logstash:
build: logstash/
volumes:
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./logstash/pipeline:/usr/share/logstash/pipeline
ports:
- "5000:5000/udp"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
- elk
depends_on:
- elasticsearch
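For completeness, the UDP mapping only helps if the pipeline behind it has a gelf input bound to that port (a minimal sketch of one pipeline file; the real files under ./logstash/pipeline are not shown in the question):

input {
  gelf {
    port => 5000   # matches the "5000:5000/udp" mapping above
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}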
You need to use the gelf input plugin. Here is an example of a functioning compose file:
services:
logstash:
image: docker.elastic.co/logstash/logstash:5.3.1
logging:
driver: "json-file"
networks:
- logging
ports:
- "127.0.0.1:12201:12201/udp"
entrypoint: logstash -e 'input { gelf { } } output { stdout{ } }'
You can test it by running:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N"; sleep 1; done'
and checking docker logs on the logstash container.