filebeat logs to kafka to clickhouse not persisting using docker-compose - docker

I'm trying to send Filebeat-collected logs to Kafka, and have Kafka feed the logs into ClickHouse. Filebeat is delivering logs to Kafka, but the logs never make it from Kafka into ClickHouse. I don't see anything unusual when checking the container logs. Please guide me: did I do something wrong here?
docker-compose.yml
version: "3.5"
services:
filebeat:
image: docker.elastic.co/beats/filebeat:8.0.1
volumes:
- "./logs:/var/log/"
- "./filebeat.yml:/usr/share/filebeat/filebeat.yml"
container_name: filebeat
networks:
- filebeat-net
depends_on:
- kafka
zookeeper:
image: docker.io/bitnami/zookeeper:3.7
ports:
- "2181:2181"
volumes:
- "zookeeper-data:/bitnami"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
networks:
- filebeat-net
container_name: zookeeper
kafka:
image: docker.io/bitnami/kafka:3
ports:
- "9092:9092"
- '29092:29092'
volumes:
- "kafka-data:/bitnami"
environment:
- "HOSTNAME_COMMAND=docker info | grep ^Name: | cut -d' ' -f 2"
- "KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT"
- "ALLOW_PLAINTEXT_LISTENER=yes"
- "KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181"
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
- "KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT"
- "KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092"
- "KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092"
- "KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true"
depends_on:
- zookeeper
networks:
- filebeat-net
container_name: kafka
clickhouse-server:
image: clickhouse/clickhouse-server:22.5.1
container_name: clickhouse-server
hostname: clickhouse-server
ulimits:
nofile:
soft: 262144
hard: 262144
ports:
- "8123:8123"
- "9000:9000"
volumes:
- './init.sql:/docker-entrypoint-initdb.d/init-db.sql'
depends_on:
- kafka
networks:
- filebeat-net
networks:
filebeat-net:
name: filebeat-net
driver: bridge
volumes:
zookeeper-data:
name: zookeeper-data
driver: local
kafka-data:
name: kafka-data
driver: local
filebeat.yml
filebeat.inputs:
  - input_type: log
    paths:
      - /var/log/*.log
output.kafka:
  hosts: ["kafka:9092"]
  topic: hello-messages
init.sql
CREATE TABLE IF NOT EXISTS mylogger (
    id String,
    event_time DateTime64(6),
    details_json String
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_time)
ORDER BY (id, event_time)
SETTINGS index_granularity = 8192;

CREATE TABLE IF NOT EXISTS mylogger_kafka (
    payload String
) ENGINE = Kafka('kafka:9092', 'hello-messages', 'KAFKA2CH_click', 'JSONAsString');

CREATE MATERIALIZED VIEW IF NOT EXISTS mylogger_kafka_consumer TO mylogger AS
SELECT
    JSONExtractString(payload, 'payload', 'after', 'id') AS id,
    toDateTime64(JSONExtractString(payload, 'payload', 'after', 'event_time'), 3, 'Asia/Jerusalem') AS event_time,
    JSONExtractString(payload, 'payload', 'after', 'details_json') AS details_json
FROM mylogger_kafka;
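For debugging, it can help to verify each hop separately. Below is a minimal sketch, assuming the container names, topic, and table defined above. Note also that Filebeat's Kafka output publishes each event as a flat JSON document with a top-level "message" field, so the payload.after.* paths in the materialized view (which look like a Debezium-style envelope) would likely extract empty strings.

# 1. Confirm events actually land in the topic (consumer script path is the bitnami image default):
docker exec -it kafka /opt/bitnami/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 --topic hello-messages --from-beginning --max-messages 5

# 2. Confirm whether the materialized view produced any rows:
docker exec -it clickhouse-server clickhouse-client --query "SELECT count() FROM mylogger"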

Related

ElasticSearch with Docker and Traefik

When I run Elasticsearch with Docker and Traefik (with SSL encryption), I can't connect to Elasticsearch via the domain.
When I remove all the Traefik parts next to Elasticsearch in the docker-compose, it works over IP and HTTP.
Here is my docker-compose.yml:
version: "3.7"
networks:
  traefik:
    external: true
  search:
services:
  elasticsearch:
    image: elasticsearch:7.16.2
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=XXXXXX
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elastic_search.rule=Host(se.mydomain.xyz)"
      - "traefik.http.routers.elastic_search.entrypoints.websecure"
      - "traefik.http.services.elastic_search.loadbalancer.server.port=9200"
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - search
  kibana:
    image: kibana:7.16.2
    container_name: kibana
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana_search.rule=Host(kibana.mydomain.xyz)"
      - "traefik.http.routers.kibana_search.entrypoints.websecure"
      - "traefik.http.services.kibana_search.loadbalancer.server.port=5601"
    #ports:
    #  - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://XXX.XXX.XXX.XXX:9200
      ELASTICSEARCH_HOSTS: '["http://XXX.XXX.XXX.XXX:9200"]'
    networks:
      - search
      - traefik
volumes:
  elasticsearch-data:
    driver: local
Does anyone have an idea for a solution?
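Two details stand out in the labels above: Traefik v2 expects the host inside Host() to be wrapped in backticks, and the entrypoint is assigned with =, not a dotted suffix. A corrected label set might look like this (router name and domain kept from the snippet above); the elasticsearch service would also need to join the external traefik network so Traefik can reach it:
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.elastic_search.rule=Host(`se.mydomain.xyz`)"
  - "traefik.http.routers.elastic_search.entrypoints=websecure"
  - "traefik.http.services.elastic_search.loadbalancer.server.port=9200"
networks:   # the service must also be on the proxy's network
  - search
  - traefik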

I got a 404 when running kibana on docker behind traefik, but elastic can be reached

I am having issues while running ELK on Docker behind Traefik. All the other services are running, but when I try to access Kibana in a browser via its URL, I get a 404.
This is my docker-compose.yml:
version: '3.4'
networks:
  app-network:
    name: app-network
    driver: bridge
    ipam:
      config:
        - subnet: xxx.xxx.xxx.xxx/xxx
services:
  reverse-proxy:
    image: traefik:v2.5
    command:
      --providers.docker.network=app-network
      --providers.docker.exposedByDefault=false
      --entrypoints.web.address=:80
      --entrypoints.websecure.address=:443
      --providers.docker=true
      --api=true
      --api.dashboard=true
    ports:
      - "80:80"
      - "443:443"
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /certs/:/certs
    labels:
      - traefik.enable=true
      - traefik.docker.network=public
      - traefik.http.routers.traefik-http.entrypoints=web
      - traefik.http.routers.traefik-http.service=api#internal
  elasticsearch:
    hostname: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=docker-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elasticsearch.entrypoints=http"
      - "traefik.http.routers.elastic.rule=Host(`elastic.mydomain.fr`)"
      - "traefik.http.services.elastic.loadbalancer.server.port=9200"
  kibana:
    hostname: kibana
    image: docker.elastic.co/kibana/kibana:7.12.0
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    links:
      - elasticsearch
    healthcheck:
      interval: 10s
      retries: 20
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana.entrypoints=http"
      - "traefik.http.routers.kibana.rule=Host(`kibana.mydomain.fr`)"
      - "traefik.http.services.kibana.loadbalancer.server.port=5601"
      - "traefik.http.routers.kibana.entrypoints=websecure"
volumes:
  esdata:
    driver: local
Knowing that, as I said, Elastic and the other services can be accessed.
I have already tried to set the basePath, but it did not work either.
Do you have any idea what I am missing?
You named your entrypoints "web" and "websecure" at the top, but the labels use "http" as the entrypoint; you have to rename them (unless you have also defined "http" as an entrypoint somewhere else). The name has to match the word you define in the configuration string --entrypoints.web.address=:80.
So for example: "traefik.http.routers.elasticsearch.entrypoints=web"
Additional tip: you can remove the label with the loadbalancer port, because as long as you define an exposed or mapped port in Docker, Traefik recognises the port to use. I have no such line configured for my personal services: - "traefik.http.services.elastic.loadbalancer.server.port=9200"
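Applied to the kibana service above, the labels would collapse into something like this (the two duplicate entrypoints labels reduce to one, and the loadbalancer port label is dropped per the tip):
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.kibana.rule=Host(`kibana.mydomain.fr`)"
  - "traefik.http.routers.kibana.entrypoints=websecure"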
Thanks for your answer Zeikos, it helped me a lot.
I think this can be closed now.

how to run SolrCloud with docker-compose

Please help me with my docker-compose file.
Right now I'm running Solr from a Dockerfile, but I need to change it to SolrCloud. I need 2 Solr instances, an internal Zookeeper, and Docker (local).
This is an example of the docker-compose file I made:
version: "3"
services:
mongo:
image: mongo:latest
container_name: mongo
hostname: mongo
networks:
- gsec
ports:
- 27018:27017
sqlserver:
image: microsoft/mssql-server-linux:latest
hostname: sqlserver
container_name: sqlserver
environment:
SA_PASSWORD: "#Password123!"
ACCEPT_EULA: "Y"
networks:
- gsec
ports:
- 1403:1433
solr:
image: solr
container_name: solr
ports:
- "8983:8983"
networks:
- gsec
volumes:
- data:/opt/solr/server/solr/mycores
entrypoint:
- docker-entrypoint.sh
- solr-precreate
- mycore
volumes:
data:
networks:
gsec:
driver: bridge
Thank you in advance.
The Solr docker image has a ZooKeeper server embedded in it. You just have to start Solr with the right parameters and add the ZooKeeper port 9983:9983 to the docker-compose file:
solr:
  image: solr
  container_name: solr
  ports:
    - "9983:9983"
    - "8983:8983"
  networks:
    - gsec
  volumes:
    - data:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr
    - start
    - -c
    - -f
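Once the container is up, a quick way to confirm it actually started in cloud mode is the standard Solr CLI status command (container name taken from the snippet above):
docker exec -it solr solr status
The Cloud tab should also appear in the admin UI at http://localhost:8983/solr/#/~cloud.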
SolrCloud is basically a Solr cluster where ZooKeeper is used to coordinate and configure the cluster.
Usually you use SolrCloud with Docker because you're learning how it works, or because you're preparing your application (locally?) to deploy in a bigger environment.
On the other hand, it doesn't make much sense to run SolrCloud if you don't have a distributed configuration, i.e. Solr and ZooKeeper running on different nodes.
SolrCloud is the kind of cluster you need when you have hundreds or even thousands of searches per second against collections of millions or even billions of documents.
Your cluster has to scale horizontally.
A version to use with an external ZooKeeper.
'-t' changes the data dir in the container.
To see the other options, run: solr start -help
version: '3'
services:
  solr1:
    image: solr
    ports:
      - "8984:8984"
    entrypoint:
      - solr
    command:
      - start
      - -f
      - -c
      - -h
      - "10.1.0.157"
      - -p
      - "8984"
      - -z
      - "10.1.0.157:2181,10.1.0.157:2182,10.1.0.157:2183"
      - -m
      - 1g
      - -t
      - "/opt/solr/server/solr/mycores"
    volumes:
      - "./data1/mycores:/opt/solr/server/solr/mycores"
I use this setup locally to test three instances of solr and three instances of zookeeper, based on the official example.
version: '3.7'
services:
  solr-1:
    image: solr:8.7
    container_name: solr-1
    ports:
      - "8981:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
    # command:
    #   - solr-precreate
    #   - gettingstarted
  solr-2:
    image: solr:8.7
    container_name: solr-2
    ports:
      - "8982:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  solr-3:
    image: solr:8.7
    container_name: solr-3
    ports:
      - "8983:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  zoo-1:
    image: zookeeper:3.6
    container_name: zoo-1
    restart: always
    hostname: zoo-1
    volumes:
      - zoo1data:/data
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-2:
    image: zookeeper:3.6
    container_name: zoo-2
    restart: always
    hostname: zoo-2
    volumes:
      - zoo2data:/data
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-3:
    image: zookeeper:3.6
    container_name: zoo-3
    restart: always
    hostname: zoo-3
    volumes:
      - zoo3data:/data
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    networks:
      - solr
networks:
  solr:
# persist the zookeeper data in volumes
volumes:
  zoo1data:
    driver: local
  zoo2data:
    driver: local
  zoo3data:
    driver: local
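To exercise the cluster once it is up, one can create a test collection spread across the three nodes; a sketch, assuming the container names above (create_collection ships with the Solr image, and the collection name mycoll is arbitrary):
docker exec -it solr-1 solr create_collection -c mycoll -shards 3 -replicationFactor 2
The shard layout is then visible in the Cloud graph at http://localhost:8981/solr/#/~cloud.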

Redis connection to 127.0.0.1:6379 failed using Docker

Hi, I'm using Docker Compose to handle all of my configuration.
I have Mongo, Node, Redis, and the Elastic stack.
But I can't get my Redis to connect to my Node app.
Here is my docker-compose.yml
version: '2'
services:
  mongo:
    image: mongo:3.6
    container_name: "backend-mongo"
    ports:
      - "27017:27017"
    volumes:
      - "./data/db:/data/db"
  redis:
    image: redis:4.0.7
    ports:
      - "6379:6379"
    user: redis
  adminmongo:
    container_name: "backend-adminmongo"
    image: "mrvautin/adminmongo"
    ports:
      - "1234:1234"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
    container_name: "backend-elastic"
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  web:
    container_name: "backend-web"
    build: .
    ports:
      - "8888:8888"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/backend
    restart: always
    depends_on:
      - mongo
      - elasticsearch
      - redis
    volumes:
      - .:/backend
      - /backend/node_modules
volumes:
  esdata1:
    driver: local
networks:
  esnet:
Things to notice:
Redis is already running (I can ping it)
I don't have any services running on my host, only in containers
The other containers (except Redis) work well
I've tried the methods below:
const redisClient = redis.createClient({host: 'redis'});
const redisClient = redis.createClient(6379, '127.0.0.1');
const redisClient = redis.createClient(6379, 'redis');
I'm using docker 17.12 on xubuntu 16.04.
How can I connect my app to my Redis container?
Adding
hostname: redis
under the redis section fixes this issue.
So it will be something like this:
redis:
  image: redis:4.0.7
  ports:
    - "6379:6379"
  command: ["redis-server", "--appendonly", "yes"]
  hostname: redis
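To confirm that the service name now resolves from the app container, one can check DNS from inside it (container name taken from the compose file above; assumes the image provides getent):
docker exec -it backend-web getent hosts redis
After that, the redis.createClient({host: 'redis'}) variant from the question should connect.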

Unable to connect docker container to logstash via gelf driver

Hi guys, I'm having trouble sending my server container logs to my ELK stack. No input reaches Logstash, so I'm unable to set up a Kibana index for collecting logs. I think my problem is in the port settings.
Here is the docker-compose yml for the LAMP stack (only the server service):
version: '3'
services:
  server:
    build: ./docker/apache
    links:
      - fpm
    ports:
      - 80:80 # HTTP
      - 443:443 # HTTPS
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://127.0.0.1:5000"
        tag: "server"
And here is the docker-compose yml for the ELK stack, based on the deviantony/docker-elk GitHub project:
version: '2'
services:
  elasticsearch:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
I've found the mistake: I have to specify the UDP protocol in the logstash service port definition.
logstash:
  build: logstash/
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    - ./logstash/pipeline:/usr/share/logstash/pipeline
  ports:
    - "5000:5000/udp"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
You need to use the gelf input plugin. Here is an example of a functioning compose file:
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.1
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "127.0.0.1:12201:12201/udp"
    entrypoint: logstash -e 'input { gelf { } } output { stdout{ } }'
You can test it by running:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N"; sleep 1; done'
and checking docker logs on the logstash container.
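For the deviantony/docker-elk layout from the question, the gelf input would instead live in a pipeline file under ./logstash/pipeline. A minimal sketch, assuming the corrected 5000:5000/udp port mapping above (the gelf input defaults to port 12201, so the port has to be set explicitly):
input {
  gelf {
    port => 5000
    use_udp => true
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}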
