Running ELK stack in Docker on the host network - docker

I want to run Elasticsearch and Kibana in Docker containers, using my host machine's network:
$sudo docker run -dit --name elasticsearch -h elasticsearch --net host -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
WARNING: Published ports are discarded when using host network mode
$sudo docker run -dit --name kibana -h kibana --net host -p 5601:5601 kibana:6.6.1
WARNING: Published ports are discarded when using host network mode
and I get the following errors on kibana:
log [14:32:26.655] [warning][admin][elasticsearch] Unable to revive connection: http://elasticsearch:9200/
log [14:32:26.656] [warning][admin][elasticsearch] No living connections
But if I don't use the host network:
sudo docker network create mynetwork
sudo docker run -dit --name elasticsearch -h elasticsearch --net mynetwork -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
sudo docker run -dit --name kibana -h kibana --net mynetwork -p 5601:5601 kibana:6.6.1
everything works fine. What is the problem?

--net host disables most of the Docker networking stack. Basic features, like communicating between containers using their container names as host names, don't work. Except in very unusual circumstances, it's never necessary.
Your second setup that uses standard Docker networking and publishes selected ports through the host is a best practice.
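With that second setup you can sanity-check name resolution by curling Elasticsearch from the Kibana container by name (assuming curl is available in the image):
$sudo docker exec kibana curl -s http://elasticsearch:9200/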

Related

Docker's container name cannot be resolved

I just tried to create two containers for Elasticsearch and Kibana.
docker network create esnetwork
docker run --name myes --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
and Elastic Search works when I use http://localhost:9200 or http://internal-ip:9200
But when I use http://myes:9200, it just can't resolve the container name.
Thus when I run
docker run --name mykib --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://myes:9200" docker.elastic.co/kibana/kibana:7.9.3
It couldn't be created because it cannot resolve myes:9200.
I also tried replacing "ELASTICSEARCH_HOSTS=http://myes:9200" with localhost:9200 or the internal IP instead, but nothing works.
So I think my question should be: how do I make the container's DNS work?
How are you resolving 'myes'?
Is it mapped in the hosts file and resolving to 127.0.0.1?
Also, use 127.0.0.1 wherever possible, as localhost could be pointing to something else and not getting resolved.
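One quick way to check whether Docker's embedded DNS resolves the name is a throwaway container on the same network (busybox here is just an example image):
docker run --rm --net esnetwork busybox nslookup myes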
It seems this problem doesn't arise from DNS. Both the Elasticsearch and Kibana containers should use the fixed name "elasticsearch", so the docker commands will be:
$docker network create esnetwork
$sudo vi /etc/sysctl.d/max_map_count.conf
vm.max_map_count=262144
$docker run --name elasticsearch --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
$docker run --name kib01-test --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:7.9.3
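Note that writing the file under /etc/sysctl.d/ does not change the running kernel by itself; to apply vm.max_map_count without rebooting, reload the sysctl settings:
$sudo sysctl --system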
Then, if the terminals that ran the installations haven't exited automatically, just close them and restart the containers from the Docker Desktop manager. Everything should then go smoothly.
My environment is Fedora 36, docker 20.10.18

Connect application to database when they are in separate docker containers

Well, the setup is simple: there should be two containers, one for the MySQL database and the other for the web application.
This is how I run the containers, the first for the database and the second for the app:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d mysql
docker run -p 8081:8081 myrepo/myapp
The application tries to connect to the database using localhost:3306, but as I found out, the issue is that each container has its own localhost.
One of the solutions I found was to put the containers on the same network using --net, and the docker commands ended up like the following:
docker network create my-network
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
docker run --net my-network -p 8081:8081 myrepo/myapp
Still, the web application is not able to connect to the database. What am I doing wrong, and what is the proper way to connect an application to a database when they are both inside containers?
You could use the name of the container (i.e. mysql-container) to connect to MySQL. Example:
Run the mysql container:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
Connect from another container using the mysql client:
docker run --net my-network -it mysql mysql -u root -p db -h mysql-container
In your application's database URL, replace whatever IP you have with mysql-container.
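For example, with a JDBC-style connection URL (assuming a Java app; the db schema name comes from MYSQL_DATABASE above), it would look like:
jdbc:mysql://mysql-container:3306/db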
Well, after additional research, I successfully managed to connect to the database.
The approach I used is the following:
On my host, I grabbed the IP address of the Docker bridge itself, not of a specific container:
sudo ip addr show | grep docker0
I added the docker0 IP address to the database connection URL inside my application, and the application managed to connect to the database (note: with this flow I don't add the --net option when starting the containers).
What is definitely strange is that even adding a shared network like --net=my-network for both containers didn't work. Moreover, I did try to use --net=host to share the host's network with the containers, and it was still unsuccessful. If anyone can explain why this couldn't work, please share your knowledge.

KafkaTool: Can't connect to Kafka cluster

I'm trying to connect to Kafka using KafkaTool. I get an error:
Error connecting to the cluster. failed create new KafkaAdminClient
Kafka and Zookeeper are hosted in Docker. I run the following commands:
docker network create kafka
docker run --network=kafka -d --name zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:latest
docker run --network=kafka -d -p 9092:9092 --name kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:latest
(screenshot: settings for KafkaTool)
Why does KafkaTool not connect to the Kafka that is hosted in Docker?
I'm assuming this GUI is not coming from a Docker container. Therefore, your host machine doesn't know what zookeeper or kafka are, only the Docker network does.
In the GUI, you will want to use localhost for both, then in your Kafka run command, leave all the other variables alone but change -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
The Zookeeper run command is fine, but add -p 2181:2181 to publish the port to the host so that the GUI can connect.
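Putting that together, the adjusted commands would look something like this (a sketch based on the commands from the question):
docker run --network=kafka -d -p 2181:2181 --name zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:latest
docker run --network=kafka -d -p 9092:9092 --name kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:latest
Note that advertising localhost:9092 means other containers on the kafka network can no longer reach the broker through its container name; supporting both would require configuring two separate listeners.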

Filebeat doesn't send logs to Logstash on ELK Docker stack

I have the ELK stack installed in Docker (each component sits in its own container on the same network, and they use the official ELK images).
This is how I configured the ELK stack:
1. sudo docker network create somenetwork
2.
sudo docker pull elasticsearch:6.6.1
sudo docker run -dit --name elasticsearch -h elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.1
3.
sudo docker pull kibana:6.6.1
sudo docker run -dit --name kibana -h kibana --net somenetwork -p 5601:5601 kibana:6.6.1
4. Run Logstash:
sudo docker pull logstash:6.6.1
sudo docker run -it --name logstash -h logstash --net somenetwork -p 5044:5044 -v $(pwd)/pipeline/:/usr/share/logstash/pipeline -v $(pwd)/config/logstash.yml:/usr/share/logstash/config/logstash.yml logstash:6.6.1 logstash -f /usr/share/logstash/pipeline/logstash.conf
I also have an app container that runs Filebeat and has a "log.out" log file. This is the "filebeat.yml":
filebeat.prospectors:
- input_type: log
  enabled: true
  paths:
    - /home/log.out
output.logstash:
  hosts: ["logstash:5044"]
The Logstash configuration file, logstash.conf:
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
I can connect with telnet ($ telnet logstash 5044) from the app container.
If I just use the stdin input in Logstash (without getting the logs from Filebeat), I can see the logs in Kibana.
I'm pretty sure I'm missing something simple. I'm new to ELK. Thanks
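As an aside, one thing worth checking (an assumption, since it depends on the exact Filebeat version): in the 6.x series the filebeat.prospectors / input_type keys are deprecated in favor of filebeat.inputs / type, so the equivalent config would be:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/log.out
output.logstash:
  hosts: ["logstash:5044"]
Running filebeat test output from the app container can also confirm whether the logstash:5044 endpoint is reachable.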

Connect to Redis Docker container from a non-Docker app on a remote server

Goal: Connect to Redis via an app from a remote server.
Problem: I don't know the exact syntax for creating the Redis container.
You have to publish the port from Docker to the outside world:
docker run --name some-redis -d -p 6379:6379 redis
But you need to be careful if you are doing this on a public IP, so it's better to attach a config file with security enabled:
docker run --name some-redis -d -p 6379:6379 \
-v /path/redis.conf:/usr/local/etc/redis/redis.conf \
redis redis-server /usr/local/etc/redis/redis.conf
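A minimal redis.conf for that mount could look something like this (a sketch; the password is a placeholder to replace):
# listen on all interfaces inside the container so the published port works
bind 0.0.0.0
# require clients to authenticate before running commands
requirepass replace-with-a-strong-password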
Bind the Redis container to a host port and connect from the remote server using "REDIS_HOST:REDIS_HOST_PORT":
docker run -d --name redis -v <data-dir>:/data -p 6379:6379 redis
You should now be able to connect to Redis from the remote app server using REDIS_HOST and port 6379.
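To verify from the remote app server (assuming redis-cli is installed there), replace REDIS_HOST with the Docker host's address:
redis-cli -h REDIS_HOST -p 6379 ping
A PONG reply confirms the container is reachable.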
PS - The DNS/IP address of the Redis host should not change.
Ref - https://docs.docker.com/config/containers/container-networking/#published-ports
