Filebeat doesn't send logs to Logstash on ELK Docker stack - docker

I have the ELK stack installed on Docker (each service runs in its own container, on the same network, using the official ELK images).
This is how I configured it:
1. Create the network:
sudo docker network create somenetwork
2. Run Elasticsearch:
sudo docker pull elasticsearch:6.6.1
sudo docker run -dit --name elasticsearch -h elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.1
3. Run Kibana:
sudo docker pull kibana:6.6.1
sudo docker run -dit --name kibana -h kibana --net somenetwork -p 5601:5601 kibana:6.6.1
4. Run Logstash:
sudo docker pull logstash:6.6.1
sudo docker run -it --name logstash -h logstash --net somenetwork -p 5044:5044 -v $(pwd)/pipeline/:/usr/share/logstash/pipeline -v $(pwd)/config/logstash.yml:/usr/share/logstash/config/logstash.yml logstash:6.6.1 logstash -f /usr/share/logstash/pipeline/logstash.conf
I also have an app container that runs Filebeat and writes a "log.out" log file. This is the filebeat.yml:
filebeat.prospectors:
- input_type: log
  enabled: true
  paths:
    - /home/log.out

output.logstash:
  hosts: ["logstash:5044"]
The Logstash configuration file, logstash.conf:
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}

output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
I can connect with telnet ($ telnet logstash 5044) from the app container.
If I just use the stdin input in Logstash (without getting the logs from Filebeat), I can see the logs in Kibana.
I'm pretty sure I'm missing something simple; I'm new to ELK. Thanks
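For what it's worth, in Filebeat 6.x the `filebeat.prospectors` / `input_type` keys are deprecated in favour of `filebeat.inputs` / `type`. A minimal filebeat.yml sketch for 6.6 (the path and Logstash host are carried over from the question):

```yaml
# Sketch using the Filebeat 6.x input syntax; values taken from the question above
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/log.out

output.logstash:
  hosts: ["logstash:5044"]
```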

Related

Docker container name cannot be resolved

I just tried to create two containers for Elasticsearch and Kibana.
docker network create esnetwork
docker run --name myes --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
and Elastic Search works when I use http://localhost:9200 or http://internal-ip:9200
But when I use http://myes:9200, it just can't resolve the container name.
Thus when I run
docker run --name mykib --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://myes:9200" docker.elastic.co/kibana/kibana:7.9.3
It couldn't be created because it cannot resolve myes:9200.
I also tried to replace "ELASTICSEARCH_HOSTS=http://myes:9200" with localhost:9200 or the internal IP instead, but nothing works.
So I think my question should be: how do I make the container's DNS resolution work?
How are you resolving 'myes'?
Is it mapped in the hosts file and resolving to 127.0.0.1?
Also, use 127.0.0.1 wherever possible, as localhost could be pointing to something else and not getting resolved.
It seems this problem doesn't arise from DNS. Both the Elasticsearch and Kibana containers should use the fixed name "elasticsearch", so the docker commands will be:
$ docker network create esnetwork
$ sudo vi /etc/sysctl.d/max_map_count.conf
vm.max_map_count=262144
$ docker run --name elasticsearch --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
$ docker run --name kib01-test --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:7.9.3
If the terminals that ran these commands end up stuck, just close them and restart the containers from Docker Desktop; then everything will go smoothly.
My environment is Fedora 36, Docker 20.10.18.
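The same fixed-name setup can also be written as a Compose file, where each service name automatically becomes a DNS name on the project network (a sketch, not from the original answer; image tags and ports carried over from the commands above):

```yaml
# docker-compose.yml sketch: service names double as DNS names on the default network
version: "3"
services:
  elasticsearch:
    image: elasticsearch:7.9.3
    environment:
      - discovery.type=single-node
    ports:
      - "127.0.0.1:9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.3
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```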

Local Docker RabbitMQ - Connection Timeout

I'm working through an example in the Clojure Programming Cookbook that involves running RabbitMQ locally in Docker. I start it up using
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
and I see log output that includes the line
started TCP listener on [::]:5672.
When I try to connect to it using
(langohr.core/connect {:host "172.17.0.2"})
I get the error "Operation timed out (Connection timed out)". Not sure if it's relevant, but I'm on macOS 11.4.
Also,
docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-rabbit
returns
172.17.0.2.
Any ideas?
Try mapping the ports to the host:
docker run -it --rm --name my-rabbitmq \
-p 5672:5672 -p 15672:15672 rabbitmq:3-management
Then you can reach the management UI at http://localhost:15672 and AMQP at localhost:5672.
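To verify the mapping from the client side (the same check telnet performs), here is a small stdlib-only Python sketch; the host and port are whatever you published:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the ports published as above, this should report True:
# port_open("localhost", 5672)
```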

Running ELK stack docker on host machine

I want to run Elasticsearch and Kibana in Docker containers on my host machine's network:
$sudo docker run -dit --name elasticsearch -h elasticsearch --net host -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
WARNING: Published ports are discarded when using host network mode
$sudo docker run -dit --name kibana -h kibana --net host -p 5601:5601 kibana:6.6.1
WARNING: Published ports are discarded when using host network mode
and I get the following errors on kibana:
log [14:32:26.655] [warning][admin][elasticsearch] Unable to revive connection: http://elasticsearch:9200/
log [14:32:26.656] [warning][admin][elasticsearch] No living connections
But if I don't use the host network:
sudo docker network create mynetwork
sudo docker run -dit --name elasticsearch -h elasticsearch --net mynetwork -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
sudo docker run -dit --name kibana -h kibana --net mynetwork -p 5601:5601 kibana:6.6.1
everything works fine. What is the problem?
--net host disables most of the Docker networking stack. Basic features like communicating between containers using their container name as a host name don’t work. Except in very unusual circumstances it’s never necessary.
Your second setup that uses standard Docker networking and publishes selected ports through the host is a best practice.

Port Forwarding Failing For RabbitMQ in Docker

I'm following the Docker documentation on https://docs.docker.com/samples/library/rabbitmq but when I get to port forwarding, I get the following error: C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint some-rabbit11 (c8065d91c990ad498501160011a7f264522ddb5f5a1188db934c47853f833fa2): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:8080:tcp:172.17.0.2:15672: input/output error.
The command I'm trying to run from the terminal is docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 rabbitmq:3.6-management
From what I can find online, the command appears to be correct so I'm not sure what is the root cause.
Find out if any Docker containers are using rabbitmq:
docker ps -a
Remove any containers using the rabbitmq image:
docker rm <CONTAINER ID>
Restart Docker with the system tray app.
Restart the RabbitMQ container:
docker run -d -p 15672:15672 -p 5672:5672 --name some-rabbit rabbitmq:3.6-management
The management console is exposed on port 15672 and RabbitMQ itself on port 5672.
Ensure the new instance is running:
docker ps
Browse to 127.0.0.1:5672 in Firefox (this does not work in Google Chrome, since it is the raw AMQP port, not HTTP).
The cryptic response shows that RabbitMQ is listening.
Go to 127.0.0.1:15672 to view the management plugin in action.
The credentials are the defaults (guest / guest).

Connect to a Redis Docker container from a non-Docker app on a remote server

Goal: Connect to Redis via an app from a remote server.
Problem: I don't know the exact syntax of a Redis container creation.
You have to expose the port from Docker to the outside world:
docker run --name some-redis -d -p 6379:6379 redis
But you need to be careful if you're doing this on a public IP,
so it's better to attach a config file with security enabled:
docker run --name some-redis -d -p 6379:6379 \
-v /path/redis.conf:/usr/local/etc/redis/redis.conf \
redis redis-server /usr/local/etc/redis/redis.conf
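The security-enabled config file mentioned above isn't shown in the answer; a minimal redis.conf sketch (these are standard Redis directives, and the password is a placeholder you must change):

```conf
# redis.conf sketch for a publicly reachable instance
bind 0.0.0.0                                  # listen on all interfaces (needed for remote clients)
protected-mode yes                            # refuse external clients unless auth is configured
requirepass change-me-to-a-strong-password    # clients must AUTH before issuing commands
```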
Bind the Redis container to a host port and connect from the remote server using "REDIS_HOST:REDIS_HOST_PORT".
docker run -d --name redis -v <data-dir>:/data -p 6379:6379 redis
You should be able to connect to Redis now from the remote app server on REDIS_HOST and port 6379.
PS: the DNS/IP address of the Redis host should not change.
Ref - https://docs.docker.com/config/containers/container-networking/#published-ports
