Not able to connect to Elasticsearch Docker container running on different ports - docker

I have an Elasticsearch 7.6.1 Docker container which I want to run on ports 9400 and 9500.
This is the docker run command I have used:
docker run -d --name elasticsearch761v2 -v /data/dump/:/usr/share/elasticsearch/data
-p 9400:9400 -p 9500:9500 -e "discovery.type=single-node" elasticsearch:7.6.1
This gives the output below:
docker ps -a | grep elastic
idofcontainer elasticsearch:7.6.1 "/usr/local/bin/docke" 18 minutes ago Up 4 minutes
9200/tcp, 0.0.0.0:9400->9400/tcp, 9300/tcp, 0.0.0.0:9500->9500/tcp elasticsearch761v2
I have also set elasticsearch.yml as below.
[root@idofcontainer config]# vi elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
transport.tcp.port: 9400
I have added iptables entries for the above ports too.
The log for this container is:
{"type": "server", "timestamp": "2020-04-06T08:25:22,684Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "4196b5b23",
"message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_task_manager_1][0], [.kibana_1][0]]]).", "cluster.uuid": "gx7s8R_PTUK4lFPPGBZA", "node.id": "XfLZnNNnQnAOHJnWdDQg" }
The curl output is this:
curl http://eserver:9400/_cat
This is not an HTTP port
Because of this, my Kibana is also not able to reach the ES server.
I have set kibana.yml to point to the above port.
kibana.yml:
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://eserver:9400/"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
The log of the Kibana container:
{"type":"log","#timestamp":"2020-04-06T08:49:14Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"Unable to revive connection: http://eserver:9400/"}
{"type":"log","#timestamp":"2020-04-06T08:49:14Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"No living connections"}

You have defined a custom transport port (9400) and you are using it as the HTTP port in your curl command to check the Elasticsearch server, which is exactly what the error message is pointing out:
This is not an HTTP port
As you mentioned, you want to run Elasticsearch on 9400 and 9500, so you need to properly bind the default HTTP port 9200 to 9500, using the command below:
docker run -d --name elasticsearch761v2 -v /data/dump/:/usr/share/elasticsearch/data
-p 9400:9400 -p 9500:9200 -e "discovery.type=single-node" elasticsearch:7.6.1
Note that the only change required is -p 9500:9200; after that, you can check your ES server using curl http://eserver:9500, i.e. using the HTTP port.
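As a quick sanity check (a sketch; eserver is the hostname from the question), the HTTP API should now answer on 9500, while 9400 keeps speaking the binary transport protocol:
# HTTP API, mapped to container port 9200
curl http://eserver:9500
# the _cat endpoints from the original attempt now work too
curl http://eserver:9500/_cat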

Related

SSL (curl) connection error in Elasticsearch setup

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps below.
One of the master nodes, es11, gets the error below; however, the same curl command works fine on the other 2 nodes, i.e. es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
The following error appears in the logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
https://localhost:9316 in a browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads, please? Or, if I repeat step 4, do I need to copy the certs again to es12 and es13?
Below is elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
  - node.name=es11
  - transport.port=9316
ports:
  - 9216:9200
  - 9316:9316
1. Initialize a Docker swarm: on es11 run docker swarm init, then follow the instructions to join es12 and es13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and es13 to the respective servers
6. Use this busybox to create the overlay network on es12 and es13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on es12 and es13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster with docker exec -it es11 sh, then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(as per your https://discuss.elastic.co thread)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol and is not HTTP accessible.
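As an illustrative check (assuming the security setup from the steps above, with the elastic password set in step 9), curl should target the mapped HTTP port, with -k to accept the self-signed certificate:
curl -k -u elastic https://localhost:9216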

Elasticsearch - Bootstrap checks failing max virtual memory areas error

Hello, I want to install the ELK stack on Docker, so I followed the official documentation https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
When I start Elasticsearch in Docker to get the generated password for the elastic user and the enrollment token for enrolling Kibana, by executing this command:
docker run --name es01 --net elastic -p 9200:9200 -p 9300:9300 -it docker.elastic.co/elasticsearch/elasticsearch:8.1.2
I get this error:
ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
{"@timestamp":"2022-04-14T12:39:58.449Z", "log.level": "INFO", "message":"stopping ...", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"Thread-2","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"50af9edc5c7d","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2022-04-14T12:39:58.512Z", "log.level": "INFO", "message":"stopped", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"Thread-2","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"50af9edc5c7d","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2022-04-14T12:39:58.513Z", "log.level": "INFO", "message":"closing ...", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"Thread-2","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"50af9edc5c7d","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2022-04-14T12:39:58.531Z", "log.level": "INFO", "message":"closed", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"Thread-2","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"50af9edc5c7d","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2022-04-14T12:39:58.535Z", "log.level": "INFO", "message":"Native controller process has stopped - no new native processes can be started", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"ml-cpp-log-tail-thread","log.logger":"org.elasticsearch.xpack.ml.process.NativeController","elasticsearch.node.name":"50af9edc5c7d","elasticsearch.cluster.name":"docker-cluster"}
I solved this problem with these commands:
For Windows and macOS with Docker Desktop
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
For Windows with Docker Desktop WSL
wsl -d docker-desktop
sysctl -w vm.max_map_count=262144
And finally, I reinitialized the Docker containers.
Documentation
I resolved this problem by setting vm.max_map_count=262144 in /etc/sysctl.conf, which can be verified with:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
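Spelled out as commands for a Linux host, combining both answers (a sketch; tee -a appends the setting if it is not already present):
# apply immediately
sudo sysctl -w vm.max_map_count=262144
# persist across reboots, then reload
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p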

Logstash Crashes When Filebeat Running

Logstash runs well without the Beats configuration, over TCP, and I can see all the logs when I send them over TCP.
input {
  tcp {
    port => 8500
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
But I want to send logs to Logstash from Filebeat, so I changed the Logstash config to this:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
This is the docker run command for Logstash:
docker run -d -p 8500:8500 -h logstash --name logstash --link elasticsearch:elasticsearch -v C:\elk2\config-dir:/config-dir docker.elastic.co/logstash/logstash:7.5.2 -f /config-dir/logstash.conf
I am running Filebeat in Docker with the following:
docker run -d docker.elastic.co/beats/filebeat:6.8.6 setup --template -E output.logstash.enabled=true -E 'output.logstash.hosts=["127.0.0.1:5044"]'
But whenever I run Filebeat, the Logstash and Filebeat containers are stopped.
There is no meaningful Docker log:
[2020-01-24T14:13:37,104][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-24T14:13:37,978][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2020-01-24T14:13:38,657][INFO ][logstash.runner ] Logstash shut down.
You need to publish (-p) your Beats listening port:
docker run -d -p 5044:5044 -h logstash --name logstash --link elasticsearch:elasticsearch -v C:\elk2\config-dir:/config-dir docker.elastic.co/logstash/logstash:7.5.2 -f /config-dir/logstash.conf
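To confirm Filebeat can actually reach the published port before shipping logs, Filebeat's built-in connectivity test can help (a sketch; HOST_IP is a placeholder for the Docker host's address, since 127.0.0.1 inside the Filebeat container refers to the container itself):
docker run --rm docker.elastic.co/beats/filebeat:6.8.6 test output -E output.logstash.enabled=true -E 'output.logstash.hosts=["HOST_IP:5044"]'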

Flink cannot be run in Marathon

I have three physical nodes with Docker installed on them. I have one Docker container with Mesos, Marathon, Hadoop, and Flink. I configured the master node and slave nodes for Mesos, Zookeeper, and Marathon. I did these steps one by one.
First, on the master node, I enter the Docker container with this command:
docker run -v /home/user/.ssh:/root/.ssh --privileged -p 5050:5050 -p 5051:5051 -p 5052:5052 -p 2181:2181 -p 8082:8081 -p 6123:6123 -p 8080:8080 -p 50090:50090 -p 50070:50070 -p 9000:9000 -p 2888:2888 -p 3888:3888 -p 4041:4040 -p 7077:7077 -p 52222:22 -e WEAVE_CIDR=10.32.0.2/12 -e MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins -e LIBPROCESS_IP=10.32.0.2 -e MESOS_RESOURCES=ports*:[11000-11999] -ti hadoop_marathon_mesos_flink_2 /bin/bash
Then I run Mesos and Zookeeper:
/home/zookeeper-3.4.14/bin/zkServer.sh restart
/home/mesos-1.7.2/build/bin/mesos-master.sh --ip=10.32.0.1 --hostname=10.32.0.1 --roles=marathon,flink --quorum=1 --work_dir=/var/run/mesos --log_dir=/var/log/mesos
After that, I run Marathon in the same container:
/home/marathon-1.7.189-48bfd6000/bin/marathon --master 10.32.0.1:5050 --zk zk://10.32.0.1:2181/marathon --hostname 10.32.0.1 --webui_url 10.32.0.1:8080 --logging_level debug
And finally, I run Hadoop:
/opt/hadoop/sbin/start-dfs.sh
Marathon, Mesos, and Hadoop run without any problems.
The most important part of my work is running Flink in Marathon. I configured Flink in the Docker container like this:
env.java.home: /opt/java
jobmanager.rpc.address: 10.32.0.1
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: 10.32.0.1:2181,10.32.0.2:2181
recovery.zookeeper.path.mesos-workers: /mesos-workers
In the Marathon UI, I create an application and put this JSON file in it, but it fails.
{
  "id": "flink",
  "cmd": "/home/flink-1.7.0/bin/mesos-appmaster.sh -Dmesos.master=10.32.0.1:5050,10.32.0.2:5050 -Dmesos.initial-tasks=1",
  "cpus": 1.0,
  "mem": 1024
}
The Flink application fails in the Mesos UI. It shows this error:
I0428 06:01:39.586699 6155 exec.cpp:162] Version: 1.7.2
I0428 06:01:39.596458 6154 exec.cpp:236] Executor registered on agent 984595ae-e811-48fb-a9f5-ca6128e1cc1a-S0
I0428 06:01:39.598870 6157 executor.cpp:188] Received SUBSCRIBED event
I0428 06:01:39.599761 6157 executor.cpp:192] Subscribed executor on 10.32.0.3
I0428 06:01:39.599963 6157 executor.cpp:188] Received LAUNCH event
I0428 06:01:39.601236 6157 executor.cpp:697] Starting task flink.16a7cc18-697b-11e9-928f-ce235caa831e
I0428 06:01:39.613719 6157 executor.cpp:712] Forked command at 6163
I0428 06:01:39.787395 6157 executor.cpp:1013] Command exited with status 1 (pid: 6163)
I0428 06:01:40.791885 6162 process.cpp:927] Stopped the socket accept loop
The strange thing is that in STDOUT I see this text, even though I set JAVA_HOME in /etc/environment and flink-conf.yaml:
Please specify JAVA_HOME. Either in Flink config ./conf/flink-conf.yaml or as system-wide JAVA_HOME.
Would you please tell me what I should do about this problem?
Many thanks.
You can check your Flink log on the slave node. Also, it is better to change your JSON file as below; it helps you follow your application:
{
  "id": "flink",
  "cmd": "/home/flink-1.7.0/bin/mesos-appmaster.sh -Djobmanager.heap.mb=1024 -Djobmanager.rpc.port=6123 -Drest.port=8081 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=1024 -Dtaskmanager.numberOfTaskSlots=2 -Dparallelism.default=2 -Dmesos.resourcemanager.tasks.cpus=1",
  "cpus": 1.0,
  "mem": 1024,
  "fetch": [
    {
      "uri": "/home/flink-1.7.0/bin/mesos-appmaster.sh",
      "executable": true
    }
  ]
}
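As a usage sketch, the same definition can be submitted through Marathon's REST API instead of the UI (flink.json is a hypothetical file holding the JSON above; the address matches the --webui_url from the question):
curl -X POST http://10.32.0.1:8080/v2/apps -H "Content-Type: application/json" -d @flink.json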
Also, add JAVA_HOME to flink-conf.yaml on every node, master and slaves:
env.java.home: /opt/java
After adding JAVA_HOME, you will not see that error in STDOUT.
I hope it is useful.

Docker 1.10: access a container by its hostname from the host machine

I have Docker version 1.10 with the embedded DNS service.
I have created two service containers in my docker-compose file. They can reach each other by hostname and by IP, but when I try to reach one of them from the host machine, it doesn't work: it works only with the IP, not with the hostname.
So, is it possible to access a Docker container from the host machine by its hostname in Docker 1.10?
Update:
docker-compose.yml
version: '2'
services:
  service_a:
    image: nginx
    container_name: docker_a
    ports:
      - 8080:80
  service_b:
    image: nginx
    container_name: docker_b
    ports:
      - 8081:80
Then I start it with the command docker-compose up --force-recreate.
when I run:
docker exec -i -t docker_a ping -c4 docker_b - it works
docker exec -i -t docker_b ping -c4 docker_a - it works
ping 172.19.0.2 - it works (172.19.0.2 is docker_b's ip)
ping docker_a - fails
The result of docker network inspect test_default is:
[
    {
        "Name": "test_default",
        "Id": "f6436ef4a2cd4c09ffdee82b0d0b47f96dd5aee3e1bde068376dd26f81e79712",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1/16"
                }
            ]
        },
        "Containers": {
            "a9f13f023761123115fcb2b454d3fd21666b8e1e0637f134026c44a7a84f1b0b": {
                "Name": "docker_a",
                "EndpointID": "a5c8e08feda96d0de8f7c6203f2707dd3f9f6c3a64666126055b16a3908fafed",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "c6532af99f691659b452c1cbf1693731a75cdfab9ea50428d9c99dd09c3e9a40": {
                "Name": "docker_b",
                "EndpointID": "28a1877a0fdbaeb8d33a290e5a5768edc737d069d23ef9bbcc1d64cfe5fbe312",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
As answered here, there is a software solution for this; copying the answer:
There is an open-source application that solves this issue; it's called DNS Proxy Server.
It's a DNS server that resolves container hostnames; when it can't resolve a hostname, it falls back to public nameservers.
Start the DNS Server
$ docker run --hostname dns.mageddo --name dns-proxy-server -p 5380:5380 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server
It will set itself as your default DNS automatically (and revert to the original when it stops).
Start your container for the test
docker-compose up
docker-compose.yml
version: '2'
services:
  redis:
    container_name: redis
    image: redis:2.8
    hostname: redis.dev.intranet
    network_mode: bridge # this way it can also resolve other containers' names from inside, e.g. elasticsearch
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:2.2
    hostname: elasticsearch.dev.intranet
Now resolve your containers' hostnames
from host
$ nslookup redis.dev.intranet
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: redis.dev.intranet
Address: 172.21.0.3
from another container
$ docker exec -it redis ping elasticsearch.dev.intranet
PING elasticsearch.dev.intranet (172.21.0.2): 56 data bytes
It resolves Internet hostnames as well:
$ nslookup google.com
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: google.com
Address: 216.58.202.78
Here's what I do.
I wrote a Python script called dnsthing, which listens to the Docker events API for containers starting or stopping. It maintains a hosts-style file with the names and addresses of containers. Containers are named <container_name>.<network>.docker, so for example if I run this:
docker run --rm --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
I get this:
172.17.0.2 mysql.bridge.docker
I then run a dnsmasq process pointing at this hosts file. Specifically, I run a dnsmasq instance using the following configuration:
listen-address=172.31.255.253
bind-interfaces
addn-hosts=/run/dnsmasq/docker.hosts
local=/docker/
no-hosts
no-resolv
And I run the dnsthing script like this:
dnsthing -c "systemctl restart dnsmasq_docker" \
-H /run/dnsmasq/docker.hosts --verbose
So:
- dnsthing updates /run/dnsmasq/docker.hosts as containers stop/start.
- After an update, dnsthing runs systemctl restart dnsmasq_docker.
- dnsmasq_docker runs dnsmasq using the above configuration, bound to a local bridge interface with the address 172.31.255.253.
The "main" dnsmasq process on my system, maintained by
NetworkManager, uses this configuration from
/etc/NetworkManager/dnsmasq.d/dockerdns:
server=/docker/172.31.255.253
That tells dnsmasq to pass all requests for hosts in the .docker
domain to the docker_dnsmasq service.
This obviously requires a bit of setup to put everything together, but after that it seems to Just Work:
$ ping -c1 mysql.bridge.docker
PING mysql.bridge.docker (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.087 ms
--- mysql.bridge.docker ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
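A more direct check is to query the Docker-specific dnsmasq instance itself, bypassing the system resolver (illustrative; assumes dig is installed):
$ dig +short @172.31.255.253 mysql.bridge.docker
172.17.0.2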
To specifically solve this problem I created a simple /etc/hosts domain-injection tool that resolves names of local Docker containers on the host. Just run:
docker run -d \
-v /var/run/docker.sock:/tmp/docker.sock \
-v /etc/hosts:/tmp/hosts \
--name docker-hoster \
dvdarias/docker-hoster
You will be able to access a container using the container name, hostname, container ID, and via the network aliases declared for each network.
Containers are automatically registered when they start and removed when they are paused, dead or stopped.
The easiest way to do this is to add entries to your hosts file, as shown in the sketch below.
For Linux: add 127.0.0.1 docker_a docker_b to the /etc/hosts file.
For Mac: similar to Linux, but use the IP of the virtual machine from docker-machine ip default.
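A minimal sketch of both variants (the container names match the compose file from the question):
# Linux: the published ports are reachable on localhost
echo '127.0.0.1 docker_a docker_b' | sudo tee -a /etc/hosts
# Mac: find the VM's address first, then use it instead of 127.0.0.1
docker-machine ip default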
Similar to @larsks, I wrote a Python script too, but implemented it as a service. Here it is: https://github.com/nicolai-budico/dockerhosts
It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes.
Once /var/run/docker-hosts/hosts is changed, dnsmasq automatically updates its mapping and the container becomes available by hostname within a second.
$ docker run -d --hostname=myapp.local.com --rm -it ubuntu:17.10
9af0b6a89feee747151007214b4e24b8ec7c9b2858badff6d584110bed45b740
$ nslookup myapp.local.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: myapp.local.com
Address: 172.17.0.2
There are install and uninstall scripts. All you need is to allow your system to interact with this dnsmasq instance. I registered it in systemd-resolved:
$ cat /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.54
#FallbackDNS=
#Domains=
#LLMNR=yes
#MulticastDNS=yes
#DNSSEC=no
#Cache=yes
#DNSStubListener=udp
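After editing resolved.conf, systemd-resolved has to be restarted to pick up the new DNS entry; a minimal sketch:
sudo systemctl restart systemd-resolved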
