Unable to get OpenSearch dashboard by running OpenSearch docker compose

I am a Windows user. I installed Windows Subsystem for Linux (WSL2) and then installed Docker on it. Then I tried to get started with OpenSearch, so I followed the documentation at the link
https://opensearch.org/downloads.html and ran docker-compose up. In the shell I get an error message like:
opensearch-dashboards | {"type":"log","@timestamp":"2022-01-18T16:31:18Z","tags":["error","opensearch","data"],"pid":1,"message":"[ConnectionError]: getaddrinfo EAI_AGAIN opensearch-node1 opensearch-node1:9200"}
At http://localhost:5601/ I get the message:
OpenSearch Dashboards server is not ready yet
I also raised the memory resource preference to 5GB in Docker Desktop, but it still doesn't work. Can somebody help me with this?

After 5 days of having issues with OpenSearch, I've found a setup that works fine for me:
Set Docker memory to 4GB
Set the Docker VM's vm.max_map_count = 262144
Then use earlier versions of OpenSearch, because the latest does not seem stable:
opensearchproject/opensearch:1.2.3
opensearchproject/opensearch-dashboards:1.1.0
opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2
Here is my docker-compose.yml file:
version: '3'
services:
  opensearch-node1A:
    image: opensearchproject/opensearch:1.2.3
    container_name: opensearch-node1A
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1A
      - discovery.seed_hosts=opensearch-node1A,opensearch-node2A
      - cluster.initial_master_nodes=opensearch-node1A,opensearch-node2A
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2A:
    image: opensearchproject/opensearch:1.2.3
    container_name: opensearch-node2A
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2A
      - discovery.seed_hosts=opensearch-node1A,opensearch-node2A
      - cluster.initial_master_nodes=opensearch-node1A,opensearch-node2A
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboardsA:
    image: opensearchproject/opensearch-dashboards:1.1.0
    container_name: opensearch-dashboardsA
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1A:9200","https://opensearch-node2A:9200"]'
    networks:
      - opensearch-net
  logstash-with-plugin:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2
    container_name: logstash-with-plugin
    networks:
      - opensearch-net
volumes:
  opensearch-data1:
  opensearch-data2:
networks:
  opensearch-net:
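With that file in place, bringing the stack up and checking that the cluster answers looks something like this (a sketch; admin:admin is the default demo credential shipped with the OpenSearch 1.x images, and -k skips verification of the self-signed demo certificate):
docker-compose up -d
curl -k -u 'admin:admin' 'https://localhost:9200/_cluster/health?pretty'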

I had the same error message when opening http://localhost:5601/ while testing OpenSearch and OpenSearch Dashboards locally using Docker on Windows 10:
OpenSearch Dashboards server is not ready yet
opensearch-dashboards | {"type":"log","@timestamp":"2022-02-10T12:29:35Z","tags":["error","opensearch","data"],"pid":1,"message":"[ConnectionError]: getaddrinfo EAI_AGAIN opensearch-node1 opensearch-node1:9200"}
But when looking into the log I also found this other error:
opensearch-node1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
The three-part solution that worked for me was:
Part 1
On each OpenSearch node, update the file:
/usr/share/opensearch/config/opensearch.yml
and add the line:
plugins.security.disabled: true
before the security plugin settings:
# Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
#discovery.type: single-node
plugins.security.disabled: true
######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
I found the information in the official OpenSearch documentation.
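As a side note, here is a minimal sketch of an alternative, assuming the stock OpenSearch 1.x image picks up settings passed through the environment the way it does for cluster.name and the other entries above: the same flag can be set per node in docker-compose instead of editing the file inside each container:
opensearch-node1A:
  environment:
    # assumption: the image applies environment entries as opensearch.yml settings
    - plugins.security.disabled=true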
Part 2
Set the memory allocated to Docker Desktop to 4GB in .wslconfig; more information here:
opendistrocommunity discussion
stackoverflow allocate memory
Make sure the allocated memory is actually applied (you have to restart Docker Desktop): run docker info and check the "Total Memory" line. It should be set to roughly 4GB (in my case it showed 3.84GiB).
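For reference, a minimal .wslconfig sketch (it lives at %UserProfile%\.wslconfig on the Windows host; the values below are illustrative, not required):
[wsl2]
memory=4GB      # caps the WSL2 VM that Docker Desktop runs in
processors=2    # optional, shown only as an example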
Part 3
Also increase vm.max_map_count:
Open PowerShell:
wsl -d docker-desktop
echo "vm.max_map_count = 262144" > /etc/sysctl.d/99-docker-desktop.conf
I found the info here in a GitHub discussion
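Note that the file under /etc/sysctl.d/ only takes effect once the VM restarts; to apply and verify the setting immediately in the same shell (standard sysctl usage):
sysctl -w vm.max_map_count=262144   # apply now, without a restart
sysctl vm.max_map_count             # confirm the running value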

I had the same issue with my OpenSearch Dashboards instance installed on a VM without Docker. The problem was a wrong search-engine connection setting in the opensearch-dashboards.yml file: I had mixed up the https and http protocols (there was a mismatch between the settings of opensearch and opensearch-dashboards):
opensearch.hosts: [https://localhost:9200]
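For comparison, a minimal opensearch-dashboards.yml sketch with the protocols aligned; the verificationMode line is an assumption that only applies if OpenSearch serves self-signed demo certificates:
opensearch.hosts: ["https://localhost:9200"]
opensearch.ssl.verificationMode: none   # assumption: demo/self-signed certs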

Related

Elasticsearch cluster doesn't work on Docker Swarm

The docker-compose YAML file below brings up a 3-node Elasticsearch cluster when used with the docker compose command. That is fine for debugging, but I want to move to deployment, so I want to deploy on a swarm where the containers can run on different systems.
So
docker compose up
works, but
docker stack deploy -c docker-compose.yml p3es
creates the same containers (although on different systems) and the overlay network, but the elasticsearch instances are not able to talk to each other via port 9300. So a master never gets assigned and although elasticsearch responds to HTTP requests they just error out.
In the logs the following exception/stack trace appears on each container:
p3es_es01.1.sv26uqp4i4s3#carbon | "stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es03][10.0.12.9:9300][internal:cluster/coordination/join]",
p3es_es01.1.sv26uqp4i4s3#carbon | "Caused by: org.elasticsearch.transport.ConnectTransportException: [es01][10.0.0.53:9300] connect_exception",
(etc)
The cause of the exception turns out to be:
p3es_es01.1.sv26uqp4i4s3#carbon | "Caused by: java.io.IOException: connection timed out: 10.0.0.53/10.0.0.53:9300",
So here are some things I have tried:
I invoke a shell on one of the containers. I can ping each of the other containers. I can also do a curl -XGET on each of the containers and get a response from port 9200.
If I do a curl -XGET on port 9300 on one of the containers I get a "Not an HTTP Port" message. But at least it was able to resolve the address.
Docker stack likes to put prefixes on names for objects. So if you name a network xyz the network actually gets named project_xyz. So I changed the environment variables that tell elasticsearch who is part of the cluster to include the project name prefix. No luck.
I've run out of ideas. Any suggestions?
version: '3.9'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - HOSTNAME=es01
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - es9300
    volumes:
      - nfs-es01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - HOSTNAME=es02
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - es9300
    volumes:
      - nfs-es02:/usr/share/elasticsearch/data
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - HOSTNAME=es03
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - es9300
    volumes:
      - nfs-es03:/usr/share/elasticsearch/data
volumes:
  nfs-es01:
    driver_opts:
      type: nfs
      o: addr=10.2.0.1,rw,nfsvers=4,local_lock=all
      device: :/sbn/process3/elasticsearch01
  nfs-es02:
    driver_opts:
      type: nfs
      o: addr=10.2.0.1,rw,nfsvers=4,local_lock=all
      device: :/sbn/process3/elasticsearch02
  nfs-es03:
    driver_opts:
      type: nfs
      o: addr=10.2.0.1,rw,nfsvers=4,local_lock=all
      device: :/sbn/process3/elasticsearch03
networks:
  es9300:
    driver: overlay
    attachable: true
As it turns out, Elasticsearch discovery gets confused when Docker attaches it to multiple overlay networks. The directive:
ports:
  - 9200:9200
causes each es0* service to be attached to the ingress overlay network in addition to the overlay network specified (in this case es9300). For some reason, when Elasticsearch runs in the containers it resolves the service/DNS name es01 to the wrong IP address.
I haven't determined why that is, but removing the ports directive that publishes port 9200 resolves the issue.
Hopefully this posting will help someone encountering the same issue.
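If the HTTP port still has to be reachable from outside the swarm, one possible workaround (a sketch, untested here) is compose's long-form port syntax with host mode, which publishes directly on each node and bypasses the ingress mesh:
ports:
  - target: 9200
    published: 9200
    protocol: tcp
    mode: host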

How to install a custom REST plugin on a dockerized OpenSearch

I am trying to install my custom REST plugin on my dockerized OpenSearch...
I am using Ubuntu 20.
this is my docker-compose.yml file
version: '3'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:1.0.1
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearchproject/opensearch:1.0.1
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:1.0.1
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # must be a string with no spaces when specified as an environment variable
    networks:
      - opensearch-net
volumes:
  opensearch-data1:
  opensearch-data2:
networks:
  opensearch-net:
Running docker-compose up starts the services and they run fine,
but I have no idea how to continue from here...
This is the plugin layout (which I cloned from here)
The Docker image is ready to run as-is, without your plugin of course :/ Therefore, you need to create a Docker image with your plugin installed.
Create a new directory outside the plugin project and put your packaged plugin in it. The package is called something like my-plugin.zip and is located in your plugin project under build/distributions/my-plugin.zip. If it is not there, assemble the plugin like so:
./gradlew assemble -Dopensearch.version=1.0.0 -Dbuild.snapshot=false
Add the following Dockerfile to the new directory:
FROM opensearchproject/opensearch:1.0.0
ADD ./my-plugin.zip /usr/
RUN /usr/share/opensearch/bin/opensearch-plugin install file:///usr/my-plugin.zip
The ADD copies the local package into the container so it can be used by the next command.
The RUN command installs the plugin into OpenSearch.
Build the Docker image; adding a tag will make life easier later:
docker build --tag=opensearch-with-my-plugin .
Now you have an opensearch image built with your plugin installed on it!
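As a quick sanity check before wiring the image into compose, you can list the installed plugins in a throwaway container (a sketch; the tag comes from the build step above):
docker run --rm --entrypoint /usr/share/opensearch/bin/opensearch-plugin opensearch-with-my-plugin list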
Fix the YAML file you posted originally so that it uses the correct image. This means replacing opensearchproject/opensearch:1.0.1 with the tag you gave the image you built, opensearch-with-my-plugin, and putting the compose file in the directory with the Dockerfile (not, as you have it, inside the plugin project).
I took the liberty of changing the dashboards version to 1.0.0, since I'm not sure the 1.0.1 dashboards will work with a 1.0.0 OpenSearch image.
In any case, this should be a solid start!
version: '3'
services:
  opensearch-node1:
    image: opensearch-with-my-plugin
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearch-with-my-plugin
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:1.0.0
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # must be a string with no spaces when specified as an environment variable
    networks:
      - opensearch-net
volumes:
  opensearch-data1:
  opensearch-data2:
networks:
  opensearch-net:
Verify by running this in the terminal and checking that your plugin is installed :)
curl -XGET https://localhost:9200/_cat/plugins -u 'admin:admin' --insecure
Also, I created a GitHub template for plugins with more info. The repo you cloned is the one I created for my blog post on writing plugins, so it is a REST plugin, but not the most generic one.
If you are still having issues getting this running, please let me know in the comments; I'm glad to help.
EDIT: file locations edited a bit.

My docker can't come up anymore with elasticsearch

My docker container used to start successfully. As of this weekend it fails pulling elasticsearch. Please help (Windows 10, 64-bit).
> docker-compose up
Pulling elastic (elasticsearch:7.3.1)...
7.3.1: Pulling from library/elasticsearch
ERROR: no matching manifest for windows/amd64 10.0.17763 in the manifest list entries
> docker manifest inspect -v library/elasticsearch:latest
no such manifest: docker.io/library/elasticsearch:latest
Part of the docker-compose.yml
elastic:
  image: elasticsearch:7.3.1
  restart: always
  environment:
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - xpack.security.enabled=false
    - Elogger.level=TRACE
    - discovery.type=single-node
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - esdata:/data/elasticsearch
  ports:
    - "9201:9200"
  networks:
Can you check this post: "Docker: no matching manifest for windows/amd64 in the manifest list entries"
Right click Docker icon in the Windows System Tray
Go to Settings
Daemon
Advanced
Set "experimental": true
Restart Docker
No idea why you have library/elasticsearch:7.3.1 in your compose file; according to the official elasticsearch images, the path is simply elasticsearch:7.3.1. Feel free to edit the compose file and remove the library/ part from the image name.
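To confirm which platforms a tag actually provides before pulling, a quick check sketch (docker manifest may require enabling experimental CLI features on older Docker versions):
docker manifest inspect elasticsearch:7.3.1 | grep architecture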

How to setup a 3-node Elasticsearch cluster on a single AWS EC2 instance?

I am currently trying to deploy a 3-node Elasticsearch cluster on a single EC2 instance (i.e. using ONE instance only) using a docker-compose file. The problem is I could not get the 3 nodes to communicate with each other to form the cluster.
On my Windows 10 machine I used the official Elasticsearch:6.4.3 image, while for AWS EC2 I am using a custom Elasticsearch:6.4.3 image with the discovery-ec2 plugin installed, which I built using the "docker build -t mdasri/eswithec2disc ." command. Refer to the dockerfile below.
The dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.3
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
I was successful in setting up the 3-node Elasticsearch cluster locally using docker-compose on my Windows 10 machine. In my docker-compose file, I have 3 different Elasticsearch services making up the 3 nodes: es01, es02, es03. I was hoping to use the same docker-compose file to set up the cluster on an AWS EC2 instance, but I was hit with an error.
I am using the "ecs-cli compose -f docker-compose.yml up" command to deploy to AWS EC2. The status of the ecs-cli compose was: "Started container...".
So to check the cluster status, I typed x.x.x.x/_cluster/health?pretty, but was hit with this error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
When I inspect each of the docker container logs in the EC2 instance after I ssh in, this is the error I see in ALL 3 containers:
[2019-06-24T06:19:43,880][WARN ][o.e.d.z.UnicastZenPing ] [es01] failed to resolve host [es02]
This is my docker-compose file for the respective AWS EC2 service:
version: '2'
services:
  es01:
    image: mdasri/eswithec2disc
    container_name: es01
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es01"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "discovery.zen.minimum_master_nodes=2"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es02:
    image: mdasri/eswithec2disc
    container_name: es02
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es02"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es03:
    image: mdasri/eswithec2disc
    container_name: es03
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es03"
      - "node.master=false"
      - "node.data=true"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01,es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
networks:
  esnet:
Please help me as I've been stuck on this problem for the past 1-2 weeks.
P.S.: Please let me know what other information you need. Thanks!
You need to configure links in your docker-compose so the service names are resolvable:
From the docker-compose docs:
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
web:
  links:
    - db
    - db:database
    - redis
And see also the comment from @Mishi.Srivastava
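Applied to the compose file from the question, that suggestion would look roughly like the sketch below (service names taken from the question; links makes the listed services resolvable by name from es01):
es01:
  links:
    - es02
    - es03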

Failed to limit memory with docker compose

My server has 2GB of memory.
I launched 2 containers on the server with docker-compose.
Although I set the memory limit, it does not seem to work.
docker-compose
hub:
  mem_limit: 256m
  image: selenium/hub
  ports:
    - "4444:4444"
test:
  mem_limit: 256m
  build: ./
  links:
    - hub
  ports:
    - "5900"
I'm not sure about this, but try setting mem_limit to 256000000, without the 'm'.
This is not documented anywhere for docker-compose, but you can pass any option valid for the setrlimit system call under ulimits.
So you can specify in docker-compose.yaml:
ulimits:
  as:
    hard: 130000000
    soft: 100000000
The memory size is in bytes. After going over this limit your process will get memory allocation exceptions, which you may or may not trap.
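To check that the limit actually landed inside a running container, you can read it back with the shell builtin (a sketch, assuming the test service from the question above; note that ulimit -v reports the address-space limit in KiB, not bytes):
docker-compose exec test sh -c 'ulimit -v'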
