Elasticsearch - cannot run two ES Docker containers at the same time
I'm trying to run two Elasticsearch services using docker-compose.yaml.
Every time I run docker-compose up -d, only one service is running at a time. When I start the stopped service, it runs, but the one that was working before stops immediately.
This is what my docker-compose.yaml looks like:
version: '3.1'
services:
  es-write:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-write
    environment:
      - discovery.type=single-node
      - TAKE_FILE_OWNERSHIP=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
  es-read:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-read
    environment:
      - discovery.type=single-node
      - TAKE_FILE_OWNERSHIP=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9201:9200
  sqs:
    image: "roribio16/alpine-sqs:latest"
    container_name: "sqs"
    ports:
      - "9324:9324"
      - "9325:9325"
    volumes:
      - "./.docker-configuration:/opt/custom"
    stdin_open: true
    tty: true
TL;DR
I believe you are getting the well-known <container name> exited with code 137 error, which is Docker's way of telling you the container ran out of memory (OOM).
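You can confirm the diagnosis from the host; Docker records whether a container was OOM-killed in its state (standard Docker CLI, using the es-write container name from the compose file above):
docker ps -a                  # the exited container should show exit code 137
docker inspect es-write --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}'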
To solve it
Define the maximum amount of RAM each container is allowed to use. I allowed 4 GB, but choose what suits you.
version: '3.1'
services:
  es-write:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-write
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    deploy:
      resources:
        limits:
          memory: 4GB # use at most 4 GB of RAM
  es-read:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-read
    environment:
      - discovery.type=single-node
    ports:
      - 9201:9200
    deploy:
      resources:
        limits:
          memory: 4GB # use at most 4 GB of RAM
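One caveat: deploy.resources limits are applied by Docker Swarm and by newer Compose releases, but the classic docker-compose v1 CLI ignores them outside Swarm mode unless you pass the --compatibility flag, which maps them onto their v2-format equivalents:
docker-compose --compatibility up -d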
Related
Hi, I want to connect to Elasticsearch inside my app, which is defined as the "cog-app" service in docker-compose.yml along with Open Distro Elasticsearch and Kibana.
I am not able to connect to Elasticsearch when I run the container; can you please tell me how I can connect the Elasticsearch service to the app service?
I have defined Elasticsearch access in the cog-app service, and I'm getting a connection failure with Elasticsearch.
version: "3"
services:
cog-app:
image: app:2.0
build:
context: .
dockerfile: ./Dockerfile
stdin_open: true
tty: true
ports:
- "7111:7111"
environment:
- LANG=C.UTF-8
- LC_ALL=C.UTF-8
- CONTAINER_NAME=app
volumes:
- /home/developer/app:/app
odfe-node1:
image: amazon/opendistro-for-elasticsearch:1.13.2
container_name: odfe-node1
environment:
- cluster.name=odfe-cluster
- node.name=odfe-node1
- discovery.seed_hosts=odfe-node1,odfe-node2
- cluster.initial_master_nodes=odfe-node1,odfe-node2
- bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
- "ES_JAVA_OPTS=-Xms2g -Xmx2g" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
hard: 65536
volumes:
- odfe-data1:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9600:9600 # required for Performance Analyzer
odfe-node2:
image: amazon/opendistro-for-elasticsearch:1.13.2
container_name: odfe-node2
environment:
- cluster.name=odfe-cluster
- node.name=odfe-node2
- discovery.seed_hosts=odfe-node1,odfe-node2
- cluster.initial_master_nodes=odfe-node1,odfe-node2
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms2g -Xmx2g"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- odfe-data2:/usr/share/elasticsearch/data
networks:
- odfe-net
kibana:
image: amazon/opendistro-for-elasticsearch-kibana:1.13.2
container_name: odfe-kibana
ports:
- 5601:5601
expose:
- "5601"
environment:
ELASTICSEARCH_URL: https://odfe-node1:9200
ELASTICSEARCH_HOSTS: https://odfe-node1:9200
networks:
- odfe-net
volumes:
odfe-data1:
odfe-data2:
networks:
odfe-net:
Please tell me how the two services can communicate with each other.
As the Elasticsearch service is running in another container, localhost is not valid. You should use odfe-node1:9200 as the URL.
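Note that in the compose file above, odfe-node1 doesn't declare networks: - odfe-net, so it sits on the compose default network together with cog-app (which is why the odfe-node1 hostname resolves from the app), while odfe-node2 and kibana sit only on odfe-net; you probably want all the services on one shared network. Also, Open Distro ships with security enabled by default (self-signed TLS, demo admin/admin credentials), so a quick connectivity check from inside the app container would look something like this (a sketch, assuming curl is installed in the app image and the demo credentials are unchanged):
docker exec -it cog-app curl -k -u admin:admin https://odfe-node1:9200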
I am running an Elasticsearch instance on my Ubuntu server using a Docker container.
During a large insert-or-update request I get the following exception:
OriginalException: Elasticsearch.Net.ElasticsearchClientException: Request failed to execute. Call: Status code 413 from: POST /_bulk?pretty=true&error_trace=true
It sounds like I need to increase http.max_content_length from the default of 100mb.
I bring up my Docker instance using the following docker-compose file:
version: '3.4'
services:
  # Nginx proxy
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    networks:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d:/etc/nginx/vhost.d:ro
      - /etc/certificates:/etc/nginx/certs
  # Elasticsearch instance
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: es01
    environment:
      - VIRTUAL_HOST=my.elastic-domain.com
      - VIRTUAL_PORT=9200
      - ELASTIC_PASSWORD=mypassword
      - xpack.security.enabled=true
      - discovery.type=single-node
      - http.max_content_length=3000mb
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    expose:
      - 9200
    networks:
      - nginx-proxy
    depends_on:
      - nginx-proxy
volumes:
  data01:
    driver: local
networks:
  nginx-proxy:
  default:
    external:
      name: nginx-proxy
As you can see, I tried to increase the value by setting the environment variable http.max_content_length=3000mb.
Also, in the Nginx proxy, I set client_max_body_size 0; to ensure the proxy allows requests of unlimited size.
How do I set the http.max_content_length value for the Elasticsearch container when using Docker?
You can modify the elasticsearch.yml file and add
http.max_content_length: 3000mb
(note that the file uses YAML colon syntax, not =). In the Elastic Docker container this file lives at /usr/share/elasticsearch/config/elasticsearch.yml.
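Rather than editing the file inside a running container (the change is lost when the container is recreated), you can keep a copy on the host and bind-mount it over the default. A sketch against the compose file above, assuming a hypothetical ./elasticsearch.yml next to docker-compose.yml:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    volumes:
      - data01:/usr/share/elasticsearch/data
      # hypothetical host file containing "http.max_content_length: 3000mb"
      # plus the image defaults you still need (e.g. network.host: 0.0.0.0)
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro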
I'm trying to use Elasticsearch with Docker, following the guide here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
My docker-compose.yml is below:
version: '2.2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch1
    environment:
      - node.name=master-node
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data01:/usr/share/elasticsearch/data
    ports:
      - 127.0.0.1:9200:9200
      - 127.0.0.1:9300:9300
    networks:
      - elastic
    stdin_open: true
    tty: true
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch2
    environment:
      - node.name=data-node1
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
    ports:
      - 127.0.0.1:9301:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data02:/usr/share/elasticsearch/data
    networks:
      - elastic
    stdin_open: true
    tty: true
volumes:
  es-data01:
    driver: local
  es-data02:
    driver: local
networks:
  elastic:
    # driver: bridge
The problem is:
I cannot connect with curl -XGET localhost:9200;
the Docker container exits automatically after a few seconds.
Can you help me?
P.S.: when I use docker run it works. What is the difference between the two?
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch -it --rm -v els:/usr/share/elasticsearch/data -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.7.0
Please check the container logs using docker logs <your stopped container-id>; you can get the container id with the docker ps -a command.
Also, please follow this SO answer and set the memory requirements, which should help you run Elasticsearch in Docker. If that doesn't help, provide the logs, which you can obtain as explained above.
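For example (standard Docker CLI; the container id is whatever docker ps -a reports for the exited container):
docker ps -a                            # list all containers, including exited ones
docker logs --tail 50 <container-id>   # show the last log lines before the crash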
Based on the comments, here is the updated docker-compose:
version: '2.2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch1
    environment:
      - node.name=master-node
      - node.master=true
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.initial_master_nodes=master-node"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data01:/usr/share/elasticsearch/data
    ports:
      - 127.0.0.1:9200:9200
      - 127.0.0.1:9300:9300
    networks:
      - elastic
    stdin_open: true
    tty: true
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch2
    environment:
      - node.name=data-node1
      - node.master=false
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.initial_master_nodes=master-node"
    ports:
      - 127.0.0.1:9301:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data02:/usr/share/elasticsearch/data
    networks:
      - elastic
    stdin_open: true
    tty: true
volumes:
  es-data01:
    driver: local
  es-data02:
    driver: local
networks:
  elastic:
    # driver: bridge
As you are following this article, https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, it is worth checking its section on ulimits and memory resources, since the containers in docker-compose are exiting due to insufficient resources.
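Once both containers stay up, you can verify that the two nodes actually formed a cluster (the compose file publishes 9200 on 127.0.0.1):
curl -X GET "localhost:9200/_cat/nodes?v"             # should list master-node and data-node1
curl -X GET "localhost:9200/_cluster/health?pretty"   # expect "number_of_nodes" : 2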
The exited with code 137 error is caused by a resource limitation (usually RAM) on the host machine. You can resolve the problem by adding this line to the environment variables in your docker-compose file:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
You can read more about heap size settings in the official Elasticsearch documentation at this link.
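For context, the line belongs under the service's environment key; a minimal sketch of where it goes:
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    environment:
      - cluster.name=es-cluster
      # cap the minimum and maximum JVM heap so the node fits
      # inside the memory available to the container
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"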
I have a small app with a Python backend where I'm streaming and classifying tweets in real time.
I use Elasticsearch to collect the classified tweets and Kibana to make visualizations based on the ES data.
In my frontend, I just use Kibana visualizations.
For the moment, I'm trying to run my application in a multi-node swarm as a stack of services, but I'm having problems with my compose file. I started from the info at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, but it didn't help, and I didn't succeed in deploying my docker-compose file even with just the elasticsearch service.
This is my yml file:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - 'ES_JAVA_OPTS=-Xms512m -Xmx512m'
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - '9200:9200'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    ports:
      - '5601:5601'
Below is a docker-compose file that works for a single node in a development environment; it disables security and sets the discovery.type=single-node param to make sure the Elasticsearch production bootstrap checks are not triggered.
version: '2.2'
services:
  # Elasticsearch Docker images: https://www.docker.elastic.co/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
volumes:
  elasticsearch-data:
    driver: local
networks:
  elastic:
    external: true
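With security disabled, you can confirm the node came up once the container has had a few seconds to boot:
curl -X GET "localhost:9200/_cluster/health?pretty"   # expect "status" : "green" or "yellow"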
I am trying to run Elasticsearch in a Docker Swarm.
This is my docker-compose file:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    networks:
      - swarm_network
    ports:
      - "9200:9200"
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          memory: 1000M
I got this error:
ERROR: [1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
Does anyone know why this error occurs and how to solve it?
Setting LimitMEMLOCK=infinity in the docker.service unit works fine for me (Docker 18.06.1-ce running on Ubuntu Server 18.04).
(Credit: zzswang on https://github.com/FusionAuth/fusionauth-containers/issues/1)
echo -e "[Service]\nLimitMEMLOCK=infinity" | SYSTEMD_EDITOR=tee systemctl edit docker.service
systemctl daemon-reload
systemctl restart docker
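You can verify the override took effect (standard systemd commands) before restarting your Elasticsearch containers:
systemctl show docker | grep LimitMEMLOCK   # should print LimitMEMLOCK=infinity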
You have set the ulimits correctly, but I think memory locking is a privileged operation, so you will need to run the container as privileged.
https://github.com/deviantony/docker-elk/issues/243
Try this (I am not sure; please check the syntax):
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    networks:
      - swarm_network
    privileged: true
    ports:
      - "9200:9200"
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          memory: 1000M
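Remember that deploy keys only take effect in Swarm mode, so deploy the stack rather than running docker-compose up (the stack name es here is arbitrary):
docker stack deploy -c docker-compose.yml es
docker service logs es_elasticsearch --follow   # watch the bootstrap checks as the node starts
Be aware that docker stack deploy has historically warned about and ignored unsupported options such as privileged:, in which case the LimitMEMLOCK systemd override from the previous answer is the more reliable fix.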