So far I have used docker-compose to spin up multiple Elasticsearch containers, but these are all deployed on my local machine. What would be the process for deploying multiple containers to several different machines using Docker?
I know that Docker Swarm exists, but I am not sure how to use it together with docker-compose to spin up multiple Elasticsearch nodes located on the different virtual machines that I have provisioned.
So far I know that in the elasticsearch.yml config file I need to specify the virtual machine addresses with
discovery.zen.ping.unicast.hosts: ["vm_ip1:9300", "vm_ip2:9300"], but I am not sure how to apply this change to the Docker containers that are created. To spin up the containers I used the sample docker-compose.yml file shown in the Elasticsearch docs:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch2
    environment:
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - "index.number_of_shards=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
Please let me know if any more details are needed, thanks!
I ran the following docker-compose script and I am expecting two nodes to be up; however, only one is. There seems to be some obvious error I am missing.
Taken from the documentation:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
http://127.0.0.1:9200/_cat/health
1598033352 18:09:12 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
elasticsearch /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch2 /usr/local/bin/docker-entr ... Up 9200/tcp, 9300/tcp
OK, I found the issue. It seems the two containers were somehow not aware of each other and each was trying to create a cluster on its own.
I introduced a delay with depends_on, and it works fine now. Thanks!
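For reference, the change was roughly this (note that depends_on only orders container startup and does not wait for Elasticsearch itself to be ready, but the head start was enough here):

  elasticsearch2:
    depends_on:
      - elasticsearch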
It is up, but you need to map the port to an unused local port, like you mapped 9200:9200 for the first one. Add the same for the other one too, e.g. 8200:9200, then try hitting the second node locally on port 8200. It should work.
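A minimal sketch of that addition (the 8200 host port is just an example; any free local port works):

  elasticsearch2:
    ports:
      - 8200:9200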
I have a small app with a Python backend where I'm streaming and classifying tweets in real time.
I use Elasticsearch to collect classified tweets and Kibana to make visualizations based on the ES data.
In my frontend, I just use Kibana visualizations.
For the moment, I'm trying to run my application in a multi-node swarm as a service stack, but I'm having problems with my compose file.
I tried to start with Elasticsearch using the info at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, but it didn't help, and I didn't succeed in deploying my docker-compose file even with just the elasticsearch service.
This is my yml file:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - 'ES_JAVA_OPTS=-Xms512m -Xmx512m'
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - '9200:9200'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    ports:
      - '5601:5601'
Below is a docker-compose file which works for a single node in a development environment; it disables security and sets the discovery.type=single-node param to make sure the Elasticsearch production bootstrap checks do not kick in.
version: '2.2'
services:
  # Elasticsearch Docker images: https://www.docker.elastic.co/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
volumes:
  elasticsearch-data:
    driver: local
networks:
  elastic:
    external: true
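Once the container is up, you can sanity-check it from the host through the mapped HTTP port:

curl "http://localhost:9200/_cluster/health?pretty"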
I've installed Elasticsearch and Kibana using Docker images on an Ubuntu machine, and the commands used to run them are:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.5.1
docker run --link 36c0ea06f9e3:elasticsearch -p 5601:5601 docker.elastic.co/kibana/kibana:7.5.1
and they run successfully on my local machine.
Now the major concern for me: when you try to create a cluster with one master node and one slave node, the user needs to edit /etc/elasticsearch/elasticsearch.yml.
But in this case (installed using the Docker image on Ubuntu),
no such file or folder is created. Do let me know how I can create a cluster and store data in such a case.
You already have part of the solution in your first docker run statement. The Elasticsearch Docker image is configured using environment variables; for example, your run statement sets the node to single-node mode using the -e "discovery.type=single-node" flag.
A full list of configuration options can be found here.
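For example, a cluster name and node name can be set the same way (the values here are just placeholders):

docker run -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "cluster.name=my-cluster" \
  -e "node.name=node-1" \
  docker.elastic.co/elasticsearch/elasticsearch:7.5.1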
You may also share the config directory with the host using a host-shared volume:
docker run -p 9200:9200 -p 9300:9300 -v <your-dir>/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml docker.elastic.co/elasticsearch/elasticsearch:7.5.1
This allows you to edit the config at <your-dir>/elasticsearch.yml directly from your host's filesystem. Just make sure the config file exists in the specified directory before starting the container.
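As a starting point, a minimal elasticsearch.yml for this setup might look like the following (placeholder values, sticking with single-node mode):

cluster.name: my-cluster
node.name: node-1
network.host: 0.0.0.0
discovery.type: single-node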
Not sure if this is useful, but here you go:
https://blog.patricktriest.com/text-search-docker-elasticsearch/
From the official guide: you can use environment variables to pass options into elasticsearch.yml.
This is a sample docker-compose.yml:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
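To try it out (assuming the file is saved as docker-compose.yml), bring the cluster up and confirm that all three nodes have joined:

docker-compose up -d
curl "http://localhost:9200/_cat/nodes?v"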
I am currently trying to deploy a 3-node Elasticsearch cluster on a single EC2 instance (i.e. using ONE instance only) using a docker-compose file. The problem is that I cannot get the 3 nodes to communicate with each other to form the cluster.
On my Windows 10 machine, I used the official elasticsearch:6.4.3 image, while for AWS EC2 I am using a custom elasticsearch:6.4.3 image with the discovery-ec2 plugin installed, which I built using the "docker build -t mdasri/eswithec2disc ." command. Refer to the Dockerfile below.
The dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.3
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
I was successful in setting up the 3-node Elasticsearch cluster locally using docker-compose on my Windows 10 machine. In my docker-compose file, I have 3 different Elasticsearch services making up the 3 nodes: es01, es02, es03. I was hoping to use the same docker-compose file to set up the cluster on the AWS EC2 instance, but I was hit with an error.
I am using the "ecs-cli compose -f docker-compose.yml up" command to deploy to AWS EC2. The status of the ecs-cli compose command was "Started container...".
So to check the cluster status, I hit x.x.x.x/_cluster/health?pretty, but was met with this error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
When I inspect each of the Docker container logs on the EC2 instance after SSH-ing in, this is the error I see in ALL 3 containers:
[2019-06-24T06:19:43,880][WARN ][o.e.d.z.UnicastZenPing ] [es01] failed to resolve host [es02]
This is my docker-compose file for the respective AWS EC2 service:
version: '2'
services:
  es01:
    image: mdasri/eswithec2disc
    container_name: es01
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es01"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "discovery.zen.minimum_master_nodes=2"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es02:
    image: mdasri/eswithec2disc
    container_name: es02
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es02"
      - "node.master=true"
      - "node.data=false"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01, es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
  es03:
    image: mdasri/eswithec2disc
    container_name: es03
    cpu_shares: 100
    mem_limit: 2147482548
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - "cluster.name=aws-cluster"
      - "node.name=es03"
      - "node.master=false"
      - "node.data=true"
      - "discovery.zen.hosts_provider=ec2"
      - "discovery.zen.ping.unicast.hosts=es01,es02"
      - "ES_JAVA_OPTS= -Xmx256m -Xms256m"
      - "bootstrap.memory_lock=true"
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - esnet
networks:
  esnet:
Please help me, as I've been stuck on this problem for the past 1-2 weeks.
P.S.: Please let me know what other information you need. Thanks!
You need to configure links in your docker-compose file so that the container hostnames are resolvable.
From the docker-compose docs:
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
web:
  links:
    - db
    - db:database
    - redis
Also see the comment from @Mishi.Srivastava.
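Applied to the services in your compose file, that would be something along these lines (a sketch; links must not be circular, so only the later nodes link back to the earlier ones):

  es02:
    links:
      - es01  # es02 can now resolve the hostname es01
  es03:
    links:
      - es01
      - es02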
I have an Elasticsearch image that is being used as a base image for multiple containers. I am wondering if there is any way to pre-configure an ingest pipeline such that the process of creating the image and building a container also creates the pipeline for me. It'd be great if the base image came with the pipeline I want it to have; otherwise I'd have to docker exec into each container that uses this image and send a curl request in each one to create the pipeline.
Right now I'm thinking that I have to add a curl request to the Elasticsearch server (after it starts) in docker-entrypoint.sh, but I'm not sure if there's any other way.
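Roughly what I have in mind, as a sketch: the base image's Dockerfile would COPY a wrapper script and set it as the ENTRYPOINT, and the wrapper would start Elasticsearch, wait for it, and create the pipeline (my-pipeline and its empty body are placeholders):

#!/bin/bash
# wrapper-entrypoint.sh: start Elasticsearch, then create the pipeline once it is reachable.
set -e

# Run the stock Elasticsearch entrypoint in the background.
/usr/local/bin/docker-entrypoint.sh "$@" &
es_pid=$!

# Wait until the HTTP port answers.
until curl -s http://localhost:9200 >/dev/null; do
  sleep 2
done

# Create the (placeholder) ingest pipeline; PUT overwrites, so re-running is harmless.
curl -s -X PUT http://localhost:9200/_ingest/pipeline/my-pipeline \
  -H 'Content-Type: application/json' \
  -d '{"description": "placeholder", "processors": []}'

# Keep Elasticsearch in the foreground so the container stays up.
wait "$es_pid"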
I can advise you to use docker-compose. I personally find it very convenient: with one file you can configure a whole stack.
Here is an example to help you get started:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - node.name=node-test1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - node-test1data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - node.name=node-test2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - node-test2data:/usr/share/elasticsearch/data
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:6.3.2
    container_name: logstash
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
volumes:
  node-test1data:
    driver: local
  node-test2data:
    driver: local
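Assuming the Logstash config files referenced in the volume mounts exist next to the compose file, the whole stack comes up with one command, after which Kibana is reachable on port 5601 and Elasticsearch on 9200:

docker-compose up -d
curl "http://localhost:9200/_cat/health?v"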