I want to create two Elasticsearch clusters in a single docker-compose file, so that I can test a few changes on the new ES cluster only.
My docker-compose file looks like this:
version: "2.2"
services:
elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- "9200:9200"
mem_limit: '2048M'
new-elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata2:/usr/share/elasticsearch/data
ports:
- "9400:9200"
mem_limit: '2048M'
search:
image: search:latest
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9400 -jar app.jar
ports:
- "8083:8083"
depends_on:
- elasticsearch-master
- new-elasticsearch-master
mem_limit: '500M'
volumes:
esdata1:
esdata2:
I have one Java service where I add both hosts via different system properties:
-Delasticsearch.host=elasticsearch-master
-DnewElasticsearch.host=new-elasticsearch-master
But when I run the following code from the Java search service:
new RestTemplate().getForEntity("http://elasticsearch-master:9200/_cat/indices?v",String.class)
This gives me the correct response.
But when I try to connect to the other host on port 9400:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v",String.class)
I get a Connection Refused error.
When I try the same host with port 9200, it gives me a 200 response:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v",String.class)
Can someone please tell me how I can make two different connections on different ports, as below?
http://elasticsearch-master:9200
http://new-elasticsearch-master:9400
Thanks
You got the expected behavior. The ports field in docker-compose maps the ports onto your localhost, which means that the "old" Elasticsearch is available via localhost:9200 and the "new" Elasticsearch via localhost:9400.
On the other hand, docker-compose services communicate over an internal network, where the service name is the hostname and the port is the original listening port.
Thus, you were able to access (internally) your old cluster via http://elasticsearch-master:9200 and the new one via http://new-elasticsearch-master:9200.
If you wish to reach the new Elasticsearch on 9400, you need to change its http.port setting. You can do that like this:
new-elasticsearch-master:
  image: elasticsearch:6.6.0
  volumes:
    - esdata2:/usr/share/elasticsearch/data
  environment:
    - http.port=9400
  ports:
    - "9400:9400"
  mem_limit: '2048M'
Note that you have to change the port mapping as well (it now maps the new container port 9400 to localhost port 9400).
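With that change in place, both connections from the Java service should work. Here is a minimal sketch, assuming the same RestTemplate usage as above and reading the system properties set in the compose file's entrypoint (the defaults and the class name are illustrative only):

import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class EsSmokeTest {
    public static void main(String[] args) {
        // Hosts and ports come from the -D properties in the compose entrypoint.
        String oldHost = System.getProperty("elasticsearch.host", "elasticsearch-master");
        String newHost = System.getProperty("newElasticsearch.host", "new-elasticsearch-master");
        String newPort = System.getProperty("newElasticsearch.port", "9400");

        RestTemplate rest = new RestTemplate();
        // The old cluster still listens on the default HTTP port 9200.
        ResponseEntity<String> oldIndices =
                rest.getForEntity("http://" + oldHost + ":9200/_cat/indices?v", String.class);
        // The new cluster now listens on 9400 inside the network too, thanks to http.port=9400.
        ResponseEntity<String> newIndices =
                rest.getForEntity("http://" + newHost + ":" + newPort + "/_cat/indices?v", String.class);
        System.out.println(oldIndices.getStatusCode() + " / " + newIndices.getStatusCode());
    }
}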
Related
I have three containers: my bot, a server, and a db. After docker-compose up, the server and db are working, but when the Telegram bot makes a GET request it gets this error:
Get "http://localhost:8080/user/": dial tcp 127.0.0.1:8080: connect: connection refused
docker-compose.yml
version: "3"
services:
db:
image: postgres
container_name: todo_postgres
restart: always
ports:
- "5432:5432"
environment:
# TODO: Change it to environment variables
POSTGRES_USER: user
POSTGRES_DB: somedb
POSTGRES_PASSWORD: pass
server:
depends_on:
- db
build: .
restart: always
ports:
- 8080:8080
environment:
DB_NAME: somedb
DB_USERNAME: user
DB_PASSWORD: pass
bot:
depends_on:
- server
build:
./src/telegram_bot
environment:
BOT_TOKEN: TOKEN
restart: always
links:
- server
When using Compose, try using the container's hostname. In this case, your bot should try to connect to
server:8080
Compose will handle the name resolution to the IP you need.
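One way to avoid hard-coding that address in the bot is to pass it in via the compose file; a sketch, where SERVER_URL is a hypothetical variable name your bot would read:

bot:
  build: ./src/telegram_bot
  environment:
    BOT_TOKEN: TOKEN
    # Hypothetical variable: point the bot at the compose service name, not localhost.
    SERVER_URL: http://server:8080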
What you are trying to do is access localhost from within your bot container (service).
Maybe this answer will help you solve the problem; it sounds similar to yours.
But I want to offer another solution: in case you don't need to access the containers from outside (from your host), one approach is to make use of the expose functionality together with a Docker network.
See docs.docker.com: network.
The expose functionality allows you to access your other containers within your network.
See docs.docker.com: expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Example
What is this example doing?
A couple of steps are not mandatory: setting a static IP for each container. These steps can be omitted. However, I like to do this, since it gives you better control over the network, and you can still access the containers by their hostname (the container name or service name) as well.
The steps that are needed are the following.
This exposes port 8080 but does not publish it:
expose:
  - 8080
The network, which allows static IP configuration:
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
A complete file could look similar to this:
version: "3.8"
services:
first-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.2
expose:
- 8080
second-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.3
depends_on:
- first-service
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
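With this in place, second-service can reach first-service by its service name or by the static IP on the exposed port; for example, from a shell inside second-service (assuming curl is available in the image):

curl http://first-service:8080
curl http://10.5.0.2:8080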
Your bot container is up before your server & db containers are ready. When you use depends_on, Compose does not actually wait for the other containers to finish setting themselves up; it only waits for them to start. You need some way of waiting for the other container to finish its setup. I remember that when I used an Nginx proxy, I used a script called wait-for-it.sh; a sketch follows.
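For example, with the well-known wait-for-it.sh script baked into the bot image (the script location and the ./bot binary name are assumptions about your image):

bot:
  depends_on:
    - server
  build: ./src/telegram_bot
  environment:
    BOT_TOKEN: TOKEN
  restart: always
  # Wait until the server actually accepts connections before starting the bot.
  entrypoint: ["./wait-for-it.sh", "server:8080", "--timeout=30", "--", "./bot"]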
I have a setup where I build two containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the serverapplication container, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
put them in the same network,
give the containers a name,
and access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the elasticsearch service, i.e., http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers, so the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached by their service names. So from your serverapplication, you must use the name elasticsearch to connect to it.
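For example, the in-container call would look something like this (a minimal sketch in the same RestTemplate style as the earlier question; the class name is illustrative):

import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class EsFromContainer {
    public static void main(String[] args) {
        // Use the compose service name, not localhost: Docker's embedded DNS
        // resolves "elasticsearch" to the container's current IP.
        ResponseEntity<String> response = new RestTemplate()
                .getForEntity("http://elasticsearch:9200/_cat/indices?v", String.class);
        System.out.println(response.getBody());
    }
}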
Using Spring cloud Stream 2.1.4 with Spring Boot 2.1.10, I'm trying to target a local instance of Kafka.
This is an extract of my project configuration so far:
spring.kafka.bootstrap-servers=PLAINTEXT://localhost:9092
spring.kafka.streams.bootstrap-servers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
spring.cloud.stream.kafka.streams.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.streams.binder.zkNodes=localhost:2181
But the binder keeps calling the wrong target:
java.io.IOException: Can't resolve address: kafka.example.com:9092
How can I specify the target if those properties won't do the trick?
Moreover, I deploy the Kafka instance through a Bitnami Docker image, and I'd prefer not to use SSL configuration (hence the PLAINTEXT protocol), but I can't find properties for basic credential login. Does anyone know if this is hopeless?
This is my docker-compose.yml
version: '3'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    container_name: zookeeper
    environment:
      - ZOO_ENABLE_AUTH=yes
      - ZOO_SERVER_USERS=kafka
      - ZOO_SERVER_PASSWORDS=kafka_password
    networks:
      - kafka-net
  kafka:
    image: bitnami/kafka:latest
    container_name: kafka
    hostname: kafka.example.com
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ZOOKEEPER_USER=kafka
      - KAFKA_ZOOKEEPER_PASSWORD=kafka_password
    networks:
      - kafka-net
networks:
  kafka-net:
    driver: bridge
Thanks in advance
The hostname isn't the issue; rather, it's the advertised listeners (protocol://host:port) mapping that causes the hostname to be advertised by default. You should change that, rather than the hostname.
kafka:
  image: bitnami/kafka:latest
  container_name: kafka
  hostname: kafka.example.com  # <--- Here's what you are getting in the request
  ...
  environment:
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092  # <--- This returns the hostname to the clients
If you plan on running your code outside of a container, you should advertise localhost in addition to, or instead of, the container hostname.
One year later, my comment still has not been merged into the Bitnami README, but I was able to get it working with the following vars (changed to match your deployment):
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:29092,PLAINTEXT_HOST://localhost:9092
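In compose form, that dual-listener setup could look roughly like this (a sketch, not verified against every Bitnami image version; other containers would use kafka.example.com:29092 while clients on the host keep localhost:9092):

kafka:
  image: bitnami/kafka:latest
  hostname: kafka.example.com
  ports:
    - 9092:9092  # only the host-facing listener needs to be published
  environment:
    - ALLOW_PLAINTEXT_LISTENER=yes
    # Internal listener (for other containers) on 29092, host listener on 9092.
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:29092,PLAINTEXT_HOST://localhost:9092

With that, the Spring properties above can simply point at localhost:9092.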
All right: I got this to work by looking twice at the "dockerfile" (thanks to cricket_007):
kafka:
  ...
  hostname: localhost
For the record: I could get rid of all the properties above, since Kafka's default is localhost:9092.
This seems to be a common question, albeit in different contexts, but I'm having a problem connecting to my localhost DB when using Docker.
If I inspect the mysql container using docker inspect, find its IP address, and use this as the DB host in the CMS, it runs fine... The only issue is that the mysql container's IP address changes (upon each docker-compose up, and when I change Wi-Fi networks), so ideally I'd like to use 'localhost' or '127.0.0.1', but for some reason this results in a SQLSTATE[HY000] [2002] Connection refused error.
How can I use 'localhost' or '127.0.0.1' as the DB hostname in CMS applications so I don't have to keep changing it as the container IP address changes?
This is my docker-compose.yml file:
version: "3"
services:
webserver:
build:
context: ./bin/webserver
restart: 'always'
ports:
- "80:80"
- "443:443"
links:
- mysql
volumes:
- ${DOCUMENT_ROOT-./www}:/var/www/html
- ${PHP_INI-./config/php/php.ini}:/usr/local/etc/php/php.ini
- ${VHOSTS_DIR-./config/vhosts}:/etc/apache2/sites-enabled
- ${LOG_DIR-./logs/apache2}:/var/log/apache2
networks:
mynet:
aliases:
- john.dev
mysql:
image: 'mysql:5.7'
restart: 'always'
ports:
- "3306:3306"
volumes:
- ${MYSQL_DATA_DIR-./data/mysql}:/var/lib/mysql
- ${MYSQL_LOG_DIR-./logs/mysql}:/var/log/mysql
environment:
MYSQL_ROOT_PASSWORD: example
networks:
- mynet
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- mysql
environment:
PMA_HOST: mysql
PMA_PORT: 3306
ports:
- '8080:80'
volumes:
- /sessions
networks:
- mynet
networks:
mynet:
Try using mysql instead of localhost.
You are defining a link between the webserver container and the mysql container, so the webserver container is able to resolve the mysql IP.
According to Docker documentation:
Docker Cloud gives your containers two ways to find other services:
Using service and container names directly as hostnames
Using service links, which are based on Docker Compose links
Service and Container Hostnames update automatically when a service scales up or down or redeploys. As a user, you can configure service names, and Docker Cloud uses these names to find the IP of the services and containers for you. You can use hostnames in your code to provide abstraction that allows you to easily swap service containers or components.
Service links create environment variables which allow containers to communicate with each other within a stack, or with other services outside of a stack. You can specify service links explicitly when you create a new service or edit an existing one, or specify them in the stackfile for a service stack.
From Docker compose documentation:
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
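In practice, that means pointing your CMS at the service name; for example (DB_HOST and DB_PORT are hypothetical variable names your CMS would read):

webserver:
  build:
    context: ./bin/webserver
  links:
    - mysql
  environment:
    DB_HOST: mysql  # the compose service name, resolved by Docker's DNS
    DB_PORT: 3306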
I'm trying to split a legacy system composed of HBase and a PHP module into two separate containers, with the following docker-compose file:
version: '2'
services:
  php:
    image: my-legacy-php
    volumes:
      - ~/workspace/php:/workspace/php
    ports:
      - "80:80"
    links:
      - hbase
  hbase:
    image: dajobe/hbase
    hostname: hbase-docker
    ports:
      - "43590-44000:43590-44000"
      - "8085:8085"
      - "2181:2181"
      - "8080:8080"
      - "16010:16010"
      - "9095:9095"
      - "9090:9091"
      - "16020:16020"
      - "16030:16030"
      - "60000:60000"
    volumes:
      - ~/workspace/hbase-docker/data:/data
I'm using a public hbase-docker image that uses port 9090 for Thrift, while my legacy PHP module expects to connect via port 9091. I've tried to 'map' or 'forward' "9090:9091" within the docker-compose.yml file, without luck. I also tried the expose attribute of docker-compose, but it doesn't take two ports (only the one that is exposed to the other containers). How do I make that happen?
I want the listening port 9090 of the hbase container to appear as port 9091 from inside the php container.
One possible solution is building your own image, with dajobe/hbase as the base image, modifying the HBase configs and the ports exposed with EXPOSE to match your requirements, and then using that image in your compose file.
But this would require you to build and manage the image yourself; a minimal sketch follows.
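Something along these lines (the config path and the hbase.regionserver.thrift.port override are assumptions about how the dajobe/hbase image reads its configuration; verify against the image before relying on this):

# Hypothetical Dockerfile: rebase dajobe/hbase with the Thrift port moved to 9091.
FROM dajobe/hbase

# A modified hbase-site.xml that sets hbase.regionserver.thrift.port to 9091,
# so the Thrift server listens where the legacy PHP module expects it.
COPY hbase-site.xml /opt/hbase/conf/hbase-site.xml

EXPOSE 9091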
The solution is to put both services on the same docker network.
Specifically, add this to your docker-compose.yml:
networks:
  app_net:
    driver: bridge
Then, in each service's config be sure to include:
networks:
  - app_net
Finally (and you've already done this), be sure that the correct port mapping is included in the config for hbase:
ports:
  - "9090:9091"