Using Spring Cloud Stream 2.1.4 with Spring Boot 2.1.10, I'm trying to target a local instance of Kafka.
This is an extract of my project configuration so far:
spring.kafka.bootstrap-servers=PLAINTEXT://localhost:9092
spring.kafka.streams.bootstrap-servers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
spring.cloud.stream.kafka.streams.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.streams.binder.zkNodes=localhost:2181
But the binder keeps calling the wrong target:
java.io.IOException: Can't resolve address: kafka.example.com:9092
How can I specify the target if those properties won't do the trick?
Moreover, I deploy the Kafka instance through a Bitnami Docker image and I'd prefer not to use SSL configuration (hence the PLAINTEXT protocol), but I can't find properties for basic credential login. Does anyone know if this is hopeless?
This is my docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    container_name: zookeeper
    environment:
      - ZOO_ENABLE_AUTH=yes
      - ZOO_SERVER_USERS=kafka
      - ZOO_SERVER_PASSWORDS=kafka_password
    networks:
      - kafka-net
  kafka:
    image: bitnami/kafka:latest
    container_name: kafka
    hostname: kafka.example.com
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ZOOKEEPER_USER=kafka
      - KAFKA_ZOOKEEPER_PASSWORD=kafka_password
    networks:
      - kafka-net
networks:
  kafka-net:
    driver: bridge
Thanks in advance
The hostname isn't the issue; rather, it's the advertised listeners protocol://host:port mapping that causes the hostname to be advertised by default. You should change that, rather than the hostname.
kafka:
  image: bitnami/kafka:latest
  container_name: kafka
  hostname: kafka.example.com # <--- Here's what you are getting in the request
  ...
  environment:
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092 # <--- This returns the hostname to the clients
If you plan on running your code outside of another container, you should advertise localhost in addition to, or instead of the container hostname.
One year later, my comment still hasn't been merged into the Bitnami README, but I was able to get it working with the following vars (changed to match your deployment):
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:29092,PLAINTEXT_HOST://localhost:9092
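In compose terms, that maps to something like the following sketch for the Bitnami image (keeping the kafka.example.com hostname from your file; only the host-facing listener needs a port mapping):

kafka:
  image: bitnami/kafka:latest
  hostname: kafka.example.com
  ports:
    - 9092:9092   # publish only the host-facing listener
  environment:
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:29092,PLAINTEXT_HOST://localhost:9092

Other containers then connect via kafka.example.com:29092 (that name must resolve on the compose network, e.g. via the service name or a network alias), while your Spring app on the host uses localhost:9092.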
All right: I got this to work by looking twice at the "dockerfile" (thanks to cricket_007):
kafka:
  ...
  hostname: localhost
For the record: I could then get rid of all the properties above, Kafka's default being localhost:9092.
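For an explicit equivalent, a single binder property would do (a minimal sketch, reusing the property name from the question):

# equivalent to the default; the other bootstrap/zkNodes properties can be dropped
spring.cloud.stream.kafka.binder.brokers=localhost:9092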
Related
I want to create 2 Elasticsearch clusters in a single docker-compose file, so that I can test a few changes only on the new ES cluster.
My docker-compose file looks like this:
version: "2.2"
services:
elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- "9200:9200"
mem_limit: '2048M'
new-elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata2:/usr/share/elasticsearch/data
ports:
- "9400:9200"
mem_limit: '2048M'
search:
image: search:latest
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9400 -jar app.jar
ports:
- "8083:8083"
depends_on:
- elasticsearch-master
- new-elasticsearch-master
mem_limit: '500M'
volumes:
esdata1:
esdata2:
I have one Java service where I add both hosts via different environment variables:
-Delasticsearch.host=elasticsearch-master
-DnewElasticsearch.host=new-elasticsearch-master
But when I run the following code from the Java search service:
new RestTemplate().getForEntity("http://elasticsearch-master:9200/_cat/indices?v",String.class)
This gives me the correct response.
But when I try to connect to the other host on 9400:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v",String.class)
I get a Connection Refused error.
When I try the same host with 9200, it gives me a 200 response:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v",String.class)
Can someone please tell me how I can make two different connections on different ports, as below?
http://elasticsearch-master:9200
http://new-elasticsearch-master:9400
Thanks
You got the expected behavior. The ports field in docker-compose maps the ports to your localhost, which means that the "old" Elasticsearch is available via localhost:9200 and the "new" Elasticsearch via localhost:9400.
On the other hand, docker-compose services communicate over an internal network, where the service name is the hostname and the port is the original listening port.
Thus, you were able to access (internally) your old one via http://elasticsearch-master:9200 and the new one via http://new-elasticsearch-master:9200.
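To make the distinction concrete, here is how the two call sites differ (a sketch in the question's own RestTemplate style; which line applies depends on where the code runs):

// From inside the search container (same compose network): service name + container port
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v", String.class);
// From the host machine: localhost + the mapped host port
new RestTemplate().getForEntity("http://localhost:9400/_cat/indices?v", String.class);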
If you wish to use the new Elasticsearch on 9400, you need to change its http.port setting. You can do that like this:
new-elasticsearch-master:
  image: elasticsearch:6.6.0
  volumes:
    - esdata2:/usr/share/elasticsearch/data
  environment:
    - http.port=9400
  ports:
    - "9400:9400"
  mem_limit: '2048M'
Note that you have to change the port mapping as well (because it will now map your new port, 9400, to the host's 9400).
I have 3 containers: my bot, a server, and a db. After docker-compose up, the server and db are working. The Telegram bot makes a GET request and gets this error:
Get "http://localhost:8080/user/": dial tcp 127.0.0.1:8080: connect: connection refused
docker-compose.yml
version: "3"
services:
db:
image: postgres
container_name: todo_postgres
restart: always
ports:
- "5432:5432"
environment:
# TODO: Change it to environment variables
POSTGRES_USER: user
POSTGRES_DB: somedb
POSTGRES_PASSWORD: pass
server:
depends_on:
- db
build: .
restart: always
ports:
- 8080:8080
environment:
DB_NAME: somedb
DB_USERNAME: user
DB_PASSWORD: pass
bot:
depends_on:
- server
build:
./src/telegram_bot
environment:
BOT_TOKEN: TOKEN
restart: always
links:
- server
When using Compose, try using the container's hostname. In this case your bot should try to connect to
server:8080
Compose will handle the name resolution to the IP you need.
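For example, if the bot reads its target from an environment variable, you could wire it up in the compose file like this (SERVER_URL is a hypothetical variable name; your bot's code would have to read it):

bot:
  depends_on:
    - server
  build:
    ./src/telegram_bot
  environment:
    BOT_TOKEN: TOKEN
    SERVER_URL: http://server:8080   # hypothetical: the service name resolves inside the compose network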
What you are trying to do is access localhost from within your bot container (service).
Maybe this answer will help you solve the problem; it sounds similar to yours.
But I want to offer you another solution to your problem:
In case you don't need to access the containers from outside (from your host), one approach would be to make use of the expose functionality and a Docker network.
See docs.docker.com: network.
The expose functionality allows you to access your other containers within your network.
See docs.docker.com: expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Example
What is this example doing?
A couple of steps that are not mandatory:
Set a static IP within your Docker container.
These steps are not needed and can be omitted. However, I like to do this, since it gives you better control over the network. You can access the containers by their hostname (which is the container name or service name) as well.
The steps that are needed are the following:
This exposes port 8080, but does not publish it:
expose:
  - 8080
The network that allows static IP configuration:
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
A complete file could look similar to this:
version: "3.8"
services:
first-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.2
expose:
- 8080
second-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.3
depends_on:
- first-service
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
Your bot container is up before your server & db containers.
When you use depends_on, it doesn't actually wait for them to finish their own setup.
You need some trick to wait for the other containers to finish setting up.
I remember that when I used an Nginx proxy I used something called wait-for-it.sh.
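As a sketch (assuming the bot image contains wait-for-it.sh and a ./bot binary; both names are placeholders), the bot service could delay its start like this:

bot:
  depends_on:
    - server
  build:
    ./src/telegram_bot
  # wait until server:8080 accepts TCP connections, then start the bot
  entrypoint: ["./wait-for-it.sh", "server:8080", "--", "./bot"]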
I am trying to create an analytics dashboard based on mobile events. I want to dockerize all the components, deploy them on localhost, and build the analytics dashboard on top.
Sunbird telemetry https://github.com/project-sunbird/sunbird-telemetry-service
Kafka https://github.com/wurstmeister/kafka-docker
Druid https://github.com/apache/incubator-druid/tree/master/distribution/docker
Superset https://github.com/apache/incubator-superset
What I did
Druid
I executed the command docker build -t apache/incubator-druid:tag -f distribution/docker/Dockerfile .
I executed the command docker-compose -f distribution/docker/docker-compose.yml up
After everything is executed, open http://localhost:4008/ and see Druid running.
It takes 3.5 hours to complete both the build and the run.
Kafka
Navigated to the kafka folder.
Executed the command docker-compose up -d.
Issue
When we execute Druid, a Zookeeper starts running, and when we start Kafka its compose file starts another Zookeeper, and I cannot establish a connection between Kafka and Zookeeper.
After I start Sunbird telemetry and try to create a topic and connect to Kafka from Sunbird, it doesn't connect.
I don't understand what I am doing wrong.
Can we tell Kafka to share the Zookeeper started by Druid? I am completely new to this environment and these stacks and am still studying them. Am I doing something wrong? Can anybody point out how to properly connect Kafka and Druid over Docker?
Note: I am running all of this on my Mac.
My Kafka compose file:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: **localhost ip**
      KAFKA_ZOOKEEPER_CONNECT: **localhost ip**:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Can we tell Kafka to share the Zookeeper started by Druid?
You would put all services in the same compose file.
Druid's Kafka connection is listed here:
https://github.com/apache/incubator-druid/blob/master/distribution/docker/environment#L31
You can set KAFKA_ZOOKEEPER_CONNECT to the same address, yes.
For example, downloading the file above and adding Kafka to the Druid Compose file...
version: "2.2"
volumes:
metadata_data: {}
middle_var: {}
historical_var: {}
broker_var: {}
coordinator_var: {}
overlord_var: {}
router_var: {}
services:
# TODO: Add sunbird
postgres:
container_name: postgres
image: postgres:latest
volumes:
- metadata_data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=FoolishPassword
- POSTGRES_USER=druid
- POSTGRES_DB=druid
# Need 3.5 or later for container nodes
zookeeper:
container_name: zookeeper
image: zookeeper:3.5
environment:
- ZOO_MY_ID=1
druid-coordinator:
image: apache/incubator-druid
container_name: druid-coordinator
volumes:
- coordinator_var:/opt/druid/var
depends_on:
- zookeeper
- postgres
ports:
- "3001:8081"
command:
- coordinator
env_file:
- environment
# renamed to druid-broker
druid-broker:
image: apache/incubator-druid
container_name: druid-broker
volumes:
- broker_var:/opt/druid/var
depends_on:
- zookeeper
- postgres
- druid-coordinator
ports:
- "3002:8082"
command:
- broker
env_file:
- environment
# TODO: add other Druid services
kafka:
image: wurstmeister/kafka
ports:
- "9092"
depends_on:
- zookeeper
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181/kafka # This is the same service that Druid is using
Can we tell Kafka to share the Zookeeper started by Druid?
Yes, as there's a zookeeper.connect setting for the Kafka broker that specifies the Zookeeper address Kafka will try to connect to. How to do it depends entirely on the Docker image you're using. For example, one of the popular images, wurstmeister/kafka-docker, does this by mapping all environment variables starting with KAFKA_ to broker settings and adding them to server.properties, so that KAFKA_ZOOKEEPER_CONNECT becomes the zookeeper.connect setting. I suggest taking a look at the official documentation to see what else you can configure.
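For instance, pointed at the Druid stack's Zookeeper service, the broker would end up with something like this in server.properties (an illustrative sketch of the mapping):

# derived from KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
zookeeper.connect=zookeeper:2181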
and when we start Kafka the compose file starts another Zookeeper
This is your issue. It's the docker-compose file that starts Zookeeper and Kafka and configures Kafka to use the bundled Zookeeper. You need to modify it by removing the bundled Zookeeper and configuring Kafka to use a different one. Ideally, you should have a single docker-compose file that starts the whole setup.
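A sketch of the trimmed Kafka compose file, assuming it joins the same compose project (and therefore the same network) as the Druid services:

version: '2'
services:
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181   # Druid's Zookeeper; the bundled zookeeper service is removed
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock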
I'm trying to pass a Redis URL to a Docker container, but so far I couldn't get it to work. I did a little research and none of the answers worked for me.
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
    container_name: redis
    hostname: redis
    expose:
      - 6379
    links:
      - api
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    environment:
      - REDIS_URL=redis
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=proxy'
networks:
  proxy:
Error: Redis connection to redis failed - connect ENOENT redis
You can only communicate between containers on the same Docker network. Docker Compose creates a default network for you, and absent any specific declaration your redis container is on that network. But you also declare a separate proxy network, and only attach the api container to that other network.
The single simplest solution to this is to delete all of the networks: blocks everywhere and just use the default network Docker Compose creates for you. You may need to format the REDIS_URL variable as an actual URL, maybe like redis://redis:6379.
If you have a non-technical requirement to have separate networks, add - default to the networks listing for the api container.
You have a number of other settings in your docker-compose.yml that aren't especially useful. expose: does almost nothing at all, and is usually also provided in a Dockerfile. links: is an outdated way to make cross-container calls, and as you've declared it, it would make calls from Redis to your API server. hostname: has no effect outside the container itself and is usually totally unnecessary. container_name: does have some visible effects, but usually the container name Docker Compose picks is just fine.
This would leave you with:
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=default'
I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like:
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers, I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with the problem.
Is there any proper documentation about how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, version 0.4.17, doesn't run an IPFS peer (i.e. ipfs/go-ipfs) in it. We need to run it separately.
So, in order to run a multi-node (2-node in this case) IPFS cluster in a Docker environment, we need to run 2 IPFS peer containers and 2 IPFS cluster containers, one corresponding to each peer.
So your docker-compose file will look as follows:
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide IPFS_API to ipfs-cluster as an environment variable so that each cluster instance knows its corresponding peer, and both ipfs-cluster instances need the same CLUSTER_SECRET.
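For reference, one way to generate a fresh 32-byte hex CLUSTER_SECRET is the shell one-liner from the ipfs-cluster docs:

export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')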
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v <local-path>:/data/ipfs-cluster to docker run.
If in fact you need to connect to another service within the docker-compose file, you can simply refer to it by its service name, since hostname entries are created in all the containers in the docker-compose network, so services can talk to each other by name instead of IP.
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port mapping): https://docs.docker.com/compose/compose-file/#network_mode
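A minimal sketch of what that looks like in a service definition (note that any ports: mapping must be dropped in this mode):

ipfs-cluster0:
  image: ipfs/ipfs-cluster
  network_mode: "host"   # shares the host's network stack; no ports: mapping allowed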