When the Spark job is run locally without Docker via spark-submit, everything works fine. However, running it in a Docker container produces no output.
To check whether Kafka itself was working, I extracted Kafka onto the Spark worker container and made a console consumer listen to the same host, port, and topic (kafka:9092, crypto_topic); it worked correctly and showed output. (A producer in another container constantly pushes data to the topic.)
Expected output:
20/09/11 17:35:27 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.29.10:42565 with 366.3 MB RAM, BlockManagerId(driver, 192.168.29.10, 42565, None)
20/09/11 17:35:27 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.29.10, 42565, None)
20/09/11 17:35:27 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.29.10, 42565, None)
-------------------------------------------
Batch: 0
-------------------------------------------
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
|name_coin|symbol_coin|number_of_markets|volume|market_cap|total_supply|price|percent_change_24hr|timestamp|
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
+---------+-----------+-----------------+------+----------+------------+-----+-------------------+---------+
...
...
...
followed by more output
Actual output:
20/09/11 14:49:44 INFO BlockManagerMasterEndpoint: Registering block manager d7443d94165c:46203 with 366.3 MB RAM, BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, d7443d94165c, 46203, None)
20/09/11 14:49:44 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
No more output; it is stuck here.
The docker-compose.yml file:
version: "3"
services:
  zookeeper:
    image: zookeeper:3.6.1
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    networks:
      - crypto-network

  kafka:
    image: wurstmeister/kafka:2.13-2.6.0
    container_name: kafka
    hostname: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_PORT=9092
      # topic-name:partitions:in-sync-replicas:cleanup-policy
      - KAFKA_CREATE_TOPICS="crypto_topic:1:1:compact"
    networks:
      - crypto-network

  kafka-producer:
    image: python:3-alpine
    container_name: kafka-producer
    command: >
      sh -c "pip install -r /usr/src/producer/requirements.txt
      && python3 /usr/src/producer/kafkaProducerService.py"
    volumes:
      - ./kafkaProducer:/usr/src/producer
    networks:
      - crypto-network

  cassandra:
    image: cassandra:3.11.8
    container_name: cassandra
    hostname: cassandra
    ports:
      - "9042:9042"
    #command:
    #  cqlsh -f /var/lib/cassandra/cql-queries.cql
    volumes:
      - ./cassandraData:/var/lib/cassandra
    networks:
      - crypto-network

  spark-master:
    image: bde2020/spark-master:2.4.5-hadoop2.7
    container_name: spark-master
    hostname: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
      - "6066:6066"
    networks:
      - crypto-network

  spark-consumer-worker:
    image: bde2020/spark-worker:2.4.5-hadoop2.7
    container_name: spark-consumer-worker
    environment:
      - SPARK_MASTER=spark://spark-master:7077
    ports:
      - "8081:8081"
    volumes:
      - ./sparkConsumer:/sparkConsumer
    networks:
      - crypto-network

networks:
  crypto-network:
    driver: bridge
spark-submit is run with:
docker exec -it spark-consumer-worker bash
/spark/bin/spark-submit --master $SPARK_MASTER --class processing.SparkRealTimePriceUpdates \
--packages com.datastax.spark:spark-cassandra-connector_2.11:2.4.3,org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 \
/sparkConsumer/sparkconsumer_2.11-1.0-RELEASE.jar
Relevant parts of the Spark code:
val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "crypto_topic")
  .load()

...
...
...

val queryPrice: StreamingQuery = castedDF
  .writeStream
  .outputMode("update")
  .format("console")
  .option("truncate", "false")
  .start()

queryPrice.awaitTermination()
val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "crypto_topic")
  .load()
This part of the code was actually:

val inputDF: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP_SERVERS)
  .option("subscribe", KAFKA_TOPIC)
  .load()
where KAFKA_BOOTSTRAP_SERVERS and KAFKA_TOPIC were read in from a config file while packaging the JAR locally.
The best way to debug for me was to set the logs to be more verbose.
Locally, the value of KAFKA_BOOTSTRAP_SERVERS was localhost:9092; in the Docker container, the config file had it changed to kafka:9092. That change, however, never took effect, because the JAR had already been packaged with the old value. Changing the value to kafka:9092 before packaging locally fixed it.
I would still appreciate advice on how to have a JAR pick up configuration dynamically, though; I don't want to package via SBT on the Docker container.
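One possibility, sketched here rather than taken from the original setup: resolve the broker address from an environment variable at submit time and pass it into the driver JVM as a system property, so the packaged JAR never hard-codes it. `KAFKA_BOOTSTRAP_SERVERS` is an assumed variable name; the Scala side would then read it with something like `sys.props.getOrElse("kafka.bootstrap.servers", "kafka:9092")`.

```shell
# Hedged sketch: inject the broker address at submit time instead of baking it
# into the JAR. KAFKA_BOOTSTRAP_SERVERS is an assumed env var; it falls back to
# the in-network address used by the compose file above.
KAFKA_BOOTSTRAP_SERVERS="${KAFKA_BOOTSTRAP_SERVERS:-kafka:9092}"

# Echoed rather than executed here; drop the echo to actually submit.
echo /spark/bin/spark-submit \
  --master "$SPARK_MASTER" \
  --conf "spark.driver.extraJavaOptions=-Dkafka.bootstrap.servers=$KAFKA_BOOTSTRAP_SERVERS" \
  --class processing.SparkRealTimePriceUpdates \
  /sparkConsumer/sparkconsumer_2.11-1.0-RELEASE.jar
```

The same JAR can then run with localhost:9092 locally and kafka:9092 in the container, selected purely by the environment.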
Related
I'm trying to run Nimbus as a container in Docker and to use my machine as the supervisor. I have a Docker Compose file that sets up the Nimbus and Zookeeper containers:
nimbus:
  image: storm:2.4.0
  platform: linux/x86_64
  container_name: nimbus
  command: storm nimbus
  depends_on:
    - zookeeper
  links:
    - zookeeper
  restart: always
  ports:
    - '6627:6627'
  networks:
    - storm

zookeeper:
  image: bitnami/zookeeper:3.6.2
  container_name: zookeeper
  ports:
    - '2181'
  volumes:
    - ~/data/zookeeper:/bitnami/zookeeper
  environment:
    - ALLOW_ANONYMOUS_LOGIN=yes
  networks:
    - storm

ui:
  platform: linux/x86_64
  image: storm:2.4.0
  container_name: ui
  command: storm ui
  links:
    - nimbus
    - zookeeper
  restart: always
  ports:
    - "9090:8080"
  networks:
    - storm
I've exposed both with ports. When I try to run storm supervisor from a terminal, with localhost as the nimbus seed and Zookeeper server in storm.yaml, I get the following log and the supervisor fails to start:
2022-11-03 19:15:55.866 o.a.s.s.o.a.z.ClientCnxn main-SendThread(localhost:2181) [INFO] Socket error occurred: localhost/127.0.0.1:2181: Connection refused
2022-11-03 19:15:55.971 o.a.s.d.s.Supervisor main [ERROR] supervisor can't create stormClusterState
2022-11-03 19:15:55.975 o.a.s.u.Utils main [ERROR] Received error in thread main.. terminating server...
java.lang.Error: java.lang.RuntimeException: org.apache.storm.shade.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /storm
at org.apache.storm.utils.Utils.handleUncaughtException(Utils.java:663) ~[storm-client-2.3.0.jar:2.3.0]
at org.apache.storm.utils.Utils.handleUncaughtException(Utils.java:667) ~[storm-client-2.3.0.jar:2.3.0]
at org.apache.storm.utils.Utils.lambda$createDefaultUncaughtExceptionHandler$2(Utils.java:1047) ~[storm-client-2.3.0.jar:2.3.0]
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1055) [?:?]
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1050) [?:?]
at java.lang.Thread.dispatchUncaughtException(Thread.java:1997) [?:?]
I'm able to access the UI at localhost:9090. Can someone please tell me how to resolve this?
This is what my docker-compose YAML looks like:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:6.1.0
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100

  schema-registry:
    image: confluentinc/cp-schema-registry:6.1.0
    container_name: schema-registry
    ports:
      - 8081:8081
    depends_on:
      - zookeeper
      - broker
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:29092

  kafka-connect:
    image: confluentinc/cp-kafka-connect-base:6.1.0
    container_name: kafka-connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "broker:29092"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: '/usr/share/java,/usr/share/confluent-hub-components/'
    command:
      - bash
      - -c
      - |
        echo "Installing connector plugins"
        confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.0.2
        confluent-hub install --no-prompt jcustenborder/kafka-connect-spooldir:2.0.60
        confluent-hub install --no-prompt streamthoughts/kafka-connect-file-pulse:1.5.0
        #
        # -----------
        # Launch the Kafka Connect worker
        /etc/confluent/docker/run &
        #
        # Don't exit
        sleep infinity
    volumes:
      - ${PWD}/data:/data
When I run docker-compose up -d everything looks fine at first, but right after the containers come up, the schema-registry container exits. When I start it again, it works fine. The real problem is that when I execute the following command, I cannot see any of my plugins installed:
curl -s localhost:8083/connector-plugins|jq '.[].class'|egrep 'SpoolDir'
I am following this tutorial, more or less; I just skip the Postgres container, as well as kafkacat and ksql.
EDIT: This is what the logs say for the kafka-connect container:
Installing connector plugins
Running in a "--no-prompt" mode
Implicit acceptance of the license below:
https://github.com/jcustenborder/kafka-connect-spooldir/LICENSE
Implicit confirmation of the question: You are about to install 'kafka-connect-spooldir' from Jeremy Custenborder, as published on Confluent Hub.
Downloading component Kafka Connect Spooldir 2.0.60, provided by Jeremy Custenborder from Confluent Hub and installing into /usr/share/confluent-hub-components
Adding installation directory to plugin path in the following files:
/etc/kafka/connect-distributed.properties
/etc/kafka/connect-standalone.properties
/etc/schema-registry/connect-avro-distributed.properties
/etc/schema-registry/connect-avro-standalone.properties
Completed
Running in a "--no-prompt" mode
Implicit acceptance of the license below:
The Apache Software License, Version 2.0
http://www.apache.org/licenses/LICENSE-2.0.txt
Implicit confirmation of the question: You are about to install 'kafka-connect-file-pulse' from StreamThoughts, as published on Confluent Hub.
Downloading component Kafka Connect File Pulse 1.5.0, provided by StreamThoughts from Confluent Hub and installing into /usr/share/confluent-hub-components
Adding installation directory to plugin path in the following files:
/etc/kafka/connect-distributed.properties
Adding installation directory to plugin path in the following files:
/etc/kafka/connect-distributed.properties
/etc/kafka/connect-standalone.properties
/etc/schema-registry/connect-avro-distributed.properties
/etc/schema-registry/connect-avro-standalone.properties
Completed
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
The schema-registry container may die because depends_on doesn't wait for the broker to be ready; it might just be a race condition. You should look at its logs rather than blindly restarting it, though.
Your command is only looking for a connector class whose name includes SpoolDir.
If you want to see all installed connectors, don't pipe the curl command into anything; also check the Docker logs to see whether any of the plugins failed to install.
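To make the difference concrete, here is a self-contained illustration; the JSON below is a stand-in for what `curl -s localhost:8083/connector-plugins` would return, since the real call needs a running worker:

```shell
# Stand-in for the worker's response; a live check would be:
#   curl -s localhost:8083/connector-plugins
response='[{"class":"io.confluent.connect.jdbc.JdbcSinkConnector"},{"class":"com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector"}]'

# Full listing: print everything and eyeball it
echo "$response"

# Narrow check: only class names containing SpoolDir survive the filter,
# so a failed SpoolDir install makes this print nothing at all
echo "$response" | grep -o 'SpoolDir[A-Za-z]*'
```

If the filtered form prints nothing, the unfiltered listing tells you whether the worker has no plugins at all or just none matching the pattern.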
This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 1 year ago.
I've created a Quarkus service that reads from a bunch of KStreams, joins them, and then posts the join result back into a Kafka topic.
During development, I was running Kafka and Zookeeper from a docker-compose file and running my Quarkus service in dev mode with:
mvn quarkus:dev
At this point, everything was working fine; I was able to connect to the broker without problems and read/write the KStreams.
Then I tried to create a Docker container that runs this Quarkus service, but when the service runs inside the container, it doesn't reach the broker.
I tried several different configs inside my docker-compose, but none worked; it just can't connect to the broker.
Here is my Dockerfile:
####
# This Dockerfile is used in order to build a container that runs the Quarkus application in JVM mode
#
# Before building the container image run:
#
# mvn package
#
# Then, build the image with:
#
# docker build -f src/main/docker/Dockerfile.jvm -t connector .
#
# Then run the container using:
#
# docker run -i --rm -p 8080:8080 connector
#
# If you want to include the debug port in your docker image
# you will have to expose the debug port (default 5005) like this: EXPOSE 8080 5005
#
# Then run the container using:
#
# docker run -i --rm -p 8080:8080 -p 5005:5005 -e JAVA_ENABLE_DEBUG="true" connector
#
###
FROM docker.internal/library/quarkus-base:latest
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
USER root
RUN apk update && apk add libstdc++
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
#ENV QUARKUS_LAUNCH_DEVMODE=true \
# JAVA_ENABLE_DEBUG=true
# -Dquarkus.package.type=mutable-jar
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ ${APP_HOME}/lib/
COPY --chown=1001 target/quarkus-app/*-run.jar ${APP_HOME}/app.jar
COPY --chown=1001 target/quarkus-app/app/ ${APP_HOME}/app/
COPY --chown=1001 target/quarkus-app/quarkus/ ${APP_HOME}/quarkus/
EXPOSE 8080
USER 1001
#ENTRYPOINT [ "/deployments/run-java.sh" ]
And here is my docker-compose:
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: confluentinc/cp-zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
    networks:
      - kafkastreams-network

  kafka:
    container_name: kafka
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1
      - KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1
      - KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=100
    networks:
      - kafkastreams-network

  connect:
    container_name: connect
    image: debezium/connect
    ports:
      - "8083:8083"
    depends_on:
      - kafka
    environment:
      - BOOTSTRAP_SERVERS=kafka:29092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_connect_statuses
    networks:
      - kafkastreams-network

  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    container_name: schema-registry
    ports:
      - "8081:8081"
    depends_on:
      - zookeeper
      - kafka
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    networks:
      - kafkastreams-network

  kafdrop:
    image: obsidiandynamics/kafdrop
    container_name: kafdrop
    restart: "no"
    ports:
      - "9001:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
    depends_on:
      - kafka
      - schema-registry
    networks:
      - kafkastreams-network

  connector:
    image: connector
    depends_on:
      - zookeeper
      - kafka
      - connect
    environment:
      QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS: kafka:9092
    networks:
      - kafkastreams-network

networks:
  kafkastreams-network:
    name: ks
The error I'm getting is:
2021-08-05 11:52:35,433 WARN [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | connector-18d10d7d-b619-4715-a219-2557d70e0479-admin) [AdminClient clientId=connector-18d10d7d-b619-4715-a219-2557d70e0479-admin] Connection to node -1 (kafka/172.21.0.3:9092) could not be established. Broker may not be available.
Am I missing any config in either the Dockerfile or the docker-compose file?
I figured out that there were 2 problems:
In my docker-compose, I had to change the property KAFKA_ADVERTISED_LISTENERS to PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
In my quarkus application.properties, I had 2 properties pointing to the wrong place:
quarkus.kafka-streams.bootstrap-servers=localhost:9092
quarkus.kafka-streams.application-server=localhost:9999
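For reference, a sketch of what the corrected properties might look like inside the compose network; kafka:9092 follows from the advertised-listener fix above, while connector:9999 is an assumption based on the service name in the compose file (adjust the host/port to whatever the service actually listens on):

```properties
# Hedged sketch, not the poster's exact final config
quarkus.kafka-streams.bootstrap-servers=kafka:9092
quarkus.kafka-streams.application-server=connector:9999
```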
I am a newbie to Ansible and am trying to convert my docker-compose file to ansible.yml.
I put the docker image of a dummy flask app into my gitlab repository as mentioned in https://codebabel.com/ci-gitlabci-docker/.
I wrote a docker-compose.yaml to run both the above image and Elasticsearch, and it's working fine.
# Run both the app and Elasticsearch inside Docker containers.
version: '3'
services:
  app:
    image: registry.gitlab.com/bacdef/flaskapp:latest
    restart: on-failure
    ports:
      - 5000:5000
    depends_on:
      - elasticsearch
    links:
      - elasticsearch

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.initial_master_nodes=es01
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet

volumes:
  esdata01:
    driver: local

networks:
  esnet:
Then I tried to convert the same docker-compose.yml into ansible.yml as follows:
---
- hosts: localhost
  tasks:
    - name: "Elasticsearch"
      docker_container:
        name: "es01"
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        ports:
          - "9200:9200"
        volumes:
          - "sdata01:/usr/share/elasticsearch/data"
        env:
          node.name: es01
          cluster.initial_master_nodes: es01
          cluster.name: docker-cluster
          bootstrap.memory_lock: "true"
    - name: "Launch app container"
      docker_container:
        name: "app"
        image: registry.gitlab.com/bacdef/flaskapp:latest
        ports:
          - "5000:5000"
When I run the above Ansible file, it fails with:
PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Elasticsearch] ***********************************************************
changed: [localhost]

TASK [Launch app container] ****************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error starting container 380e56eb2d9579d407727b031713b056efda14f1a73fd9c88f65349567caa9f9: 500 Server Error: Internal Server Error (\"driver failed programming external connectivity on endpoint app (2020132c9cb503817740c063b1d231e3bb72e56d91a0386edd0cc6d23def992f): Bind for 0.0.0.0:5000 failed: port is already allocated\")"}

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
But I am sure that no process other than this Docker image is using port 5000. I ran lsof -i:5000 and killed the process running on that port before running ansible-playbook ansible.yml.
I could not figure out why the container is not able to bind to port 5000.
Could someone please share some high-level advice?
If Ansible tells you that the port is already in use, then it means that it is already in use.
Try running lsof as root to find which process is bound to your port:
sudo lsof -i -P -n | grep 5000
Alternatively, check that you have no container already running on this port (with a basic docker container ls).
This question is old, and since it shows up in search I'll add my workaround-type solution. The observed behaviour could just be erratic cache behaviour in Docker. Try this on the host:
docker container prune
Since containers will be recreated in the next ansible run this should be safe to do. Pruning manually has helped me a couple of times.
I'm currently working on a Hyperledger blockchain network that uses several Docker containers:
dev-peer0.org1.example.com-marbles-v5.9
peer0.org1.example.com
couchdb
orderer.example.com
cli
ca.example.com
On the default configuration they are all running on the same machine.
What I'm trying to achieve is to split them across two different computers:
Computer 1 :
dev-peer0.org1.example.com-marbles-v5.9
peer0.org1.example.com
couchdb
Computer 2 :
orderer.example.com
cli
ca.example.com
Can I use the hosts file to make this work without editing any config file?
PC1 :
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.31.128.146 orderer.example.com
10.31.128.146 cli
10.31.128.146 ca.example.com
PC2:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.31.128.198 dev-peer0.org1.example.com-marbles-v5.9
10.31.128.198 peer0.org1.example.com
10.31.128.198 couchdb
I have already tried this and it didn't work, but I wonder whether I should push further in that direction?
#
# Copyright IBM Corp All Rights Reserved
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

networks:
  basic:

services:
  ca.example.com:
    image: hyperledger/fabric-ca:x86_64-1.0.0
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.example.com
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/a22daf356b2aab5792ea53e35f66fccef1d7f1aa2b3a2b92dbfbf96a448ea26a_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.example.com
    networks:
      - basic

  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer:x86_64-1.0.0
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./config/:/etc/hyperledger/configtx
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/:/etc/hyperledger/msp/orderer
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peerOrg1
    networks:
      - basic

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:x86_64-1.0.0
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.example.com
    networks:
      - basic

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb:x86_64-1.0.0
    ports:
      - 5984:5984
    environment:
      DB_URL: http://localhost:5984/member_db
    networks:
      - basic

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:x86_64-1.0.0
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp
      - CORE_CHAINCODE_KEEPALIVE=10
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    networks:
      - basic
    #depends_on:
    #  - orderer.example.com
    #  - peer0.org1.example.com
    #  - couchdb
I would recommend enabling Swarm Mode for an overlay network. Ideally you can even run the containers inside Swarm Mode, but that's not necessary for just the overlay networking.
On one host, run the following to create a manager:
docker swarm init
Then run the output docker swarm join command on your second host.
Option A: for only overlay networking, you need to create a network as attachable to use it outside of swarm mode:
docker network create -d overlay --attachable basic
And then in your compose file, adjust the network definition to be external:
version: '2'
networks:
  basic:
    external:
      name: basic
With that, your containers will be able to communicate directly over the overlay network.
Option B: To use Swarm Mode to run the containers, you can skip the network creation and the external network setting. Just update the version to version: '3' inside your compose.yml file; I'd also remove the container_name lines. Then run:
docker stack deploy -c compose.yml hyperledger
to create a stack called hyperledger.
If you want to go down the hosts-file route instead, you'll need to add extra_hosts to each of your Compose service definitions.
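A sketch of what that could look like for one service, reusing the IPs from the question (repeat the stanza for each service that must resolve names running on the other machine):

```yaml
# On PC2: let the peer resolve the containers running on PC1 (10.31.128.146)
peer0.org1.example.com:
  extra_hosts:
    - "orderer.example.com:10.31.128.146"
    - "ca.example.com:10.31.128.146"
```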