How to enable graph + search + analytics on the same server - datastax-enterprise

According to the documentation, it should be enough to set the flags in /etc/defaults/dse to enable all of the above.
That is, Spark support was also enabled.
nodetool now indicates that everything is up and running and that the (single-node) cluster is of type Graph, Search, Analytics.
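For reference, these are the flags I mean (a package install reads them from the defaults file mentioned above; the exact variable names below are from memory and may differ between DSE versions):

SPARK_ENABLED=1
GRAPH_ENABLED=1
SOLR_ENABLED=1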
So enabling works fine, but trying to use analytics leads to exceptions at the provisioning level, which probably has to do with the Spark configuration. Nothing is documented about that.
It seems others are stuck on this issue too.
This is the full exception:
Error encountered while constructing Graph/TraversalSource - com.google.inject.ProvisionException: Unable to provision, see the following errors: 1) Error injecting constructor, com.datastax.bdp.gcore.datastore.DataStoreException: Failed to execute statement b5807f0d-b1b0-4bc3-bd77-794d21725fbb at com.datastax.bdp.graph.impl.DseGraphImpl.<init>(DseGraphImpl.java:189) at com.datastax.bdp.graph.impl.GraphModule.configure(Unknown Source) (via modules: com.datastax.bdp.graph.impl.DseGraphFactoryImpl$$Lambda$1430/208354649 -> com.google.inject.util.Modules$OverrideModule -> com.datastax.bdp.graph.impl.GraphModule) while locating com.datastax.bdp.graph.impl.DseGraphImpl 1 error
Error encountered while constructing Graph/TraversalSource - Unable to create an OLAP Traversal Source when Spark is not running or cannot be detected.
Note that this exception is received when trying to run graph analytics (OLAP) queries from Studio or the Gremlin console.
There are no specific errors in the Cassandra log files, etc.
Moreover, from the point of view of the dse service everything looks right, except for the fact that it doesn't actually work.
These are the errors that appeared after playing with some settings:
WARN [SPARK-MASTER] 2018-07-08 18:25:36,640 SPARK-MASTER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-MASTER] 2018-07-08 18:25:36,643 SPARK-MASTER Logging.scala:91 - Failed to bind MasterRedirectingUI
ERROR [SPARK-MASTER] 2018-07-08 18:25:36,643 SPARK-MASTER Logging.scala:72 - Failed to stop redirecting UI
WARN [SPARK-WORKER] 2018-07-08 18:25:36,671 SPARK-WORKER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-WORKER] 2018-07-08 18:25:36,672 SPARK-WORKER InternalServiceRunner.java:45 - SPARK-WORKER caused an exception in state STARTING:
ERROR [SPARK-MASTER] 2018-07-08 18:25:41,644 SPARK-MASTER InternalServiceRunner.java:45 - SPARK-MASTER caused an exception in state NOT_STARTED:
WARN [SPARK-MASTER] 2018-07-08 18:25:41,645 SPARK-MASTER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-MASTER] 2018-07-08 18:25:41,647 SPARK-MASTER Logging.scala:91 - Failed to bind MasterRedirectingUI
ERROR [SPARK-MASTER] 2018-07-08 18:25:41,647 SPARK-MASTER Logging.scala:72 - Failed to stop redirecting UI
ERROR [SPARK-MASTER] 2018-07-08 18:25:46,648 SPARK-MASTER InternalServiceRunner.java:45 - SPARK-MASTER caused an exception in state NOT_STARTED:
WARN [SPARK-MASTER] 2018-07-08 18:25:46,649 SPARK-MASTER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-MASTER] 2018-07-08 18:25:46,652 SPARK-MASTER Logging.scala:91 - Failed to bind MasterRedirectingUI
ERROR [SPARK-MASTER] 2018-07-08 18:25:46,652 SPARK-MASTER Logging.scala:72 - Failed to stop redirecting UI
WARN [SPARK-WORKER] 2018-07-08 18:25:46,673 SPARK-WORKER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-WORKER] 2018-07-08 18:25:46,673 SPARK-WORKER InternalServiceRunner.java:45 - SPARK-WORKER caused an exception in state STARTING:
ERROR [SPARK-MASTER] 2018-07-08 18:25:51,653 SPARK-MASTER InternalServiceRunner.java:45 - SPARK-MASTER caused an exception in state NOT_STARTED:
WARN [SPARK-MASTER] 2018-07-08 18:25:51,653 SPARK-MASTER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-MASTER] 2018-07-08 18:25:51,656 SPARK-MASTER Logging.scala:91 - Failed to bind MasterRedirectingUI
ERROR [SPARK-MASTER] 2018-07-08 18:25:51,656 SPARK-MASTER Logging.scala:72 - Failed to stop redirecting UI
ERROR [SPARK-MASTER] 2018-07-08 18:25:56,657 SPARK-MASTER InternalServiceRunner.java:45 - SPARK-MASTER caused an exception in state NOT_STARTED:
WARN [SPARK-MASTER] 2018-07-08 18:25:56,658 SPARK-MASTER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-MASTER] 2018-07-08 18:25:56,660 SPARK-MASTER Logging.scala:91 - Failed to bind MasterRedirectingUI
ERROR [SPARK-MASTER] 2018-07-08 18:25:56,660 SPARK-MASTER Logging.scala:72 - Failed to stop redirecting UI
WARN [SPARK-WORKER] 2018-07-08 18:25:56,673 SPARK-WORKER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-WORKER] 2018-07-08 18:25:56,674 SPARK-WORKER InternalServiceRunner.java:45 - SPARK-WORKER caused an exception in state STARTING:
ERROR [SPARK-MASTER] 2018-07-08 18:26:01,661 SPARK-MASTER InternalServiceRunner.java:45 - SPARK-MASTER caused an exception in state NOT_STARTED:
WARN [SPARK-MASTER] 2018-07-08 18:26:01,662 SPARK-MASTER Logging.scala:47 - Not overriding system property java.library.path
ERROR [SPARK-MASTER] 2018-07-08 18:26:01,665 SPARK-MASTER Logging.scala:91 - Failed to bind MasterRedirectingUI
ERROR [SPARK-MASTER] 2018-07-08 18:26:01,665 SPARK-MASTER Logging.scala:72 - Failed to stop redirecting UI
ERROR [SPARK-MASTER] 2018-07-08 18:26:06,665 SPARK-MASTER InternalServiceRunner.java:45 - SPARK-MASTER caused an exception in state NOT_STARTED:

Ok, with this log it's becoming clearer. The real culprit is this message: Failed to bind MasterRedirectingUI - it means that the Spark Master can't start because something is already listening on the port it is trying to use. By default that's 7080, but it can be overridden by setting the SPARK_MASTER_WEBUI_PORT variable in spark-env.sh (either in resources/spark/conf for a tarball installation, or in /etc/dse/spark/ for a package installation).
Check that no other processes are listening on these ports - you can use netstat -plnt to do this.
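A minimal sketch of the check and the workaround (assuming a package install; the replacement port 7081 is just an example of a free port):

# find out what is already bound to the Spark Master web UI port
sudo netstat -plnt | grep 7080
# or, equivalently
sudo lsof -iTCP:7080 -sTCP:LISTEN

# /etc/dse/spark/spark-env.sh  (resources/spark/conf/spark-env.sh for a tarball install)
export SPARK_MASTER_WEBUI_PORT=7081

Then restart DSE and retry the OLAP traversal once nodetool reports Analytics as active again.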

Related

RabbitMQ fails to boot from docker-compose

I'm trying to set up a RabbitMQ instance with docker compose.
My docker-compose.yaml:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    hostname: rabbit
    container_name: 'rabbitmq'
    volumes:
      - ./etc/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
      - ./data:/var/lib/rabbitmq/mnesia/rabbit#rabbit
      - ./logs:/var/log/rabbitmq/log
      - ./etc/ssl/CERT_LAB_CA.pem:/etc/rabbitmq/ssl/cacert.pem
      - ./etc/ssl/CERT_LAB_RABBITMQ.pem:/etc/rabbitmq/ssl/cert.pem
      - ./etc/ssl/KEY_LAB_RABBITMQ.pem:/etc/rabbitmq/ssl/key.pem
    ports:
      - 5672:5672
      - 15672:15672
      - 15671:15671
      - 5671:5671
    environment:
      - RABBITMQ_DEFAULT_USER=secret
      - RABBITMQ_DEFAULT_PASS=secret
When I run docker compose up for the first time, everything works fine. But when I add queues and exchanges (loaded from definitions.json), shut down and remove the container, and try docker compose up again, I get this error:
2022-09-29 13:32:09.522956+00:00 [notice] <0.44.0> Application mnesia exited with reason: stopped
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0>
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> BOOT FAILED
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> ===========
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> Error during startup: {error,
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> {schema_integrity_check_failed,
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> [{table_missing,rabbit_listener}]}}
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0>
BOOT FAILED
===========
Error during startup: {error,
{schema_integrity_check_failed,
[{table_missing,rabbit_listener}]}}
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> crasher:
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> initial call: application_master:init/4
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> pid: <0.228.0>
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> registered_name: []
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> exception exit: {{schema_integrity_check_failed,
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> [{table_missing,rabbit_listener}]},
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> {rabbit,start,[normal,[]]}}
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> in function application_master:init/4 (application_master.erl, line 142)
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> ancestors: [<0.227.0>]
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> message_queue_len: 1
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> messages: [{'EXIT',<0.229.0>,normal}]
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> links: [<0.227.0>,<0.44.0>]
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> dictionary: []
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> trap_exit: true
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> status: running
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> heap_size: 2586
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> stack_size: 28
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> reductions: 180
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> neighbours:
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0>
And here is my rabbitmq.conf file
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.cacertfile = /etc/rabbitmq/ssl/cacert.pem
ssl_options.certfile = /etc/rabbitmq/ssl/cert.pem
ssl_options.keyfile = /etc/rabbitmq/ssl/key.pem
#Generate client cert and uncomment this if client has to provide cert.
#ssl_options.verify = verify_peer
#ssl_options.fail_if_no_peer_cert = true
collect_statistics_interval = 10000
#load_definitions = /path/to/exported/definitions.json
#definitions.skip_if_unchanged = true
management.tcp.port = 15672
management.ssl.port = 15671
management.ssl.cacertfile = /etc/rabbitmq/ssl/cacert.pem
management.ssl.certfile = /etc/rabbitmq/ssl/cert.pem
management.ssl.keyfile = /etc/rabbitmq/ssl/key.pem
management.http_log_dir = /var/log/rabbitmq/http
What am I missing?
Try substituting ./data:/var/lib/rabbitmq/mnesia/rabbit#rabbit in your compose file with ./data:/var/lib/rabbitmq.
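A minimal sketch of what the volumes section would then look like (only the data mount changes; the rest of your file stays as it is):

    volumes:
      - ./etc/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
      - ./data:/var/lib/rabbitmq          # mount the whole RabbitMQ data directory, not a single mnesia subfolder
      - ./logs:/var/log/rabbitmq/log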
I had the same error and spent quite some time trying to figure out the problem. My configuration was slightly different from yours and looked like this:
rabbitmq:
  image: rabbitmq:3.11.2-management-alpine
  hostname: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: tester
    RABBITMQ_DEFAULT_PASS: qwerty
    RABBITMQ_MNESIA_DIR: /my-custom-data-folder-path-inside-container
    RABBITMQ_NODENAME: rabbitmq
  volumes:
    - type: bind
      source: /my-custom-data-folder-path-on-host
      target: /my-custom-data-folder-path-inside-container
I'm not an expert in RabbitMQ; my idea was just to make RabbitMQ persist its database in the /my-custom-data-folder-path-on-host folder on the host. Just like in your case, it started successfully on the first run, but after a container restart I was getting the following error:
BOOT FAILED
Error during startup: {error, {schema_integrity_check_failed, [{table_missing,rabbit_listener}]}}
I learned from the documentation that rabbit_listener is a table inside the Mnesia database used by RabbitMQ, and that "listeners" are the TCP listeners configured in RabbitMQ to accept client connections:
For RabbitMQ to accept client connections, it needs to bind to one or more interfaces and listen on (protocol-specific) ports. One such interface/port pair is called a listener in RabbitMQ parlance. Listeners are configured using the listeners.tcp.* configuration option(s).
I wanted to dig into the Mnesia database to troubleshoot, but didn't manage to do that without Erlang knowledge. It seems that for some reason RabbitMQ does not create the "rabbit_listener" table on the first run, but requires it on subsequent runs.
Finally, I managed to work around the problem by changing my initial configuration as follows:
service-bus:
  image: rabbitmq:3.11.2-management-alpine
  hostname: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: tester
    RABBITMQ_DEFAULT_PASS: qwerty
    RABBITMQ_NODENAME: rabbitmq
  volumes:
    - type: bind
      source: /my-custom-data-folder-path-on-host
      target: /var/lib/rabbitmq
Instead of overriding just the RABBITMQ_MNESIA_DIR folder I've overridden the entire /var/lib/rabbitmq. This did the trick and now my RabbitMQ successfully endures restarts.
I hit this problem and changed my docker-compose.yml file to use rabbitmq:3.9-management rather than rabbitmq:3-management.
The problem happened for me when I restarted the stack and the rabbitmq image moved up to 3.11.
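A sketch of that change, assuming the same service layout as in the question (pinning the tag avoids a newer major version picking up the data directory created by an older one after an image pull):

  rabbitmq:
    image: rabbitmq:3.9-management   # pinned tag instead of the floating 3-management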

Issues while setting up Hyperledger Fabric 2.0 on different container ports (Testing & Development)

I have been working with a Hyperledger Fabric 2.0 multi-org network running on the default ports. The setup is as follows:
Org1 (Peer0:7051, Peer1:8051, CA:7054, couchdb0:5984, couchdb1:6984:5984)
Org2 (Peer0:9051, Peer1:10051, CA:8054, couchdb2:7984:5984, couchdb3:8984:5984)
Orderer (Orderer1:7050, Orderer2:8050, Orderer3:9050), RAFT consensus mechanism
The requirement is to redefine all the container ports mentioned above so that I can run the same Fabric application as two environments (one for testing (stable version) and one for development).
I tried changing the ports of the peers, orderers, and CAs (by specifying environment variables for the ports in docker-compose), but I don't see any option for CouchDB, which always uses the default port (5984).
Is there any way to achieve this? It would also make it possible to run two different Fabric applications on the same virtual machine.
EDIT1:
My docker-compose.yaml file (only Org1 (peer0, peer1), orderer1, ca-org1, couchdb0, and couchdb1 are shown):
version: "2"
networks:
test2:
services:
ca-org1:
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca.org1.test.com
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/priv_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-tls/tlsca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-tls/priv_sk
ports:
- "3054:3054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/ca/:/etc/hyperledger/fabric-ca-server-config
- ./channel/crypto-config/peerOrganizations/org1.test.com/tlsca/:/etc/hyperledger/fabric-ca-server-tls
container_name: ca.org1.test.com
hostname: ca.org1.test.com
networks:
- test2
orderer.test.com:
container_name: orderer.test.com
image: hyperledger/fabric-orderer:2.1
dns_search: .
environment:
- ORDERER_GENERAL_LOGLEVEL=info
- FABRIC_LOGGING_SPEC=INFO
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_METRICS_PROVIDER=prometheus
- ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:3443
- ORDERER_GENERAL_LISTENPORT=3050
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 3050:3050
- 3443:3443
networks:
- test2
volumes:
- ./channel/genesis.block:/var/hyperledger/orderer/genesis.block
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/msp:/var/hyperledger/orderer/msp
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/tls:/var/hyperledger/orderer/tls
couchdb0:
container_name: couchdb0-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 1984:1984
networks:
- test2
couchdb1:
container_name: couchdb1-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 2984:1984
networks:
- test2
peer0.org1.test.com:
container_name: peer0.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer0.org1.test.com
- CORE_PEER_ADDRESS=peer0.org1.test.com:3051
- CORE_PEER_LISTENADDRESS=0.0.0.0:3051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.test.com:3052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:3052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.test.com:3051
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
depends_on:
- couchdb0
ports:
- 3051:3051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
peer1.org1.test.com:
container_name: peer1.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=debug
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer1.org1.test.com
- CORE_PEER_ADDRESS=peer1.org1.test.com:4051
- CORE_PEER_LISTENADDRESS=0.0.0.0:4051
- CORE_PEER_CHAINCODEADDRESS=peer1.org1.test.com:4052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:4052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.test.com:3051
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
ports:
- 4051:4051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
Thanks for the suggestions regarding CouchDB. I had assumed we could only use the default CouchDB port for each instance. In any case, I had initially missed the step of changing the container names (from the default peer0.org1.example.com to peer0.org1.test.com); with new container names the Docker containers start without stopping (recreating) the existing containers that are already running on the original ports.
The issue I am facing now is that the peer is not able to reach the couchdb-test URL:
U 04c Entering VerifyCouchConfig()
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04d Entering handleRequest() method=GET url=http://couchdb1-test:1984/ dbName=
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04e Request URL: http://couchdb1-test:1984/
2020-08-12 11:22:45.011 UTC [couchdb] handleRequest -> WARN 04f Retrying couchdb request in 125ms. Attempt:1 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.137 UTC [couchdb] handleRequest -> WARN 050 Retrying couchdb request in 250ms. Attempt:2 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.389 UTC [couchdb] handleRequest -> WARN 051 Retrying couchdb request in 500ms. Attempt:3 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
Hence, if I try to create a channel, the peer container exits even though it had been running until then, and it is not able to join the channel:
2020-08-12 10:58:29.264 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:29.301 UTC [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
2020-08-12 10:58:29.305 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2020-08-12 10:58:29.506 UTC [cli.common] readBlock -> INFO 004 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.509 UTC [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
2020-08-12 10:58:29.710 UTC [cli.common] readBlock -> INFO 006 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.713 UTC [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
2020-08-12 10:58:29.916 UTC [cli.common] readBlock -> INFO 008 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.922 UTC [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
2020-08-12 10:58:30.123 UTC [cli.common] readBlock -> INFO 00a Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.126 UTC [channelCmd] InitCmdFactory -> INFO 00b Endorser and orderer connections initialized
2020-08-12 10:58:30.327 UTC [cli.common] readBlock -> INFO 00c Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.331 UTC [channelCmd] InitCmdFactory -> INFO 00d Endorser and orderer connections initialized
2020-08-12 10:58:30.534 UTC [cli.common] readBlock -> INFO 00e Received block: 0
Error: error getting endorser client for channel: endorser client failed to connect to localhost:3051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:53668->127.0.0.1:3051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:4051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:4051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:5051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:57948->127.0.0.1:5051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:6051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:58976->127.0.0.1:6051: read: connection reset by peer"
2020-08-12 10:58:37.518 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.552 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
2020-08-12 10:58:37.685 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.763 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
Here, only the orderers are successfully added to the channel, but not the peers, even after changing the ports.
This isn't an issue; you can just specify it as you did for the others, like this (are you facing a specific issue while mapping the ports?):
ports:
  - 6984:5984   # mapping host port to container port
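To make the two-environments idea concrete: only the host side of the mapping has to change between the test and dev stacks, while CouchDB keeps listening on 5984 inside its container, so the peer keeps addressing the container port. A minimal sketch (container names are illustrative and the peers are assumed to share a compose network with CouchDB):

couchdb0-test:
  image: hyperledger/fabric-couchdb
  container_name: couchdb0-test
  ports:
    - 6984:5984     # test environment reachable from the host on 6984; dev could use e.g. 7984:5984

# and in the peer's environment, keep the container port:
# - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:5984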
You can change the CouchDB host port from the docker-compose file.
Here is a snippet from a docker-compose.yaml file:
couchdb0:
  container_name: couchdb0
  image: couchdb:2.3
  # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
  # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
  environment:
    - COUCHDB_USER=
    - COUCHDB_PASSWORD=
  # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
  # for example map it to utilize Fauxton User Interface in dev environments.
  ports:
    - "5984:5984"
  networks:
    - byfn
From here you can change ports easily.

Hyperledger Fabric: Timeout Expired while starting chaincode on "Peer Chaincode Instantiate" command

I am trying to instantiate an installed chaincode using the peer chaincode instantiate command (shown below). On executing the command, I receive the following error message:
Command to instantiate chaincode:
peer chaincode instantiate -o orderer.proofofownership.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp/tlscacerts/tlsca.proofofownership.com-cert.pem -C dmanddis -n CreateDiamond -v 1.0 -c '{"Args":[]}' -P "OR ('DiamondManufacturerMSP.peer','DistributorMSP.peer')"
Error Message received:
Error: Error endorsing chaincode: rpc error: code = Unknown desc = timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:1a96ecc8763e214ee543ecefe214df6025f8e98f2449f2b7877d04655ddadb49)
I tried to rectify this issue by adding the following attributes to the peer-base.yaml file:
- CORE_CHAINCODE_EXECUTETIMEOUT=300s
- CORE_CHAINCODE_DEPLOYTIMEOUT=300s
However, I am still receiving this particular error.
The following are my Docker container configurations:
peer-base.yaml:
services:
peer-base:
image: hyperledger/fabric-peer:x86_64-1.1.0
environment:
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
#- CORE_LOGGING_LEVEL=INFO
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_CHAINCODE_EXECUTETIMEOUT=300s
- CORE_CHAINCODE_DEPLOYTIMEOUT=300s
#- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
CLI container configuration in the docker-compose-cli.yaml file:
cli:
container_name: cli
image: hyperledger/fabric-tools:x86_64-1.1.0
tty: true
stdin_open: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
#- CORE_LOGGING_LEVEL=INFO
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
- CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin#dm.proofofownership.com/msp
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=host
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: /bin/bash
volumes:
- /var/run/:/host/var/run/
#- ./../chaincode/:/opt/gopath/src/github.com/chaincode
#- ./chaincode/CreateDiamond/go:/opt/gopath/src/github.com/chaincode/
- ./chaincode/CreateDiamond:/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- orderer.proofofownership.com
- peer0.dm.proofofownership.com
- peer1.dm.proofofownership.com
- peer0.dist.proofofownership.com
- peer1.dist.proofofownership.com
#network_mode: host
networks:
- pow
Peer configuration in the docker-compose-base.yaml file:
peer0.dm.proofofownership.com:
container_name: peer0.dm.proofofownership.com
extends:
file: peer-base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer0.dm.proofofownership.com
#- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin#dm.proofofownership.com/msp
#- CORE_PEER_MSPCONFIGPATH=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/users/Admin#dm.proofofownership.com/msp
- CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.dm.proofofownership.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dm.proofofownership.com:7051
- CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
#- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
#- CORE_PEER_TLS_ROOTCERT_FILE=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
volumes:
- /var/run/:/host/var/run/
- ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/msp:/etc/hyperledger/fabric/msp
- ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls:/etc/hyperledger/fabric/tls
- peer0.dm.proofofownership.com:/var/hyperledger/production
ports:
- 7051:7051
- 7053:7053
Orderer configuration in the docker-compose-base.yaml file:
orderer.proofofownership.com:
container_name: orderer.proofofownership.com
image: hyperledger/fabric-orderer:x86_64-1.1.0
environment:
# CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE Newly Added
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
#- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
#- ORDERER_GENERAL_TLS_ENABLED=false
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
# New Addition
- CONFIGTX_ORDERER_ORDERERTYPE=solo
- CONFIGTX_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=10
- CONFIGTX_ORDERER_BATCHTIMEOUT=2s
- CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
#working_dir: /opt/gopath/src/github.com/hyperledger/fabric
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
command: orderer
volumes:
- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/genesis.block
- ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp:/var/hyperledger/orderer/msp
- ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/tls/:/var/hyperledger/orderer/tls
- orderer.proofofownership.com:/var/hyperledger/production/orderer
ports:
- 7050:7050
I also reviewed the peer's Docker container logs (using docker logs) and saw the following:
Launch -> ERRO 3eb launchAndWaitForRegister failed: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
2018-08-01 12:59:08.739 UTC [endorser] simulateProposal -> ERRO 3ed [dmanddis][cc34a201] failed to invoke chaincode name:"lscc" , error: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
The following logs were received when installing the chaincode:
2018-08-03 09:44:55.822 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-08-03 09:44:55.822 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode disabled
2018-08-03 09:44:58.270 UTC [golang-platform] getCodeFromFS -> DEBU 006 getCodeFromFS github.com/hyperledger/fabric/peer/chaincode
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 007 Discarding GOROOT package bytes
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 008 Discarding GOROOT package encoding/json
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 009 Discarding GOROOT package fmt
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00a Discarding provided package github.com/hyperledger/fabric/core/chaincode/shim
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00b Discarding provided package github.com/hyperledger/fabric/protos/peer
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00c Discarding GOROOT package strconv
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00d skipping dir: /opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/go
2018-08-03 09:45:02.090 UTC [golang-platform] GetDeploymentPayload -> DEBU 00e done
2018-08-03 09:45:02.090 UTC [container] WriteFileToPackage -> DEBU 00f Writing file to tarball: src/github.com/hyperledger/fabric/peer/chaincode/CreateDiamond.go
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 010 Sign: plaintext: 0AE3070A5B08031A0B089EC890DB0510...EC7BFE1B0000FFFFEE433C37001C0000
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 011 Sign: digest: E5160DE95DB096379967D959FA71E692F098983F443378600943EA5D7265A82C
2018-08-03 09:45:02.230 UTC [chaincodeCmd] install -> DEBU 012 Installed remotely response:<status:200 payload:"OK" >
2018-08-03 09:45:02.230 UTC [main] main -> INFO 013 Exiting.....
In the peer configuration, you specified a different port for the chaincode endpoint than for the peer address (chaincode endpoint on port 7052, peer address on port 7051):
CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
But this port is not exposed. Please add this to your peer port configuration:
- 7052:7052
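A sketch of the resulting ports section for peer0.dm.proofofownership.com in docker-compose-base.yaml (the unchanged lines are taken from the question):

ports:
  - 7051:7051
  - 7052:7052   # expose the chaincode listen address as well
  - 7053:7053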
It is likely that your chaincode is failing on start-up. You might want to try the development-mode tutorial approach to debug it: by executing the chaincode from within the container, you can view its logs and see what is not working for you.
The devmode tutorial is here. You will simply need to replace the tutorial's chaincode with your own.
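Before switching to devmode, it can also help to look at whatever chaincode container the peer managed to launch; a quick check, assuming the usual Fabric naming convention for chaincode containers (the exact container name on your host may differ):

# list chaincode containers, including ones that exited during startup
docker ps -a --filter "name=dev-"
# inspect the failing one (the name shown is illustrative)
docker logs dev-peer0.dm.proofofownership.com-CreateDiamond-1.0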

Spring Boot docker microservices restTemplate exception

I'm trying to make a REST request with RestTemplate between Spring Boot microservices in Docker, but I get an error.
docker-compose.yml:
api:
  image: api-service
  container_name: api-service
  restart: always
  depends_on:
    - product
  ports:
    - 8081:8080
  links:
    - product:product
  environment:
    - SERVICE_PORT_PRODUCT=8083
product:
  image: product-service
  container_name: product-service
  restart: always
  ports:
    - 8083:8080
Exception log:
ERROR 1 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://product:8083/api/products/": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)] with root cause
java.net.ConnectException: Connection refused (Connection refused)
The URL looks correct:
POST request for "http://product:8083/api/products/"
Why doesn't it work?
As documented in the official docs for Networking in Compose:
Networked service-to-service communication uses the CONTAINER_PORT.
Thus, when you want to make a request from one container to another, you need to use the container port and not the host port.
The request should go to http://product:8080/api/products/ from the api container to the product container.
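If the api service builds the URL from the SERVICE_PORT_PRODUCT variable shown in the compose file (an assumption based on the question), the fix is just to point that variable at the container port:

api:
  environment:
    - SERVICE_PORT_PRODUCT=8080   # container port of the product service, not the host-mapped 8083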

Dockerized Spring Cloud Stream services with Kafka broker unable to connect to Zookeeper

I'm testing a sample Spring Cloud Stream application (running on an Ubuntu Linux machine) with one source and one sink service. All my services are containerized with Docker, and I would like to use Kafka as the message broker.
Below are the relevant parts of the docker-compose.yml:
zookeeper:
  image: confluent/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"

kafka:
  image: wurstmeister/kafka:0.9.0.0-1
  container_name: kafka
  ports:
    - "9092:9092"
  links:
    - zookeeper:zk
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=192.168.33.101
    - KAFKA_ADVERTISED_PORT=9092
    - KAFKA_DELETE_TOPIC_ENABLE=true
    - KAFKA_LOG_RETENTION_HOURS=1
    - KAFKA_MESSAGE_MAX_BYTES=10000000
    - KAFKA_REPLICA_FETCH_MAX_BYTES=10000000
    - KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000
    - KAFKA_NUM_PARTITIONS=2
    - KAFKA_DELETE_RETENTION_MS=1000
.
.
.
# not shown: eureka service registry, spring cloud config service, etc.

myapp-service-test-source:
  container_name: myapp-service-test-source
  image: myapp-h2020/myapp-service-test-source:0.0.1
  environment:
    SERVICE_REGISTRY_HOST: 192.168.33.101
    SERVICE_REGISTRY_PORT: 8761
  ports:
    - 8081:8080
.
.
.
Here is the relevant part of the application.yml for my service-test-source service:
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        output:
          destination: messages
          content-type: application/json
      kafka:
        binder:
          brokers: ${SERVICE_REGISTRY_HOST:192.168.33.101}
          zkNodes: ${SERVICE_REGISTRY_HOST:192.168.33.101}
          defaultZkPort: 2181
          defaultBrokerPort: 9092
The problem is the following: if I launch the docker-compose above, the test-source container log shows that the service fails to connect to Zookeeper, giving a repeated series of Connection refused errors and finishing with a ZkTimeoutException that makes the service terminate (see below).
The strange thing is that if, instead of running my source (and sink) test services as Docker containers, I run them as jar files via Maven (mvn spring-boot:run <etc...>), the services work fine and are able to exchange messages via Kafka (note that Kafka, Zookeeper, etc. are still running as Docker containers).
.
.
.
*** THE FOLLOWING REPEATED n TIMES ***
2017-02-14 14:40:09.164 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-14 14:40:09.166 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_111]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_111]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
.
.
.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:53)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.context.ApplicationContextException: Failed to start bean 'outputBindingLifecycle'; nested exception is org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 10000
Any idea what the problem might be?
Edit:
I discovered that in the "jar" execution logs the test-source service tries to connect to Zookeeper through the IP 127.0.0.1, as can be seen from the log snippet below:
2017-02-15 14:24:04.159 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-15 14:24:04.159 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-15 14:24:04.178 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/127.0.0.1:2181, initiating session
2017-02-15 14:24:04.201 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15a421fd9ec000a, negotiated timeout = 10000
2017-02-15 14:24:05.870 INFO 10348 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#72ba68e3
2017-02-15 14:24:05.882 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-15 14:24:05.883 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
This explains why everything works in the jar execution but not the Docker one (the zookeeper container exposes its 2181 port to the host machine, so it is visible as localhost to a service process running directly on the host), but it doesn't solve the problem: apparently the Spring Cloud Stream Kafka configuration is ignoring the property spring.cloud.stream.kafka.binder.zkNodes set in application.yml (note that if I log the value of that environment variable from the service, I see the correct value 192.168.33.101 that I hard-coded there for debugging purposes).
You have set the defaultBinder to be rabbit while trying to use the Kafka binder configuration. Do you have both the rabbit and kafka binders on the classpath of your application? In that case, you can enable the Kafka binder explicitly, as shown here:
zookeeper:
  image: wurstmeister/zookeeper
  container_name: 'zookeeper'
  ports:
    - 2181:2181

# --------------------- kafka --------------------------------
kafka:
  image: wurstmeister/kafka
  container_name: 'kafka'
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=kafka
    - KAFKA_ADVERTISED_PORT=9092
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_CREATE_TOPICS=kafka_docker_topic:1:1
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
spring:
  profiles: dev
  cloud:
    stream:
      defaultBinder: kafka
      kafka:
        binder:
          brokers: kafka        # I added the brokers and zkNodes properties
          zkNodes: zookeeper
      bindings:
        input:
          destination: message
          content-type: application/json
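If you prefer to keep application.yml untouched, Spring Boot's relaxed binding also lets you override these properties per environment from the compose file; a sketch for the source service from the question (the variable names follow the standard property-to-environment-variable mapping):

myapp-service-test-source:
  environment:
    SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS: kafka
    SPRING_CLOUD_STREAM_KAFKA_BINDER_ZKNODES: zookeeper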
