launchAndWaitForRegister failed Timeout expired while starting chaincode - hyperledger

I'm using docker swarm mode to set up 4 VP (validating peer) nodes. The docker service scripts look like the one below:
docker service create --name vp0 --replicas 1 --network over \
--endpoint-mode dnsrr \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro=false \
--env CORE_PEER_ID=vp0 \
--env CORE_PEER_ADDRESSAUTODETECT=true \
--env CORE_LOGGING_LEVEL=debug \
--env CORE_PEER_NETWORKID=dev \
--env CORE_VM_ENDPOINT=unix:///var/run/docker.sock \
--env CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft \
--env CORE_PBFT_GENERAL_N=4 \
--env CORE_PBFT_GENERAL_MODE=batch \
--env CORE_PBFT_GENERAL_TIMEOUT_BROADCAST=20s \
--env CORE_PBFT_GENERAL_TIMEOUT_REQUEST=60s \
--env CORE_PBFT_GENERAL_TIMEOUT_RESENDVIEWCHANGE=120s \
--env CORE_PBFT_GENERAL_TIMEOUT_VIEWCHANGE=60s \
--env CORE_REST_ENABLED=false \
--env CORE_CHAINCODE_STARTUPTIMEOUT=600000 \
--env CORE_CHAINCODE_DEPLOYTIMEOUT=600000 \
ibmblockchain/fabric-peer:x86_64-0.6.1-preview peer node start
The network seems to work fine after starting the 4 services (vp0, vp1, vp2, vp3). But when I try to deploy a chaincode example, I get the launchAndWaitForRegister failed: Timeout expired while starting chaincode error after about 10 minutes.
Can anyone help me fix this?

Hypothesis
Your startup timeout setting is not reaching the peer, so Fabric falls back to its 5-second default, which is too short for your deployment.
Reasoning
If it's really failing after only a few seconds (~5?), that suggests your CORE_CHAINCODE_STARTUPTIMEOUT=600000 isn't being honored. The default core.yaml value is 300000, which is still much longer than a few seconds.
If the setting is absent from both the environment and core.yaml, Fabric defaults the value to 5 seconds.
If your chaincode takes longer than 5 seconds to send its REGISTER, then getting to the bottom of why this setting isn't making it to your peer process should fix your problem.
How to confirm
Ensure debug logging is enabled; if the fallback is happening, you should see "could not retrive timeout var...setting to 5secs" in the peer log when it starts (spelling errors included; that is how the message appears in the source).
Source
https://github.com/hyperledger/fabric/blob/v0.6/peer/node/start.go#L259
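A quick way to check (a sketch only; <vp0-container> is a placeholder for whatever container the vp0 task gets on your node) is to confirm the variable actually reached the peer process and to look for the fallback message:
docker ps --filter name=vp0
docker exec <vp0-container> env | grep CORE_CHAINCODE_STARTUPTIMEOUT
docker logs <vp0-container> 2>&1 | grep -i "setting to 5secs"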

For Fabric v1.1, the correct format for the chaincode timeout line is:
CORE_CHAINCODE_STARTUPTIMEOUT=240s
The s suffix defines the value as 240 seconds.
Testing this with the FABCAR app, the timeout changed to 4 minutes as expected.
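For example (a sketch only; the container name and image tag are illustrative, and a real v1.1 peer also needs its usual MSP and channel configuration), the variable is passed like any other environment setting:
docker run -d --name peer0 \
-e CORE_CHAINCODE_STARTUPTIMEOUT=240s \
hyperledger/fabric-peer:x86_64-1.1.0 peer node start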

Related

Enable fine grained on keycloak with docker

I have set up Keycloak using Docker. My problem is that I need to make some modifications on clients that require fine-grained admin permissions to be enabled. I have read the documentation and I know I should use the parameter -Dkeycloak.profile=preview or -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled. I tried to pass that in my docker run command, but with no luck:
docker run --rm \
--name keycloak \
-p 80:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=[adminPass] \
-e PROXY_ADDRESS_FORWARDING=true \
-e DB_VENDOR=MYSQL \
-e DB_ADDR=[SQL_Server] \
-e DB_DATABASE=keycloak \
-e DB_USER=[DBUSER] \
-e DB_PASSWORD=[DB_PASS] \
-e JDBC_PARAMS=useSSL=false \
-e -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled \
jboss/keycloak
Any help?
It is documented in the Docker image readme https://hub.docker.com/r/jboss/keycloak
Additional server startup options (extension of JAVA_OPTS) can be configured using the JAVA_OPTS_APPEND environment variable.
So in your case:
-e JAVA_OPTS_APPEND="-Dkeycloak.profile=preview"
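Applied to your command, it would look roughly like this (a sketch; the bracketed placeholders are carried over from your original command):
docker run --rm \
--name keycloak \
-p 80:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=[adminPass] \
-e PROXY_ADDRESS_FORWARDING=true \
-e DB_VENDOR=MYSQL \
-e DB_ADDR=[SQL_Server] \
-e DB_DATABASE=keycloak \
-e DB_USER=[DBUSER] \
-e DB_PASSWORD=[DB_PASS] \
-e JDBC_PARAMS=useSSL=false \
-e JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.admin_fine_grained_authz=enabled" \
jboss/keycloak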
I guess you might need to pass the environment variables to the JVM when starting the WildFly instance containing the Keycloak WAR. There is a runner shell script that starts when launching the container; you need to add your environment variables to that call.

What to do or how to handle if health_status of a docker container changes

I am running a docker container with a health-cmd, and I know it will turn unhealthy when the application stops working.
$ docker run \
--name=some-container \
--health-cmd='curl -sS http://127.0.0.1:5000 || exit 1' \
--health-timeout=10s \
--health-retries=3 \
--health-interval=5s \
--restart on-failure \
container-image
I want to restart the container when its health status changes to unhealthy. How can I do that? How do I trigger the restart?
My Docker version 19.03.1, build 74b1e89
It depends on your Dockerfile. If the health check fails, the container exits with code 1 because of your command:
--health-cmd='curl -sS http://127.0.0.1:5000 || exit 1'
so your on-failure restart policy will restart the container after roughly 35 seconds (timeout + retries + interval) once the check starts failing.
There are no perfect values for timeout, retries, and interval; you have to tune them to your own conditions.
I think your command is good to go.
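If you want to watch the health status yourself (for example, to confirm the transition to unhealthy), docker inspect exposes it; some-container below is the name from your command:
docker inspect --format='{{.State.Health.Status}}' some-container
docker inspect --format='{{json .State.Health}}' some-container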
You can use autoheal to restart unhealthy docker containers.
Sample:
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
Note: You must apply HEALTHCHECK to your docker images first. See https://docs.docker.com/engine/reference/builder/#healthcheck for details.
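For reference, a HEALTHCHECK baked into the image looks roughly like this (a sketch mirroring the curl check from your docker run flags; the base image is a placeholder and curl must be available inside it):
FROM your-app-base-image
HEALTHCHECK --interval=5s --timeout=10s --retries=3 \
CMD curl -sS http://127.0.0.1:5000 || exit 1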

Schema Registry for a 3 Node Kafka Cluster with SSL

I am configuring a 3-node Kafka cluster (3 brokers and 3 ZooKeepers, with SSL enabled) using Docker. Now I need to set up a Schema Registry. Is it possible to use just a single Schema Registry instance? If yes, what should my SSL truststore and keystore configs look like when running it?
I referred to Confluent's documentation, where they discuss Kafka-based leader election and ZooKeeper-based leader election, but it is not clear to me.
This is my faulty docker run command.
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:22181,localhost:32181,localhost:42181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_DEBUG=true \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION=kafka.broker1.truststore.jks \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD=broker1_truststore_creds \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION=kafka.broker1.keystore.jks \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD=broker1_keystore_creds \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD=broker1_sslkey_creds \
-v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \
confluentinc/cp-schema-registry:5.0.1
I am sure my understanding of how the Schema Registry works with a clustered setup is not correct.

How to set heap memory in cassandra on docker

I am using the official Cassandra Docker image to set up my local environment.
As part of this I want to limit the amount of memory Cassandra uses in my local deployment.
By default, Cassandra has a predefined way of setting its memory.
I found references saying that I can use JVM_OPTS to set these values, but it does not seem to take hold.
I am looking for a way to set these values without creating my own Cassandra image.
Docker command that is used to run container:
docker run -dit --name sdc-cs --env RELEASE="${RELEASE}" \
--env CS_PASSWORD="${CS_PASSWORD}" --env ENVNAME="${DEP_ENV}" \
--env HOST_IP=${IP} --env JVM_OPTS="-Xms1024m -Xmx1024m" \
--log-driver=json-file --log-opt max-size=100m \
--log-opt max-file=10 --ulimit memlock=-1:-1 --ulimit nofile=4096:100000 \
--volume /etc/localtime:/etc/localtime:ro \
--volume ${WORKSPACE}/data/CS:/var/lib/cassandra \
--volume ${WORKSPACE}/data/environments:/root/chef-solo/environments \
--publish 9042:9042 --publish 9160:9160 \
${PREFIX}/sdc-cassandra:${RELEASE} /bin/s
Any advice will be appreciated!
I am using docker-compose; in the docker-compose.yml file I set the following environment variables, and it seems to work:
environment:
- HEAP_NEWSIZE=128M
- MAX_HEAP_SIZE=2048M
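A minimal sketch of such a compose file (the image tag and port mapping here are assumptions; adjust them to your setup):
version: '3'
services:
  cassandra:
    image: cassandra:3.11
    ports:
      - "9042:9042"
    environment:
      - HEAP_NEWSIZE=128M
      - MAX_HEAP_SIZE=2048M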
The entrypoint script starts Cassandra as usual, and during startup it executes the cassandra-env.sh script, which only sets memory options if they aren't already present in the JVM_OPTS environment variable. So if you start the container with the corresponding memory options set via -e JVM_OPTS=..., it should work.
But in the long run it's better to submit config files via the /config mount point of the Docker image, and put the memory options into the jvm.options file that is loaded by cassandra-env.sh.
P.S. Just tried it on my machine:
docker run --rm -e DS_LICENSE=accept store/datastax/dse-server:5.1.5
This gives me the following memory switches: -Xms1995M -Xmx1995M.
If I run it with:
docker run --rm -e DS_LICENSE=accept \
-e JVM_OPTS="-Xms1024M -Xmx1024M" store/datastax/dse-server:5.1.5
then it gives the correct -Xms1024M -Xmx1024M...
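For the /config approach mentioned above, something along these lines should work (a sketch under the assumption that the image reads overrides from a /config bind mount, as described; the host directory name is illustrative):
mkdir -p ./cassandra-config
cat > ./cassandra-config/jvm.options <<'EOF'
-Xms1024M
-Xmx1024M
EOF
docker run --rm -e DS_LICENSE=accept \
-v $(pwd)/cassandra-config:/config \
store/datastax/dse-server:5.1.5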

Hyperledger fabricV1 on docker swarm

I have created a docker swarm with one manager and two workers, and I am trying to deploy Hyperledger Fabric on top of it. For this I am using the below command:
docker service create --name orderer.nokia.com hyperledger/fabric-orderer orderer\
--env ORDERER_GENERAL_LOGLEVEL=debug \
--env ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
--env ORDERER_GENERAL_GENESISMETHOD=file \
--env ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block \
--env ORDERER_GENERAL_LOCALMSPID=OrdererMSP \
--env ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp \
--env ORDERER_GENERAL_TLS_ENABLED=true \
--env ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key \
--env ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt \
--env ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt] \
--mount type=bind,source=../channel-artifacts/genesis.block,destination=/var/hyperledger/orderer/orderer.genesis.block \
--mount type=bind,source=../crypto-config/ordererOrganizations/nokia.com/orderers/orderer.nokia.com/msp,destination=/var/hyperledger/orderer/msp \
--mount type=bind,source=../crypto-config/ordererOrganizations/nokia.com/orderers/orderer.nokia.com/tls/,destination=/var/hyperledger/orderer/tls \
--publish 7050:7050
but I am getting the below error:
Error response from daemon: rpc error: code = 3 desc = name must be valid as a DNS name component
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
o8ftuvxa3szzhgphxc71w5fv9 * SwarmNode1-192 Ready Active Leader
sm7b4hf7oz9mkwksrxylu0ncq SwarmNode3-194 Ready Active
yag0gy3dlhu4fy8rl3iawro07 SwarmNode2-193 Ready Active
OS:Ubuntu
Docker version 17.06.1-ce, build 874a737
Had the same issue. In my case it was the names of the services having "." in them.
If you change it from --name orderer.nokia.com to --name orderernokiacom it should build correctly.
However, I am still trying to deploy chaincode successfully, so I am not 100% sure.
EDIT:
I have it set up and running with no problems now.
Indeed, the error you are getting is caused by the dots in the service names.
If for some reason you need your service names to contain dots, you can use network aliases instead (see the sketch after this answer).
To deploy in swarm mode, you first need to create an overlay network (if you are using compose, this has to be created outside the compose file).
And then everything should work just fine. For an example, have a look at https://github.com/endimion/HL_V1_test/blob/master/docker-swarm-compose.yml
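A sketch of the alias approach (assuming Docker 17.06+, which supports the long --network syntax; the network name is illustrative, and you would add back the --env, --mount and --publish flags from your original command):
docker network create --driver overlay fabric-net
docker service create --name orderer-nokia-com \
--network name=fabric-net,alias=orderer.nokia.com \
--publish 7050:7050 \
hyperledger/fabric-orderer orderer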
