I deployed a demo following the Getting Started document at http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
When the cli node executes the instantiateChaincode function in script.sh, it throws an error:
Error endorsing chaincode: rpc error: code = 2 desc = Timeout expired while starting chaincode mycc:1.0(networkid:dev,peerid:peer0.org1.example.com,tx:36950e4638442cdd37215838c2bd6062af63b6f0e729b43d76eda0f3e1eb6b8b)
I have not been able to get rid of this error for a long time.
How can I fix it?
The same error happened to me, because I had set the wrong network in my docker-compose file. The peers and orderer were not on the same network, so they could not communicate with each other.
Please check if you set this while starting the network using docker-compose: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
and the related networks section in your compose file.
For example, if you configure your compose file like this:
peer0.org1.example.com:
  container_name: peer0.org1.example.com
  extends:
    file: base/docker-compose-base.yaml
    service: peer0.org1.example.com
  networks:
    - byfn
you need to set CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE, where the prefix of the value is your project directory's name and the suffix is your network's name (i.e. byfn), joined with an underscore ("_").
You can follow this URL to set up the network in docker-compose. Make sure the peer and the orderer are on the same network.
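As a hedged sketch of that naming rule (the project name "net" and the compose file name are placeholders, not part of the original setup):

peer0.org1.example.com:
  environment:
    # Compose names the network <project>_<network key>, so with
    # COMPOSE_PROJECT_NAME=net and the "byfn" key above, the peer's
    # chaincode containers attach to "net_byfn".
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
  networks:
    - byfn

COMPOSE_PROJECT_NAME=net docker-compose -f docker-compose-cli.yaml up -d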
This is a tough one to diagnose with that amount of information. Some helpful tips to remedy it, though: make sure you have the most up-to-date fabric images and utilities (no need to build from source, just follow the instructions outlined in the prerequisites section). Check your versions of Go, Docker, etc. Kill any stale containers and prune your persisted docker networks: docker rm -f $(docker ps -aq) && docker network prune.
Now RESTART your docker engine. Ironically this actually solves connectivity errors more often than one would think. Now proceed by following the "Build Your First Network" tutorial. http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
Related
I have a docker-compose.yml file which works with docker-compose up --build. My app works and everything is fine.
version: '3'
services:
  myapp:
    container_name: myapp
    restart: always
    build: ./myapp
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - myapp
But when I use docker stack deploy -c docker-compose.yml myapp, I get the following error:
Ignoring unsupported options: build, restart
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network myapp_default
Creating service myapp_myapp
failed to create service myapp_myapp: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
Any hints on how I should "translate" the docker-compose.yml file to make it compatible with docker stack deploy?
To run containers in swarm mode, you do not build them on each swarm node individually. Instead you build the image once, typically on a CI server, push to a registry server (often locally hosted, or you can use docker hub), and specify the image name inside your compose file with an "image" section for each service.
Doing that will get rid of the hard error. You'll likely remove the build section of the compose file since it no longer applies.
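As a sketch of that flow (the registry address and tag are placeholders, not real infrastructure):

# Build the image once, push it to a registry reachable by all nodes,
# then reference it in the compose file's "image" field.
docker build -t registry.example.com/myapp:1.0 ./myapp
docker push registry.example.com/myapp:1.0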
Specifying "container_name" is unsupported because it would break the ability to scale or perform updates (a container name must be unique within the docker engine). Let swarm name the containers and reference your app on the docker network by it's service name.
Specifying "depends_on" is not supported because containers may be started on different nodes, and rolling updates/failure recovery may remove some containers providing a service after the app started. Docker can retry the failing app until the other service starts up, or preferably you configure an entrypoint that waits for the dependencies to become available with some kind of ping for a minute or two.
Without seeing your Dockerfile, I'd also recommend setting up a healthcheck on each image. Swarm mode uses this to control rolling updates and recover from application failures.
Lastly, consider adding a "deploy" section to your compose file. This tells swarm mode how to deploy and update your service, including how many replicas, constraints on where to run, memory and CPU limits and requirements, and how fast to update the service. You can define a restart policy here as well but I recommend against it since I've seen docker engines restarting containers that conflict with swarm mode deploying containers on other nodes, or even a new container on the same node.
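Putting those pieces together, here is a hedged sketch of what a swarm-ready version of the compose file above might look like; the registry address, tags, and healthcheck command are assumptions, not part of the original setup:

version: '3'
services:
  myapp:
    # A pre-built, pushed image replaces the unsupported "build" section.
    image: registry.example.com/myapp:1.0
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
    ports:
      - "8000:8000"
    healthcheck:
      # Assumes curl is present in the image and the app answers on 8000.
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
  nginx:
    image: registry.example.com/nginx:1.0
    ports:
      - "80:80"
    deploy:
      replicas: 1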
You can see the full compose file documentation with all of these options here: https://docs.docker.com/compose/compose-file/
What we want to do:
We want to use docker-compose to link one already running container (A) to another container (B) by container name. We use "external_links" because the two containers are started from different docker-compose.yml files.
Problem:
Container B fails to start with the following error, although a container with that name is running.
ERROR: for container_b Cannot start service container_b: Cannot link to a non running container: /PREVIOUSLY_LINKED_ID_container_a_1 AS /container_b_1/container_a_1
output of "docker ps":
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS      NAMES
RUNNING_ID     container_a   "/docker-entrypoint.s"   15 minutes ago   Up 15 minutes   5432/tcp   container_a_1
Sample code:
docker-compose.yml of Container B:
container_b:
  external_links:
    - container_a_1
What differentiates this question from the other "how to fix" questions:
- We can't use "sudo service docker restart" (which works), as this is a production environment.
- We don't want to fix this manually every time, but rather find the reason, so that we can
  - understand what we are doing wrong
  - understand how to avoid this
Assumptions:
It seems like two instances of container_a exist (RUNNING_ID and PREVIOUSLY_LINKED_ID).
This might happen because we
- rebuilt the container via docker-compose build, and
- changed the forwarded external port of the container (80801:8080)
Comment:
Do not use docker-compose down as suggested in the comments; this removes volumes!
Docker links are deprecated so unless you need some functionality they provide or are on an extremely old version of docker, I'd recommend switching to docker networks.
Since the containers you want to connect appear to be started in separate compose files, you would create that network externally:
docker network create app_net
Then in your docker-compose.yml files, you connect your containers to that network:
version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_a:
    # ...
    networks:
      - app_net
Then in your container_b, you would connect to container_a as "container_a", not "container_a_1".
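For completeness, a sketch of the corresponding compose file on container_b's side (the service's other settings are omitted):

version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_b:
    # ...
    networks:
      - app_net

The app inside container_b can then reach the other stack's service at the hostname "container_a" through the shared network's DNS.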
As an aside, docker-compose down is not documented to remove volumes unless you pass the -v flag. Perhaps you are using anonymous volumes, in which case I'm not sure that docker-compose up would know where to find your data. A named volume is preferred. More than likely, your data was not being stored in a volume, which is dangerous and removes your ability to update your containers:
$ docker-compose down --help
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used

Networks and volumes defined as `external` are never removed.

Usage: down [options]

Options:
    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a
                                   custom tag set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes`
                        section of the Compose file and anonymous volumes
                        attached to containers.
    --remove-orphans    Remove containers for services not defined in
                        the Compose file
The hyperledger project has a built-in docker image definition for running peer nodes. Given the vagrant-focused development environment documentation, it's not immediately obvious that you can set up your own chain network using docker-compose.
To do that, first build the docker image by running this test (this test step is entirely dedicated to building the image):
go test github.com/hyperledger/fabric/core/container -run=BuildImage_Peer
Once the image is built, use docker-compose to launch the peer nodes. This folder has some pre-built yaml files for docker-compose:
github.com/hyperledger/fabric/bddtests
Use the following command to launch 3 peers (for instance):
docker-compose -f docker-compose-3.yml up --force-recreate -d
After the container instances are up, use docker inspect to get the IP addresses and use port 5000 to call the REST APIs (refer to the documentation for REST API spec).
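For example, a hedged one-liner (the container name is whatever docker ps shows for your peer; "bddtests_vp0_1" is a hypothetical compose-generated name):

# Print the IP address of a peer container on the default bridge network.
docker inspect --format '{{ .NetworkSettings.IPAddress }}' bddtests_vp0_1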
Now that the Hyperledger Fabric project has published its inaugural release (v0.5-developer-preview), we have begun publishing official Hyperledger docker images for the fabric-baseimage, fabric-peer and fabric-membersrvc.
These images can be deployed, as noted by other respondents, using docker-compose. As noted above in the response by @tuand, the fabric/bddtests are a good source of compose files that could be repurposed.
Note that if running on a Mac or Windows using Docker for Mac (beta), you'll need to use port mapping to expose ports for a peer, as Docker for Mac does not support routing IP traffic to and from containers. Container linking works as expected. Hence, you will either need to map different ports for each of the peers, or only expose a single peer instance.
The following compose file will start a single peer node on a Mac using Docker for Mac. Simply run docker-compose up:
vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
You can look in the hyperledger/fabric github repository under the ./bddtests and ./consensus/docker-compose-files directories for examples on how to setup peer networks of 3, 4 or 5 nodes.
Remember to expose port 5000 for one of the validating peers so that you can use the REST api to interact with the peer node.
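For instance, assuming the GET /chain endpoint from the v0.5 REST API spec and the 5000:5000 mapping in the compose file above, you could query the chain with:

curl http://localhost:5000/chain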
There are two github repositories that let you build docker images with hyperledger that you can run directly
https://github.com/joequant/hyperledger
and
https://github.com/yeasy/docker-hyperledger-peer
Under yeasy there are some repositories that contain fabric deploy scripts.
I am using a bash script to spin up a virtual network with two docker containers on it. This feels prehistoric. Is there some tool that can spin such an ensemble up and down and show its current status, or does one have to take care of that on one's own?
In the case of docker-compose, it is unclear from the Docker documentation whether docker-compose is self-contained or tied to swarm, and an authoritative example of a compose definition file, with commands for starting and stopping the ensemble, would be very helpful.
E.g. here is what a bash script would do to define/start an application of two interrelated containers; needless to say, this script does not help with managing the lifecycle beyond just starting it up once.
docker network create --driver bridge FooAppNet
docker run --rm --net=FooAppNet --name=component1 -p 9000:9000 component1-image
docker run --rm --net=FooAppNet --name=component2 component2-image
Also in this example, container component1 exposes port 9000 to the host, and its contained application has it hardwired in its configuration file to consume the service of component2 by its name (following the common docker networking practice that relies on the internal DNS of docker networks).
For the example you've given, the following Docker Compose file would give you what you want:
component1:
  image: component1-image
  net: FooAppNet
  container_name: component1
  ports:
    - "9000:9000"
component2:
  image: component2-image
  net: FooAppNet
  container_name: component2
If you store this in a docker-compose.yml file and then run docker-compose up -d it will create/start/restart your containers and assign them to your FooAppNet network.
The -d flag runs the containers in detached mode and prevents the logging output being printed to your terminal window when you start the containers. You can still get their log via docker logs -f ... like with any other container.
You can then use docker-compose down and docker-compose restart etc. to control the ensemble's lifecycle. As an aside, using variables in the definition file can make it more flexible.
See in the comments below about using the network automatically spun up by docker compose.
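To illustrate that point, here is a hedged sketch using the version 2 file format, where compose spins up a default network for the project and service names resolve via built-in DNS, so neither the pre-created FooAppNet nor the legacy net: key is needed:

version: '2'
services:
  component1:
    image: component1-image
    ports:
      - "9000:9000"
  component2:
    image: component2-image

With this file, component1 can reach component2 simply as "component2" on the project's default network.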
TL;DR ― see the beginning section of https://docs.docker.com/compose/networking/ for the solution. It walks you through the entire necessary configuration. It works nicely, though you need to master the various docker-compose command-line options to be productive with it.
I'm trying to make a Dockerfile based on the RabbitMQ repository with a customized policy set. The problem is that I can't use CMD or ENTRYPOINT, since that would override the base Dockerfile's, and then I'd have to come up with my own; I don't want to go down that path. Let alone the fact that if I don't use RUN, it will be part of the runtime commands, and I want this to be included in the image, not just the container.
The other thing I can do is use the RUN command, but the problem with that is that the RabbitMQ server is not running at build time, and there's also no --offline flag for the set_policy command of the rabbitmqctl program.
When I use docker's RUN command to set the policy, here's the error I face:
Error: unable to connect to node rabbit#e06f5a03fe1f: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit#e06f5a03fe1f]
rabbit#e06f5a03fe1f:
* connected to epmd (port 4369) on e06f5a03fe1f
* epmd reports: node 'rabbit' not running at all
no other nodes on e06f5a03fe1f
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-136#e06f5a03fe1f'
- home dir: /var/lib/rabbitmq
- cookie hash: /Rw7u05NmU/ZMNV+F856Fg==
So is there any way I can set a policy for the RabbitMQ without writing my own version of CMD and/or ENTRYPOINT?
You're in a slightly tricky situation with RabbitMQ, as its Mnesia data path is based on the host name of the container.
root#bf97c82990aa:/# ls -1 /var/lib/rabbitmq/mnesia
rabbit#bf97c82990aa
rabbit#bf97c82990aa-plugins-expand
rabbit#bf97c82990aa.pid
For other image builds you could seed the data files, or write a script that RUN calls to launch the application or database and configure it. With RabbitMQ, the container host name will change between image build and runtime so the image's config won't be picked up.
I think you are stuck with doing the config on container creation or at startup time.
Options
Creating a wrapper CMD script to apply the policy after startup is a bit complex, as /usr/lib/rabbitmq/bin/rabbitmq-server runs rabbit in the foreground, which means you don't have access to an "after startup" point. Docker doesn't really do background processes, so rabbitmq-server -detached isn't much help.
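For illustration only, a rough sketch of such a wrapper; the policy name and pattern are assumptions, and the pid file path follows the mnesia layout shown above:

#!/bin/sh
# Start the broker in the background, wait until it has written its
# pid file, apply the policy, then block so the container stays up.
rabbitmq-server &
rabbitmqctl wait "/var/lib/rabbitmq/mnesia/rabbit@$(hostname).pid"
rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'
wait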
If you were to use something like Ansible, Chef, or Puppet to set up the containers, you could configure a fixed hostname for the container's startup, then start it up and configure the policy as the next step. This only needs to be done once, as long as the hostname is fixed and you are not using the --rm flag.
At runtime, systemd could complete the configuration of a service with ExecStartPost. I'm sure most service managers have the same feature. I guess you could end up dropping messages, or at least causing errors at every startup, if anything came in before configuration was finished.
You can configure the policy as described here.
Docker compose:
rabbitmq:
  image: rabbitmq:3.7.8-management
  container_name: rabbitmq
  volumes:
    - ~/rabbitmq/data:/var/lib/rabbitmq:rw
    - ./rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    - ./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json
  ports:
    - "5672:5672"
    - "15672:15672"