Manually starting hyperledger peers using docker images - docker

The hyperledger project has a built-in docker image definition for running peer nodes. Given the vagrant focused development environment documentation, it's not immediately obvious that you can set up your own chain network using docker-compose.
To do that, first build the docker image by running this test (this test step is entirely dedicated to building the image):
go test github.com/hyperledger/fabric/core/container -run=BuildImage_Peer
Once the image is built, use docker-compose to launch the peer nodes. This folder has some pre-built yaml files for docker-compose:
github.com/hyperledger/fabric/bddtests
Use the following command to launch 3 peers (for instance):
docker-compose -f docker-compose-3.yml up --force-recreate -d
After the container instances are up, use docker inspect to get their IP addresses, and use port 5000 to call the REST APIs (refer to the documentation for the REST API spec).
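For example, a quick way to do that from the host (the container name here is an assumption — check docker ps for the real ones — and the /chain endpoint is from the v0.5-era REST API):

docker ps
docker inspect --format '{{ .NetworkSettings.IPAddress }}' bddtests_vp0_1
curl http://<container-ip>:5000/chain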

Now that the Hyperledger Fabric project has published its inaugural release (v0.5-developer-preview), we have begun publishing official Hyperledger docker images for the fabric-baseimage, fabric-peer and fabric-membersrvc.
These images can be deployed, as noted by other respondents, using docker-compose. As noted in tuand's answer above, the fabric/bddtests directory is a good source of compose files that can be repurposed.
Note that if you're running on a Mac or Windows using Docker for Mac (beta), you'll need to use port mapping to expose ports for a peer, as Docker for Mac does not support routing IP traffic to and from containers. Container linking works as expected. Hence, you will either need to map different ports for each of the peers, or only expose a single peer instance.
The following compose file will start a single peer node on a Mac using Docker for Mac. Simply run docker-compose up:
vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start

You can look in the hyperledger/fabric github repository under the ./bddtests and ./consensus/docker-compose-files directories for examples of how to set up peer networks of 3, 4 or 5 nodes.
Remember to expose port 5000 for one of the validating peers so that you can use the REST api to interact with the peer node.
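As a rough sketch of that idea, a two-peer compose file that publishes only vp0's REST port could look like the following (the environment variables and the vp0:30303 discovery address are assumptions modeled on the bddtests compose files, so check those for exact values):

vp0:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
  command: peer node start
vp1:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp1
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:30303
  links:
    - vp0
  command: peer node start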

There are two GitHub repositories that let you build Docker images with Hyperledger that you can run directly:
https://github.com/joequant/hyperledger
and
https://github.com/yeasy/docker-hyperledger-peer
Under yeasy there are some repositories that contain fabric deploy scripts.

Related

github workflow: "ECONNREFUSED 127.0.0.1:***" error when connecting to docker container

In my github actions workflow I am getting this error (ECONNREFUSED) while running my jest test script. The test uses axios to connect to my api, which is running in a container bootstrapped via docker-compose (created during the github workflow itself). That network has just 2 containers: the api, and postgres. So my test script is, I assume, on the "host network" (the github workflow runner), but it couldn't reach the docker network via the containers' mapped ports.
I then skipped the jest test entirely and just tried to ping the containers directly. Didn't work.
I then modified the workflow to inspect the default docker network that should have been created:
UPDATE 1
I've narrowed down the issue as follows. When I modified the compose file to rely on the default network (I no longer have a networks: key in my compose file):
So it looks as though the containers were never attached to the default bridge network.
UPDATE 2
It looks like I just have the wrong paradigm. After reading this: https://help.github.com/en/actions/configuring-and-managing-workflows/about-service-containers
I realise this is not how GA expects us to instantiate containers at all. Looks like I should be using services: nodes inside the workflow file, not using containers from my own docker-compose files. 🤔 Gonna try that...
So the answer is:
do not use docker-compose to build your own custom containers. GA does not support this yet.
Use services: in your workflow .yml file to launch your containers, which must be public docker images. If your container is based on a private image or custom dockerfile, it's not supported yet by GA.
So instead of "docker-compose up" to bootstrap postgres + my api for integration testing, I had to:
Create postgres as a service container in my github workflow .yml
Change my test command in package.json to:
first start the api as a background process (because I can't create my own docker image from it 🙄), then
invoke my test framework next (as the foreground process)
so npm run start & npm run <test launch cmds>. This worked.
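A minimal sketch of the resulting workflow, assuming a Node project with an npm test:integration script and a Postgres-backed api (the image tag, script names and DATABASE_URL value are my assumptions, not from the original setup):

jobs:
  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm ci
      # start the api in the background, then run the tests in the foreground
      - run: npm run start & npm run test:integration
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/postgres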
There are several possibilities here.
Host Network
Since you are using docker compose, when you start the api container, publish the endpoint that the api is listening on to the host machine. You can achieve this by doing:
version: "3"
services:
  api:
    ...
    ports:
      - "3010:3010"
in your docker-compose.yml. This will publish the ports, similar to doing docker run ... --publish 3010:3010. See reference here: https://docs.docker.com/compose/compose-file/#ports
Compose network
By default, docker-compose will create a network named after the compose project (the directory containing the compose file), e.g. backend-net_default. Containers created by this docker-compose.yml have access to other containers via this network. The host name used to reach another container on the network is simply the name of its service. For example, your tests could access the api endpoint using the host api (assuming that is the name of your api service), e.g.:
http://api:3010
The one caveat here is that the tests must be launched in a container that is managed by that same docker-compose.yml, so that it may access the common backend-net_default network.
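A sketch of what that could look like, with the tests running as their own service on the same compose network (the service names, test command and API_BASE_URL variable are assumptions):

version: "3"
services:
  api:
    build: .
    ports:
      - "3010:3010"
  tests:
    build: .
    command: npm run test:integration
    environment:
      - API_BASE_URL=http://api:3010
    depends_on:
      - api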

How to switch to docker Compose file v3 for applications running exclusively on my workstation?

There are a lot of applications which I launch on my workstation using docker-compose up.
Reasons:
They don't have an installer, or I don't want to use it
They require a dedicated storage engine to be present
They require a build process step
They are created by me and I want them to be easily launched on any workstation
etc.
So I usually end up with the following file structure:
myAppDir
- docker-compose.yml
- Dockerfile (not always)
- someConfigFile
And my docker-compose.yml is something like this:
(It can contain 2 or 3 services, but I provide the simplest form that I use)
version: '3.7'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    volumes:
      - ./mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306
Then when I need to launch the application I just perform:
docker-compose up # (or with --build)
Recently I tried to add:
deploy:
  resources:
    limits:
      cpus: '0.50'
      memory: 200M
and got a message:
Some services (mysql) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
So I tried:
docker stack deploy mystack --compose-file docker-compose.yml
and got message:
Ignoring unsupported options: restart
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
This seems more complex than docker-compose up.
I saw that I can use --compatibility flag e.g.
docker-compose --compatibility up
But the word compatibility means to me that I should soon switch to a new way of launching my apps locally.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
If you want to specify memory limits and similar constraints for local containers, you need to use a version 2 Compose file. This is called out in the documentation for the deploy: resources: section. docker/compose#4513 has some reasonably clear statements that Compose file version 2 is more targeted at local setups and version 3 more at Swarm installations, and that Docker intends to keep supporting both file versions.
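For example, a version 2 file roughly equivalent to the limits above might look like this (mem_limit is the v2 key; cpus requires compose file format 2.2 or later):

version: "2.4"
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    mem_limit: 200M
    cpus: 0.5
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306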
Docker has put many options and functions specific to their Swarm cluster-installation mode into the core product. Anything that mentions a "stack", for example, is specific to a Swarm setup. One consequence of Swarm and plain-Docker things being combined together is that the deploy: Docker Compose options only have an effect in Swarm mode. The documentation for the deploy: key notes:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
Compose file v3 is meant to be used with Docker Swarm deployments, therefore you need to run Docker in Swarm mode; otherwise just keep using v2 and its simpler interface for localhost development.
For example restart is ignored because that responsibility belongs now to the Docker Swarm, not to Docker itself.
Using the --compatibility flag essentially converts your v3 compose file into a v2 compose file at runtime.
So in short, just use v3 if you want to run Docker in Swarm mode to take advantage of all its new features; it's kind of a Kubernetes in Docker land.

What is the difference between docker and docker-compose

docker and docker-compose seem to be interacting with the same Dockerfile; what is the difference between the two tools?
The docker cli is used when managing individual containers on a docker engine. It is the client command line to access the docker daemon api.
The docker-compose cli can be used to manage a multi-container application. It also moves many of the options you would enter on the docker run cli into the docker-compose.yml file for easier reuse. It works as a front end "script" on top of the same docker api used by docker, so you can do everything docker-compose does with docker commands and a lot of shell scripting. See this documentation on docker-compose for more details.
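For instance, a single compose service is roughly equivalent to creating a network and issuing a docker run command yourself (the myapp project name and nginx image here are just placeholders):

docker network create myapp_default
docker run -d --name myapp_web_1 --network myapp_default -p 8080:80 nginx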
Update for Swarm Mode
Since this answer was posted, docker has added a second use of docker-compose.yml files. Starting with the version 3 yml format and docker 1.13, you can use the yml with docker-compose and also to define a stack in docker's swarm mode. To do the latter you need to use docker stack deploy -c docker-compose.yml $stack_name instead of docker-compose up and then manage the stack with docker commands instead of docker-compose commands. The mapping is a one for one between the two uses:
Compose Project -> Swarm Stack: A group of services for a specific purpose
Compose Service -> Swarm Service: One image and its configuration, possibly scaled up.
Compose Container -> Swarm Task: A single container in a service
For more details on swarm mode, see docker's swarm mode documentation.
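A minimal command sequence for that swarm-mode path might be:

docker swarm init
docker stack deploy -c docker-compose.yml mystack
docker stack services mystack
docker stack rm mystack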
docker manages single containers
docker-compose manages multiple container applications
Usage of docker-compose requires 3 steps:
Define the app environment with a Dockerfile
Define the app services in docker-compose.yml
Run docker-compose up to start and run the app
Below is a docker-compose.yml example taken from the docker docs:
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
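The build: . line above assumes a Dockerfile sitting next to the compose file; in the Docker docs this example is a small Flask app listening on port 5000, so a plausible (assumed, not the official) Dockerfile would be:

FROM python:3.9-alpine
WORKDIR /code
ENV FLASK_APP=app.py
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]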
A Dockerfile is a text document that contains all the commands/instructions a user could call on the command line to assemble an image.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. By default, docker-compose expects the name of the Compose file to be docker-compose.yml or docker-compose.yaml. If the compose file has a different name, we can specify it with the -f flag.
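For example (the file name here is arbitrary):

docker-compose -f custom-compose.yml up -d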
See the Docker Compose documentation for more details.
docker, or more specifically docker engine, is used when we want to handle only one container, whereas docker-compose is used when we have multiple containers to handle. We need multiple containers when we have more than one service to take care of — for example, an application with a client-server model: one container for the server and another for the client. Docker Compose usually requires each container to have its own Dockerfile, plus a yml file that ties all the containers together.

Managing a group of docker containers without the sweat

I am using a bash script to spin up a virtual network with two docker containers on it. This feels prehistoric. Is there some tool that can spin such an ensemble up and down & show its current status, or does one have to take care of that on their own?
In the case of docker-compose, it is unclear from the Docker documentation whether docker-compose is self-contained or tied to Swarm, and an authoritative example of a compose definition file, with commands for starting and stopping the ensemble, would be very helpful.
E.g. here is what a bash script would do to define/start an application of two interrelated containers; needless to say, this script does not help with managing its lifecycle beyond just starting it up once.
docker network create --driver bridge FooAppNet
docker run --rm --net=FooAppNet --name=component1 -p 9000:9000 component1-image
docker run --rm --net=FooAppNet --name=component2 component2-image
Also in this example, container component1 exposes port 9000 to the host, and its contained application has it hardwired in its configuration file, to consume the service of component2 by its name (following the common docker networking practice relying on docker networks' internal DNS).
For the example you've given, the following Docker Compose file would give you what you want:
component1:
  image: component1-image
  net: FooAppNet
  container_name: component1
  ports:
    - "9000:9000"
component2:
  image: component2-image
  net: FooAppNet
  container_name: component2
If you store this in a docker-compose.yml file and then run docker-compose up -d it will create/start/restart your containers and assign them to your FooAppNet network.
The -d flag runs the containers in detached mode and prevents the logging output being printed to your terminal window when you start the containers. You can still get their log via docker logs -f ... like with any other container.
You can then use docker-compose down and docker-compose restart etc to control the ensemble's lifecycle. As an aside, using variables can spice up the definition file towards greater flexibility.
See in the comments below about using the network automatically spun up by docker compose.
TL;DR ― see the beginning section of https://docs.docker.com/compose/networking/ for the solution. It walks you through the entire necessary configuration. Works nicely; you just need to master the various docker-compose command-line options to be productive with it.
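As a sketch of that approach, the same two components can rely entirely on the network Compose creates for the project, with no net: key and no manual docker network create (the v2 file version and structure here are my assumptions):

version: "2"
services:
  component1:
    image: component1-image
    ports:
      - "9000:9000"
  component2:
    image: component2-image

Each service can then reach the other by its service name (component1, component2) on that default project network.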

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't extend this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article from the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname, docker will take care of forwarding that connection to the IP it assigned to the neo4j container
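Put together, a sketch of those steps (the container names match the ones used further below, and the -e NEO4J_URL part assumes the app reads that variable, as discussed later in this answer):

docker run -d --name myneo4j neo4j
docker run -d --name mynodejs --link myneo4j:db -e NEO4J_URL=http://db:7474 <your nodejs image>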
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. And setting this variable to none will disable the authentication all together.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container, setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, docker compose will instruct the Docker engine to expose the web container's port 3000 on the docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is cooked and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at location /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.
