Hyperledger Fabric - Join peer from another host - docker

My setup is the following:
2 Linux virtual machines running in VirtualBox;
Both hosts are engaged in a Docker Swarm;
Host 1 consists of: 1 orderer, 1 organization with 2 peers and a cli container;
Host 2 consists of: 1 organization with 2 peers;
I'm using the following tutorial as a reference: https://hyperledger.github.io/composer/latest/tutorials/deploy-to-fabric-multi-org
How I'm actually running the Fabric network:
I'm generating the channel artifacts & crypto-config files identically on both hosts (the standard generation commands are sketched after this list).
Starting fabric on host 2 - with both peers, couchdbs and ca;
Starting fabric on host 1;
Generating a channel on host 1; joining peers from host 1 and updating anchor peer;
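The generation step referenced above is the standard cryptogen/configtxgen sequence; the profile names and channel ID below are placeholders for whatever configtx.yaml actually defines:
cryptogen generate --config=./crypto-config.yaml
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/ManagerMSPanchors.tx -channelID mychannel -asOrg ManagerMSP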
When inspecting the overlay swarm network, I'm able to see the peers and containers of both hosts;
My problems appear when trying to make the peers from host 2 join the channel. I'm trying to add them to the channel through the cli on host 1.
But I'm receiving the following error:
Error: error getting endorser client for channel: endorser client failed to connect to peer0.sponsor.example.com:7051: failed to create new connection: context deadline exceeded
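For context, joining a host 2 peer from the cli on host 1 means retargeting the cli environment at that peer before calling peer channel join; a rough sketch, where the paths and MSP ID are assumptions based on the crypto-config layout used above:
export CORE_PEER_LOCALMSPID=SponsorMSP
export CORE_PEER_ADDRESS=peer0.sponsor.example.com:7051
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/sponsor.example.com/users/Admin@sponsor.example.com/msp
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/sponsor.example.com/peers/peer0.sponsor.example.com/tls/ca.crt
peer channel join -b mychannel.block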
This is my docker-compose-cli.yaml for host 1:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
volumes:
  orderer.example.com:
  peer0.manager.example.com:
  peer1.manager.example.com:
  peer0.sponsor.example.com:
  peer1.sponsor.example.com:

networks:
  example:

services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base-1.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - example

  peer0.manager.example.com:
    container_name: peer0.manager.example.com
    extends:
      file: base/docker-compose-base-1.yaml
      service: peer0.manager.example.com
    networks:
      - example

  peer1.manager.example.com:
    container_name: peer1.manager.example.com
    extends:
      file: base/docker-compose-base-1.yaml
      service: peer1.manager.example.com
    networks:
      - example

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.manager.example.com:7051
      - CORE_PEER_LOCALMSPID=ManagerMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/peers/peer0.manager.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/peers/peer0.manager.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/peers/peer0.manager.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/users/Admin@manager.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.manager.example.com
      - peer1.manager.example.com
    networks:
      - example

The node fails to connect to peer0.sponsor.example.com. This is probably due to changes you have made to customize the network addresses to your liking; is this something you've customized? I haven't followed this tutorial, but I ran into similar problems customizing the first-network example while following this one.
Make sure your peer addresses are consistent across configtx.yaml, crypto-config.yaml, docker-compose-cli.yaml and docker-compose-base.yaml (and also peer-base.yaml if you changed the network name); a quick sanity check is sketched below.
If the peer addresses don't match across those files, you will probably need to generate the channel transactions again and restart the network from scratch, since the channel configuration recorded on the blockchain will not match your current network configuration.
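One way to run that check is to grep for the peer hostnames across the files and confirm every hit agrees; a minimal sketch, assuming the usual first-network directory layout:
grep -rn "peer0.sponsor" configtx.yaml crypto-config.yaml docker-compose-cli.yaml base/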

I was making a very simple mistake: I was not copying the generated crypto material from one host to the other; I was generating new crypto material on each host, thinking it would turn out the same.
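So the fix is simply to copy the material generated on host 1 over to host 2 instead of regenerating it there; a sketch, with the user, host, and destination path as placeholders:
# run on host 1, after cryptogen/configtxgen have been run there once
scp -r crypto-config channel-artifacts user@host2:/path/to/fabric-network/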


ERROR: Pool overlaps with other one on this address space

I'm trying to implement this tutorial. The docker-compose content is this:
# WARNING: Do not deploy this tutorial configuration directly to a production environment
#
# The tutorial docker-compose files have not been written for production deployment and will not
# scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
# goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
# are running at full debug and extra ports have been exposed to allow for direct calls to services.
# They also contain various obvious security flaws - passwords in plain text, no load balancing,
# no use of HTTPS and so on.
#
# This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
# and so on, purely so that a single docker-compose file can be read as an example to build on,
# not use directly.
#
# When deploying to a production environment, please refer to the Helm Repository
# for FIWARE Components in order to scale up to a proper architecture:
#
# see: https://github.com/FIWARE/helm-charts/
#
version: "3.5"
services:
# Orion is the context broker
orion:
image: fiware/orion:latest
hostname: orion
container_name: fiware-orion
depends_on:
- mongo-db
networks:
- default
expose:
- "1026"
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG
healthcheck:
test: curl --fail -s http://orion:1026/version || exit 1
interval: 5s
# Tutorial displays a web app to manipulate the context directly
tutorial:
image: fiware/tutorials.context-provider
hostname: iot-sensors
container_name: fiware-tutorial
networks:
- default
expose:
- "3000"
- "3001"
ports:
- "3000:3000"
- "3001:3001"
environment:
- "DEBUG=tutorial:*"
- "PORT=3000"
- "IOTA_HTTP_HOST=iot-agent"
- "IOTA_HTTP_PORT=7896"
- "DUMMY_DEVICES_PORT=3001"
- "DUMMY_DEVICES_API_KEY=4jggokgpepnvsb2uv4s40d59ov"
- "DUMMY_DEVICES_TRANSPORT=HTTP"
iot-agent:
image: fiware/iotagent-ul:latest
hostname: iot-agent
container_name: fiware-iot-agent
depends_on:
- mongo-db
networks:
- default
expose:
- "4041"
- "7896"
ports:
- "4041:4041"
- "7896:7896"
environment:
- "IOTA_CB_HOST=orion"
- "IOTA_CB_PORT=1026"
- "IOTA_NORTH_PORT=4041"
- "IOTA_REGISTRY_TYPE=mongodb"
- "IOTA_LOG_LEVEL=DEBUG"
- "IOTA_TIMESTAMP=true"
- "IOTA_MONGO_HOST=mongo-db"
- "IOTA_MONGO_PORT=27017"
- "IOTA_MONGO_DB=iotagentul"
- "IOTA_HTTP_PORT=7896"
- "IOTA_PROVIDER_URL=http://iot-agent:4041"
# Database
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
expose:
- "27017"
ports:
- "27017:27017"
networks:
- default
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
healthcheck:
test: |
host=`hostname --ip-address || echo '127.0.0.1'`;
mongo --quiet $host/test --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' && echo 0 || echo 1
interval: 5s
networks:
default:
ipam:
config:
- subnet: 172.18.1.0/24
volumes:
mongo-db: ~
But when I run Docker Compose with the command "docker-compose up -d" I get this error:
WARNING: The host variable is not set. Defaulting to a blank string.
Creating network "fiware_default" with the default driver
ERROR: Pool overlaps with other one on this address space
I also see these networks when running the command "docker network ls":
NETWORK ID     NAME            DRIVER    SCOPE
78403834b9bd   bridge          bridge    local
1dc5b7d0534b   hadig_default   bridge    local
4162244c37b0   host            host      local
ac5a94a89bde   none            null      local
I see no conflict with the name "fiware_default". Where is the problem?
The "pool" the error message refers to is the 172.18.1.0/24 CIDR block that file manually specifies. If something else on your system is using that network space, it won't start up. (Docker might have assigned another Compose file's network to 172.18.0.0/16, for example.)
You don't usually need to manually specify IP addresses in Docker at all, and so you should remove that ipam: block. Having done that, you're telling Compose to configure the default network with default settings, and you can actually remove the entire networks: block at the end of the file.
The exception to this is if your host network environment is using some of the same IP address blocks, and then you do potentially need an override like this. If you run ifconfig or a similar command from the host (or look at your host's network settings from a desktop application) and your host or a VPN is using a 172.18.1.* address, you'll also get this message. In that case, change the network to something else; if you only need a /24 (254 addresses) then setting subnet: 192.168.123.0/24 (where "123" can be any number between 1 and 254) should get you past this.
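If you do keep the override, the change is only the subnet value inside the existing ipam: block, e.g.:
networks:
  default:
    ipam:
      config:
        - subnet: 192.168.123.0/24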

How can I connect from `project` to `mysql` container in docker swarm?

I am trying to deploy a stack with the docker swarm with the following configuration docker-compose.yaml file as below via the command:
docker stack deploy --with-registry-auth -c docker-compose.yaml project
version: "3.9"
services:
mysql:
image: mysql:8.0
deploy:
replicas: 1
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
ports:
- 3306:3306
environment:
MYSQL_ROOT_HOST: '%'
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: project_production
MYSQL_USER: username
MYSQL_PASSWORD: password
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- internal
website:
image: registry.gitlab.com/project/project-website:latest
networks:
- internal
deploy:
replicas: 1
ports:
- 3000:3000
environment:
- RAILS_ENV=production
- MYSQL_HOST=mysql
- ES_HOST=http://es01
- project_DATABASE_USERNAME=root
- project_DATABASE_PASSWORD=root
depends_on:
- es01
- mysql
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
mysql_data:
networks:
internal:
external: true
name: project
Before I deploy the stack I also have created the network for the project via the following command:
docker network create -d overlay project
But when I check the logs for the project using the docker logs command, I see the following error that stops my project from starting:
Mysql2::Error: Host '10.0.2.202' is not allowed to connect to this MySQL server
I followed the documentation exactly, so I am not sure what is wrong with the settings I have come up with!
Question:
How can I connect from project to mysql container in docker swarm?
Based on the documentation, Docker Swarm automatically creates the overlay network for you. So I think you don't need to create an external network by default, unless you have specific needs:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
As Chris also mentioned in the comments, the DB credentials also don't match.
OPTIONAL: MYSQL_ROOT_HOST is only necessary if you want to connect as the root user, which is not recommended in production environments. There's also no need to expose the port to the host machine, since the database service will only be used from inside the cluster. So if you still want to use the root user, you can set the variable to allow connections only from inside the cluster, e.g. MYSQL_ROOT_HOST=10.% (MySQL host patterns use % as the wildcard).
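A sketch of the mysql service with those two suggestions applied; the 10.% pattern assumes the overlay network sits in a 10.x.x.x range, so adjust it to your actual subnet:
mysql:
  image: mysql:8.0
  deploy:
    replicas: 1
  volumes:
    - mysql_data:/var/lib/mysql
  networks:
    - internal
  # no ports: section - the database is reached only over the overlay network
  environment:
    MYSQL_ROOT_HOST: '10.%'        # assumption: cluster-internal addresses only
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: project_production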

no data exchanged between IBpy2 and IBGateway

I am using backtrader as client with IBpy2 to access my IBC controlled IBGateway running on Docker.
I'm facing the issue that my system starts and just hangs there, with no errors or printed debug info.
I debugged my way as far as this line, reading:
self.m_serverVersion = self.m_reader.readInt()
This line waits to receive the server version over the connection, but it never arrives.
I get this only when IBGateway runs through Docker; I don't understand how it's possible that IBpy can establish a connection but cannot exchange data.
I could not pinpoint where the problem might be. The fact that it happens only when IBC runs under Docker Compose suggests the problem depends on the Compose setup; here's my docker-compose.yml file:
--- updated: ---
version: '3.7'
services:
  trader:
    build: ./
    image: mytrader
    container_name: mytrader
    networks:
      - trading
    depends_on:
      - tws
  tws:
    build: ./ib-docker
    image: ibconnect
    container_name: ibconnect
    ports:
      # - "4001:4001"
      - "4003:4003"
      - "5901:5901"
    volumes:
      - ./ib-docker/config.ini:/root/ibc/config.ini
      # - ./ib-docker/twsstart.sh:/opt/ibc/twsstart.sh
      - ./ib-docker/gatewaystart.sh:/opt/ibc/gatewaystart.sh
    environment:
      - TZ=UTC
      # Variables pulled from /root/IBController/IBControllerGatewayStart.sh
      - VNC_PASSWORD=password
      - IBC_PATH=/opt/ibc
      - LOG_PATH=/root/ibc/logs
    env_file:
      - tws_credentials.env
    networks:
      - trading
networks:
  trading:
    driver: bridge
and the list of networks
% docker network ls
NETWORK ID     NAME                   DRIVER    SCOPE
4ad25f1cf0f4   bridge                 bridge    local
9ca6f0e3f509   giuliotrader_default   bridge    local
3afbca83e020   giuliotrader_trading   bridge    local
73c2590a3a11   host                   host      local
34e58c19f5e3   none                   null      local
Happy to post any additional files or info as needed.
Thanks.
Good afternoon. Maybe you should use links from trader to tws:
services:
  trader:
    links:
      - tws
    build: ./
    image: mytrader
    container_name: mytrader
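After redeploying, a rough way to confirm that trader can actually reach the gateway port over the Compose network; this assumes the mytrader image ships with Python, which a backtrader setup normally does:
docker exec mytrader python -c "import socket; socket.create_connection(('tws', 4003), timeout=5); print('tws reachable')"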

Kafka connect and HDFS in docker

I am using the Kafka Connect HDFS sink and Hadoop (for HDFS) in a docker-compose setup.
Hadoop (namenode and datanode) seems to be working correctly.
But I have an error with the Kafka Connect sink:
ERROR Recovery failed at state RECOVERY_PARTITION_PAUSED
(io.confluent.connect.hdfs.TopicPartitionWriter:277)
org.apache.kafka.connect.errors.DataException:
Error creating writer for log file hdfs://namenode:8020/logs/MyTopic/0/log
For information:
Hadoop services in my docker-compose.yml:
namenode:
  image: uhopper/hadoop-namenode:2.8.1
  hostname: namenode
  container_name: namenode
  ports:
    - "50070:50070"
  networks:
    default:
    fides-webapp:
      aliases:
        - "hadoop"
  volumes:
    - namenode:/hadoop/dfs/name
  env_file:
    - ./hadoop.env
  environment:
    - CLUSTER_NAME=hadoop-cluster

datanode1:
  image: uhopper/hadoop-datanode:2.8.1
  hostname: datanode1
  container_name: datanode1
  networks:
    default:
    fides-webapp:
      aliases:
        - "hadoop"
  volumes:
    - datanode1:/hadoop/dfs/data
  env_file:
    - ./hadoop.env
And my kafka-connect file:
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=MyTopic
hdfs.url=hdfs://namenode:8020
flush.size=3
EDIT:
I added an environment variable so that Kafka Connect is aware of the cluster name (the CLUSTER_NAME variable, added to the kafka-connect service in the docker-compose file, as sketched below).
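For reference, a minimal sketch of that change; the service name and the rest of its definition stand in for whatever the real compose file contains:
kafka-connect:
  # ...image, ports, volumes as before...
  environment:
    - CLUSTER_NAME=hadoop-cluster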
The error is no longer the same (so that part seems to be solved):
INFO Starting commit and rotation for topic partition scoring-topic-0 with start offsets {partition=0=0} and end offsets {partition=0=2}
(io.confluent.connect.hdfs.TopicPartitionWriter:368)
ERROR Exception on topic partition MyTopic-0: (io.confluent.connect.hdfs.TopicPartitionWriter:403)
org.apache.kafka.connect.errors.DataException: org.apache.hadoop.ipc.RemoteException(java.io.IOException):
File /topics/+tmp/MyTopic/partition=0/bc4cf075-ccfa-4338-9672-5462cc6c3404_tmp.avro
could only be replicated to 0 nodes instead of minReplication (=1).
There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
EDIT2:
The hadoop.env file is:
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
# Configure default BlockSize and Replication for local
# data. Keep it small for experimentation.
HDFS_CONF_dfs_blocksize=1m
YARN_CONF_yarn_log___aggregation___enable=true
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
YARN_CONF_yarn_log_server_url=http://historyserver:8188/applicationhistory/logs/
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
YARN_CONF_yarn_resourcemanager_hostname=resourcemanager
YARN_CONF_yarn_timeline___service_hostname=historyserver
Finally, as noted by @cricket_007, I needed to configure hadoop.conf.dir.
The directory should contain hdfs-site.xml.
Since each service is dockerized, I needed to create a named volume in order to share the configuration files between the kafka-connect service and the namenode service.
To do this I add in my docker-compose.yml:
volumes:
  hadoopconf:
Then for the namenode service I add:
volumes:
  - hadoopconf:/etc/hadoop
And for the kafka-connect service:
volumes:
  - hadoopconf:/usr/local/hadoop-conf
Finally, I set hadoop.conf.dir in my HDFS sink properties file to /usr/local/hadoop-conf.
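That is one extra line alongside the connector settings shown earlier:
hadoop.conf.dir=/usr/local/hadoop-conf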

Configure 3 Mesos instance with 1 master using docker and docker-compose

After reading this article: how-to-configure-a-production-ready-mesosphere-cluster-on-ubuntu-14-04,
I wanted to start my own dockerized mesosphere using 3 servers.
The setup is similar to the article's, except that I use 4 dockerized services:
Docker Zookeeper
Docker Mesos Master
Docker Mesos Slave
Docker Marathon
I got really confused by the configuration file locations, because the article installs the 4 components on the same machine.
My Docker setup uses 4 different services; how do you apply the article's steps correctly using Docker?
I have
Server 1 - prod02 - prod02.domain.com
Server 2 - preprod02 - preprod02.domain.com
Server 3 - prod01 - prod01.domain.com
Here is the docker-compose.yml I started writing for running the mesosphere master on server 1:
zookeeper:
  build: zookeeper
  restart: always
  command: /usr/share/zookeeper/bin/zkServer.sh start-foreground
  ports:
    - "2181:2181"
    - "2888:2888"
    - "3888:3888"

master:
  build: master
  restart: always
  environment:
    - MESOS_HOSTNAME=master.prod-02.example.com
    - MESOS_ZK=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MESOS_QUORUM=1
    - MESOS_LOG_DIR=/var/log/mesos
    - MESOS_WORK_DIR=/var/lib/mesos
  volumes:
    - /srv/docker/mesos-master:/var/log/mesos
  ports:
    - "5050:5050"

slave:
  build: slave
  restart: always
  privileged: true
  environment:
    - MESOS_HOSTNAME=slave.prod-02.example.com
    - MESOS_MASTER=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins # also in Dockerfile
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_LOG_DIR=/var/log/mesos
    - MESOS_LOGGING_LEVEL=INFO
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /sys:/sys:ro
    - /srv/docker/mesos-slave:/var/log/mesos
    - /srv/docker/mesos-data/docker.tar.gz:/etc/docker.tar.gz
  ports:
    - "5051:5051"

marathon:
  build: marathon
  restart: always
  environment:
    - MARATHON_HOSTNAME=marathon.prod-02.example.com
    - MARATHON_MASTER=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MARATHON_ZK=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/marathon
  ports:
    - "8081:8080"
My project directory looks like this:
/prod-02
  /marathon
    Dockerfile
  /master
    Dockerfile
  /slave
    Dockerfile
  /zookeeper
    /assets
      /conf
        myid
        zoo.cfg
  docker-compose.yml
With this config, the master and slave servers can't start; the log is:
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1016 12:12:49.976361 1 process.cpp:895] Failed to initialize: Failed to bind on XXX.XXX.XXX.XXX:5051: Cannot assign requested address: Cannot assign requested address [99]
*** Check failure stack trace: ***
I feel a bit lost due to the lack of documentation; any help with the configuration is much appreciated.
I finally sorted this out: what was missing was the external IP address (MESOS_IP) set for the master and slave, and also the net: host mode, as sketched below.
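A sketch of those two changes on the master and slave services, in the same compose v1 syntax as above; 203.0.113.10 is a placeholder for each host's real external IP:
master:
  build: master
  net: host
  environment:
    - MESOS_IP=203.0.113.10     # placeholder: this host's external IP
    # ...other MESOS_* variables as before...
slave:
  build: slave
  net: host
  environment:
    - MESOS_IP=203.0.113.10     # placeholder: this host's external IP
    # ...other MESOS_* variables as before...
With net: host the ports: mappings become unnecessary, since the services bind directly on the host interfaces.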
