I am attempting to create a cluster of Hyperledger validating peers, each running on a different host, but it does not appear to be functioning properly.
After starting the root node and 3 peer nodes, this is the output of peer network list on the root node, vp0:
{"peers":[{"ID":{"name":"vp1"},"address":"172.17.0.2:30303","type":1},{"ID":{"name":"vp2"},"address":"172.17.0.2:30303","type":1},{"ID":{"name":"vp3"},"address":"172.17.0.2:30303","type":1}]}
This is the output from the same command on one of the peers, vp3:
{"peers":[{"ID":{"name":"vp0"},"address":"172.17.0.2:30303","type":1},{"ID":{"name":"vp3"},"address":"172.17.0.2:30303","type":1}]}
Each peer lists only itself and the root, vp0.
This is the log output from the root node, vp0: https://gist.github.com/mikezaccardo/f139eaf8004540cdfd24da5a892716cc
This is the log output from one of the peer nodes, vp3: https://gist.github.com/mikezaccardo/7379584ca4f67bce553c288541e3c58e
This is the command I'm running to create the root node:
nohup sudo docker run --name=$HYPERLEDGER_PEER_ID \
--restart=unless-stopped \
-i \
-p 5000:5000 \
-p 30303:30303 \
-p 30304:30304 \
-p 31315:31315 \
-e CORE_VM_ENDPOINT=http://172.17.0.1:4243 \
-e CORE_PEER_ID=$HYPERLEDGER_PEER_ID \
-e CORE_PEER_ADDRESSAUTODETECT=true \
-e CORE_PEER_NETWORKID=dev \
-e CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft \
-e CORE_PBFT_GENERAL_MODE=classic \
-e CORE_PBFT_GENERAL_N=$HYPERLEDGER_CLUSTER_SIZE \
-e CORE_PBFT_GENERAL_TIMEOUT_REQUEST=10s \
joequant/hyperledger /bin/bash -c "rm config.yaml; cp /usr/share/go-1.6/src/github.com/hyperledger/fabric/consensus/obcpbft/config.yaml .; peer node start" > $HYPERLEDGER_PEER_ID.log 2>&1&
And this is the command I'm running to create each of the other peer nodes:
nohup sudo docker run --name=$HYPERLEDGER_PEER_ID \
--restart=unless-stopped \
-i \
-p 30303:30303 \
-p 30304:30304 \
-p 31315:31315 \
-e CORE_VM_ENDPOINT=http://172.17.0.1:4243 \
-e CORE_PEER_ID=$HYPERLEDGER_PEER_ID \
-e CORE_PEER_DISCOVERY_ROOTNODE=$HYPERLEDGER_ROOT_NODE_ADDRESS:30303 \
-e CORE_PEER_ADDRESSAUTODETECT=true \
-e CORE_PEER_NETWORKID=dev \
-e CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft \
-e CORE_PBFT_GENERAL_MODE=classic \
-e CORE_PBFT_GENERAL_N=$HYPERLEDGER_CLUSTER_SIZE \
-e CORE_PBFT_GENERAL_TIMEOUT_REQUEST=10s \
joequant/hyperledger /bin/bash -c "rm config.yaml; cp /usr/share/go-1.6/src/github.com/hyperledger/fabric/consensus/obcpbft/config.yaml .; peer node start" > $HYPERLEDGER_PEER_ID.log 2>&1&
HYPERLEDGER_PEER_ID is vp0 for the root node and vp1, vp2, ... for the peer nodes, HYPERLEDGER_ROOT_NODE_ADDRESS is the public IP address of the root node, and HYPERLEDGER_CLUSTER_SIZE is 4.
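For example, the environment on each host looks something like this (illustrative values only; the root node address below is a placeholder, not my real IP):
# on the host running the root node
export HYPERLEDGER_PEER_ID=vp0
export HYPERLEDGER_CLUSTER_SIZE=4
# on the host running a peer node, e.g. vp1
export HYPERLEDGER_PEER_ID=vp1
export HYPERLEDGER_ROOT_NODE_ADDRESS=198.51.100.10
export HYPERLEDGER_CLUSTER_SIZE=4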
This is the Docker image that I am using: github.com/joequant/hyperledger
Is there anything obviously wrong with my commands? Should the actual public IP addresses of the peers be showing up as opposed to just 172.17.0.2? Are my logs helpful / is any additional information needed?
Any help or insight would be greatly appreciated, thanks!
I've managed to get a noops cluster working in which all nodes discover each other and chaincodes successfully deploy.
I made a few fixes since my post above:
I now use the mikezaccardo/hyperledger-peer image, a fork of yeasy/hyperledger-peer, instead of joequant/hyperledger.
I changed:
-e CORE_PEER_ADDRESSAUTODETECT=true \
to:
-e CORE_PEER_ADDRESS=$HOST_ADDRESS:30303 \
-e CORE_PEER_ADDRESSAUTODETECT=false \
so that each peer advertises its public IP address rather than its private one.
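For reference, here is roughly how those flags fit into each peer's command (a sketch, assuming $HOST_ADDRESS holds that host's public IP; the remaining flags are unchanged):
sudo docker run --name=$HYPERLEDGER_PEER_ID \
-e CORE_PEER_ID=$HYPERLEDGER_PEER_ID \
-e CORE_PEER_ADDRESS=$HOST_ADDRESS:30303 \
-e CORE_PEER_ADDRESSAUTODETECT=false \
-e CORE_PEER_DISCOVERY_ROOTNODE=$HYPERLEDGER_ROOT_NODE_ADDRESS:30303 \
...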
And I properly tag my image as the official base image:
sudo docker tag mikezaccardo/hyperledger:latest hyperledger/fabric-baseimage:latest
Finally, for context, this is all related to my development of a blueprint for Apache Brooklyn which deploys a Hyperledger Fabric cluster. That repository, which contains all of the code mentioned in this post and answer, can be found here: https://github.com/cloudsoft/brooklyn-hyperledger.
SUMMARY
I am running a Zabbix Server container, but I am not able to communicate with it on its listening port, not even locally.
OS / ENVIRONMENT / Used docker-compose files
This is the script I am currently using to run it:
docker run -d --name zabbix-server \
--restart always \
--link zabbix-snmptraper:zabbix-snmptraps --volumes-from zabbix-snmptraper \
-p 192.168.1.248:10052:10051 \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="aro#123" \
-e ZBX_LISTENPORT=10052 \
-e ZBX_HOUSEKEEPINGFREQUENCY=12 \
-e ZBX_LOGSLOWQUERIES=1000 \
-e ZBX_STARTPOLLERSUNREACHABLE=1 \
-e ZBX_STARTPINGERS=5 \
-e ZBX_STARTTRAPPERS=1 \
-e ZBX_STARTDBSYNCERS=3 \
-e ZBX_STARTDISCOVERERS=4 \
-e ZBX_STARTPOLLERS=10 \
-e ZBX_TIMEOUT=30 \
-e ZBX_VALUECACHESIZE=32M \
-e ZBX_CACHESIZE=48M \
-e ZBX_MAXHOUSEKEEPERDELETE=432000 \
-e ZBX_ENABLE_SNMP_TRAPS=true \
-e MYSQL_ROOT_PASSWORD="my_root_pass_of_mysql..." \
-e DB_SERVER_HOST="mysql-server" \
-e DB_SERVER_PORT="3306" \
-v /etc/localtime:/etc/localtime:ro \
-v /mnt/dados/zabbix/external_scripts:/usr/lib/zabbix/externalscripts \
--network=zabbix-net \
zabbix/zabbix-server-mysql:5.4-ubuntu-latest
CONFIGURATION
The commands above are being run on Debian 11.
STEPS TO REPRODUCE
Basically, the container is UP and running.
Passive queries all work: I can gather data from other Zabbix agents, SNMP devices, etc.
The problem happens when I try to run an active query from outside against the Zabbix Server itself.
My deduction was that the Docker container did not create the necessary routes for this, so either I must specify something or some configuration is missing.
EXPECTED RESULTS
When doing a telnet to port 10052 on my Zabbix Server, the expected result is a successful connection.
ACTUAL RESULTS
Locally, on my own Zabbix Server, when I did:
sudo telnet 192.168.1.248 10052
I got telnet: Unable to connect to remote host: Connection refused
The crazy thing is that when I do this against the IP address of the Docker network (obtained from docker inspect zabbix-server, which shows "IPAddress": "172.18.0.4"):
sudo telnet 172.18.0.4 10052
Trying 172.18.0.4...
Connected to 172.18.0.4.
It worked. So there is a routing problem with this container.
Most containers create the necessary rules when they start, or at least document how to do it in their logs or docs, but I could not find anything about this anywhere...
Can you please help me?
I have been at this for more than two weeks and do not know what to do anymore.
If this is in the wrong section, please direct me to the correct place.
I really appreciate the help.
Edit 1
Here is the output tcpdump gave me:
16:28:12.373378 IP 192.168.17.24.55114 > 192.168.1.248.10052: Flags [S], seq 2008667124, win 64240, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
As you can see, packets are coming through and arriving at the Docker host.
I tried adding the following rule to IPTables to see if it solved it:
sudo iptables -t nat -A PREROUTING -p tcp --dport 10052 -j DNAT --to-destination 172.18.0.4:10052 -m comment --comment "Redirect requests from IP 248 to the container IP"
But it did not work, or I created it wrongly.
To list the rules I used the command:
sudo iptables -t nat -v -L PREROUTING -n --line-number
The rule was listed there, so it was created fine.
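For reference, Docker's own published-port DNAT rules live in the DOCKER chain of the nat table, so they can be listed for comparison with my manual rule:
sudo iptables -t nat -v -L DOCKER -n --line-numbers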
While you configured Zabbix to listen on port 10052 (-e ZBX_LISTENPORT=10052), you map host port 10052 to container port 10051 instead (-p 192.168.1.248:10052:10051). Nothing in the container listens on 10051, which is why the connection is refused via the host address but succeeds against the container IP directly.
Use -p 192.168.1.248:10052:10052 to make it work:
docker run -d --name zabbix-server \
--restart always \
--link zabbix-snmptraper:zabbix-snmptraps --volumes-from zabbix-snmptraper \
-p 192.168.1.248:10052:10052 \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="aro#123" \
-e ZBX_LISTENPORT=10052 \
-e ZBX_HOUSEKEEPINGFREQUENCY=12 \
-e ZBX_LOGSLOWQUERIES=1000 \
-e ZBX_STARTPOLLERSUNREACHABLE=1 \
-e ZBX_STARTPINGERS=5 \
-e ZBX_STARTTRAPPERS=1 \
-e ZBX_STARTDBSYNCERS=3 \
-e ZBX_STARTDISCOVERERS=4 \
-e ZBX_STARTPOLLERS=10 \
-e ZBX_TIMEOUT=30 \
-e ZBX_VALUECACHESIZE=32M \
-e ZBX_CACHESIZE=48M \
-e ZBX_MAXHOUSEKEEPERDELETE=432000 \
-e ZBX_ENABLE_SNMP_TRAPS=true \
-e MYSQL_ROOT_PASSWORD="my_root_pass_of_mysql..." \
-e DB_SERVER_HOST="mysql-server" \
-e DB_SERVER_PORT="3306" \
-v /etc/localtime:/etc/localtime:ro \
-v /mnt/dados/zabbix/external_scripts:/usr/lib/zabbix/externalscripts \
--network=zabbix-net \
zabbix/zabbix-server-mysql:5.4-ubuntu-latest
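After recreating the container, the mapping can be verified from the host; for example (an illustrative check, not part of the fix):
# the published port should appear in the container's port mappings
docker port zabbix-server
# and the telnet test from the question should now connect
telnet 192.168.1.248 10052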
I am following the steps in the "Standalone Instance, Two-Way SSL" section of https://hub.docker.com/r/apache/nifi. However, when I visit the NiFi page, my user has insufficient permissions. Below is the process I am using:
Generate self-signed certificates
mkdir conf
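# note: this assumes a NiFi toolkit container named "toolkit" is already
# running; its creation step is not shown here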
docker exec \
-ti toolkit \
/opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh \
standalone \
-n 'nifi1.bluejay.local' \
-C 'CN=admin,OU=NIFI'
docker cp toolkit:/opt/nifi/nifi-current/nifi-cert.pem conf
docker cp toolkit:/opt/nifi/nifi-current/nifi-key.key conf
docker cp toolkit:/opt/nifi/nifi-current/nifi1.bluejay.local conf
docker cp toolkit:/opt/nifi/nifi-current/CN=admin_OU=NIFI.p12 conf
docker cp toolkit:/opt/nifi/nifi-current/CN=admin_OU=NIFI.password conf
docker stop toolkit
Import client certificate to browser
Import the .p12 file into your browser.
Update /etc/hosts
Add "127.0.0.1 nifi1.bluejay.local" to the end of your /etc/hosts file.
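On Linux or macOS this can be done in one line (illustrative):
echo "127.0.0.1 nifi1.bluejay.local" | sudo tee -a /etc/hosts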
Define a NiFi network
docker network create --subnet=10.18.0.0/16 nifi
Run NiFi in a container
docker run -d \
-e AUTH=tls \
-e KEYSTORE_PATH=/opt/certs/keystore.jks \
-e KEYSTORE_TYPE=JKS \
-e KEYSTORE_PASSWORD=$(grep keystorePasswd conf/nifi1.bluejay.local/nifi.properties | cut -d'=' -f2) \
-e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
-e TRUSTSTORE_PASSWORD=$(grep truststorePasswd conf/nifi1.bluejay.local/nifi.properties | cut -d'=' -f2) \
-e TRUSTSTORE_TYPE=JKS \
-e INITIAL_ADMIN_IDENTITY="CN=admin,OU=NIFI" \
-e NIFI_WEB_PROXY_CONTEXT_PATH=/nifi \
-e NIFI_WEB_PROXY_HOST=nifi1.bluejay.local \
--hostname nifi1.bluejay.local \
--ip 10.18.0.10 \
--name nifi \
--net nifi \
-p 8443:8443 \
-v $(pwd)/conf/nifi1.bluejay.local:/opt/certs:ro \
-v /data/projects/nifi-shared:/opt/nifi/nifi-current/ls-target \
apache/nifi
Visit Page
When you visit https://nifi1.bluejay.local:8443/nifi, you'll be asked to select a certificate. Select the certificate (e.g. admin) that you imported.
At this point, I am seeing:
Insufficient Permissions
Unknown user with identity 'CN=admin, OU=NIFI'. Contact the system administrator.
In the examples I am seeing, there is no mention of this issue or how to resolve it.
How are permissions assigned to the Initial Admin Identity?
You are missing a space in the line:
-e INITIAL_ADMIN_IDENTITY="CN=admin,OU=NIFI"
See the error message: it reports the unknown user as 'CN=admin, OU=NIFI', with a space after the comma, and the Initial Admin Identity must match that string exactly.
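The corrected flag, matching the identity reported in the error message, would be:
-e INITIAL_ADMIN_IDENTITY="CN=admin, OU=NIFI" \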
I am using this image: https://hub.docker.com/r/apache/nifi/
I am running the below for a secure instance:
docker run --name nifi \
-v /User/hddev/certs/localhost:/opt/certs \
-p 8443:8443 \
-e AUTH=tls \
-e NIFI_WEB_PROXY_HOST=ec2-hostname.aws.com:8443 \
-e KEYSTORE_PATH=/opt/certs/keystore.jks \
-e KEYSTORE_TYPE=JKS \
-e KEYSTORE_PASSWORD=securepass \
-e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
-e TRUSTSTORE_PASSWORD=securepass \
-e TRUSTSTORE_TYPE=JKS \
-e INITIAL_ADMIN_IDENTITY='CN=admin, OU=NIFI' \
-d \
apache/nifi:latest
The Web UI works fine and I am able to access it, but with site-to-site (S2S) I get this at the Remote Process Group (RPG):
java.net.UnknownHostException for docker-container-id:8443
o.a.n.r.util.SiteToSiteRestApiClient Failed to create transaction for
https://docker-container-id:8443/nifi-api/data-transfer/input-ports/someportaddress/transactions
When I look at the generated properties, they take the container ID (e.g. 2eftfdf):
nifi.web.https.host=2eftfdf
nifi.remote.input.host=2eftfdf
due to the code below:
https://github.com/apache/nifi/blob/master/nifi-docker/dockerhub/sh/secure.sh#L61
I was able to get it to work by changing the environment variable $HOSTNAME to the public DNS of the EC2 instance.
Is there a setting to achieve this without changing the environment variable?
Do we need to update the nifi.remote.route.{protocol}.{name}.when related properties? If not, then how?
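One way to avoid editing $HOSTNAME inside the container is to set the container's hostname at startup, since the linked secure.sh derives both properties from it. A sketch, reusing the command from the question with one extra flag:
# --hostname makes $HOSTNAME inside the container, and therefore
# nifi.web.https.host and nifi.remote.input.host, the public DNS name
docker run --name nifi \
--hostname ec2-hostname.aws.com \
-v /User/hddev/certs/localhost:/opt/certs \
-p 8443:8443 \
-e AUTH=tls \
-e NIFI_WEB_PROXY_HOST=ec2-hostname.aws.com:8443 \
-e KEYSTORE_PATH=/opt/certs/keystore.jks \
-e KEYSTORE_TYPE=JKS \
-e KEYSTORE_PASSWORD=securepass \
-e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
-e TRUSTSTORE_PASSWORD=securepass \
-e TRUSTSTORE_TYPE=JKS \
-e INITIAL_ADMIN_IDENTITY='CN=admin, OU=NIFI' \
-d \
apache/nifi:latest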
I am setting up ksql-cli with Confluent version 3.3.0 in the following way:
#zookeeper
docker run -d -it \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
confluentinc/cp-zookeeper:3.3.0
#kafka
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:3.3.0
#schema-registry
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:32181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
confluentinc/cp-schema-registry:3.3.0
I am running the ksql-cli Docker image in the following manner:
docker run -it \
--net=host \
--name=ksql-cli \
-e KSQL_CONFIG_DIR="/etc/ksql" \
-e KSQL_LOG4J_OPTS="-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties" \
-e STREAMS_BOOTSTRAP_SERVERS=localhost:29092 \
-e STREAMS_SCHEMA_REGISTRY_HOST=localhost \
-e STREAMS_SCHEMA_REGISTRY_PORT=8081 \
confluentinc/ksql-cli:0.5
When I go into the container's bash in the following way:
docker exec -it ksql-cli bash
and run ksql-cli in the following way:
./usr/bin/ksql-cli local
it gives me the following error:
Initializing KSQL...
Could not fetch broker information. KSQL cannot initialize AdminCLient.
By default, the ksql-cli attempts to connect to the Kafka brokers on localhost:9092. Your setup advertises the broker on a different port, so you'll need to provide it on the command line, e.g.
./usr/bin/ksql-cli local --bootstrap-server localhost:29092
You'll probably also need to specify the schema registry port, so you may want to use a properties file, e.g.:
./usr/bin/ksql-cli local --properties-file ./ksql.properties
Where ksql.properties has:
bootstrap.servers=localhost:29092
schema.registry.url=localhost:8081
Or provide both on the command line:
./usr/bin/ksql-cli local \
--bootstrap-server localhost:29092 \
--schema.registry.url http://localhost:8081
Note, from KSQL version 4.1 onwards the commands and properties change name: ksql-cli becomes just ksql, and local mode disappears, so you'll need to run a ksql-server node or two explicitly. --properties-file becomes --config-file and schema.registry.url becomes ksql.schema.registry.url.
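Under 4.1+ that would look something like this (a sketch; the server address is a placeholder):
# start a KSQL server configured with bootstrap.servers and
# ksql.schema.registry.url, then point the CLI at it
ksql-server-start ./ksql-server.properties
ksql --config-file ./ksql.properties http://localhost:8088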
I retrieved the most recent image using docker pull boomi/atom:2.3.0
I then run the following script (using placeholders for USERNAME, PASSWORD and ACCOUNT_ID):
#!/bin/bash
atom_name=boomidemo01
docker stop $atom_name
docker rm $atom_name
docker run -p 9090:9090 -h boomidemo01 -e URL="platform.boomi.com" \
-e BOOMI_USERNAME=<USERNAME> -e BOOMI_PASSWORD=<PASSWORD> \
-e BOOMI_ATOMNAME=$atom_name \
-e BOOMI_CONTAINERNAME=$atom_name \
-e BOOMI_ACCOUNTID=<ACCOUNT_ID> \
--name $atom_name \
-d -t boomi/atom:2.3.0
But the atom fails to start (I am not able to connect to port 9090 via a browser at http://127.0.0.1:9090). Did anyone manage to run a Boomi atom with Docker?
I eventually figured it out... the following script works:
#!/bin/bash
atom_name=boomidemo01
host_dir=/home/user/Boomi
docker stop $atom_name
docker rm $atom_name
docker run -p 9090:9090 -h $atom_name \
-v $host_dir:/home/boomi/host_dir \
-e URL=https://platform.boomi.com \
-e BOOMI_USERNAME=<USERNAME> \
-e BOOMI_PASSWORD=<PASSWORD> \
-e BOOMI_ATOMNAME=$atom_name \
-e BOOMI_CONTAINERNAME=$atom_name \
-e BOOMI_ACCOUNTID=<ACCOUNT_ID> \
-e PROXY_HOST= \
-e PROXY_USERNAME= \
-e PROXY_PASSWORD= \
-e PROXY_PORT= \
-e DOCKERUID= \
-e SYMLINKS_DIR= \
-e ATOM_LOCALHOSTID=$atom_name \
-e INSTALL_TOKEN= \
--name $atom_name \
-d -t boomi/atom:2.3.0
Download the Docker install script from within the UI.
Generate the token.
Run the script with the name and token.
The port does not matter because AtomSphere is never pinged; the atom fetches processes for you.
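A minimal sketch of that token-based flow, reusing the flags from the scripts above (account ID and token are placeholders):
#!/bin/bash
atom_name=boomidemo01
docker run -p 9090:9090 -h $atom_name \
-e URL=https://platform.boomi.com \
-e BOOMI_ATOMNAME=$atom_name \
-e BOOMI_CONTAINERNAME=$atom_name \
-e BOOMI_ACCOUNTID=<ACCOUNT_ID> \
-e INSTALL_TOKEN=<TOKEN> \
--name $atom_name \
-d -t boomi/atom:2.3.0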