Error adding a 4th organization into Hyperledger Fabric 2.0 - docker

I am new to Fabric 2.0 and recently installed all the samples. I was able to run test-network without an issue with 2 orgs. Then I followed the addOrg3 directory to add a 3rd organization and join it to the channel I created earlier.
The fun part came when I wanted to add a 4th organization. I copied the addOrg3 folder and renamed almost everything in each file to represent the 4th organization. I even assigned a new port for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh
elif [ $ORG -eq 4 ]; then
  CORE_PEER_LOCALMSPID="Org4MSP"
  CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
  CORE_PEER_ADDRESS=peer0.org4.example.com:12051
  CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin@.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting, since I wasn't able to put it into code style, but here is the output from running the command "./addOrg4.sh up":
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx

Your addOrg4.sh has a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
  echo "Bringing up network"
  Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS will always have a value (for example, 51b4ad60d812), because it picks up the container ID of Org3cli. So the Org4Up function will never be called. A simple workaround is to comment out the check like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
That will bring up the Org4cli container you are missing.
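Alternatively, a small sketch (assuming the new CLI container is named Org4cli, as in the error above): keep the guard but check for that specific container instead of any fabric-tools container, so an existing Org3cli no longer short-circuits the bring-up.
# Only skip Org4Up if an Org4cli container already exists
CONTAINER_IDS=$(docker ps -aq --filter "name=Org4cli")
if [ -z "$CONTAINER_IDS" ]; then
  echo "Bringing up network"
  Org4Up
fi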

First check whether the container is up or not; if it is up, then I think the CLI where the command is executed is not bootstrapped with the Org4 details.
I have added a 4th organization to a three-org Hyperledger Fabric network. First, you have to create the Org4 artifacts (the Org4 crypto YAML and the Org4 docker compose file, including Org4cli), and then follow the manual, step-by-step process for adding a new organization from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Skip the part about editing the scripts (step1Org3.sh and so on), because the workflow for adding a 4th or any new org is slightly different, so you would spend a lot of time just modifying those scripts.
I will write an article on adding a new (4th) org on Medium and will paste the link here too.
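For reference, here is a condensed sketch of that manual flow, adapted for Org4. It assumes the test-network defaults from the question (channel mychannel, orderer at orderer.example.com:7050, $ORDERER_CA pointing at the orderer TLS CA cert) and that org4.json is the output of configtxgen -printOrg Org4MSP; it is run from a CLI container with admin identities for the existing orgs.
# Fetch the latest config block for the channel and decode it to JSON
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel --tls --cafile "$ORDERER_CA"
configtxlator proto_decode --input config_block.pb --type common.Block | jq '.data.data[0].payload.data.config' > config.json

# Add Org4MSP under the Application groups and compute the config delta
jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups":{"Org4MSP":.[1]}}}}}' config.json org4.json > modified_config.json
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel --original config.pb --updated modified_config.pb --output org4_update.pb

# Wrap the delta in an envelope, collect signatures from the existing org admins, then submit
configtxlator proto_decode --input org4_update.pb --type common.ConfigUpdate | jq . > org4_update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel","type":2}},"data":{"config_update":'$(cat org4_update.json)'}}}' | jq . > org4_update_in_envelope.json
configtxlator proto_encode --input org4_update_in_envelope.json --type common.Envelope --output org4_update_in_envelope.pb
peer channel signconfigtx -f org4_update_in_envelope.pb   # repeat as each existing org admin
peer channel update -f org4_update_in_envelope.pb -c mychannel -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
After the update commits, Org4's peer can fetch the channel's genesis block with peer channel fetch 0 and join as usual.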

Related

Failure to create Genesis Block (Hyperledger fabric)

This is my first post on Stackoverflow. Please bear with me...
I am new to basically everything around hyperledger-fabric.
While trying to follow the "Writing Your First Application" tutorial, I am unable to generate the genesis block.
I am running:
windows 10
Docker version 18.03.0-ce, build 0520e24302
docker-compose version 1.20.1, build 5d8c71b2
npm 6.1.2
node.js v12.1
python 2.7
I used the curl commands given in the instructions.
When I run
./startFabric.sh javascript
at first everything goes OK. I am getting some warnings, but starting the channel seems to work fine.
In the next few lines, however, I get an error.
Generate CCP files for Org1 and Org2
/c/Users/Jonathan/Hyperledger_Fabric/fabric-samples/bin/configtxgen
Generating Orderer Genesis block
CONSENSUS_TYPE=solo
+ '[' solo == solo ']'
+ configtxgen -profile TwoOrgsOrdererGenesis -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block
2019-11-25 10:35:01.676 CET [common.tools.configtxgen] main -> INFO 001 Loading configuration
2019-11-25 10:35:01.677 CET [common.tools.configtxgen.localconfig] Load -> PANI 002 Error reading configuration: Unsupported Config Type ""
2019-11-25 10:35:01.679 CET [common.tools.configtxgen] func1 -> PANI 003 Error reading configuration: Unsupported Config Type ""
panic: Error reading configuration: Unsupported Config Type "" [recovered]
panic: Error reading configuration: Unsupported Config Type ""
It seems that the configuration is missing/not the right "type".
I googled my error and found people with similar or the same issue, but I haven't been able to solve mine by following the advice they were given (hence why I am here).
Similar posts I found:
https://jira.hyperledger.org/browse/FAB-3467
Error cryptogen tool in Hyperledger Fabric
The people responding to the posts above generally seem to think it's an issue with the path to the config file, which apparently can be fixed with
FABRIC_CFG_PATH=$PWD
But that hasn't worked for me.
Full Console Output:
https://pastebin.com/CHxjZ1Uq
Look at the warning at the start: you are missing some prerequisites.
First, use the generate script to generate the crypto-config and artifacts folder.
Once you got your certificates you can start fabric.
You should have a generate.sh script or something like that.
Essentially it runs the certificate generation for your network using cryptogen.
The code is something like this:
which cryptogen
if [ "$?" -ne 0 ]; then
  echo "cryptogen tool not found. exiting"
  exit 1
fi

if [ -d "crypto-config" ]; then
  rm -Rf crypto-config
fi

set -x
cryptogen generate --config=./crypto-config.yaml
Once you generated the certificates, you can now generate the genesis block with something like:
which configtxgen

if [ -d "config" ]; then
  rm -Rf config
fi
mkdir config

configtxgen -profile OneOrgOrdererGenesis -outputBlock ./config/genesis.block
configtxgen -profile OneOrgChannel -outputCreateChannelTx ./config/channel.tx -channelID $CHANNEL_NAME
configtxgen -profile OneOrgChannel -outputAnchorPeersUpdate ./config/${MSP_NAME}anchors.tx -channelID $CHANNEL_NAME -asOrg $MSP_NAME
Obviously, replace these with your own names and variables.
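If configtxgen still fails with Unsupported Config Type "", the usual cause is that FABRIC_CFG_PATH does not point at the directory containing configtx.yaml. A minimal sketch, assuming configtx.yaml sits in the current directory and using hypothetical channel/MSP names:
export FABRIC_CFG_PATH=$PWD      # directory that contains configtx.yaml
export CHANNEL_NAME=mychannel    # hypothetical channel name
export MSP_NAME=Org1MSP          # hypothetical MSP name
configtxgen -profile OneOrgOrdererGenesis -outputBlock ./config/genesis.block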

Building & running headless unity3D game in ubuntu docker container

I was trying to find a way to build and run my game from an Ubuntu-based Linux container using only the command line. Even though I was able to find a few containers on DockerHub, neither of them allowed me to get past the license registration stage in 'batch' mode.
https://hub.docker.com/r/eamonwoortman/unity3d/~/dockerfile/
https://hub.docker.com/r/chenjr0719/ubuntu-unity-novnc/
Commands I tried so far:
xvfb-run --auto-servernum /opt/Unity/Editor/Unity -force-free -batchmode -nographics -logFile -username 'xxx' -password xxx -quit
/opt/Unity/Editor/Unity -force-free -batchmode -nographics -logFile -username 'xxx' -password xxx -quit
I'm usually getting the following error log from Unity:
mono profile = '/opt/Unity/Editor/Data/Mono/lib/mono/2.0' Initialize
mono Mono path[0] = '/opt/Unity/Editor/Data/Managed' Mono path[1] =
'/opt/Unity/Editor/Data/Mono/lib/mono/2.0' Mono path[2] =
'/opt/Unity/Editor/Data/UnityScript' Mono path[3] =
'/opt/Unity/Editor/Data/Mono/lib/mono/2.0' Mono config path =
'/opt/Unity/Editor/Data/Mono/etc' Using monoOptions
--debugger-agent=transport=dt_socket,embedding=1,defer=y,address=0.0.0.0:56542
DisplayProgressbar: Unity license Cancelling DisplayDialog: Failed to
activate/update license. Timeout occured while trying to update
license. Please try again later or contact support@unity3d.com This
should not be called in batch mode.
I wonder if someone else has already solved that problem and is able to share a Dockerfile and the proper command lines.
It has been quite a while since the original question, but it took me a lot of time to find the correct answer, so in case someone else struggles with this problem, here is the answer:
At some point Unity stopped requiring xvfb-run and builds everything fine without it. If you use xvfb-run, you will keep getting timeouts (Unity's error messages are pretty misleading in this case). I have a Jenkins pipeline that uses an Ubuntu image, and these commands have worked fine for me so far.
The trick is the "-nographics" argument; once it is used, xvfb-run is no longer needed.
Activate
/opt/unity/Editor/Unity -quit -logFile -batchmode -nographics -username XXXX -password XXXX -serial XXXX
Build
/opt/unity/Editor/Unity -quit -batchmode -nographics -projectPath . -buildTarget iOS -customBuildTarget iOS -customBuildName XXXX -customBuildPath builds/iOS/ -executeMethod BuildCommand.PerformBuild -logFile /dev/stdout
Return license
/opt/unity/Editor/Unity -quit -logFile -batchmode -nographics -returnlicense
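As a usage example, here is a minimal CI sketch tying the three steps together. It is a sketch under assumptions: the editor path /opt/unity/Editor/Unity and the UNITY_USERNAME/UNITY_PASSWORD/UNITY_SERIAL environment variables are placeholders, and the build arguments are trimmed to the essentials; adjust them to your image and project.
#!/bin/sh
set -e
UNITY=/opt/unity/Editor/Unity
# Activate the license first.
"$UNITY" -quit -batchmode -nographics -logFile /dev/stdout \
  -username "$UNITY_USERNAME" -password "$UNITY_PASSWORD" -serial "$UNITY_SERIAL"
# Always return the license on exit, even if the build fails.
trap '"$UNITY" -quit -batchmode -nographics -returnlicense -logFile /dev/stdout' EXIT
# Run the headless build.
"$UNITY" -quit -batchmode -nographics -logFile /dev/stdout \
  -projectPath . -buildTarget iOS -executeMethod BuildCommand.PerformBuild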
Hope this saves someone that huge amount of time I wasted on figuring this out ;)

jenkins plugin for triggering build whenever any file changed in a given directory

I am looking for functionality where we have a directory with some files in it.
Whenever anyone makes a change to any of the files in the directory, Jenkins should trigger a build.
Is there any plugin or method for this functionality? Please advise.
Thanks in advance.
I have not tried it myself, but the FSTrigger plugin seems to do what you want:
FSTrigger provides polling mechanisms to monitor a file system and
trigger a build if a file or a set of files have changed.
If you can monitor the directory with a script, you can trigger the build with an HTTP GET, for example with wget or curl:
wget -O- $JENKINS_URL/job/JOBNAME/build
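For example, here is a minimal polling sketch that could run from cron. The watched directory, the state file, and JOBNAME are assumptions to replace with your own values; it simply checksums the directory and hits the trigger URL above when the checksum changes.
#!/bin/sh
WATCH_DIR=/path/to/watched/dir
STATE_FILE=/tmp/watch_dir.sum
# Hash every file in the directory and reduce to a single checksum.
NEW_SUM=$(find "$WATCH_DIR" -type f -exec md5sum {} + | sort | md5sum)
OLD_SUM=$(cat "$STATE_FILE" 2>/dev/null)
if [ "$NEW_SUM" != "$OLD_SUM" ]; then
  echo "$NEW_SUM" > "$STATE_FILE"
  wget -O- "$JENKINS_URL/job/JOBNAME/build"
fi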
Although only slightly related: this issue seems to be about monitoring static files on the system, but there are many version control systems built for exactly this purpose.
I answered this in another post; if you're using git to track changes to the files themselves:
#!/bin/bash
set -e

job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"

# Python snippet that prints every affected path from the build's changeset
python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print(inner)
"

_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`

if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
  echo "[INFO] no changes detected in ${FILTER_PATH}"
  exit 0
else
  echo "[INFO] changed files detected: "
  for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
    echo " $a_file"
  done;
fi;
You can add the check directly at the top of the job's exec shell, and it will exit 0 if no changes are detected. Hence, you can always poll the top level of the repo for check-ins to trigger a build, and only complete the build if the files in question have changed.

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing This Question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because:
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
A better way to monitor URLs is to use WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl utils package; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin. It is easy; you only have to check the URL with something like:
`curl -Is $URL -k| grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script as a parameter.
Then check the result: if the code is greater than 399 you have a problem, otherwise everything is OK. Then exit with the right exit code and print the message for Nagios, as in the sketch below.
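A minimal sketch of such a plugin (the URL arrives as $1; the 400 threshold and the OK/CRITICAL exit codes 0 and 2 follow the usual Nagios plugin convention):
#!/bin/sh
URL="$1"
# Grab only the HTTP status code from the response headers.
CODE=$(curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2)
if [ -n "$CODE" ] && [ "$CODE" -lt 400 ]; then
  echo "OK - $URL returned $CODE"
  exit 0
else
  echo "CRITICAL - $URL returned ${CODE:-no response}"
  exit 2
fi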

Watch a web page for changes

I googled and couldn't find any code that would compare a webpage to a previous version.
In this case the page I'm trying to watch is link text. There are services that can watch a page, but I'd like to set this up on my own server.
I've set this up as a wiki so anyone can add to the code. Here's my idea:
Check if previous version of file exists. If false then download page
If page exists, diff to find differences and email the new content along with dates of new and old versions.
This script would be called nightly via cron or on-demand via the browser (the latter is not a priority)
Sounds simple, maybe I'm just not looking in the right place.
Perhaps a simple sh-script like this, featuring wget, diff & test?
#!/bin/sh

WWWURI="http://foo.bar/testfile.html"
LOCALCOPY="testfile.html"
TMPFILE="tmpfile"
WEBFILE="changed.html"
MAILADDRESS="$(whoami)"

SUBJECT_NEWFILE="$LOCALCOPY is new"
BODY_NEWFILE="first version of $LOCALCOPY loaded"
SUBJECT_CHANGEDFILE="$LOCALCOPY updated"
SUBJECT_NOTCHANGED="$LOCALCOPY not updated"
BODY_CHANGEDFILE="new version of $LOCALCOPY"

# test for old file
if [ -e "$LOCALCOPY" ]
then
  mv "$LOCALCOPY" "$LOCALCOPY.bak"
  wget "$WWWURI" -O"$LOCALCOPY" -o/dev/null
  diff "$LOCALCOPY" "$LOCALCOPY.bak" > "$TMPFILE"

  # test for update
  if [ -s "$TMPFILE" ]
  then
    echo "$SUBJECT_CHANGEDFILE"
    ( echo "$BODY_CHANGEDFILE" ; cat "$TMPFILE" ) | tee "$WEBFILE" | mail -s "$SUBJECT_CHANGEDFILE" "$MAILADDRESS"
  else
    echo "$SUBJECT_NOTCHANGED"
  fi
else
  wget "$WWWURI" -O"$LOCALCOPY" -o/dev/null
  echo "$BODY_NEWFILE"
  echo "$BODY_NEWFILE" | tee "$WEBFILE" | mail -s "$SUBJECT_NEWFILE" "$MAILADDRESS"
fi

[ -e "$TMPFILE" ] && rm "$TMPFILE"
Update: pipe through tee, minor spelling fixes, and removal of $TMPFILE.
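To run it nightly via cron, as described in the question, a crontab entry like this would do (assuming the script above is saved as /usr/local/bin/watchpage.sh and made executable; the 2 a.m. time is an arbitrary choice):
# m h dom mon dow  command
0 2 * * * /usr/local/bin/watchpage.sh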
You can check This SO posting to get a few ideas and also information about the challenge of detecting "true" changes to a web page (with fluctuating advertisement blocks and other "noise").
