Hyperledger Fabric Set Up Docker-Compose Key Error - docker

I am attempting to launch a sample Hyperledger Fabric environment with an orderer, a peer, and a CA server.
When I issue the command docker-compose up I receive the following errors during setup:
peer0 | panic: Error when setting up MSP from directory /etc/hyperledger/fabric/msp/sampleconfig: err Could not load a valid signer certificate from directory /etc/hyperledger/fabric/msp/sampleconfig/signcerts, err stat /etc/hyperledger/fabric/msp/sampleconfig/signcerts: no such file or directory
And for the orderer:
orderer | * '' has invalid keys: genesis, sbftlocal
orderer | panic: Error unmarshaling config into struct: 1 error(s) decoding:
orderer |
orderer | * '' has invalid keys: genesis, sbftlocal
Finally, when I check the results in another terminal, I find that only the fabric-ca-server has successfully started:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d32864fd391f hyperledger/fabric-ca:latest "fabric-ca-server ..." 3 days ago Up 3 days 0.0.0.0:7054->7054/tcp ca
Where does this problem arise? The tutorial said you do not need to set up the keys when using Vagrant and Docker.

The error occurs when you try to start your network, i.e. when you execute the command docker-compose up.
It tries to create three Docker containers: one for the CA, one for the orderer, and one for the peer. However, it only creates the Docker container for the CA, as your docker ps output shows.
When you execute the command docker-compose up, you should define a configuration file (docker-compose-cli.yaml). In that configuration file you define the MSP directories, i.e. the directories where the keys for each node are stored. In addition, you should change the base configuration file (docker-compose-base.yaml), because the first configuration file also uses it to create each node.
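As a rough illustration of that mapping, here is a minimal sketch of a peer service whose MSP material is bind-mounted into the container. The service name, image, host path, and environment values are assumptions for illustration, not taken from your files:

version: '2'
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer
    environment:
      # point the peer at the MSP directory mounted below
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
    volumes:
      # host keys generated by cryptogen (assumed layout) -> MSP path inside the container
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
    command: peer node start

The peer's panic above simply means the directory configured as its MSP path does not contain the expected signcerts material, so the key point is making sure the mounted host directory actually holds the generated certificates.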

Related

Port conflicts in starting test-network of Hyperledger Fabric using fabric-samples folder

I'm new to Fabric and have been struggling for a while with these port errors.
When I run ./network.sh in the directory fabric-samples/test-network,
the following port errors occur:
yujindeMBP:test-network yujin$ ./network.sh up
Starting nodes with CLI timeout of '5' tries and CLI delay of '3' seconds and using database 'leveldb' with crypto from 'cryptogen'
LOCAL_VERSION=2.3.0
DOCKER_IMAGE_VERSION=2.3.0
/Users/yujin/fabric-samples-with-bis/test-network/../bin/cryptogen
Generating certificates using cryptogen tool
Creating Org1 Identities
+ cryptogen generate --config=./organizations/cryptogen/crypto-config-org1.yaml --output=organizations
org1.example.com
+ res=0
Creating Org2 Identities
+ cryptogen generate --config=./organizations/cryptogen/crypto-config-org2.yaml --output=organizations
org2.example.com
+ res=0
Creating Orderer Org Identities
+ cryptogen generate --config=./organizations/cryptogen/crypto-config-orderer.yaml --output=organizations
+ res=0
Generating CCP files for Org1 and Org2
Creating network "net_test" with the default driver
Creating volume "net_orderer.example.com" with default driver
Creating volume "net_peer0.org1.example.com" with default driver
Creating volume "net_peer0.org2.example.com" with default driver
Creating orderer.example.com ... error
Creating peer0.org2.example.com ...
Creating peer0.org1.example.com ...
Creating peer0.org1.example.com ... error
Creating peer0.org2.example.com ... done
ERROR: for peer0.org1.example.com Cannot start service peer0.org1.example.com: Ports are not available: listen tcp 0.0.0.0:7051: bind: address already in use
ERROR: for orderer.example.com Cannot start service orderer.example.com: Ports are not available: listen tcp 0.0.0.0:7050: bind: address already in use
ERROR: for peer0.org1.example.com Cannot start service peer0.org1.example.com: Ports are not available: listen tcp 0.0.0.0:7051: bind: address already in use
ERROR: Encountered errors while bringing up the project.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6feb86580f43 hyperledger/fabric-orderer:latest "orderer" 1 second ago Created orderer.example.com
dbfae1aa4c11 hyperledger/fabric-peer:latest "peer node start" 1 second ago Created peer0.org1.example.com
d0367a0d6089 hyperledger/fabric-peer:latest "peer node start" 1 second ago Up Less than a second 7051/tcp, 0.0.0.0:9051->9051/tcp peer0.org2.example.com
It seems the orderer, org1, and org2 are trying to use the same ports, 7050 and 7051, and they conflict with each other. I thought I could avoid these port errors by running everything in Docker. However, it seems I'm wrong. I checked the Docker environment before running ./network.sh and I'm sure no other processes were running at the same time.
yujindeMBP:test-network yujin$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker's process table was empty before I tried to start the test network.
I'm really confused and need your help. Thanks a lot!
Try docker ps -a. This lists all containers, including stopped ones.
Remove containers using the command docker rm -f [container_id/container_name].
Once all containers are removed, bring your network back up.
If you're still facing the issue, change the ports. Note that the port mappings are defined in the test network's docker-compose files (e.g. docker/docker-compose-test-net.yaml), not in crypto-config-org1.yaml, crypto-config-org2.yaml, or crypto-config-orderer.yaml, which only describe the identities that cryptogen generates. You can also ask questions on https://chat.hyperledger.org/
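If docker ps -a shows nothing but the ports are still reported as in use, something outside Docker may be listening on them. A minimal sketch of checking and cleaning up (assuming macOS or Linux with lsof installed):

# see which process, if any, is listening on the orderer and peer ports
sudo lsof -nP -iTCP:7050 -sTCP:LISTEN
sudo lsof -nP -iTCP:7051 -sTCP:LISTEN
# remove any leftover Fabric containers, then bring the network down and up again
docker rm -f $(docker ps -aq) 2>/dev/null
./network.sh down
./network.sh up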

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I sometimes see errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
    possibePaths := []string{
        "/proc/config.gz",
        "/boot/config-" + k.kernelRelease,
        "/usr/src/linux-" + k.kernelRelease + "/.config",
        "/usr/src/linux/.config",
    }
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel-config check.
I note that running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same thing on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019: I see now that k8s DOES run okay in a container. The real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a Docker container, then deploy that image to k8s, either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses the Docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
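For reference, a minimal sketch of bringing up and tearing down a cluster with kind (assuming kind and kubectl are already installed; the cluster name is arbitrary):

# create a single-node Kubernetes cluster that runs inside a Docker container
kind create cluster --name dev
# verify the cluster is reachable
kubectl cluster-info --context kind-dev
# tear it down when done
kind delete cluster --name dev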
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side-effect of the nasty-looking FATAL error, e.g. ... "[util/etcd] Attempt timed out", but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else is puzzled like I was.

Failed getting affiliation 'org3.department1 : : scode: 404, code: 63, msg: Failed to get Affiliation: sql: no rows in result set

I followed the steps below for the Hyperledger Fabric Balance Transfer application (version 1.4.3):
I made a copy of the balance transfer application and created a new project.
Made the required changes in the files below:
artifacts/channel/cryptogen.yaml
artifacts/channel/configtx.yaml
artifacts/channel/docker-compose.yaml
artifacts/network-config.yaml
artifacts/org3.yaml
config.js
app/instantiate-chaincode.js
Started the network, and everything went fine.
If I register a user with orgName Org1 or Org2, everything works fine.
But when I try to register a user on Org3 from this API,
curl -s -X POST http://localhost:4000/users -H "content-type: application/x-www-form-urlencoded" -d 'username=Ramesh&orgName=Org3'
it shows this error: Failed getting affiliation 'org3.department1 : : scode: 404, code: 63, msg: Failed to get Affiliation: sql: no rows in result set
By default, fabric-ca only has the following affiliations:
org1.department1
org1.department2
org2.department1
So I tried to run the commands below in a bash terminal inside the CA container (docker exec -it <container-name> bash):
fabric-ca-client affiliation add org3
fabric-ca-client affiliation add org3.department1
Still getting the same error.
I also tried adding the new org details to the fabric-ca-server-config.yaml file and mounted that path in the docker-compose.yaml volumes for all three orgs' CA containers:
volumes:
- ./channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
- ../ca-server-config/fabric-ca-server-config.yaml:/etc/hyperledger/fabric-ca-server-config/fabric-ca-server-config.yaml
Restarted the network, but it shows the error below:
ERROR: for ca.org3.example.com Cannot start service ca.org3.example.com: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\"/home/ubuntu/fabric-samples/newProject/ca-server-config/fabric-ca-server-config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/03d0b6d5e25572670c817f37b1a791938de81835680cce9f11f5d2c0f05d6320/merged\\" at \\"/var/lib/docker/overlay2/03d0b6d5e25572670c817f37b1a791938de81835680cce9f11f5d2c0f05d6320/merged/etc/hyperledger/fabric-ca-server-config/fabric-ca-server-config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Please advise on the above error. How can I add a new org to the balance transfer application?
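One detail worth checking for the mount error above: if the source path of a bind mount does not exist on the host, Docker creates it as a directory, and mounting that directory onto a file inside the container fails with "not a directory". A quick check (a sketch; relative paths in a compose file resolve from the directory that contains docker-compose.yaml):

# run from the directory that holds docker-compose.yaml; the source must exist
# on the host as a regular file before docker-compose up
ls -l ../ca-server-config/fabric-ca-server-config.yaml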

How to fix a malfunctioning docker container?

I currently have a Docker container that runs NGINX for me. While trying to learn how to set up a proxy_pass example, I created a configuration that crashes this container, and I can no longer start the container.
Creating a new NGINX container is not a big deal, but I would like to use this example for a learning experience.
Is it possible to start up this stopped container with a different entry point rather than having it start NGINX?
I've read that I have to commit the broken container into an image and then start a new container from that image, which I have been able to do, but this seems rather cumbersome.
If the above is the only method, then I might as well just create a new container.
I encountered a similar problem, which can be fixed in the following ways:
Method 1: use docker cp to copy file contents out of the damaged container into the current environment (this works even if the container cannot be started);
Method 2: use docker commit to commit the damaged container as a new image, then start a new container from that image with a different entry point (see the sketch below);
Note: the methods above are just tricks to use during debugging or development; eventually the related operations should be written into the Dockerfile or docker-compose.yml configuration.
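A minimal sketch of Method 2, with hypothetical container and image names:

# snapshot the broken container's filesystem as a new image
docker commit broken_nginx nginx-debug:broken
# start a throwaway container from that image with a shell instead of NGINX
docker run --rm -it --entrypoint /bin/sh nginx-debug:broken
# inside the shell, inspect and fix the configuration, e.g.
#   cat /etc/nginx/conf.d/default.conf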
Fix Progress:
Because I temporarily modified the ./php-fpm.d/www.conf configuration in my PHP-FPM
container, the container could not be started:
$ docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------
phpfpm_fpm_1 docker-php-entrypoint php-fpm Restarting
phpfpm_nginx_1 nginx -g daemon off; Up 80/tcp, 0.0.0.0:86->86/tcp
Check the related error information with docker-compose logs -f:
fpm_1 | [03-Dec-2019 03:57:50] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:57:50] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:57:50] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:57:50] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:57:50] ERROR: FPM initialization failed
fpm_1 | [03-Dec-2019 03:57:50] ERROR: FPM initialization failed
fpm_1 | [03-Dec-2019 03:58:51] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:58:51] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:58:51] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:58:51] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:58:51] ERROR: FPM initialization failed
fpm_1 | [03-Dec-2019 03:58:51] ERROR: FPM initialization failed
Find the container, then check what was manually changed in it with docker ps -a and docker diff <container-id>:
$ docker ps -a|grep php
5dfe26f00059 tkstorm/phpngx "nginx -g 'daemon of…" 2 weeks ago Up 41 hours 80/tcp, 0.0.0.0:86->86/tcp phpfpm_nginx_1
6f8a2044ba36 tkstorm/phpfpm "docker-php-entrypoi…" 2 weeks ago Restarting (78) 7 seconds ago phpfpm_fpm_1
Copy the wrong ./php-fpm.d/www.conf configuration that damaged the container to the local directory and fix it:
$ docker cp phpfpm_fpm_1:/usr/local/etc/php-fpm.d/www.conf fix-www.conf
$ vi fix-www.conf
...
slowlog = /var/log/$pool.log.slow
...
Copy the repaired configuration to the damaged container again:
// after fix up
$ docker cp fix-www.conf phpfpm_fpm_1:/usr/local/etc/php-fpm.d/www.conf
Restart the container:
$ docker restart phpfpm_fpm_1
// it's fixed now
$ docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------
phpfpm_fpm_1 docker-php-entrypoint php-fpm Up 9000/tcp
phpfpm_nginx_1 nginx -g daemon off; Up 80/tcp, 0.0.0.0:86->86/tcp

docker run hello-world results in "Incorrect Usage" error: "flag provided but not defined: -console"

When running docker run hello-world I get an "Incorrect Usage" error (full output pasted below). I'm running the following:
Docker 17.05.0-ce, build 89658be
docker-containerd 0.2.3 (commit 9048e5e)
runc v1.0.0-rc4
Linux kernel 4.1.15
Using buildroot 2017.11 (commit 1f1a242) to generate custom toolchain/rootfs
systemd 234
It seems as though I can pull the hello-world image down properly, as it is included in the docker images output. I'm wondering if there is an incompatibility between docker/containerd/runc? Or maybe something obvious? This is my first time working with Docker.
Additionally, I've run a docker check-config.sh script I found, which states that the only kernel configuration features I'm missing are optional. They are CONFIG_CGROUP_PIDS, CONFIG_CGROUP_HUGETLB, CONFIG_AUFS_FS, /dev/zfs, the zfs command, and the zpool command. Everything else, including everything required, is enabled.
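One commonly used copy of that script lives in the Moby repository's contrib directory; fetching and running it (a sketch, assuming curl is available and this is the same script) looks like:

# download the kernel-config checker and run it against the running kernel
curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o check-config.sh
bash check-config.sh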
Output:
# docker run hello-world
[ 429.332968] device vethc0d83d1 entered promiscuous mode
[ 429.359681] IPv6: ADDRCONF(NETDEV_UP): vethc0d83d1: link is not ready
Incorrect Usage.
NAME:
docker-runc create - create a container
USAGE:
docker-runc create [command options] <container-id>
Where "<container-id>" is your name for the instance of the container that you
are starting. The name you provide for the container instance must be unique on
your host.
DESCRIPTION:
The create command creates an instance of a container for a bundle. The bundle
is a directory with a specification file named "config.json" and a root
filesystem.
The specification file includes an args parameter. The args parameter is used
to specify command(s) that get run when the container is started. To change the
command(s) that get executed on start, edit the args parameter of the spec. See
"runc spec --help" for more explanation.
OPTIONS:
--bundle value, -b value path to the root of the bundle directory, defaults to the current directory
--console-socket value path to an AF_UNIX socket which will receive a file descriptor referencing the master end of the console's pseudoterminal
--pid-file value specify the file to write the process id to
--no-pivot do not use pivot root to jail process inside rootfs. This should be used whenever the rootfs is on top of a ramdisk
--no-new-keyring do not create a new session keyring for the container. This will cause the container to inherit the calling processes session key
--preserve-fds value Pass N additional file descriptors to the container (stdio + $LISTEN_FDS + N in total) (default: 0)
flag provided but not defined: -console
[ 429.832198] docker0: port 1(vethc0d83d1) entered disabled state
[ 429.849301] device vethc0d83d1 left promiscuous mode
[ 429.859317] docker0: port 1(vethc0d83d1) entered disabled state
docker: Error response from daemon: oci runtime error: flag provided but not defined: -console.
The -console option was replaced with --console-socket in runc in Dec 2016, for v1.0.0-rc4.
So I would guess you need either an older version of runc or a newer version of Docker.
If you are building Docker yourself, use Docker 17.09.0-ce or an older release of runc. I'm not sure if that's v0.1.1 or just an earlier 1.0 release such as v1.0.0-rc2.
If you were upgrading packages, something has gone wrong with the install. Probably purge everything and reinstall Docker.
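To confirm which component versions are actually installed before deciding which way to go (a sketch; docker-runc is the binary name Docker CE releases of that era shipped), you can check:

# versions reported by the Docker client and daemon
docker version
# default OCI runtime configured for the daemon
docker info | grep -i runtime
# version of the runc binary bundled with Docker 17.x
docker-runc --version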
