Error while downloading geckodriver during webdriver-manager update from Jenkins

I am trying to run webdriver-manager update from Jenkins, downloading both geckodriver and chromedriver. The chromedriver download and unzip work properly, but the geckodriver download does not.
This works fine locally; the issue occurs only in Jenkins.
Command used:
node_modules/protractor/bin/webdriver-manager update --ignore_ssl --proxy=http://proxy --versions.gecko=v0.25.0 --versions.chrome=78.0.3904.105
Firefox version in server: 60.9.0
Error:
[16:23:13] I/http_utils - ignoring SSL certificate
[16:23:13] I/config_source - curl -ok /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chrome-response.xml 'http://proxy...' -H 'host:chromedriver.storage.googleapis.com'
[16:23:13] I/http_utils - ignoring SSL certificate
[16:23:13] I/config_source - curl -ok /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/gecko-response.json 'http://proxy.../repos/mozilla/geckodriver/releases' -H 'host:api.github.com'
[16:23:13] I/http_utils - ignoring SSL certificate
[16:23:14] I/downloader - curl -ok /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105.zip 'http://proxy.../78.0.3904.70/chromedriver_linux64.zip' -H 'host:chromedriver.storage.googleapis.com'
[16:23:14] I/update - chromedriver: unzipping chromedriver_78.0.3904.105.zip
[16:23:14] I/update - chromedriver: setting permissions to 0755 for /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105
[16:23:16] I/http_utils - ignoring SSL certificate
[16:23:17] E/downloader - tunneling socket could not be established, statusCode=403
[16:23:17] I/update - geckodriver: file exists /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0.tar.gz
[16:23:17] I/update - geckodriver: unzipping geckodriver-v0.25.0.tar.gz
(node:42561) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, rename '/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver' -> '/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0'
at Object.renameSync (fs.js:598:3)
at unzip (/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/built/lib/cmds/update.js:240:8)
at files_1.FileManager.downloadFile.then.downloaded (/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/built/lib/cmds/update.js:205:13)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:42561) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:42561) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
When I manually downloaded the driver files and added them to the node_modules selenium location, webdriver-manager update succeeded, but the tunneling socket error was still present. Logs below:
[16:30:00] I/http_utils - ignoring SSL certificate
[16:30:00] I/http_utils - ignoring SSL certificate
[16:30:00] I/http_utils - ignoring SSL certificate
[16:30:00] I/update - chromedriver: file exists /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105.zip
[16:30:00] I/update - chromedriver: unzipping chromedriver_78.0.3904.105.zip
[16:30:00] I/update - chromedriver: setting permissions to 0755 for /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105
[16:30:00] I/update - chromedriver: chromedriver_78.0.3904.105 up to date
[16:30:02] E/downloader - tunneling socket could not be established, statusCode=403
[16:30:02] I/update - geckodriver: file exists /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0.tar.gz
[16:30:02] I/update - geckodriver: unzipping geckodriver-v0.25.0.tar.gz
[16:30:02] I/update - geckodriver: setting permissions to 0755 for /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0
[16:30:02] I/update - geckodriver: geckodriver-v0.25.0 up to date
But ng e2e is then failing with the error below:
[16:30:03] I/launcher - Running 1 instances of WebDriver
[16:30:03] I/direct - Using FirefoxDriver directly...
[16:30:03] E/launcher - spawn /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0 EACCES
[16:30:03] E/launcher - Error: spawn /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0 EACCES
at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
[16:30:03] E/launcher - Process exited with error code 199
npm ERR! code ELIFECYCLE
npm ERR! errno 1
I am using directConnect: true in my Protractor conf.js.
Can someone please check what I am doing wrong here?

There are a few things that you can try here:
1) If your tests are running inside a container, you will have to disable /dev/shm usage by adding the "--disable-dev-shm-usage" flag to your capabilities, or mount /dev/shm as a volume when you run your tests.
2) You can set marionette to true in your Firefox browser capabilities.
3) Run the container as root so that it runs as a privileged user.
4) Run the tests using ./node_modules/protractor/bin/protractor protractor.conf.js instead of using ng e2e
5) Update the webdriver packages using ./node_modules/protractor/bin/webdriver-manager update --ignore_ssl --proxy=http://proxy --versions.gecko=v0.25.0 --versions.chrome=78.0.3904.105
6) Try adding these lines to your entrypoint for the docker image:
#!/bin/bash
# create a user whose UID/GID match the owner of the mounted workspace
uid=$(stat -c %u ${PWD})
gid=$(stat -c %g ${PWD})
groupadd -o -g $gid protractor
useradd -m -o -u $uid -g $gid protractor
# run the tests as that user instead of root
sudo -u protractor npm run test
Still, I cannot say for certain that one of these steps will solve your problem; a combined sketch of steps 2, 4 and 5 follows.
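As a combined sketch of steps 2, 4 and 5 (the proxy URL and driver versions are taken from your question; the --capabilities.* command-line overrides are an assumption about your Protractor version and may need to move into conf.js instead):
# update the drivers through the corporate proxy, then run protractor directly
./node_modules/protractor/bin/webdriver-manager update --ignore_ssl \
  --proxy=http://proxy --versions.gecko=v0.25.0 --versions.chrome=78.0.3904.105
./node_modules/protractor/bin/protractor protractor.conf.js \
  --capabilities.browserName=firefox --capabilities.marionette=true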

I had the same problem when doing this in a Docker container: the tar and gzip packages weren't installed. The problem was resolved after I installed those packages.
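For reference, a minimal sketch of installing them in a Debian-based image (the package names assume apt; adjust for your base image):
# make sure the archive tools webdriver-manager needs to unpack geckodriver are present
apt-get update && apt-get install -y --no-install-recommends tar gzip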

Related

Hyperledger Fabric v2 new chaincode lifecycle install problem with dind vm endpoint + tls

I am having trouble with the new v2 chaincode lifecycle. I am using a Docker dind VM endpoint at https://127.0.0.1 with TLS on, and the peer has all the Docker client crypto material set:
CORE_VM_DOCKER_TLS_ENABLED=true
CORE_VM_DOCKER_TLS_CERT=/tmp/org1/peer1/docker/cert.pem
CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=host
CORE_VM_DOCKER_TLS_KEY=/tmp/org1/peer1/docker/key.pem
CORE_VM_ENDPOINT=https://127.0.0.1:2376
CORE_VM_DOCKER_TLS_CA=/tmp/org1/peer1/docker/ca.pem
Trying to install the chaincode package:
peer lifecycle chaincode install patient_consent-v0.0.1-package.tar.gz \
--peerAddresses fabric-dev-peer1-org1:7051 --connTimeout 10s \
--tlsRootCertFiles /tmp/org1/peer1/tls/msp/cacerts/fabric-dev-tlsca-org1-7052.pem \
-o fabric-dev-orderer1-org1:7050 --tls --cafile /tmp/org1/peer1/tls/msp/cacerts/fabric-dev-tlsca-org1-7052.pem
This gives me:
Error: chaincode install failed with
status: 500 - failed to invoke backing implementation of 'InstallChaincode'
could not build chaincode
docker build failed
docker image inspection failed
Get https://127.0.0.1:2376/images/dev-peer1-org1-patient_consent-v0.0.1-9aedb4f5f58cb4bf18cf38f53751928caf9074c4bcb6859d8417fb37c09ab596-0acf342a6da8bfef85ec6b4d9dbe3ca4236ab9e52d903bb9fb014db836696d7b/json
remote error:
tls: bad certificate
In the peer lifecycle chaincode install command, you have passed the wrong CA file for the orderer: your --cafile is the same as the peer's TLS root cert, but it should be the orderer's CA file.
--tlsRootCertFiles /tmp/org1/peer1/tls/msp/cacerts/fabric-dev-tlsca-org1-7052.pem \
-o fabric-dev-orderer1-org1:7050 --tls --cafile /tmp/org1/peer1/tls/msp/cacerts/fabric-dev-tlsca-org1-7052.pem
Usually when you spin up the test-network (2.0/2.1), the orderer TLS CA file is found at organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Check your setup and put the right path in the orderer's --cafile flag.
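Putting it together, a sketch of the corrected command; the orderer --cafile path below is the test-network default mentioned above and is only illustrative, so substitute the path from your own setup:
peer lifecycle chaincode install patient_consent-v0.0.1-package.tar.gz \
  --peerAddresses fabric-dev-peer1-org1:7051 --connTimeout 10s \
  --tlsRootCertFiles /tmp/org1/peer1/tls/msp/cacerts/fabric-dev-tlsca-org1-7052.pem \
  -o fabric-dev-orderer1-org1:7050 --tls \
  --cafile organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem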

Getting permission denied even as root inside the docker container

Connecting to a running Docker container as root still gives an Operation not permitted error when trying to apt-get update, yet I can still see sensitive files like /etc/passwd. Below are my configurations and the error messages from apt-get update. My host operating system is Ubuntu 18.04.3; my Docker version is 19.03.5, build 633a0ea838.
I create the container with the following Dockerfile:
FROM python:3.8-slim-buster
RUN useradd -ms /bin/bash andrej
WORKDIR /home/andrej
COPY . /home/andrej/
RUN apt-get update && \
apt-get install -y gcc && \
pip install -r requirements.txt && \
apt-get remove -y gcc && apt-get -y autoremove
RUN chown andrej:andrej pycurl && \
chmod 0744 pycurl
USER andrej
ENTRYPOINT ["uwsgi"]
CMD ["--ini", "uwsgi.ini"]
and start it with a docker-compose file that looks like this:
version: "3.3"
services:
andrej-cv:
build: ./andrej_cv
container_name: andrej-cv
restart: always
security_opt:
- no-new-privileges
expose:
- 5000
healthcheck:
test: ./pycurl --host=127.0.0.1 --port=5050 --uri=/health_check
interval: 1m30s
timeout: 10s
retries: 3
My docker daemon config:
{
  "icc": false,
  "userns-remap": "default",
  "log-driver": "syslog",
  "live-restore": true,
  "userland-proxy": false,
  "no-new-privileges": true
}
I connect to the container with the following command (as root):
docker exec -it -u root <container_hash> /bin/bash
But when I try to update, I get the following:
root@ed984abff684:/home/andrej# apt-get update
E: setgroups 65534 failed - setgroups (1: Operation not permitted)
E: setegid 65534 failed - setegid (1: Operation not permitted)
E: seteuid 100 failed - seteuid (1: Operation not permitted)
E: setgroups 0 failed - setgroups (1: Operation not permitted)
Hit:1 http://deb.debian.org/debian buster InRelease
Ign:2 http://deb.debian.org/debian buster-updates InRelease
Err:4 http://deb.debian.org/debian buster-updates Release
Could not open file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_buster-updates_Release - open (13: Permission denied) [IP: 151.101.36.204 80]
Hit:3 http://security-cdn.debian.org/debian-security buster/updates InRelease
rm: cannot remove '/var/cache/apt/archives/partial/*.deb': Permission denied
Reading package lists... Done
W: chown to _apt:root of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
W: chmod 0700 of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
W: chown to _apt:root of directory /var/lib/apt/lists/auxfiles failed - SetupAPTPartialDirectory (1: Operation not permitted)
W: chmod 0700 of directory /var/lib/apt/lists/auxfiles failed - SetupAPTPartialDirectory (1: Operation not permitted)
E: setgroups 65534 failed - setgroups (1: Operation not permitted)
E: setegid 65534 failed - setegid (1: Operation not permitted)
E: seteuid 100 failed - seteuid (1: Operation not permitted)
W: Download is performed unsandboxed as root as file '/var/lib/apt/lists/partial/deb.debian.org_debian_dists_buster_InRelease' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
E: setgroups 0 failed - setgroups (1: Operation not permitted)
W: Problem unlinking the file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_buster_InRelease - PrepareFiles (13: Permission denied)
W: Problem unlinking the file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_buster-updates_InRelease - PrepareFiles (13: Permission denied)
W: Problem unlinking the file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_buster-updates_Release - PrepareFiles (13: Permission denied)
E: The repository 'http://deb.debian.org/debian buster-updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: Problem unlinking the file /var/lib/apt/lists/partial/security.debian.org_debian-security_dists_buster_updates_InRelease - PrepareFiles (13: Permission denied)
In the container /etc/subuid and /etc/subgid look like this (both):
andrej:100000:65536
On the host /etc/subuid and /etc/subgid look like this (both):
andrej:100000:65536
dockremap:165536:65536
AppArmor is running on the Ubuntu host with the following status (only the docker-default profile):
andrej@machine:/etc/apparmor.d$ sudo aa-status
apparmor module is loaded.
38 profiles are loaded.
36 profiles are in enforce mode.
/sbin/dhclient
/snap/core/8268/usr/lib/snapd/snap-confine
/snap/core/8268/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
/usr/bin/evince
/usr/bin/evince-previewer
/usr/bin/evince-previewer//sanitized_helper
/usr/bin/evince-thumbnailer
/usr/bin/evince//sanitized_helper
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/lib/cups/backend/cups-pdf
/usr/lib/snapd/snap-confine
/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
/usr/sbin/cups-browsed
/usr/sbin/cupsd
/usr/sbin/cupsd//third_party
/usr/sbin/ippusbxd
/usr/sbin/tcpdump
docker-default
libreoffice-senddoc
libreoffice-soffice//gpg
libreoffice-xpdfimport
man_filter
man_groff
snap-update-ns.core
snap-update-ns.gnome-calculator
snap-update-ns.gnome-characters
snap-update-ns.gnome-logs
snap-update-ns.gnome-system-monitor
snap.core.hook.configure
snap.gnome-calculator.gnome-calculator
snap.gnome-characters.gnome-characters
snap.gnome-logs.gnome-logs
snap.gnome-system-monitor.gnome-system-monitor
2 profiles are in complain mode.
libreoffice-oopslash
libreoffice-soffice
17 processes have profiles defined.
14 processes are in enforce mode.
docker-default (1101)
docker-default (1102)
docker-default (1111)
docker-default (1600)
docker-default (1728)
docker-default (1729)
docker-default (1730)
docker-default (1731)
docker-default (1732)
docker-default (1798)
docker-default (1799)
docker-default (1800)
docker-default (1801)
docker-default (1802)
0 processes are in complain mode.
3 processes are unconfined but have a profile defined.
/sbin/dhclient (491)
/usr/sbin/cups-browsed (431)
/usr/sbin/cupsd (402)
SELinux seems to be disabled, as there is no /etc/selinux/config file and the getenforce and sestatus commands are not available.
Also, the su andrej command run as root (where andrej is an unprivileged user in the container) errors out with su: cannot set groups: Operation not permitted
I am not sure about plain Docker, but in Kubernetes with a runc container the following helps me:
Get root access to the container.
List all containers:
minikube ssh docker container ls
Connect to your container (use your container id from previous command instead of 44a7ad70d45b):
minikube ssh "docker container exec -it -u 0 44a7ad70d45b /bin/bash"
As root inside container:
root@mycontainer:/# apt-config dump | grep Sandbox::User
APT::Sandbox::User "_apt";
root@mycontainer:/# cat <<EOF > /etc/apt/apt.conf.d/sandbox-disable
APT::Sandbox::User "root";
EOF
Be sure, that result is valid:
root@mycontainer:/# apt-config dump | grep Sandbox::User
APT::Sandbox::User "root";
Run apt update and see:
Get:1 http://archive.ubuntu.com/ubuntu hirsute InRelease [269 kB]
Get:2 http://apt.postgresql.org/pub/repos/apt hirsute-pgdg InRelease [16.7 kB]
...
I had exactly the same issues when running an Ubuntu 16.04-based container in rootless Podman with Manjaro as the host system.
TL;DR: try to rebuild the image. That helped in my case.
The issue is likely that Docker cannot map the /var/lib/apt/lists/package directory's owner (_apt) UID into the host's UID namespace. This might happen if /etc/sub{u,g}id is modified after the image is pulled/built.
This is only a guess, but the reason might be that Docker performs the UID mapping for the image first and /etc/sub{u,g}id is modified afterwards, resulting in different UID map rules, so Docker cannot map the user inside the container.
You can verify this by running docker inspect <image name> and checking the directories in the "LowerDir" part. In one of those there should be a directory var/lib/apt/lists/package with a UID outside the range specified for dockremap in /etc/sub{u,g}id. The exact command for Podman was podman image inspect <image name> --format '{{.GraphDriver.Data.LowerDir}}', but the CLI APIs of Podman and Docker are close to identical, so the same command should work with Docker as well.
E.g. I had an entry tlammi:100000:65536 in /etc/sub{u,g}id but /var/lib/apt/lists/package was owned by UID 165538 in host side which is outside of range [100000, 100000 + 65536).
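A hedged sketch of this check, assuming the overlay2 storage driver (note that LowerDir may print as a colon-separated list of directories):
# print the image's lower layer directories
docker image inspect <image name> --format '{{.GraphDriver.Data.LowerDir}}'
# for each layer directory printed above, look at the owners on the host side
stat -c '%u %g %n' <layer dir>/var/lib/apt/lists/* 2>/dev/null
# compare those UIDs/GIDs against the configured remap range
grep dockremap /etc/subuid /etc/subgid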
We ran into the same issue with rootless Podman.
We had changed the subuid/subgid range of the user. This means one needs to fix the files stored with the old ranges, or just delete the temporary files from the container storage directory:
$ podman info|grep GraphRoot
GraphRoot: /opt/tmp/container_graphroot/.local/share/containers/storage
$ rm -rf /opt/tmp/container_graphroot/.local/share/containers/storage/*
The Docker container you describe is working as designed, with the default capabilities.
For a fast test of whether you are missing a capability, run your container with full/all container capabilities (--privileged).
More explanation about capabilities, from the Docker documentation:
The Linux kernel is able to break down the privileges of the root user
into distinct units referred to as capabilities. For example, the
CAP_CHOWN capability is what allows the root user to make arbitrary
changes to file UIDs and GIDs. The CAP_DAC_OVERRIDE capability allows
the root user to bypass kernel permission checks on file read, write
and execute operations. Almost all of the special powers associated
with the Linux root user are broken down into individual capabilities.
This breaking down of root privileges into granular capabilities
allows you to:
Remove individual capabilities from the root user account, making it less powerful/dangerous.
Add privileges to non-root users at a very granular level.
Capabilities apply to both files and threads. File capabilities allow
users to execute programs with higher privileges. This is similar to
the way the setuid bit works. Thread capabilities keep track of the
current state of capabilities in running programs.
The Linux kernel lets you set capability bounding sets that impose
limits on the capabilities that a file/thread can gain.
Docker imposes certain limitations that make working with capabilities
much simpler. For example, file capabilities are stored within a
file’s extended attributes, and extended attributes are stripped out
when Docker images are built. This means you will not normally have to
concern yourself too much with file capabilities in containers.
It is of course possible to get file capabilities into containers at runtime, however this is not recommended.
In an environment without file based capabilities, it’s not possible
for applications to escalate their privileges beyond the bounding set
(a set beyond which capabilities cannot grow). Docker sets the
bounding set before starting a container. You can use Docker commands
to add or remove capabilities to or from the bounding set.
By default, Docker drops all capabilities except those needed, using a
whitelist approach.
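To illustrate that whitelist approach, a sketch (the capability names are standard kernel ones; which ones your workload actually needs is for you to determine):
# drop every capability, then add back only the ones needed
docker run --rm --cap-drop ALL --cap-add CHOWN --cap-add SETUID --cap-add SETGID \
  python:3.8-slim-buster id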
This is not an answer yet (as of 2021-05-10), but my research on this issue so far. Hopefully it gives others hints about where to look further. Maybe I'll come back and edit this post into a real answer.
As far as I can see, the "issue" is caused by the use of the security option no-new-privileges. Note that it is specified both in the OP's docker-compose file and in the Docker daemon's configuration file.
Here is its description in the Docker's doc:
--security-opt="no-new-privileges:true" Disable container processes from gaining new privileges
...
If you want to prevent your container processes from gaining additional privileges, you can execute the following command:
$ docker run --security-opt no-new-privileges -it centos bash
no_new_privs is a bit that was added to the Linux kernel in 3.5. Here is its documentation:
The no_new_privs bit (since Linux 3.5) is a new, generic mechanism to make it safe for a process to modify its execution environment in a manner that persists across execve. Any task can set no_new_privs. Once the bit is set, it is inherited across fork, clone, and execve and cannot be unset. With no_new_privs set, execve() promises not to grant the privilege to do anything that could not have been done without the execve call. For example, the setuid and setgid bits will no longer change the uid or gid; file capabilities will not add to the permitted set, and LSMs will not relax constraints after execve.
Note in particular "the setuid and setgid bits will no longer change the uid or gid". This may be why you see the following error messages:
E: setgroups 65534 failed - setgroups (1: Operation not permitted)
E: setegid 65534 failed - setegid (1: Operation not permitted)
E: seteuid 100 failed - seteuid (1: Operation not permitted)
E: setgroups 0 failed - setgroups (1: Operation not permitted)
I found an article that talks about it with examples clearly: Running Docker application containers more securely.
My current thoughts:
I don't call the failed "apt-get update" an "issue" or a "problem" because that should be an intentional behavior for security consideration. In other words, it's a good thing.
Because the quoted kernel doc says "Once the bit is set, it ... cannot be unset", I believe you won't be able to "fix" it in the existing containers.
I don't think removing no-new-privileges is the right solution. At least it's not right before you fully discuss it with your team.
Alternatively, create a container without no-new-privileges for testing purposes only (a sketch follows below).
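A minimal sketch of that test, reusing the base image from the Dockerfile above (this only compares behaviour with and without the flag; it is not a fix):
# a throwaway container without no-new-privileges, for comparison
docker run --rm python:3.8-slim-buster apt-get update
# the same image with the restriction from the compose file and daemon config
docker run --rm --security-opt no-new-privileges python:3.8-slim-buster apt-get update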
NOTE: only if you have docker and docker-compose installed.
If you had initially not been running as root and have rebuilt the image, run a prune:
docker system prune -f
docker-compose up
This makes sure you are running on a fresh build.

Composer fails within Docker 'Failed to enable crypto'

I've been battling an issue with a corporate proxy when trying to run docker-compose up -d nginx mysql
I'm attempting to run the Laradock container on OSX but keep running into errors when Composer attempts to install dependencies. I've updated my Docker settings to tell it about my corporate proxy.
Before adding the proxy information, I was receiving this error:
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL operation failed with code 1. OpenSSL Error messages:
error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Since updating the proxy details, I am now receiving this error:
Step 27/183 : RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then composer global install ;fi
---> Running in a7699d4ecebd
Changed current directory to /home/laradock/.composer
Loading composer repositories with package information
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL: Success
Failed to enable crypto
failed to open stream: operation failed
I'm an experienced dev, but new to Docker. I think the error is caused by PHP running inside the Docker container without, for some reason, access to my local certificates.
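If the corporate proxy intercepts TLS, one hedged workaround sketch is to trust the corporate root CA inside the container; this assumes a Debian-based image and that corporate-ca.crt is the proxy's exported root certificate, neither of which is confirmed by the question:
# add the corporate root CA to the container's trust store
cp corporate-ca.crt /usr/local/share/ca-certificates/corporate-ca.crt
update-ca-certificates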

Node API failing with Error : EHOSTUNREACH

ERRO 8dc [composerchannel][a68ccd16] failed to invoke chaincode name:"tryme" , error: Failed to generate platform-specific docker build: Error returned from build: 1 "npm ERR! code EHOSTUNREACH
npm ERR! errno EHOSTUNREACH
npm ERR! request to https://registry.npmjs.org/composer-common failed, reason: connect EHOSTUNREACH 104.16.18.35:443
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2018-08-08T10_52_34_967Z-debug.log
I get this error on my Node.js API; when I open the Docker logs I find the error came from the peer container.
What I understand from this error is that Hyperledger is trying to reach 104.16.18.35:443, which is being blocked by my firewall, as far as I can tell.
But the bigger question is: if my network is set up internally, why is the Docker container trying to reach this IP?
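For what it's worth, the error log shows the chaincode build running npm install inside a container, which is why it fetches composer-common from the public npm registry. A hedged way to reproduce the connectivity failure in isolation (the image tag is illustrative):
# check whether a container on this host can reach the npm registry at all
docker run --rm node:8 npm ping --registry https://registry.npmjs.org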

HFC: CC deployment successful while PEER: "Error building images: ..."

TL;DR: go to the ---- EDIT section below.
I am using hfc#0.6.5 in a standalone node.js application.
A membersrvc and peer are started with docker-compose, where:
membersrvc:
  container_name: membersrvc
  image: hyperledger/fabric-membersrvc:latest
  ports:
    - "7054:7054"
  command: membersrvc
vp0:
  container_name: peer
  image: hyperledger/fabric-peer:latest
  ports:
    - "7050:7050"
    - "7051:7051"
    - "7053:7053"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp0
    - CORE_SECURITY_ENABLED=true
    - CORE_PEER_PKI_ECA_PADDR=172.17.0.2:7054
    - CORE_PEER_PKI_TCA_PADDR=172.17.0.2:7054
    - CORE_PEER_PKI_TLSCA_PADDR=172.17.0.2:7054
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=noops
    [...]
  links:
    - membersrvc
  command: sh -c "sleep 10; peer node start"
The node.js application successfully registers new users and tries to deploy a chaincode using the enrolledUser.deploy(deployRequest); method.
As the value of deployRequest.chaincodePath, the path 'github.com/asset-chaincode/' is set. The directory contains a chaincode .go file.
The callback of deployTx.on('complete', cb) prints its log message:
SUCCESS: Successfully deployed chaincode [chainCodeId:415123c8532fd28393d7a5370193af555e9f2141a4b56e635806e5e1fcce1e58], deploy request={"fcn":"init","args":[],"confidential":true,"metadata":{"type":"Buffer","data":[48,...155,253,0]},"chaincodePath":"github.com/gvlax/chaincodes/asset-chaincode"}, response={"uuid":"415123c8532fd28393d7a5370193af555e9f2141a4b56e635806e5e1fcce1e58","chaincodeID":"415123c8532fd28393d7a5370193af555e9f2141a4b56e635806e5e1fcce1e58"}
chaincodeId=415123c8532fd28393d7a5370193af555e9f2141a4b56e635806e5e1fcce1e58
However, when I check the output of the peer console, I can see error messages (full logs below):
peer | 15:40:00.416 [dockercontroller] deployImage -> ERRO 47a Error building images: The command '/bin/sh -c go install build-chaincode && cp src/build-chaincode/vendor/github.com/hyperledger/fabric/peer/core.yaml $GOPATH/bin && mv $GOPATH/bin/build-chaincode $GOPATH/bin/23991376d1b935790631a448843fd12a9d60f7ab3f0b8b55f629cf0190077436' returned a non-zero code: 1
[...]
peer | ---> Running in 812439684bf7
peer | src/build-chaincode/asset-chaincode.go:25:2: cannot find package "github.com/hyperledger/fabric/core/chaincode/shim" in any of:
peer | /opt/go/src/github.com/hyperledger/fabric/core/chaincode/shim (from $GOROOT)
peer | /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim (from $GOPATH)
peer | src/build-chaincode/asset-chaincode.go:26:2: cannot find package "github.com/hyperledger/fabric/core/crypto/primitives" in any of:
peer | /opt/go/src/github.com/hyperledger/fabric/core/crypto/primitives (from $GOROOT)
peer | /opt/gopath/src/github.com/hyperledger/fabric/core/crypto/primitives (from $GOPATH)
peer | package build-chaincode
peer | imports github.com/hyperledger/fabric/vendor/github.com/op/go-logging: must be imported as github.com/op/go-logging
[...]
It looks like there are some issues not exactly related to the chaincode compilation on the peer, or the chaincode is deployed to a location on the peer where the relative import paths in the chaincode cannot be resolved.
---- EDIT (some hours later)
After many attempts and experiments, I think my problem is always the same:
regardless of the chaincode (valid, stored locally in a directory $GOPATH/src/github.com/<mychaincode_dir>, and building with no errors), deploying it to a peer with the enrolledUser.deploy(deployRequest) method (hfc#0.6.5) always gives the same errors on the target node:
peer | Step 4 : RUN go install build-chaincode && cp src/build-chaincode/vendor/github.com/hyperledger/fabric/peer/core.yaml $GOPATH/bin && mv $GOPATH/bin/build-chaincode $GOPATH/bin/0881d0fe8f4528e1369bfe917cd207d919a07758cc098e212ca74f6766c636d4
peer | ---> Running in b0ca2abbe609
peer | src/build-chaincode/asset-chaincode.go:25:2: cannot find package "github.com/hyperledger/fabric/core/chaincode/shim" in any of:
peer | /opt/gopath/src/build-chaincode/vendor/github.com/hyperledger/fabric/core/chaincode/shim (vendor tree)
peer | /opt/go/src/github.com/hyperledger/fabric/core/chaincode/shim (from $GOROOT)
The chaincode imports the peer complains about are:
import (
"encoding/base64"
"errors"
"fmt"
"github.com/hyperledger/fabric/core/chaincode/shim"
[...]
Moreover, when I go into the peer container, the shim can be found there:
$ docker exec -it peer bash
$ find / -name shim
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim
$ echo $GOPATH/
/opt/gopath/
Isn't something wrong with the fabric-peer image?
When you deploy with hfc, you need to "vendor" the Fabric chaincode packages - see http://hyperledger-fabric.readthedocs.io/en/v0.6/nodeSDK/node-sdk-indepth/#chaincode-deployment
You can also have a look at https://github.com/IBM-Blockchain/SDK-Demo/tree/master/src/chaincode for an example of doing this with the SDK.
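A minimal sketch of the vendoring step, assuming the layout from the question (the linked docs use a vendoring tool; a manual copy of the imported packages achieves the same layout, and the crypto/primitives package would be vendored the same way):
# copy the Fabric shim into the chaincode's vendor tree
cd $GOPATH/src/github.com/asset-chaincode
mkdir -p vendor/github.com/hyperledger/fabric/core/chaincode/shim
cp -r $GOPATH/src/github.com/hyperledger/fabric/core/chaincode/shim/. \
      vendor/github.com/hyperledger/fabric/core/chaincode/shim/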
