Node API failing with Error: EHOSTUNREACH - docker

ERRO 8dc [composerchannel][a68ccd16] failed to invoke chaincode name:"tryme", error: Failed to generate platform-specific docker build: Error returned from build: 1 "npm ERR! code EHOSTUNREACH
npm ERR! errno EHOSTUNREACH
npm ERR! request to https://registry.npmjs.org/composer-common failed, reason: connect EHOSTUNREACH 104.16.18.35:443
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2018-08-08T10_52_34_967Z-debug.log
I get this error from my Node.js API; when I open the docker logs I find that the error comes from the peer container.
What I understand from this error is that Hyperledger is trying to reach 104.16.18.35:443, which, as far as I can tell, is blocked by my firewall.
But the bigger question is: if my network is set up internally, why is the Docker container trying to reach this IP at all?
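For context: the peer itself does not fetch anything from npm. When Node.js chaincode is instantiated, Fabric builds a separate chaincode image, and that build runs npm install inside the build container, which is what reaches out to registry.npmjs.org (104.16.18.35 in the log above is one of the registry's addresses). A minimal sketch to check whether containers on this host can reach the registry at all (node:8 is just an arbitrary image that ships npm):
docker run --rm node:8 npm ping --registry https://registry.npmjs.org
If this also fails with EHOSTUNREACH, the firewall/proxy rules on the Docker host are what need to change, not the business network itself.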

Related

Npm offline package installation in docker container

I am trying to install node-red-contrib-influxdb in a Node-RED Docker container on a computer that has no internet access. I have a Windows computer with Node.js installed, on which I downloaded the node-red-contrib-influxdb package and npm-pack-all.
I ran npm-pack-all in the node-red-contrib-influxdb install location to bundle its dependencies (I have also installed those dependencies manually, but that didn't help), moved the resulting tgz file to the Docker container, and ran npm install on it. This results in the following error:
npm ERR! code EAI_AGAIN
npm ERR! errno EAI_AGAIN
npm ERR! request to https://registry.npmjs.org/influx failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org
npm ERR! A complete log of this run can be found in:
npm ERR! /usr/src/node-red/.npm/_logs/2023-01-31T14_33_13_404Z-debug.log
With the relevant lines in the log being:
15 silly resolveWithNewModule node-red-contrib-influxdb@0.6.1 checking installable status
16 silly fetchPackageMetaData error for influx@5.6.3 request to https://registry.npmjs.org/influx failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org
17 timing stage:rollbackFailedOptional Completed in 19ms
18 timing stage:runTopLevelLifecycles Completed in 7103ms
19 silly saveTree node-red-project@0.0.1
19 silly saveTree +-- @influxdata/influxdb-client@1.33.1
19 silly saveTree +-- influx@5.9.3
19 silly saveTree +-- lodash@4.17.21
19 silly saveTree `-- node-red-contrib-influxdb@0.6.1
20 verbose type system
21 verbose stack FetchError: request to https://registry.npmjs.org/influx failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org
21 verbose stack at ClientRequest.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/node-fetch-npm/src/index.js:68:14)
21 verbose stack at ClientRequest.emit (events.js:400:28)
21 verbose stack at TLSSocket.socketErrorListener (_http_client.js:475:9)
21 verbose stack at TLSSocket.emit (events.js:400:28)
21 verbose stack at emitErrorNT (internal/streams/destroy.js:106:8)
21 verbose stack at emitErrorCloseNT (internal/streams/destroy.js:74:3)
21 verbose stack at processTicksAndRejections (internal/process/task_queues.js:82:21)
22 verbose cwd /data
23 verbose Linux 4.4.0-cip-rt-moxa-imx7d
24 verbose argv "/usr/local/bin/node" "/usr/local/bin/npm" "install" "./node-red-contrib-influxdb-0.6.1.tgz"
25 verbose node v14.18.2
26 verbose npm v6.14.15
27 error code EAI_AGAIN
28 error errno EAI_AGAIN
29 error request to https://registry.npmjs.org/influx failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org
30 verbose exit [ 1, true ]
How would I resolve this error and enable offline installation of this particular npm package?
I would approach this in a different way.
Create a custom Node-RED image by extending the existing one, install the required nodes while building the image (on a machine with internet access), and then copy the new image to your offline machine.
e.g.
FROM nodered/node-red:latest
USER root
RUN npm install node-red-contrib-influxdb
USER node-red
The USER switch is needed because the install goes into /usr/src/node-red, alongside Node-RED itself.
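To move the resulting image to the offline machine, a docker save / docker load round trip should work (the image tag here is just an example):
docker build -t custom-node-red .
docker save -o custom-node-red.tar custom-node-red
docker load -i custom-node-red.tar
Run the build and save on the machine with internet access, copy the tar file across, and load it on the offline machine.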

Gitlab CI-CD via Docker: can't access Nexus in another container

I'm using GitLab CI/CD to build some projects with a single runner (for now) on Docker (the runner itself is a Docker container, so I guess this is Docker-in-Docker...).
My problem is that I can't reach my own Nexus npm repository while building:
npm install --registry=http://153.89.23.53:8082/repository/npm-all
npm ERR! code EHOSTUNREACH
npm ERR! errno EHOSTUNREACH
npm ERR! request to http://153.89.23.53:8082/repository/npm-all/typescript/-/typescript-3.6.5.tgz failed, reason: connect EHOSTUNREACH 153.89.23.53:8082
The same runner works perfectly on another server, but it doesn't work when running on the same server that hosts Nexus (everything is container-based).
The Gitlab runner is using the host network.
If I connect to the runner container and fetch 153.89.23.53:8082 (Nexus) directly, it works:
root@62591008a000:/# wget http://153.89.23.53:8082
--2020-07-13 09:56:16-- http://153.89.23.53:8082/
Connecting to 153.89.23.53:8082... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7952 (7.8K) [text/html]
Saving to: 'index.html'
index.html 100%[===========================================================================================>] 7.77K --.-KB/s in 0s
2020-07-13 09:56:16 (742 MB/s) - 'index.html' saved [7952/7952]
So I guess the problem occurs in the "second Docker container", the one used inside the runner, but I have no idea what I should change.
Note: I could probably make the GitLab runner join the Nexus network and use internal IPs, but this would break the scripts when the runner is started on other servers...
OK, I found the solution.
There is a network_mode setting in the runner configuration. The default value is bridge, not host.
config.toml
[runners.docker]
...
volumes = ["/cache"]
network_mode = "host"
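After changing config.toml, restart the runner so it picks up the new setting; a rough way to confirm that a host-networked container can reach Nexus (curlimages/curl is just a convenient throwaway image):
gitlab-runner restart
docker run --rm --network host curlimages/curl -I http://153.89.23.53:8082/repository/npm-all/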

Error while downloading geckodriver during webdriver-manager update from jenkins

I am trying to run webdriver-manager update from Jenkins. I am downloading geckodriver and chromedriver. chromedriver downloads and unzips properly, but the geckodriver download is not working.
However, this works fine locally; the issue occurs only in Jenkins.
Command used:
node_modules/protractor/bin/webdriver-manager update --ignore_ssl --proxy=http://proxy --versions.gecko=v0.25.0 --versions.chrome=78.0.3904.105
Firefox version in server: 60.9.0
Error:
[16:23:13] I/http_utils - ignoring SSL certificate
[16:23:13] I/config_source - curl -ok /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chrome-response.xml 'http://proxy...' -H 'host:chromedriver.storage.googleapis.com'
[16:23:13] I/http_utils - ignoring SSL certificate
[16:23:13] I/config_source - curl -ok /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/gecko-response.json 'http://proxy.../repos/mozilla/geckodriver/releases' -H 'host:api.github.com'
[16:23:13] I/http_utils - ignoring SSL certificate
[16:23:14] I/downloader - curl -ok /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105.zip 'http://proxy.../78.0.3904.70/chromedriver_linux64.zip' -H 'host:chromedriver.storage.googleapis.com'
[16:23:14] I/update - chromedriver: unzipping chromedriver_78.0.3904.105.zip
[16:23:14] I/update - chromedriver: setting permissions to 0755 for /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105
[16:23:16] I/http_utils - ignoring SSL certificate
[16:23:17] E/downloader - tunneling socket could not be established, statusCode=403
[16:23:17] I/update - geckodriver: file exists /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0.tar.gz
[16:23:17] I/update - geckodriver: unzipping geckodriver-v0.25.0.tar.gz
(node:42561) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, rename '/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver' -> '/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0'
at Object.renameSync (fs.js:598:3)
at unzip (/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/built/lib/cmds/update.js:240:8)
at files_1.FileManager.downloadFile.then.downloaded (/var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/built/lib/cmds/update.js:205:13)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:42561) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:42561) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
When I manually downloaded the driver files and put them inside the node_modules/selenium location, webdriver-manager update succeeds, but the tunneling socket error is still present. Logs below:
[16:30:00] I/http_utils - ignoring SSL certificate
[16:30:00] I/http_utils - ignoring SSL certificate
[16:30:00] I/http_utils - ignoring SSL certificate
[16:30:00] I/update - chromedriver: file exists /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105.zip
[16:30:00] I/update - chromedriver: unzipping chromedriver_78.0.3904.105.zip
[16:30:00] I/update - chromedriver: setting permissions to 0755 for /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_78.0.3904.105
[16:30:00] I/update - chromedriver: chromedriver_78.0.3904.105 up to date
[16:30:02] E/downloader - tunneling socket could not be established, statusCode=403
[16:30:02] I/update - geckodriver: file exists /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0.tar.gz
[16:30:02] I/update - geckodriver: unzipping geckodriver-v0.25.0.tar.gz
[16:30:02] I/update - geckodriver: setting permissions to 0755 for /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0
[16:30:02] I/update - geckodriver: geckodriver-v0.25.0 up to date
But ng e2e is failing with the below error:
[16:30:03] I/launcher - Running 1 instances of WebDriver
[16:30:03] I/direct - Using FirefoxDriver directly...
[16:30:03] E/launcher - spawn /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0 EACCES
[16:30:03] E/launcher - Error: spawn /var/lib/jenkins/jobs/x/workspace/node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.25.0 EACCES
at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
[16:30:03] E/launcher - Process exited with error code 199
npm ERR! code ELIFECYCLE
npm ERR! errno 1
I am using directConnect: true in my Protractor conf.js.
Can someone please point out what I am doing wrong here?
There are a few things that you can try here:
1) If your tests are running inside a container, you will have to disable dev-shm usage by adding the "--disable-dev-shm-usage" flag to your browser capabilities, or mount /dev/shm as a volume when you run your tests.
2) You can set marionette to true in your Firefox browser capabilities.
3) Run the container as root so that it runs as a privileged user.
4) Run the tests using ./node_modules/protractor/bin/protractor protractor.conf.js instead of using ng e2e
5) Update the webdriver packages using ./node_modules/protractor/bin/webdriver-manager update --ignore_ssl --proxy=http://proxy --versions.gecko=v0.25.0 --versions.chrome=78.0.3904.105
6) Try adding these lines to your entrypoint for the docker image:
#!/bin/bash
uid=$(stat -c %u ${PWD})
gid=$(stat -c %g ${PWD})
groupadd -o -g $gid protractor
useradd -m -o -u $uid -g $gid protractor
sudo -u protractor npm run test
Still, I cannot say for certain that one of these steps will solve your problem.
I had the same problem when doing this in a Docker container: the 'tar' and 'gzip' packages weren't installed. The problem was resolved after I installed them.
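How to install them depends on the base image; as a sketch, one of these (run in the Dockerfile or inside the container) should cover the common cases:
apt-get update && apt-get install -y tar gzip   # Debian/Ubuntu based images
apk add --no-cache tar gzip                     # Alpine based images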

Composer fails within Docker 'Failed to enable crypto'

I've been battling an issue with a corporate proxy when trying to run docker-compose up -d nginx mysql.
I'm attempting to run the Laradock container on OSX but keep running into errors when Composer attempts to install dependencies. I've updated my Docker settings with my corporate proxy details.
Before adding the proxy information, I was receiving this error:
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL operation failed with code 1. OpenSSL Error messages:
error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Since updating the proxy details, I am now receiving this error:
Step 27/183 : RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then composer global install ;fi
---> Running in a7699d4ecebd
Changed current directory to /home/laradock/.composer
Loading composer repositories with package information
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL: Success
Failed to enable crypto
failed to open stream: operation failed
I'm an experienced dev, but new to Docker. I think the error is happening because PHP runs inside the Docker container and for some reason does not have access to my local certificates?
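One way to narrow this down is to open a shell in the container where composer runs and fetch packagist directly; if the corporate proxy intercepts TLS, its CA certificate has to be added to the container's trust store. A rough sketch (the workspace service name and the Debian-style paths are assumptions, and corporate-proxy-ca.crt is a placeholder for your proxy's CA file):
docker-compose exec workspace bash
curl -v https://packagist.org/packages.json > /dev/null
cp corporate-proxy-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates
If curl reports a certificate error before the CA is installed and succeeds afterwards, the problem is the missing proxy CA rather than Composer itself.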

Hyperledger Composer: command composer network start failing

I am trying to deploy a business network using the command:
composer network start -c PeerAdmin@hlfv1 -n test-bna -V 0.0.1 -A admin -S adminpw
And it is failing with the error:
Error trying to start business network. Error: Failed to connect to
any peer event hubs. It is required that at least 1 event hub has been
connected to receive the commit event.
On checking the Composer logs, I see:
2018-11-08T13:30:59.190Z WARN :HLFConnection :_connectToEventHubs() event hub localhost:7051 failed to connect: 12 UNIMPLEMENTED: unknown service protos.Deliver {}$
2018-11-08T13:31:46.763Z WARN :HLFConnection :_connectToEventHubs() event hub localhost:7051 failed to connect: 12 UNIMPLEMENTED: unknown service protos.Deliver {}$
Could someone please help with resolving this?
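The "unknown service protos.Deliver" warning usually indicates a mismatch between the Fabric version the peers are running and the Fabric client that Composer uses for its event hub connection (the newer channel-based event service is not implemented on older peers). A rough first check (the container name below assumes the standard Composer dev fabric naming) is to confirm which peer images are actually running:
docker ps --format "table {{.Names}}\t{{.Image}}"
docker exec peer0.org1.example.com peer version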
