We need to run the composer command from outside the Docker containers' network.
When I specify the orderer and peer host names (e.g. peer0.org1.example.com) in the /etc/hosts file, the composer command works.
However, if I specify the server's IP address instead, it does not. Here is a sample:
$ composer network list -p hlfv1 -n info-share-bc -i PeerAdmin -s secret
✖ List business network info-share-bc
Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
Command succeeded
This is a command example when I specify the host name in /etc/hosts:
$ composer network list -p hlfv1 -n info-share-bc -i PeerAdmin -s secret
✔ List business network info-share-bc
name: info-share-bc
models:
- org.hyperledger.composer.system
- bc.share.info
<snip>
I believe that when the server name cannot be resolved, we are supposed to specify the option called "ssl-target-name-override" in the Hyperledger Fabric Node.js SDK, as described here:
https://jimthematrix.github.io/Remote.html
- ssl-target-name-override {string} Used in test environment only, when the server certificate's hostname (in the 'CN' field) does not match the actual host endpoint that the server process runs at, the application can work around the client TLS verify failure by setting this property to the value of the server certificate's hostname
Is there any option to specify the host name in the connection profile (connection.json)?
Found a workaround: the hostnameOverride option in the connection profile resolved the connection issue.
"eventURL": "grpcs://<target-host>:17053",
"hostnameOverride": "peer0.org1.example.com",
I have a CapRover instance on a DigitalOcean droplet that I created, and I want to use it to run the CapRover sample apps.
I opened the DigitalOcean droplet web console in order to run the CapRover instance.
I ran the following commands:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
and got this:
Skipping adding existing rule
Skipping adding existing rule (v6)
Skipping adding existing rule
Skipping adding existing rule (v6)
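The "Skipping adding existing rule" messages mean those firewall rules were already in place; as a sketch (standard ufw usage), the active rule set can be confirmed with:
sudo ufw status verbose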
I then ran this:
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
I got this:
Unable to find image 'caprover/caprover:latest' locally
latest: Pulling from caprover/caprover
Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
Status: Downloaded newer image for caprover/caprover:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint priceless_sammet (9da9028cfc4873818f113458237ebd00f9c64fa648b853730a60b10bea39c720): Bind for 0.0.0.0:3000 failed: port is already allocated.
I tried changing the ports to:
docker run -p 81:81 -p 444:444 -p 3321:3321 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
and got this:
Captain Starting ...
Installing Captain Service ...
Installation of CapRover is starting...
For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
>>> Checking System Compatibility <<<
Docker Version passed.
Ubuntu detected.
X86 CPU detected.
Total RAM 1033 MB
Are your trying to run CapRover on a local machine or a machine without public IP?
In that case, you need to add this to your installation command:
-e MAIN_NODE_IP_ADDRESS='127.0.0.1'
Otherwise, if you are running CapRover on a VPS with public IP:
Your firewall may have been blocking an in-use port: 80
A simple solution on Ubuntu systems is to run "ufw disable" (security risk)
Or [recommended] just allowing necessary ports:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
See docs for more details on how to fix firewall issues
Finally, if you are an advanced user, and you want to bypass this check (NOT RECOMMENDED),
you can append the docker command with an addition flag: -e BY_PASS_PROXY_CHECK='TRUE'
Installation failed.
Error: Port seems to be closed: 80
at Request._callback (/usr/src/app/built/utils/CaptainInstaller.js:149:24)
at Request.self.callback (/usr/src/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:400:28)
at Request.<anonymous> (/usr/src/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:400:28)
at IncomingMessage.<anonymous> (/usr/src/app/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:519:28)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
How can I open ports 80, 443, and 3000 so that I can run the CapRover instance?
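For reference, the earlier "Bind for 0.0.0.0:3000 failed: port is already allocated" error means something was already publishing port 3000 before the first docker run. A hedged sketch of how to find what is holding a port, using standard Ubuntu and Docker commands:
# show the process listening on port 3000
sudo ss -tlnp | grep ':3000'
# show any container already publishing port 3000
docker ps --filter "publish=3000"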
Running Windows-based containers, I am unable to access the internet from within them. Example:
From my host machine I can run the following command:
PS C:\Developer> nslookup aka.ms
Server: cache100.ns.tdc.net
Address: 193.162.153.164
Non-authoritative answer:
Name: aka.ms
Address: 88.221.62.148
When I try to do this from inside a container:
PS C:\Developer> docker run mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019 powershell nslookup aka.ms
*** UnKnown can't find aka.ms: Server failed
Server: UnKnown
Address: 172.28.112.1
While I am not specifically interested in aka.ms, this error happens for all services I try to connect to, so I am not able to install external libraries, etc.
I am running Docker Desktop v19.03.12. The behaviour occurs regardless of whether I have WSL 2 enabled or not, and my Docker setup is all defaults.
Note: I have experienced this behaviour some time ago. Back then I added the following snippet to my Dockerfile:
RUN powershell -command certutil -generateSSTFromWU roots.sst && certutil -addstore -f root roots.sst && del roots.sst
To my understanding this would install the trusted root certificates, which solved the issue. This command, however, now fails:
PS C:\> certutil -generateSSTFromWU roots.sst
The server name or address could not be resolved 0x80072ee7 (WinHttp: 12007 ERROR_WINHTTP_NAME_NOT_RESOLVED) -- http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab
CertUtil: -generateSSTFromWU command FAILED: 0x80072ee7 (WinHttp: 12007 ERROR_WINHTTP_NAME_NOT_RESOLVED)
CertUtil: The server name or address could not be resolved
I tested this on a basic Server Core image and got it working by adding DNS settings.
I connected to the container interactively to test this, but you can probably add the commands to a Dockerfile too.
docker run -it <image> powershell
Type netsh to start network configuration. First we look up the interface we want to change (in my case "Ethernet 2"), then we add a static DNS server to that interface:
netsh
interface ip show config
interface ipv4 set dns name="Ethernet 2" static 8.8.8.8
exit
Then verify that the lookup works:
PS C:\> nslookup aka.ms
Server: dns.google
Address: 8.8.8.8
Non-authoritative answer:
Name: aka.ms
Address: 23.38.17.26
Reference Docker Networking
Reference howto
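Alternatively, a DNS server can be supplied when the container is started, which avoids the interactive netsh step; this uses the standard docker run --dns flag and is a sketch I have not verified against this exact image:
docker run --dns 8.8.8.8 mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019 powershell nslookup aka.ms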
I am attempting to run docker-machine create against an Ubuntu 16.04 host, like this:
ssh-keygen -R ${remote_host}
ssh-copy-id -i ~/.ssh/id_host_rsa.pub root@${remote_host}
docker-machine create \
--driver generic \
--generic-ip-address=${remote_host} \
--generic-ssh-key ~/.ssh/id_host_rsa \
--generic-ssh-user=root ${machine_name}
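(As an aside, a quick sanity check before running docker-machine is to confirm that key-based login works without a password prompt; this sketch uses the standard ssh BatchMode option, which fails instead of asking:)
ssh -i ~/.ssh/id_host_rsa -o BatchMode=yes root@${remote_host} true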
Version information:
docker --version
Docker version 19.03.6, build 369ce74a3c
docker-machine --version
docker-machine version 0.16.2, build bd45ab13
I am repeatedly asked for a password. Why is this?
Here is the output:
...
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: Received disconnect from 77.68.21.66 port 22:2: Too many authentication failures
ERROR: Disconnected from 77.68.21.66 port 22
Running pre-create checks...
Creating machine...
(production) Importing SSH key...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Password:
Detecting the provisioner...
Password:
Provisioning with ubuntu(systemd)...
Password:
.. etc
The reason for this problem was the ordering of entries in ~/.ssh/config.
I had a Host * entry first in the config, before the specific Host XX.XX.XX.XX entry for my server.
I moved the wildcard entry to the end of ~/.ssh/config; now the password is no longer constantly asked for, and the problem is fixed.
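For illustration, a minimal sketch of the corrected ~/.ssh/config ordering (the User and IdentityFile lines are assumptions for illustration, not copied from my actual config):
Host XX.XX.XX.XX
    User root
    IdentityFile ~/.ssh/id_host_rsa

Host *
    # wildcard defaults go last: ssh takes the first value it finds
    # for each option, so the specific entry above wins
    IdentitiesOnly yes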
I hope this helps someone.
I am a complete newbie with Hyperledger Fabric. As described in the docs, I installed all the prerequisites and set up the network artifacts. But when I then try to bring up the network, I get this error and I don't know what to do:
Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "tlsca.example.com")
Usage:
  peer channel create [flags]

Flags:
  -c, --channelID string   In case of a newChain command, the channel ID to create.
  -f, --file string        Configuration transaction file generated by a tool such as configtxgen for submitting to orderer
  -t, --timeout int        Channel creation timeout (default 5)

Global Flags:
      --cafile string                       Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
      --certfile string                     Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer endpoint
      --clientauth                          Use mutual TLS when communicating with the orderer endpoint
      --keyfile string                      Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
      --logging-level string                Default logging level and overrides, see core.yaml for full syntax
  -o, --orderer string                      Ordering service endpoint
      --ordererTLSHostnameOverride string   The hostname override to use when validating the TLS connection to the orderer.
      --tls                                 Use TLS when communicating with the orderer endpoint
  -v, --version                             Display current version of fabric peer server
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ===========
ERROR !!!! Test failed
OS: Windows 7
Hyperledger Fabric: 1.1
Latest Docker installation
After multiple attempts to get the network running I had created leftover containers, which I had to delete (docker ps -aq only lists them; docker rm -f $(docker ps -aq) removes them). I also had to bring the network down first with the command ./byfn.sh -m down. Then I restarted everything by using byfn.sh -m generate -c mychannel and byfn.sh -m up -c mychannel -s couchdb. Then everything worked as described in the docs.
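Put together as one reset sequence (a sketch assuming you are inside the fabric-samples/first-network directory):
./byfn.sh -m down                         # tear the network down
docker rm -f $(docker ps -aq)             # remove all leftover containers
./byfn.sh -m generate -c mychannel        # regenerate the artifacts
./byfn.sh -m up -c mychannel -s couchdb   # bring the network back up with CouchDB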
I am new to Nagios.
I am trying to configure the check_disk service for one host, but I am not getting the expected results.
I should get emails when disk usage goes beyond 80%.
There is already a service defined for this task with multiple hosts, as below:
define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost, host1, host2, host3, host4, host5, host6
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
        contact_groups          unix-admins,db-admins
        }
The issue:
I then tried to test a single host, i.e. host2. The current usage on host2 is as follows:
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-rootvol 94G 45G 45G 50% /
So, to get emails immediately, I wrote another service as below, with the warning threshold set at 60% and the critical threshold at 40% free space (host2 has about 50% free, so a warning should fire).
define service{
        use                     local-service
        host_name               host2
        service_description     Root Partition again
        check_command           check_local_disk!60%!40%!/
        contact_groups          dev-admins
        }
But I still do not receive any emails for it.
Where is it going wrong?
The "check_local_disk" command is defined as below:
define command{
        command_name    check_local_disk
        command_line    $USER1$/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
        }
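(As a quick sanity check of the thresholds themselves, the plugin can be run by hand with the same arguments; the plugin path below is an assumption, yours may differ:)
/usr/local/nagios/libexec/check_disk -w 60% -c 40% -p /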
Your command definition is currently set up to check only your Nagios server's own disk, not the remote hosts (such as host2). You need to define a new command that executes check_disk on the remote host via NRPE (Nagios Remote Plugin Executor).
On the Nagios server, define the following:
define command {
        command_name    check_remote_disk
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_disk -a $ARG1$ $ARG2$ $ARG3$
        register        1
        }
define service{
        use                     generic-service
        host_name               host1, host2, host3, host4, host5, host6
        service_description     Root Partition
        check_command           check_remote_disk!20%!10%!/
        contact_groups          unix-admins,db-admins
        }
Restart the Nagios service.
On the remote host:
Ensure you have the NRPE agent installed.
Instructions for Ubuntu: http://tecadmin.net/install-nrpe-on-ubuntu/
Instructions for CentOS / RHEL: http://sharadchhetri.com/2013/03/02/how-to-install-and-configure-nagios-nrpe-in-centos-and-red-hat/
Ensure there is a command defined for check_disk on the remote host. This is usually included in nrpe.cfg but commented out; you'd have to un-comment that line (see the sketch after this list).
Ensure you have the check_disk plugin installed on the remote host. Mine is located at: /usr/lib64/nagios/plugins/check_disk
Ensure that the allowed_hosts field of nrpe.cfg includes the IP address / hostname of your Nagios server.
Ensure that the dont_blame_nrpe field of nrpe.cfg is set to 1 to allow command-line arguments to NRPE commands: dont_blame_nrpe=1
If you made any changes, restart the nrpe service.
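For reference, a minimal sketch of the relevant nrpe.cfg lines on the remote host; the Nagios server IP placeholder and the plugin path are assumptions for illustration:
# allow the Nagios server to talk to this NRPE agent
allowed_hosts=127.0.0.1,<nagios-server-ip>
# allow arguments to be passed in from check_nrpe
dont_blame_nrpe=1
# the command invoked by: check_nrpe -c check_disk -a <warn> <crit> <path>
command[check_disk]=/usr/lib64/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$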