I am trying to install Hyperledger Composer on macOS by using this tutorial.
When I run the following command from the tutorial:
composer network start --networkName tutorial-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
I get the following error:
Error: Error trying to start business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: REQUEST_TIMEOUT
When I check my Docker logs, I see further errors there as well.
Is there a way I can pass the .npmrc file to Docker to solve this problem?
Can I set NODE_TLS_REJECT_UNAUTHORIZED=0 as an environment variable and pass it to Docker?
Is there any workaround that would solve this problem?
Notes:
I have provided the .npmrc file in the following command:
composer network install --card PeerAdmin@hlfv1 --archiveFile tutorial-network@0.0.1.bna -o npmrcFile=/Users/1/.npmrc
I have the following .npmrc file:
always-auth=true
strict-ssl=false
ca=
loglevel=verbose
proxy= myproxy
https-proxy=myproxy
unsafe-perm=true
NODE_TLS_REJECT_UNAUTHORIZED=0
registry=http://registry.npmjs.org/
I am running all the applications behind a corporate firewall, on macOS.
You can pass an npmrc file as part of the composer network install command. When Fabric builds the chaincode image for the business network, it will use that npmrc file as part of the npm install it performs. See
https://hyperledger.github.io/composer/latest/managing/connector-information
for more information about the CLI options.
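Putting it together, the end-to-end sequence looks roughly like this (a sketch assembled from the commands in the question, assuming the card name PeerAdmin@hlfv1 and the .npmrc path shown above; the npmrc option only matters for the install step, since that is where the chaincode image is built):
# install the business network archive, passing the npmrc file
composer network install --card PeerAdmin@hlfv1 --archiveFile tutorial-network@0.0.1.bna -o npmrcFile=/Users/1/.npmrc
# then start the network as before
composer network start --networkName tutorial-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card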
Background
My learning objective is to set up an AWS S3 gateway server locally on my Raspberry Pi, for Kubernetes to connect to S3 via NFS. AWS has also provided some instructions on gateway server creation. (Sources: AWS NFS CSI, gateway creation.)
Problem
What I am unsure of is how to set up the gateway server in Kubernetes. For starters, I'm trying to build a Docker image that could launch the Linux KVM qcow2 image they have provided, but this is where I am failing.
What I've tried to do so far
Dockerfile
FROM ubuntu:latest
COPY ./aws-storage-gateway-ID_NUMBER /
RUN apt-get update && apt-get upgrade -y && \
    apt-get install qemu qemu-kvm libvirt-clients libvirt-daemon-system virtinst bridge-utils -y
Within this Docker image, I tried to follow the instructions in gateway creation, but I'm met with this error from virsh:
root@ac48abdfc902:/# virsh version
Authorization not available. Check if polkit service is running or see debug message for more information.
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
True enough, /var/run/libvirt/libvirt-sock does not exist, but I am stuck and can't find any useful information to resolve this error and get virsh running.
Any thoughts and ideas would be appreciated.
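One avenue worth checking (a sketch, not a confirmed fix): that socket is created by the libvirtd daemon, which never starts automatically inside a container, since no init system such as systemd is running there. Starting the daemons by hand before calling virsh would look roughly like this (the image name is a placeholder):
# run the container privileged so KVM and libvirt can reach the host devices
docker run --privileged -it <your-gateway-image> bash
# inside the container: start libvirt's logging daemon and main daemon
virtlogd -d
libvirtd -d
# /var/run/libvirt/libvirt-sock should now exist
virsh version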
When using buildpacks to build my Spring Booot application on Fedora, I get the following error during the execution of the spring-boot-plugin:build-image goal:
[INFO] [creator] ERROR: initializing analyzer: getting previous image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info": dial unix /var/run/docker.sock: connect: permission denied
After digging into the goal and buildpacks, I found the following command in the buildpack.io docs (by selecting "Linux" and "Container"):
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD:/workspace -w /workspace \
buildpacksio/pack build <my-image> --builder <builder-image>
AFAICT this command should be equivalent to what happens inside of Maven, and it exhibits the same error.
My previous assumption was that the user in the buildpacksio/pack image doesn't have access permissions to my Docker socket. (The socket had 660 permissions and root:docker ownership.)
UPDATE: Even after updating to 666 permissions the issue still persists.
I don't really want to allow anyone to interact with the Docker socket, so setting it to 666 seems unwise. Is this the only option, or can I also add the user in the container to the docker group?
The solution was that the Fedora docker package is no longer the up-to-date way to install Docker; see the official Docker documentation for installing docker-ce instead.
Both packages provide the same version number, but their build hashes differ.
While I could not fully diagnose the difference between the two, I can report that it works with docker-ce and doesn't with docker.
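For reference, the switch follows the steps from the official documentation, roughly as below (check the docs for the current form; the repo URL is the one Docker publishes for Fedora):
# remove the distribution package
sudo dnf remove docker
# add Docker's own repository and install docker-ce
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker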
I'm setting up the development environment of Hyperledger Fabric following the tutorial:
Running chaincode in development mode
I cloned the fabric folder and set up the development environment for an orderer and a peer, and they are both performing well. However, I set them both up in my PC's environment together, not separately in different Docker containers. Following the instructions, I created the channel and started the sample chaincode as well.
However, when I run this command in the "Next Steps" part of the tutorial:
CORE_PEER_ADDRESS=127.0.0.1:7051 peer chaincode invoke -o 127.0.0.1:7050 -C ch -n mycc -c '{"Args":["init","a","100","b","200"]}' --isInit
an error occurred:
Error: endorsement failure during invoke. response: status:500 message:"error in simulation: failed to execute transaction bc2357ccb38b3abcca2499210a9f380c4263d186fe8e7bd974c7875ce4a7f8c4: could not launch chaincode mycc:1.0: error building chaincode: error building image: failed to get chaincode package for external build: could not get legacy chaincode package 'mycc:1.0': open /var/hyperledger/production/chaincodes/mycc.1.0: no such file or directory"
I'm a beginner and am really confused by this. Do I need to set up the peer node and orderer node separately in two Docker containers? Or do I have to change the path of mycc.1.0 used by this command?
It seems that the peer doesn't have mycc installed; run peer chaincode list --installed to find out.
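For example, using the same environment variable as the question:
# ask the peer what it reports as installed (legacy lifecycle)
CORE_PEER_ADDRESS=127.0.0.1:7051 peer chaincode list --installed
If mycc 1.0 does not appear in the output, the invoke has nothing to execute: the chaincode needs to be installed on the peer (or, in dev mode, built and started under the exact name and version mycc:1.0) before the invoke can succeed.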
Context:
OS: Windows 10 Pro; Docker version: 18.09.0 (build 4d60db4); behind a corporate proxy, using CNTLM to get through it (currently pulling and running images works fine).
Problem:
I was trying to build the following Dockerfile:
FROM alpine:3.5
RUN apk add --update \
    python3
RUN pip3 install bottle
EXPOSE 8000
COPY main.py /main.py
CMD python3 /main.py
This is what I got:
Sending build context to Docker daemon 11.26kB
Step 1/6 : FROM alpine:3.5
---> dc496f71dbb5
Step 2/6 : RUN apk add --update python3
---> Running in 7f5099b20192
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/main: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.c51f8f92.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/community: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.d09172fd.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
python3 (missing):
required by: world[python3]
The command '/bin/sh -c apk add --update python3' returned a non-zero code: 1
I was able to access the URL from a browser, so there is no problem with the server itself.
I suspected that it had something to do with the proxy not being propagated to the container, as explained in this question, since I also did not get the http_proxy line when running docker run alpine env. However, after entering the proxies into the config file, it finally appeared. Yet the problem still exists.
I also tried to change the DNS as instructed here, but the problem remains unsolved.
I finally managed to solve this problem, and the culprit was my CNTLM setting.
For a background story, please check this post.
The root cause of this problem is that the Docker container could not access the internet from inside the VM, due to a wrong IP setting inside CNTLM.ini.
By default, CNTLM listens on 127.0.0.1:3128 to forward the proxy. I followed the default, and thus the proxy setting on Docker (for the daemon, through the GUI; for the container, through config.json) was also set to that address and port. It turns out that this "localhost" does not apply to the VM where Docker sits, since the VM has its own localhost. Long story short, the solution is to change that address to the DockerNAT IP address (10.0.75.1:3128) in all of the following locations:
CNTLM.ini (on the Listen line; if CNTLM is used for other purposes as well, more than one Listen line can be supplied, as shown in the sketch after this list)
Docker daemon's proxy (through the Docker setting GUI)
Docker container config.json (usually in C:\Users\<username>\.docker), by adding the following lines:
"proxies":
{
"default":
{
"httpProxy": "http://10.0.75.1:3128",
"httpsProxy": "http://10.0.75.1:3128",
"noProxy": <your no_proxy>
}
}
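As an illustration of the first item, the relevant CNTLM.ini lines would look something like this (a sketch; the second Listen line keeps the old local address available for any other consumers of the proxy):
# CNTLM.ini
Listen  10.0.75.1:3128
Listen  127.0.0.1:3128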
Also check these related posts:
Building a docker image for a node.js app fails behind proxy
Docker client ignores HTTP_PROXY envar and build args
Beginner having trouble with docker behind company proxy
You can try to build your Dockerfile with the following command:
docker build --build-arg http_proxy=http://your.proxy:8080 --build-arg https_proxy=http://your.proxy:8080 -t yourimage .
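Note that some tools inside the build only read the uppercase variants, which Docker likewise treats as predefined build args, so a belt-and-braces form of the same command (same placeholder proxy as above) is:
docker build \
    --build-arg http_proxy=http://your.proxy:8080 \
    --build-arg https_proxy=http://your.proxy:8080 \
    --build-arg HTTP_PROXY=http://your.proxy:8080 \
    --build-arg HTTPS_PROXY=http://your.proxy:8080 \
    -t yourimage .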
I can't manually log into my private GitLab Docker registry from the CLI:
# docker login -u "${DOCKER_USER}" -p "${DOCKER_PASS}" "${DOCKER_URL}"
error getting credentials - err: exit status 1, out: `Cannot autolaunch D-Bus without X11 $DISPLAY`
System info:
Ubuntu 18.04
docker-ce 18.03.1~ce~3-0~ubuntu (from official repo, without install script)
There is no ~/.docker/config.json for any user, and I'm executing docker login as root.
On Google, I just find recommendations to export DISPLAY... Can Docker only log in to remote registries in a GUI environment?
Exporting DISPLAY=0:0 yields:
error getting credentials - err: exit status 1, out: `Failed to execute child process “dbus-launch” (No such file or directory)`
Am I missing some dependency? Docker runs fine otherwise, but login doesn't work. I know there are backends to store credentials, but I don't want to store credentials; I'm just trying to authenticate against my registry to pull an image. Doesn't that work in Docker out of the box?
The docker-compose package unnecessarily depends on the broken golang-github-docker-docker-credential-helpers package. Removing the executable fixes this.
rm /usr/bin/docker-credential-secretservice
Note: This is a workaround and will need to be repeated each time the package is updated.
This affects Ubuntu 18.04 (and possibly other non-LTS releases) and some Debian releases.
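Before deleting the file, you can confirm which binary package owns it, and hence which package updates will bring it back (on Ubuntu 18.04 this is reportedly golang-docker-credential-helpers):
dpkg -S /usr/bin/docker-credential-secretservice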