Using Ansible for provisioning Docker containers

Hi, I am using the Dockerfile below to build an image. It runs the maven playbook perfectly, but when it starts running the sonar playbook it hangs and nothing happens. The only difference between the playbooks is that the sonar playbook has service restart statements. Could that be the reason for the problem? I have tested each playbook individually and each one runs perfectly inside the container.
Is there any way to see the container logs side by side with what Ansible is doing while the image is building? As of now Docker prints logs only when it is done with a step.
FROM xyz.com/akathaku/cidemo:latest
MAINTAINER akathaku <akathaku@gmail.com>
USER root
ENV SCPATH /etc/supervisor/conf.d
ENV PLAYBOOKS /ansible
# RUN yum -y update
# The daemons
RUN yum -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
#Running ansible
ADD ./sonar-playbook $PLAYBOOKS/sonar-playbook
ADD ./maven-playbook $PLAYBOOKS/maven-playbook
ADD ./jenkins-playbook $PLAYBOOKS/jenkins-playbook
ADD ./tomcat-playbook $PLAYBOOKS/tomcat-playbook
WORKDIR $PLAYBOOKS
RUN ansible-playbook $PLAYBOOKS/sonar-playbook/sonar.yml -c local
RUN ansible-playbook $PLAYBOOKS/maven-playbook/maven.yml -c local
RUN ansible-playbook $PLAYBOOKS/jenkins-playbook/jenkins.yml -c local
RUN ansible-playbook $PLAYBOOKS/tomcat-playbook/tomcat.yml -c local
# Application Code
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisor.conf"]
EXPOSE 8080 8081 9000 9001 8086
Below is the output of docker build:
Step 1 : FROM xyz.com/akathaku/cidemo:latest
---> d477ceab5d3b
Step 2 : MAINTAINER akathaku <akathaku@gmail.com>
---> Using cache
---> a9c3c191aabd
Step 3 : USER root
---> Using cache
---> 24a18ddc6f49
Step 4 : ENV SCPATH /etc/supervisor/conf.d
---> Using cache
---> ea6f4dada89c
Step 5 : ENV PLAYBOOKS /ansible
---> Using cache
---> 9d42760dc51f
Step 6 : RUN yum -y install supervisor
---> Using cache
---> 7af486ce2a8c
Step 7 : RUN mkdir -p /var/log/supervisor
---> Using cache
---> a1b1c145d490
Step 8 : ADD ./supervisord/conf.d/* $SCPATH/
---> Using cache
---> f16e32135351
Step 9 : ADD ./sonar-playbook $PLAYBOOKS/sonar-playbook
---> 170c1dc82ffa
Removing intermediate container bfa474ef9d11
Step 10 : ADD ./maven-playbook $PLAYBOOKS/maven-playbook
---> 90a57735fe3b
Removing intermediate container b5f7bbb3b85d
Step 11 : ADD ./jenkins-playbook $PLAYBOOKS/jenkins-playbook
---> 09ab0f929f45
Removing intermediate container 7dc62423354d
Step 12 : ADD ./tomcat-playbook $PLAYBOOKS/tomcat-playbook
---> 13c3bb5f7aca
Removing intermediate container 4356605f503a
Step 13 : WORKDIR $PLAYBOOKS/maven-playbook
---> Running in 34867677f4e1
---> f48ffe4115db
Removing intermediate container 34867677f4e1
Step 14 : RUN ansible-playbook maven.yml -c local
---> Running in 4eda53bf7e00
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [maven | Install Java 1.8 JRE] ******************************************
changed: [localhost]
TASK: [maven | Install Java 1.8 JDK] ******************************************
changed: [localhost]
TASK: [maven | lineinfile dest='/etc/profile' regexp='^#?\s*export JAVA_HOME=(.*)$' line='export JAVA_HOME=/usr/lib/jvm/java-openjdk' state=present] ***
changed: [localhost]
TASK: [maven | lineinfile dest=/etc/profile regexp='^#?\s*export PATH=(.*)JAVA_HOME(.*)$' line="export PATH=$PATH:$JAVA_HOME/bin" state=present] ***
changed: [localhost]
TASK: [maven | Reload profile] ************************************************
changed: [localhost]
TASK: [maven | Download Apache Maven] *****************************************
changed: [localhost]
TASK: [maven | Untar Maven to /opt] *******************************************
changed: [localhost]
TASK: [maven | Create symbolic link maven to the /opt/apache-{{ maven_version }}] ***
changed: [localhost]
TASK: [maven | lineinfile dest=/etc/profile regexp='^#?\s*export MAVEN_HOME=(.*)$' line='export MAVEN_HOME=/opt/maven' state=present] ***
changed: [localhost]
TASK: [maven | lineinfile dest=/etc/profile regexp='^#?\s*export PATH=(.*)MAVEN_HOME(.*)$' line="export PATH=$PATH:$MAVEN_HOME/bin" state=present] ***
changed: [localhost]
TASK: [maven | Reload profile] ************************************************
changed: [localhost]
TASK: [maven | Create local repository] ***************************************
changed: [localhost]
TASK: [maven | Creates setting.xml file] **************************************
changed: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=14 changed=13 unreachable=0 failed=0
---> fb211f3fbbfd
Removing intermediate container 4eda53bf7e00
Step 15 : WORKDIR $PLAYBOOKS/sonar-playbook
---> Running in b9ba0623b48a
---> d48ad5abbf43
Removing intermediate container b9ba0623b48a
Step 16 : RUN ansible-playbook sonar.yml -c local
---> Running in 2d2716354c5d

Here's what you can do:
Comment out the Dockerfile from the line that hangs through to the end.
Build the image, start a new container interactively, and then run the same line directly from the shell. This is basically equivalent to letting it run during docker build, except you also get a chance to inspect whatever is happening on that machine (you can tail logs, docker exec into the container while it's hanging to figure out which process is stuck, etc.). A sketch of that workflow is below.
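A minimal sketch, assuming the RUN lines from the sonar playbook onward are commented out; the tag cidemo-debug and container name sonar-debug are placeholders:
# build only the part of the Dockerfile that works
docker build -t cidemo-debug .
# start an interactive container from the truncated image
docker run -it --name sonar-debug cidemo-debug bash
# inside the container: run the hanging step by hand, with verbose output
cd /ansible/sonar-playbook
ansible-playbook sonar.yml -c local -vvv
# from a second terminal on the host, while it hangs:
docker exec -it sonar-debug bash   # then ps -ef, tail logs, etc.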

Related

Override UID GID in VSCode Remote-containers session

I am running VSCode on a Windows 10 machine, connecting to a Docker instance on a remote Linux host, to develop C++ projects. The docker instance mounts local folders for source code files, and the user is set to match the user on the Linux host to avoid file ownership and permission problems.
On Windows 10 I use WSL1; the default user has both UID and GID 1000, and VSCode's docker processes use these IDs to launch and connect to the docker instance on the remote host. Is there a way to override the UID/GID VSCode uses so they match the IDs on the remote?
Thanks,
Step 1/4 : FROM devenv:latest
---> 120d987bae07
Step 2/4 : RUN groupadd -g 301765 chengd
---> Using cache
---> 58a697ed3565
Step 3/4 : RUN useradd -l -u 301765 -g chengd chengd
---> Using cache
---> b5c7c2b48a83
Step 4/4 : USER chengd
---> Using cache
---> f310c9d1e05b
Successfully built f310c9d1e05b
Successfully tagged vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf:latest
[7325 ms] Start: Run: wsl -d Ubuntu-20.04 -e /bin/sh -c cd '/home/da/repos/devenv' && DISPLAY='1' ELECTRON_RUN_AS_NODE='1' SSH_ASKPASS='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\scripts\ssh-askpass.bat' VSCODE_SSH_ASKPASS_NODE='D:\Users\ChengD\AppData\Local\Programs\Microsoft VS Code\Code.exe' VSCODE_SSH_ASKPASS_MAIN='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\dist\common\sshAskpass.js' VSCODE_SSH_ASKPASS_HANDLE='\\.\pipe\ssh-askpass-7e8e4f69496930d0e88509584ba46ab3357d9ff1-sock' DOCKER_CONTEXT='tcp_201' VSCODE_SSH_ASKPASS_COUNTER='5' docker 'inspect' '--type' 'image' 'vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf'
[10240 ms] Start: Run: wsl -d Ubuntu-20.04 -e /bin/sh -c cd '/home/da/repos/devenv' && DISPLAY='1' ELECTRON_RUN_AS_NODE='1' SSH_ASKPASS='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\scripts\ssh-askpass.bat' VSCODE_SSH_ASKPASS_NODE='D:\Users\ChengD\AppData\Local\Programs\Microsoft VS Code\Code.exe' VSCODE_SSH_ASKPASS_MAIN='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\dist\common\sshAskpass.js' VSCODE_SSH_ASKPASS_HANDLE='\\.\pipe\ssh-askpass-7e8e4f69496930d0e88509584ba46ab3357d9ff1-sock' DOCKER_CONTEXT='tcp_201' VSCODE_SSH_ASKPASS_COUNTER='6' docker 'build' '-f' '/tmp/vsch/updateUID.Dockerfile-0.177.2' '-t' 'vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf-uid' '--build-arg' 'BASE_IMAGE=vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf' '--build-arg' 'REMOTE_USER=chengd' '--build-arg' 'NEW_UID=1000' '--build-arg' 'NEW_GID=1000' '--build-arg' 'IMAGE_USER=chengd' '/tmp/vsch'
Solved the problem by setting updateRemoteUserUID to false in devcontainer.json.
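For reference, a minimal devcontainer.json sketch with that setting (the image and user names are placeholders; devcontainer.json allows // comments):
{
    "image": "devenv:latest",
    "remoteUser": "chengd",
    // prevent VSCode from rewriting the container user's UID/GID to the local ones
    "updateRemoteUserUID": false
}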

Missing perl command in Perl image

I'm sure this is an incredibly simple fix. I tried to build a Docker image with Perl in it (plus some Perl modules). However, when I go to run this, it says there is no /bin/perl. The question is:
Why does the Perl Docker image not have Perl in it?
My Dockerfile below:
FROM perl:5.20
ENV PERL_MM_USE_DEFAULT 1
RUN cpan install Net::SSL inc:latest
RUN mkdir /ssc
COPY /ssc /ssc
RUN mkdir /tmp/ssc-bin-files;cp /ssc/bin/*.sh /tmp/ssc-bin-files;chmod a+rx /tmp/ssc-bin-files/*;cp /tmp/ssc-bin-files/* /ssc/bin
RUN chmod a+rx /ssc/bin/*.sh
ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
Jenkins Pipeline snippet:
stage('Build, Tag and Push SSC Dockerfile'){
    tagAsTest = "${IMAGE_NAME}:test"
    REPO = "chq-ic2e-sprint-images-docker-local"
    println "Docker App Build"
    docker.build(tagAsTest,"-f Dockerfile .")
    sh 'docker image ls | grep rules-client'
}
stage('Set image tag to :approved'){
    hasReachedDockerComposeUp=false;
    REPO = "chq-ic2e-sprint-images-docker-local"
    sh "docker tag ${IMAGE_NAME}:test ${IMAGE_NAME}:approved"
    buildInfo = rtDocker.push("${IMAGE_NAME}:approved", REPO , buildInfo)
    server.publishBuildInfo buildInfo
}
The Jenkins log below:
[Pipeline] sh
+ docker build -t chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:test -f Dockerfile .
Sending build context to Docker daemon 39.42kB
Step 1/8 : FROM perl:5.20
---> bbe5a82c1dbe
Step 2/8 : ENV PERL_MM_USE_DEFAULT 1
---> Using cache
---> ca2769a89ab8
Step 3/8 : RUN cpan install Net::SSL inc:latest
---> Using cache
---> 1e53f0573131
Step 4/8 : RUN mkdir /ssc
---> Using cache
---> a324effec8ce
Step 5/8 : COPY /ssc /ssc
---> d40bf34f8565
Step 6/8 : RUN mkdir /tmp/ssc-bin-files;cp /ssc/bin/*.sh /tmp/ssc-bin-files;chmod a+rx /tmp/ssc-bin-files/*;cp /tmp/ssc-bin-files/* /ssc/bin
---> Running in 02386f41174f
Removing intermediate container 02386f41174f
---> 4767a8e6f23a
Step 7/8 : RUN chmod a+rx /ssc/bin/*.sh
---> Running in 07646aa96048
Removing intermediate container 07646aa96048
---> f070fcd8a9e9
Step 8/8 : ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
---> Running in e6bab12f8f40
Removing intermediate container e6bab12f8f40
---> 1422df9d957b
Successfully built 1422df9d957b
Successfully tagged chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:test
[Pipeline] sh
+ docker image ls
+ grep rules-client
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/rules-client approved da334d1d8fae 2 days ago 22.5MB
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/rules-client test da334d1d8fae 2 days ago 22.5MB
Script is being run via pipeline like this:
stage('Run image'){
    sh '''
    docker run -i \
    --mount type=bind,source="$(pwd)/host-dirs,target=/host-dirs" \
    chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:approved \
    sh
    '''
}
or from terminal like this:
#!/bin/bash
docker run -it \
--mount type=bind,source="$(pwd)/host-dirs,target=/host-dirs" \
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:approved sh
The perl binary is probably in /usr/local/bin/perl. You can check that in a shell in the running container.
host> docker exec -it your_container bash
container> which perl
/usr/local/bin/perl
container> exit
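One hedged guess, since put-and-submit.sh isn't shown: if the script (or something it calls) starts with a hardcoded #!/bin/perl shebang, that would produce exactly this error on an image where perl lives in /usr/local/bin. Resolving the interpreter from PATH avoids the problem:
#!/usr/bin/env perl
# ... rest of the script unchanged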
It sure has Perl 5.20 in it. I'm just curious about the entrypoint script in your Dockerfile: you're running a shell script by default when the container is started. What does the script start or run? If you want to run perl without entering the container, use --entrypoint=perl with your docker run command.
docker run --rm --name perl perl:5.20 perl --version
### Output
This is perl 5, version 20, subversion 3 (v5.20.3) built for x86_64-linux
(with 1 registered patch, see perl -V for more detail)
Copyright 1987-2015, Larry Wall
Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.
Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl". If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
###
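Applied to the image from the question (tag taken from the Jenkins log), a sketch would be:
docker run --rm \
  --entrypoint=perl \
  chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:approved \
  --version
With the entrypoint overridden, the trailing --version is passed to perl rather than to the original script.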

Docker Hub automated build failing with missing variable

I have set the "BUILD ENVIRONMENT VARIABLES" with the necessary variable, and added a hooks/build file with:
#! /bin/bash
docker build \
--build-arg HBASE_VERSION="${HBASE_VERSION}" \
-f "${DOCKERFILE_PATH}" \
-t "${IMAGE_NAME}" .
But they are not being passed during the build process; take a look at the log output:
Building in Docker Cloud's infrastructure...
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.
Reset branch 'develop'
Your branch is up-to-date with 'origin/develop'.
KernelVersion: 4.4.0-1060-aws
Components: [{u'Version': u'18.03.1-ee-3', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.4.0-1060-aws', u'Os': u'linux', u'BuildTime': u'2018-08-30T18:42:30.000000000+00:00', u'ApiVersion': u'1.37', u'MinAPIVersion': u'1.12', u'GitCommit': u'b9a5c95', u'Arch': u'amd64', u'Experimental': u'false', u'GoVersion': u'go1.10.2'}}]
Arch: amd64
BuildTime: 2018-08-30T18:42:30.000000000+00:00
ApiVersion: 1.37
Platform: {u'Name': u''}
Version: 18.03.1-ee-3
MinAPIVersion: 1.12
GitCommit: b9a5c95
Os: linux
GoVersion: go1.10.2
Starting build of index.docker.io/rowupper/hbase-base:1.4.9...
Step 1/9 : FROM openjdk:8-jre-alpine3.9
---> b76bbdb2809f
Step 2/9 : RUN apk add --no-cache wget bash perl
---> Running in 50cf82a30723
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
Executing busybox-1.29.3-r10.trigger
OK: 130 MiB in 60 packages
Removing intermediate container 50cf82a30723
---> 108b5b9b6569
Step 3/9 : ARG HBASE_VERSION
---> Running in 5407a0bcbf60
Removing intermediate container 5407a0bcbf60
---> ea35e0967933
Step 4/9 : ENV HBASE_HOME=/usr/local/hbase HBASE_CONF_DIR=/etc/hbase PATH=${HBASE_HOME}/bin:$PATH
---> Running in 3a74e814acc8
Removing intermediate container 3a74e814acc8
---> 7a289348ba9b
Step 5/9 : WORKDIR $HBASE_HOME
Removing intermediate container e842d4658bf1
---> a6fede2510ec
Step 6/9 : RUN wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions
---> Running in 39b75bc77c5a
--2019-03-19 18:46:05-- https://archive.apache.org/dist/hbase//hbase--bin.tar.gz
Resolving archive.apache.org... 163.172.17.199
Connecting to archive.apache.org|163.172.17.199|:443...
connected.
HTTP request sent, awaiting response...
404 Not Found
2019-03-19 18:46:06 ERROR 404: Not Found.
tar: invalid magic
tar: short read
Removing intermediate container 39b75bc77c5a
The command '/bin/sh -c wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions' returned a non-zero code: 1
It is visible that the variable is missing (the download URL degenerates to hbase//hbase--bin.tar.gz). What do I need to do to solve this issue?
Could you try ARG and ENV together, like
ARG HBASE_VERSION="default_value"
ENV HBASE_VERSION="$HBASE_VERSION"
in your Dockerfile?
The --build-arg value should then override the "default_value".
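For example (a sketch; the version is taken from the image tag in your log), building locally with the argument set explicitly:
docker build --build-arg HBASE_VERSION=1.4.9 -t rowupper/hbase-base:1.4.9 .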
The hooks/build file needs to live in the same directory as the Dockerfile.
My project has several subfolders, each folder with a Dockerfile.
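In that case each subfolder needs its own hooks directory next to its Dockerfile; a sketch of the expected layout (folder names are placeholders):
hbase-base/
    Dockerfile
    hooks/
        build
hbase-master/
    Dockerfile
    hooks/
        build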

undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>

I'm learning docker using book docker in practice.
I am working on technique 47 in chapter 5.
This recipe is about using chef for managing docker configurations.
The github link is here.
When I build the docker image, I encounter the error below.
$ docker build -t chef-example .
Sending build context to Docker daemon 9.728kB
Step 1/12 : FROM ubuntu:latest
---> ccc7a11d65b1
Step 2/12 : RUN apt-get update && apt-get install -yy git curl
---> Using cache
---> ef956c61c59f
Step 3/12 : RUN curl -L https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.3.5-1_amd64.deb -o chef.deb
---> Using cache
---> 1260301dbe67
Step 4/12 : RUN dpkg -i chef.deb && rm chef.deb
---> Using cache
---> 8c1aeaf84423
Step 5/12 : COPY . /chef
---> 18986195e732
Removing intermediate container 758dfce43670
Step 6/12 : WORKDIR /chef/cookbooks
---> fbdd9c386801
Removing intermediate container 936393187cb4
Step 7/12 : RUN knife cookbook site download apache2
---> Running in 2ba7d0765ae2
WARNING: No knife configuration file found
Downloading apache2 from the cookbooks site at version 5.0.1 to /chef/cookbooks/apache2-5.0.1.tar.gz
Cookbook saved: /chef/cookbooks/apache2-5.0.1.tar.gz
---> 8b3fa14f4416
Removing intermediate container 2ba7d0765ae2
Step 8/12 : RUN knife cookbook site download iptables
---> Running in 94275acfdb44
WARNING: No knife configuration file found
Downloading iptables from the cookbooks site at version 4.3.1 to /chef/cookbooks/iptables-4.3.1.tar.gz
Cookbook saved: /chef/cookbooks/iptables-4.3.1.tar.gz
---> c8a4c6d17253
Removing intermediate container 94275acfdb44
Step 9/12 : RUN knife cookbook site download logrotate
---> Running in 27b5f736d6cf
WARNING: No knife configuration file found
Downloading logrotate from the cookbooks site at version 2.2.0 to /chef/cookbooks/logrotate-2.2.0.tar.gz
Cookbook saved: /chef/cookbooks/logrotate-2.2.0.tar.gz
---> 1b4b4460bdc9
Removing intermediate container 27b5f736d6cf
Step 10/12 : RUN /bin/bash -c 'for f in $(ls *gz); do tar -zxf $f; rm $f; done'
---> Running in 7e6b912d910e
---> d5e77acc14f1
Removing intermediate container 7e6b912d910e
Step 11/12 : RUN chef-solo -c /chef/config.rb -j /chef/attributes.json
---> Running in a0c7f7f7a00a
[2017-12-05T08:01:08+00:00] INFO: Forking chef instance to converge...
[2017-12-05T08:01:08+00:00] INFO: *** Chef 11.18.0.rc.1 ***
[2017-12-05T08:01:08+00:00] INFO: Chef-client pid: 9
[2017-12-05T08:01:09+00:00] INFO: Setting the run_list to
["recipe[apache2::default]", "recipe[mysite::default]"] from CLI options
[2017-12-05T08:01:09+00:00] INFO: Run List is
[recipe[apache2::default], recipe[mysite::default]]
[2017-12-05T08:01:09+00:00] INFO: Run List expands to [apache2::default, mysite::default]
[2017-12-05T08:01:09+00:00] INFO: Starting Chef Run for 8089fe031125
[2017-12-05T08:01:09+00:00] INFO: Running start handlers
[2017-12-05T08:01:09+00:00] INFO: Start handlers complete.
[2017-12-05T08:01:09+00:00] ERROR: Running exception handlers
[2017-12-05T08:01:09+00:00] ERROR: Exception handlers complete
[2017-12-05T08:01:09+00:00] FATAL: Stacktrace dumped to /chef/cache/chef-stacktrace.out
[2017-12-05T08:01:09+00:00] ERROR: undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>
[2017-12-05T08:01:09+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The command '/bin/sh -c chef-solo -c /chef/config.rb -j /chef/attributes.json' returned a non-zero code: 1
I'm new to Chef and not sure why I'm getting this error.
Your cookbook doesn't have a metadata.rb, which is probably breaking things, but you're also using Chef 11.18, which is entirely out of support at this point. Current Chef is 13.6.4.
Also, we don't really recommend using Chef to build container images. It can definitely work, but Chef was built to manage servers, so this will result in server-like fat images in most cases.
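On the first point, a minimal metadata.rb sketch for the mysite cookbook (the name and the apache2 dependency are inferred from the run list; maintainer, version, and description are placeholders):
name        'mysite'
maintainer  'you'
version     '0.1.0'
description 'Configures the mysite Apache vhost'
depends     'apache2'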

Is my hyperledger peer make successful?

I am following http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup/ and using Option 1, i.e. the vagrant development environment. When I run make membersrvc && membersrvc I get the message below:
build/bin/membersrvc
CGO_CFLAGS=" " CGO_LDFLAGS="-lrocksdb -lstdc++ -lm -lz -lbz2 -lsnappy" GOBIN=/opt/gopath/src/github.com/hyperledger/fabric/build/bin go install -ldflags "-X github.com/hyperledger/fabric/metadata.Version=0.7.0-snapshot-131b36c" github.com/hyperledger/fabric/membersrvc
Binary available as build/bin/membersrvc
I assume membersrvc is running because "ps -a | grep membersrvc" returns
2486 pts/0 00:00:01 membersrvc
After this I ran "make peer" and got this:
Building docker javaenv-image
docker build -t hyperledger/fabric-javaenv build/image/javaenv
Sending build context to Docker daemon 44.03 kB
Step 1 : FROM openjdk:8
---> 96cddf5ae9f1
Step 2 : RUN wget https://services.gradle.org/distributions/gradle-2.12-bin.zip -P /tmp --quiet
---> Using cache
---> 3dbbd6c16d7e
Step 3 : RUN unzip -qo /tmp/gradle-2.12-bin.zip -d /opt && rm /tmp/gradle-2.12-bin.zip
---> Using cache
---> bd1d42253704
---> Using cache
---> bd1d42253704
Step 4 : RUN ln -s /opt/gradle-2.12/bin/gradle /usr/bin
---> Using cache
---> 248e99587f37
Step 5 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> 27105db40f7a
Step 6 : ENV USER_HOME_DIR "/root"
---> Using cache
---> 03f5e84bf9ce
Step 7 : RUN mkdir -p /usr/share/maven /usr/share/maven/ref && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzC /usr/share/maven --strip-components=1 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
---> Running in 6ec30acda848
This stays on the window forever and nothing happens after this.
After this I try to run "peer node start --peer-chaincodedev" in another window,
but I get the error below:
No command 'peer' found, did you mean:
Why is my peer not created yet?
#PySa - a correct build of the peer will drop you back to the command line, and if you then issue the cmd peer it will show you the help / switches. To make / build the memberservices and peer, all you have to do is the following:
vagrant up
ssh into the machine
cd /hyperledger
make membersrvc
make peer - this can take a LOOOOONG time depending on your machine & internet connection - the process has to download a LOT of data to complete correctly.
Once the above is done I would also strongly suggest you run make unit-test and, when that's done, make behave - again these will take a long time to run, but assuming all is well, by the time they're done you'll be able to run membersrvc and peer node start (each in their own terminal window) without problems...
FYI - the memberservices does NOT report anything to the console - the peer however does...
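A sketch of that end state inside the vagrant machine, using the paths from the steps above (run each block in its own terminal):
# terminal 1
cd /hyperledger
make membersrvc && membersrvc
# terminal 2
cd /hyperledger
make peer
peer node start --peer-chaincodedev   # flag optional; taken from the question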
