Docker Hub automated build failing with missing variable - docker

I have set the "BUILD ENVIRONMENT VARIABLES" with the necessary variable and added a hooks/build file with:
#! /bin/bash
docker build \
--build-arg HBASE_VERSION="${HBASE_VERSION}" \
-f "${DOCKERFILE_PATH}" \
-t "${IMAGE_NAME}" .
They are not being passed through to the build process; take a look at the log output:
Building in Docker Cloud's infrastructure...
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.
Reset branch 'develop'
Your branch is up-to-date with 'origin/develop'.
KernelVersion: 4.4.0-1060-aws
Components: [{u'Version': u'18.03.1-ee-3', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.4.0-1060-aws', u'Os': u'linux', u'BuildTime': u'2018-08-30T18:42:30.000000000+00:00', u'ApiVersion': u'1.37', u'MinAPIVersion': u'1.12', u'GitCommit': u'b9a5c95', u'Arch': u'amd64', u'Experimental': u'false', u'GoVersion': u'go1.10.2'}}]
Arch: amd64
BuildTime: 2018-08-30T18:42:30.000000000+00:00
ApiVersion: 1.37
Platform: {u'Name': u''}
Version: 18.03.1-ee-3
MinAPIVersion: 1.12
GitCommit: b9a5c95
Os: linux
GoVersion: go1.10.2
Starting build of index.docker.io/rowupper/hbase-base:1.4.9...
Step 1/9 : FROM openjdk:8-jre-alpine3.9
---> b76bbdb2809f
Step 2/9 : RUN apk add --no-cache wget bash perl
---> Running in 50cf82a30723
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
Executing busybox-1.29.3-r10.trigger
OK: 130 MiB in 60 packages
Removing intermediate container 50cf82a30723
---> 108b5b9b6569
Step 3/9 : ARG HBASE_VERSION
---> Running in 5407a0bcbf60
Removing intermediate container 5407a0bcbf60
---> ea35e0967933
Step 4/9 : ENV HBASE_HOME=/usr/local/hbase HBASE_CONF_DIR=/etc/hbase PATH=${HBASE_HOME}/bin:$PATH
---> Running in 3a74e814acc8
Removing intermediate container 3a74e814acc8
---> 7a289348ba9b
Step 5/9 : WORKDIR $HBASE_HOME
Removing intermediate container e842d4658bf1
---> a6fede2510ec
Step 6/9 : RUN wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions
---> Running in 39b75bc77c5a
--2019-03-19 18:46:05-- https://archive.apache.org/dist/hbase//hbase--bin.tar.gz
Resolving archive.apache.org... 163.172.17.199
Connecting to archive.apache.org|163.172.17.199|:443...
connected.
HTTP request sent, awaiting response...
404 Not Found
2019-03-19 18:46:06 ERROR 404: Not Found.
tar: invalid magic
tar: short read
Removing intermediate container 39b75bc77c5a
The command '/bin/sh -c wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions' returned a non-zero code: 1
It's visible that the variable is missing. What do I need to do to solve this issue?

Could you try ARG and ENV, like
ARG HBASE_VERSION="default_value"
ENV HBASE_VERSION="$HBASE_VERSION"
in your Dockerfile? The value passed via --build-arg should override the "default_value".
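For example, the override can be verified locally with a sketch like this (the tag and value are illustrative; --entrypoint sh is used so the check works regardless of any ENTRYPOINT in the image):
docker build --build-arg HBASE_VERSION=1.4.9 -t hbase-base:test .
docker run --rm --entrypoint sh hbase-base:test -c 'echo "$HBASE_VERSION"'   # prints 1.4.9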

The hooks/build file needs to live in the same directory as the Dockerfile. My project has several subfolders, each with its own Dockerfile.
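For reference, a layout along these lines works (folder names are illustrative):
hbase-base/
├── Dockerfile
└── hooks/
    └── build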

Related

"container-suseconnect-zypp" : dockerfile fail on PAYG SLES15.1 VM(azuer)

I'm new here.
I'm trying to create custom Docker container images on an Azure VM, but I can't because of "container-suseconnect-zypp".
Environment: Azure Virtual Machine (Standard B4ms) - SLES 15 SP1 (PAYG)
First of all, I have no problem getting the repositories locally (below):
# zypper lr -u
Refreshing service 'container-suseconnect-zypp'.
Repository priorities are without effect. All enabled repositories share the same priority.
# | Alias | Name | Enabled | GPG Check | Refresh | URI
----+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+---------+-----------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | Basesystem_Module_x86_64:SLE-Module-Basesystem15-SP1-Debuginfo-Pool | SLE-Module-Basesystem15-SP1-Debuginfo-Pool | No | ---- | ---- | plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
.
.
.
Secondly, I've already started the "containerbuild-regionsrv" service and used the host network when I built the Docker image, with reference to the following: https://documentation.suse.com/container/all/single-html/SLES-container/index.html
> sudo systemctl start containerbuild-regionsrv
> sudo systemctl enable containerbuild-regionsrv
> docker build --network host /build-directory/
My Dockerfile is:
FROM registry.suse.com/suse/sle15:15.1
# Extra metadata
LABEL version="1.0"
LABEL description="Base SLES 15 SP1 SAP image"
# Create zypper repos and empty folder in NEW Container
RUN mkdir -p /etc/zypp/repos.d \
&& mkdir -p /jail
# add repo from local repo
RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/ \
&& zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/
.
.
.
# Update repos and install missing packages:
RUN update-ca-certificates && zypper ref -s && zypper update -y
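(Side note, a sketch of one thing worth double-checking: in a shell-form RUN, an unquoted & ends the command and runs it in the background, so everything from path= onward is never passed to zypper. Quoting the URLs avoids that:)
RUN zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/" \
 && zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/"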
Here's my question.
Why does my SLES repo URI start with "plugin:/" rather than "https://"? Is it a problem to add such repos inside the container?
When I build the Docker image from the Dockerfile, my result is:
# docker build --network host -t base_os .
Sending build context to Docker daemon 4.359GB
Step 1/6 : FROM registry.suse.com/suse/sle15:15.1
---> d6d9e74d8ba3
Step 2/6 : LABEL version="1.0"
---> Running in 51b8f6dc39e5
Removing intermediate container 51b8f6dc39e5
---> 12b8756a372c
Step 3/6 : LABEL description="Base SLES 15 SP1 SAP image"
---> Running in 1b57cfcdceea
Removing intermediate container 1b57cfcdceea
---> aa8ddd1de6b4
Step 4/6 : RUN mkdir -p /etc/zypp/repos.d && mkdir -p /jail
---> Running in fd5a0d6cf9bc
Removing intermediate container fd5a0d6cf9bc
---> 982b38ddd9c7
Step 5/6 : RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
---> Running in 74416afc3982
Removing intermediate container 74416afc3982
---> 3e78bccdfcd1
Step 6/6 : RUN update-ca-certificates && zypper ref -s && zypper update -y
---> Running in 0d52ac4d4e28
Refreshing service 'container-suseconnect-zypp'.
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
All services have been refreshed.
Warning: There are no enabled repositories defined.
Use 'zypper addrepo' or 'zypper modifyrepo' commands to add or enable repositories.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
I think that, because of the "container-suseconnect-zypp" issue, I can't install some additional packages needed for SAP inside my container, even if I build the Docker image without Dockerfile step 6/6.
Is this problem related to the Azure VM using PAYG SLES 15?
Do I need the SSL certificate used by RMT?

Missing perl command in Perl image

I'm sure this is an incredibly simple fix. I tried to build a Docker image with Perl in it (plus some Perl modules). However, when I go to run it, it says there is no /bin/perl. The question is:
Why does the Perl Docker image not have Perl in it?
My Dockerfile below:
FROM perl:5.20
ENV PERL_MM_USE_DEFAULT 1
RUN cpan install Net::SSL inc:latest
RUN mkdir /ssc
COPY /ssc /ssc
RUN mkdir /tmp/ssc-bin-files;cp /ssc/bin/*.sh /tmp/ssc-bin-files;chmod a+rx /tmp/ssc-bin-files/*;cp /tmp/ssc-bin-files/* /ssc/bin
RUN chmod a+rx /ssc/bin/*.sh
ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
Jenkins Pipeline snippet:
stage('Build, Tag and Push SSC Dockerfile') {
    tagAsTest = "${IMAGE_NAME}:test"
    REPO = "chq-ic2e-sprint-images-docker-local"
    println "Docker App Build"
    docker.build(tagAsTest, "-f Dockerfile .")
    sh 'docker image ls | grep rules-client'
}
stage('Set image tag to :approved') {
    hasReachedDockerComposeUp = false;
    REPO = "chq-ic2e-sprint-images-docker-local"
    sh "docker tag ${IMAGE_NAME}:test ${IMAGE_NAME}:approved"
    buildInfo = rtDocker.push("${IMAGE_NAME}:approved", REPO, buildInfo)
    server.publishBuildInfo buildInfo
}
The Jenkins log below:
[Pipeline] sh
+ docker build -t chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:test -f Dockerfile .
Sending build context to Docker daemon 39.42kB
Step 1/8 : FROM perl:5.20
---> bbe5a82c1dbe
Step 2/8 : ENV PERL_MM_USE_DEFAULT 1
---> Using cache
---> ca2769a89ab8
Step 3/8 : RUN cpan install Net::SSL inc:latest
---> Using cache
---> 1e53f0573131
Step 4/8 : RUN mkdir /ssc
---> Using cache
---> a324effec8ce
Step 5/8 : COPY /ssc /ssc
---> d40bf34f8565
Step 6/8 : RUN mkdir /tmp/ssc-bin-files;cp /ssc/bin/*.sh /tmp/ssc-bin-files;chmod a+rx /tmp/ssc-bin-files/*;cp /tmp/ssc-bin-files/* /ssc/bin
---> Running in 02386f41174f
Removing intermediate container 02386f41174f
---> 4767a8e6f23a
Step 7/8 : RUN chmod a+rx /ssc/bin/*.sh
---> Running in 07646aa96048
Removing intermediate container 07646aa96048
---> f070fcd8a9e9
Step 8/8 : ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
---> Running in e6bab12f8f40
Removing intermediate container e6bab12f8f40
---> 1422df9d957b
Successfully built 1422df9d957b
Successfully tagged chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:test
[Pipeline] sh
+ docker image ls
+ grep rules-client
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/rules-client approved da334d1d8fae 2 days ago 22.5MB
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/rules-client test da334d1d8fae 2 days ago 22.5MB
The script is being run via the pipeline like this:
stage('Run image') {
    sh '''
    docker run -i -v \
      --mount type=bind,source="$(pwd)/host-dirs,target=/host-dirs" \
      chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:approved
    sh
    '''
}
or from a terminal like this:
#!/bin/bash
docker run -it \
--mount type=bind,source="$(pwd)/host-dirs,target=/host-dirs" \
chq-ic2e-sprint-images-docker-local.artifactory.swg-devops.com/ssc-cost-file-processor:approved sh
The perl binary is probably in /usr/local/bin/perl. You can check that in a shell in the running container.
host> docker exec -it your_container bash
container> which perl
/usr/local/bin/perl
container> exit
It sure has Perl version 5.20 in it. I'm just curious about the entrypoint script in your Dockerfile. You're running a shell script by default when the container is started. What does that script start or run? If you want to run perl without entering the container, use --entrypoint=perl with your docker run command.
docker run --rm --name perl perl:5.20 perl --version
### Output
This is perl 5, version 20, subversion 3 (v5.20.3) built for x86_64-linux
(with 1 registered patch, see perl -V for more detail)
Copyright 1987-2015, Larry Wall
Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.
Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl". If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
###

Trying to automate arm64 build on Docker Hub

From what I understand, it is possible to build an arm64v8 image on the Docker Hub infrastructure (which uses amd64). According to this thread, it can be done using QEMU.
So I added a pre_build hook:
#!/bin/bash
docker run --rm --privileged multiarch/qemu-user-static:register --reset
The QEMU binary is also downloaded inside the container:
FROM alpine AS builder
RUN apk update
RUN apk add curl
WORKDIR /qemu
# downloaded here...
RUN curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-arm.tar.gz | tar zxvf - -C . && mv qemu-3.0.0+resin-arm/qemu-arm-static .
FROM area51/gdal:arm64v8-2.2.3
# ...then added here
COPY --from=builder /qemu/qemu-arm-static /usr/bin
RUN apt-get update
RUN apt-get install -y libgdal-dev python3-pip libspatialindex-dev unar bc
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
ADD ./requirements.txt .
RUN pip3 install -r requirements.txt
RUN mkdir /code
ADD . /code/
WORKDIR /code
CMD python3.5 server.py
EXPOSE 8080
Unfortunately, it doesn't work:
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '140.82.114.4' to the list of known hosts.
Switched to a new branch 'auto-build'
Executing pre_build hook...
Unable to find image 'multiarch/qemu-user-static:register' locally
register: Pulling from multiarch/qemu-user-static
bdbbaa22dec6: Pulling fs layer
42399a41a764: Pulling fs layer
ed8a5179ae11: Pulling fs layer
1ec39da9c97d: Pulling fs layer
1ec39da9c97d: Waiting
42399a41a764: Verifying Checksum
42399a41a764: Download complete
bdbbaa22dec6: Verifying Checksum
bdbbaa22dec6: Download complete
ed8a5179ae11: Verifying Checksum
ed8a5179ae11: Download complete
1ec39da9c97d: Verifying Checksum
1ec39da9c97d: Download complete
bdbbaa22dec6: Pull complete
42399a41a764: Pull complete
ed8a5179ae11: Pull complete
1ec39da9c97d: Pull complete
Digest: sha256:7502ce31890ab5da0ab6e5e5edc1e2563caa45da1c5d76aaf7dc4252aea926dc
Status: Downloaded newer image for multiarch/qemu-user-static:register
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
Setting /usr/bin/qemu-sparc-static as binfmt interpreter for sparc
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
Setting /usr/bin/qemu-sparc64-static as binfmt interpreter for sparc64
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le
Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k
Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips
Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel
Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32
Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el
Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64
Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el
Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4
Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb
Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x
Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64
Setting /usr/bin/qemu-aarch64_be-static as binfmt interpreter for aarch64_be
Setting /usr/bin/qemu-hppa-static as binfmt interpreter for hppa
Setting /usr/bin/qemu-riscv32-static as binfmt interpreter for riscv32
Setting /usr/bin/qemu-riscv64-static as binfmt interpreter for riscv64
Setting /usr/bin/qemu-xtensa-static as binfmt interpreter for xtensa
Setting /usr/bin/qemu-xtensaeb-static as binfmt interpreter for xtensaeb
Setting /usr/bin/qemu-microblaze-static as binfmt interpreter for microblaze
Setting /usr/bin/qemu-microblazeel-static as binfmt interpreter for microblazeel
Setting /usr/bin/qemu-or1k-static as binfmt interpreter for or1k
KernelVersion: 4.4.0-1060-aws
Components: [{u'Version': u'18.03.1-ee-3', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.4.0-1060-aws', u'Os': u'linux', u'BuildTime': u'2018-08-30T18:42:30.000000000+00:00', u'ApiVersion': u'1.37', u'MinAPIVersion': u'1.12', u'GitCommit': u'b9a5c95', u'Arch': u'amd64', u'Experimental': u'false', u'GoVersion': u'go1.10.2'}}]
Arch: amd64
BuildTime: 2018-08-30T18:42:30.000000000+00:00
ApiVersion: 1.37
Platform: {u'Name': u''}
Version: 18.03.1-ee-3
MinAPIVersion: 1.12
GitCommit: b9a5c95
Os: linux
GoVersion: go1.10.2
Starting build of index.docker.io/cl00e9ment/open-elevation:latest...
Step 1/18 : FROM alpine AS builder
---> e7d92cdc71fe
Step 2/18 : RUN apk update
---> Running in a62df65e92ac
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
v3.11.3-6-gb1cd1b7acf [http://dl-cdn.alpinelinux.org/alpine/v3.11/main]
v3.11.3-5-gb26b362c4a [http://dl-cdn.alpinelinux.org/alpine/v3.11/community]
OK: 11259 distinct packages available
Removing intermediate container a62df65e92ac
---> 9decee1216df
Step 3/18 : RUN apk add curl
---> Running in 440f41edd63d
(1/4) Installing ca-certificates (20191127-r0)
(2/4) Installing nghttp2-libs (1.40.0-r0)
(3/4) Installing libcurl (7.67.0-r0)
(4/4) Installing curl (7.67.0-r0)
Executing busybox-1.31.1-r9.trigger
Executing ca-certificates-20191127-r0.trigger
OK: 7 MiB in 18 packages
Removing intermediate container 440f41edd63d
---> 54c70441e6d3
Step 4/18 : WORKDIR /qemu
Removing intermediate container 58b03a58671b
---> 89c6e32b5854
Step 5/18 : RUN curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-arm.tar.gz | tar zxvf - -C . && mv qemu-3.0.0+resin-arm/qemu-arm-static .
---> Running in 11696855e374
 % Total % Received % Xferd Average Speed Time Time Time Current
 Dload Upload Total Spent Left Speed
 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
 100 619 0 619 0 0 3327 0 --:--:-- --:--:-- --:--:-- 3917
qemu-3.0.0+resin-arm/
 19 1678k 19 321k 0 0 846k 0 0:00:01 --:--:-- 0:00:01 846k
 100 1678k 100 1678k 0 0 2535k 0 --:--:-- --:--:-- --:--:-- 4828k
qemu-3.0.0+resin-arm/qemu-arm-static
Removing intermediate container 11696855e374
---> 80668e34eb37
Step 6/18 : FROM area51/gdal:arm64v8-2.2.3
---> 4edbfeef8f1a
Step 7/18 : COPY --from=builder /qemu/qemu-arm-static /usr/bin
---> 91c196da9280
Step 8/18 : RUN apt-get update
---> Running in 37c97a8903f3
standard_init_linux.go:190: exec user process caused "no such file or directory"
Removing intermediate container 37c97a8903f3
The command '/bin/sh -c apt-get update' returned a non-zero code: 1
The error:
standard_init_linux.go:190: exec user process caused "no such file or directory"
looks like this one:
standard_init_linux.go:190: exec user process caused "exec format error"
which I'm getting used to seeing, and which means there is an architecture problem. Does the first one mean the same thing?
If it is again an architecture problem, what am I missing?
I was able to fix the "no such file or directory" error using the solution from this article:
https://stackoverflow.com/a/56063679/1194731
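In short (a sketch of the likely cause, based on the log above): the base image area51/gdal:arm64v8-2.2.3 is aarch64, but the Dockerfile copies in qemu-arm-static, the 32-bit ARM interpreter, while binfmt was registered to look for /usr/bin/qemu-aarch64-static; since that file doesn't exist in the container, exec fails with "no such file or directory". Assuming the balena release also ships an aarch64 tarball following the same naming pattern, the fix would look roughly like:
FROM alpine AS builder
RUN apk add --no-cache curl
WORKDIR /qemu
# the aarch64 (not 32-bit arm) emulator, under the name binfmt expects
RUN curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-aarch64.tar.gz | tar zxvf - -C . && mv qemu-3.0.0+resin-aarch64/qemu-aarch64-static .

FROM area51/gdal:arm64v8-2.2.3
COPY --from=builder /qemu/qemu-aarch64-static /usr/bin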

undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>

I'm learning Docker using the book Docker in Practice.
I am working on technique 47 in chapter 5.
This recipe is about using Chef to manage Docker configurations.
The GitHub link is here.
When I build the Docker image, I encounter the error below.
$ docker build -t chef-example .
Sending build context to Docker daemon 9.728kB
Step 1/12 : FROM ubuntu:latest
---> ccc7a11d65b1
Step 2/12 : RUN apt-get update && apt-get install -yy git curl
---> Using cache
---> ef956c61c59f
Step 3/12 : RUN curl -L https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.3.5-1_amd64.deb -o chef.deb
---> Using cache
---> 1260301dbe67
Step 4/12 : RUN dpkg -i chef.deb && rm chef.deb
---> Using cache
---> 8c1aeaf84423
Step 5/12 : COPY . /chef
---> 18986195e732
Removing intermediate container 758dfce43670
Step 6/12 : WORKDIR /chef/cookbooks
---> fbdd9c386801
Removing intermediate container 936393187cb4
Step 7/12 : RUN knife cookbook site download apache2
---> Running in 2ba7d0765ae2
WARNING: No knife configuration file found
Downloading apache2 from the cookbooks site at version 5.0.1 to /chef/cookbooks/apache2-5.0.1.tar.gz
Cookbook saved: /chef/cookbooks/apache2-5.0.1.tar.gz
---> 8b3fa14f4416
Removing intermediate container 2ba7d0765ae2
Step 8/12 : RUN knife cookbook site download iptables
---> Running in 94275acfdb44
WARNING: No knife configuration file found
Downloading iptables from the cookbooks site at version 4.3.1 to /chef/cookbooks/iptables-4.3.1.tar.gz
Cookbook saved: /chef/cookbooks/iptables-4.3.1.tar.gz
---> c8a4c6d17253
Removing intermediate container 94275acfdb44
Step 9/12 : RUN knife cookbook site download logrotate
---> Running in 27b5f736d6cf
WARNING: No knife configuration file found
Downloading logrotate from the cookbooks site at version 2.2.0 to /chef/cookbooks/logrotate-2.2.0.tar.gz
Cookbook saved: /chef/cookbooks/logrotate-2.2.0.tar.gz
---> 1b4b4460bdc9
Removing intermediate container 27b5f736d6cf
Step 10/12 : RUN /bin/bash -c 'for f in $(ls *gz); do tar -zxf $f; rm $f; done'
---> Running in 7e6b912d910e
---> d5e77acc14f1
Removing intermediate container 7e6b912d910e
Step 11/12 : RUN chef-solo -c /chef/config.rb -j /chef/attributes.json
---> Running in a0c7f7f7a00a
[2017-12-05T08:01:08+00:00] INFO: Forking chef instance to converge...
[2017-12-05T08:01:08+00:00] INFO: *** Chef 11.18.0.rc.1 ***
[2017-12-05T08:01:08+00:00] INFO: Chef-client pid: 9
[2017-12-05T08:01:09+00:00] INFO: Setting the run_list to
["recipe[apache2::default]", "recipe[mysite::default]"] from CLI options
[2017-12-05T08:01:09+00:00] INFO: Run List is
[recipe[apache2::default], recipe[mysite::default]]
[2017-12-05T08:01:09+00:00] INFO: Run List expands to [apache2::default, mysite::default]
[2017-12-05T08:01:09+00:00] INFO: Starting Chef Run for 8089fe031125
[2017-12-05T08:01:09+00:00] INFO: Running start handlers
[2017-12-05T08:01:09+00:00] INFO: Start handlers complete.
[2017-12-05T08:01:09+00:00] ERROR: Running exception handlers
[2017-12-05T08:01:09+00:00] ERROR: Exception handlers complete
[2017-12-05T08:01:09+00:00] FATAL: Stacktrace dumped to /chef/cache/chef-stacktrace.out
[2017-12-05T08:01:09+00:00] ERROR: undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>
[2017-12-05T08:01:09+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The command '/bin/sh -c chef-solo -c /chef/config.rb -j /chef/attributes.json' returned a non-zero code: 1
I'm new to Chef and not sure why I'm getting this error.
Your cookbook doesn't have a metadata.rb, which is probably breaking things, but you're also using Chef 11.18, which is entirely out of support at this point. Current Chef is 13.6.4.
Also, we don't really recommend using Chef to build container images in most cases. It can definitely work, but Chef was built to manage servers, so in most cases this will result in server-like fat images.
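If it helps, a minimal metadata.rb would look something like this (a sketch; the cookbook name is taken from the run list in your log, the version is illustrative):
# cookbooks/mysite/metadata.rb
name 'mysite'
version '0.1.0'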

Was my Hyperledger peer make successful?

I am following http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup/ and using Option 1, i.e. the Vagrant development environment. When I run make membersrvc && membersrvc I get the message below:
build/bin/membersrvc
CGO_CFLAGS=" " CGO_LDFLAGS="-lrocksdb -lstdc++ -lm -lz -lbz2 -lsnappy" GOBIN=/opt/gopath/src/github.com/hyperledger/fabric/build/bin go install -ldflags "-X github.com/hyperledger/fabric/metadata.Version=0.7.0-snapshot-131b36c" github.com/hyperledger/fabric/membersrvc
Binary available as build/bin/membersrvc
I assume membersrvc is running because "ps -a | grep membersrvc" returns
2486 pts/0 00:00:01 membersrvc
After this I ran "make peer" and got this:
Building docker javaenv-image
docker build -t hyperledger/fabric-javaenv build/image/javaenv
Sending build context to Docker daemon 44.03 kB
Step 1 : FROM openjdk:8
---> 96cddf5ae9f1
Step 2 : RUN wget https://services.gradle.org/distributions/gradle-2.12-bin.zip -P /tmp --quiet
---> Using cache
---> 3dbbd6c16d7e
Step 3 : RUN unzip -qo /tmp/gradle-2.12-bin.zip -d /opt && rm /tmp/gradle-2.12-bin.zip
---> Using cache
---> bd1d42253704
Step 4 : RUN ln -s /opt/gradle-2.12/bin/gradle /usr/bin
---> Using cache
---> 248e99587f37
Step 5 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> 27105db40f7a
Step 6 : ENV USER_HOME_DIR "/root"
---> Using cache
---> 03f5e84bf9ce
Step 7 : RUN mkdir -p /usr/share/maven /usr/share/maven/ref && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzC /usr/share/maven --strip-components=1 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
---> Running in 6ec30acda848
This stays on the window forever and nothing happens after it.
After this I try to run "peer node start --peer-chaincodedev" in another window, but I get the error below:
No command 'peer' found, did you mean:
Why has my peer not been created yet?
#PySa - a correct build of the peer will drop you back to the command line, and if you then issue the command peer it will show you the help/switches. To make/build the member services and peer, all you have to do is the following:
vagrant up
ssh into the machine
cd /hyperledger
make membersrvc
make peer - this can take a LOOOOONG time depending on your machine and internet connection; the process has to download a LOT of data to complete correctly.
Once the above is done I would also strongly suggest you run make unit-test and, when that's done, make behave - again these will take a long time to run, but assuming all is well, by the time they're done you'll be able to run membersrvc and peer node start (each in its own terminal window, as sketched below) without problems...
FYI - the member services do NOT report anything to the console - the peer, however, does...
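A minimal sketch of that last step, inside the Vagrant VM:
# terminal 1
membersrvc

# terminal 2
peer node start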
