Skaffold and microk8s -- getting started -- x509: certificate signed by unknown authority - docker

Trying to get started with skaffold, I hit lots of issues, so I went back to basics and tried to get the examples running:
cloned https://github.com/GoogleContainerTools/skaffold
ran cd ~/git/skaffold/examples/getting-started/ and then tried to get going:
$ skaffold dev --default-repo=aliwatters
Listing files to watch...
- skaffold-example
Generating tags...
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
Checking cache...
- skaffold-example: Not found. Building
Building [skaffold-example]...
Sending build context to Docker daemon 3.072kB
Step 1/8 : FROM golang:1.12.9-alpine3.10 as builder
---> e0d646523991
Step 2/8 : COPY main.go .
---> Using cache
---> fb29e25db0a3
Step 3/8 : ARG SKAFFOLD_GO_GCFLAGS
---> Using cache
---> aa8dd4cbab42
Step 4/8 : RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -o /app main.go
---> Using cache
---> 9a666995c00a
Step 5/8 : FROM alpine:3.10
---> be4e4bea2c2e
Step 6/8 : ENV GOTRACEBACK=single
---> Using cache
---> bdb74c01e0b9
Step 7/8 : CMD ["./app"]
---> Using cache
---> 15c248dd54e9
Step 8/8 : COPY --from=builder /app .
---> Using cache
---> 73564337b083
Successfully built 73564337b083
Successfully tagged aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
The push refers to repository [docker.io/aliwatters/skaffold-example]
37806ae41d23: Preparing
1b3ee35aacca: Preparing
37806ae41d23: Pushed
1b3ee35aacca: Pushed
v1.18.0-2-gf0bfcccce: digest: sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac size: 739
Tags used in deployment:
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce@sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac
Deploy Failed. Could not connect to cluster microk8s due to "https://127.0.0.1:16443/version?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "10.152.183.1"). Check your connection for the cluster.
This isn't making sense to me: https://127.0.0.1:16443/version?timeout=32s looks like the Kubernetes API server that kubectl talks to, and it serves a self-signed certificate (viewed in the browser).
$ snap version
snap 2.48.2
snapd 2.48.2
series 16
ubuntu 20.04
kernel 5.4.0-60-generic
$ snap list
# ...
microk8s v1.20.1 1910 1.20/stable canonical✓ classic
# ...
$ microk8s kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-34+e7db93d188d0d1", GitCommit:"e7db93d188d0d12f2fe5336d1b85cdb94cb909d3", GitTreeState:"clean", BuildDate:"2021-01-11T23:48:42Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-34+e7db93d188d0d1", GitCommit:"e7db93d188d0d12f2fe5336d1b85cdb94cb909d3", GitTreeState:"clean", BuildDate:"2021-01-11T23:50:46Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
$ skaffold version
v1.18.0
$ docker version
Client: Docker Engine - Community
Version: 19.03.4-rc1
...
Where do I start with debugging this?
Thanks for any ideas!

Solved via the GitHub issue https://github.com/GoogleContainerTools/skaffold/issues/5283 (thanks briandealwis).
A combination of removing the snap alias and exporting the kubeconfig was needed so that skaffold could understand my setup.
$ sudo snap unalias kubectl
$ sudo snap install kubectl --classic
$ microk8s.kubectl config view --raw > $HOME/.kube/config
$ skaffold dev --default-repo=<your-docker-repository>
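Before re-running skaffold, it can help to sanity-check that kubectl now points at the microk8s cluster without certificate errors (the context name below is the usual microk8s default and may differ on your machine):
$ kubectl config current-context
# expected: microk8s
$ kubectl get nodes
# should list the microk8s node without an x509 error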
Full output
$ sudo snap unalias kubectl
# just in case
ali@stinky:~/git/skaffold/examples/getting-started (master)$ sudo snap install kubectl --classic
kubectl 1.20.2 from Canonical✓ installed
ali@stinky:~/git/skaffold/examples/getting-started (master)$ which kubectl
/snap/bin/kubectl
ali@stinky:~/git/skaffold/examples/getting-started (master)$ microk8s.kubectl config view --raw > $HOME/.kube/config
ali@stinky:~/git/skaffold/examples/getting-started (master)$ skaffold dev --default-repo=aliwatters
Listing files to watch...
- skaffold-example
Generating tags...
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
Checking cache...
- skaffold-example: Found Remotely
Tags used in deployment:
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce@sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac
Starting deploy...
- pod/getting-started created
Waiting for deployments to stabilize...
Deployments stabilized in 23.793238ms
Press Ctrl+C to exit
Watching for changes...
[getting-started] Hello world!
[getting-started] Hello world!
[getting-started] Hello world!
# ^C
Cleaning up...
- pod "getting-started" deleted

Related

"container-suseconnect-zypp" : dockerfile fail on PAYG SLES15.1 VM(azuer)

I'm new here.
I'm trying to create custom Docker container images on an Azure VM,
but I can't create them because of "container-suseconnect-zypp".
Environment: Azure Virtual Machine (Standard B4ms) - SLES15 SP1 (PAYG)
First of all, I have no problem getting the repositories on the host itself (below):
# zypper lr -u
Refreshing service 'container-suseconnect-zypp'.
Repository priorities are without effect. All enabled repositories share the same priority.
# | Alias | Name | Enabled | GPG Check | Refresh | URI
----+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+---------+-----------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | Basesystem_Module_x86_64:SLE-Module-Basesystem15-SP1-Debuginfo-Pool | SLE-Module-Basesystem15-SP1-Debuginfo-Pool | No | ---- | ---- | plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
.
.
.
Secondly, I've already started the "containerbuild-regionsrv" service
and used the host network when I built the Docker image,
following: https://documentation.suse.com/container/all/single-html/SLES-container/index.html
> sudo systemctl start containerbuild-regionsrv
> sudo systemctl enable containerbuild-regionsrv
> docker build --network host /build-directory/
My Dockerfile is:
FROM registry.suse.com/suse/sle15:15.1
# Extra metadata
LABEL version="1.0"
LABEL description="Base SLES 15 SP1 SAP image"
# Create zypper repos and empty folder in NEW Container
RUN mkdir -p /etc/zypp/repos.d \
&& mkdir -p /jail
# add repo from local repo
RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/ \
&& zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/
.
.
.
# Update repos and install missing packages:
RUN update-ca-certificates && zypper ref -s && zypper update -y
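(Side note: the repo URIs contain ? and &; left unquoted, the shell behind RUN treats & as a background operator and cuts the URI short. A hedged sketch of the same zypper step with quoting, where the trailing alias names are only examples since zypper addrepo expects a URI plus an alias:)
RUN zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/" basesystem-debug-pool \
 && zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/" basesystem-debug-updates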
Here's my question:
Why does the URI of my SLES repo start with "plugin:/" rather than "https://"? Is it a problem to add such a repo inside a container?
When I build the Docker image from this Dockerfile, my result is:
# docker build --network host -t base_os .
Sending build context to Docker daemon 4.359GB
Step 1/6 : FROM registry.suse.com/suse/sle15:15.1
---> d6d9e74d8ba3
Step 2/6 : LABEL version="1.0"
---> Running in 51b8f6dc39e5
Removing intermediate container 51b8f6dc39e5
---> 12b8756a372c
Step 3/6 : LABEL description="Base SLES 15 SP1 SAP image"
---> Running in 1b57cfcdceea
Removing intermediate container 1b57cfcdceea
---> aa8ddd1de6b4
Step 4/6 : RUN mkdir -p /etc/zypp/repos.d && mkdir -p /jail
---> Running in fd5a0d6cf9bc
Removing intermediate container fd5a0d6cf9bc
---> 982b38ddd9c7
Step 5/6 : RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
---> Running in 74416afc3982
Removing intermediate container 74416afc3982
---> 3e78bccdfcd1
Step 6/6 : RUN update-ca-certificates && zypper ref -s && zypper update -y
---> Running in 0d52ac4d4e28
Refreshing service 'container-suseconnect-zypp'.
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
All services have been refreshed.
Warning: There are no enabled repositories defined.
Use 'zypper addrepo' or 'zypper modifyrepo' commands to add or enable repositories.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
I think that because of this 'container-suseconnect-zypp' issue I can't install the additional packages needed for SAP inside my container, even if I build the Docker image without Dockerfile step 6/6.
Is this problem related to the Azure VM using PAYG SLES15?
Do I need the SSL certificate used by RMT?

Installing plugin on ElasticSearch within Dockerfile fails with SSL error

Here is the Dockerfile I use :
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:7.9.3
ENV ES_PATH=/usr/share/elasticsearch
USER root
RUN $ES_PATH/bin/elasticsearch-plugin install analysis-icu \
&& $ES_PATH/bin/elasticsearch-plugin install analysis-kuromoji \
&& $ES_PATH/bin/elasticsearch-plugin install analysis-smartcn \
&& $ES_PATH/bin/elasticsearch-plugin install analysis-stempel
USER elasticsearch
EXPOSE 9200 9300
When I try to build it using docker image build -t elasticsearch:7.9.3-custom . I get this error:
$ docker image build -t elasticsearch:7.9.3-liferay .
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM docker.elastic.co/elasticsearch/elasticsearch-oss:7.9.3
---> 8ac9cec94278
Step 2/6 : ENV ES_PATH=/usr/share/elasticsearch
---> Using cache
---> 7e546ac6cbe6
Step 3/6 : USER root
---> Using cache
---> 6a5b7b716ae7
Step 4/6 : RUN $ES_PATH/bin/elasticsearch-plugin install analysis-icu && $ES_PATH/bin/elasticsearch-plugin install analysis-kuromoji && $ES_PATH/bin/elasticsearch-plugin install analysis-smartcn && $ES_PATH/bin/elasticsearch-plugin install analysis-stempel
---> Running in 3722c6026f45
-> Installing analysis-icu
-> Failed installing analysis-icu
-> Rolling back analysis-icu
-> Rolled back analysis-icu
Exception in thread "main" javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake
at java.base/sun.security.ssl.SSLSocketImpl.handleEOF(SSLSocketImpl.java:1687)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1496)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1394)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:441)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:412)
at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:567)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:142)
at org.elasticsearch.plugins.InstallPluginCommand.urlExists(InstallPluginCommand.java:418)
at org.elasticsearch.plugins.InstallPluginCommand.getElasticUrl(InstallPluginCommand.java:374)
at org.elasticsearch.plugins.InstallPluginCommand.download(InstallPluginCommand.java:305)
at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:251)
at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:224)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127)
at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:91)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:47)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:474)
at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:463)
at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1488)
... 17 more
It seems to be related to some kind of SSL issue but I don't see what to do and where.
Is this a Docker issue? An environment issue? A proxy/firewall issue?
Do you have any idea of what's going on and what to do to fix it?
The env is CentOS 7, Docker 18.09.7.
This is a Java SSL handshake exception. It usually means that the remote server (or something in between, such as a proxy) terminated the handshake, often because of a certificate problem.
To resolve this issue, you can try the following steps:
Make sure the certificate you are using is valid and trusted.
Make sure the system clock is correct, since certificates are only valid within their validity period.
Try skipping certificate verification temporarily so you can get more information about the problem.
If none of the above steps work, check whether the server configuration (port, protocol version, etc.) is correct.
Hope this information helps you.
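If the handshake failure is caused by a proxy or TLS inspection between the build host and artifacts.elastic.co, one workaround is to download the plugin zips on a machine with working TLS and install them offline during the build. A minimal sketch, assuming the standard plugin download URLs for 7.9.3 (file names and paths here are illustrative, not taken from the question):
# Download beforehand on a host that can reach artifacts.elastic.co, e.g.:
#   curl -LO https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-7.9.3.zip
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:7.9.3
ENV ES_PATH=/usr/share/elasticsearch
USER root
# Copy the pre-downloaded plugin archives into the image
COPY analysis-icu-7.9.3.zip analysis-kuromoji-7.9.3.zip /tmp/
# Install from local files instead of contacting artifacts.elastic.co during the build
RUN $ES_PATH/bin/elasticsearch-plugin install --batch file:///tmp/analysis-icu-7.9.3.zip \
 && $ES_PATH/bin/elasticsearch-plugin install --batch file:///tmp/analysis-kuromoji-7.9.3.zip \
 && rm /tmp/*.zip
USER elasticsearch
EXPOSE 9200 9300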

Docker Hub automated build failing with missing variable

I have set the "BUILD ENVIRONMENT VARIABLES" with the necessary variable and added hooks/build with:
#! /bin/bash
docker build \
--build-arg HBASE_VERSION="${HBASE_VERSION}" \
-f "${DOCKERFILE_PATH}" \
-t "${IMAGE_NAME}" .
They are not being passed during the build process; take a look at the log output:
Building in Docker Cloud's infrastructure...
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.
Reset branch 'develop'
Your branch is up-to-date with 'origin/develop'.
KernelVersion: 4.4.0-1060-aws
Components: [{u'Version': u'18.03.1-ee-3', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.4.0-1060-aws', u'Os': u'linux', u'BuildTime': u'2018-08-30T18:42:30.000000000+00:00', u'ApiVersion': u'1.37', u'MinAPIVersion': u'1.12', u'GitCommit': u'b9a5c95', u'Arch': u'amd64', u'Experimental': u'false', u'GoVersion': u'go1.10.2'}}]
Arch: amd64
BuildTime: 2018-08-30T18:42:30.000000000+00:00
ApiVersion: 1.37
Platform: {u'Name': u''}
Version: 18.03.1-ee-3
MinAPIVersion: 1.12
GitCommit: b9a5c95
Os: linux
GoVersion: go1.10.2
Starting build of index.docker.io/rowupper/hbase-base:1.4.9...
Step 1/9 : FROM openjdk:8-jre-alpine3.9
---> b76bbdb2809f
Step 2/9 : RUN apk add --no-cache wget bash perl
---> Running in 50cf82a30723
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
Executing busybox-1.29.3-r10.trigger
OK: 130 MiB in 60 packages
Removing intermediate container 50cf82a30723
---> 108b5b9b6569
Step 3/9 : ARG HBASE_VERSION
---> Running in 5407a0bcbf60
Removing intermediate container 5407a0bcbf60
---> ea35e0967933
Step 4/9 : ENV HBASE_HOME=/usr/local/hbase HBASE_CONF_DIR=/etc/hbase PATH=${HBASE_HOME}/bin:$PATH
---> Running in 3a74e814acc8
Removing intermediate container 3a74e814acc8
---> 7a289348ba9b
Step 5/9 : WORKDIR $HBASE_HOME
Removing intermediate container e842d4658bf1
---> a6fede2510ec
Step 6/9 : RUN wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions
---> Running in 39b75bc77c5a
--2019-03-19 18:46:05-- https://archive.apache.org/dist/hbase//hbase--bin.tar.gz
Resolving archive.apache.org... 163.172.17.199
Connecting to archive.apache.org|163.172.17.199|:443...
connected.
HTTP request sent, awaiting response...
404 Not Found
2019-03-19 18:46:06 ERROR 404: Not Found.
tar: invalid magic
tar: short read
Removing intermediate container 39b75bc77c5a
The command '/bin/sh -c wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions' returned a non-zero code: 1
It is visible that the variable is missing. What do I need to do to solve this issue?
Could you try ARG and ENV, like
ARG HBASE_VERSION="default_value"
ENV HBASE_VERSION="$HBASE_VERSION"
in your Dockerfile?
The --build-arg value should override the "default_value".
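For reference, a minimal sketch of that ARG-with-default pattern in the Dockerfile (the default 1.4.9 is only illustrative; the real value still comes from --build-arg in hooks/build):
ARG HBASE_VERSION="1.4.9"
ENV HBASE_VERSION="${HBASE_VERSION}"
# The download URL is now never built with an empty version
RUN wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
    | tar -xz --strip-components=1 --no-same-owner --no-same-permissions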
The hooks/build file needs to live in the same directory as the Dockerfile.
My project has several subfolders, each with its own Dockerfile.

Error processing tar file (exit status 1): unexpected EOF when building with docker-compose while data directory exists

My docker-compose.yml looks like this:
version: '3'
services:
phab:
build:
context: .
args:
- PHAB_BASE_URI=https://phab.example.com
- PHAB_REPO_PATH=/var/repo
- PHAB_TIMEZONE=Europe/Berlin
- PHP_POST_MAX_SIZE=32MB
ports:
- "127.0.0.1:8012:80"
volumes:
- ./.data/repos:/var/repo
- ./.data/mysql:/var/lib/mysql/
If I try to rebuild after I started the container, I get
$ docker-compose build
Building phab
ERROR: Error processing tar file(exit status 1): unexpected EOF
This appears to be due to the .data directory. The only "cure" I found was either deleting the directory or moving it outside of the project directory. Renaming the directory to e.g. .data1 does not fix it.
$ sudo mv .data .data1
$ docker-compose build
Building phab
ERROR: Error processing tar file(exit status 1): unexpected EOF
$ sudo mv .data1 ..
$ docker-compose build
Building phab
Step 1/27 : FROM tutum/lamp:latest
---> 3d49e175ec00
Step 2/27 : RUN apt-get update && apt-get install -y php5-curl php5-mysqlnd php5-gd python3-pygments
---> Using cache
[ ... ]
I am using docker-compose 1.18.0, build 8dd22a9 and Docker 18.06.0-ce, build 0ffa825 on Debian 9.5.
I have seen the question Docker ERROR: Error processing tar file(exit status 1): unexpected EOF. However, just flushing the /var/lib/docker directory does not appear to be an option for me. Pruning unused images and even removing the base image before the build does not fix the issue.
I had the same issue. The following steps solved it:
1.) Stop Docker Service.
systemctl stop docker
2.) Backup /var/lib/docker.
3.) Remove /var/lib/docker.
sudo rm -rf /var/lib/docker
4.) Start Docker Service.
systemctl start docker
Try upgrading to 18.09 which was just released yesterday. The "unexpected EOF" during a build looks like a known issue with 18.06: https://github.com/moby/moby/pull/37771
Remove files generated by other containers, like "db_data", "mysql_data", etc.
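Since the question already points at the ./.data directory (live repo and MySQL files) as the trigger, a less destructive option than wiping /var/lib/docker is to keep that directory out of the build context with a .dockerignore next to the Dockerfile; a minimal sketch:
# .dockerignore in the build context root
# exclude container-generated data so it is never sent to the daemon
.data/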

undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>

I'm learning Docker using the book Docker in Practice.
I am working on technique 47 in chapter 5.
This recipe is about using Chef for managing Docker configurations.
The GitHub link is here.
When I build the Docker image, I encounter the error below.
$ docker build -t chef-example .
Sending build context to Docker daemon 9.728kB
Step 1/12 : FROM ubuntu:latest
---> ccc7a11d65b1
Step 2/12 : RUN apt-get update && apt-get install -yy git curl
---> Using cache
---> ef956c61c59f
Step 3/12 : RUN curl -L https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.3.5-1_amd64.deb -o chef.deb
---> Using cache
---> 1260301dbe67
Step 4/12 : RUN dpkg -i chef.deb && rm chef.deb
---> Using cache
---> 8c1aeaf84423
Step 5/12 : COPY . /chef
---> 18986195e732
Removing intermediate container 758dfce43670
Step 6/12 : WORKDIR /chef/cookbooks
---> fbdd9c386801
Removing intermediate container 936393187cb4
Step 7/12 : RUN knife cookbook site download apache2
---> Running in 2ba7d0765ae2
WARNING: No knife configuration file found
Downloading apache2 from the cookbooks site at version 5.0.1 to /chef/cookbooks/apache2-5.0.1.tar.gz
Cookbook saved: /chef/cookbooks/apache2-5.0.1.tar.gz
---> 8b3fa14f4416
Removing intermediate container 2ba7d0765ae2
Step 8/12 : RUN knife cookbook site download iptables
---> Running in 94275acfdb44
WARNING: No knife configuration file found
Downloading iptables from the cookbooks site at version 4.3.1 to /chef/cookbooks/iptables-4.3.1.tar.gz
Cookbook saved: /chef/cookbooks/iptables-4.3.1.tar.gz
---> c8a4c6d17253
Removing intermediate container 94275acfdb44
Step 9/12 : RUN knife cookbook site download logrotate
---> Running in 27b5f736d6cf
WARNING: No knife configuration file found
Downloading logrotate from the cookbooks site at version 2.2.0 to /chef/cookbooks/logrotate-2.2.0.tar.gz
Cookbook saved: /chef/cookbooks/logrotate-2.2.0.tar.gz
---> 1b4b4460bdc9
Removing intermediate container 27b5f736d6cf
Step 10/12 : RUN /bin/bash -c 'for f in $(ls *gz); do tar -zxf $f; rm $f; done'
---> Running in 7e6b912d910e
---> d5e77acc14f1
Removing intermediate container 7e6b912d910e
Step 11/12 : RUN chef-solo -c /chef/config.rb -j /chef/attributes.json
---> Running in a0c7f7f7a00a
[2017-12-05T08:01:08+00:00] INFO: Forking chef instance to converge...
[2017-12-05T08:01:08+00:00] INFO: *** Chef 11.18.0.rc.1 ***
[2017-12-05T08:01:08+00:00] INFO: Chef-client pid: 9
[2017-12-05T08:01:09+00:00] INFO: Setting the run_list to
["recipe[apache2::default]", "recipe[mysite::default]"] from CLI options
[2017-12-05T08:01:09+00:00] INFO: Run List is
[recipe[apache2::default], recipe[mysite::default]]
[2017-12-05T08:01:09+00:00] INFO: Run List expands to [apache2::default, mysite::default]
[2017-12-05T08:01:09+00:00] INFO: Starting Chef Run for 8089fe031125
[2017-12-05T08:01:09+00:00] INFO: Running start handlers
[2017-12-05T08:01:09+00:00] INFO: Start handlers complete.
[2017-12-05T08:01:09+00:00] ERROR: Running exception handlers
[2017-12-05T08:01:09+00:00] ERROR: Exception handlers complete
[2017-12-05T08:01:09+00:00] FATAL: Stacktrace dumped to /chef/cache/chef-stacktrace.out
[2017-12-05T08:01:09+00:00] ERROR: undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>
[2017-12-05T08:01:09+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The command '/bin/sh -c chef-solo -c /chef/config.rb -j /chef/attributes.json' returned a non-zero code: 1
I'm new to chef. Not sure why I'm getting this error.
Your cookbook doesn't have a metadata.rb, which is probably breaking things, but you're also using Chef 11.18, which is entirely out of support at this point. Current Chef is 13.6.4.
Also we don't really recommend using Chef to build container images in most cases. It can definitely work, but overall Chef was built to manage servers so this will result in server-like fat images in most cases.
