undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378> - docker

I'm learning Docker using the book Docker in Practice.
I am working on technique 47 in chapter 5.
This recipe is about using Chef to manage Docker configurations.
The GitHub link is here.
When I build the Docker image from the Dockerfile, I encounter the error below.
$ docker build -t chef-example .
Sending build context to Docker daemon 9.728kB
Step 1/12 : FROM ubuntu:latest
---> ccc7a11d65b1
Step 2/12 : RUN apt-get update && apt-get install -yy git curl
---> Using cache
---> ef956c61c59f
Step 3/12 : RUN curl -L https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.3.5-1_amd64.deb -o chef.deb
---> Using cache
---> 1260301dbe67
Step 4/12 : RUN dpkg -i chef.deb && rm chef.deb
---> Using cache
---> 8c1aeaf84423
Step 5/12 : COPY . /chef
---> 18986195e732
Removing intermediate container 758dfce43670
Step 6/12 : WORKDIR /chef/cookbooks
---> fbdd9c386801
Removing intermediate container 936393187cb4
Step 7/12 : RUN knife cookbook site download apache2
---> Running in 2ba7d0765ae2
WARNING: No knife configuration file found
Downloading apache2 from the cookbooks site at version 5.0.1 to /chef/cookbooks/apache2-5.0.1.tar.gz
Cookbook saved: /chef/cookbooks/apache2-5.0.1.tar.gz
---> 8b3fa14f4416
Removing intermediate container 2ba7d0765ae2
Step 8/12 : RUN knife cookbook site download iptables
---> Running in 94275acfdb44
WARNING: No knife configuration file found
Downloading iptables from the cookbooks site at version 4.3.1 to /chef/cookbooks/iptables-4.3.1.tar.gz
Cookbook saved: /chef/cookbooks/iptables-4.3.1.tar.gz
---> c8a4c6d17253
Removing intermediate container 94275acfdb44
Step 9/12 : RUN knife cookbook site download logrotate
---> Running in 27b5f736d6cf
WARNING: No knife configuration file found
Downloading logrotate from the cookbooks site at version 2.2.0 to /chef/cookbooks/logrotate-2.2.0.tar.gz
Cookbook saved: /chef/cookbooks/logrotate-2.2.0.tar.gz
---> 1b4b4460bdc9
Removing intermediate container 27b5f736d6cf
Step 10/12 : RUN /bin/bash -c 'for f in $(ls *gz); do tar -zxf $f; rm $f; done'
---> Running in 7e6b912d910e
---> d5e77acc14f1
Removing intermediate container 7e6b912d910e
Step 11/12 : RUN chef-solo -c /chef/config.rb -j /chef/attributes.json
---> Running in a0c7f7f7a00a
[2017-12-05T08:01:08+00:00] INFO: Forking chef instance to converge...
[2017-12-05T08:01:08+00:00] INFO: *** Chef 11.18.0.rc.1 ***
[2017-12-05T08:01:08+00:00] INFO: Chef-client pid: 9
[2017-12-05T08:01:09+00:00] INFO: Setting the run_list to
["recipe[apache2::default]", "recipe[mysite::default]"] from CLI options
[2017-12-05T08:01:09+00:00] INFO: Run List is
[recipe[apache2::default], recipe[mysite::default]]
[2017-12-05T08:01:09+00:00] INFO: Run List expands to [apache2::default, mysite::default]
[2017-12-05T08:01:09+00:00] INFO: Starting Chef Run for 8089fe031125
[2017-12-05T08:01:09+00:00] INFO: Running start handlers
[2017-12-05T08:01:09+00:00] INFO: Start handlers complete.
[2017-12-05T08:01:09+00:00] ERROR: Running exception handlers
[2017-12-05T08:01:09+00:00] ERROR: Exception handlers complete
[2017-12-05T08:01:09+00:00] FATAL: Stacktrace dumped to /chef/cache/chef-stacktrace.out
[2017-12-05T08:01:09+00:00] ERROR: undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>
[2017-12-05T08:01:09+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The command '/bin/sh -c chef-solo -c /chef/config.rb -j /chef/attributes.json' returned a non-zero code: 1
I'm new to Chef, so I'm not sure why I'm getting this error.

Your cookbook doesn't have a metadata.rb, which is probably breaking things, but you're also using Chef 11.18, which is entirely out of support at this point. Current Chef is 13.6.4.
Also, we don't really recommend using Chef to build container images in most cases. It can definitely work, but Chef was built to manage servers, so this will result in server-like fat images in most cases.
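If you stay with this example anyway, both points are easy to act on in the Dockerfile. A rough sketch, not a tested fix (the omnitruck install script and its -P flag are Chef's documented install route, but verify against current docs; the mysite cookbook path matches the run list above, and the metadata values are placeholders):
# replace the pinned 2014-era ChefDK 0.3.5 with a current release
RUN curl -L https://omnitruck.chef.io/install.sh | bash -s -- -P chefdk
# give the local cookbook a minimal metadata.rb (name and version at minimum)
RUN printf "name 'mysite'\nversion '0.1.0'\n" > /chef/cookbooks/mysite/metadata.rb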

Related

"container-suseconnect-zypp" : dockerfile fail on PAYG SLES15.1 VM(azuer)

I'm new here.
I'm trying to create custom Docker container images on an Azure VM, but I can't create them because of "container-suseconnect-zypp".
Environment: Azure Virtual Machine (Standard B4ms) - SLES 15 SP1 (PAYG)
First of all, I have no problem getting the repositories on the local host (below):
# zypper lr -u
Refreshing service 'container-suseconnect-zypp'.
Repository priorities are without effect. All enabled repositories share the same priority.
# | Alias | Name | Enabled | GPG Check | Refresh | URI
----+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+---------+-----------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | Basesystem_Module_x86_64:SLE-Module-Basesystem15-SP1-Debuginfo-Pool | SLE-Module-Basesystem15-SP1-Debuginfo-Pool | No | ---- | ---- | plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
.
.
.
Secondly, I've already started the "containerbuild-regionsrv" service and used the host network when I built the Docker image, with reference to the following: https://documentation.suse.com/container/all/single-html/SLES-container/index.html
> sudo systemctl start containerbuild-regionsrv
> sudo systemctl enable containerbuild-regionsrv
> docker build --network host /build-directory/
My Dockerfile is:
FROM registry.suse.com/suse/sle15:15.1
# Extra metadata
LABEL version="1.0"
LABEL description="Base SLES 15 SP1 SAP image"
# Create zypper repos and empty folder in NEW Container
RUN mkdir -p /etc/zypp/repos.d \
&& mkdir -p /jail
# add repo from local repo
RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/ \
&& zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/
.
.
.
# Update repos and install missing packages:
RUN update-ca-certificates && zypper ref -s && zypper update -y
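As an aside, the unquoted & in those zypper ar lines is a problem in its own right: inside a RUN instruction the shell treats & as "run in background", so everything from path=... onward never reaches zypper at all (consistent with step 5/6 in the build output below printing no zypper output). A hedged sketch of the quoted form - the trailing alias arguments are made-up names, since zypper ar expects an alias after the URI:
RUN zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/" basesystem-product-debug \
 && zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/" basesystem-update-debug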
Here are my questions.
Why does the URI of the SLES repo start with "plugin:/" rather than "https://"? Is there any problem with adding the repo this way inside the container?
When I build the Docker image from the Dockerfile, the result is:
# docker build --network host -t base_os .
Sending build context to Docker daemon 4.359GB
Step 1/6 : FROM registry.suse.com/suse/sle15:15.1
---> d6d9e74d8ba3
Step 2/6 : LABEL version="1.0"
---> Running in 51b8f6dc39e5
Removing intermediate container 51b8f6dc39e5
---> 12b8756a372c
Step 3/6 : LABEL description="Base SLES 15 SP1 SAP image"
---> Running in 1b57cfcdceea
Removing intermediate container 1b57cfcdceea
---> aa8ddd1de6b4
Step 4/6 : RUN mkdir -p /etc/zypp/repos.d && mkdir -p /jail
---> Running in fd5a0d6cf9bc
Removing intermediate container fd5a0d6cf9bc
---> 982b38ddd9c7
Step 5/6 : RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
---> Running in 74416afc3982
Removing intermediate container 74416afc3982
---> 3e78bccdfcd1
Step 6/6 : RUN update-ca-certificates && zypper ref -s && zypper update -y
---> Running in 0d52ac4d4e28
Refreshing service 'container-suseconnect-zypp'.
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
All services have been refreshed.
Warning: There are no enabled repositories defined.
Use 'zypper addrepo' or 'zypper modifyrepo' commands to add or enable repositories.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
I think that because of the 'container-suseconnect-zypp' issue I can't install the additional packages needed for SAP inside my container, even if I build the Docker image without Dockerfile step 6/6.
Is this problem related to the Azure VM using PAYG SLES 15?
Do I need the SSL certificate used by RMT?

Docker RUN apt-get update returned a non-zero code: 132

Below is the docker build -t test:test . log:
Sending build context to Docker daemon 1.225MB
Step 1/3 : FROM ppc64le/ubuntu:jammy
---> b4cdd8bc1823
Step 2/3 : ARG DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 0d6079ed0b29
Step 3/3 : RUN apt-get update
---> Running in fca7ae125244
The command '/bin/sh -c apt-get update' returned a non-zero code: 132
It is clear to me that apt-get update is causing the problem, but I don't know how to solve it. I googled everywhere, but people don't seem to be getting this error code. Is it somehow related to ppc64le? Any clues?
Per man bash:
"When a command terminates on a fatal signal N, bash uses the value of 128+N as the exit status."
So exit code 132 indicates signal 4, which per kill -l
$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
is SIGILL, an illegal instruction. So either you aren't running on a POWER9+ ppc64le machine, or your qemu setup isn't emulating one.
POWER8 machines can fail with SIGILL because apt-get can be compiled with POWER9 instructions (as I found out in docker-library issue #12726).
Note that 22.04 jammy is POWER9/10+ per the release and the build.
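Both claims are easy to check locally. The 128+N rule can be reproduced with any shell, and asking the container for its architecture shows whether emulation is in play at all (a sketch; the second command assumes the same ppc64le image as above):
bash -c 'kill -4 $$'; echo $?                   # killed by signal 4 (SIGILL) -> prints 132
docker run --rm ppc64le/ubuntu:jammy uname -m   # expect ppc64le; dying with SIGILL here points at missing/broken emulation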

Skaffold and microk8s -- getting started -- x509: certificate signed by unknown authority

Trying to get started with skaffold, I'm hitting lots of issues. So I went back to basics and tried to get the examples running:
cloned https://github.com/GoogleContainerTools/skaffold
ran cd ~/git/skaffold/examples/getting-started/ and then tried to get going:
$ skaffold dev --default-repo=aliwatters
Listing files to watch...
- skaffold-example
Generating tags...
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
Checking cache...
- skaffold-example: Not found. Building
Building [skaffold-example]...
Sending build context to Docker daemon 3.072kB
Step 1/8 : FROM golang:1.12.9-alpine3.10 as builder
---> e0d646523991
Step 2/8 : COPY main.go .
---> Using cache
---> fb29e25db0a3
Step 3/8 : ARG SKAFFOLD_GO_GCFLAGS
---> Using cache
---> aa8dd4cbab42
Step 4/8 : RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -o /app main.go
---> Using cache
---> 9a666995c00a
Step 5/8 : FROM alpine:3.10
---> be4e4bea2c2e
Step 6/8 : ENV GOTRACEBACK=single
---> Using cache
---> bdb74c01e0b9
Step 7/8 : CMD ["./app"]
---> Using cache
---> 15c248dd54e9
Step 8/8 : COPY --from=builder /app .
---> Using cache
---> 73564337b083
Successfully built 73564337b083
Successfully tagged aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
The push refers to repository [docker.io/aliwatters/skaffold-example]
37806ae41d23: Preparing
1b3ee35aacca: Preparing
37806ae41d23: Pushed
1b3ee35aacca: Pushed
v1.18.0-2-gf0bfcccce: digest: sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac size: 739
Tags used in deployment:
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce@sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac
Deploy Failed. Could not connect to cluster microk8s due to "https://127.0.0.1:16443/version?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "10.152.183.1"). Check your connection for the cluster.
This isn't making sense to me: https://127.0.0.1:16443/version?timeout=32s is kubectl by the looks of it, and it has a self-signed cert (viewed in the browser).
$ snap version
snap 2.48.2
snapd 2.48.2
series 16
ubuntu 20.04
kernel 5.4.0-60-generic
$ snap list
# ...
microk8s v1.20.1 1910 1.20/stable canonical✓ classic
# ...
$ microk8s kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-34+e7db93d188d0d1", GitCommit:"e7db93d188d0d12f2fe5336d1b85cdb94cb909d3", GitTreeState:"clean", BuildDate:"2021-01-11T23:48:42Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-34+e7db93d188d0d1", GitCommit:"e7db93d188d0d12f2fe5336d1b85cdb94cb909d3", GitTreeState:"clean", BuildDate:"2021-01-11T23:50:46Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
$ skaffold version
v1.18.0
$ docker version
Client: Docker Engine - Community
Version: 19.03.4-rc1
...
Where do I start with debugging this?
Thanks for any ideas!
Solved via the GitHub issue https://github.com/GoogleContainerTools/skaffold/issues/5283 (thanks briandealwis).
A combination of an unalias and an export of the config was needed so skaffold could understand my setup.
$ sudo snap unalias kubectl
$ sudo snap install kubectl --classic
$ microk8s.kubectl config view --raw > $HOME/.kube/config
$ skaffold dev --default-repo=<your-docker-repository>
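The config view --raw step is what actually cures the x509 error: the raw microk8s config embeds the cluster's certificate-authority-data, so kubectl and skaffold can verify the API server's certificate. A quick sanity check, assuming the default config path:
grep certificate-authority-data $HOME/.kube/config   # the embedded CA should be present
kubectl get nodes                                    # should answer without TLS errors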
Full output
$ sudo snap unalias kubectl
# just in case
ali@stinky:~/git/skaffold/examples/getting-started (master)$ sudo snap install kubectl --classic
kubectl 1.20.2 from Canonical✓ installed
ali@stinky:~/git/skaffold/examples/getting-started (master)$ which kubectl
/snap/bin/kubectl
ali@stinky:~/git/skaffold/examples/getting-started (master)$ microk8s.kubectl config view --raw > $HOME/.kube/config
ali@stinky:~/git/skaffold/examples/getting-started (master)$ skaffold dev --default-repo=aliwatters
Listing files to watch...
- skaffold-example
Generating tags...
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
Checking cache...
- skaffold-example: Found Remotely
Tags used in deployment:
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce@sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac
Starting deploy...
- pod/getting-started created
Waiting for deployments to stabilize...
Deployments stabilized in 23.793238ms
Press Ctrl+C to exit
Watching for changes...
[getting-started] Hello world!
[getting-started] Hello world!
[getting-started] Hello world!
# ^C
Cleaning up...
- pod "getting-started" deleted

Is my Hyperledger peer make successful?

I am following http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup/ and using Option 1, i.e. the vagrant development environment. When I run make membersrvc && membersrvc I get the below message:
build/bin/membersrvc
CGO_CFLAGS=" " CGO_LDFLAGS="-lrocksdb -lstdc++ -lm -lz -lbz2 -lsnappy" GOBIN=/opt/gopath/src/github.com/hyperledger/fabric/build/bin go install -ldflags "-X github.com/hyperledger/fabric/metadata.Version=0.7.0-snapshot-131b36c" github.com/hyperledger/fabric/membersrvc
Binary available as build/bin/membersrvc
I assume membersrvc is running because "ps -a | grep membersrvc" returns
2486 pts/0 00:00:01 membersrvc
After this I ran "make peer" and got this:
Building docker javaenv-image
docker build -t hyperledger/fabric-javaenv build/image/javaenv
Sending build context to Docker daemon 44.03 kB
Step 1 : FROM openjdk:8
---> 96cddf5ae9f1
Step 2 : RUN wget https://services.gradle.org/distributions/gradle-2.12-bin.zip -P /tmp --quiet
---> Using cache
---> 3dbbd6c16d7e
Step 3 : RUN unzip -qo /tmp/gradle-2.12-bin.zip -d /opt && rm /tmp/gradle-2.12-bin.zip
---> Using cache
---> bd1d42253704
Step 4 : RUN ln -s /opt/gradle-2.12/bin/gradle /usr/bin
---> Using cache
---> 248e99587f37
Step 5 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> 27105db40f7a
Step 6 : ENV USER_HOME_DIR "/root"
---> Using cache
---> 03f5e84bf9ce
Step 7 : RUN mkdir -p /usr/share/maven /usr/share/maven/ref && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzC /usr/share/maven --strip-components=1 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
---> Running in 6ec30acda848
This stays on the window forever and nothing happens after this.
After this I try to run "peer node start --peer-chaincodedev" in another window, but I get the below error:
No command 'peer' found, did you mean:
Why is my peer not created yet?
@PySa - a correct build of the peer will drop you back to the command line, and if you then issue the command peer it will show you the help/switches. To make/build the member services and peer, all you have to do is the following:
vagrant up
ssh into the machine
cd /hyperledger
make membersrvc
make peer - this can take a LOOOOONG time depending on your machine & internet connection - the process has to download a LOT of data to complete correctly.
Once the above is done, I would also strongly suggest you run make unit-test and, when that's done, make behave - again these will take a long time to run, but assuming all is well, by the time they're done you'll be able to run membersrvc and peer node start (each in its own terminal window) without problems...
FYI - the member services do NOT report anything to the console - the peer, however, does...
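Collected into one runnable sequence (the same steps as above; /hyperledger is the source mount inside the vagrant box in this setup):
vagrant up
vagrant ssh                           # ssh into the machine
cd /hyperledger
make membersrvc
make peer                             # slow: downloads a lot of data
make unit-test                        # optional but strongly suggested
make behave
# afterwards, in two separate terminal windows:
membersrvc
peer node start --peer-chaincodedev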

using ansible for provisioning docker containers

Hi, I am using the below Dockerfile to build an image. It runs the maven playbook perfectly, but when it starts running the sonar playbook it hangs and nothing happens. The only difference between the playbooks is that the sonar playbook has restart statements. Could that be the reason for the problem? I have tested each playbook, and each one runs perfectly inside the container.
Is there any way to see the container logs side by side with what ansible is doing while building the image? As of now, Docker spits out logs only when it's done with a step.
FROM xyz.com/akathaku/cidemo:latest
MAINTAINER akathaku <akathaku@gmail.com>
USER root
ENV SCPATH /etc/supervisor/conf.d
ENV PLAYBOOKS /ansible
# RUN yum -y update
# The daemons
RUN yum -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
#Running ansible
ADD ./sonar-playbook $PLAYBOOKS/sonar-playbook
ADD ./maven-playbook $PLAYBOOKS/maven-playbook
ADD ./jenkins-playbook $PLAYBOOKS/jenkins-playbook
ADD ./tomcat-playbook $PLAYBOOKS/tomcat-playbook
WORKDIR $PLAYBOOKS
RUN ansible-playbook $PLAYBOOKS/sonar-playbook/sonar.yml -c local
RUN ansible-playbook $PLAYBOOKS/maven-playbook/maven.yml -c local
RUN ansible-playbook $PLAYBOOKS/jenkins-playbook/jenkins.yml -c local
RUN ansible-playbook $PLAYBOOKS/tomcat-playbook/tomcat.yml -c local
# Application Code
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisor.conf"]
EXPOSE 8080 8081 9000 9001 8086
Below is the output of docker build:
Step 1 : FROM xyz.com/akathaku/cidemo:latest
---> d477ceab5d3b
Step 2 : MAINTAINER akathaku <akathaku@gmail.com>
---> Using cache
---> a9c3c191aabd
Step 3 : USER root
---> Using cache
---> 24a18ddc6f49
Step 4 : ENV SCPATH /etc/supervisor/conf.d
---> Using cache
---> ea6f4dada89c
Step 5 : ENV PLAYBOOKS /ansible
---> Using cache
---> 9d42760dc51f
Step 6 : RUN yum -y install supervisor
---> Using cache
---> 7af486ce2a8c
Step 7 : RUN mkdir -p /var/log/supervisor
---> Using cache
---> a1b1c145d490
Step 8 : ADD ./supervisord/conf.d/* $SCPATH/
---> Using cache
---> f16e32135351
Step 9 : ADD ./sonar-playbook $PLAYBOOKS/sonar-playbook
---> 170c1dc82ffa
Removing intermediate container bfa474ef9d11
Step 10 : ADD ./maven-playbook $PLAYBOOKS/maven-playbook
---> 90a57735fe3b
Removing intermediate container b5f7bbb3b85d
Step 11 : ADD ./jenkins-playbook $PLAYBOOKS/jenkins-playbook
---> 09ab0f929f45
Removing intermediate container 7dc62423354d
Step 12 : ADD ./tomcat-playbook $PLAYBOOKS/tomcat-playbook
---> 13c3bb5f7aca
Removing intermediate container 4356605f503a
Step 13 : WORKDIR $PLAYBOOKS/maven-playbook
---> Running in 34867677f4e1
---> f48ffe4115db
Removing intermediate container 34867677f4e1
Step 14 : RUN ansible-playbook maven.yml -c local
---> Running in 4eda53bf7e00
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [maven | Install Java 1.8 JRE] ******************************************
changed: [localhost]
TASK: [maven | Install Java 1.8 JDK] ******************************************
changed: [localhost]
TASK: [maven | lineinfile dest='/etc/profile' regexp='^#?\s*export JAVA_HOME=(.*)$' line='export JAVA_HOME=/usr/lib/jvm/java-openjdk' state=present] ***
changed: [localhost]
TASK: [maven | lineinfile dest=/etc/profile regexp='^#?\s*export PATH=(.*)JAVA_HOME(.*)$' line="export PATH=$PATH:$JAVA_HOME/bin" state=present] ***
changed: [localhost]
TASK: [maven | Reload profile] ************************************************
changed: [localhost]
TASK: [maven | Download Apache Maven] *****************************************
changed: [localhost]
TASK: [maven | Untar Maven to /opt] *******************************************
changed: [localhost]
TASK: [maven | Create symbolic link maven to the /opt/apache-{{ maven_version }}] ***
changed: [localhost]
TASK: [maven | lineinfile dest=/etc/profile regexp='^#?\s*export MAVEN_HOME=(.*)$' line='export MAVEN_HOME=/opt/maven' state=present] ***
changed: [localhost]
TASK: [maven | lineinfile dest=/etc/profile regexp='^#?\s*export PATH=(.*)MAVEN_HOME(.*)$' line="export PATH=$PATH:$MAVEN_HOME/bin" state=present] ***
changed: [localhost]
TASK: [maven | Reload profile] ************************************************
changed: [localhost]
TASK: [maven | Create local repository] ***************************************
changed: [localhost]
TASK: [maven | Creates setting.xml file] **************************************
changed: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=14 changed=13 unreachable=0 failed=0
---> fb211f3fbbfd
Removing intermediate container 4eda53bf7e00
Step 15 : WORKDIR $PLAYBOOKS/sonar-playbook
---> Running in b9ba0623b48a
---> d48ad5abbf43
Removing intermediate container b9ba0623b48a
Step 16 : RUN ansible-playbook sonar.yml -c local
---> Running in 2d2716354c5d
Here's what you can do:
Comment out the Dockerfile from the line that hangs until the end.
Build the image, start a new container interactively, and then run the same line directly from the shell. This is basically equivalent to letting it run during docker build, only you'd get a chance to also inspect whatever's happening on that machine (you could tail logs, you could docker exec into the container while it's hanging and figure out which process is stuck, etc.).
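A rough sketch of that workflow (the image and container names here are made up, and -vvv just raises ansible's verbosity; the /ansible path matches the PLAYBOOKS env var in the Dockerfile above):
# build with the hanging RUN line commented out
docker build -t cidemo-debug .
# start a container interactively and run the hanging step by hand
docker run -it --name sonar-debug cidemo-debug /bin/bash
ansible-playbook /ansible/sonar-playbook/sonar.yml -c local -vvv
# from a second terminal, while it hangs:
docker exec -it sonar-debug bash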
