Skip error in dockerfile during the build - docker

Just wondering if there's a better way to skip a command that could fail (I'm using Jenkins to build and deploy the application).
Right now I'm doing something like this:
RUN unlink /run/supervisor.sock && /etc/init.d/supervisor stop || echo "supervisor was not started"

This is a typical Linux trick to ensure a zero exit code:
RUN unlink /run/supervisor.sock && /etc/init.d/supervisor stop || :
The answer given here essentially uses different syntax to achieve the same thing:
Dockerfile build - possible to ignore error?
There is no other way of preventing a build failure at present.
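For anyone unfamiliar with the idiom: ':' is the shell's no-op builtin and always exits 0, so appending || : swallows the failure. A quick illustration outside Docker, reusing the supervisor command from the question:
/etc/init.d/supervisor stop || :   # ':' always succeeds, so the whole line exits 0
echo $?                            # prints 0 even if the stop command failed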

Related

Script to automate timeshift backup and azuracast update

I'm running an Azuracast Docker instance on Linode and want to find a way to automate my updates. Right now my routine is: when I notice in the Azuracast web panel that there are updates, I run timeshift to create a backup using the following command:
timeshift --create --comment "azuracast update"
And then I use the following to update Azuracast:
cd /var/azuracast/
./docker.sh update-self
./docker.sh update
Then it asks me to confirm that the Azuracast installation is backed up before updating, to which I usually just press Enter.
After that is completed, it asks me if I want to clean up all stopped Docker containers and images to save space, which I usually say no to.
What I'm wondering is whether there is a way to create a bash script, or a Python script, or something similar to automate all of this, and then have it run on a schedule?
Sure, you can write a shell script to execute these commands and then run it on a schedule using cron.
For example, your script might look like:
#! /bin/sh
# Backup azuracast and restart docker container
timeshift --create --comment "azuracast update" && \
cd /var/azuracast/ && \
./docker.sh update-self && \
(yes | ./docker.sh update)
It sounds like this docker.sh program takes some user input. See if there are options you can pass to it that would allow you to run it non-interactively. (It seems there aren't; see the edit.)
To set up your cron job, you can put the script in /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly. Or, if you need more control, you can get started configuring a cron job with crontab -e. Better explanation.
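If you go the crontab route, a minimal sketch of an entry might look like this (the schedule and the script path are assumptions, not anything from the question):
# run the update script every Monday at 04:00, logging its output
0 4 * * 1 /usr/local/bin/azuracast-update.sh >> /var/log/azuracast-update.log 2>&1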
EDIT: Assuming this is the script you're using, it doesn't seem to have a way to run update non-interactively. Fear not, though: there's a program for this, yes(1). It will answer yes to both of the questions, but honestly running docker system prune -f is probably a good idea. If you really want to answer no to that, you could substitute printf 'y\nn\n' for yes to answer yes to the first question and no to the second.
Also note that there's at least one other y/n question it could ask you, which you probably want to answer yes to.
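A sketch of that printf variant, assuming the prompts arrive in the order the question describes (backup confirmation first, clean-up question second):
# 'y' answers the backup-confirmation prompt, 'n' declines the clean-up
printf 'y\nn\n' | ./docker.sh update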

Error compiling Go from source in an Alpine Docker container: "loadinternal: cannot find runtime/cgo"

I'm trying to build an Alpine Docker image for the FIPS-enabled version of Go. To do this, I am trying to build Go from source using the dev.boringcrypto branch of the golang/go repository.
Upon running ./all.bash, I get the following errors:
Step 4/4 : RUN cd go/src && ./all.bash
---> Running in 00db552598f7
Building Go cmd/dist using /usr/lib/go.
# _/go/src/cmd/dist
loadinternal: cannot find runtime/cgo
/usr/lib/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find Scrt1.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find crti.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lssp_nonshared
collect2: error: ld returned 1 exit status
The command '/bin/bash -c cd go/src && ./all.bash' returned a non-zero code: 2
This causes the installation tests to fail and kicks me out of the Docker image build.
I have gcc installed on the image, and I tried setting the environment variable CGO_ENABLED=0 as suggested in other questions, but neither of these things seems to alleviate the problem.
I'm at my wits' end with this problem. Has anybody else run into similar issues in the past? I don't understand why this is happening, as the build runs fine in an Ubuntu container.
Thanks!
I had the same error messages, although I was compiling a different project.
It turns out that Alpine needs to have the musl-dev package installed for this to work, so you need to make sure that it is included in your Dockerfile, or install it manually by running apk add --no-cache musl-dev.
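A minimal sketch of the relevant Dockerfile lines, assuming an Alpine base image; only musl-dev is the fix from this answer, the other packages are assumptions about what a Go source build typically needs:
FROM alpine:3.8
# musl-dev provides the C runtime object files (Scrt1.o, crti.o) the linker could not find
RUN apk add --no-cache bash gcc musl-dev go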
Either Go isn't correctly installed on the image or the GOROOT is wrong.
Put go tool dist banner and go tool dist env in your all.bash for clues.
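A sketch of what that debugging addition might look like inside all.bash (the placement and the surrounding script are assumptions):
# print the toolchain banner and the environment the dist tool sees
go tool dist banner
go tool dist env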

What's the difference between RUN and bash script in a dockerfile?

I've seen many dockerfiles include all build steps in a RUN statement, like:
RUN echo "Hello" && \
    cd /tmp && \
    mv a.txt b.txt && \
    ...
and so on...
My question is: what are the benefits/drawbacks of replacing these instructions with a single bash script that gives me syntax highlighting, loop capabilities, etc.?
Something like:
COPY ./script.sh /tmp
RUN bash /tmp/script.sh
and then
#!/bin/bash
echo "hello" ;
cd /tmp ;
mv a.txt b.txt ;
...
Thanks!
The primary difference is that when you COPY the bash script into the image it will be available for inspection in the running container, whereas the RUN command is a little more opaque. Putting your commands in a file like that is arguably more manageable for other reasons: changes in your VCS history will be a little more clear, and for longer or more complex scripts you will probably find it easier to format things cleanly with the script in a separate file rather than embedded in your Dockerfile in a RUN command.
Otherwise the result is the same (in both cases, you are executing the same set of commands), although the COPY and RUN will result in an extra image layer (vs. just the RUN by itself).
I guess running it as a shell script gives you more control.
For instance, you can use if-else statements to check whether a command has failed and provide a code path to handle it, whereas RUN is more straightforward: when the return code is not 0, it fails the build immediately.
Obviously the case you have there is a relatively simple one, so it would not have made a huge difference. The only impact I can see here is readability: someone would have to read the shell script to know what is happening, compared to having everything in a single file.
I guess it all comes down to using the right tool for the job. If it is a simple command and you don't need complex logic handling, then use RUN.
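For example, a script lets you recover from a failed command instead of aborting the whole image build; a sketch (the commands are illustrative, not from the question):
#!/bin/bash
set -e
if ! mv a.txt b.txt; then
    # handle the failure instead of letting the build die on a non-zero exit code
    echo "a.txt was missing, creating an empty b.txt instead" >&2
    touch b.txt
fi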

Travis CI: Build intermittently fails and log takes forever to load (never loads)

This is my build.
https://travis-ci.org/gogo/protobuf
It intermittently fails for some of the builds.
I think it is struggling with installing a protocol buffer version using wget, but I can't see the logs, since they take forever to load.
It would be great if Travis could tell me that it has failed to load the logs instead of just pretending to load them. Sorry, I don't know if that is really the case, but that is how it feels.
Also, I don't understand why this works some of the time and randomly fails. If the server is overloaded, put me in a queue; please don't fail when there is nothing wrong with the code.
Please help, I am new to Travis, so maybe I am just doing it wrong.
Some of the other builds with the same use of PROTOBUF_VERSION are successful and show some output from the final step of install-protobuf.sh (./configure --prefix=/home/travis && make -j2 && make install). So similar to you, I suspect that the wget step in install-protobuf.sh is what is failing in the failed builds.
I would suggest editing install-protobuf.sh so that you can better see what is going on in the travis-ci logs:
change the set command to set -ex
remove the -q option from your use of wget in:
wget -q https://github.com/google/protobuf/releases/download/v$PROTOBUF_VERSION/$basename.tar.gz
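For illustration, the top of install-protobuf.sh might then look something like this (the exact contents of the script, including the basename assignment, are assumptions; only the set line and the removed -q flag reflect the suggestions above):
#!/bin/sh
set -ex   # echo each command as it runs and abort on the first failure
basename=protobuf-$PROTOBUF_VERSION
wget https://github.com/google/protobuf/releases/download/v$PROTOBUF_VERSION/$basename.tar.gz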

Dockerfile build - possible to ignore error?

I've got a Dockerfile. When building the image, the build fails on this error:
automake: error: no 'Makefile.am' found for any configure output
Error build: The command [/bin/sh -c aclocal && autoconf && automake -a] returned a non-zero code: 1
which in reality is harmless. The library builds fine, but Docker stops the build once it receives this error. Is there any way I can instruct Docker to just ignore this?
Sure. Docker is just responding to the error codes returned by the RUN shell scripts in the Dockerfile. If your Dockerfile has something like:
RUN make
You could replace that with:
RUN make; exit 0
This will always return a 0 (success) exit code. The disadvantage here is that your image will appear to build successfully even if there are actual errors in the build process.
This might be of interest to those whose potential errors in their images are not harmless enough to go unnoticed and unlogged. (Also, not enough rep to comment, so posting this as an answer.)
As pointed out, the disadvantage of RUN make; exit 0 is that you never find out whether your build failed. Hence, rather use something like:
make test 2>&1 > /where/ever/make.log || echo "There were failing tests!"
Like this, you get notified via the Docker image build log, and you can see exactly what went wrong during make (or whatever else you execute; this is not restricted to make).
You can also use the standard bash error-ignoring idiom || true, which is handy if you are in the middle of a chain:
RUN <first stage> && <job that might fail> || true && <next stage>
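A concrete instance of that pattern (the package name and commands are purely illustrative):
RUN apt-get update && \
    apt-get install -y some-optional-package || true && \
    echo "continuing whether or not the optional install succeeded"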
