Is there any way to use hadolint for multiple Dockerfiles?

Hadolint is an awesome tool for linting Dockerfiles. I am trying
to integrate it into my CI, but I am struggling to run it over multiple Dockerfiles. Does anyone know what the syntax should look like? Here is how my directories look:
dir1/Dockerfile
dir2/Dockerfile
dir3/foo/Dockerfile
In gitlab-ci:
stage: hadolint
image: hadolint/hadolint:latest-debian
script:
- mkdir -p reports
- |
hadolint dir1/Dockerfile > reports/dir1.json \
hadolint dir2/Dockerfile > reports/dir2.json \
hadolint dir3/foo/Dockerfile > reports/dir3.json
But the sample above is not working.

As far as I found, hadolint can be given several Dockerfiles at once. So in my case:
- hadolint */Dockerfile > reports/all_reports.json
But the problem with this approach is that all reports end up in one file, which hampers maintenance and clarity.

If you want to keep all reports separate (one per top-level directory), you may want to rely on a small shell snippet.
I mean something like:
- |
find . -name Dockerfile -exec \
sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
Explanation:
find . -name Dockerfile finds all Dockerfiles under the current directory, recursively;
-exec sh -c '…' runs a subshell for each Dockerfile, setting:
$0 = "sh" (dummy value)
$1 = "{}" (the full, relative path of the Dockerfile), "{}" and \; being directly related to the find … -exec pattern;
src=${1#./} trims the path, replacing ./dir1/Dockerfile with dir1/Dockerfile
${src%%/*} extracts the top-level directory name (dir1/Dockerfile → dir1)
and | tee -a … copies the output, appending hadolint's output to the top-level directory's report file for each parsed Dockerfile (a plain > should be avoided here, as it would overwrite the report whenever a single top-level directory contains several Dockerfiles).
I have replaced the .json extension with .txt, as hadolint outputs plain text by default (JSON output would require running hadolint with -f json, if your version supports it).
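Putting the pieces together, a hedged version of the GitLab CI job from the question (the stage name, image, and reports/ directory come from the question, the find snippet from above; the job name and artifacts block are my additions so the reports get kept) might look like:
hadolint:
  stage: hadolint
  image: hadolint/hadolint:latest-debian
  script:
    - mkdir -p reports
    - |
      find . -name Dockerfile -exec \
        sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
  artifacts:
    paths:
      - reports/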

Related

How to find which step in Dockerfile added some path?

I have a Docker image which contains a file, say /usr/bin/foo. What's the easiest way to find out which step of the Dockerfile added that path? (Which I thought was equivalent to asking which layer of the Docker image that path comes from.)
I wrote a script which prints out all the paths in the image, prefixed by layer ID. It appears to work, but is quite slow:
#!/bin/bash
die() { echo 1>&2 "ERROR: $*"; exit 1; }
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT
img="$1"
[[ -n "$img" ]] || die "wrong arguments"
docker image save "$img" | (cd "$dir" && tar xf -) ||
die "failed extracting docker image $img"
# list every path in every layer tarball, prefixed with the layer directory name
(cd "$dir" && find . -name '*.tar' | while read -r f; do layer=$(echo "$f" | cut -d/ -f2); tar tf "$f" | sed -e "s/^/$layer:/"; done) ||
die "failed listing layers"
(It could be made faster if it didn't write anything to disk. The problem is that while tar tf - prints the paths in the TAR, it doesn't do the same for the nested layer.tar files. I am thinking I could use the Python tarfile module, but surely somebody else out there has done this already?)
However, I don't know how to translate the layer ID it gives me to a step in the Docker image. I thought I'd correlate it with the layer IDs reported by docker inspect:
docker image inspect $IMAGE | jq -r '.[].RootFS.Layers[]' | nl
But the layer ID which my script reports as containing the path, I can't find in the output of the above command. (Is that a consequence of BuildKit???)
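One way to correlate the two: the RootFS.Layers entries are "diff IDs", i.e. sha256 digests of the uncompressed layer tarballs, while the directory names in the saved image are something else entirely. So, assuming the classic docker save layout that the script above relies on (one layer.tar per directory; newer Docker versions may save an OCI layout instead), hashing each layer.tar should line up with them. A sketch:
cd "$dir" && for f in */layer.tar; do
  # each digest printed here should match one of the sha256:... entries
  # from: docker image inspect $IMAGE | jq -r '.[].RootFS.Layers[]'
  printf 'sha256:%s  %s\n' "$(sha256sum "$f" | cut -d' ' -f1)" "$f"
done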
In the end, I gave up on this whole approach. Instead I just made some educated guesses as to which Dockerfile line was probably creating that path, tested each guess by commenting it out (and all the lines after it), and soon I found the answer. Still, there must be a better way, surely? Ideally, what I'd like is something like a --contains-path= option to docker image history – which doesn't exist, but maybe there is something else which does the equivalent?
While dlayer does not have any search function built in, it is straightforward to implement by combining it with a Perl one-liner:
docker image save $IMAGE |
dlayer -n 999999 |
perl -ne 'chomp;$query=quotemeta("usr/bin/foo");$cmd=$_ if $_ =~ m/ [\$] /;print "$cmd\n\t$_\n" if m/ $query/;'
This will print something like:
13 MB $ /opt/bar/install.sh # buildkit
637 B usr/bin/foo
-n 999999 increases the limit on the number of file names output from the default of 100; otherwise the path will only be found if it is among the first 100 entries of that layer.
(I submitted a PR to add a built-in search function to dlayer, which removes the need for this one-line Perl script.)
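If you don't have dlayer available, it can presumably be installed with Go (the module path below is an assumption based on the project's GitHub repository):
# assumes dlayer's main package lives at the repository root
go install github.com/orisano/dlayer@latest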

Permanently change PATH in Dockerfile with dynamic value

I am using security scan software in my Dockerfile, and I need to add its bin folder to the PATH. The path contains a version component, so I do not know it until I have downloaded the software. My current progress is something like this:
1. Download the software:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
2. The desired folder is located at SAClientUtil/SAClientUtil.X.Y.Z/bin/ (X.Y.Z may vary from run to run). Get there using a find and cd combination and try to add it to the PATH:
RUN cd "$(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"; \
export PATH="$PATH:$PWD"; # doesn't work
It looks like the ENV command does not evaluate the parameter, so
ENV PATH $PATH:"echo $(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"
does not work either.
Any ideas on how to dynamically add a folder to the PATH during docker image build?
If you're pretty sure the zip file will contain only a single directory with that exact layout, you can rename it to something fixed.
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip \
&& unzip SAClientUtil.zip -d tmp \
&& mv tmp/SAClientUtil.* SAClientUtil \
&& rm -rf tmp SAClientUtil.zip
ENV PATH=/SAClientUtil/bin:${PATH}
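If you want the build to fail fast when that layout assumption breaks, a quick sanity check can follow (a sketch; appscan.sh is the entry point named in the question):
# command -v exits non-zero if appscan.sh is not on the PATH, failing the build
RUN command -v appscan.sh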
A simple solution would be to include a small wrapper script in your image, and then use that to run commands from the SAClientUtil directory. For example, if I have the following in saclientwrapper.sh:
#!/bin/sh
cmd=$1
shift
saclientpath=$(ls -d /SAClientUtil/SAClientUtil.*)
echo "got path: $saclientpath"
cd "$saclientpath"
exec "$saclientpath/bin/$cmd" "$#"
Then I can do this:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
COPY saclientwrapper.sh /saclientwrapper.sh
RUN sh /saclientwrapper.sh appscan.sh
And this will produce, when building the image:
STEP 6: RUN sh /saclientwrapper.sh appscan.sh
got path: /SAClientUtil/SAClientUtil.8.0.1374
COMMAND SYNTAX
appscan <command> [options]
ADDITIONAL COMMAND HELP
appscan help <command>
.
.
.
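The same wrapper also works at run time; with a hypothetical image tag of myimage, the following should print the same COMMAND SYNTAX help as in the build output above:
docker run --rm myimage sh /saclientwrapper.sh appscan.sh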

Run execlineb when container start fails. Docker for Windows

I'm trying to run a simple script inside a docker container after it starts. Initially, the previous developer decided to use s6 inside.
#!/usr/bin/execlineb -P
foreground { sleep 2 }
nginx
When I try to start it, I get this message:
execlineb: usage: execlineb [ -p | -P | -S nmin | -s nmin ] [ -q | -w | -W ] [ -c commandline ] script args
Looks like something is wrong with executing these scripts, or with execline.
I'm using Docker for Windows under Windows 10; however, if somebody else tries to build this container on Ubuntu (or any other Linux), everything is OK.
Can anybody help with this kind of problem?
Docker image: simple Alpine
After researching this "HUGE" problem, we found two ways to solve it. It is definitely a problem with special characters, like '\r' (Windows CRLF line endings).
Option 1, dos2unix:
Install dos2unix in your container (in the Dockerfile):
RUN apk --no-cache add \
dos2unix
Then run it against your sh scripts:
RUN for file in {PathToYourFiles}; do \
dos2unix "$file"; \
chmod a+xwr "$file"; \
done
enjoy your scripts.
Option 2, VS Code (or any text editor):
Change the 'End of Line Sequence' from CRLF to LF (in VS Code, via the line-endings selector in the bottom status bar).
enjoy your scripts.
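To confirm the diagnosis before building, you can check for carriage returns on the host (a sketch; myscript.sh is a placeholder name, and the $'\r' syntax assumes a bash shell):
file myscript.sh        # prints "... with CRLF line terminators" for affected files
grep -rlI $'\r' .       # lists text files containing a carriage return (-I skips binaries)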

Include directory of docker-compose files

How can I run docker-compose with the base docker-compose.yml plus a whole directory of docker-compose files?
Like if I had this directory structure:
parentdir
docker-compose.yml
folder1/
foo.yml
bar.yml
folder2/
foo.yml
other.yml
How can I specify which folder of manifests to run when running compose?
I hope I understood your question well.
You could use the -f flag:
docker-compose -f docker-compose1.yml up
Edit
To answer your comment: no, you can't point docker-compose at a directory with one command; the -f flag expects a file path, not a directory path.
What you could do is create a shell script like:
#!/bin/bash
# run docker-compose against every file matched by $DOCKER_PATH
DOCKERFILE_PATH=$DOCKER_PATH
for dockerfile in $DOCKERFILE_PATH   # intentionally unquoted so the glob in DOCKER_PATH expands
do
if [[ -f $dockerfile ]]; then
docker-compose -f "$dockerfile" up -d   # substitute whichever subcommand you need
fi
done
Call it like DOCKER_PATH='dockerfiles/*' ./script.sh, and it will execute docker-compose -f with every file in DOCKER_PATH.
My best option was to have a run.bash file in the base directory of my project.
I then put all my compose files in, say, a compose/ directory, and run compose with this command:
docker-compose $(./run.bash) up
run.bash:
#!/usr/bin/env bash
PROJECT_NAME='projectname' # need to set manually since it normally uses current directory as project name
DOCKER_PATH=$PWD/compose/*
MANIFESTS=' '
for dockerfile in $DOCKER_PATH
do
MANIFESTS="${MANIFESTS} -f $dockerfile"
done
MANIFESTS="${MANIFESTS} -p $PROJECT_NAME"
echo $MANIFESTS
You can pass multiple files to docker-compose by using -f for each file. For example, if you have N files, you can pass them as follows:
docker-compose -f file1 -f file2 ... -f fileN [up|down|pull|...]
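For example, with the directory structure from the question, combining the base file with everything in folder1 looks like:
docker-compose -f docker-compose.yml -f folder1/foo.yml -f folder1/bar.yml up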
If you have files in sub-directories and you want to pass them all to docker-compose recursively, you can use the following (the grep pattern matches both .yml and .yaml, since the question's files use .yml):
docker-compose $(for i in $(find . -type f | grep -E '\.ya?ml$')
do
echo -f $i
done
) [up|down|pull|...]

Dockerfile - Defining an ENV variable with a dynamic value

I want to update the PATH environment variable with a dynamic value. This is what I've tried so far in my Dockerfile:
...
ENV PATH '$(dirname $(find /opt -name "ruby" | grep -i bin)):$PATH'
...
But export shows that the command was not interpreted:
root@97287b22c251:/# export
declare -x PATH="\$(dirname \$(find /opt -name \"ruby\" | grep -i bin)):\$PATH"
I don't want to hardcode the value. Is it possible to achieve it?
Thanks
We can't do that, as it would be a huge security issue: it would mean you could end up executing commands through an environment variable, like this:
ENV PATH $(rm -rf /)
However, you can pass the information through a --build-arg (ARG) when building an image:
ARG DYNAMIC_VALUE
ENV PATH=${DYNAMIC_VALUE:-unknown}
RUN echo $PATH
and build an image with:
> docker build --build-arg DYNAMIC_VALUE=$(dirname $(find /opt -name "ruby" | grep -i bin)):$PATH .
Or, if you want to copy information from an existing env-var on the host;
> export DYNAMIC_VALUE=foobar
> docker build --build-arg DYNAMIC_VALUE .
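Either way, you can verify what ended up in the image afterwards (myimage being a hypothetical tag given at build time with -t myimage):
docker run --rm myimage sh -c 'echo "$PATH"'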
Not sure if something like this is what you are looking for... it slightly modifies what you already have. My main question would be: what are you attempting to accomplish with this portion?
'$(dirname $(find /opt -name "ruby" | grep -i bin)):$PATH'
Part of the problem could be the usage of single and double quotes and how they affect expansion.
FROM alpine:3.4
RUN PATH_TO_ADD=$(dirname $(find /opt -name "ruby" | grep -i bin)) || echo Error locating files
ENV PATH "$PATH:$PATH_TO_ADD"
