Using Revision Variable in Docker Build Step - docker

Trying to use the $(Rev:.r) variable in my docker build steps (version 1.*) for tagging and it doesn't seem to work. I always get
2019-01-14T21:42:24.4149933Z ##[error]invalid argument "wp/imagename:0.6$(rev:.r)" for "-t, --tag" flag: invalid reference format
2019-01-14T21:42:24.4160700Z ##[error]See 'docker build --help'.
2019-01-14T21:42:24.4274219Z ##[error]/usr/bin/docker failed with return code: 125
No variable substitution seems to be happening and it looks like it's running it through with the Qualify image name option and lower-casing the R. Can anyone else use the $(Rev:.r) variable?
It doesn't matter where I try using that variable; whether I put it in the Image Name option or the Arguments option, I get the same error.
-t wp/imagename:0.6$(Rev:.r)

You can't get just "the revision number" without parsing; it is not stored as a separate field anywhere. The $(Rev:.r) portion instructs Azure DevOps to come up with the first number that makes the build number unique (and, in that specific example, to put a dot in front of it). Only the final build number is available.
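For example (illustrative values, using the 0.6 prefix from the question), a Build number format of
0.6$(Rev:.r)
produces build numbers 0.6.1, 0.6.2, 0.6.3, ... as the revision counter increments to keep each build number unique.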
As a workaround, add $(Rev:.r) to the end of your build number format (if it is not already there), then add a PowerShell script task (it can be an inline PowerShell script) before the Docker task and put in this code:
# BUILD_BUILDNUMBER is the final build number, ending with the $(Rev:.r) value
$buildNumber = $Env:BUILD_BUILDNUMBER
$revision = $buildNumber.Substring($buildNumber.LastIndexOf('.') + 1)
# Expose the parsed revision to later tasks as $(revision)
Write-Host ("##vso[task.setvariable variable=revision;]$revision")
In your Docker task, use the $(revision) variable:
-t wp/imagename:0.6$(revision)

I've only been able to get it recognized in the Build number format section under Options.
If you are using this like a build number, why not just set the build number format there instead and then reference it using $(Build.BuildNumber)?
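For instance (a sketch using the image name from the question), set the Build number format under Options to
0.6$(Rev:.r)
and tag the image with
-t wp/imagename:$(Build.BuildNumber)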

Related

Using multiple Artifactory images for scan one at a time in a loop

I have a text file containing a list of images from Artifactory.
I have a shell script that runs a Black Duck scan on those images in a loop, but I am getting errors like "invalid reference format".
#!/usr/bin/sh
input="dockerimagesURL.txt"
while read dockerimagesURL
do
docker pull "$dockerimagesURL"
DockerImageID=$(docker images "$dockerimagesURL" --format '{{.ID}}')
sudo -S java -jar /home/dxc/Desktop/synopsis-detect-7.11.0/synopsys-detect-7.11.0.jar -- scan command continued.
done < $1
dockerimagesURL.txt file contains:
buildimages-docker-local.artifactory.com/docker-registry1:tag
buildimages-docker-local.artifactory.com/docker-registry2:tag
buildimages-docker-local.artifactory.com/docker-registry3:tag
buildimages-docker-local.artifactory.com/docker-registry4:tag
The above script is failing for multiple reasons:
invalid reference format
the docker pull is not happening
Assuming your full error message is
'docker-registry1:tag' is not a valid repository/tag: invalid reference format
'tag' is a placeholder. Replace it with a tag that really exists, e.g. 'latest' (the default).
buildimages-docker-local.artifactory.com/docker-registry1:latest
The error could still exist after that change if the repository is not valid/cannot be found!
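A minimal sketch of the loop with a check that each pull succeeded (the scan invocation is kept as a placeholder, as in the question):
#!/usr/bin/sh
input="dockerimagesURL.txt"
while read -r dockerimagesURL
do
    # Skip the image if the pull fails (e.g. bad tag or unreachable repository)
    if ! docker pull "$dockerimagesURL"; then
        echo "pull failed for $dockerimagesURL" >&2
        continue
    fi
    DockerImageID=$(docker images "$dockerimagesURL" --format '{{.ID}}')
    # sudo -S java -jar /home/dxc/Desktop/synopsis-detect-7.11.0/synopsys-detect-7.11.0.jar -- scan command continued.
done < "$input"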

Ansible Tower CLI pass Launch Parameters in one command without prompt

I am trying to launch an Ansible Tower CLI job through Jenkins, but I don't want the prompt that appears on Ansible Tower. I want to pass those parameters in the same command so that the prompt is not required.
I have tried:
tower-cli job launch --job-template=33 -e "param1" -e "param2"
This is the error I get:
Error: failed to pass some of the extra variables
According to the Ansible Tower CLI documentation, the parameter -e is wrong; you need to use --extra-vars. This differs from the ansible-playbook command. So an easy example is
tower-cli job launch --job-template 1 --extra-vars '{"x":"y"}'
Be aware that you pass all variables in one argument; --extra-vars expects JSON or YAML format.
Also be aware that the given job template MUST be configured to prompt for extra variables. Otherwise the argument is ignored on the Ansible Tower side.
Also, not part of the question but good advice: if your Jenkins job needs to wait for the result, add --monitor to the tower-cli command. The CLI then waits for the job to finish, and the stage can "fail" if there is a problem.
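Putting it together for the template ID from the question (the variable names and values are placeholders):
tower-cli job launch --job-template 33 \
  --extra-vars '{"param1": "value1", "param2": "value2"}' \
  --monitor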

Is there any way to execute docker push command from the dockerfile itself?

I tried running the docker push command from the Dockerfile, but it throws an error:
The command returned a non-zero code: 127
No there isn't. Dockerfile is an image build description format, it is not a general purpose scripting language.
Write a wrapper script if you need to do something that requires a scripting language.
No, these are completely different things: a Dockerfile just specifies the image you intend to build; what happens with the image after (e.g. uploading it into a repository) must be handled by something else (e.g. a script).
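For example, a minimal wrapper script along those lines (the image name and registry are placeholders):
#!/bin/sh
set -e
# Build the image from the Dockerfile in the current directory
docker build -t registry.example.com/myapp:latest .
# Push it once the build has succeeded
docker push registry.example.com/myapp:latest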

How do I capture output from one Rundeck step to be used in a later step?

I'm attempting to build, launch, and link a set of docker containers using Rundeck. In short (for those not familiar with docker), when an image is launched, it returns a container ID. I would like to use this container ID in the launching of subsequent jobs.
When run from the command line, it would look something like this (example only!!):
# docker run -Pd 23ABCD45
34DEF123
# docker run -Pd --link 34DEF123:host1 ABC123EF
321CB456
(note the use of the first return value in the second command line)
At this point, there would be two containers running. The second would be linked to the first by the --link option, and it would be addressable using the hostname host1 from inside the second container. To be fair, docker generates (or may be given) a specific container name which can be used in place of the container id. I would prefer to use the container ID to avoid the hassle of having to create/track unique names.
I would like to be able to capture the output of the first command (the container ID) so that it can be reused in the second command. Is this possible?
Edit: These images are being used for testing immediately following a "docker build" (which also outputs a similar ID I would like to include in my chain) and might be followed by "docker rm" and "docker rmi" commands, so there are a number of uses for capturing this type of output and carrying it through a related set of operations. This is not just about launching/linking containers.
There is no direct Rundeck feature that lets you pass the output of one job to another job as an input, but there are workarounds I've tried in the past, and I've settled on the second approach.
1. Use a file to pass data
Save the ID/output into a temp file in the first job.
The second job reads that file.
Things might go wrong since you depend on a file, but careful code can mitigate that (a sketch of this approach follows the output below).
2. Call two jobs using Rundeck CLI from another job
This is the approach I am using.
JobA prints out two random numbers:
echo $RANDOM;echo $RANDOM
JobB prints out the second random number produced by JobA, which is passed in as an option named "number":
echo "$RD_OPTION_NUMBER is the number JobB received"
JobC calls JobA, saves the last line of its output to a variable, and passes it to JobB:
#!/bin/bash
# Run JobA and keep only the last line of its output (the second random number)
OUTPUT_FROM_JOB_A=`run -f --id <ID of JobA> | tail -n 1`
# Pass that value to JobB as the "number" option
run -f --id <ID of JobB> -- -number $OUTPUT_FROM_JOB_A
Output:
[5394] execution status: succeeded
Job execution started:
[5395] JobB <https://hostname:4443/project/Project/execution/show/5395>
6186 is the number JobB received
[5395] execution status: succeeded
This is just a primitive code sample; you can do a lot with Python's subprocess module, or just use bash.
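As referenced above, a minimal sketch of the first workaround (file-based passing; the file path is a placeholder, the image IDs are the ones from the question):
# Step 1: launch the first container and save its ID to a shared file
docker run -Pd 23ABCD45 > /tmp/container_id

# Step 2 (a later step or job on the same node): read the ID and link to it
CONTAINER_ID=$(cat /tmp/container_id)
docker run -Pd --link "$CONTAINER_ID":host1 ABC123EF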

Disable cache for specific RUN commands

I have a few RUN commands in my Dockerfile that I would like to run with --no-cache each time I build a Docker image.
I understand that docker build --no-cache will disable caching for the entire Dockerfile.
Is it possible to disable cache for a specific RUN command?
There's always an option to insert some meaningless and cheap-to-run command before the region you want to disable cache for.
As proposed in this issue comment, one can add a build argument block (name can be arbitrary):
ARG CACHEBUST=1
before such a region, and modify its value on each run by adding --build-arg CACHEBUST=$(date +%s) as a docker build argument (the value can also be arbitrary; here it is the current timestamp, to ensure its uniqueness across runs).
This will, of course, disable cache for all following blocks too, as hash sum of the intermediate image will be different, which makes truly selective cache disabling a non-trivial problem, taking into account how docker currently works.
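A minimal sketch of how the two pieces fit together (the image name and the RUN commands are placeholders):
# Dockerfile
FROM alpine
RUN echo "this layer stays cached"
ARG CACHEBUST=1
RUN echo "this layer and everything after it is rebuilt when CACHEBUST changes"

# build command
docker build --build-arg CACHEBUST=$(date +%s) -t myimage .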
Use
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
before the RUN line you want to always run. This works because ADD will always fetch the file/URL, and the above URL generates random data on each request; Docker then compares the result to see if it can use the cache.
I have also tested this and it works nicely, since it does not require any additional docker command-line arguments and also works from a docker-compose.yaml file :)
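In context, a minimal sketch (the base image and RUN commands are placeholders):
FROM alpine
RUN echo "cached as usual"
# The random URL response changes on every build, so the cache is broken here
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
RUN echo "re-executed on every build"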
If your goal is to include the latest code from Github (or similar), one can use the Github API (or equivalent) to fetch information about the latest commit using an ADD command.
docker build will always fetch a URL given in an ADD command, and if the response is different from the one received the last time docker build ran, it will not use the subsequent cached layers.
eg.
ADD "https://api.github.com/repos/username/repo_name/commits?per_page=1" latest_commit
RUN curl -sLO "https://github.com/username/repo_name/archive/main.zip" && unzip main.zip
As of February 2016 it is not possible.
The feature has been requested on GitHub.
Not directly, but you can divide your Dockerfile into several parts, build an image from the first part, then use FROM thisimage at the beginning of the next Dockerfile, and build that image with or without caching.
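A minimal sketch of that split (the file names, packages, and build steps are placeholders):
# Dockerfile.base, built once with: docker build -t thisimage -f Dockerfile.base .
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential

# Dockerfile.app, built with or without cache: docker build --no-cache -t myapp -f Dockerfile.app .
FROM thisimage
COPY . /src
RUN make -C /src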
The feature was added a week ago:
ARG FOO=bar
FROM something
RUN echo "this won't be affected if the value of FOO changes"
ARG FOO
RUN echo "this step will be executed again if the value of FOO changes"
FROM something-else
RUN echo "this won't be affected because this stage doesn't use the FOO build-arg"
https://github.com/moby/moby/issues/1996#issuecomment-550020843
Building on Vladislav's solution above, I used in my Dockerfile
ARG CACHEBUST=0
to invalidate the build cache from that point on.
However, instead of passing a date or some other random value, I call
docker build --build-arg CACHEBUST=`git rev-parse ${GITHUB_REF}` ...
where GITHUB_REF is a branch name (e.g. main) whose latest commit hash is used. That means that docker’s build cache is being invalidated only if the branch from which I build the image has had commits since the last run of docker build.
I believe that this is a slight improvement on steve's answer, above:
RUN git clone https://sdk.ghwl;erjnv;wekrv;qlk#gitlab.com/your_name/your_repository.git
WORKDIR your_repository
# Calls for a random number to break the caching of the git clone
# (https://stackoverflow.com/questions/35134713/disable-cache-for-specific-run-commands/58801213#58801213)
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
RUN git pull
This uses the Docker cache of the git clone, but then runs an uncached update of the repository.
It appears to work, and it is faster; many thanks to steve for providing the underlying principles.
Another quick hack is to write some random bytes before your command
RUN head -c 5 /dev/random > random_bytes && <run your command>
This writes out 5 random bytes, which will force a cache miss.
