Comparing local and remote images built in Docker

I am trying to write a script for simple tagging docker images based on the contents of Dockerfile, basically something like "auto-versioning".
The current process is:
1. Check the latest version in the Docker repository (I am using AWS ECR)
2. Get the digest for that image
3. Build the image from the Dockerfile locally
4. Compare the digests of the remote and local images
Now here is the problem: the locally built image doesn't have a RepoDigest to compare against, because it hasn't been pushed to the repository yet.
Here's the error:
Template parsing error: template: :1:2: executing "" at <index .RepoDigests 0>: error calling index: index out of range: 0
The other approach I could think of is pulling the remote image, building the local one, and comparing layers: if the layers are identical, no action; if they differ, it is a new version and I can issue a new tag and push the image. I am not sure whether layers are reliable for this purpose.
Another possible approach would be building the image with some temporary tag, e.g. pointer, pushing it anyway, and if it is identical to the latest version, not issuing a new version and stopping there. That would mean there would always be a pointer tag somewhere in the repository. (I am also thinking that this could serve as a definition of the latest tag?)
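As an aside, the template error itself can be sidestepped by guarding the index: a locally built, never-pushed image simply has an empty RepoDigests list. A minimal sketch (the image tag here is a hypothetical local-only tag, following the question's naming):

```shell
#!/bin/sh
# Guard the Go template so an image with no RepoDigests (one that was
# built locally and never pushed/pulled) yields an empty string instead
# of an "index out of range" template error.
temporaryImage="myimage:build"   # hypothetical local-only tag

buildDigest=$(docker inspect \
    --format '{{if .RepoDigests}}{{index .RepoDigests 0}}{{end}}' \
    "$temporaryImage" 2>/dev/null) || buildDigest=""

if [ -z "$buildDigest" ]; then
    echo "no repo digest yet (image not pushed)"
else
    echo "repo digest: $buildDigest"
fi
```

With the guard in place, an empty result just means "not pushed yet", which a versioning script can treat as "new version needed".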
This is the script that I am using for building the images:
#!/usr/bin/env bash
repository=myrepo
path=mypath.dkr.ecr.ohio-1.amazonaws.com/${repository}/

set -e
set -o pipefail

if [[ $# -gt 0 ]]; then
    if [[ -d "$1" ]]; then
        # numeric tags only; '\+' allows multi-digit versions (10, 11, ...)
        latest=$(aws ecr describe-images --repository-name "${repository}/$1" --output text \
            --query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags[*]' \
            | tr '\t' '\n' | grep -e '^[0-9]\+$' | tail -1) || true
        if [[ -z "$latest" ]]; then
            latest=0
        fi
    else
        echo "$1 is not a directory"
        exit 1
    fi
else
    echo "Provide build directory"
    exit 1
fi

image="$path$1"
temporaryImage="$image:build"

echo "Building $image..."
docker build -t "${temporaryImage}" "$1"

if [[ ${latest} -gt 0 ]]; then
    latestDigest=$(aws ecr describe-images --repository-name "${repository}/$1" \
        --image-ids "imageTag=${latest}" | jq -r '.imageDetails[0].imageDigest')
    buildDigest=$(docker inspect --format='{{index .RepoDigests 0}}' "${temporaryImage}")
    # RepoDigests entries have the form <repository>@<digest>
    if [[ "$image@$latestDigest" == "$buildDigest" ]]; then
        echo "The desired version of the image is already present in the remote repository"
        exit 1
    fi
    version=$((latest+1))
else
    version=1
fi

versionedImage="$image:$version"
latestImage="$image:latest"
devImage="$image:dev"
devVersion="$image:$version-dev"

docker tag "${temporaryImage}" "${versionedImage}"
docker tag "${versionedImage}" "${latestImage}"
docker push "${versionedImage}"
docker push "${latestImage}"
echo "Image '$versionedImage' pushed successfully!"

docker build -t "${devImage}" "$1/dev/"
docker tag "${devImage}" "${devVersion}"
docker push "${devImage}"
docker push "${devVersion}"
echo "Development image '$devImage' pushed successfully!"

Related

Docker volume: rename or copy operation

As per documentation Docker volumes are advertised this way:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker.
But if they are so good, why are there no operations to manage them, such as copy or rename?
The command:
docker volume --help
gives only these options:
Usage: docker volume COMMAND
Manage volumes
Commands:
create Create a volume
inspect Display detailed information on one or more volumes
ls List volumes
prune Remove all unused local volumes
rm Remove one or more volumes
The documentation lists no other commands, nor any workarounds for copy or rename functionality.
I would like to rename currently existing volume and create another (blank) in place of the originally named volume and populate it with the new data for test.
After doing my test I may want (or not) to remove the newly created volume and rename the other one to its previous (original) name to restore the volume setup as it was before.
I would prefer not to create a backup of the original volume that I want to rename. Renaming is good enough for me and much faster than creating a backup and restoring from it.
Editing the docker-compose file and changing the name of the volume there is something I would like to avoid as well.
Is there any workaround that can work for renaming of a volume?
Can low-level manual management from the shell, targeting the Docker Root Dir (/var/lib/docker and its volumes sub-directory), be a solution, or may that approach lead to Docker daemon data inconsistency?
Not really an answer, but I'll post this copy example because I couldn't find one before, and searching for it led me to this question.
Docker suggests --volumes-from for backup purposes here.
For offline migration (with the container stopped) I don't see the point in using --volumes-from, so I just used an intermediate container with both volumes mounted and a copy command.
To finish the migration, a new container can use the new volume.
Here's a quick test
Prepare a volume prova
docker run --name myname -d -v prova:/usr/share/nginx/html nginx:latest
docker exec myname touch /usr/share/nginx/html/added_file
docker stop myname
Verify the volume has nginx data + our file added_file
sudo ls /var/lib/docker/volumes/prova/_data
Output:
50x.html added_file index.html
Migrate the data to volume prova2
docker run --rm \
-v prova:/original \
-v prova2:/migration \
ubuntu:latest \
bash -c "cp -R /original/* /migration/"
Verify the new volume has the same data
sudo ls /var/lib/docker/volumes/prova2/_data
Output:
50x.html added_file index.html
Run a new container with the migrated volume:
docker run --name copyname -d -v prova2:/usr/share/nginx/html nginx:latest
Verify the new container sees the migrated data at the original volume mount point:
docker exec copyname ls -al /usr/share/nginx/html
For future searchers, I made a script that copies a volume, based on @Lennonry's example. Here it is: https://github.com/KOYU-Tech/docker-volume-copy
The script itself, for posterity:
#!/bin/bash
if (( $# < 2 )); then
    echo ""
    echo "No arguments provided"
    echo "Use command example:"
    echo "./dcv.sh OLD_VOLUME_NAME NEW_VOLUME_NAME"
    echo ""
    exit 1
fi

OLD_VOLUME_NAME="$1"
NEW_VOLUME_NAME="$2"

echo "== From '$OLD_VOLUME_NAME' to '$NEW_VOLUME_NAME' =="

# returns 1 if the volume exists, 0 otherwise
function isVolumeExists {
    local isOldExists=$(docker volume inspect "$1" 2>/dev/null | grep '"Name":')
    local isOldExists=${isOldExists#*'"Name": "'}
    local isOldExists=${isOldExists%'",'}
    local isOldExists=${isOldExists##*( )}
    if [[ "$isOldExists" == "$1" ]]; then
        return 1
    else
        return 0
    fi
}

# check that the old volume exists
isVolumeExists ${OLD_VOLUME_NAME}
if [[ "$?" -eq 0 ]]; then
    echo "Volume $OLD_VOLUME_NAME doesn't exist"
    exit 2
fi

# check whether the new volume exists; create it if not
isVolumeExists ${NEW_VOLUME_NAME}
if [[ "$?" -eq 0 ]]; then
    echo "creating '$NEW_VOLUME_NAME' ..."
    docker volume create ${NEW_VOLUME_NAME} 2>/dev/null 1>/dev/null
    isVolumeExists ${NEW_VOLUME_NAME}
    if [[ "$?" -eq 0 ]]; then
        echo "Cannot create new volume"
        exit 3
    else
        echo "OK"
    fi
fi

# the most important part: data migration
docker run --rm --volume ${OLD_VOLUME_NAME}:/source --volume ${NEW_VOLUME_NAME}:/destination \
    ubuntu:latest bash -c "echo 'copying volume ...'; cp -R /source/* /destination/"
if [[ "$?" -eq 0 ]]; then
    echo "Done successfully 🎉"
else
    echo "Some error occurred 😭"
fi

How to check if a Docker image with a specific tag exist on registry?

I want to check if a Docker image with a specific tag exists on a registry.
I saw this post:
how-to-check-if-a-docker-image-with-a-specific-tag-exist-locally
But that handles images on the local system.
How can I use docker image inspect (or other commands) to check whether an image with a specific tag exists on a remote registry?
I found a way to do it without pulling:
curl -X GET http://my-registry/v2/image_name/tags/list
where:
my-registry - the registry name
image_name - the image name I am searching for
The result lists all tags of that image in the registry.
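Building on that endpoint, here is a small sketch that checks for one specific tag; the registry host, image name, and tag below are placeholders, and a private registry would additionally need an auth token header:

```shell
#!/bin/sh
# Query the registry's v2 tag list and test for one tag with jq.
# REGISTRY, IMAGE and TAG are hypothetical placeholders.
REGISTRY="my-registry"
IMAGE="image_name"
TAG="1.2.3"

if curl -fsS "https://${REGISTRY}/v2/${IMAGE}/tags/list" \
    | jq -e --arg t "$TAG" '.tags | index($t)' >/dev/null; then
    echo "tag exists"
else
    echo "tag not found"
fi
```

jq's -e flag sets a non-zero exit status when index($t) returns null, i.e. when the tag is absent from the list.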
Another possibility is to use docker pull - if the exit code is 1, it doesn't exist. If the exit code is 0, it does exist.
docker pull <my-registry/my-image:my-tag>
echo $? # print exit code
Disadvantage: If the image actually exists (but not locally), it will pull the whole image, even if you don't want to. Depending on what you actually want to do and achieve, this might be a solution or waste of time.
There is docker search, but it only works with Docker Hub. A universal solution would be a simple shell script using docker pull:
#!/bin/bash
function check_image() {
    # try to pull the image
    docker pull "$1" >/dev/null 2>&1
    # save the exit code
    exit_code=$?
    if [ $exit_code = 0 ]; then
        # remove the pulled image again
        docker rmi "$1" >/dev/null 2>&1
        echo "Image $1 exists!"
    else
        echo "Image $1 does not exist :("
    fi
}
check_image hello-world:latest
# will print 'Image hello-world:latest exists!'
check_image hello-world:nonexistent
# will print 'Image hello-world:nonexistent does not exist :('
The downsides of the above are slow speed and the free-space requirement for pulling an image.
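A lighter-weight option, assuming a reasonably recent Docker CLI (older versions required DOCKER_CLI_EXPERIMENTAL=enabled), is docker manifest inspect, which asks the registry for the manifest without downloading any layers and exits non-zero when the tag does not exist:

```shell
#!/bin/sh
# Check for a tag by fetching only the manifest, not the layers.
# The image reference is just an example.
if docker manifest inspect hello-world:latest >/dev/null 2>&1; then
    echo "tag exists"
else
    echo "tag not found"
fi
```

This avoids both the pull time and the disk space, at the cost of requiring registry credentials to be configured for docker itself.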
If you are using AWS ECR, you can use the solution provided here: https://gist.github.com/outofcoffee/8f40732aefacfded14cce8a45f6e5eb1
This uses the AWS CLI to query ECR and will use whatever credentials you have configured. This could make things easier for you, as you will not need to worry about separate credentials for this request if you are already using them for AWS.
The solution from the gist is copied here:
#!/usr/bin/env bash
# Example:
# ./find-ecr-image.sh foo/bar mytag
if [[ $# -lt 2 ]]; then
    echo "Usage: $( basename $0 ) <repository-name> <image-tag>"
    exit 1
fi

IMAGE_META="$( aws ecr describe-images --repository-name=$1 --image-ids=imageTag=$2 2> /dev/null )"

if [[ $? == 0 ]]; then
    IMAGE_TAGS="$( echo ${IMAGE_META} | jq '.imageDetails[0].imageTags[0]' -r )"
    echo "$1:$2 found"
else
    echo "$1:$2 not found"
    exit 1
fi

Why is wc -l returning 0 in a sh step subshell in Jenkins/groovy

I have a Jenkins script that looks like this:
stage ("Build and Deploy") {
    steps {
        script {
            def statusCode = sh(script:"""ssh ${env.SERVER_NAME} << EOF
cd ${env.LOCATION}
git clone -b ${env.GIT_BRANCH} ${env.GIT_URL} ${env.FOLDER}
cd ${env.FOLDER}
... some other stuff goes here but isn't relevant ...
sudo docker-compose up -d --build
if [ ! \$(sudo docker container ls -f "name=config-provider-*" | wc -l ) -eq 4 ]
then
    exit 1
fi
EOF
""", returnStatus:true).toString().trim()
            if (statusCode == "1") {
                error("At least one container failed to start")
            }
        }
    }
}
What I want is for the script to exit with code 1 if the number of running containers is not equal to 3 (wc -l == 4 including the header), but the if statement evaluates true and exits with code 1 even though I know the containers are running successfully.
I have tried
echo sh(script: """ssh ${env.SERVER_NAME} << EOF
echo \$(sudo docker container ls -f "name=config-provider-*" | wc -l)
EOF
""", returnStdout:true).toString()
and
echo sh(script: """ssh ${env.SERVER_NAME} << EOF
echo \$(sudo docker container ls -f "name=config-provider-*")
EOF
""", returnStdout:true).toString()
The latter output 4 lines in Jenkins, showing all of the running containers as expected, but the former, which includes "| wc -l", returned and printed 0 in Jenkins.
I have reproduced the steps of this script manually, line by line from start to finish, and it works as intended when not run from within Jenkins.
Additionally, manually running the command:
[ ! $(sudo docker container ls -f "name=config-provider-*" | wc -l ) -eq 4 ] && echo failed
echoes nothing, and the following command returns an output of 4, which is expected.
echo $(sudo docker container ls -f "name=config-provider-*" | wc -l )
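This excerpt contains no accepted explanation, so treat the following as an assumption: with an unquoted heredoc delimiter (<< EOF), the local shell running the sh step expands \$(...) before the text is ever sent over ssh. On the Jenkins agent there are no matching containers, the docker command fails with empty output, wc -l over nothing is 0, and the remote host receives a pre-baked [ ! 0 -eq 4 ]. Quoting the delimiter (<< 'EOF') defers expansion to the remote host. A minimal local demonstration, no ssh required:

```shell
#!/bin/sh
# With an unquoted delimiter, $(...) is expanded by the shell reading
# the heredoc (here: locally, before cat ever sees the text).
expanded=$(cat << EOF
$(printf 'one line\n' | wc -l)
EOF
)
echo "unquoted delimiter: $expanded"

# With a quoted delimiter the text is passed through verbatim; over ssh
# the command substitution would then run on the remote host instead.
literal=$(cat << 'EOF'
$(printf 'one line\n' | wc -l)
EOF
)
echo "quoted delimiter: $literal"
```

The first echo prints the locally computed count; the second prints the untouched $(...) text, which is what you would want forwarded to the remote side.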

Github Actions workflow fails when running steps in a container

I've just started setting up a GitHub Actions workflow for one of my projects. I attempted to run the workflow steps inside a container with this workflow definition:
name: TMT-Charts-CI
on:
  push:
    branches:
      - master
      - actions-ci
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://alpine/helm:2.13.0
    steps:
      - name: Checkout Code
        uses: actions/checkout@v1
      - name: Validate and Upload Chart to Chart Museum
        run: |
          echo "Hello, world!"
          export PAGER=$(git diff-tree --no-commit-id --name-only -r HEAD)
          echo "Changed Components are => $PAGER"
          export COMPONENT="NOTSET"
          for CHANGE in $PAGER; do ENV_DIR=${CHANGE%%/*}; done
          for CHANGE in $PAGER; do if [[ "$CHANGE" != .* ]] && [[ "$ENV_DIR" == "${CHANGE%%/*}" ]]; then export COMPONENT="$CHANGE"; elif [[ "$CHANGE" == .* ]]; then echo "Not a Valid Dir for Helm Chart"; else echo "Only one component per PR should be changed" && exit 1; fi; done
          if [ "$COMPONENT" == "NOTSET" ]; then echo "No component is changed!" && exit 1; fi
          echo "Initializing Component => $COMPONENT"
          echo $COMPONENT | cut -f1 -d"/"
          export COMPONENT_DIR="${COMPONENT%%/*}"
          echo "Changed Dir => $COMPONENT_DIR"
          cd $COMPONENT_DIR
          echo "Install Helm and Upload Chart If Exists"
          curl -L https://git.io/get_helm.sh | bash
          helm init --client-only
But the workflow fails, stating that the container stopped immediately.
I have tried many images, including the "alpine:3.8" image described in the official documentation, but the container still stops.
According to Workflow syntax for GitHub Actions, in the Container section: "A container to run any steps in a job that don't already specify a container." My assumption is that the container would be started and the steps would run inside the Docker container.
We can achieve this by making a custom Docker image. GitHub runners stop the running container after executing the entrypoint command, so I made a Docker image whose entrypoint keeps the container alive, so the container doesn't die after it starts.
Here is the custom Dockerfile (https://github.com/rizwan937/Helm-Image).
You can publish this image to Docker Hub and use it in the workflow file like this:
container:
  image: docker://rizwan937/helm
You can add this entrypoint to any Docker image so that it remains alive for further step execution.
This is a temporary solution; if anyone has a better one, let me know.
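The linked Dockerfile is not reproduced in the thread; a minimal sketch of the idea (base image and entrypoint are assumptions, not the author's exact file) is an image whose entrypoint blocks forever, so the job container stays up while the steps run inside it:

```dockerfile
# Hypothetical reconstruction of the keep-alive image, not the author's
# exact Dockerfile: the entrypoint never exits, so the container stays up
# for the duration of the job's steps.
FROM alpine/helm:2.13.0
ENTRYPOINT ["tail", "-f", "/dev/null"]
```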

Docker Push Results in an Error

I'm trying to push an image to a repository using a Travis CI job, as below:
after_success:
  - if [ $TRAVIS_BRANCH == "master" ]; then
      docker login -u $DOCKER_REGISTRY_USERNAME -p $DOCKER_REGISTRY_PASSWORD $DOCKER_REGISTRY_URL;
      echo "Pushing image $DOCKER_APP_NAME to repository $DOCKER_REGISTRY_URL";
      docker push $DOCKER_APP_NAMEUUU;
    fi
  - bash <(curl -s https://codecov.io/bash)
Assume that these variables are resolved correctly; however, the image does not seem to be pushed to the remote repository. Here is what I see in the build logs:
0.52s$ if [ $TRAVIS_BRANCH == "master" ]; then docker login -u $DOCKER_REGISTRY_USERNAME -p $DOCKER_REGISTRY_PASSWORD $DOCKER_REGISTRY_URL; echo "Pushing image $DOCKER_APP_NAME to repository $DOCKER_REGISTRY_URL"; docker push $DOCKER_APP_NAMEUUU; fi
Login Succeeded
Pushing image repo.treescale.com/joesan/inland24/plant-simulator to repository
"docker push" requires exactly 1 argument(s).
See 'docker push --help'.
Usage: docker push [OPTIONS] NAME[:TAG]
Push an image or a repository to a registry
So what is the problem here?
The error message is pretty specific: docker push requires exactly one argument.
Based on the limited information available, I'd say that $DOCKER_APP_NAMEUUU doesn't resolve to any value (did you misspell $DOCKER_APP_NAME?).
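To see why the variable expands to nothing: the shell reads the longest legal identifier after $, so $DOCKER_APP_NAMEUUU looks up a (nonexistent) variable literally named DOCKER_APP_NAMEUUU. Braces delimit the name if the intent really was to append UUU. A quick sketch with a made-up value:

```shell
#!/bin/sh
# The shell parses the longest valid variable name after '$', so
# $DOCKER_APP_NAMEUUU refers to an (unset) variable DOCKER_APP_NAMEUUU.
DOCKER_APP_NAME="repo.example.com/org/app"   # hypothetical value

echo "bare:   [$DOCKER_APP_NAMEUUU]"     # empty: unset variable
echo "braced: [${DOCKER_APP_NAME}UUU]"   # value plus the literal UUU
```

In the Travis snippet, `docker push ${DOCKER_APP_NAME}` (or simply fixing the stray `UUU`) gives docker push its missing argument.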
