I have to list the Docker container images published in a certain project, but I cannot find an appropriate API using the gcloud CLI tool. Is this possible?
Is there any other solution to list the container images from this private container registry in my Google project?
You can use "gcloud docker search <hostname>/<your-project-id>" to list the images. The hostname should be "gcr.io", "us.gcr.io", or whatever host your images are created under. Note that you have to iterate through all possible hosts to find all images under the project. Also, this method only lists the repositories; it does not list tags or manifests.
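For example, a minimal loop over the common registry hosts could look like this (just a sketch; it assumes the legacy gcloud docker wrapper is available in your SDK version, and your-project-id is a placeholder):
#!/bin/bash
# Search every common GCR host for repositories under the project.
for HOST in gcr.io us.gcr.io eu.gcr.io asia.gcr.io; do
    gcloud docker search "${HOST}/your-project-id"
done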
You can also use the Registry API directly, which returns more information. Use the script below as a starting guide:
#!/bin/bash
HOSTS="gcr.io us.gcr.io eu.gcr.io asia.gcr.io"
PROJECT=your-project-id

# Fetch the tags/list document for one repository path and print the raw JSON.
function search_gcr() {
    local fullpath=""
    local host=$1
    local project=$2
    if [[ -n $3 ]]; then
        fullpath=${3}
    fi
    local result
    result=$(curl -u _token:"$(gcloud auth print-access-token)" \
        --fail --silent --show-error \
        "https://${host}/v2/${project}${fullpath}/tags/list")
    if [[ -n $result ]]; then
        printf '%s' "$result"
    fi
}

# Recurse through child repositories, printing manifests and tags at each level.
function recursive_search_gcr() {
    local host=$1
    local project=$2
    local repository=$3
    local result
    result=$(search_gcr "$host" "$project" "$repository")
    if [[ -z $result ]]; then
        echo "Not able to curl: https://${host}/v2/${project}${repository}/tags/list"
        return
    fi
    local children
    children="$(python3 - <<EOF
import json
obj = json.loads('$result')
print(' '.join(obj.get('child', [])))
EOF
)"
    for child in $children; do
        recursive_search_gcr "$host" "$project" "${repository}/${child}"
    done
    local manifests
    manifests="$(python3 - <<EOF
import json
obj = json.loads('$result')
print(' '.join(obj.get('manifest', [])))
EOF
)"
    echo "Repository ${host}/${project}${repository}:"
    echo "  manifests:"
    for manifest in $manifests; do
        echo "    $manifest"
    done
    echo
    local tags
    tags="$(python3 - <<EOF
import json
obj = json.loads('$result')
print(' '.join(obj.get('tags', [])))
EOF
)"
    echo "  tags:"
    for tag in $tags; do
        echo "    $tag"
    done
    echo
}

for HOST in $HOSTS; do
    recursive_search_gcr "$HOST" "$PROJECT" ""
done
Use the "gcloud container images" command to find and interact with images in Google Container Registry. For example, this would list all container images in a project called "my-project":
gcloud container images list --repository=gcr.io/my-project
Full documentation is at https://cloud.google.com/container-registry/docs/managing
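If you also want the tags and digests for a specific image, there is gcloud container images list-tags as well; the image name below is a placeholder:
# List tags and digests for one image in the registry.
gcloud container images list-tags gcr.io/my-project/my-image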
I'm doing a couple of experiments for a Kubernetes-based local dev environment and for that I'm exporting my local Docker registry credentials like this:
$ kubectl create secret generic -n default regcred \
--from-file=.dockerconfigjson=/home/user/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
This works fine for me (Linux without a desktop environment), but fails for my colleagues using any sort of credentials store, in particular those on Windows/WSL2. Their .docker/config.json files do not contain credentials, but instead a reference to a credsStore called desktop.exe, which I can only assume to be "Docker Desktop".
Is there a way I could extract credentials from the Windows credential store (mostly) automatically? It's of course OK to make the person executing the script confirm credential store access, but the remainder of the process should ideally be automated.
In the end, it hasn't been all that difficult. I've written a Bash script (also requires jq) which extracts credentials from the credentials store and combines them with the original .docker/config.json.
#!/usr/bin/env bash
set -ue

# Which credential helper (if any) is configured?
CREDHELPER=$(jq -r .credsStore < "${HOME}/.docker/config.json")

# jq filter: starts as the identity and is extended per registry below.
STR=.

if [[ -n "$CREDHELPER" && "$CREDHELPER" != "null" ]]; then
    CRED_BINARY=docker-credential-$CREDHELPER
    # All registries known to the credential helper.
    REGS=$($CRED_BINARY list | jq -r 'keys | join(" ")')
    for REG in $REGS; do
        # Fetch username and secret for this registry and base64-encode
        # them the way the "auth" field in config.json expects.
        CRED=$(echo "$REG" | $CRED_BINARY get | jq -rj '"\(.Username):\(.Secret)"' | base64 -w 0)
        STR="$STR * { \"auths\": { \"$REG\": { \"auth\": \"$CRED\" }}}"
    done
fi

# Merge the collected credentials into the original config and print it.
jq -r "$STR" "${HOME}/.docker/config.json"
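Assuming the script above is saved as extract-creds.sh (the name is hypothetical), it plugs straight into the kubectl command from the question:
# Hypothetical usage: write the merged config to a temporary file and
# create the pull secret from it.
./extract-creds.sh > /tmp/config.json
kubectl create secret generic -n default regcred \
    --from-file=.dockerconfigjson=/tmp/config.json \
    --type=kubernetes.io/dockerconfigjson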
I want to check if a docker image with a specific tag exists on a registry.
I saw this post:
how-to-check-if-a-docker-image-with-a-specific-tag-exist-locally
But it handles images on the local system.
How can I use docker image inspect (or other commands) to check if an image with a specific tag exists on a remote registry?
I found a way to do it without pulling:
curl -X GET http://my-registry/v2/image_name/tags/list
where:
my-registry - the registry host
image_name - the image name I am searching for
The result lists all tags for that image in the registry.
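If you only care about one tag, you can filter the response; here is a small sketch with jq (the registry, image, and tag names are placeholders):
# Exit 0 if the tag appears in the tags/list response, non-zero otherwise.
if curl -fsS "http://my-registry/v2/image_name/tags/list" \
        | jq -e --arg t "my-tag" '.tags | index($t)' >/dev/null; then
    echo "tag exists"
else
    echo "tag not found"
fi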
Another possibility is to use docker pull: if the exit code is 1, the image doesn't exist; if the exit code is 0, it does.
docker pull <my-registry/my-image:my-tag>
echo $? # print exit code
Disadvantage: if the image actually exists (but not locally), it will pull the whole image even if you don't want to. Depending on what you actually want to achieve, this might be a solution or a waste of time.
There is docker search, but it only works with Docker Hub. A universal solution is a simple shell script around docker pull:
#!/bin/bash

function check_image() {
    # Try to pull the image, discarding all output.
    docker pull "$1" >/dev/null 2>&1
    # Save the exit code of the pull.
    exit_code=$?
    if [ $exit_code = 0 ]; then
        # The pull succeeded, so remove the downloaded image again.
        docker rmi "$1" >/dev/null 2>&1
        echo "Image $1 exists!"
    else
        echo "Image $1 does not exist :("
    fi
}

check_image hello-world:latest
# will print 'Image hello-world:latest exists!'
check_image hello-world:nonexistent
# will print 'Image hello-world:nonexistent does not exist :('
The downsides of the above are slow speed and the free disk space required to pull the image.
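One way to avoid the pull entirely (a different technique than the script above) is docker manifest inspect, which only queries the registry; note that on older Docker clients it may require the experimental CLI features to be enabled:
# Ask the registry for the image's manifest without pulling anything.
if docker manifest inspect hello-world:latest >/dev/null 2>&1; then
    echo "Image hello-world:latest exists!"
else
    echo "Image hello-world:latest does not exist :("
fi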
If you are using AWS ECR, you can use the solution provided here: https://gist.github.com/outofcoffee/8f40732aefacfded14cce8a45f6e5eb1
It uses the AWS CLI to query ECR and will use whatever credentials you have configured. That can make things easier, since you will not need to worry separately about credentials for this request if you are already using them for AWS.
The solution from the gist is copied here:
#!/usr/bin/env bash

# Example:
#    ./find-ecr-image.sh foo/bar mytag

if [[ $# -lt 2 ]]; then
    echo "Usage: $(basename "$0") <repository-name> <image-tag>"
    exit 1
fi

# Ask ECR for the image's metadata; stderr is discarded because a missing
# image makes the CLI fail, and only the exit code matters here.
IMAGE_META="$(aws ecr describe-images --repository-name="$1" --image-ids=imageTag="$2" 2>/dev/null)"

if [[ $? == 0 ]]; then
    # Extract the first tag from the metadata (kept for further use).
    IMAGE_TAGS="$(echo "${IMAGE_META}" | jq '.imageDetails[0].imageTags[0]' -r)"
    echo "$1:$2 found"
else
    echo "$1:$2 not found"
    exit 1
fi
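For example, invoking the script with the placeholders from its header comment would print:
$ ./find-ecr-image.sh foo/bar mytag
foo/bar:mytag found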
I am trying to write a script for simple tagging of docker images based on the contents of the Dockerfile, basically something like "auto-versioning".
The current process is:
Check the latest version in the Docker repository (I am using AWS ECR)
Get the digest for that image
Build image from Dockerfile locally
Compare digests from the remote image and local image
Now here is the problem: the locally built image doesn't have the RepoDigest that I want to compare against, because it hasn't been pushed to the repository yet.
Here's the error:
Template parsing error: template: :1:2: executing "" at <index .RepoDigests 0>: error calling index: index out of range: 0
The other approach I could think of is pulling the remote image, building the local one, and comparing layers: if the layers are identical, no action; if they differ, it is a new version, so I can issue a new tag and push the image. I am not sure, though, that layers are reliable for this purpose.
Another possible approach would be building the image with some temporary tag, e.g. pointer, pushing it anyway, and if the tag is identical to the latest version, not issuing a new version and stopping there. That would mean there would always be a pointer tag somewhere in the repository. (I am also thinking that this could be a definition of the latest tag?)
This is the script that I am using for building the images:
#!/usr/bin/env bash
repository=myrepo
path=mypath.dkr.ecr.ohio-1.amazonaws.com/${repository}/

set -e
set -o pipefail

if [[ $# -gt 0 ]]; then
    if [[ -d "$1" ]]; then
        # Most recently pushed numeric tag, or empty if none exists yet.
        latest=$(aws ecr describe-images --repository-name ${repository}/$1 --output text --query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags[*]' | tr '\t' '\n' | grep -E '^[0-9]+$' | tail -1) || true
        if [[ -z "$latest" ]]; then
            latest=0
        fi
    else
        echo "$1 is not a directory"
        exit 1
    fi
else
    echo "Provide build directory"
    exit 1
fi

image="$path$1"
temporaryImage="$image:build"

echo "Building $image..."
docker build -t ${temporaryImage} $1

if [[ ${latest} -gt 0 ]]; then
    latestDigest=$(aws ecr describe-images --repository-name ${repository}/$1 --image-ids "imageTag=${latest}" | jq -r '.imageDetails[0].imageDigest')
    buildDigest=$(docker inspect --format='{{index .RepoDigests 0}}' ${temporaryImage})
    if [[ "$image@$latestDigest" == "$buildDigest" ]]; then
        echo "The desired version of the image is already present in the remote repository"
        exit 1
    fi
    version=$((latest+1))
else
    version=1
fi

versionedImage="$image:$version"
latestImage="$image:latest"
devImage="$image:dev"
devVersion="$image:$version-dev"

docker tag ${temporaryImage} ${versionedImage}
docker tag ${versionedImage} ${latestImage}
docker push ${versionedImage}
docker push ${latestImage}
echo "Image '$versionedImage' pushed successfully!"

docker build -t ${devImage} $1/dev/
docker tag ${devImage} ${devVersion}
docker push ${devImage}
docker push ${devVersion}
echo "Development image '$devImage' pushed successfully!"
I'm fairly new to node and nginx. I have the task of building a simple webserver which hosts dynamic content. A crucial part of the webserver is to take inputs from the user about the ports to be used, any custom domain to be used (in place of localhost), SSL certificates, etc. from an installer [it's supposed to be built for docker], but I have no idea how to execute a script such that it passes the variables entered by the user (like $SERVER_URI) to nginx.conf and the node file, overwriting the current data.
I suggest creating a config file and reading the values from it, so everything stays dynamic.
Here is how you can supply the SSL certificate, other ENV values, and ports dynamically; the docker name and image name are also taken as arguments.
Create a file docker.config which contains the ports, ENV values, path mappings, hosts values, and links if you wish to link containers. Leave sections blank if you do not need them, and remove the host_port:container_port entry; it is only there as an example.
docker.config
START_PORT_MAPPINGS
host_port:container_port
8080:80
END_PORT_MAPPINGS
START_PATH_MAPPINGS
/path_to_code/:/var/www/htlm/test
/path_to_nginx_config1:/etc/nginx/nginx.conf
/path_to_ssl_certs:/container_path_to_Certs
END_PATH_MAPPINGS
START_LINKING
db:db-server
END_LINKING
START_HOST_MAPPINGS
test.com:192.168.1.23
test2.com:192.168.1.23
END_HOST_MAPPINGS
START_ENV_VARS
MYSQL_ROOT_PASSWORD=1234
OTHER_ENV_VAR=value
END_ENV_VARS
Create start.sh, which reads the values from docker.config and runs your docker container.
It needs two arguments, 1st: the container name, and 2nd: the image name.
#!/usr/bin/env bash

# Read one config section from docker.config and print each value
# prefixed with the given option key (e.g. "-p ", "-v ").
function read_config() {
    docker_name="${1}"
    input="docker.config"
    option_key=$(echo "${2}" | cut -d':' -f1)
    config_name=$(echo "${2}" | cut -d':' -f2)
    post_fix=$(echo "${2}" | cut -d':' -f3)
    read_prop="no"
    while IFS=$' \t\n\r' read -r line; do
        if [[ $line == END_"${config_name}" ]]; then
            read_prop="no"
        fi
        if [[ $read_prop == "yes" ]]; then
            echo -n "${option_key}${line}${post_fix} "
        fi
        if [[ $line == START_"${config_name}" ]]; then
            read_prop="yes"
        fi
    done < "$input"
}

# Assemble the full set of "docker run" options from all config sections.
function get_run_configs() {
    docker_name=${1}
    declare -a configs=("-p :PORT_MAPPINGS:" "-v :PATH_MAPPINGS:" "--add-host=:HOST_MAPPINGS:" "-e :ENV_VARS:" "--link :LINKING:")
    local run_command=""
    for config in "${configs[@]}"; do
        config_vals=($(read_config "${docker_name}" "${config}"))
        if [ ! -z "${config_vals}" ]; then
            for config_val in "${config_vals[@]}"; do
                run_command="${run_command} ${config_val}"
            done
        else
            echo >&2 "No config found for ${config}"
        fi
    done
    echo "${run_command}"
}

container_name=$1
image_name=$2

docker_command=$(get_run_configs $container_name)
echo $docker_command
docker run --name $container_name $docker_command -dit $image_name
The resulting command for ./start.sh test test will be:
docker run --name test -p host_port:container_port -p 8080:80 -v /path_to_code/:/var/www/htlm/test -v /path_to_nginx_config1:/etc/nginx/nginx.conf -v /path_to_ssl_certs:/container_path_to_Certs --add-host=test.com:192.168.1.23 --add-host=test2.com:192.168.1.23 -e MYSQL_ROOT_PASSWORD=1234 -e OTHER_ENV_VAR=value --link db:db-server -dit test
I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in a later command. I found that wrapping the commands in a pair of three single quotes ''' can accomplish this. However, I am facing issues while using a pipe to feed the output of one command to another. For example:
stage('Test') {
    sh '''
        echo "Executing Tests"
        URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
        echo $URL
        RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
        echo $RESULT
    '''
}
The commands with pipes are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands in the Jenkins snippet generator for pipeline, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
UPDATE: After a while even that started to give problems. There were certain commands executing prior to these; one of them was grunt serve and the other was ./ngrok http 9000. I added some delay after each of these commands, and that solved the problem for now.
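For reference, a sketch of what those delays looked like; the commands are the ones mentioned above, but the sleep durations are made-up placeholders:
# Give the dev server and the tunnel time to come up before querying them.
grunt serve &
sleep 10
./ngrok http 9000 &
sleep 10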
The following scenario shows a real example that may need multiline shell commands: say you are using a plugin like Publish Over SSH and you need to execute a set of commands on the destination host in a single SSH session:
stage ('Prepare destination host') {
    sh '''
        ssh -t -t user@host 'bash -s << 'ENDSSH'
            if [[ -d "/path/to/some/directory/" ]];
            then
                rm -f /path/to/some/directory/*.jar
            else
                sudo mkdir -p /path/to/some/directory/
                sudo chmod -R 755 /path/to/some/directory/
                sudo chown -R user:user /path/to/some/directory/
            fi
ENDSSH'
    '''
}
Special notes:
- The last ENDSSH' should not have any characters before it, so it must sit at the very start of a new line.
- Use ssh -t -t if you have sudo within the remote shell command.
I split the commands with &&:
node {
    FOO = "world"
    stage('Preparation') { // for display purposes
        sh "ls -a && pwd && echo ${FOO}"
    }
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world