Say I have this image tag "node:9.2" as in FROM node:9.2...
is there an API I can hit to see if an image with tag "node:9.2" exists and can be retrieved, before I actually try docker build ...?
This script will build only if the image does not exist.
Update for v2:
function docker_tag_exists() {
curl --silent -fSL "https://hub.docker.com/v2/repositories/$1/tags/$2" > /dev/null
}
Use the function above for v2.
#!/bin/bash
function docker_tag_exists() {
curl --silent -fSL "https://index.docker.io/v1/repositories/$1/tags/$2" > /dev/null
}
if docker_tag_exists library/node 9.11.2-jessie; then
echo "Docker image exists..."
echo "Pulling existing docker image..."
# the image exists remotely, so pull it
docker pull node:9.11.2-jessie
else
echo "Docker image does not exist remotely..."
echo "Building docker image..."
# build the docker image here, with an absolute or relative path
docker build -t nodejs .
fi
With a little modification from the link below.
If the registry is private, check this link for authenticating with a username and password.
If you have "experimental": "enabled" set in your Docker client configuration (~/.docker/config.json), you can use this command:
docker manifest inspect node:9.2
It exits with a non-zero status when the tag does not exist in the registry, so it works in scripts without pulling anything.
Yes.
docker image pull node:9.2
I want to check if a docker image with a specific tag exists on a registry.
I saw this post:
how-to-check-if-a-docker-image-with-a-specific-tag-exist-locally
But it handles images on the local system.
How can I use docker image inspect (or other commands) to check whether an image with a specific tag exists on a remote registry?
I found a way without pulling:
curl -X GET http://my-registry/v2/image_name/tags/list
where:
my-registry - registry name
image_name - the image name I search for
The result lists all the tags of that image in the registry.
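A minimal sketch of turning that tag list into a yes/no check. The grep is deliberately crude (a fixed-string match on the quoted tag, which assumes the tag string doesn't also appear in the "name" field); my-registry and image_name are placeholders, as above:

```shell
#!/bin/bash
# has_tag reads a /v2/<name>/tags/list JSON body on stdin and exits 0 if
# the tag given as $1 appears in it (fixed-string match on the quoted tag).
has_tag() {
  grep -qF "\"$1\""
}

# Live usage (requires network access to the registry):
#   curl -s http://my-registry/v2/image_name/tags/list | has_tag "9.2"

# Offline demonstration on a canned response:
echo '{"name":"image_name","tags":["9.1","9.2"]}' | has_tag "9.2" && echo "found"
# prints: found
```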
Another possibility is to use docker pull - if the exit code is 1, it doesn't exist. If the exit code is 0, it does exist.
docker pull <my-registry/my-image:my-tag>
echo $? # print exit code
Disadvantage: If the image actually exists (but not locally), this pulls the whole image even if you don't want it. Depending on what you actually want to achieve, this may be a solution or a waste of time.
There is docker search, but it only works with Docker Hub. A universal solution is a simple shell script around docker pull:
#!/bin/bash
function check_image() {
# pull image
# pull the image, discarding output
docker pull "$1" >/dev/null 2>&1
# save the exit code
exit_code=$?
if [ "$exit_code" -eq 0 ]; then
# remove the pulled image again
docker rmi "$1" >/dev/null 2>&1
echo "Image $1 exists!"
else
echo "Image $1 does not exist :("
fi
}
check_image hello-world:latest
# will print 'Image hello-world:latest exists!'
check_image hello-world:nonexistent
# will print 'Image hello-world:nonexistent does not exist :('
The downsides of the above are slow speed and the free disk space required to pull each image.
If you are using AWS ECR, you can use the solution provided here: https://gist.github.com/outofcoffee/8f40732aefacfded14cce8a45f6e5eb1
It uses the AWS CLI to query ECR with whatever credentials you have configured, which can make things easier: you won't need to worry about registry credentials separately if you already use them for AWS.
Copied the solution from the gist here:
#!/usr/bin/env bash
# Example:
# ./find-ecr-image.sh foo/bar mytag
if [[ $# -lt 2 ]]; then
echo "Usage: $( basename $0 ) <repository-name> <image-tag>"
exit 1
fi
IMAGE_META="$( aws ecr describe-images --repository-name=$1 --image-ids=imageTag=$2 2> /dev/null )"
if [[ $? == 0 ]]; then
IMAGE_TAGS="$( echo ${IMAGE_META} | jq '.imageDetails[0].imageTags[0]' -r )"
echo "$1:$2 found"
else
echo "$1:$2 not found"
exit 1
fi
I don't want to push a Docker build image to DockerHub. Is there any way to deploy a Docker image directly from CircleCI to AWS/vps/vultr without having to push it to DockerHub?
I use docker save/load commands:
# save image to tar locally
docker save -o ./image.tar $IMAGEID
# copy to target host
scp ./image.tar user@host:~/
# load into target docker repo
ssh user@host "docker load -i ~/image.tar"
# tag the loaded target image
ssh user@host "docker tag $LOADED_IMAGE_ID myimage:latest"
PS: LOADED_IMAGE_ID can be retrieved in the following way:
LOADED_IMAGE_ID=`ssh user@host "docker load -i ~/image.tar" | grep -o "sha256:.*"`
Update:
You can gzip the output to make it smaller. (Don't forget to unzip the image archive before loading it; recent versions of docker load can also read gzip-compressed archives directly.)
docker save $IMAGEID | gzip > image.tar.gz
You could setup your own registry: https://docs.docker.com/registry/deploying/
Edit: As i.bondarenko said, docker save/load are the better commands for your needs.
Disclaimer: I am the author of Dogger.
I made a blog post about it here, which covers exactly that: https://medium.com/@mathiaslykkegaardlorenzen/hosting-a-docker-app-without-pushing-an-image-d4503de37b89
I'm trying to push an image to a repository and I'm doing this using a Travis CI job as below:
after_success:
- if [ $TRAVIS_BRANCH == "master" ]; then
docker login -u $DOCKER_REGISTRY_USERNAME -p $DOCKER_REGISTRY_PASSWORD $DOCKER_REGISTRY_URL;
echo "Pushing image $DOCKER_APP_NAME to repository $DOCKER_REGISTRY_URL";
docker push $DOCKER_APP_NAMEUUU;
fi
- bash <(curl -s https://codecov.io/bash)
Assume that these variables resolve correctly. However, the image does not seem to be pushed to the remote repository! Here is what I see in the build logs:
0.52s$ if [ $TRAVIS_BRANCH == "master" ]; then docker login -u $DOCKER_REGISTRY_USERNAME -p $DOCKER_REGISTRY_PASSWORD $DOCKER_REGISTRY_URL; echo "Pushing image $DOCKER_APP_NAME to repository $DOCKER_REGISTRY_URL"; docker push $DOCKER_APP_NAMEUUU; fi
Login Succeeded
Pushing image repo.treescale.com/joesan/inland24/plant-simulator to repository
"docker push" requires exactly 1 argument(s).
See 'docker push --help'.
Usage: docker push [OPTIONS] NAME[:TAG]
Push an image or a repository to a registry
So what is the problem here?
The error message is pretty specific: docker push requires an argument.
Based on the limited information available, I'd say that $DOCKER_APP_NAMEUUU doesn't resolve to any value (did you misspell $DOCKER_APP_NAME?).
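To see why it expands to nothing: without braces, the shell reads the whole run of characters DOCKER_APP_NAMEUUU as a single variable name, which is unset, so docker push receives no argument at all. A small sketch (the value is taken from the build log above):

```shell
#!/bin/bash
DOCKER_APP_NAME="repo.treescale.com/joesan/inland24/plant-simulator"

# The shell parses this as one variable named DOCKER_APP_NAMEUUU, which is
# unset, so the expansion is empty -- exactly why docker push saw no argument:
echo "without braces: [$DOCKER_APP_NAMEUUU]"
# prints: without braces: []

# Braces mark the variable boundary explicitly:
echo "with braces:    [${DOCKER_APP_NAME}UUU]"
# prints: with braces:    [repo.treescale.com/joesan/inland24/plant-simulatorUUU]
```

So either drop the stray UUU or write ${DOCKER_APP_NAME} with braces.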
I am building a Docker image using a command line like the following:
docker build -t myimage .
Once this command has succeeded, then rerunning it is a no-op as the image specified by the Dockerfile has not changed. Is there a way to detect if the Dockerfile (or one of the build context files) subsequently changes without rerunning this command?
Looking at docker inspect $image_name from one build to another, several fields don't change if the image hasn't changed. One of them is the image Id. So, I used the Id to check whether an image has changed, as follows:
First, get the image Id:
docker inspect --format '{{.Id}}' $docker_image_name
Then, to check if there is a change after a build, you can follow these steps:
Get the image id before the build
Build the image
Get the image id after the build
Compare the two ids, if they match there is no change, if they don't match, there was a change.
Code: Here is a working bash script implementing the above idea:
# save the image id before the build
docker inspect --format '{{.Id}}' $docker_image_name > deploy/last_image_build_id.log
# read the last image id back from the file
last_docker_id=$(cat deploy/last_image_build_id.log)
# rebuild, then get the image id again
docker build -t $docker_image_name .
docker_id_after_build=$(docker inspect --format '{{.Id}}' $docker_image_name)
if [ "$docker_id_after_build" != "$last_docker_id" ]; then
echo "image changed"
else
echo "image didn't change"
fi
There isn't a dry-run option, if that's what you are looking for. You can build with a different tag to avoid affecting existing images and look for "---> Using cache" in the output (then delete the tag if you don't want it).
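One way to operationalize that "Using cache" hint: count the cache-hit lines in the build output. A sketch assuming the classic builder's log format (BuildKit prints "CACHED" instead); myimage:check and build.log are placeholder names:

```shell
#!/bin/bash
# cache_hits counts "---> Using cache" lines in a classic-builder build log
# read from stdin; if the count equals the number of steps, nothing in the
# Dockerfile or build context changed.
cache_hits() {
  grep -c -- '---> Using cache'
}

# Live usage (requires docker):
#   docker build -t myimage:check . | tee build.log
#   cache_hits < build.log

# Offline demonstration on a canned log fragment:
printf 'Step 1/2 : FROM node:9.2\n ---> Using cache\nStep 2/2 : COPY . /app\n ---> 1a2b3c4d5e6f\n' | cache_hits
# prints: 1
```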
Does anyone know a docker gitlab image that updates itself when a new release comes out?
I pin the version for now because I haven't tried automatic updates using the :latest tagged image.
I have tested the sameersbn/gitlab and gitlab/gitlab-ce images.
Does anyone have a recommendation for updating safely?
You can do that.
Upgrading to latest Gitlab
By using docker, the upgrade process of gitlab becomes very simple: we update the image in our docker-compose.yml to use gitlab/gitlab-ce:latest.
Now whenever we pull images, we always get the latest version.
upgrade_gitlab.sh
#!/bin/bash
# Pull the latest image; grep exits 0 if it was already up to date
docker-compose pull | grep -q "Image is up to date"
IMAGE_UPDATED=$?
if [ $IMAGE_UPDATED -ne 0 ]; then
echo "New image found for Gitlab"
echo "Backing up old Gitlab"
docker-compose exec gitlab gitlab-rake gitlab:backup:create
# Update docker
docker-compose up -d
# Check logs for any issues
docker-compose logs -f
else
echo "No update found for Gitlab"
fi
Now you can schedule this script with cron or run it from Jenkins, whatever you like.
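For example, scheduled nightly with cron (the script path and log file are placeholders):

```shell
# m h dom mon dow  command
0 3 * * * /opt/scripts/upgrade_gitlab.sh >> /var/log/gitlab_upgrade.log 2>&1
```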
PS: Taken from my article http://tarunlalwani.com/post/migrating-gitlab-6-mysql-to-latest-gitlab/