I don't want to push a docker build image to DockerHub. Is there any way to directly deploy a docker image from CircleCI to AWS/vps/vultr without having to push it to DockerHub?
I use docker save/load commands:
# save image to tar locally
docker save -o ./image.tar $IMAGEID
# copy to target host
scp ./image.tar user@host:~/
# load into target docker repo
ssh user#host "docker load -i ~/image.tar"
# tag the loaded target image
ssh user#host "docker tag $LOADED_IMAGE_ID myimage:latest"
PS: LOADED_IMAGE_ID can be retrieved in the following way:
LOADED_IMAGE_ID=$(ssh user@host "docker load -i ~/image.tar" | grep -o "sha256:.*")
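If you prefer not to capture the ID in a separate step, loading and tagging can be combined into a single ssh invocation. A minimal sketch, assuming the saved image is untagged so docker load prints its sha256 ID (user@host and myimage:latest are placeholders):
# load the image and tag it in one remote command
ssh user@host 'id=$(docker load -i ~/image.tar | grep -o "sha256:.*"); docker tag "$id" myimage:latest'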
Update:
You can gzip the output to make it smaller. (Don't forget to unzip the image archive before loading it.)
docker save $IMAGEID | gzip > image.tar.gz
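The save, copy, and load steps can also be streamed over ssh in one go, so no intermediate file is written on either side. A minimal sketch; user@host is a placeholder:
# stream the gzipped image straight into docker load on the target host
docker save $IMAGEID | gzip | ssh user@host "gunzip | docker load"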
You could setup your own registry: https://docs.docker.com/registry/deploying/
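As a rough sketch of what that looks like (following the linked docs; the host name and port are illustrative, and a remote registry would additionally need TLS or an insecure-registries entry on the clients):
# run a private registry on your own host/VPS
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# push to it from CI instead of Docker Hub
docker tag myimage:latest myhost.example.com:5000/myimage:latest
docker push myhost.example.com:5000/myimage:latest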
Edit: As i.bondarenko said, docker save/load are the better commands for your needs.
Disclaimer: I am the author of Dogger.
Dogger allows just that; I made a blog post about it here: https://medium.com/@mathiaslykkegaardlorenzen/hosting-a-docker-app-without-pushing-an-image-d4503de37b89
I cached a docker image on travis-ci. The docker image is created from a dockerfile. Now my dockerfile changed, and I need to remove caches and rebuild the docker image. How can I remove the caches on travis-ci?
My current .travis.yml looks like this:
language: C
services:
  - docker
cache:
  directories:
    - docker_cache
before_script:
  - |
    echo Now loading...
    filename=docker_cache/saved_images.tar
    if [[ -f "$filename" ]]; then
      echo "Got one from cache."
      docker load < "$filename"
    else
      echo "Got one from scratch";
      docker build -t $IMAGE .
      docker save -o "$filename" $IMAGE
    fi
script:
  - docker run -it ${IMAGE} /bin/bash -c "pwd"
env:
  - IMAGE=test04
Per the docs there are three methods:
Using the UI: "More options" -> "Caches" on the repo's page
Using the CLI: travis cache --delete
Using the API: DELETE /repos/{repository.id}/caches
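For the CLI method above, a hedged sketch (the repository slug is a placeholder, and you need to run travis login first):
# delete all caches for a repository via the Travis CLI
travis cache --delete -r owner/repo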
That said, Docker images are one of the examples explicitly called out as a thing not to cache:
Large files that are quick to install but slow to download do not
benefit from caching, as they take as long to download from the cache
as from the original source
In your example it's not clear what's involved in the pipeline beyond that Dockerfile. Even if the file itself hasn't changed, any of the things that go into it (base image, source code, etc.) might have. Caching the image means you may get false positives: builds that pass even though a fresh docker build would have failed.
I have CI/CD automation in my project with GitLab: after I push code to the master branch, the GitLab runner builds a new Docker image on my server, tags it with the latest commit hash, and recreates the container with the new image. After a while there are a lot of unused images, and I want to delete them automatically.
Currently I delete old images manually.
This is my Makefile
NAME   := farvisun/javabina
TAG    := $$(git log -1 --pretty=%h)
IMG    := ${NAME}:${TAG}
LATEST := ${NAME}:latest

app_build:
	docker build -t ${IMG} -f javabina.dockerfile . && \
	docker tag ${IMG} ${LATEST}

app_up:
	docker-compose -p farvisun-javabina up -d javabina
After all of this, I want a simple bash script, or another tool, to delete the unused images: for example, keep the 3 latest images, or keep only the builds from the last 2 days, and delete the others.
If you are fine with keeping a single image, you can use docker image prune -a -f, which removes all images except the ones associated with a container, so if you run this command while the container is running, it will remove the rest of the images. (Note that without -a, docker image prune only removes dangling, untagged images.)
Don't forget to run docker system prune every so often as well to further reduce storage usage.
In your situation where you need to keep more than one image, you can try this:
#!/bin/bash
# remove every image except the 3 newest ones (docker image ls lists newest first)
for image_id in $(docker image ls | sed 1,4d | awk '{print $3}')
do
    docker image rm -f "$image_id"
done
The docker image ls call lists all Docker images, newest first; sed 1,4d drops the header line plus the 3 images you want to keep, and awk selects only the image ID column. Then, for each remaining ID, we remove the image.
If you want to keep more or fewer images, change the 4 in 1,4d to another number. Remember that the first line of the output is a header, so it must always be removed.
If you want to filter the images by a tag first, you can make your own filters.
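For the asker's specific case (keep the 3 most recent farvisun/javabina images), one possible sketch, assuming GNU xargs and that docker image ls lists the newest images first:
# list only that repository's image IDs, skip the 3 newest, remove the rest
docker image ls farvisun/javabina --format '{{.ID}}' | tail -n +4 | xargs -r docker image rm -f
Note that this counts tag lines rather than unique image IDs, so if one image carries several tags you may want to deduplicate with sort -u first.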
You can schedule a docker image prune command on the build machine (e.g. once a day or once a week).
https://docs.docker.com/engine/reference/commandline/image_prune/
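For example, a cron entry that prunes unused images older than two days every night; a sketch, where the schedule and the until filter value are illustrative:
# /etc/cron.d/docker-prune -- run at 03:00 every day as root
0 3 * * * root docker image prune -af --filter "until=48h"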
Here is a way to remove old images after a new successful build in our pipeline:
# build
- docker build -t APP_NAME:$(git describe --tags --no-abbrev) .
# docker tag
- docker tag APP_NAME:$(git describe --tags --no-abbrev) artifactory.XYZ.com/APP_NAME:latest
# remove old images
- sha=$(docker image inspect artifactory.XYZ.com/APP_NAME:latest -f '{{.ID}}')
- image_sha=$(echo $sha | cut -d':' -f2)
- image_id=$(echo $image_sha | head -c 12)
- docker image ls | grep APP_NAME | while read name tag id others; do if ! [ $id = $image_id ]; then docker image rm --force $id; fi ; done
I committed my container as an image, then used "docker save" to save the image as a tar. Now I'm trying to load the tar on a GCC Centos 7 instance. I packaged it locally on my Ubuntu machine.
I've tried: docker load < image.tar and sudo docker load < image.tar
I also tried chmod 777 image.tar to see if the issue was permissions related.
Each time I try to load the image I get a variation of this error (the xxxx bit is a different number every time):
open /var/lib/docker/tmp/docker-import-xxxxxxxxx/repositories: no such file or directory
I think it might have something to do with permissions, because when I try to cd into /var/lib/docker/ I run into permissions issues.
Are there any other things I can try? Or is it likely to be a corrupted image?
There was a simple answer to this problem
I ran md5 checksums on the images before and after I moved them across systems and they were different. I re-transferred and all is working now.
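To catch this before loading, the checksum comparison can look like this (a minimal sketch; run it on both machines and compare the output):
# on the source machine, before the transfer
sha256sum image.tar
# on the target machine, after the transfer
sha256sum image.tar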
For me the problem was that I passed a .tgz as input. Once I extracted the archive, there was a .tar file inside; using that file as input worked.
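In other words, something along these lines (the file name is illustrative; gunzip renames a .tgz to .tar):
gunzip image.tgz        # leaves image.tar behind
docker load -i image.tar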
The sequence of commands, starting with the BUILD(!), is important; try this sequence:
### 1/4: Build it:
# docker build -f MYDOCKERFILE -t MYCNTNR .
### 2/4: Save it:
# docker save -o ./mycontainer.tar MYCNTNR
### 3/4: Copy it to the target machine:
# rsync/scp/... mycontainer.tar someone@target:.
### 4/4: On the target, load it:
# docker load -i mycontainer.tar
<snip>
Loaded image: MYCNTNR
I had the same issue and the following command fixed it for me:
cat <file_name>.tar | docker import - <image_name>:<image_version/tag>
Ref: https://webkul.com/blog/error-open-var-lib-docker-tmp-docker-import/
I'm trying to use the tf-sentencepiece operation in my model found here https://github.com/google/sentencepiece/tree/master/tensorflow
There is no issue building the model and getting a saved_model.pb file with variables and assets. However, if I try to use the docker image for tensorflow/serving, it says
Loading servable: {name: model version: 1} failed:
Not found: Op type not registered 'SentencepieceEncodeSparse' in binary running on 0ccbcd3998d1.
Make sure the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib, accessing
(e.g.) `tf.contrib.resampler` should be done before importing the graph,
as contrib ops are lazily registered when the module is first accessed.
I am unfamiliar with how to build anything manually, and was hoping that I could do this without many changes.
One approach would be to:
Pull a docker development image
$ docker pull tensorflow/serving:latest-devel
In the container, make your code changes
$ docker run -it tensorflow/serving:latest-devel
Modify the code to add the op dependency here.
In the container, build TensorFlow Serving
container:$ bazel build tensorflow_serving/model_servers:tensorflow_model_server && cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
Use the exit command to exit the container
Look up the container ID:
$ docker ps
Use that container ID to commit the development image:
$ docker commit <container id> $USER/tf-serving-devel-custom-op
Now build a serving container using the development container as the source
$ mkdir /tmp/tfserving
$ cd /tmp/tfserving
$ git clone https://github.com/tensorflow/serving .
$ docker build -t $USER/tensorflow-serving --build-arg TF_SERVING_BUILD_IMAGE=$USER/tf-serving-devel-custom-op -f tensorflow_serving/tools/docker/Dockerfile .
You can now use $USER/tensorflow-serving to serve your model, following the standard TensorFlow Serving Docker instructions.
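Assuming the custom image built above, serving then follows the usual TensorFlow Serving Docker pattern; a sketch where the model path and model name are placeholders:
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model \
  -t $USER/tensorflow-serving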
I am building a Docker image using a command line like the following:
docker build -t myimage .
Once this command has succeeded, then rerunning it is a no-op as the image specified by the Dockerfile has not changed. Is there a way to detect if the Dockerfile (or one of the build context files) subsequently changes without rerunning this command?
Looking at docker inspect $image_name from one build to another, several pieces of information don't change if the image hasn't changed. One of them is the image Id. So, I used the Id to check whether an image has been changed, as follows:
First, one can get the image Id as follows:
docker inspect --format {{.Id}} $docker_image_name
Then, to check if there is a change after a build, you can follow these steps:
Get the image id before the build
Build the image
Get the image id after the build
Compare the two ids, if they match there is no change, if they don't match, there was a change.
Code: Here is a working bash script implementing the above idea:
# save the current image id (before the build) to a file
docker inspect --format {{.Id}} $docker_image_name > deploy/last_image_build_id.log
# read the pre-build image id back from the file
last_docker_id=$(cat deploy/last_image_build_id.log)
docker build -t $docker_image_name .
docker_id_after_build=$(docker inspect --format {{.Id}} $docker_image_name)
if [ "$docker_id_after_build" != "$last_docker_id" ]; then
echo "image changed"
else
echo "image didn't change"
fi
There isn't a dry-run option if that's what you are looking for. You can use a different tag to avoid affecting existing images and look for ---> Using cache in the output (then delete the tag if you don't want it).
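A rough sketch of that approach, combined with the Id comparison from the previous answer (the throwaway tag name is illustrative):
# build under a temporary tag; an unchanged Dockerfile/context reuses the cache and yields the same image Id
docker build -t myimage-check .
if [ "$(docker inspect --format '{{.Id}}' myimage-check)" = \
     "$(docker inspect --format '{{.Id}}' myimage)" ]; then
  echo "no change detected"
else
  echo "Dockerfile or build context changed"
fi
docker rmi myimage-check   # drop the temporary tag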