I have a list of .tar Docker image files, and I have tried loading them with the commands below:
docker load -i *.tar
docker load -i alldockerimages.tar
where alldockerimages.tar contains all the individual tar files.
How can I load multiple tar files?
Using xargs:
ls -1 *.tar | xargs --no-run-if-empty -L 1 docker load -i
(A previous revision left off the -i flag to "docker load".)
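If the file names might contain spaces, a null-delimited pipeline is safer; a sketch, assuming GNU find and xargs:
find . -maxdepth 1 -name '*.tar' -print0 | xargs -0 --no-run-if-empty -n 1 docker load -i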
First I attempted to use the glob expression approach you first described:
# download some images to play with
docker pull alpine
docker pull nginx:alpine
# stream the images to disk as tarballs
docker save alpine > alpine.tar
docker save nginx:alpine > nginx.tar
# delete the images so we can attempt to load them from scratch
docker rmi alpine nginx:alpine
# issue the load command to try and load all images at once
cat *.tar | docker load
Unfortunately, this only resulted in alpine.tar being loaded. It was my (faulty) understanding that the glob expression would be expanded in a way that ultimately caused docker load to run once for every matching file. In fact the glob expands into the argument list of cat, which concatenates both tarballs into a single stream, and docker load only reads the first image archive from that stream.
Therefore, one has to use a shell for loop to load all tarballs sequentially:
for f in *.tar; do
  cat "$f" | docker load
done
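As an aside, the cat is unnecessary; docker load can read each file directly via -i, which is equivalent:
for f in *.tar; do docker load -i "$f"; done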
Use the script described in save-load-docker-images.sh to save or load the images. For your case it would be:
./save-load-docker-images.sh load -d <directory-location>
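For reference, a minimal sketch of the load path such a script might implement (the argument parsing and file layout here are assumptions for illustration, not the actual gist contents):
#!/bin/bash
# usage: ./save-load-docker-images.sh load -d <directory-location>
action="$1"; shift
while getopts 'd:' opt; do
  case "$opt" in
    d) dir="$OPTARG" ;;
  esac
done
if [ "$action" = "load" ]; then
  # load every tarball found in the given directory
  for f in "$dir"/*.tar; do
    docker load -i "$f"
  done
fi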
You can also try the following, using find:
find . -type f -name "*.tar" -exec docker load --input "{}" \;
The trailing \; runs docker load once per file, which is what you want here, since --input accepts only a single archive per invocation.
There are around 10 container image files in the current directory, and I want to load them into my Kubernetes cluster, which uses containerd as the CRI.
[root@test tmp]# ls -1
test1.tar
test2.tar
test3.tar
...
I tried to load them at once using xargs but got the following result:
[root@test tmp]# ls -1 | xargs nerdctl load -i
unpacking image1:1.0 (sha256:...)...done
[root@test tmp]#
The first tar file was successfully loaded, but the command exited and the remaining tar files were not processed.
I have confirmed that nerdctl load -i itself succeeds with exit code 0:
[root@test tmp]# nerdctl load -i test1.tar
unpacking image1:1.0 (sha256:...)...done
[root@test tmp]# echo $?
0
Does anyone know the cause?
By default, xargs packs as many arguments as possible into a single command invocation, so your pipeline actually runs nerdctl load -i test1.tar test2.tar test3.tar ... exactly once. nerdctl load reads only the file passed to -i and appears to ignore the remaining arguments, which matches what you observed: the first tarball loads and the exit code is still 0. You can tell xargs to run one command per file name with -n 1:
ls -1 | xargs -n 1 nerdctl load -i
Meanwhile, parsing the output of ls is not really safe (file names containing spaces or newlines will break it), and you should see why it's not a good idea to loop over ls output in your shell. I would rather transform the above into the following command:
for f in *.tar; do
  nerdctl load -i "$f"
done
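One containerd-specific point: if the images are meant to be visible to the Kubernetes cluster, they have to be loaded into the k8s.io containerd namespace that the kubelet uses (an assumption about your CRI setup; adjust the namespace if yours differs):
for f in *.tar; do
  nerdctl --namespace k8s.io load -i "$f"
done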
Could anyone point me to a description of the directory structure that the Docker registry relies on?
Background
I cannot pull a Docker image from our company's Artifactory server. All my invocations of docker pull art.server.corp/repo/image:label end with the error Unexpected EOF. My colleagues report the same issue.
I have asked our Artifactory support for help, but waiting for their response takes time, and I don't have much confidence in the outcome.
In the meantime, I was able to download the contents of that image from a browser through the Artifactory web interface.
I've downloaded the file manifest.json and a bunch of files with names like sha256__<long hash string>.
Most of them are .tar.gz archives, and one is in JSON format.
How can I import these files into my local docker installation? My goal is to have the same container image as in the registry.
I am new to Docker.
I've tried docker load and docker import. Neither gives the expected result.
docker load complains about missing JSON files and does nothing:
$ for f in sha256* ; do docker load < $f ; done
open /var/lib/docker/tmp/docker-import-234007886/var/json: no such file or directory
...
open /var/lib/docker/tmp/docker-import-777861766/var/json: no such file or directory
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker import creates a separate image for each file, whereas these files seem to be different layers of a single file system:
$ for f in sha256* ; do docker import $f image:label; done
sha256:a19634d70ff568616b9c42a0151ae8abbd6b915cb24e9d127e366e14453a0dd4
sha256:28c559e39d3be8267757ba8ca540c6b8440f66b71d4ed640fff1b42a04aa54c5
...
sha256:...
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
image label 7a4c9fe6210c About a minute ago 111MB
<none> <none> 654768c91f55 About a minute ago 404kB
<none> <none> 85d37d403e34 About a minute ago 1.37GB
<none> <none> 28c559e39d3b About a minute ago 63.2MB
Update
I have tried installing Artifactory OSS in the hope of importing these blobs into it, but it seems Docker is not supported in the OSS edition.
Then I decided to launch a local Docker registry and import them into it.
I have developed the following script, which copies the files into blobs and creates the link files.
#!/bin/zsh
source_dir=/path/to/downloaded/blobs
registry_root=/mnt/volume/docker/registry/v2
image_name=image
image_tag=label
# roots of the registry storage layout
blobs=$registry_root/blobs/sha256
repo_path=$registry_root/repositories/$image_name
layers=$repo_path/_layers/sha256
manifests=$repo_path/_manifests/revisions/sha256
# set debug=echo for a dry run (note: the > redirections still run even then)
#debug=echo
debug=
for f in $source_dir/sha256* ; do
    echo $f
    bn=$(basename $f)
    # strip the "sha256__" prefix to get the bare digest
    sha256=$(echo $bn | cut -c9-)
    first2=$(echo $sha256 | cut -c-2)
    blob_dir=$blobs/$first2/$sha256
    layer_dir=$layers/sha256/$sha256
    $debug mkdir -p $blob_dir
    $debug cp -v $f $blob_dir/data
    # gzip blobs are layers; the JSON blob is treated as a manifest revision
    if [[ $(file $f) =~ "gzip" ]]; then
        $debug mkdir -p $layer_dir
        $debug echo "sha256:$sha256" > $layer_dir/link
    else
        $debug mkdir -p $manifests
        $debug echo "sha256:$sha256" > $manifests/link
    fi
done
man_sha256=$(sha256sum $source_dir/manifest.json)
first2=$(echo $man_sha256 | cut -c-2)
$debug mkdir -p $blobs/$first2/$man_sha256
$debug cp -v $source_dir/manifest.json $blobs/$first2/$man_sha256/data
$debug mkdir -p $manifests/$man_sha256
$debug echo "sha256:$man_sha256" > $manifests/$man_sha256/link
$debug mkdir -p $repo_path/_manifests/tags/$image_tag/{current,index/sha256/$man_sha256}
$debug echo "sha256:$man_sha256" > $repo_path/_manifests/tags/$image_tag/index/sha256/$man_sha256/link
$debug echo "sha256:$man_sha256" > $repo_path/_manifests/tags/$image_tag/current/link
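For reference, this is the on-disk layout I believe registry:2 uses (gleaned from poking at an existing registry volume, so worth double-checking against the distribution sources):
v2/blobs/sha256/<first two hex chars>/<digest>/data
v2/repositories/<name>/_layers/sha256/<digest>/link
v2/repositories/<name>/_manifests/revisions/sha256/<digest>/link
v2/repositories/<name>/_manifests/tags/<tag>/current/link
v2/repositories/<name>/_manifests/tags/<tag>/index/sha256/<digest>/link
where each link file contains the string sha256:<digest>.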
Then I launch a local Docker registry: docker run -p 5000:5000 -v /mnt/volume/docker:/var/lib/registry registry:2
However, when I try to retrieve the image, I get an error message:
$ docker pull localhost:5000/image:label
Error response from daemon: received unexpected HTTP status: 500 Internal Server Error
Docker console shows the following:
time="2021-01-18T14:04:54.269748704Z" level=error msg="response completed with error" err.code=unknown err.detail="invalid checksum digest format" err.message="unknown error" go.version=go1.11.2 http.request.host="localhost:5000" http.request.id=7d9fdc93-e74f-42d6-a722-81b51c371df5 http.request.method=HEAD http.request.remoteaddr="172.17.0.1:39770" http.request.uri="/v2/image/manifests/label" http.request.useragent="docker/20.10.2 go/go1.13.15 git-commit/8891c58 kernel/5.4.0-62-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.2 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=21.43068ms http.response.status=500 http.response.written=70 vars.name="image" vars.reference="label"
I've also consulted this link: https://notsosecure.com/anatomy-of-a-hack-docker-registry/ and tried to request the manifest from a browser. The tag list is retrieved successfully, but the manifest request returns an error.
So, it looks like I'm missing something. Any help?
After a closer acquaintance with Docker and Dockerfiles, I see that it was possible to rebuild this image from those archives using the ADD instruction.
From https://docs.docker.com/engine/reference/builder/#add:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory.
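A minimal sketch of that approach (the file names are hypothetical placeholders; the layer archives must be added in the order they appear in manifest.json, and the result is a rebuilt image rather than a bit-identical copy of the original):
FROM scratch
# each downloaded layer archive is detected as a tar and unpacked into the root
ADD sha256__<first-layer-hash> /
ADD sha256__<second-layer-hash> /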
I don't want to push a built Docker image to Docker Hub. Is there any way to deploy a Docker image directly from CircleCI to AWS/a VPS/Vultr without having to push it to Docker Hub?
I use docker save/load commands:
# save image to tar locally
docker save -o ./image.tar $IMAGEID
# copy to target host
scp ./image.tar user@host:~/
# load into target docker repo
ssh user@host "docker load -i ~/image.tar"
# tag the loaded target image
ssh user@host "docker tag $LOADED_IMAGE_ID myimage:latest"
PS: LOADED_IMAGE_ID can be retrieved in the following way (docker load prints a line like Loaded image ID: sha256:..., from which the grep extracts the ID):
REMOTE_IMAGE_ID=`ssh user@host "docker load -i ~/image.tar" | grep -o "sha256:.*"`
Update:
You can gzip the output to make it smaller. (Don't forget to unzip the image archive before loading it.)
docker save $IMAGEID | gzip > image.tar.gz
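You can also stream the image straight to the target host without writing an intermediate file; a sketch, assuming the same ssh access as above:
docker save $IMAGEID | gzip | ssh user@host 'gunzip | docker load'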
You could set up your own registry: https://docs.docker.com/registry/deploying/
Edit: As i.bondarenko said, docker save/load are the better commands for your needs.
Disclaimer: I am the author of Dogger.
I wrote a blog post about it here, which allows just that: https://medium.com/@mathiaslykkegaardlorenzen/hosting-a-docker-app-without-pushing-an-image-d4503de37b89
I have 20 images TARed, and now I want to load those images on another system. However, the loading itself takes 30 to 40 minutes. All the images are independent of each other, so I believe they should all load in parallel.
I tried running the load commands in the background (&) and waiting for them to finish (roughly like the sketch below), but observed that it took even more time.
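(A reconstruction of the attempt; the original script isn't shown:)
for f in *.tar; do
  docker load -i "$f" &
done
wait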
Any help here is highly appreciated.
Note: I am not sure about the -i option to the docker load command.
Try:
find /path/to/image/archives/ -iname "*.tar" -o -iname "*.tar.xz" | xargs -r -P4 -I{} docker load -i {}
This will load the Docker image archives in parallel (adjust -P4 to the desired number of parallel loads, or set -P0 for unlimited concurrency).
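If GNU parallel happens to be installed (an assumption; it is not part of a base system), an equivalent one-liner would be:
parallel -j4 docker load -i {} ::: *.tar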
For speeding up the pulling/saving processes, you can use ideas from the snippet below:
#!/usr/bin/env bash
TEMP_FILE="docker-compose.image.pull.yaml"
# derive a compose service name from an image reference
# (the text before the first ':' or '/')
image_name()
{
    local name="$1"
    echo "$name" | awk -F '[:/]' '{ print $1 }'
}
# generate a throwaway compose file with one service per image,
# so that docker-compose can pull them all in parallel
pull_images_file_gen()
{
    local from_file="$1"
    cat <<EOF >"$TEMP_FILE"
version: '3.4'
services:
EOF
    while read -r line; do
        cat <<EOF >>"$TEMP_FILE"
  $(image_name "$line"):
    image: $line
EOF
    done < "$from_file"
}
# save each image to /tmp in the background; note the disown: the script
# does not wait for the saves to finish before it exits
save_images()
{
    local from_file="$1"
    while read -r line; do
        docker save -o /tmp/"$(image_name "$line")".tar "$line" &>/dev/null & disown;
    done < "$from_file"
}
pull_images_file_gen "images"
docker-compose -f $TEMP_FILE pull
save_images "images"
rm -f $TEMP_FILE
images is a file containing the needed Docker image names, one per line.
Good luck!
I want to distribute a Docker Compose system as a single archive. For this I want to run docker save on all the images. But how do I get the list of images from Docker Compose?
This is what I do at the moment:
IMAGES=$(cat docker-compose.yml | sed -n 's/image:\(.*\)/\1/p')
docker save -o images.tar $IMAGES
Do the following:
# Save Compressed Images
IMAGES=`grep '^\s*image' docker-compose.yml | sed 's/image://' | sort | uniq`
docker save $IMAGES | gzip > images.tar.gz
# Load Compressed Images
gunzip -c images.tar.gz | docker load
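If you are on Compose v2, you can also let Compose resolve the image list itself instead of grepping the YAML (this assumes a docker compose recent enough to support config --images):
IMAGES=$(docker compose config --images)
docker save $IMAGES | gzip > images.tar.gz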