I have 20 images saved as TAR archives, and now I want to load those images on another system. However, the loading itself takes 30 to 40 minutes. All the images are independent of each other, so I believe they should be loadable in parallel.
I tried a solution that runs each load command in the background (&) and waits until loading finishes, but observed that it takes even more time. Any help here is highly appreciated.
Note: I'm not sure about the -i option to the docker load command.
Try
find /path/to/image/archives/ -iname "*.tar" -o -iname "*.tar.xz" | xargs -r -P4 -I{} docker load -i {}
This will load the Docker image archives in parallel (adjust -P4 to the desired number of parallel loads, or set -P0 for unlimited concurrency).
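If some archive names might contain spaces, a null-delimited variant of the same idea is safer (a sketch, assuming GNU find and GNU xargs):
find /path/to/image/archives/ \( -iname '*.tar' -o -iname '*.tar.xz' \) -print0 |
    xargs -0 -r -P4 -n1 docker load -i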
For speeding up the pulling/saving processes, you can use ideas from the snippet below:
#!/usr/bin/env bash

TEMP_FILE="docker-compose.image.pull.yaml"

# Print the leading component of an image reference (up to the first ':' or '/')
image_name()
{
    local name="$1"
    echo "$name" | awk -F '[:/]' '{ print $1 }'
}

# Generate a throwaway Compose file with one service per image, so that
# "docker-compose pull" can fetch all of them in parallel
pull_images_file_gen()
{
    local from_file="$1"
    cat <<EOF >"$TEMP_FILE"
version: '3.4'
services:
EOF
    while read -r line; do
        cat <<EOF >>"$TEMP_FILE"
  $(image_name "$line"):
    image: $line
EOF
    done < "$from_file"
}

# Save each image to /tmp as a tar archive, in parallel background jobs
save_images()
{
    local from_file="$1"
    while read -r line; do
        docker save -o /tmp/"$(image_name "$line")".tar "$line" &>/dev/null & disown
    done < "$from_file"
}

pull_images_file_gen "images"
docker-compose -f "$TEMP_FILE" pull
save_images "images"
rm -f "$TEMP_FILE"
images is a file containing the needed Docker image names, one per line.
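For example, with a hypothetical images file such as:
nginx:alpine
redis
busybox
the script pulls all three images in parallel via docker-compose, then saves them as /tmp/nginx.tar, /tmp/redis.tar and /tmp/busybox.tar.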
Good luck!
I have a Docker image which contains a file, say /usr/bin/foo. What's the easiest way to find out which step of the Dockerfile added that path? (Which I thought was equivalent to asking: which layer of the Docker image does that path come from?)
I wrote a script which prints out all the paths in the image, prefixed by layer ID. It appears to work, but is quite slow:
#!/bin/bash
die() { echo 1>&2 "ERROR: $*"; exit 1; }

dir=$(mktemp -d)
trap "rm -rf $dir" EXIT

img="$1"
[[ -n "$img" ]] || die "wrong arguments"

docker image save "$img" | (cd "$dir" && tar xf -) ||
    die "failed extracting docker image $img"

(cd "$dir" && find . -name '*.tar' | while read -r f; do
    layer=$(echo "$f" | cut -d/ -f2)
    tar tf "$f" | sed -e "s/^/$layer:/"
done) || die "failed listing layers"
(It could be made faster if it didn't write anything to disk. The problem is that while tar tf - prints the paths in the outer TAR, it doesn't descend into the nested layer.tar files. I am thinking I could use the Python tarfile module, but surely somebody else out there has done this already?)
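For what it's worth, a disk-free variant looks possible with GNU tar's --to-command option, which pipes each extracted member to a command instead of writing it out. A sketch, assuming GNU tar and the classic docker save layout where each layer is a <id>/layer.tar member:
docker image save "$img" | tar -xf - --to-command='
    case "$TAR_FILENAME" in
        */layer.tar) tar tf - | sed "s|^|${TAR_FILENAME%%/*}:|" ;;
        *) cat >/dev/null ;;  # drain non-layer members so tar does not abort
    esac'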
However, I don't know how to translate the layer ID it gives me to a step in the Docker image. I thought I'd correlate it with the layer IDs reported by docker inspect:
docker image inspect $IMAGE | jq -r '.[].RootFS.Layers[]' | nl
But the layer ID which my script reports as containing the path, I can't find anywhere in the output of the above command. (Is that a consequence of BuildKit?)
In the end, I gave up on this whole approach. Instead I just made some educated guesses as to which Dockerfile line was probably creating that path, tested each guess by commenting it out (along with all the lines after it), and soon found the answer. Still, surely there must be a better way? Ideally, what I'd like is something like a --contains-path= option to docker image history – which doesn't exist, but maybe there is something else which does the equivalent?
While dlayer does not have any search function built in, it is straightforward to implement by combining it with a Perl one-liner:
docker image save $IMAGE |
dlayer -n 999999 |
perl -ne 'chomp;$query=quotemeta("usr/bin/foo");$cmd=$_ if $_ =~ m/ [\$] /;print "$cmd\n\t$_\n" if m/ $query/;'
This will print something like:
13 MB $ /opt/bar/install.sh # buildkit
637 B usr/bin/foo
The -n 999999 option raises the limit on the number of file names printed per layer from the default of 100; otherwise the path is only found if it is among the first 100 entries of its layer.
(I submitted a PR to add a built-in search function to dlayer, which removes the need for this one-line Perl script.)
There are around 10 container image files in the current directory, and I want to load them into my Kubernetes cluster, which uses containerd as its CRI.
[root@test tmp]# ls -1
test1.tar
test2.tar
test3.tar
...
I tried to load them at once using xargs but got the following result:
[root@test tmp]# ls -1 | xargs nerdctl load -i
unpacking image1:1.0 (sha256:...)...done
[root@test tmp]#
The first tar file was successfully loaded, but the command exited and the remaining tar files were not processed.
I have confirmed that the command nerdctl load -i by itself succeeds with exit code 0.
[root@test tmp]# nerdctl load -i test1.tar
unpacking image1:1.0 (sha256:...)...done
[root@test tmp]# echo $?
0
Does anyone know the cause?
By default, xargs packs as many file names as possible into a single command line, so your pipeline runs one invocation: nerdctl load -i test1.tar test2.tar test3.tar. As your output shows, nerdctl loads only the archive passed to -i and exits 0, ignoring the remaining positional arguments. If your version of xargs supports it, you can use the -n 1 option to run one load per file, as sketched below.
Meanwhile, parsing ls output is not really safe, and you should see why it's not a good idea to loop over ls output in your shell.
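A minimal sketch of that fix, forcing one nerdctl invocation per archive:
ls -1 | xargs -n 1 nerdctl load -i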
I would rather transform the above to the following command:
for f in *.tar; do
nerdctl load -i "$f"
done
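One caveat with the glob loop (worth guarding against): if no .tar files are present, the loop body runs once with the literal string *.tar. In bash you can avoid that with nullglob:
shopt -s nullglob   # *.tar expands to nothing when there are no matches
for f in *.tar; do
    nerdctl load -i "$f"
done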
Hadolint is an awesome tool for linting Dockerfiles. I am trying to integrate it into my CI, but I am struggling to run it over multiple Dockerfiles. Does someone know what the syntax should look like? Here is how my directories are laid out:
dir1/Dockerfile
dir2/Dockerfile
dir3/foo/Dockerfile
In gitlab-ci:
stage: hadolint
image: hadolint/hadolint:latest-debian
script:
  - mkdir -p reports
  - |
    hadolint dir1/Dockerfile > reports/dir1.json \
    hadolint dir2/Dockerfile > reports/dir2.json \
    hadolint dir3/foo/Dockerfile > reports/dir3.json
But the sample above is not working.
As far as I found, hadolint can run over multiple files at once. So in my case:
- hadolint */Dockerfile > reports/all_reports.json
But the problem with this approach is that all reports end up in one file, which hampers maintenance and clarity.
If you want to keep all reports separated (one per top-level directory), you may want to rely on a small shell snippet.
I mean something like:
- |
find . -name Dockerfile -exec \
sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
Explanation:
find . -name Dockerfile finds all Dockerfiles beneath the current directory;
-exec sh -c '…' runs a subshell for each Dockerfile, setting:
$0 = "sh" (dummy value)
$1 = "{}" (the relative path of the Dockerfile), "{}" and \; being part of the find … -exec syntax;
src=${1#./} strips the leading ./, turning ./dir1/Dockerfile into dir1/Dockerfile;
${src%%/*} extracts the top-level directory name (dir1/Dockerfile → dir1);
and | tee -a … appends hadolint's output to the report file of the matching top-level directory; a plain > should be avoided here because, if a top-level directory contains several Dockerfiles, each run would overwrite the previous report.
I have replaced the .json extension with .txt, as hadolint's default output is plain text, not JSON.
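That said, hadolint does provide a JSON formatter (-f json), so if JSON output is required, a variant of the snippet above could write one JSON report per top-level directory. Note the > here assumes at most one Dockerfile per top-level directory, since appending several JSON documents to one file would not yield valid JSON:
- |
  find . -name Dockerfile -exec \
    sh -c 'src=${1#./} && hadolint -f json "$1" > "reports/${src%%/*}.json"' sh "{}" \;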
I ran into a problem while running the code below. I want to loop over a directory and, inside it, loop over each subdirectory:
cd /mysql_back
ls | while read line
do
    echo $line
    if [[ -d "$line" ]]; then
        echo $line
        cd $line
        ls *.txt | while read datafile
        do
            echo "start load data:" $datafile
            echo "copy ${datafile%.txt} from '/docker-entrypoint-initdb.d/mysql_back/${line}/${datafile}' delimiter ',' csv;" >> add_data.sql
        done
        docker exec -i $pg_continer psql -U postgres -d $line -f "/docker-entrypoint-initdb.d/mysql_back/${line}/add_data.sql" 2>/dev/null
        echo "start next dir"
        cd ../
    fi
done
cd ${RootPath}
The output:
dstore_notification
dstore_notification
start load data: global.txt
start load data: message.txt
start load data: templet_info.txt
start load data: templet.txt
start next dir
The loop ended after the docker command finished in the first subdirectory.
After removing the docker command (docker exec -i $pg_continer psql -U postgres -d $line -f "/docker-entrypoint-initdb.d/mysql_back/${line}/add_data.sql" 2>/dev/null), the output is:
dstore_notification
dstore_notification
start load data: global.txt
start load data: message.txt
start load data: templet_info.txt
start load data: templet.txt
start next dir
dstore_rbac
dstore_rbac
start load data: rbac_admin.txt
start load data: rbac_app_admin_role.txt
start load data: rbac_app_admin.txt
start load data: rbac_app_role.txt
start load data: rbac_service_app.txt
start next dir
I can't figure out why. Can anybody tell me why this happens, and how I should run the docker command in every subdirectory?
Thanks!
You are running the docker exec command with -i, which keeps stdin attached. Inside a ls | while read loop, that means psql reads (and consumes) the rest of the directory list from the same pipe, which is why the loop ends after the first iteration.
To let the script continue as expected, either drop -i (psql reads the SQL from the -f file, not from stdin), or use -d to run the exec'd command detached in the background.
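For illustration, a minimal change along those lines, keeping the variable names from the question:
# without -i, stdin is not attached, so psql cannot swallow the rest of the pipe
docker exec "$pg_continer" psql -U postgres -d "$line" \
    -f "/docker-entrypoint-initdb.d/mysql_back/${line}/add_data.sql" 2>/dev/null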
As a side note: don't parse the output of ls. Instead use for datafile in *.
I have a list of .tar Docker image files, and I have tried loading the Docker images using the commands below:
docker load -i *.tar
docker load -i alldockerimages.tar
where alldockerimages.tar contains all the individual tar files.
Let me know how we can load multiple tar files.
Using xargs:
ls -1 *.tar | xargs --no-run-if-empty -L 1 docker load -i
(A previous revision left off the -i flag to "docker load".)
First I attempted to use the glob expression approach you first described:
# download some images to play with
docker pull alpine
docker pull nginx:alpine
# stream the images to disk as tarballs
docker save alpine > alpine.tar
docker save nginx:alpine > nginx.tar
# delete the images so we can attempt to load them from scratch
docker rmi alpine nginx:alpine
# issue the load command to try and load all images at once
cat *.tar | docker load
Unfortunately this only resulted in alpine.tar being loaded. It was my (presumably faulty) understanding that the glob expression would be expanded in a way that caused docker load to run once for every matching file. In fact the shell expands cat *.tar into a single cat invocation that concatenates both archives into one stream, and docker load stops after reading the first complete archive.
Therefore, one has to use a shell for loop to load all tarballs sequentially:
for f in *.tar; do
    cat "$f" | docker load
done
Use the script described in save-load-docker-images.sh to save or load the images. For your case it would be
./save-load-docker-images.sh load -d <directory-location>
You can also try the following option using find:
find . -type f -name "*.tar" -exec docker load --input "{}" \;