I want to use a wildcard to select multiple files from a directory in a container and use docker cp to copy these files from the container to the Docker host.
I couldn't find whether docker cp supports wildcards yet or not.
docker cp fd87af99b650:/foo/metrics.csv* /root/metrices_testing/
This results in the error metrics.csv*: no such file or directory.
I came across an example where a for loop was used to select a few files and send them to a container, but I want to transfer files from the container to the host, and I want to do this on the Docker host itself, since the script runs on the host only.
Using docker exec to select the files first and then copying them using docker cp can be an option, but that is a two-step process.
Can someone please help me do this in one step?
EDIT:
I tried this. A step closer, but still failing:
for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do
    docker cp SPSRS:$f /root/metrices_testing/
done
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-08:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:00
(... the same lstat error repeats for every matched file, every 15 minutes, up to metrics.csv.2018.07.10-12:45 ...)
In fact, your solution can achieve your aim; it just needs a little change:
for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:$f /root/metrices_testing/; done
->
for f in $(docker exec SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:`echo $f | sed 's/\r//g'` /root/metrices_testing/; done
This is because docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*" leaves a \r at the end of every matched name (the -it flag allocates a TTY), so docker cp cannot find the files in the container.
So we use echo $f | sed 's/\r//g' to strip the \r from every file name, which makes it work.
NOTE: for Alpine, we need to use sh in place of bash; also, -it should be deleted, because colored output in Alpine introduces invisible characters like ^[[0;0m.
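For Alpine, then, a minimal sketch of the adjusted loop (container name and paths as in the question; the sed is kept as a safety net):
for f in $(docker exec SPSRS sh -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp "SPSRS:$(echo $f | sed 's/\r//g')" /root/metrices_testing/; done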
The docker cp command supports copying a folder with all of its contents:
docker cp -a container-id:/opt/tpa/logs/ /root/testing/
The above example copies files from the container folder /opt/tpa/logs to the local folder /root/testing/. All the files inside /logs/ will be copied to the local machine. The trick here is using the -a option along with docker cp.
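If you only need a subset of the folder, a sketch in the same spirit is to copy the whole folder and then pick files on the host, where the shell expands wildcards normally (container ID and paths borrowed from the question above):
docker cp -a fd87af99b650:/opt/tpa/logs /root/testing/
mv /root/testing/logs/metrics.csv.* /root/metrices_testing/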
docker cp still doesn't support wildcards. You can, however, use them in a Dockerfile in the following way:
COPY hom* /mydir/ # adds all files starting with "hom"
COPY hom?.txt /mydir/ # ? is replaced with any single character, e.g., "home.txt"
Reference: https://docs.docker.com/engine/reference/builder/#copy
Run this inside the container:
dcp() {
    if [ "$#" -eq 1 ]; then
        printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
    else
        local archive="$(mktemp -t "export-XXXXX.tgz")"
        tar czf "$archive" "$@" --checkpoint=.52428800
        printf "docker exec %q cat %q | tar xvz -C .\n" "$(hostname)" "$archive"
    fi
}
Then select the files you want to copy out:
dcp /foo/metrics.csv*
It'll create an archive inside of the container and spit out a command for you to run. Run that command on the host.
e.g.
docker exec 1c75ed99fa42 cat /tmp/export-x9hg6.tgz | tar xvz -C .
Or, I guess you could do it without the temporary archive:
dcp() {
    if [ "$#" -eq 1 ]; then
        printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
    else
        printf "docker exec %q tar czC %q" "$(hostname)" "$PWD"
        printf " %q" "$@"
        printf " | tar xzvC .\n"
    fi
}
It will generate a command for you, like:
docker exec 1c75ed99fa42 tar czC /root .cache .zcompdump .zinit .zshrc .zshrc.d foo\ bar | tar xzvC .
You don't even need the alias then; it's just a convenience.
docker cp accepts either files or tar archives, so you can pack the list of files provided as arguments into a tar archive, write the archive to stdout, and pipe it to docker cp.
#!/bin/bash
if [[ "$#" -lt 2 || "$1" == "-h" || "$1" == "--help" ]]; then
    printf "Copy files to docker container directory.\n\n"
    echo "Usage: $(basename $0) files... container:directory"
    exit 0
fi
SOURCE="${*%${!#}}"
TARGET="${!#}"
tar cf - $SOURCE | docker cp - $TARGET
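A hypothetical invocation, assuming the script is saved as docker-cp.sh and a running container named web (the script name, file names, and container name are all placeholders):
./docker-cp.sh metrics1.csv metrics2.csv web:/tmp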
Related
I'm building a docker image as follows:
TEMP_FILE="/home/username/any/directory/temp"
touch $TEMP_FILE
<secrets> > $TEMP_FILE
export DOCKER_BUILDKIT=1
pushd $PROJECT_ROOT
docker build -t $DOCKER_IMAGE_NAME \
    --secret id=netrc,src=$TEMP_FILE \
    --build-arg=<...> \
    -f Dockerfile .
rm $TEMP_FILE
Currently this works.
I'd now like to use $(mktemp) to create the TEMP_FILE in the /tmp directory. However, when I point TEMP_FILE outside of /home, I get the following error:
could not parse secrets: [id=netrc,src=/tmp/temp-file-name]: failed to stat /tmp/temp-file-name: stat /tmp/temp-file-name: no such file or directory
The script itself has no issue; I can easily find and view the temporary file, for example with cat $TEMP_FILE.
How do I give docker build access to /tmp?
As per the documentation, Docker volumes are advertised this way:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker.
But if they are so good, why are there no operations to manage them, like copy or rename?
The command
docker volume --help
gives only these options:
Usage:  docker volume COMMAND

Manage volumes

Commands:
  create      Create a volume
  inspect     Display detailed information on one or more volumes
  ls          List volumes
  prune       Remove all unused local volumes
  rm          Remove one or more volumes
The documentation also states no other commands, nor any workaround for getting copy or rename functionality.
I would like to rename a currently existing volume, create another (blank) volume in place of the originally named one, and populate it with new data for a test.
After doing my test I may want (or not) to remove the newly created volume and rename the other one back to its previous (original) name, to restore the volume setup as it was before.
I would like to avoid creating a backup of the original volume that I want to rename. Renaming is good enough for me and much faster than creating a backup and restoring from it.
Editing the docker-compose file and changing the name of the volume there is something I would like to avoid as well.
Is there any workaround that could work for renaming a volume?
Can low-level manual management from the shell, targeting the Docker Root Dir (/var/lib/docker) and its volumes sub-directory, be a solution, or may that approach lead to some Docker daemon data inconsistency?
Not really the answer, but I'll post this copy example because I couldn't find any before, and searching for one took me to this question.
Docker suggests --volumes-from for backup purposes here.
For offline migration (stopped container) I don't see the point in using --volumes-from, so I just used an intermediate container with both volumes mounted and a copy command.
To finish off the migration, a new container can use the new volume.
Here's a quick test
Prepare a volume prova
docker run --name myname -d -v prova:/usr/share/nginx/html nginx:latest
docker exec myname touch /usr/share/nginx/html/added_file
docker stop myname
Verify the volume has nginx data + our file added_file
sudo ls /var/lib/docker/volumes/prova/_data
Output:
50x.html added_file index.html
Migrate the data to volume prova2
docker run --rm \
    -v prova:/original \
    -v prova2:/migration \
    ubuntu:latest \
    bash -c "cp -R /original/* /migration/"
Verify the new volume has the same data
sudo ls /var/lib/docker/volumes/prova2/_data
Output:
50x.html added_file index.html
Run a new container with the migrated volume:
docker run --name copyname -d -v prova2:/usr/share/nginx/html nginx:latest
Verify the new container sees the migrated data at the original volume mount point:
docker exec copyname ls -al /usr/share/nginx/html
For future searchers: based on @Lennonry's example, I made a script that copies a volume. Here it is: https://github.com/KOYU-Tech/docker-volume-copy
Script itself for history:
#!/bin/bash

if (( $# < 2 )); then
    echo ""
    echo "No arguments provided"
    echo "Use command example:"
    echo "./dcv.sh OLD_VOLUME_NAME NEW_VOLUME_NAME"
    echo ""
    exit 1
fi

OLD_VOLUME_NAME="$1"
NEW_VOLUME_NAME="$2"

echo "== From '$OLD_VOLUME_NAME' to '$NEW_VOLUME_NAME' =="

function isVolumeExists {
    local isOldExists=$(docker volume inspect "$1" 2>/dev/null | grep '"Name":')
    local isOldExists=${isOldExists#*'"Name": "'}
    local isOldExists=${isOldExists%'",'}
    local isOldExists=${isOldExists##*( )}
    if [[ "$isOldExists" == "$1" ]]; then
        return 1
    else
        return 0
    fi
}

# check if old volume exists
isVolumeExists ${OLD_VOLUME_NAME}
if [[ "$?" -eq 0 ]]; then
    echo "Volume $OLD_VOLUME_NAME doesn't exist"
    exit 2
fi

# check if new volume exists
isVolumeExists ${NEW_VOLUME_NAME}
if [[ "$?" -eq 0 ]]; then
    echo "creating '$NEW_VOLUME_NAME' ..."
    docker volume create ${NEW_VOLUME_NAME} 2>/dev/null 1>/dev/null
    isVolumeExists ${NEW_VOLUME_NAME}
    if [[ "$?" -eq 0 ]]; then
        echo "Cannot create new volume"
        exit 3
    else
        echo "OK"
    fi
fi

# most important part, data migration
docker run --rm --volume ${OLD_VOLUME_NAME}:/source --volume ${NEW_VOLUME_NAME}:/destination ubuntu:latest bash -c "echo 'copying volume ...'; cp -R /source/* /destination/"

if [[ "$?" -eq 0 ]]; then
    echo "Done successfully 🎉"
else
    echo "Some error occurred 😭"
fi
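Usage follows the script's own help text above; the volume names are placeholders:
./dcv.sh old_volume new_volume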
How can I run docker-compose with the base docker-compose.yml and a whole directory of docker-compose files?
Like if I had this directory structure:
parentdir
    docker-compose.yml
    folder1/
        foo.yml
        bar.yml
    folder2/
        foo.yml
        other.yml
How can I specify which folder of manifests to run when running compose?
I hope I understood your question well.
You could use the -f flag:
docker-compose -f docker-compose1.yml
Edit
To answer your comment: no, you can't run docker-compose on a whole directory with one command; you need to specify a file path, not a directory path.
What you could do is create a shell script like:
#!/bin/bash
DOCKERFILE_PATH=$DOCKER_PATH
for dockerfile in $DOCKERFILE_PATH
do
    if [[ -f $dockerfile ]]; then
        docker-compose -f $dockerfile
    fi
done
Call it like DOCKER_PATH=dockerfiles/* ./script.sh, and it will execute docker-compose -f with every file in DOCKER_PATH.
My best option was to have a run.bash file in the base directory of my project.
I then put all my compose files in, say, a compose/ directory, and run compose with this command:
docker-compose $(./run.bash) up
run.bash:
#!/usr/bin/env bash
PROJECT_NAME='projectname' # needs to be set manually, since compose normally uses the current directory name as the project name
DOCKER_PATH=$PWD/compose/*
MANIFESTS=' '
for dockerfile in $DOCKER_PATH
do
    MANIFESTS="${MANIFESTS} -f $dockerfile"
done
MANIFESTS="${MANIFESTS} -p $PROJECT_NAME"
echo $MANIFESTS
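For illustration, with two hypothetical files a.yml and b.yml in compose/ (the absolute path below is illustrative), the final echo would print something like:
-f /home/user/project/compose/a.yml -f /home/user/project/compose/b.yml -p projectname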
You can pass multiple files to docker-compose by using -f for each file. For example if you have N files, you can pass them as follows:
docker-compose -f file1 -f file2 ... -f fileN [up|down|pull|...]
If you have files in sub-directories and you want to pass them to docker-compose recursively, you can use the following:
docker-compose $(for i in $(find . -type f | grep yaml); do echo -f $i; done) [up|down|pull|...]
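Note that grep yaml only matches paths containing the literal string "yaml", so files named *.yml would be skipped. A sketch that matches both extensions:
docker-compose $(for i in $(find . -type f \( -name '*.yml' -o -name '*.yaml' \)); do echo -f $i; done) [up|down|pull|...]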
I am new to Docker and working on developing the Docker image for our application in an Ubuntu environment.
However, the below command is not working when executed from within the Dockerfile / from within the docker-entrypoint file:
command: "jar xf ./abc.ear"
/docker-entrypoint.sh: 69: /docker-entrypoint.sh: jar: not found
I verified that the ear file is present in the directory.
I tried passing the full path to the ear, and passing the full path to the jar command, but with no success.
Please help.
Here is the docker-entrypoint.sh:
#!/bin/sh
set -e
start=$(date +'%s')

# Setting Environment Variables
DEPLOY_DIR=/home/docker/xyz
SCRIPT_DIR=/usr/local/src

if [ "$(ls -A $DEPLOY_DIR/Install 2> /dev/null)" = "" ]; then
    echo "The directory $DEPLOY_DIR/Install is empty."

    # Fetch Installable from Artifactory
    echo "[INFO] Extracting files from Artifactory"
    mkdir -p $DEPLOY_DIR
    cd $DEPLOY_DIR
    wget -nv ArtifactoryPath
    unzip "123.zip" -d $DEPLOY_DIR

    # Install
    cd $DEPLOY_DIR/
    Install command

    # Configure JAVA
    echo "[INFO] Linking java folder"
    ln -s /usr/lib/jvm/java-8-oracle $DEPLOY_DIR/Install/jdk

    # Explode ear and war files
    echo "[INFO] Explode ear and war files\n"
    cd $DEPLOY_DIR/Install/jboss/deployments
    ls -al
    mv "$DEPLOY_DIR/Install/jboss/deployments/abc.ear" "$DEPLOY_DIR/Install/jboss/deployments/abc-old.ear"
    mkdir -p abc.ear
    cd $DEPLOY_DIR/Install/jboss/deployments/abc.ear
    echo $PWD
    mv "$DEPLOY_DIR/Install/jboss/deployments/abc-old.ear" ./
    ls -al
    jar xvf "$DEPLOY_DIR/Install/jboss/deployments/abc.ear/abc-old.ear"
    rm -rf $DEPLOY_DIR/Install/jboss/deployments/abc.ear/abc-old.ear
else
    echo "$DEPLOY_DIR/Install is not empty."
fi
I have a problem with Docker: changes made by commands launched via RUN are not persisted.
Here is my Dockerfile :
FROM jenkins:latest
RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
RUN cat /var/jenkins_home/.profile
And here is the output :
Sending build context to Docker daemon 373.8 kB
Step 1 : FROM jenkins:latest
 ---> fc39417bd5fb
Step 2 : RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> c614b13d9d83
Step 3 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 8a16a0c92f67
Step 4 : RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> f6ca5d5bdc64
Step 5 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 3372c3275b1b
Step 6 : RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
 ---> Running in 79842be2c6e3
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.

# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022

# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
bar
 ---> 28559b8fe041
Removing intermediate container 79842be2c6e3
Step 7 : RUN cat /var/jenkins_home/.profile
 ---> Running in c694e0cb5866
(... the same ~/.profile contents as above, but without the trailing "bar" line ...)
 ---> b7e47d65d65e
Removing intermediate container c694e0cb5866
Successfully built b7e47d65d65e
Do you guys know why the foo file is not persisted at step 3? Why the .bash_logout file is recreated at step 5? Why bar is not in my .profile file anymore at step 7?
And of course, if I start a container based on this image, none of my modifications are persisted... so my Dockerfile is... useless. Any clue?
The reason those changes are not persisted is that they are inside a volume: the Jenkins Dockerfile marks /var/jenkins_home/ as a VOLUME.
Information inside volumes is not persisted during docker build. More precisely, each build step creates a new volume based on the image's content, discarding the volume that was used in the previous build step.
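A minimal sketch reproducing this behavior with the classic (non-BuildKit) builder; the base image and path are illustrative, not taken from the Jenkins image:
FROM debian:stable-slim
VOLUME /data
RUN echo "hello" > /data/file   # written into a throwaway volume for this step only
RUN ls -alh /data               # the volume is re-created from the image: /data is empty again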
How to resolve this?
I think the best way to resolve this is to:
1. Add the files you want to modify inside jenkins_home in a different location inside the image, e.g. /var/jenkins_home_overrides/
2. Create a custom entrypoint based on, or "wrapping", the default entrypoint script, which copies the content of your jenkins_home_overrides to jenkins_home the first time the container is started.
Actually...
And just when I wrote that up: it looks like the official Jenkins image already supports this out of the box;
https://github.com/jenkinsci/docker/blob/683b0d6ed17016ee3211f247304ef2f265102c2b/jenkins.sh#L5-L23
According to the documentation, you need to add your files to the /usr/share/jenkins/ref/ directory, and those will be copied to /var/jenkins_home upon start.
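For example, a minimal Dockerfile sketch using that mechanism (config.xml is a placeholder for whatever file you want seeded into jenkins_home):
FROM jenkins:latest
# copied into /var/jenkins_home by the entrypoint on first start
COPY config.xml /usr/share/jenkins/ref/config.xml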
Also see https://issues.jenkins-ci.org/browse/JENKINS-24986