Docker copy container volume files to host on first run

When creating a container using docker run, is there a way to automatically copy files from a docker volume to the host directory it is mounted on?
When running
docker run -d -v /localpath:/containerpath image
the files found in /containerpath are not copied to my /localpath directory.
Is there a way to achieve this? The image contains a directory that needs to be accessible on the host machine for local development.

What I did not know was that when a file is added to a volume from inside the container's shell, it is also created in the host directory. So after a lot of debugging and testing I managed to achieve what I wanted.
For clarification, if anyone needs this in the future: the goal is to automatically build a Docker image that clones a git repo and exposes the host public_html directory as a volume, with the files already copied to the host and ready for editing.
# Create Volume for the directory
VOLUME /var/www/html
COPY scripts/start.sh /start.sh
RUN chmod -v +x /start.sh
CMD ["/start.sh"]
start.sh contains the code to clone the repo if the directory is empty
#!/bin/bash
if [ "$(ls -A /var/www/html)" ]; then
    echo "Directory already cloned"
else
    echo "Repo files do not exist"
    git clone ...
fi
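With that in place, a run along these lines should populate the host directory on the first start (the image tag and host path here are illustrative, not from the original post):
docker build -t myapp .
docker run -d -v /home/me/public_html:/var/www/html myapp
The bind-mounted directory starts out empty, so start.sh clones into it, and the cloned files land on the host ready for editing.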
Thanks for the help

Related

Unable to copy local directory into Docker container

I'm very new to Docker. I want to copy a local directory to a Docker container, but I get this error:
file not found in build context or excluded by .dockerignore: stat
~/.ssh: file does not exist
Here is the COPY line:
COPY ~/.ssh /root/.ssh
I can confirm that the ~/.ssh it says does not exist is actually there.
I need to do this because my application throws this error:
java.io.FileNotFoundException: /root/.ssh/id_rsa (No such file or
directory)
That is when I realised I need to copy the keys into the container.
In my app, I need to use id_rsa and known_hosts to connect to an SFTP server.
Please help. Thanks a lot!
As far as I know, you can only use files from the directory where your Dockerfile is.
You cannot ADD or COPY files from outside the build context local to the Dockerfile.
The solution is either to mount a volume with docker run or docker-compose (which you did already), or to copy the ~/.ssh/ directory into your Dockerfile's directory and run docker build again.
Let's say we're in /home/saeed/docker/ where your Dockerfile is located, and it has the following contents:
FROM nginx:alpine
COPY .ssh /root/.ssh
Before running docker build, copy the required directory into the build directory:
cp -r ~/.ssh .
Then you can build and run your image as normal.
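For example, run from /home/saeed/docker/ (the image tag here is illustrative):
cp -r ~/.ssh .
docker build -t my-sftp-app .
Since the error message also mentions .dockerignore, make sure no .dockerignore entry excludes the copied .ssh directory.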
I have not found the reason yet, but I found a workaround: mounting the volume in docker-compose instead.
- ~/.ssh:/root/.ssh
But if someone could find the solution to my COPY problem I'm willing to learn it!

Why can some directories in a Docker container be mounted and share files out while others cannot

I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing: I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022#qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/conf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a Docker image that contains my own config file, so that when the directory is mounted, the file shows up in the local directory and can be modified.
I'm really confused, because I start my container with this image just as the official description says; here are my commands:
docker run -dp 3306:3306 \
    -v /usr/local/mysql/data:/var/lib/mysql \
    -v /usr/local/mysql/conf:/etc/mysql/conf.d \
    --name mysql mysql:<my built tag>
As you can see, I'm trying to mount /usr/local/mysql/conf onto /etc/mysql/conf.d in the container, which is documented as the mount location for custom config files.
I supposed that my custom config file my.cnf, which was copied into the image during docker build, would show up in my local directory /usr/local/mysql/conf, since I already copied it into the image, as my Dockerfile shows.
But it turns out that the directory is empty, and /etc/mysql/conf.d is also overwritten by the local directory.
Before I run my container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK, fine: I've been told that a volume-mounted directory overwrites the files inside the container.
But then how can the empty data directory end up showing the data files from inside the container, while the empty conf directory overwrites the conf.d directory in the container?
It makes no sense.
I am very confused and would really appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I use the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
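You can see this with any image; a quick demonstration (busybox is just a convenient small image, not something from the question):
mkdir empty
docker run --rm -v "$PWD/empty:/etc" busybox ls /etc
# prints nothing: the empty bind mount hides everything the image shipped in /etc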
The Docker Hub mysql image has an involved entrypoint script that checks whether the data directory is empty and, if so, initializes the database. Abstracted out:
#!/bin/sh
# (actually hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/data/mysql ]; then
    mysql_install_db
    # (...and start a temporary database server and run the
    # /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
The mere presence of a volume doesn't cause files to be copied (with one exception: an empty named volume is populated from the image content the first time it is mounted). So if you need content from a container to appear on the host, you either copy it manually with docker cp or have the container code do the copying itself.
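A minimal sketch of the docker cp route, assuming an image built from the Dockerfile above and tagged my-mysql (the tag is illustrative):
docker create --name tmp-mysql my-mysql
docker cp tmp-mysql:/etc/mysql/conf.d/my.cnf /usr/local/mysql/conf/
docker rm tmp-mysql
docker cp works against a created-but-not-started container, so nothing has to run for the copy.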

Docker volume file permissions

I have created a docker container with this command:
docker run -d -p 20001:80 -v /home/me/folder1/:/usr/local/apache2/htdocs/ httpd:2.4
This container contains scripts which create files and directories in the /usr/local/apache2/htdocs/ folder.
I can see these files on the host computer in the /home/me/folder1/ folder.
I have tried to open one of these files because I want to write something.
I cannot do that because I do not have write permission on these files: they are owned by the root user.
What can I do to make these files writable by the "me" user? I want it to happen automatically.
Thanks a lot
You have to do
sudo chmod +x nameofscript.sh
With this command, executed as root, you make the script executable for all users.
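Note that chmod +x sets the execute bit, while the question asks about write access. A hedged sketch for that case (the chown approach is my addition, not from the answer above; the path is from the question):
sudo chown -R "$(id -u)":"$(id -g)" /home/me/folder1/
To make it automatic, a common alternative is docker run --user "$(id -u)":"$(id -g)" ... so the container creates files as your host user, provided the image works when run as non-root.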

Get build files to persist on host after docker-compose build is run

I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine, ./dist, to a folder in the container, /app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:/app/dist # I'm expecting files changed or added in the container's /app/dist to be reflected in the host's ./dist folder
Inside the Dockerfile, I build some files with an NPM script that I want to make available on the host machine once the build is finished. I'm also touching a new file, /app/dist/test.md, as a simple test to see whether the file ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app, making the main container command be cp:
sudo docker build -t myimage .
sudo docker run --rm \
    -v "$PWD/dist:/out" \
    myimage \
    cp -a /app/dist/. /out
Or, if you specifically wanted to use docker cp:
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that any of these sequences is more complex than just installing Node via a local package manager, and they require administrator permissions (the same technique could overwrite any host file, including /etc/shadow with its encrypted passwords).

How to replace only the files that differ, instead of the whole directory, when using a data volume container as shared storage for other containers

Requirement: when a new version of a J2EE application goes into production, because the development and production environments are not the same, the DBA has to replace some configuration files in the war package; the files that need replacing typically contain sensitive data (like database accounts and passwords).
I think it is a good idea to create a data volume container holding the configuration files specific to production. That way, the configuration files can be shared between containers (applications).
Let's say I have a J2EE application running in Docker on Tomcat 8; the Dockerfile is as follows:
FROM tomcat:8
WORKDIR $CATALINA_HOME
RUN mkdir -p /etc/foo
RUN touch /etc/foo/a
RUN touch /etc/foo/b
RUN touch /etc/foo/c
RUN touch /etc/foo/d
RUN echo "a" >> /etc/foo/a
RUN echo "b" >> /etc/foo/b
RUN echo "c" >> /etc/foo/c
RUN echo "d" >> /etc/foo/d
CMD ["catalina.sh", "run"]
The DBA has to replace files b and c before the application goes into production; to that end, we have a second Dockerfile as follows:
FROM centos:6.8
RUN mkdir -p /etc/foo
RUN touch /etc/foo/a
RUN touch /etc/foo/d
RUN echo "sub a" >> /etc/foo/a
RUN echo "sub d" >> /etc/foo/d
COPY ./run.sh /root
RUN chmod 755 /root/run.sh
CMD ["/root/run.sh"]
To test whether the data volume container would satisfy my requirement, I ran the following commands:
docker create -v /etc/foo --name configstore centos:6.8 /bin/bash
docker run -d --volumes-from configstore --name testsubcontainer tomcat:8
And finally I found that the tomcat container has a and d in /etc/foo; b and c are gone.
Q1: How can I replace only the files that differ, instead of the whole directory, when using a data volume container as shared storage for other containers?
Q2: Is there any better solution that satisfies my requirement?
Q1: Using volumes as you are, this is not possible. When you mount a volume onto a directory in a container, everything inside that directory is replaced by the contents of the volume. Docker isn't mapping individual files; it's mounting the volume at a certain point in the container filesystem.
Q2: Have you explored ENV or ARG as a means to set values at runtime?
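For Q2, a minimal sketch of the ENV idea, keeping sensitive values out of the image entirely (the variable names are illustrative, and the application would have to read them at startup):
docker run -d \
    -e DB_USER=prod_user \
    -e DB_PASSWORD=changeme \
    tomcat:8
ARG, by contrast, only exists at build time (docker build --build-arg DB_USER=prod_user .), so it suits values baked into the image rather than swapped at deployment.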
