How to copy a folder in Docker to another folder? - docker

RUN cp /data/ /data/db: this command does not copy the files in /data to /data/db.
Is there an alternate way to do this?

It depends where /data is for you: already in the image, or on your host disk.
A Dockerfile RUN command executes a command in a new layer on top of the current image and commits the result.
That means /data is the one found in the image as built so far.
Not the /data on your disk.
If you want to copy from your disk to the image /data/db folder, you would need to use COPY or ADD.
At runtime, once you have an existing container, you can also use docker cp to copy files from or to that container.
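As a minimal sketch of the three cases (the paths and the container name my-container are placeholders; note that copying a directory inside the image also needs cp -r):
# copy a directory that already exists in the image (RUN only sees the image's filesystem)
RUN cp -r /data/. /data/db/
# copy from the build context on your disk into the image
COPY ./data/ /data/db/
# copy from your disk into a running container, executed on the host rather than in a Dockerfile
docker cp ./data/. my-container:/data/db/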

Related

Why can some directories in a Docker container be mounted and share files out, while others cannot

I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing: I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022#qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/conf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a Docker image that contains my own config file, so that when a directory is mounted, the config shows up in the local directory and can be modified.
I'm really confused about what happens when I start my container with this image following the official description. Here are my commands:
docker run -dp 3306:3306 \
-v /usr/local/mysql/data:/var/lib/mysql \
-v /usr/local/mysql/conf:/etc/mysql/conf.d \
--name mysql mysql:<my built tag>
As you can see, I'm trying to mount
/usr/local/mysql/conf onto /etc/mysql/conf.d in the container, which is documented as the mount location for custom config files.
I assumed that my custom config file my.cnf, which was copied into the image during docker build, would show up in my local directory /usr/local/mysql/conf,
since I had already copied it into the image, as you can see in my Dockerfile.
But it turns out that the directory stays empty, and /etc/mysql/conf.d is also overwritten by the local directory.
Before I run my container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK fine, I've been told that a volume-mounted directory overwrites the files inside the container.
But how can the empty data directory end up showing the data files from inside the container, while the empty conf directory overwrites the conf.d directory in the container?
It makes no sense.
I'm very confused, and I would really appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I'm using the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
The Docker Hub mysql container has an involved entrypoint script that checks whether the data directory is empty and, if so, initializes the database; abstracted out, it looks roughly like this:
#!/bin/sh
# (actually hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/mysql ]; then
  mysql_install_db
  # (...and start a temporary database server and run the
  # /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
Simply having a volume present doesn't cause files to be copied (with one exception, at one specific point in the lifecycle, for named volumes), so if you need to copy content from a container to the host, you either need to do it manually with docker cp or have code in the container do it for you.
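As a minimal sketch, assuming your image is tagged my-mysql (a placeholder), you could seed the host conf directory from the image once before starting the real container, so the bind mount no longer hides the image's defaults:
docker create --name mysql-seed my-mysql
docker cp mysql-seed:/etc/mysql/conf.d/. /usr/local/mysql/conf/
docker rm mysql-seed
docker run -d -p 3306:3306 \
  -v /usr/local/mysql/data:/var/lib/mysql \
  -v /usr/local/mysql/conf:/etc/mysql/conf.d \
  --name mysql my-mysql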

Have a Dockerfile overwrite a file in a volume

I have a Dockerfile which contains:
COPY config.xml /path/to/data/config.xml
And when I run the container, I use a volume that itself contains a config.xml file:
volumes:
  - "/data:/path/to/data"
When I build and run the container, I want the config.xml from the image to take priority over (and overwrite) the copy that may already exist in the mounted volume.
Is this possible?
When you add a volume to your Docker services, the data in the volume takes precedence over any existing data from the Docker image. If you want image files that can be used as defaults, you need to do the following:
Store the files in the Docker image under a predefined path, e.g. /path/to/default
Add an entrypoint to your Dockerfile; this entrypoint should take care of copying the default file from /path/to/default to the volume path /path/to/data
Dockerfile
FROM ruby:2.4.5
COPY config.xml /path/to/default/config.xml
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT [ "/docker-entrypoint.sh" ]
docker-entrypoint.sh
#!/bin/sh -e
cp -r /path/to/default/config.xml /path/to/data/
exec "$@" # or replace this with the command needed to run the container
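A minimal usage sketch, assuming the image is tagged my-app and the host directory is /data (both placeholders):
docker build -t my-app .
docker run -v /data:/path/to/data my-app
# after startup, /data/config.xml on the host has been overwritten by the image's default copy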

Misunderstanding of copying files into Docker images

Hello, could someone please help me copy a host file into my jupyter/pyspark-notebook image (I'm a Docker beginner)? I've pulled this notebook image from Docker Hub, as it is publicly available.
I've created a Dockerfile which contains this:
FROM jupyter/pyspark-notebook:latest
ADD /home/abdoulaye/Documents/M2BIGDATA/Jaziri /bin/bash
I've changed /bin/bash to . but nothing is visible.
When I execute docker build, the output looks like it copies the files.
When I go to my notebook, I cannot find the folders. I checked my snapshot to see whether I could find these copied folders, but I'm very confused.
To be clear, I have a running notebook in my Docker setup and I use it in my browser, but I cannot load data. I'd like to copy the data to a place where I can access it from the notebook.
You cannot copy using an absolute host path; the path should be relative to the Dockerfile's build context, so the path /home/abdoulaye/Documents/M2BIGDATA/Jaziri inside the Dockerfile is not correct. Copy the files into the Dockerfile's context first, and then copy them like this:
ADD M2BIGDATA/Jaziri /work
Now, first of all, you should not copy files from the host onto the path of an executable.
For instance,
FROM alpine
COPY hello.txt /bin/sh
If you copy like this, it will create problems when running commands inside the container, as sh or bash will be replaced or corrupted.
Second, you are building the Docker image with an invalid context; the context should be the directory where your Dockerfile is, so it's better to run the build from the directory where you placed the Dockerfile:
docker build -t my-jupyter .
Third, you should not run the cp command inside the container to copy files from the host into the container; instead, run docker cp from the host:
docker cp /home/abdoulaye/Documents/M2BIGDATA/Jaziri container_id:/work
It will copy your files to the /work path of the container.
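As a minimal sketch of the corrected setup, assuming the Dockerfile sits in /home/abdoulaye/Documents/M2BIGDATA (so Jaziri is inside the build context) and that the data should land in the notebook's default work directory /home/jovyan/work:
FROM jupyter/pyspark-notebook:latest
# Jaziri is given relative to the build context, not as an absolute host path
COPY Jaziri /home/jovyan/work/Jaziri
Then build from that directory:
cd /home/abdoulaye/Documents/M2BIGDATA
docker build -t my-jupyter .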

Troubleshoot directory path error in a COPY command in a Dockerfile

I am using the COPY command in my Dockerfile on top of Ubuntu 16.04. I am getting a "no such file or directory" error even though the directory is present. In the Dockerfile below, I want to copy the directory "auth", present inside the workspace directory, to the Docker image (at path /home/ubuntu) and then build the image.
FROM ubuntu:16.04
RUN apt-get update
COPY /home/ubuntu/authentication/workspace /home/ubuntu
WORKDIR /home/ubuntu/auth
A Dockerfile COPY command can only refer to files under the build context, i.e. the current location of the Dockerfile, aka .
So you have a few options now:
If it is possible to copy the /home/ubuntu/authentication/workspace/ directory content to somewhere inside your project before the build (so it is included in your build context and you can access it via COPY ./path/to/content /home/ubuntu), that works well, but sometimes you don't want that.
Instead of copying the directory, you can bind it to your container via a volume:
when you run the container, add a -v option:
docker run [....] -v /home/ubuntu/authentication/workspace:/home/ubuntu [...]
Mind that a volume is designed so that any change you make inside the container directory (/home/ubuntu) will affect the bound directory on your host side (/home/ubuntu/authentication/workspace) and vice versa.
I also found something over here: you can force the Dockerfile to accept a different context by sitting inside the /home/ubuntu/authentication/workspace/ directory and running
docker build . -f /path/to/Dockerfile
so now, inside the Dockerfile, /home/ubuntu/authentication/workspace can be referred to as the context (.)
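A minimal sketch of that last option (the image tag my-auth-image is a placeholder, and the COPY path is rewritten relative to the new context):
FROM ubuntu:16.04
RUN apt-get update
# "." is now the workspace directory passed as the build context
COPY . /home/ubuntu
WORKDIR /home/ubuntu/auth
built with:
cd /home/ubuntu/authentication/workspace
docker build -t my-auth-image -f /path/to/Dockerfile .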

Is it possible to persist a file in a docker image?

I want to prepare a base Docker image that executes a command and produces a report. I want this to be persisted within the image for the next Dockerfile. Is this possible?
What you can do is set a WORKDIR within your Dockerfile, e.g.
WORKDIR /data
and use a volume with the run command for the built image.
docker run -v /users/home/your_user/use_case_folder:/data -it your_image_tag /bin/bash
When you run your reports/predictions, you have to write them to /data; your files will then be placed on your local system. You can now use the next Dockerfile with the same volume mount, which is set as the WORKDIR within the new Dockerfile. Sharing the results within the image itself with another image is not possible as far as I know. You will always have to use an outside mounted file system, a database, or something similar.
Maybe you can find some more info here too:
https://docs.docker.com/storage/volumes/#restore-container-from-backup
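A minimal sketch of that flow, assuming the two images are tagged report-image and analysis-image (both placeholders):
# the first container writes its report into the mounted folder
docker run -v /users/home/your_user/use_case_folder:/data report-image
# a container from the next image sees the same files under its WORKDIR /data
docker run -v /users/home/your_user/use_case_folder:/data analysis-image ls /data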
