I have a Dockerfile which contains:
COPY config.xml /path/to/data/config.xml
And when I run the container, I use a volume which itself contains a config.xml file
volumes:
  - "/data:/path/to/data"
When I build and run the container, I want the config.xml from the image to take priority over (and overwrite) the copy that may already exist in the mounted volume.
Is this possible?
When you add a volume to your Docker services, the data in the volume hides any existing data from the Docker image at the mount path. If you want files from the image to act as defaults, you need to do the following:
Store the files in a fixed location in the image, e.g. /path/to/default.
Add an entrypoint to your Dockerfile; this entrypoint should take care of copying the default file from /path/to/default to the volume path /path/to/data.
Dockerfile
FROM ruby:2.4.5
COPY config.xml /path/to/default/config.xml
ENTRYPOINT [ "/docker-entrypoint.sh" ]
docker-entrypoint.sh
#!/bin/sh -e
cp /path/to/default/config.xml /path/to/data
exec "$@" # or replace this with the command needed to run the container
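The overwrite-on-start behavior of that entrypoint can be checked without Docker; this sketch stands temp directories in for the image's /path/to/default and the mounted /path/to/data (all names and file contents are illustrative):

```shell
# Simulate the image's baked-in default and a stale copy in the volume
default_dir=$(mktemp -d)    # stands in for /path/to/default
data_dir=$(mktemp -d)       # stands in for the mounted /path/to/data

echo "image version" > "$default_dir/config.xml"
echo "old volume copy" > "$data_dir/config.xml"

# Entrypoint logic: unconditionally overwrite the volume's copy
cp "$default_dir/config.xml" "$data_dir/config.xml"

cat "$data_dir/config.xml"   # prints "image version"
```

Because the copy is unconditional, any edits made to the file in the volume are lost on every container start; that is exactly the "image takes priority" behavior asked for.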
Related
I would like to have a docker volume to persist data. The persisted data can be accessed by different containers based on different images.
It is not a host volume. It is a volume listed in volumes panel of Docker Desktop.
For example, the name of the volume is theVolume which is mounted at /workspace. The directory I need to inspect is /workspace/project.
I need to check whether a specific directory is available inside the volume. If it is not, create the directory, else leave it as is.
Is it possible to do this from within a Dockerfile or compose file?
It's possible to do this in an entrypoint wrapper script. This runs as the main container process, so it's invoked after the volume is mounted in the container. The script isn't aware of what specific thing might be mounted on /workspace, so this will work whether you've mounted a named volume, a host directory, or nothing at all. It does need to make sure to actually start the main container command when it's done.
#!/bin/sh
# entrypoint.sh
# Create the project directory if it doesn't exist
if [ ! -d /workspace/project ]; then
mkdir /workspace/project
fi
# Run the main container command
exec "$@"
Make sure this file is executable on your host system (run chmod +x entrypoint.sh before checking it in). Make sure it's included in your Docker image, and then make this script be the image's ENTRYPOINT.
COPY entrypoint.sh ./ # if a previous `COPY ./ ./` doesn't already get it
ENTRYPOINT ["./entrypoint.sh"] # must use JSON-array syntax
CMD the main container command # same as you have now
(If you're using ENTRYPOINT for the main container command, you may need to change it to CMD for this to work; if you've split the interpreter into its own ENTRYPOINT line, combine the whole container command into a single CMD.)
A Dockerfile RUN command happens before the volume is mounted (or maybe even exists at all) and so it can't modify the volume contents. A Compose file doesn't have any way to run commands, beyond replacing the image's entrypoint and command.
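For completeness, the only Compose-level hook is that same entrypoint/command override; a sketch of wiring the wrapper up in a compose file (the image name and command are placeholders):

```yaml
services:
  app:
    image: my-app-image            # placeholder for your image
    volumes:
      - theVolume:/workspace
    entrypoint: ["./entrypoint.sh"]
    # command: stays whatever you run today

volumes:
  theVolume:
```

If the script is already baked into the image as its ENTRYPOINT, the `entrypoint:` line is redundant and can be dropped.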
I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing: I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022@qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/donf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a Docker image that contains my own config file, so that when a volume is mounted, the file shows up in the local directory and can be modified.
I'm really confused by what happens when I start my container from this image following the official description. Here are my commands:
docker run -dp 3306:3306 \
-v /usr/local/mysql/data:/var/lib/mysql \
-v /usr/local/mysql/conf:/etc/mysql/conf.d \
--name mysql mysql:<my builded tag>
I'm trying to mount /usr/local/mysql/conf onto /etc/mysql/conf.d in the container, which is documented as the place to mount custom config files.
I assumed that my custom config file my.cnf, which was copied into the image during docker build, would show up in my local directory /usr/local/mysql/conf, since I had already copied it into the image, as you can see in my Dockerfile.
But it turns out that the local directory stays empty, and /etc/mysql/conf.d is also overwritten by the local directory.
Before I ran my container, both /usr/local/mysql/conf and /usr/local/mysql/data were completely empty.
OK, fine: I've been told that a volume-mounted directory overwrites the files inside the container.
But then how can the empty data directory end up showing the data files from inside the container, while the empty conf directory overwrites the conf.d directory in the container?
It makes no sense.
I'm very confused, and I would appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I'm using the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the image's content at that path and replaces it with the host directory. If the host directory is empty, the container directory will be that same empty directory at container startup time.
The Docker Hub mysql image has an involved entrypoint script that checks whether the data directory is empty and, if so, initializes the database. Abstracted out:
#!/bin/sh
# (actually in hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/mysql ]; then
mysql_install_db
# (...and start a temporary database server and run the
# /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
The mere presence of a volume doesn't cause files to be copied (with one exception, at one specific point in the lifecycle, for named volumes). So if you need to copy content from a container to the host, you either have to do it manually with docker cp or have code in the container do it.
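That first-boot guard is easy to prototype outside Docker; this sketch stands a temp directory in for /var/lib/mysql, with a placeholder file in place of the real system tables (all names and contents are illustrative):

```shell
datadir=$(mktemp -d)    # stands in for /var/lib/mysql

initialize() {          # stands in for mysql_install_db + the initdb scripts
  mkdir -p "$datadir/mysql"
  echo "system tables" > "$datadir/mysql/user.frm"
}

start() {               # the entrypoint's guard, run on every container start
  if [ ! -d "$datadir/mysql" ]; then
    initialize
  fi
}

start                                    # first start: empty datadir, so it initializes
echo "user edit" >> "$datadir/mysql/user.frm"
start                                    # "restart": datadir is populated, so it is skipped

cat "$datadir/mysql/user.frm"            # both lines survive the restart
```

This is why the (initially empty) data bind mount ends up populated: the container code writes into it on first start, whereas nothing in the image ever writes into /etc/mysql/conf.d.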
I have been trying to read up on Docker volumes for a while now, but something still seems to be missing.
Take this Dockerfile: (taken from docs)
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
If I build and run this simply as docker run <img>, then a new volume is created (its name can be found using docker inspect <containerid>). The contents of /var/lib/docker/volumes/<volumeid> is just a file greeting containing hello world, as expected.
So far so good. Now, say I wanted, as a user, to place this volume in a custom location. I would run docker run -v /host/path/:/myvol <img>. This no longer works: /host/path is created (if it does not exist), but it is empty.
So, the question is, how do I put some files in a Volume in Dockerfile, whilst allowing the user to choose where they put this volume? This seems to me to be one of the reasons to use Volume directive in the first place.
So, the question is, how do I put some files in a Volume in
Dockerfile, whilst allowing the user to choose where they put this
volume?
You don't. A volume is created at runtime, while your Dockerfile is consulted at build time.
If you want to be able to pre-populate a volume at an arbitrary location when your container runs, allow the user to provide the path to the volume in an environment variable and then have an ENTRYPOINT script that copies the files into the volume when the container starts.
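That copy-on-start entrypoint can be sketched in plain shell; here temp directories simulate the image's defaults directory and the user's volume, and VOLUME_PATH stands in for the user-supplied environment variable (all names are made up for illustration):

```shell
# Simulated layout: defaults baked into the image, user-chosen empty volume
defaults=$(mktemp -d)      # stands in for e.g. /opt/defaults in the image
volume=$(mktemp -d)        # stands in for the mounted volume
echo "default config" > "$defaults/config.properties"

VOLUME_PATH="$volume"      # in real use: docker run -e VOLUME_PATH=... -v ...

# Entrypoint logic: seed the volume only if it's empty,
# so edits the user makes there survive later restarts
if [ -z "$(ls -A "$VOLUME_PATH")" ]; then
  cp -r "$defaults"/. "$VOLUME_PATH"/
fi
# (a real entrypoint would finish with: exec "$@")
```

Seeding only when empty, rather than unconditionally, is the usual choice here: it makes the image files defaults instead of overrides.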
An alternative is to require the use of a specific path, and then take advantage of the fact that if the user mounts a new, empty named volume on that path, Docker will copy the contents of that directory into the volume. For example, if I have:
FROM alpine
RUN mkdir /data; echo "This is a test" > /data/datafile
VOLUME /data
And I build an image testimage from that, and then run:
docker run -it testimage sh
I see that my volume in /data contains the file(s) from the underlying filesystem:
/ # ls /data
datafile
This won't work if I mount a host path on that location, for example if I do:
host# mkdir /tmp/empty
host# docker run -it -v /tmp/empty:/data testimage sh
Then the resulting path will be empty:
/ # ls /data
/ #
This is another situation in which an ENTRYPOINT script could be used to populate the target directory.
I want to make a Docker image that keeps my application configuration, so that when something changes I only have to change the config container and don't have to build a new image for my application.
Here is my Dockerfile:
FROM scratch
RUN mkdir -p /config
ADD config.properties /config
VOLUME /config
ENTRYPOINT /bin/true
But it can't even create the directory. Is there a best practice for such things?
Keep in mind that the scratch image is literally completely empty. You cannot create the directory, because there is no mkdir executable (nor even a shell) in that image.
To create the directory anyway, you can exploit the fact that the ADD statement in a Dockerfile also implicitly creates directories, so the following Dockerfile should be enough:
FROM scratch
ADD config.properties /config/config.properties
VOLUME /config
Regarding the ENTRYPOINT; there's also no /bin/true in your image. This means that the container will not start (i.e. exit immediately with exec: "/bin/true": stat /bin/true: no such file or directory). However, as you intend to use this image for a data-only container, that's probably OK. Simply use docker create instead of docker run to create the container without starting it:
docker build -t config_image .
docker create --name config config_image
RUN cp /data/ /data/db: this command does not copy the files in /data to /data/db.
Is there an alternate way to do this?
It depends where /data is for you: already in the image, or on your host disk.
A Dockerfile RUN command executes its command in a new layer on top of the current image and commits the result.
That means /data is the one found in the image as built so far,
not the /data on your disk.
If you want to copy from your disk to the image /data/db folder, you would need to use COPY or ADD.
At runtime, once you have a running container, you can also use docker cp to copy files from or to the container.
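As an aside, the cp invocation itself also needs -r to copy a directory, and a trailing /. to copy the directory's contents rather than the directory itself. A quick local check, independent of Docker (paths are throwaway temp directories):

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
echo "record" > "$src/file.txt"

# cp without -r refuses to copy a directory at all;
# "$src/." copies the *contents* of src rather than src itself
cp -r "$src/." "$dst/"

ls "$dst"   # file.txt
```

The same form works in a Dockerfile, e.g. RUN cp -r /data/. /data/db, once /data actually exists in the image.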