Docker "config" Container / Docker image

I want to make a docker image that keeps my application configuration, so that when something changes I only have to change the config container and don't have to build a new image for my application.
Here is my Dockerfile:
FROM scratch
RUN mkdir -p /config
ADD config.properties /config
VOLUME /config
ENTRYPOINT /bin/true
But it can't even create the directory. Is there a best practice for such things?

Keep in mind that the scratch image is literally completely empty. You cannot create the directory, because there is neither a shell nor a mkdir executable in that image, both of which RUN requires.
To create the directory anyway, you can exploit the fact that the ADD statement in a Dockerfile also implicitly creates directories, so the following Dockerfile should be enough:
FROM scratch
ADD config.properties /config/config.properties
VOLUME /config
Regarding the ENTRYPOINT: there's also no /bin/true in your image. This means the container will not start (it exits immediately with exec: "/bin/true": stat /bin/true: no such file or directory). However, since you intend to use this image for a data-only container, that's probably OK. Simply use docker create instead of docker run to create the container without starting it:
docker build -t config_image .
docker create --name config config_image
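To actually consume the configuration, an application container can mount the config container's volumes with --volumes-from (a sketch; myapp is a placeholder application image name):

```shell
# Mount all volumes declared by the `config` container into the application container
docker run --volumes-from config myapp
```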

Related

How to check content of a volume from Dockerfile or compose file?

I would like to have a docker volume to persist data. The persisted data can be accessed by different containers based on different images.
It is not a host volume. It is a volume listed in volumes panel of Docker Desktop.
For example, the name of the volume is theVolume which is mounted at /workspace. The directory I need to inspect is /workspace/project.
I need to check whether a specific directory is available inside the volume. If it is not, create the directory, else leave it as is.
Is it possible to do this from within a Dockerfile or compose file?
It's possible to do this in an entrypoint wrapper script. This runs as the main container process, so it's invoked after the volume is mounted in the container. The script isn't aware of what specific thing might be mounted on /workspace, so this will work whether you've mounted a named volume, a host directory, or nothing at all. It does need to make sure to actually start the main container command when it's done.
#!/bin/sh
# entrypoint.sh
# Create the project directory if it doesn't exist
if [ ! -d /workspace/project ]; then
  mkdir /workspace/project
fi
# Run the main container command
exec "$@"
Make sure this file is executable on your host system (run chmod +x entrypoint.sh before checking it in). Make sure it's included in your Docker image, and then make this script be the image's ENTRYPOINT.
COPY entrypoint.sh ./ # if a previous `COPY ./ ./` doesn't already get it
ENTRYPOINT ["./entrypoint.sh"] # must use JSON-array syntax
CMD the main container command # same as you have now
(If you're using ENTRYPOINT for the main container command, you may need to change it to CMD for this to work; if you've split the interpreter into its own ENTRYPOINT line, combine the whole container command into a single CMD.)
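The wrapper's check-then-create logic is easy to sanity-check locally without Docker (a sketch; ./workspace-demo stands in for the mounted /workspace):

```shell
#!/bin/sh
# Stand-in for the mounted /workspace; inside the container this would be /workspace
WORKSPACE="${WORKSPACE:-./workspace-demo}"
mkdir -p "$WORKSPACE"

# Same check-then-create logic as the entrypoint wrapper
if [ ! -d "$WORKSPACE/project" ]; then
  mkdir "$WORKSPACE/project"
fi
echo "project directory ready: $WORKSPACE/project"
# (in the real wrapper, this is where `exec "$@"` hands off to the main command)
```

Running it twice is safe: the second run finds the directory already present and leaves it alone.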
A Dockerfile RUN command happens before the volume is mounted (or maybe even exists at all) and so it can't modify the volume contents. A Compose file doesn't have any way to run commands, beyond replacing the image's entrypoint and command.
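For completeness, replacing the entrypoint and command from Compose looks like this (a sketch, assuming the wrapper script is already baked into the image; the image and command names are placeholders):

```yaml
services:
  app:
    image: my-app                      # placeholder image name
    entrypoint: ["./entrypoint.sh"]
    command: ["the-main-command"]
    volumes:
      - theVolume:/workspace
volumes:
  theVolume:
```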

Why can some directories in a Docker container be mounted and share files out, while others cannot

I'm a new learner of Docker. I came across a problem while trying to build my own Docker image.
Here's the thing: I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022#qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/donf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a docker image which contains my own config file and when mounted it would be showed in the local directory and can be modified.
I'm really confused that when I start my container with this image like the official description and also here is my commands:
docker run -dp 3306:3306 \
  -v /usr/local/mysql/data:/var/lib/mysql \
  -v /usr/local/mysql/conf:/etc/mysql/conf.d \
  --name mysql mysql:<my builded tag>
As you can see, I'm trying to mount /usr/local/mysql/conf to /etc/mysql/conf.d in the container, which the official image documents as the location for custom config files.
I expected that my custom config file my.cnf, which was copied into the image during docker build, would show up in my local directory /usr/local/mysql/conf.
But it turns out that the directory is empty, and /etc/mysql/conf.d inside the container is also overwritten by the (empty) local directory.
Before I run the container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK, fine: I've been told that a mounted host directory overwrites the files inside the container.
But then how can the empty data directory end up showing the data files from inside the container, while the empty conf directory overwrites the conf.d directory in the container?
It makes no sense to me. I'd really appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I'm using the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
The Docker Hub mysql image has an involved entrypoint script that checks whether the data directory is empty and, if so, initializes the database. Abstracted out:
#!/bin/sh
# (actually hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/mysql ]; then
  mysql_install_db
  # (...and start a temporary database server and run the
  # /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
The mere presence of a volume doesn't cause files to be copied out of the image (with one exception: an empty named volume is populated from the image content at one specific point in the lifecycle, when it is first mounted). So if you need to copy content from a container to the host, you either need to do it manually with docker cp, or have the container code do it itself.
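For example, to copy the baked-in my.cnf out to the host once, without even starting the container (a sketch; the container name mysql-tmp is a placeholder, and the path assumes the file ended up in the intended /etc/mysql/conf.d/):

```shell
docker create --name mysql-tmp mysql:<my builded tag>
docker cp mysql-tmp:/etc/mysql/conf.d/my.cnf /usr/local/mysql/conf/
docker rm mysql-tmp
```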

docker-compose and listing volume contents

Maybe I'm just not understanding correctly but I'm trying to visually verify that I have used volumes properly.
In my docker-compose I'd have something like
some-project:
  volumes:
    - /some-local-path/some-folder:/v-test
I can verify its contents via "ls -la /some-local-path/some-folder".
In some-project's Dockerfile I'd have something like
RUN ls -la /v-test
which returns "No such file or directory".
Is this the correct way to use it? If so, why can't I view the contents from inside the container?
Everything in the Dockerfile runs before anything outside the build: block in the docker-compose.yml file is considered. The image build doesn't see volumes or environment variables that get declared only in docker-compose.yml, and it can't access other services.
In your example, first the Dockerfile tries to ls the directory, then Compose will start the container with the bind mount.
If you're just doing this for verification, you can docker-compose run a container with most of its settings from the docker-compose.yml file, but an alternate command:
docker-compose run some-project \
ls -la /v-test
(Doing this requires that the image's CMD is a well-formed shell command; either it has no ENTRYPOINT, or the ENTRYPOINT is a wrapper script that ends in exec "$@" to run the CMD. If you only have ENTRYPOINT, change it to CMD; if you've split the command across both directives, consolidate it into a single CMD line.)

Docker: How to copy a file from one folder in a container to another?

I want to copy my compiled war file to the Tomcat deployment folder in a Docker container. Since COPY and ADD deal with moving files from the host into the container, I tried
RUN mv /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
as a modification to the answer for this question. But I am getting the error
mv: cannot stat '/tmp/projects/myproject/target/myproject.war': No such file or directory
How can I copy from one folder to another in the same container?
You can create a multi-stage build:
https://docs.docker.com/develop/develop-images/multistage-build/
Build the .war file in the first stage and name the stage (e.g. build), like this:
FROM my-fancy-sdk as build
RUN my-fancy-build #result is your myproject.war
Then in the second stage:
FROM my-fancy-sdk as build2
COPY --from=build /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
A better solution would be to use volumes to bind individual war files into the Docker container, as done here.
Why your command fails
The command you are running tries to access files that are outside the build context of the Dockerfile. When you build the image with docker build ., the daemon sends the context to the builder, and only the files in that context are accessible during the build. In docker build ., the context is ., the current directory. Therefore the build will not be able to access /tmp/projects/myproject/target/myproject.war.
Copying from inside the container
Another option would be to copy while you are inside the container. First use volumes to mount the local folder inside the container, then go inside the container with docker exec -it <container_name> bash and copy the required files.
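A sketch of that approach (the container name mycontainer and the mount path are placeholders):

```shell
# Start the container with the folder containing the .war mounted in
docker run -d --name mycontainer -v /tmp/projects:/tmp/projects <image_name>

# Copy the file into place from inside the running container
docker exec mycontainer \
  cp /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
```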
Recommendation
But still, I highly recommend using
docker run -v "/tmp/projects/myproject/target/myproject.war:/usr/local/tomcat/webapps/myproject.war" <image_name>

Docker mount happens before or after entrypoint execution

I'm building a Docker image to run my Spring Boot based application. I want users to be able to supply a runtime properties file by mounting the folder containing application.properties into the container. Here is my Dockerfile:
FROM java:8
RUN mkdir /app
RUN mkdir /app/config
ADD myapp.jar /app/
ENTRYPOINT ["java","-jar","/app/myapp.jar"]
When kicking off container, I run this,
docker run -d -v /home/user/config:/app/config myapp:latest
where /home/user/config contains the application.properties I want the jar file to pick up during run time.
However, this doesn't work: the app doesn't pick up the mounted properties file; it uses the default one packaged inside the jar. But when I exec into the started container and manually run the entrypoint command again, it works as expected and picks up the file I mounted. So I'm wondering: is this related to how mounts interact with the entrypoint? Or did I just not write the Dockerfile correctly for this case?
Spring Boot searches for application.properties inside a /config subdirectory of the current directory (among other locations). In your case the current directory is / (Docker's default), so you need to change it to /app. To do that, add
WORKDIR /app
before the ENTRYPOINT line.
And to answer your original question: mounts are done before anything inside the container is run.
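Putting it together, the Dockerfile from the question would become (a sketch; same contents as the original, lightly condensed, with only the WORKDIR line added):

```dockerfile
FROM java:8
RUN mkdir -p /app/config
ADD myapp.jar /app/
# Spring Boot will now look in ./config, i.e. /app/config/application.properties
WORKDIR /app
ENTRYPOINT ["java","-jar","/app/myapp.jar"]
```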
