Docker mount happens before or after entrypoint execution?

I'm building a Docker image to run my Spring Boot based application. I want the user to be able to supply a runtime properties file by mounting the folder containing application.properties into the container. Here is my Dockerfile:
FROM java:8
RUN mkdir /app
RUN mkdir /app/config
ADD myapp.jar /app/
ENTRYPOINT ["java","-jar","/app/myapp.jar"]
When starting the container, I run this:
docker run -d -v /home/user/config:/app/config myapp:latest
where /home/user/config contains the application.properties I want the jar file to pick up at run time.
However, this doesn't work: the app doesn't pick up the mounted properties file; it uses the default one packaged inside the jar. But when I exec into the started container and manually run the entrypoint command again, it works as expected and picks up the file I mounted. So I'm wondering: is this related to how mounts work with the entrypoint, or did I just not write the Dockerfile correctly for this case?

Spring Boot searches for application.properties in a /config subdirectory of the current working directory (among other locations). In your case, the current directory is / (Docker's default), so you need to change it to /app. To do that, add
WORKDIR /app
before the ENTRYPOINT line.
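With that change, the Dockerfile would look something like this (a minimal sketch of the same image; merging the two mkdir calls is just a tidy-up):
FROM java:8
RUN mkdir -p /app/config
ADD myapp.jar /app/
WORKDIR /app
ENTRYPOINT ["java","-jar","/app/myapp.jar"]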
And to answer your original question: mounts are done before anything inside the container is run.

Related

How to check content of a volume from Dockerfile or compose file?

I would like to have a docker volume to persist data. The persisted data can be accessed by different containers based on different images.
It is not a host volume; it is a volume listed in the Volumes panel of Docker Desktop.
For example, the name of the volume is theVolume which is mounted at /workspace. The directory I need to inspect is /workspace/project.
I need to check whether a specific directory is available inside the volume. If it is not, create the directory, else leave it as is.
Is it possible to do this from within a Dockerfile or compose file?
It's possible to do this in an entrypoint wrapper script. This runs as the main container process, so it's invoked after the volume is mounted in the container. The script isn't aware of what specific thing might be mounted on /workspace, so this will work whether you've mounted a named volume, a host directory, or nothing at all. It does need to make sure to actually start the main container command when it's done.
#!/bin/sh
# entrypoint.sh
# Create the project directory if it doesn't exist
if [ ! -d /workspace/project ]; then
  mkdir /workspace/project
fi
# Run the main container command
exec "$@"
Make sure this file is executable on your host system (run chmod +x entrypoint.sh before checking it in). Make sure it's included in your Docker image, and then make this script be the image's ENTRYPOINT.
# if a previous `COPY ./ ./` doesn't already get it
COPY entrypoint.sh ./
# must use JSON-array syntax
ENTRYPOINT ["./entrypoint.sh"]
# the main container command, same as you have now
CMD the main container command
(If you're using ENTRYPOINT for the main container command, you may need to change it to CMD for this to work; if you've split the interpreter into its own ENTRYPOINT line, combine the whole container command into a single CMD.)
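For example, a hedged before-and-after sketch of that last case (the java command here is purely illustrative):
# before: interpreter split into its own ENTRYPOINT
ENTRYPOINT ["java", "-jar"]
CMD ["myapp.jar"]
# after: the whole command moves into CMD, freeing ENTRYPOINT for the wrapper
ENTRYPOINT ["./entrypoint.sh"]
CMD ["java", "-jar", "myapp.jar"]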
A Dockerfile RUN command happens before the volume is mounted (or maybe even exists at all) and so it can't modify the volume contents. A Compose file doesn't have any way to run commands, beyond replacing the image's entrypoint and command.
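For reference, a minimal Compose sketch of the named-volume setup from the question that this wrapper would handle (the service and image names are assumptions):
services:
  app:
    image: myapp:latest
    volumes:
      - theVolume:/workspace
volumes:
  theVolume:
Compose only mounts the volume and starts the image's entrypoint; the directory check itself still has to live in the wrapper script.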

Why can some directories in a Docker container be mounted and share files out while others cannot?

I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing. I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022#qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/donf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a Docker image that contains my own config file, so that when a directory is mounted, the file shows up in the local directory and can be modified.
I'm really confused about what happens when I start my container from this image following the official description. Here are my commands:
docker run -dp 3306:3306 \
-v /usr/local/mysql/data:/var/lib/mysql \
-v/usr/local/mysql/conf:/etc/mysql/conf.d \
--name mysql mysql:<my builded tag>
I'm trying to mount /usr/local/mysql/conf onto /etc/mysql/conf.d in the container, which is documented as the location for custom config files.
I supposed that my custom config file my.cnf, which was copied into the image during docker build, would show up in my local directory /usr/local/mysql/conf, since I already copied it into the image, as you can see in my Dockerfile.
But it turns out that the directory is empty, and /etc/mysql/conf.d inside the container is also overwritten by the local directory.
Before I run my container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK, fine, I've been told that a mounted directory overwrites the files inside the container.
But then how can the empty data directory end up showing the data files from the container, while the empty conf directory overwrites the conf.d directory in the container?
It makes no sense to me.
I'm very confused and would appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I'm using the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
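A quick way to see this behavior for yourself (the paths here are only illustrative):
mkdir /tmp/empty
docker run --rm -v /tmp/empty:/etc/mysql/conf.d mysql:5.7 ls /etc/mysql/conf.d
# prints nothing: the empty host directory hides whatever the image shipped there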
The Docker Hub mysql image has an involved entrypoint script that checks whether the data directory is empty and, if so, initializes the database. Abstracted out, it looks something like:
#!/bin/sh
# (actually hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/data/mysql ]; then
  mysql_install_db
  # (...and start a temporary database server and run the
  # /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
The mere presence of a volume doesn't cause files to be copied (with one exception, at one specific point in the lifecycle, for named volumes), so if you need to copy content from a container to the host, you either need to do it manually with docker cp or have the container code do it.
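For example, a hedged way to pull the config file out of the image by hand (the container and image names, and the exact file path, are assumptions based on the Dockerfile above):
# create a stopped container from the image, copy the file out, then clean up
docker create --name tmp-mysql myimage:tag
docker cp tmp-mysql:/etc/mysql/conf.d/my.cnf /usr/local/mysql/conf/
docker rm tmp-mysql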

How to copy a file from the host into a container while starting?

I am trying to build a Docker image using a Dockerfile. My purpose is to copy a file into a specific folder when I run the docker run command.
This is my Dockerfile:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
CMD ["/usr/src/myapp"]
CMD ls /usr/src/myapp
After building my image without any error (using the docker build command), I tried to run my new image:
docker run myjavaimage MainClass.java
I got this error: cp: missing destination file operand after ‘MainClass.java’
How can I resolve this? Thanks.
I think you want this Dockerfile:
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY MainClass.java .
RUN javac MainClass.java
ENV CLASSPATH=/usr/src/myapp
CMD java MainClass
When you docker build this image, it COPYs your Java source file from your local directory into the image, compiles it, and sets some metadata telling the JVM where to find the resulting .class files. Then when you launch the container, it will run the single application you've packaged there.
It's common enough to use a higher-level build tool like Maven or Gradle to compile multiple files into a single .jar file. Make sure to COPY all of the source files you need in before running the build. In Java it seems to be common to build the .jar file outside of Docker and just COPY that in without needing a JDK, and that's a reasonable path too.
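If you go that prebuilt-jar route, a minimal sketch of the Dockerfile might look like this (the jar path assumes a Maven-style target/ directory, and the jar name is an assumption):
FROM openjdk:7
WORKDIR /usr/src/myapp
# copy in the jar built on the host (e.g. with mvn package)
COPY target/myapp.jar ./myapp.jar
CMD ["java", "-jar", "myapp.jar"]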
In the Dockerfile you show, Docker combines ENTRYPOINT and CMD into a single command and runs that command as the single main process of the container. If you provide a command on the docker run command line, it overrides CMD but does not override ENTRYPOINT. You only get one ENTRYPOINT and one CMD, and the last one in the Dockerfile wins. So you're trying to run container processes like
# What's in the Dockerfile
cp /bin/sh -c "ls /usr/src/myapp"
# Via your docker run command
cp MainClass.java
As @QuintenScheppermans suggests in their answer, you can use a docker run -v option to inject the file at run time, but this happens after commands like RUN javac have already run. You don't really want a workflow where the entire application gets rebuilt every time you docker run the container. Build the image during docker build time, or before.
Two things.
You have used CMD twice.
CMD can only be used once; think of it as the purpose of your Docker image. Every time a container is run, it will always execute CMD. If you want multiple commands, you should use RUN and then, lastly, CMD:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/
ENTRYPOINT ["cp"]
RUN /usr/src/myapp
RUN ls /usr/src/myapp
Copying stuff into image
There is a simple command, COPY, with the syntax COPY <from-here> <to-here>.
It seems like you want to run myjavaimage, so what you will do is:
COPY /path/to/myjavaimage /myjavaimage
CMD myjavaimage MainClass.java
Where you see the arrows, I've just written dummy code. Replace that with the correct code.
Also, your Dockerfile is badly created.
ENTRYPOINT -> not sure why you'd use "cp", but it should be an actual entrypoint; it could point to the root dir of your project or to an app that will be run.
I don't understand why you want to do ls /usr/src/myapp, but if you do want to do it, use RUN and not CMD.
Lastly,
The best way to debug Docker containers is in interactive mode. That means getting a shell inside your container, having a look around, running code, and seeing what the problem is.
Run this: docker run -it <image-name> /bin/bash and then have a look inside; it's usually the best way to see what causes issues.
This stackoverflow page perfectly answers your question.
COPY foo.txt /data/foo.txt
# where foo.txt is the relative path on host
# and /data/foo.txt is the absolute path in the image
If you need to mount a file when running the command:
docker run --name=foo -d -v ~/foo.txt:/data/foo.txt -p 80:80 image_name

Troubleshoot directory path error in COPY command in Dockerfile

I am using the COPY command in my Dockerfile on top of Ubuntu 16.04. I am getting an error, no such file or directory, even though the directory is present. In the Dockerfile below I want to copy the directory "auth", present inside the workspace directory, into the Docker image (at path /home/ubuntu) and then build the image.
FROM ubuntu:16.04
RUN apt-get update
COPY /home/ubuntu/authentication/workspace /home/ubuntu
WORKDIR /home/ubuntu/auth
A Dockerfile COPY command can only refer to files under the build context: the directory the build is run from, usually the location of the Dockerfile, aka .
So you have a few options now:
If it is possible to copy the /home/ubuntu/authentication/workspace/ directory content somewhere inside your project before the build (so it is included in your build context and you can access it via COPY ./path/to/content /home/ubuntu), that's great. But sometimes you don't want that.
Instead of copying the directory, bind it to your container via a volume:
when you run the container, add a -v option:
docker run [....] -v /home/ubuntu/authentication/workspace:/home/ubuntu [...]
Mind that a bind mount is designed so that any change you make inside the container directory (/home/ubuntu) will affect the bound directory on your host side (/home/ubuntu/authentication/workspace), and vice versa.
I found something over here: this person forces the Dockerfile to accept their desired context. They sit inside the /home/ubuntu/authentication/workspace/ directory and run
docker build . -f /path/to/Dockerfile
so now inside the Dockerfile they can refer to /home/ubuntu/authentication/workspace as the context (.).
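Putting that last option together, a hedged sketch (keeping the /path/to/Dockerfile placeholder from above):
cd /home/ubuntu/authentication/workspace
docker build . -f /path/to/Dockerfile
and the COPY line inside the Dockerfile then uses a context-relative path, for example:
COPY . /home/ubuntu
WORKDIR /home/ubuntu/auth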

Docker "config" Container / Docker image

I want to make a Docker image that keeps my application configuration, so that when something changes I only have to change the config container and don't have to build a new image for my application.
Here is my Dockerfile:
FROM scratch
RUN mkdir -p /config
ADD config.properties /config
VOLUME /config
ENTRYPOINT /bin/true
But it can't even create the directory. Is there a best practice for such things?
Keep in mind that the scratch image is literally completely empty. You cannot create the directory, because there's no /usr/bin/mkdir executable in that image.
To create the directory anyway, you can exploit the fact that the ADD statement in a Dockerfile also implicitly creates directories, so the following Dockerfile should be enough:
FROM scratch
ADD config.properties /config/config.properties
VOLUME /config
Regarding the ENTRYPOINT; there's also no /bin/true in your image. This means that the container will not start (i.e. exit immediately with exec: "/bin/true": stat /bin/true: no such file or directory). However, as you intend to use this image for a data-only container, that's probably OK. Simply use docker create instead of docker run to create the container without starting it:
docker build -t config_image .
docker create --name config config_image
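Other containers can then consume the configuration via --volumes-from, for example (the application image name is an assumption):
docker run --volumes-from config myapp:latest
which mounts the /config volume from the data-only container into the application container.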
