Create files / folders on docker-compose build or docker-compose up - docker

I'm taking my first steps with Docker, so I made a Dockerfile that creates a simple index.html and an images directory (see the code below).
Then I run docker-compose build to create the image and docker-compose up to run the server, but I get no index.html file or images folder.
This is my Dockerfile:
FROM php:apache
MAINTAINER brent@dropsolid.com
WORKDIR /var/www/html
RUN touch index.html \
&& mkdir images
And this is my docker-compose.yml
version: '2'
services:
  web:
    build: .docker/web
    ports:
      - "80:80"
    volumes:
      - ./docroot:/var/www/html
I would expect this to create a docroot folder containing an images directory and an index.html, but I only get an empty docroot.

The image does contain those files
The Dockerfile contains instructions on how to build an image. The image you built from that Dockerfile does contain index.html and images/.
But you overrode them in the container
At runtime, you created a container from the image you built. In that container, you mounted the external directory ./docroot as /var/www/html.
A mount will hide whatever was at that path before, so this mount will hide the prior contents of /var/www/html, replacing them with whatever is in ./docroot.
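A quick way to see this outside of compose (a hedged sketch; myproject_web stands in for whatever tag docker-compose build gave your image, typically <project-directory>_web):
# no mount: the files baked into the image are there
docker run --rm myproject_web ls /var/www/html
# bind-mount an empty host directory over the same path: the image's files are hidden
mkdir -p /tmp/empty-docroot
docker run --rm -v /tmp/empty-docroot:/var/www/html myproject_web ls /var/www/html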
Putting stuff in your mount
In the comments you asked
is there a possibility then to first mount and then create files or something? Or is that impossible?
The way you have done things, you mounted over your original files, so they are no longer accessible once the container is created.
There are a couple of ways you can handle this.
Change their path in the image
If you put these files in a different path in your image, then they will not be overwritten by the mount.
WORKDIR /var/www/alternate-html
RUN touch index.html \
&& mkdir images
WORKDIR /var/www/html
Now, at runtime you will still have this mount at /var/www/html, which will contain the contents of the external directory (which may or may not be empty). You can tell the container on startup to run a script that copies files there, if that's what you want.
COPY entrypoint.sh /entrypoint.sh
RUN chmod 0755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
(This assumes you do not already have a defined entrypoint; if you do, you may just need to adjust your existing script instead.)
entrypoint.sh:
#!/bin/sh
cp -r /var/www/alternate-html/* /var/www/html
exec "$@"
This will run the cp command, and then hand control over to whatever the CMD for this image is.
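If you go this route, a quick sanity check after startup (assuming the compose file above, with ./docroot bind-mounted over /var/www/html) might look like:
docker-compose up --build -d
# the entrypoint copied the seeded files into the mounted directory,
# so they also appear on the host side of the bind mount
ls ./docroot
# expected: images/  index.html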
Handling it externally
You also have the option of simply pre-populating the files you want into ./docroot externally. Then they will just be there when the container starts and adds the directory mount.
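For the compose file in the question, that could be as simple as (a sketch; adjust the paths to your layout):
# seed the host directory before the first docker-compose up
mkdir -p docroot/images
touch docroot/index.html
docker-compose up --build
# the container now sees these files at /var/www/html through the bind mount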

Related

Why can some directories in a Docker container be mounted and share files out, while others cannot

I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing: I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022@qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/conf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a Docker image that contains my own config file, so that when mounted it would show up in the local directory and could be modified there.
I'm really confused about what happens when I start my container from this image the way the official description suggests. Here are my commands:
docker run -dp 3306:3306 \
  -v /usr/local/mysql/data:/var/lib/mysql \
  -v /usr/local/mysql/conf:/etc/mysql/conf.d \
  --name mysql mysql:<my built tag>
I'm trying to mount /usr/local/mysql/conf to /etc/mysql/conf.d in the container, which is documented as the location for custom config files.
I assumed that my custom config file my.cnf, which was copied into the image during docker build (as you can see in my Dockerfile), would show up in my local directory /usr/local/mysql/conf.
But it turns out that the directory is empty, and /etc/mysql/conf.d is also overwritten by the local directory.
Before I run my container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK fine, I've been told that a volume-mounted directory overwrites the files inside the container.
But how can the empty data directory end up showing the data files from inside the container, while the empty conf directory overwrites the conf.d directory in the container?
It makes no sense.
I was very confused and I would really appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I'm using the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
The Docker Hub mysql image has an involved entrypoint script that checks whether the data directory is empty and, if so, initializes the database. Abstracted out:
#!/bin/sh
# (actually in hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/mysql ]; then
  mysql_install_db
  # (...and start a temporary database server and run the
  # /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
The mere presence of a volume doesn't cause files to be copied out (with one exception, at one specific point in the lifecycle, for named volumes), so if you need to copy content from a container to the host, you either need to do it manually with docker cp or have the container code do it.
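For the docker cp route, a hedged sketch of getting the baked-in config onto the host before starting with the bind mount (my-mysql:custom stands in for your built tag):
# create, but don't start, a throwaway container from your image
docker create --name tmp-mysql my-mysql:custom
# copy the image's conf.d contents out to the host directory you plan to mount
docker cp tmp-mysql:/etc/mysql/conf.d/. /usr/local/mysql/conf/
docker rm tmp-mysql
# now the bind mount contains, rather than hides, that config
docker run -dp 3306:3306 \
  -v /usr/local/mysql/data:/var/lib/mysql \
  -v /usr/local/mysql/conf:/etc/mysql/conf.d \
  --name mysql my-mysql:custom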

docker-compose subfolders don't appear in the volume folder

Dockerfile:
FROM golang:latest
RUN mkdir /app/
RUN mkdir /app/subfolder1
RUN mkdir /app/subfolder2
VOLUME /app/
docker-compose.yml
version: '3.3'
services:
  my_test:
    build: .
    volumes:
      - ./app:/app
I saw (in the mysql Dockerfile) how the MySQL database files are shared, and I decided to do the same. I expected that the first time I run docker-compose up, the two subfolders would be created inside the external /app folder. But when I run docker-compose up, only the /app folder is created, with no subfolders inside. What am I doing wrong?
Please tell me how I can achieve the same behavior as with the MySQL container, where on the first start my external folder is filled with files and folders, and after that it's just used:
version: '3'
services:
  mysql:
    image: mysql:5.7
    volumes:
      - ./data/db:/var/lib/mysql
The example above works, but my first example doesn't.
The mysql image has an involved entrypoint script that does the first-time setup. That specifically checks to see whether the data directory exists or not:
if [ -d "$DATADIR/mysql" ]; then
  DATABASE_ALREADY_EXISTS='true'
fi
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
  docker_init_database_dir "$@"
  ...
fi
Note that this does not rely on any built-in Docker functionality, and does not copy any content out of the original image; it runs a fairly involved sequence of steps to populate the initial database setup, configure users, and run the contents in the /docker-entrypoint-initdb.d directory.
If you want to copy some sort of seed data into a mounted volume, your container generally needs to handle this itself. You could write an entrypoint script like:
#!/bin/sh
# If the data directory doesn't have content, copy it
if ! [ -d /data/content ]; then
  cp -a /app/data/content /data
fi
# Run whatever the container's main command is
exec "$@"
(There is a case where Docker will populate named volumes from image content. This has some severe limitations: it only works on named volumes and not bind-mounted host directories; it doesn't work on Kubernetes, if that's in your future; if the image content is updated, the volume will not be changed. Writing out the setup code explicitly at startup will give you more predictable behavior.)
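If you want to see that named-volume exception in action with the image from the question, a minimal sketch (my_test stands in for whatever tag your build produced):
# bind mount, as in your compose file: the host directory wins, so /app looks empty
docker run --rm -v "$PWD/app:/app" my_test ls /app
# named volume: on first use, Docker copies the image's /app content into it
docker volume create appdata
docker run --rm -v appdata:/app my_test ls /app
# expected: subfolder1  subfolder2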

How to specify working directory for ENTRYPOINT in Dockerfile

The Docker image (Windows-based) includes an application directory at C:\App. Inside that directory reside several sub-folders and files, including a batch file called process.bat. The Dockerfile (used to build the image) ends like this:
ENTRYPOINT [ "C:\\App\\process.bat" ]
When I instantiate this image using the command: docker run company/app, the batch file runs, but it fails at the point where other files under C:\App are referenced. Essentially, the working directory is still C:\ from the Docker container's entry-point.
Is there a way to set the working directory within the Dockerfile? A couple of alternatives do exist:
Add -w C:\App to the docker run
In the batch file, I can add a line at the beginning cd /D C:\App
But is there a way to specify the working directory in the Dockerfile?
WORKDIR /App is an instruction you can use in your Dockerfile to change the working directory.
If /App is a mounted volume, you should declare VOLUME /App before WORKDIR in order to use it with ENTRYPOINT; otherwise it will not be seen by the ENTRYPOINT:
VOLUME ["/App"]
WORKDIR /App
ENTRYPOINT ["sh", "start.sh"]
where start.sh is inside the /App directory.
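For the Windows image in the original question, the same idea at the end of the Dockerfile would look roughly like this (a sketch, not tested against your image):
# make C:\App the working directory before the entrypoint runs
WORKDIR C:\\App
ENTRYPOINT ["C:\\App\\process.bat"]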

docker-compose persistent data on host and container

I have a problem with volumes in docker-compose (file format 3.0+).
I know that a volume behaves like a mount, but I have set up a wiki, and when I define a volume in the docker-compose file, the data in the container is removed (hidden).
So how can I first save the data from my container to my host, so that the next time I start the container it just uses the data I saved?
The current situation is:
I start with docker-compose up --build, a volume is created (empty) and mounted into the container, and as a result everything in that folder in the container is gone.
docker-compose.yml
version: '3.1'
services:
  doku-wiki:
    build: .
    ports:
      - '4000:80'
Dockerfile
FROM php:7.1-apache
COPY dokuwiki-stable /var/www/html/
COPY entrypoint.sh /entrypoint.sh
RUN chmod 777 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 80
It sounds like you are using a host volume where you map a host directory into the container. When you do this, anything at that location inside your image will not be visible, only the files as they exist on the host.
If you want to have a copy of the files from inside your image to initialize the volume, you have two options:
Switch to a named volume. Docker will automatically initialize these with the contents of the image, including any file permissions. If you don't require direct access to the files from outside of Docker, this is the preferred solution (see the compose sketch at the end of this answer).
Change your image entrypoint and the location where you store your files in the image.
On the second option, if you want /data to be a volume for your application, you could have an entrypoint.sh that does:
#!/bin/sh
if [ ! -d "/data" ]; then
  ln -s /data_save /data
elif [ -z "$(ls -A /data)" ]; then
  cp -a /data_save/. /data/
fi
exec "$@"
Your image would need to save all the initial files to /data_save instead of /data. Then if the directory is empty it would do a copy of /data_save to your volume /data. If the volume wasn't mapped at all, then it just creates a symlink from /data to /data_save. The last line runs the CMD from your Dockerfile or docker run cli as if the entrypoint wasn't ever there. The added lines to your Dockerfile would look like:
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
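For the first option, the compose file from the question with a named volume would look roughly like this (wiki-data is just an illustrative name):
version: '3.1'
services:
  doku-wiki:
    build: .
    ports:
      - '4000:80'
    volumes:
      - wiki-data:/var/www/html
volumes:
  wiki-data:
On first use Docker initializes wiki-data with the image's /var/www/html content; after that the volume's contents persist across rebuilds until you remove the volume.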

Docker mount happens before or after entrypoint execution

I'm building a Docker image to run my Spring Boot based application. I want users to be able to feed in a runtime properties file by mounting the folder containing application.properties into the container. Here is my Dockerfile:
FROM java:8
RUN mkdir /app
RUN mkdir /app/config
ADD myapp.jar /app/
ENTRYPOINT ["java","-jar","/app/myapp.jar"]
When kicking off the container, I run this:
docker run -d -v /home/user/config:/app/config myapp:latest
where /home/user/config contains the application.properties I want the jar file to pick up during run time.
However, this doesn't work: the app doesn't pick up the mounted properties file and uses the default one packed inside the jar instead. But when I exec into the started container and manually run the entrypoint command again, it works as expected and picks up the file I mounted. So I'm wondering: is this related to how mounts work with the entrypoint, or did I just not write the Dockerfile correctly for this case?
Spring Boot searches for application.properties inside a /config subdirectory of the current directory (among other locations). In your case, the current directory is / (Docker's default), so you need to change it to /app. To do that, add
WORKDIR /app
before the ENTRYPOINT line.
And to answer your original question: mounts are done before anything inside the container is run.
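Putting it together, the Dockerfile from the question with that one-line change would look roughly like this:
FROM java:8
RUN mkdir /app
RUN mkdir /app/config
ADD myapp.jar /app/
# run from /app so Spring Boot finds ./config/application.properties in the mounted folder
WORKDIR /app
ENTRYPOINT ["java","-jar","/app/myapp.jar"]
The same docker run -v /home/user/config:/app/config command should then pick up the mounted file.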
