Dockerfile:
FROM golang:latest
RUN mkdir /app/
RUN mkdir /app/subfolder1
RUN mkdir /app/subfolder2
VOLUME /app/
docker-compose.yml
version: '3.3'
services:
  my_test:
    build: .
    volumes:
      - ./app:/app
I looked at how the MySQL Dockerfile shares its database files and decided to do the same. I expected that the first time I run docker-compose up, the two subfolders would be created inside the ./app folder on the host. But when I run docker-compose up, only the /app folder is created, with no subfolders inside it. What am I doing wrong?
Please tell me how I can achieve the same behavior as with the MySQL container, where on first start my external folder is populated with files and folders, and afterwards it is simply reused:
version: '3'
services:
  mysql:
    image: mysql:5.7
    volumes:
      - ./data/db:/var/lib/mysql
The example above works, but my first example does not.
The mysql image has an involved entrypoint script that does the first-time setup. That specifically checks to see whether the data directory exists or not:
if [ -d "$DATADIR/mysql" ]; then
    DATABASE_ALREADY_EXISTS='true'
fi
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
    docker_init_database_dir "$@"
    ...
fi
Note that this does not rely on any built-in Docker functionality, and does not copy any content out of the original image; it runs a fairly involved sequence of steps to populate the initial database setup, configure users, and run any scripts in the /docker-entrypoint-initdb.d directory.
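For example, any *.sql or *.sh files placed in that directory are run once, on first initialization only (a sketch; seed.sql is a hypothetical file of your own):

# First start with an empty ./data/db: the entrypoint initializes the
# database, then runs /docker-entrypoint-initdb.d/seed.sql.
# Later starts skip both steps, since ./data/db is already populated.
docker run -d \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/seed.sql":/docker-entrypoint-initdb.d/seed.sql \
  -v "$PWD/data/db":/var/lib/mysql \
  mysql:5.7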
If you want to copy some sort of seed data into a mounted volume, your container generally needs to handle this itself. You could write an entrypoint script like:
#!/bin/sh
# If the data directory doesn't have content, copy the seed data in
if ! [ -d /data/content ]; then
    cp -a /app/data/content /data
fi
# Run whatever the container's main command is
exec "$@"
(There is a case where Docker will populate named volumes from image content. This has some severe limitations: it only works on named volumes and not bind-mounted host directories; it doesn't work on Kubernetes, if that's in your future; if the image content is updated, the volume will not be changed. Writing out the setup code explicitly at startup will give you more predictable behavior.)
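A quick way to see the copy-up difference for yourself (a sketch; myimage stands for any image with content baked into /app):

# Named volume: on first use, Docker copies the image's /app content into it
docker run --rm -v appdata:/app myimage ls /app
# Bind mount: shows whatever is in the host directory, even if empty
docker run --rm -v "$PWD/app":/app myimage ls /app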
I am working on a Docker app whose purpose is to output some JSON into a volume. I am using a Dockerfile, docker-compose, and a Makefile; I'll show the contents of each file below. The goal is that when I run make up, the container runs and outputs the JSON.
Directory looks like this:
docker-compose.yaml
Dockerfile
Makefile
main/ # a directory
Here are the contents of the main directory:
example.R
Not sure of the best order to show these files. Throughout my setup I refer to a variable $PROJECTS_DIR, which is a global environment variable on the host:
echo $PROJECTS_DIR
/home/doug/Projects
Here are my files:
docker-compose.yaml:
version: "3.5"
services:
nextzen_ga_extract_marketing:
build:
context: .
environment:
start_date: "2020-11-18"
start_date: "2020-11-19"
volumes:
- ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline:/home/rstudio/Projects/nextzen_google_analytics_extract_pipeline
Dockerfile:
FROM rocker/tidyverse:latest
ADD main main
WORKDIR "/main"
RUN apt-get update && apt-get install -y \
    less \
    vim
ENTRYPOINT ["Rscript", "example.R"]
Makefile:
.PHONY: build
build:
	docker-compose build

.PHONY: up
up:
	docker-compose pull
	docker-compose up -d

.PHONY: restart
restart:
	docker-compose restart

.PHONY: down
down:
	docker-compose down
Here are the contents of example.R, in the main directory:
library(jsonlite)
unlink("../output_data", recursive = TRUE) # delete any existing data from previous runs
dir.create('../output_data')
write(toJSON(mtcars), '../output_data/ga_tables.json')
If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/main and run sudo Rscript example.R, the file runs and outputs the JSON to ../output_data/ga_tables.json as expected.
I am struggling to get this to happen when running the container. If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/ and then run make up in the terminal, which runs:
docker-compose pull
docker-compose up -d
I then see:
make up
docker-compose pull
docker-compose up -d
Creating network "nextzengoogleanalyticsextractpipeline_default" with the default driver
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 ...
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 .
It looks like everything ran as expected with no errors, except that no output appears in the output_data directory.
I guess I'm misunderstanding or misusing ENTRYPOINT in the Dockerfile with ENTRYPOINT ["Rscript", "example.R"]. My goal is for this file to run when the container is run.
How can I 'run' (if that's the correct terminology) my app so that it outputs json into /output_data/ga_tables.json?
Not sure what other info to provide. Any help much appreciated; I'm still getting to grips with Docker.
If you run your application from /main and its output is supposed to go into ../output_data (so effectively /output_data), you need to bind mount this directory to have the output available on the host. I would therefore update your docker-compose.yaml to read something like this:
volumes:
  - /path/to/output_data/on/host:/output_data
Bear in mind, however, that your script will not be able to remove /output_data itself when it is bind-mounted this way, so you might want to change that step to remove the directory's contents rather than the directory itself.
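To illustrate, inside the running container:

rm -rf /output_data/*    # works: clears the contents, keeps the mounted directory
rmdir /output_data       # fails with "Device or resource busy": it is a mount point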
In my case, I got this working when I used full paths as opposed to relative paths.
The docker-compose file is as follows:
version: "3"
services:
backend:
build:
context: .
dockerfile: dockerfile_backend
image: backend:dev1.0.0
entrypoint: ["sh", "-c"]
command: python manage.py runserver
ports:
- "4000:4000"
The docker build creates a folder, let's say /docker_container/configs, which has files like config.json and db.sqlite3. Mounting this folder as a volume is necessary because the folder's content gets modified or updated at runtime, and these changes should not be lost.
I have tried adding a volumes as follows :
volumes:
  - /host_path/configs:/docker_container/configs
The problem here is that the host path (/host_path/configs) is initially empty, so the folder in the container image (/docker_container/configs) ends up empty as well.
How can this problem be solved?
You are using a bind mount, which will "hide" the content already existing in your image, as you describe: /host_path/configs being empty, /docker_container/configs will be empty as well.
You can use a named volume instead, which will automatically populate the volume with the content already existing in the image and allow you to perform updates as you described:
services:
  backend:
    # ...
    #
    # content of /docker_container/configs from the image
    # will be copied into backend-volume
    # and accessible at runtime
    volumes:
      - backend-volume:/docker_container/configs
volumes:
  backend-volume:
As stated in the Volume doc:
If you start a container which creates a new volume [...] and the container has files or directories in the directory to be mounted [...] the directory’s contents are copied into the volume
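To verify that the copy-up happened, you can peek into the volume; note (an assumption about your setup) that Compose prefixes the volume name with the project name:

docker volume ls    # shows e.g. myproject_backend-volume
docker run --rm -v myproject_backend-volume:/mnt alpine ls /mnt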
You can pre-populate the host directory once by copying the content from the image to the directory.
docker run --rm backend:dev1.0.0 tar -cC /docker_container/configs/ . | tar -xC /host_path/configs
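One caveat: tar can only extract into an existing directory, so create the host path first:

mkdir -p /host_path/configs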
Then start your compose project as it is and the host path already has the original content from the image.
Another approach is to have an entrypoint script that copies content to the mounted volume.
You can mount the host path to a different path(say /docker_container/config_from_host) and have an entrypoint script which copies content from /docker_container/configs into /docker_container/config_from_host if the directory is empty.
Sample sketch:
$ cat Dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
$ cat entrypoint.sh
#!/bin/bash
# Copy the baked-in configs over only if the mounted directory is empty
if [ -z "$(ls -A /docker_container/config_from_host)" ]; then
    cp -r /docker_container/configs/* /docker_container/config_from_host/
fi
python manage.py runserver
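The host path is then mounted at the alternate location rather than over the baked-in configs, for example:

docker run -v /host_path/configs:/docker_container/config_from_host backend:dev1.0.0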
I want to run M/Monit (https://mmonit.com/) in a docker container and found this Dockerfile: https://github.com/mlebee/docker-mmonit/blob/master/Dockerfile
I'm using it with a simple docker-compose.yml in my test environment:
version: '3'
services:
  mmonit:
    build: .
    ports:
      - "8080:8080"
    #volumes:
    #  - ./db/:/opt/mmonit/db/
It does work, but I want to extend the Dockerfile so that the path /opt/mmonit/db/ is exported as a volume. I'm struggling to implement the following behaviour:
When the volume mapped to /opt/mmonit/db/ is empty (for example on first setup) the files from the install archive should be written to the volume. The db folder is part of the archive.
When the database file /opt/mmonit/db/mmonit.db already exists in the volume, it should not be overwritten in any circumstances.
I do have an idea how to script the required operations / checks in bash, but I'm not even sure if it would be better to replace the ENTRYPOINT with a custom start script or if it should be done by modifying the Dockerfile only.
That's why I ask for the recommended way.
In general the strategy you lay out is the correct path; it's essentially what the standard Docker Hub database images do.
The image you link to is a community image, so you shouldn't feel particularly bound to that image's decisions. Given the lack of any sort of license file in the GitHub repository you may not be able to copy it as-is, but it's also not especially complex.
Docker supports two "halves" of the command to run, the ENTRYPOINT and CMD. CMD is easy to provide on the Docker command line, and if you have both, Docker combines them together into a single command. So a very typical pattern is to put the actual command to run (mmonit -i) as the CMD, and have the ENTRYPOINT be a wrapper script that does the required setup and then runs exec "$@".
#!/bin/sh
# I am the Docker entrypoint script
# Create the database, but only if it does not already exist:
if ! test -f /opt/mmonit/db/mmonit.db; then
    cp -a /opt/mmonit/db_base /opt/mmonit/db
fi
# Replace this script with the CMD
exec "$@"
In your Dockerfile, then, you'd specify both the CMD and ENTRYPOINT:
# ... do all of the installation ...
# Make a backup copy of the preinstalled data
RUN cp -a db db_base
# Install the custom entrypoint script
COPY entrypoint.sh /opt/mmonit/bin/
RUN chmod +x /opt/mmonit/bin/entrypoint.sh
# Standard runtime metadata
USER mmonit
EXPOSE 8080
# Important: this must use JSON-array syntax
ENTRYPOINT ["/opt/mmonit/bin/entrypoint.sh"]
# Can be either JSON-array or bare-string syntax
CMD /opt/mmonit/bin/mmonit -i
I would definitely make these kinds of changes in a Dockerfile, either starting FROM that community image or building your own.
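A quick way to exercise the first-start behavior (a sketch; my-mmonit is an arbitrary tag, and the bind mount matches the one commented out in your compose file):

docker build -t my-mmonit .
docker run -d -p 8080:8080 -v "$PWD/db":/opt/mmonit/db my-mmonit
# First run: db_base is copied in and mmonit.db gets created.
# Later runs: the existing mmonit.db is left untouched.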
I did docker-compose down/docker container rm and noticed that I lost all my data created in the container. Yes, I forgot to mount my local directory as a volume in the first place. 😭
To prevent this, I want to warn users on startup that "the data will be non-persistent" if no local volume is mounted.
Is there a way to detect, from inside the container, whether a file or directory is mounted via Docker?
I googled it but couldn't find a good way. And my current workaround is:
FROM alpine:latest
RUN \
mkdir /data && \
touch /data/.replaceme
...
ENTRYPOINT /detect-mount-and-start.sh
detect-mount-and-start.sh checks if /data/.replaceme exists. If so, it warns to mount a local volume and exits.
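That script might look something like this (a sketch of the described check):

#!/bin/sh
# The marker file is baked into the image, so if it is still visible,
# nothing was mounted over /data.
if [ -e /data/.replaceme ]; then
  echo "Warning: /data is not mounted; the data will be non-persistent." >&2
  echo "Mount a local directory, e.g. -v \$(pwd)/mydata:/data" >&2
  exit 1
fi
# ... otherwise start the application ...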
Is there a better way to detect it?
Note (2019/09/12): This container is not only used via docker-compose up but also via docker run --rm. And the local directory name is not fixed: it can be -v $(pwd)/mydata:/data or something like -v $(pwd)/data_local:/data, etc.
Note (2019/09/15): The situation is: I launched a Markdown editor container and created something like 100 .md files. Those files were saved in /data at the root of the container. I should have mounted the volume, like -v $(pwd)/data:/data, before everything. But I didn't ... and noticed it only after removing the container. My bad, I know.
I don't know if I fully understand your question, but when you use docker-compose down, whether your data is destroyed depends on how you wrote your docker-compose.yml. See:
This will delete your data when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
This will not delete your data when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
Neither will this:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - data-volume:/var/lib/mysql
volumes:
  data-volume:
    external: true
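For completeness, the difference at the command level:

docker-compose down       # removes containers and networks; named volumes
                          # and host directories survive
docker-compose down -v    # additionally removes named and anonymous volumes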
PS: I don't have enough reputation to comment on your question, so I'm answering instead.
The way you are doing it may also work, but I was working with one of my clients on a project in a development environment. It was a Node.js-based application, and they needed to make sure server.js existed before starting the container; server.js was expected to come from the mount location. Since I did not find a way to sense a shared Docker volume from inside the container, I came up with this approach.
Dockerfile
FROM alpine

RUN mkdir -p /myapp
COPY . /myapp
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh

APP_PATH="/myapp"
files=$(ls /myapp/*.js)
echo "Files in Docker mount location: $files"

if [ -f "$APP_PATH/server.js" ] && [ -f "$APP_PATH/index.js" ]; then
    echo "Starting container with host mount files"
    echo "Starting server.js"
    cd "$APP_PATH"
    node server.js
else
    >&2 echo "Error: Please mount the host location on the /myapp path of the container, i.e. -v host_node_project:/myapp. Current files: $(ls $APP_PATH)"
    exit 1
fi
build and run
docker build -t myapp .
docker run -it --rm --name myapp myapp
docker-compose stop doesn't destroy your containers.
Then you can use:
docker-compose start
I'm taking my first steps with Docker, so I made a Dockerfile that creates a simple index.html and an images directory (see code below).
Then I ran docker-compose build to create the image, and docker-compose up to run the server. But I get no index.html file or images folder.
This is my Dockerfile:
FROM php:apache
MAINTAINER brent@dropsolid.com
WORKDIR /var/www/html
RUN touch index.html \
&& mkdir images
And this is my docker-compose.yml
version: '2'
services:
  web:
    build: .docker/web
    ports:
      - "80:80"
    volumes:
      - ./docroot:/var/www/html
I would expect this to create a docroot folder with an images directory and an index.html, but I only get the docroot.
The image does contain those files
The Dockerfile contains instructions on how to build an image. The image you built from that Dockerfile does contain index.html and images/.
But, you overrode them in the container
At runtime, you created a container from the image you built. In that container, you mounted the external directory ./docroot as /var/www/html.
A mount will hide whatever was at that path before, so this mount will hide the prior contents of /var/www/html, replacing them with whatever is in ./docroot.
Putting stuff in your mount
In the comments you asked
is there a possibility then to first mount and then create files or something? Or is that impossible?
The way you have done things, you mounted over your original files, so they are no longer accessible once the container is created.
There are a couple of ways you can handle this.
Change their path in the image
If you put these files in a different path in your image, then they will not be overwritten by the mount.
WORKDIR /var/www/alternate-html
RUN touch index.html \
&& mkdir images
WORKDIR /var/www/html
Now, at runtime, you will still have this mount at /var/www/html, which will contain the contents of the external directory, which may or may not be empty. You can tell the container to run a script on startup and copy things there, if that's what you want.
COPY entrypoint.sh /entrypoint.sh
RUN chmod 0755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
(This assumes you do not have a defined entrypoint; if you do, you may just need to adjust your existing script instead.)
entrypoint.sh:
#!/bin/sh
cp -r /var/www/alternate-html/* /var/www/html
exec "$#"
This will run the cp command, and then hand control over to whatever the CMD for this image is.
Handling it externally
You also have the option of simply pre-populating the files you want into ./docroot externally. Then they will just be there when the container starts and adds the directory mount.
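For example, one time on the host, before starting the container:

mkdir -p docroot/images
touch docroot/index.html
docker-compose up -d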