Docker volume mounting empty

I'm making an image to host a PHP application. I'm using COPY to populate /var/www/html with the app files and creating a VOLUME /var/www/html to be able to mount the dir on the host and edit files like config.
But:
When I mount the volume in docker-compose.yml, the directory is empty.
When I omit the "volumes" entry in the docker-compose.yml and open a shell in the container, the directory /var/www/html is filled.
I have already read tons of examples and documentation but honestly don't know what is wrong.
dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./ocomon .
VOLUME /var/www/html/ocomon
docker-compose.yml:
version: '3.5'
services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ./volumes/ocomon/ocomon:/var/www/ocomon
    ports:
      - 4682:80

Assuming your host's directory is ${PWD}/www/html, then you need only provide the volumes value in docker-compose.yml and it should be:
Dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./www/html/ocomon .
and:
docker-compose.yml:
version: '3.5'
services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ${PWD}/www/html:/var/www/html
    ports:
      - 4682:80
Explanation
The Dockerfile VOLUME instruction declares a volume in the image. By following WORKDIR (which always creates the path if it does not exist) with VOLUME, you overwrite whatever was in, or was created by, WORKDIR. NOTE: this happens during the image build.
Docker Compose volumes mounts a directory from the host into the container. The syntax is volumes: - ${HOST_PATH}:${CONTAINER_PATH}. NOTE: this happens when the container runs.
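As a quick sanity check (a sketch, assuming the image is tagged ocomon:3.2 and the container is named ocomon as above), you can see which paths the image declares as volumes and what actually got mounted at run time:
# Mount points baked into the image by the VOLUME instruction
docker image inspect ocomon:3.2 --format '{{json .Config.Volumes}}'
# What is mounted where inside the running container
docker inspect ocomon --format '{{range .Mounts}}{{.Destination}} <- {{.Source}}{{println}}{{end}}'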

Docker keeps data but creates a new volume from host folder every "run"

It seems I have misunderstood something about volumes. I have a docker-compose file with two services: jobs, which is a Flask api built from a Dockerfile (see below), and mongo, which is from the official MongoDb image.
I have two volumes: - .:/code maps my host working directory to the /code folder in the container, and mongodata is a named volume.
version: "3"
services:
jobs:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: ${FLASK_ENV}
FLASK_APP: ${FLASK_APP}
depends_on:
- mongo
mongo:
image: "mongo:3.6.21-xenial"
restart: "always"
ports:
- "27017:27017"
volumes:
- mongodata:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
volumes:
mongodata:
Dockerfile for jobs service :
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=job-checker
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
Every time I remove these containers and re-run, everything is fine and I still have my data in the mongodata volume. But when I check the volume list I can see that a new volume is created from - .:/code with a long volume name, for example:
$ docker volume ls
DRIVER    VOLUME NAME
local     55c08cd008a1ed1af8345cef01247cbbb29a0fca9385f78859607c2a751a0053
local     abe9fd0c415ccf7bf8c77346f31c146e0c1feeac58b3e0e242488a155f6a3927
local     job-checker_mongodata
Here I ran docker-compose up, then I removed containers, then ran up again, so I have two volumes from my working folder.
Is it normal that every up creates a new volume instead of reusing the previous one?
Thanks
Hidden at the end of the Docker Hub mongo image documentation is a note:
This image also defines a volume for /data/configdb...
The image's Dockerfile in turn contains the line
VOLUME /data/db /data/configdb
When you start the container, you mount your own volume over /data/db, but you don't mount anything on the second path. This causes Docker to create an anonymous volume there, which is the volume you're seeing with only a long hex ID.
It should be safe to remove the extra volumes, especially if you're sure they're not attached to a container and they don't have interesting content.
This behavior has nothing to do with the bind mount in the other container; bind mounts never show up in the docker volume ls listing at all.
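If you would rather not accumulate anonymous volumes at all, one option (a sketch, introducing a hypothetical mongoconfig named volume) is to mount something over the second declared path as well:
  mongo:
    image: "mongo:3.6.21-xenial"
    volumes:
      - mongodata:/data/db
      - mongoconfig:/data/configdb   # covers the second VOLUME, so no anonymous volume is created
volumes:
  mongodata:
  mongoconfig:
Alternatively, docker-compose down -v removes the anonymous volumes together with the containers, and docker volume prune cleans up volumes that no container references anymore.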

In Docker, how do I copy files from a local directory so that I can then copy those files into my Docker container?

I'm using Docker
Docker version 19.03.8, build afacb8b
I have the following docker-compose.yml file ...
version: "3.2"
services:
sql-server-db:
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
and here is the Docker file it uses to build ...
FROM microsoft/mssql-server-linux:latest
# Create work directory
RUN mkdir -p /usr/work
WORKDIR /usr/work
# Copy all scripts into working directory
COPY . /usr/work/
# Grant permissions for the import-data script to be executable
RUN chmod +x /usr/work/import-data.sh
EXPOSE 1433
CMD /bin/bash ./entrypoint.sh
On my local machine, I have some files in a "../../scripts/myproject/*.sql" directory (the ".." are relative to the directory where my docker-compose.yml file is stored). Is there a way I can run "docker-compose up" and have those files copied into a directory from which I can then copy them into the container's "/usr/work" directory?
There are 2 ways to solve this, with one being easier than the other, but both have use cases.
The easy way
You could mount the directory directly to the container through the docker-compose like this:
version: "3.2"
services:
sql-server-db:
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
volumes:
- ../../scripts/myproject:/path/to/dir
Note the added volumes compared to the yaml in your question. This will mount the myproject directory to /path/to/dir within the container. What this will also mean is that if the sql-server-db container writes to any of the files in /path/to/dir, then the file in myproject on the host machine will also change, since the files are mounted.
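For example (a sketch, reusing the /path/to/dir placeholder and the container name from the compose file above), you can confirm the mount from inside the running container; edits made on the host under ../../scripts/myproject show up there without a rebuild:
docker exec sql-server-db ls /path/to/dir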
The less easy way
You could copy the files during the build of the image instead. This is a little harder, since a Docker build cannot copy files from outside its build context unless you change that context. What needs to happen is that you set the context of the build to a different directory than the current one. The context determines which files are sent to the build; by default it is the directory the Dockerfile resides in.
To take this approach, you need the following in your docker-compose.yml:
version: "3.2"
services:
sql-server-db:
build:
context: ../..
dockerfile: path/to/Dockerfile # Here you should specify the path to your Dockerfile, this is a relative path from your context
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
Above, the context is now ../.., so you are able to copy files from two directories up. You can then copy the myproject directory in your Dockerfile like this:
FROM microsoft/mssql-server-linux:latest
COPY ./scripts/myproject /myfiles
The advantage of this approach is that the files are copied instead of being mounted, so the docker container can write whatever it wants to these files, without affecting the host machine.
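For reference (a sketch, assuming you run it from the directory that holds the compose file and the Dockerfile; the image tag is just an example), the equivalent plain docker build with the widened context would be:
# -f points at the Dockerfile; the last argument is the build context two levels up
docker build -f ./Dockerfile -t sql-server-db ../..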

Creating a redis docker container with an existing rdb and loading a module at initiation?

I am trying to start a docker container using a redis db of which I have a persistent copy saved on a local machine.
I currently have a docker container loading redis with a volume using this docker-compose.yml, but it misses my redis.conf (which contains the loadmodule command) that is located in the volume with the rdb file:
version: '3'
services:
  redis:
    image: redis
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
This begins to load the RDB but crashes out because the data uses this time series module.
I can load a separate docker container with a fresh redis db that has the time series module loaded, using the following dockerfile. My issue is that I can't figure out how to do both at the same time!
Is there some way of calling a dockerfile from a docker-compose.yml, or declaring the volume in the dockerfile?
That, or should I be creating my own image that I can call in the docker-compose.yml?
Any help would be appreciated; I'm honestly just going round in circles, I think.
dockerfile
# BUILD redisfab/redistimeseries:${VERSION}-${ARCH}-${OSNICK}
ARG REDIS_VER=6.0.1
# stretch|bionic|buster
ARG OSNICK=buster
# ARCH=x64|arm64v8|arm32v7
ARG ARCH=x64
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK} AS builder
ARG REDIS_VER
ADD ./ /build
WORKDIR /build
RUN ./deps/readies/bin/getpy2
RUN ./system-setup.py
RUN make fetch
RUN make build
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK}
ARG REDIS_VER
ENV LIBDIR /usr/lib/redis/modules
WORKDIR /data
RUN mkdir -p "$LIBDIR"
COPY --from=builder /build/bin/redistimeseries.so "$LIBDIR"
EXPOSE 6379
CMD ["redis-server", "--loadmodule", "/usr/lib/redis/modules/redistimeseries.so"]
EDIT:
OK, a slight improvement: I can call a redis-timeseries image in the docker-compose.yml
services:
  redis:
    image: redislabs/redistimeseries
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
This is a start; however, I still need to increase the maximum number of databases, which I have been doing through redis.conf in the past.
You can just have docker-compose build your Dockerfile directly. Assume your docker-compose file is in a folder called myproject. Also assume your Dockerfile is in a folder called myredis, and that myredis is inside the myproject folder. Then you can replace this line in your docker-compose file:
image: redis
With:
build: ./myredis
That will build and use your custom image.
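Putting it together (a sketch based on the compose file from your question; the command override for redis.conf is an assumption about where that file sits inside the mounted volume), the service could look like:
services:
  redis:
    build: ./myredis          # builds your Dockerfile instead of pulling the stock image
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
    # Assumption: if redis.conf lives in the mounted folder, redis-server can be pointed at it;
    # this overrides the image's CMD, so the --loadmodule flag is repeated here
    command: ["redis-server", "/data/redis.conf", "--loadmodule", "/usr/lib/redis/modules/redistimeseries.so"]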

Files inside Docker container not updating when I edit in host

I am using Docker which is running fine.
I can start a Docker image using docker-compose.
docker-compose rm nodejs; docker-compose rm db; docker-compose up --build
I attached a shell to the Docker container using
docker exec -it nodejs_nodejs_1 bash
I can view files inside the container
(inside container)
cat server.js
Now when I edit the server.js file on the host, I would like the file inside the container to change without having to restart Docker.
I have tried to add volumes to the docker-compose.yml file or to the Dockerfile, but somehow I cannot get it to work.
(Dockerfile, not working)
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
VOLUMES ["/usr/src/app"]
EXPOSE 8080
CMD [ "npm", "run", "watch" ]
or
(docker-compose.yml, not working)
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- src: /usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
volumes:
src:
There is probably a simple guide somewhere, but I haven't found it yet.
If you want a copy of the files to be visible in the container, use a bind mount volume (aka host volume) instead of a named volume.
Assuming your docker-compose.yml file is in the root directory of the location that you want in /usr/src/app, then you can change your docker-compose.yml as follows:
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- .:/usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
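A quick way to confirm the bind mount is live (a sketch, reusing the container name from your docker exec command): edit server.js on the host, then read it back from inside the container without restarting anything:
docker exec -it nodejs_nodejs_1 cat /usr/src/app/server.js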

How to add files to a docker container volume at build time

I have a python and java app that I want to run inside a container.
I have a folder named pass-hash with:
--h2o-start folder containing a Dockerfile that I use to start h2o.jar, which starts a server for machine learning.
--model-generator folder containing passhash.py and a data.csv file.
The passhash.py app contains h2o.import_file("/var/my-data/data.csv") which takes the data.csv file from the my-data folder I created in the container and generates a POJO file with it.
The h2o-start Dockerfile contains:
FROM openjdk:8
ADD h2o.jar h2o.jar
EXPOSE 54321
EXPOSE 54322
ENTRYPOINT ["java", "-jar", "h2o.jar"]
The model-generator Dockerfile contains:
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
COPY data.csv /var/my-data
RUN chmod 777 /var/my-data/data.csv
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
The docker-compose.yml file contains:
version: "3"
services:
h2o-start:
image: milanpanic2/h2o-start
build:
context: ./h2o-start
ports:
- "54321:54321"
- "54322:54322"
volumes:
- "home/data"
model-generator:
image: milanpanic2/model-generator
build:
context: ./model-generator
ports:
- "8080:8080"
depends_on:
- "h2o-start"
volumes:
- "csvdata:/var/my-data"
volumes:
csvdata:
Docker volumes are designed to share folders between the host machine and Docker containers. If you copy a file into the volume's location on your host machine, it automatically becomes available inside the container.
The syntax for a docker volume is as below:
-v /home/data:/data
In the above syntax, /home/data is the folder on the host machine and /data is the folder inside the docker container.
If you copy any file into the /home/data folder on the host machine, it automatically becomes available inside the container's /data folder.
Hope this is clear to you.
If you are using docker-compose, then add a volumes entry as below:
volumes:
  - /home/data:/data
for example:
version: '3'
services:
  app:
    image: nginx:alpine
    ports:
      - 80:80
    volumes:
      - /home/data:/data
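As a quick check (a sketch using the example app service above), drop a file into the host side of the mount and list it from inside the container:
# on the host
cp data.csv /home/data/
# the same file is visible inside the container under /data
docker-compose exec app ls /data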
I don't know; that is the solution I came up with. Can you tell me a better solution for my problem? My problem is: I have a python app that uses a data.csv file to generate a POJO machine learning model. When I give this python app a path to the data file, it throws an exception that the file doesn't exist. I also have another app, written in java, that uses the generated POJO file and gives predictions based on that data. The java app also updates the data.csv file every day. I want every app (microservice) to run in a separate container, but I want them both to use the data.csv.
To answer this, you need to use volumes.
Try the code below.
This is your docker-compose file:
version: "3"
services:
h2o-start:
image: milanpanic2/h2o-start
build: context: ./h2o-start
ports:
- "54321:54321" - "54322:54322"
volumes:
- /home/data:/var/my-data
model-generator:
image: milanpanic2/model-generator
build: context: ./model-generator
ports:
- "8080:8080"
depends_on:
- "h2o-start"
volumes:
- /home/data:/var/my-data
This is your Dockerfile:
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
And where is your java docker file?
Now just create a default data.csv file and copy it to the /home/data location on your host machine.
Then run the application and let me know.
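For example (a sketch, assuming the compose file above and a starter data.csv in the current directory):
# seed the shared host folder once, then build and start both services
mkdir -p /home/data
cp data.csv /home/data/
docker-compose up --build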
If you mean adding files when you do docker build, take a look at the ADD & COPY instructions.
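For instance (a sketch; the paths mirror the model-generator Dockerfile above), a COPY line pulls a file from the build context into the image at build time:
FROM python:2.7-slim
# copies data.csv from the build context into the image
COPY data.csv /var/my-data/data.csv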
