docker-compose: use file from volume in Dockerfile

I defined a volume in my docker-compose.yml. I want to use one of the files from that volume in my Dockerfile, but I get the error: "No such file or directory".
If I build the container without accessing the files in the Dockerfile, I can see all the files from the volume inside the container, at the location specified in docker-compose.yml.
Is this how it should work, or am I doing something wrong? I think I am missing something.
repository: https://github.com/Lightshadow244/OwnMusicWeb
docker-compose.yml:
version: '3'
services:
  ownmusicweb:
    build: .
    container_name: ownmusicweb
    hostname: ownmusicweb
    volumes:
      - ~/OwnMusicWeb/ownmusicweb:/ownmusicweb
    ports:
      - 83:8000
    tty: true
Dockerfile:
FROM ubuntu:latest
WORKDIR /ownmusicweb
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "python-pip"]
RUN ["pip", "install", "--upgrade", "pip"]
RUN ["pip", "install", "Django", "eyeD3", "djangorestframework", "markdown", "django-filter"]
RUN ["python", "/ownmusicweb/manage.py", "migrate"]
RUN ["python", "/ownmusicweb/manage.py", "runserver", "0.0.0.0:8000"]

Summarising the discussion in the comments:
The RUN directive has no access to the volume because it isn't mounted yet. At build time Docker only has the build context, which is what the ADD (or COPY) directive reads from. Files added that way are baked into the image, so you need a rebuild to update them.
After the build finishes (triggered by "build: ." in docker-compose.yml), Docker launches the container and mounts the volume. That's too late for your RUN commands.
The suggested mechanism is to use ENTRYPOINT with a script that launches your app; see the sketch below. It runs at launch time, after the build, so it has access to the volume.
Another approach, which seems a bit cleaner to me, is the command directive of docker-compose; you can put the same script there. Which one fits depends on how you deploy and how you use Docker in your development environment.
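For illustration, a minimal sketch of the ENTRYPOINT approach, reusing the migrate/runserver commands from the question (the file name entrypoint.sh is an assumption):
#!/bin/sh
# entrypoint.sh - runs at container start, after the volume is mounted,
# so /ownmusicweb is populated at this point
set -e
python /ownmusicweb/manage.py migrate
exec python /ownmusicweb/manage.py runserver 0.0.0.0:8000
In the Dockerfile, the two failing RUN lines would then be replaced with:
COPY entrypoint.sh /entrypoint.sh
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT ["/entrypoint.sh"]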

Related

Dockerfile is not copying all files

I'm trying to run a discord.py bot in a Docker container, but when I run the container, Docker says that I'm "missing a module": the Dockerfile isn't copying all the files/folders from the source code.
These are the contents of my docker-compose.yml:
version: '3'
services:
  bot:
    build: .
    restart: always
    volumes:
      - ./.env:/usr/src/app/.env
This is my Dockerfile:
FROM python:bullseye
WORKDIR /usr/app/src
COPY bot bot
CMD ["python", "-m", "bot"]
When I run sudo docker compose up, it fails with the missing-module error described above.
Checking the image's files, it seems like it's copying all the contents inside the bot folder, but not the folder itself.
The code works fine if I run it outside the container, so it's not a problem with the bot code.
How can I fix this?
This is my first Docker container; I'm really new to this.
The correct syntax should be:
COPY bot bot/
By design, COPY copies the contents of the source directory (not the directory itself), and by adding the trailing / to the destination you tell Docker that the destination is a directory, so it will create it for you if needed.
See the full documentation.
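As a quick sketch of the resulting layout (paths assume the WORKDIR from the question's Dockerfile):
WORKDIR /usr/app/src
COPY bot bot/
# the package now lives at /usr/app/src/bot/, so "python -m bot" can find it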

Can't access files of bind-mounted volume during build process

If I attach to the container and check the files inside /app, I can see my host's valve_controller content, modify it, etc.
But I can't see the files during the build process (RUN ls /app/ prints nothing). I need to verify the code and then compile it.
Are volumes mounted only after the image is built?
What options do I have that don't involve COPY?
version: '3.7'
services:
  valve_controller:
    container_name: "valve_controller"
    build:
      context: .
      dockerfile: ./valve_controller/Dockerfile
    working_dir: /app
    tty: true
    volumes:
      - ./valve_controller:/app
Dockerfile
VOLUME /app
RUN ls /app/
Volumes are mounted only when the container is run, not during the build process. This is intentional, since the image generation should not depend on anything outside your build context (the directory where your Dockerfile is). If you need any files during image build, you should COPY them in.
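A minimal sketch of the COPY approach (the FROM line is an assumption, since the question's Dockerfile excerpt omits it):
FROM alpine:latest
WORKDIR /app
# the sources come from the build context, so they exist at build time
COPY valve_controller/ /app/
RUN ls /app/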
Volumes are mounted the moment you run the container, so you can't refer to their files during the build process.
Adding a command directive in docker-compose with a list of commands to run, separated with && or ;, would do the trick; see the sketch below.
It's also possible to create an initial image with the files baked in and build from that one.
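For example, a sketch of the command approach (verify.sh and make are stand-ins for whatever verification and compilation steps you need):
services:
  valve_controller:
    build:
      context: .
      dockerfile: ./valve_controller/Dockerfile
    working_dir: /app
    volumes:
      - ./valve_controller:/app
    # runs at container start, after the bind mount, so /app is populated
    command: sh -c "ls /app/ && ./verify.sh && make"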

Using the same volume for two Docker containers

I have two containers, one of which provides a file that I need in another container, and I want to make the first container write that file to a volume, then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
  web_data:
services:
  build_jar:
    build:
      context: .
      dockerfile: Dockerfile-gradle
    volumes:
      - web_data:/workdir
  generate_html:
    depends_on:
      - build_jar
    ports:
      - "8080:80"
    build: .
    volumes:
      - web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which seems not to be the case. So I create it using mkdir and I do actually get my data saved: I can go to /var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Well, secondly, now I need to use this volume with another container, which also does not have the workdir directory existing already. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it's an empty directory. How do I get the files on the volume that the first container put there? Am I missing something in either Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
In the docker-compose.yml file, you can delete the volume along with the no-op service that does the build:
version: '3.8'
services:
  generate_html:
    ports:
      - "8080:80"
    build: .
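With that in place, a single command rebuilds the jar in the builder stage and restarts the service (a sketch):
docker-compose up -d --build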
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not how it works: when you declare a volume mapping for a service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion probably comes from expecting volumes to work at build time, which unfortunately is not the case.
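To illustrate that point with a sketch: the image itself guarantees the path and its content, and a named volume mounted there is seeded from that content on first use (stat.jar standing in for an artifact available in the build context):
FROM openjdk:8-jre-slim
WORKDIR /workdir
# the image guarantees that /workdir exists and contains the jar;
# a named volume mounted at /workdir is seeded from this on first use
COPY stat.jar /workdir/stat.jar
CMD ["java", "-jar", "/workdir/stat.jar"]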

Docker: How to update your container when your code changes

I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in the Dockerfile, it rebuilds all the images before bringing up the stack. It can be used in a shell script if needed.
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are copied into it, and no new change is reflected until you rebuild the image.
But:
If it is only for local development, you can leverage volume sharing to update the code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach, your code (and new changes) will appear both on the host and in the running container.
You will also need to restart the server on every change; for this, run your app with nodemon, which watches for code changes and restarts the server (see the sketch after the compose snippet below).
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
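And a sketch of the nodemon side in the Dockerfile (src/index.js is an assumed entry point; use whatever your npm start runs):
RUN npm install -g nodemon
# nodemon watches the bind-mounted source and restarts node on every change
CMD ["nodemon", "src/index.js"]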
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically you issue docker compose up once (perhaps from a shell script), and once your containers are running, you can create a Jenkinsfile or configure a CI/CD pipeline to pull the updated image and apply it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.

docker-compose named volume copy contents on initial start

I may be a little confused about how volumes work; I keep reading the same things over and over, and as far as I can tell this should be working. I want the contents of a folder inside the container to be copied over when the volume is initialized for the first time.
I have a Dockerfile like this:
https://github.com/docker-library/tomcat/blob/f6dc3671bf56465917b52c8df4356fa8f0ebafcd/7/jre7/Dockerfile
Just before the final
EXPOSE 8080
CMD ["catalina.sh", "run"]
I add a VOLUME declaration, so the Tomcat Dockerfile ends with:
VOLUME ["/opt/tomcat/conf"]
EXPOSE 8080
CMD ["catalina.sh", "run"]
When I build this image, I tag it as tomcat.
Then I have another Dockerfile with a bunch of environment variables that I set and a script.
Like so:
MyApp Dockerfile
FROM tomcat
ENV SOME_VAR=Test1
COPY assets/script.sh /script.sh
The second image builds from the first image and just adds a script and sets some settings. So far so good.
I want to do something like this in my docker-compose.yml file:
Docker Compose file
website:
  image: myapp
  ports:
    - "8000:8080"
  volumes:
    - /srv/myapp/conf:/opt/tomcat/conf
I want the contents of /opt/tomcat/conf to be copied into /srv/myapp/conf when that folder is first created. Everything I read suggests that this should work, but it just creates the folder and doesn't copy the contents. Am I missing something here?
Basically I have this issue:
https://github.com/moby/moby/issues/18670
Oh, and my docker-compose YAML file uses version 2.1, if that makes a difference.
What you are looking for is not possible when you bind-mount a host directory into the container. It only works with a named volume: on first use, Docker copies the content the image has at the mount point into the volume. You need to change your compose file to:
version: '3'
services:
  website:
    image: myapp
    ports:
      - "8000:8080"
    volumes:
      - appconfig:/opt/tomcat/conf
volumes:
  appconfig: {}
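To check the seeded contents after the first docker-compose up (the volume name below is an assumption: Compose prefixes it with the project/directory name, so confirm it with docker volume ls):
docker volume ls
docker run --rm -v myapp_appconfig:/conf alpine ls /conf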
If you want to get the config out onto the host instead, you can keep your original compose file and seed the directory with a shell script:
#!/bin/bash
# seed /srv/myapp/conf from the image on the very first run only
if [ ! -d "/srv/myapp/conf" ]; then
  mkdir -p /srv/myapp/conf
  docker create --name myappconfig myapp
  docker cp myappconfig:/opt/tomcat/conf /srv/myapp/
  docker rm myappconfig
fi
docker-compose up -d
For this to work, the directory must not exist the first time you run the script.
