Dockerfile is not copying all files - docker

I'm trying to run a discord.py bot in a Docker container, but when I run the container, Docker says that I'm "missing a module": the Dockerfile is not copying all the files/folders from the source code.
This is my directory:
These are the contents of my docker-compose.yml:
version: '3'
services:
  bot:
    build: .
    restart: always
    volumes:
      - ./.env:/usr/src/app/.env
This is my Dockerfile:
FROM python:bullseye
WORKDIR /usr/app/src
COPY bot bot
CMD ["python", "-m", "bot"]
When I run sudo docker compose up, it fails with the following log:
Checking the Docker image files, it seems like it's copying all the contents inside the bot folder, but it's not copying the folder itself.
The code works fine if I run it outside of the container, so it's not related to the bot code itself.
How can I fix this?
This is my first Docker container; I'm really new to this.

The correct syntax should be:
COPY bot bot/
By design, COPY always copies the contents of the directory if the source is a directory; by adding the trailing / to the destination, you tell Docker that the destination is a directory, so it will create it for you if needed.
See the full documentation.
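For reference, a minimal sketch of the corrected Dockerfile. The WORKDIR is kept from the question as-is; note the compose file mounts .env under /usr/src/app, so you may want the two paths to match:
FROM python:bullseye
WORKDIR /usr/app/src
# Trailing slash: create ./bot inside the image and copy the directory's contents into it
COPY bot bot/
CMD ["python", "-m", "bot"]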

Related

Can't access files of bind mounted volume during build process

If I attach myself to the container and check the files inside /app, I can see my host content inside valve_controller, modify it, etc.
But I can't see the files during the build process (RUN ls /app/ returns nothing). I need to verify the code and then compile it.
Are volumes only mounted after the image build?
Which option do I have that doesn't involve COPY?
version: '3.7'
services:
  valve_controller:
    container_name: "valve_controller"
    build:
      context: .
      dockerfile: ./valve_controller/Dockerfile
    working_dir: /app
    tty: true
    volumes:
      - ./valve_controller:/app
Dockerfile
VOLUME /app
RUN ls /app/
Volumes are mounted only when the container is run, not during the build process. This is intentional, since the image generation should not depend on anything outside your build context (the directory where your Dockerfile is). If you need any files during image build, you should COPY them in.
Volumes are mounted the moment you run the container, so you can't refer to the files during the build process.
Adding a command entry in the docker-compose file with a list of commands, separated with && or ;, would do the trick.
It's also possible to create an initial image with the volume and import that one.
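If the files really are needed at build time, COPY them from the build context instead. A minimal sketch, assuming the sources sit in ./valve_controller and compile with make (both assumptions; adjust to your toolchain):
FROM gcc:latest
WORKDIR /app
# The build context is the project root, so this path is relative to it
COPY valve_controller/ /app/
# Now succeeds: the files are baked into the image, not mounted
RUN ls /app/
# Hypothetical compile step
RUN make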

Using the same volume for two Docker containers

I have two containers, one of which provides a file that I need in another container, and I want to make the first container write that file to a volume, then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
  web_data:
services:
  build_jar:
    build:
      context: .
      dockerfile: Dockerfile-gradle
    volumes:
      - web_data:/workdir
  generate_html:
    depends_on:
      - build_jar
    ports:
      - "8080:80"
    build: .
    volumes:
      - web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which seems not to be the case. So I create it using mkdir, and I do actually get my data saved: I can go to /var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Secondly, now I need to use this volume with another container, which also does not have the workdir directory already. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it's an empty directory. How do I get the files that the first container put on the volume? Am I missing something in either the Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
In the docker-compose.yml file, you can delete the volume along with the no-op service that does the build:
version: '3.8'
services:
  generate_html:
    ports:
      - "8080:80"
    build: .
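With the multi-stage Dockerfile above, a single rebuild produces the jar in the builder stage and copies it into the runtime image; no shared volume is involved:
docker-compose up --build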
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not how it works: when you declare a volume mapping for a service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion probably comes from expecting volumes to work at build time, which unfortunately is not the case.

Nothing happens when copying file with Dockerfile

I use docker-compose for a simple Keycloak container and I've been trying to install a new theme for Keycloak.
However, I've been unable to copy even a single file to the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these commands works or causes any events or warnings in the logs:
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick primer on Docker:
docker build: Create an image from a Dockerfile
docker run: Create a container from an image.
(you can build the image yourself or use an existing image from Docker Hub)
Based on what you said, you have two options.
Create a new Docker image based on the existing one and add the theme. Something like:
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image
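For example, from the directory containing the Dockerfile (the tag name is just an illustration):
docker build -t keycloak-with-theme .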
Mount the theme into the correct directory using a docker-compose volume:
version: '3'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
If you use COPY, the files have to be in the same directory as your Dockerfile (or a subdirectory of it) and have to be present at build time. Absolute host paths don't work.
/tmp as a destination is also a bit tricky, because the container's startup process might clean out /tmp, which means you would never see that file in a running container.
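As a sketch, assuming the theme directory is first moved (or copied) into the build context next to the Dockerfile:
# Relative path, resolved inside the build context
COPY kctheme/theme/login/ /opt/jboss/keycloak/themes/keycloak/login/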

docker-compose: use file from volume in Dockerfile

I defined a volume in my docker-compose.yml. I want to use one of the files from that volume in my Dockerfile, but I get the error: "No such file or directory".
If I build the container without accessing the files in the Dockerfile, I can see all the files from the volume inside the container, at the location specified in the docker-compose.yml file.
Is this how it should work, or am I doing something wrong? I think I am missing something.
repository: https://github.com/Lightshadow244/OwnMusicWeb
docker-compose.yml:
version: '3'
services:
  ownmusicweb:
    build: .
    container_name: ownmusicweb
    hostname: ownmusicweb
    volumes:
      - ~/OwnMusicWeb/ownmusicweb:/ownmusicweb
    ports:
      - 83:8000
    tty: true
Dockerfile:
FROM ubuntu:latest
WORKDIR /ownmusicweb
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "python-pip"]
RUN ["pip", "install", "--upgrade", "pip"]
RUN ["pip", "install", "Django", "eyeD3", "djangorestframework", "markdown", "django-filter"]
RUN ["python", "/ownmusicweb/manage.py", "migrate"]
RUN ["python", "/ownmusicweb/manage.py", "runserver", "0.0.0.0:8000"]
Summarising discussion in comments:
The RUN directive has no access to the volume because it isn't mounted yet. During the build, Docker only has the build context, which is what the ADD (or COPY) directive uses. But files added that way are baked into the image, so you would need a rebuild to update them.
After the build finishes (triggered by build: . in docker-compose.yml), Docker launches the container and mounts the volume. That's too late in your case.
The suggested mechanism is to use an ENTRYPOINT with a script that launches your stuff. It runs at container start, after the build, so it has access to the volume; see the sketch below.
Another approach, which seems a bit cleaner to me, is to use the command directive of docker-compose; you can put the same script there. It depends on how you do deployment and how you use Docker in your development environment.
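A minimal sketch of the entrypoint approach (the file name entrypoint.sh is hypothetical; the migrate/runserver commands are taken from the Dockerfile above):
#!/bin/sh
# entrypoint.sh — runs at container start, after the volume is mounted
python /ownmusicweb/manage.py migrate
exec python /ownmusicweb/manage.py runserver 0.0.0.0:8000
In the Dockerfile, replace the two final RUN lines with:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]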

Docker VOLUME for different users

I'm using docker and docker-compose to build my app. There are now two developers on the project, which is hosted on GitHub.
Our project structure is:
sup
  dockerfiles
    dev
      build
        .profile
        Dockerfile
      docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run container like so:
docker-compose up -d
Everything works fine, but because of our different OSes we keep our code in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer. So I can't just add a volume instruction to the Dockerfile.
For now we solve this problem in a wrong way:
- I use lsyncd with my personal config to transfer files into the container;
- while the second developer uses a volume instruction in the Dockerfile but doesn't commit it.
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts our code into the app container from different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
Paths in the Dockerfile (for example in COPY instructions) are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
That way the build context for the Dockerfile will be the project root.
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus, as an option while running the container (as I don't know docker-compose):
-v /path/to/your/code:/usr/src/app
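In docker-compose terms, the same mount can be expressed with a path relative to docker-compose.yml, which works for both developers regardless of where they cloned the project (here ../.. assumes the compose file stays at ./dockerfiles/dev):
app:
  container_name: sup-dev
  build: ./build
  volumes:
    - ../..:/usr/src/app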
