Docker compose relative folder from host volume [duplicate]

This question already has answers here:
Docker-compose volume mount before run
(2 answers)
Closed 5 years ago.
I am trying to mount a folder from the host as a volume in the container, and I am wondering at what step this mounting happens. I would expect it to be during the build phase, but it seems that my folder is not mounted.
The folder structure:
app/
    requirements.txt
docker/
    web/
        Dockerfile
docker-compose.yml
docker-compose.yml contains the following:
version: '2'
services:
  web:
    build: ./docker/web
    volumes:
      - './app:/myapp'
The Dockerfile of the container:
FROM ubuntu:latest
WORKDIR /myapp
RUN ls
I am mounting the app directory from the host into /myapp inside the container; the build process sets the working directory and runs ls to see the contents, and I am expecting my requirements.txt file to be there.
What am I doing wrong?
docker-compose v1.16.1, docker v1.16.1. I am using Docker for Windows.

Your requirements.txt file isn't copied into the image at build time because it isn't part of the build context.
Volumes are mounted at container creation time, not at build time. If you want the file to be available inside your Dockerfile (i.e. at build time), you need to include it in the build context and COPY it into the image.
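A quick way to see the difference (a sketch, assuming the compose file from the question; with the classic builder the RUN ls output shows up in the build log):
docker-compose build            # RUN ls executes now: /myapp is empty
docker-compose run --rm web ls  # ls runs in a container: the bind mount is active
                                # -> requirements.txt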
From the Docker documentation here: https://docs.docker.com/engine/reference/builder/
The first thing a build process does is send the entire context (recursively) to the daemon
There are two issues here:
You aren't sending your requirements file with your build context, because your Dockerfile is in a separate directory structure, so requirements.txt is not available at build time.
You aren't copying the file into the image before you run the ls command (COPY ./app/requirements.txt /myapp/).
If you change your directory structure to make requirements.txt available at build time, and add a COPY command to your Dockerfile before you run your ls, you should see the behavior you expect during the build.
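A minimal sketch of that restructuring (one way to do it; the build context moves to the project root so that app/ sits inside it):
docker-compose.yml:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    volumes:
      - './app:/myapp'
docker/web/Dockerfile:
FROM ubuntu:latest
WORKDIR /myapp
COPY app/requirements.txt /myapp/
RUN ls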

Related

Docker is not copying file on a shared volume during build

I would like to have the files created during the build phase stored on my local machine.
I have this Dockerfile
FROM node:17-alpine as builder
WORKDIR /app
COPY ./package.json ./
RUN npm install
RUN npm i -g @angular/cli
COPY . .
RUN ng build foo --prod
# this is just for a test
RUN touch test.txt
# just to keep the container running
CMD ["ng", "serve"]
I also created a shared volume via docker compose
services:
  client:
    build:
      dockerfile: Dockerfile.prod
      context: ./foo
    volumes:
      - /app/node_modules
      - ./foo:/app
If I attach a shell to the running container and run touch test.txt, the file is created on my local machine.
I can't understand why the files are not created on the building phase...
If I use a multi-stage Dockerfile, the dist folder is created in the container (just by adding this to the Dockerfile), but I still can't see it on the local machine:
FROM nginx
EXPOSE 80
COPY --from=builder /app/dist/foo /usr/share/nginx/html
I can't understand why the files are not created on the building phase...
That's because the build phase doesn't involve volume mounting.
Mounting volumes only occurs when creating containers, not when building images. If you map a volume to an existing file or directory, Docker "overrides" the image's path, much like a traditional Linux mount. This means that, before the container is created, your image has everything from /app/* pre-packaged, and that's why you're able to copy the contents in the multi-stage build.
However, as you defined a volume with the - ./foo:/app config in your docker-compose file, the container won't have those files anymore, and instead the /app folder will have the current contents of your ./foo directory.
If you wish to copy the contents of the image to a mounted volume, you'll have to do it in the ENTRYPOINT, as it runs upon container instantiation, and after the volume mounting.
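A minimal sketch of that approach (paths and the script name are illustrative, and it assumes the ./foo:/app mount from the compose file above): stash the build output outside /app at build time, then copy it back once the volume is in place.
Dockerfile additions:
RUN cp -r /app/dist /opt/dist   # keep a copy outside the mount point
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["ng", "serve"]
entrypoint.sh:
#!/bin/sh
# Runs at container start, i.e. after ./foo has been mounted over /app,
# so this copy lands on the host side as well.
cp -r /opt/dist /app/
exec "$@"   # hand off to the CMD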

Docker container not using latest composer.json file

I'm going crazy here.
I've been working on a Dockerfile and docker-compose.yml file for my project. I recently updated my project's dependencies. When I build the project outside of a container using composer install, it builds with the correct dependencies. However, when I build the project inside a docker container, it downloads and installs the latest dependencies, but then somehow runs the application using obsolete dependencies!
First of all, this is what my Dockerfile looks like:
FROM composer
# Set the working directory within the docker container
WORKDIR /app
# Copy in the app, then install dependencies.
COPY . /app
RUN composer install
I have excluded the composer.lock file and the vendor directory in my .dockerignore:
vendor
composer.lock
Here's my docker-compose.yml:
version: "3"
services:
app:
build: .
volumes:
- app:/app
webserver:
image: richarvey/nginx-php-fpm
volumes:
- app:/var/www/html
volumes:
app:
Note that the build process occurs within the app volume. I don't think this should be part of the problem, as I run docker system prune each time, to purge all existing volumes.
This is what I do to run the container. While troubleshooting, I have been running these commands to eliminate any cached files before starting the container:
$ docker system prune
$ docker-compose build --no-cache
$ docker-compose up --force-recreate
As I watch the dependencies install and download, I can see that it is downloading and installing the right versions! So it must have the correct composer.json file at some point in the process.
Yet somehow, once the build is complete and the application starts, I get the same old warnings about obsolete dependencies, and sure enough, the composer.json inside the container is obsolete!
So my questions are:
How TF is the composer.json file in the container obsolete?
WHERE is it getting the obsolete file from, since it no longer exists in any image or cache??
How TF is it managing to install the latest dependencies with this obsolete composer.json file, but then not using them, and in fact reverting the composer.json file and the dependencies??
I think the problem is that you copy your local files into the app container and run composer install on that copy. Since this does not affect your host system, your webserver, which actually serves your project, will still use the outdated local version instead of the copy from your other image.
You could try using multi-stage builds or something like this:
COPY --from=app:latest /app /var/www/html
This will copy the artifact from your "build-container", i.e. your project with the installed dependency in app, into the actual container that is running the code, i.e. webserver. Unfortunately, I don't think this will work (well) with your setup, where you mount the volume into that location.
Well, I finally fixed this issue, although parts of my original problem still confuse me.
Here's what I learned:
The docker-compose up process goes in this order:
1. If an image already exists, use it, even if the Dockerfile (or files used by it) has changed. (This can be avoided with docker-compose up --build.)
2. If there is no existing image, build the image from the Dockerfile.
3. Mount the volumes specified in the docker-compose file.
A huge part of my problem was that I thought that the volumes were mounted before the build process, and that my application would be installed into this volume as a result of these commands:
COPY . /app
RUN composer install
However, these files were later overwritten when the volume was mounted at the same location within the container (/app).
Now, since I was not mounting a host directory, just an ephemeral, named volume, the /app directory should have been empty. I still don't understand why it wasn't, considering I was clearing my existing Docker volumes with docker system prune before each build. Whatever.
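For what it's worth, two details may explain the leftovers (my assumptions, not verified in this thread): docker system prune does not touch volumes unless you pass --volumes, and Docker seeds an empty named volume with the image's content at the mount point the first time a container uses it.
docker system prune --volumes   # plain `docker system prune` leaves volumes alone
docker-compose down -v          # or: remove the volumes declared by this compose project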
In the end, I used @dbrumann's solution. This was simpler, did not require any Docker volumes, and avoids leaving a live composer container around after the build has completed (which would be bad for production). My Dockerfile now looks like this:
# Install dependencies using the composer image
FROM composer AS composer
# Set the working directory within the docker container
WORKDIR /app
# Copy in the app, then install dependencies.
COPY . .
RUN composer install
# Start the nginx server
FROM richarvey/nginx-php-fpm
# Copy over files from the composer image, which is then discarded automatically
WORKDIR /var/www/html
COPY --from=composer /app .
And the new docker-compose.yml:
version: "3.7"
services:
webserver:
build: .
tty: true
ports:
- "80:80"
- "443:443"

How to copy local filesystem into Docker container

I have a local project directory structure like:
config
test
docker-compose.yaml
DockerFile
pip-requirements.txt
src
app
app.py
I'm trying to use Docker to spin up a container to run app.py. Simple in concept, but this has proven extraordinarily difficult. I'm keeping my Docker files in a separate sub-folder because I plan on having a large number of different environments, and I don't want to clutter my top-level folder with dozens of files like Dockerfile.1, Dockerfile.2, etc.
My docker-compose.yaml looks like:
version: '3'
services:
  worker:
    image: myname:mytag
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - ./src/app:/usr/local/myproject/src/app
My Dockerfile looks like:
FROM python:2.7
# Set the working directory.
WORKDIR /usr/local/myproject/src/app
# Copy the current directory contents into the container.
COPY src/app /usr/local/myproject/src/app
COPY pip-requirements.txt pip-requirements.txt
# Install any needed packages specified in pip-requirements.txt
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
# Define environment variable
ENV PYTHONUNBUFFERED 1
CMD ["./app.py"]
If I run from the top-level directory of my project:
docker-compose -f config/test/docker-compose.yaml up
it succeeds in building the image, but fails when attempting to run the image with the error:
ERROR: for worker Cannot start service worker: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./app.py\": stat ./app.py: no such file or directory": unknown
If I inspect the image's filesystem with:
docker run --rm -it --entrypoint=/bin/bash myname:mytag
it correctly dumps me into /usr/local/myproject/src/app. However, this directory is empty, explaining the runtime error. Why is this empty? Shouldn't the COPY statement and volumes have populated the image with my application code?
For one, you're clobbering the data you set up at build time by using docker-compose to overlay a directory on top of it. Let's first discuss the differences between the Dockerfile (image) and docker-compose (runtime).
Normally, you would use the COPY directive in the Dockerfile to copy a component of your local directory into the image so that it is immutable. In most application deployments, this means we bundle our entire application into the directory and prepare it to run. This means that it is not dynamic (changes you make to the code afterwards are not visible in the container), but it is a gain in terms of security.
docker-compose is a runtime specification, meaning "Once I have an image, I want to programmatically define how it runs". By defining a volume here, you're saying "I want the local directory (from the perspective of the compose file) ./src/app to be overlaid onto /usr/local/myproject/src/app".
Thus anything you built into the image doesn't really matter: you're adding another layer on top of the image which will take precedence over what was built into it.
It may also be something to do with you specifying the WORKDIR already and then using a ./ prefix in the CMD. It would be worth trying it as just CMD ["app.py"].
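If the CMD is the culprit, two common fixes (a sketch; the question doesn't show app.py, so the shebang is an assumption):
# Option 1: invoke the interpreter explicitly
CMD ["python", "app.py"]
# Option 2: make the script executable at build time and keep the original CMD
# (requires a #!/usr/bin/env python shebang at the top of app.py)
RUN chmod +x ./app.py
CMD ["./app.py"]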
What happens if you:
Build the image: docker build -t "test" .
Run the image manually: docker run --rm -it test

Troubleshoot directory path error in COPY command in docker file

I am using the COPY command in my Dockerfile on top of ubuntu 16.04. I am getting the error "no such file or directory" even though the directory is present. In the Dockerfile below I want to copy the directory "auth" (present inside the workspace directory) to the docker image (at path /home/ubuntu) and then build the image.
FROM ubuntu:16.04
RUN apt-get update
COPY /home/ubuntu/authentication/workspace /home/ubuntu
WORKDIR /home/ubuntu/auth
A Dockerfile COPY command can only refer to files under the build context, i.e. the current location of the Dockerfile, aka .
So you have a few options now:
If it is possible to copy the /home/ubuntu/authentication/workspace/ directory content to somewhere inside your project before the build (so it is included in your Dockerfile context and you can access it via COPY ./path/to/content /home/ubuntu), that is great. But sometimes you don't want that.
Instead of copying the directory, bind it to your container via a volume:
When you run the container, add a -v option:
docker run [....] -v /home/ubuntu/authentication/workspace:/home/ubuntu [...]
Mind that a volume is designed so any change you make inside the container dir (/home/ubuntu) will affect the bound directory on the host side (/home/ubuntu/authentication/workspace) and vice versa.
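Since this thread is about docker-compose, the equivalent bind mount there would look like this (the service name and image are illustrative):
services:
  app:
    image: ubuntu:16.04
    volumes:
      - /home/ubuntu/authentication/workspace:/home/ubuntu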
I also found another trick: you can force the Dockerfile to accept a different context. Sit inside the /home/ubuntu/authentication/workspace/ directory and run
docker build . -f /path/to/Dockerfile
so that inside the Dockerfile you can refer to /home/ubuntu/authentication/workspace as the context (.).

Docker: How to copy a file from one folder in a container to another?

I want to copy my compiled war file to the tomcat deployment folder in a Docker container. As COPY and ADD deal with moving files from the host to the container, I tried
RUN mv /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
as a modification to the answer for this question. But I am getting the error
mv: cannot stat ‘/tmp/projects/myproject/target/myproject.war’: No such file or directory
How can I copy from one folder to another in the same container?
You can create a multi-stage build:
https://docs.docker.com/develop/develop-images/multistage-build/
Build the .war file in the first stage and name the stage, e.g. build, like this:
FROM my-fancy-sdk as build
# result is your myproject.war
RUN my-fancy-build
Then in the second stage:
FROM my-fancy-sdk as build2
COPY --from=build /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
A better solution would be to use volumes to bind individual war files inside the docker container, as done here.
Why your command fails
The command you are running tries to access files which are outside the build context of the Dockerfile. When you build the image using docker build ., the daemon sends the context to the builder and only those files are accessible during the build. In docker build . the context is ., the current directory. Therefore, it will not be able to access /tmp/projects/myproject/target/myproject.war.
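One way to apply that here (a sketch, assuming the .war really is at that host path): run the build from a directory that contains the artifact and use COPY instead of RUN mv.
cd /tmp/projects
docker build -t myapp -f /path/to/Dockerfile .
# and in the Dockerfile, relative to that context:
COPY myproject/target/myproject.war /usr/local/tomcat/webapps/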
Copying from inside the container
Another option would be to copy while you are inside the container. First use volumes to mount the local folder inside the container, then go inside the container using docker exec -it <container_name> bash and copy the required files.
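Spelled out (the container name is illustrative):
docker run -d --name mytomcat -v /tmp/projects:/tmp/projects <image_name>
docker exec -it mytomcat bash
# then, inside the container:
mv /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/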
Recommendation
But still, I highly recommend using
docker run -v "/tmp/projects/myproject/target/myproject.war:/usr/local/tomcat/webapps/myproject.war" <image_name>
