How to copy local filesystem into Docker container

I have a local project directory structure like:
config/
    test/
        docker-compose.yaml
        Dockerfile
pip-requirements.txt
src/
    app/
        app.py
I'm trying to use Docker to spin up a container to run app.py. Simple in concept, but this has proven extraordinarily difficult. I'm keeping my Docker files in a separate sub-folder because I plan on having a large number of different environments, and I don't want to clutter my top-level folder with dozens of files like Dockerfile.1, Dockerfile.2, etc.
My docker-compose.yaml looks like:
version: '3'
services:
  worker:
    image: myname:mytag
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - ./src/app:/usr/local/myproject/src/app
My Dockerfile looks like:
FROM python:2.7
# Set the working directory.
WORKDIR /usr/local/myproject/src/app
# Copy the current directory contents into the container.
COPY src/app /usr/local/myproject/src/app
COPY pip-requirements.txt pip-requirements.txt
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
# Define environment variable
ENV PYTHONUNBUFFERED 1
CMD ["./app.py"]
If I run from the top-level directory of my project:
docker-compose -f config/test/docker-compose.yaml up
it succeeds in building the image, but fails when attempting to run the image with the error:
ERROR: for worker Cannot start service worker: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./app.py\": stat ./app.py: no such file or directory": unknown
If I inspect the image's filesystem with:
docker run --rm -it --entrypoint=/bin/bash myname:mytag
it correctly dumps me into /usr/local/myproject/src/app. However, this directory is empty, explaining the runtime error. Why is this empty? Shouldn't the COPY statement and volumes have populated the image with my application code?

For one, you're clobbering the image's content: you bake it in during the build stage and then use docker-compose to overlay a directory on top of it at runtime. Let's first discuss the difference between the Dockerfile (image) and docker-compose (runtime).
Normally, you would use the COPY directive in the Dockerfile to copy part of your local directory into the image so that it is immutable. In most application deployments, this means we bundle our entire application into the image and prepare it to run. The result is not dynamic (changes you make to the code afterwards are not visible in the container), but it is a gain in terms of security.
docker-compose is a runtime specification, meaning: "once I have an image, I want to programmatically define how it runs". By defining a volume here, you're saying "I want the local directory ./src/app (relative to the compose file) to be overlaid onto /usr/local/myproject/src/app in the container". Note that host paths in a compose file are resolved relative to the directory containing the compose file, so ./src/app here resolves to config/test/src/app, which doesn't exist on your host; Docker creates it as an empty directory and mounts it, which is why the directory inside your container is empty.
Thus anything you built into the image doesn't really matter: you're adding another layer on top of the image, and it takes precedence over what was built in.
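One way to line everything up, as a sketch (untested against your exact tree, but assuming the compose file stays in config/test): point the build context at the project root and make the volume's host path relative to the compose file:

version: '3'
services:
  worker:
    image: myname:mytag
    build:
      context: ../..
      dockerfile: ./config/test/Dockerfile
    volumes:
      - ../../src/app:/usr/local/myproject/src/app

With the context at the project root, the Dockerfile's COPY src/app ... and COPY pip-requirements.txt ... resolve as written, and the volume now overlays your real source directory instead of an empty one.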
It may also have something to do with the fact that you've already set WORKDIR and are then using a ./ reference in the CMD. It would be worth trying it as just CMD ["app.py"].
What happens if you:
1. Build the image: docker build -t test .
2. Run the image manually: docker run --rm -it test

Related

How to copy a subproject to the container in a multi container Docker app with Docker Compose?

I want to build a multi container docker app with docker compose. My project structure looks like this:
docker-compose.yml
...
webapp/
    ...
    Dockerfile
api/
    ...
    Dockerfile
Currently, I am just trying to build and run the webapp via docker compose up with the correct build context. When building the webapp container directly via docker build, everything runs smoothly.
However, with my current specifications in the docker-compose.yml, the line COPY . /webapp/ in webapp/Dockerfile (see below) copies the whole parent project to the container, i.e. the directory containing the docker-compose.yml, and not just the webapp/ subdirectory.
For some reason the line COPY requirements.txt /webapp/ works as expected.
What is the correct way of specifying the build context in docker compose? Why is the . in the Dockerfile interpreted as relative to the docker-compose.yml, while requirements.txt is resolved relative to the Dockerfile as expected? What am I missing?
Here are the contents of the docker-compose.yml:
version: "3.8"
services:
frontend:
container_name: "pc-frontend"
volumes:
- .:/webapp
env_file:
- ./webapp/.env
build:
context: ./webapp
ports:
- 5000:5000
and webapp/Dockerfile:
FROM python:3.9-slim
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# set working directory
WORKDIR /webapp
# copy dependencies
COPY requirements.txt /webapp/
# install dependencies
RUN pip install -r requirements.txt
# copy project (does not work as intended)
COPY . /webapp/
# add entrypoint to app
# ENTRYPOINT ["start-gunicorn.sh"]
# for debugging
CMD [ "ls", "-la" ]
# expose port
EXPOSE 5000
The COPY directive is (probably) working the way you expect, but you have a volumes: block that overwrites the image content with something else. Delete the volumes: block.
The image build sequence works exactly the way you expect: build: { context: ./webapp } uses the webapp subdirectory as the build context and sends it to the Docker daemon. When the Dockerfile says, for example, COPY requirements.txt ., the file comes out of this directory. If you run, for example, docker-compose run frontend pip freeze, you should see the installed Python packages.
After the image is built, Compose starts a container, and at that point volumes: take effect. When you say volumes: ['.:/webapp'], the . before the colon refers to the directory containing the docker-compose.yml file (not the webapp subdirectory), and the mount hides everything in the container's /webapp directory. So you're replacing the image's /webapp (which was built from the webapp subdirectory) with the current directory on the host (one level higher).
You should usually be able to successfully combine an ordinary host-based development environment and a Docker deployment setup. Use a non-Docker Python virtual environment to build the application and run its unit tests, then use docker-compose up --build to run integration tests and the complete application. With a setup like this, you don't need to deal with the inconveniences of the Python runtime being "somewhere else" as you're developing, and you can safely remove the volumes: block.
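For reference, a sketch of the compose file with the volumes: block removed and everything else kept as in the question:

version: "3.8"
services:
  frontend:
    container_name: "pc-frontend"
    env_file:
      - ./webapp/.env
    build:
      context: ./webapp
    ports:
      - 5000:5000

After this change, docker-compose up --build runs whatever COPY . /webapp/ baked into the image.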

When is a Docker volume available when using docker-compose?

I'm relatively new to Docker. I have a docker-compose.yml file that creates a volume. In one of my Dockerfiles I check that the volume was created by listing its contents, and I get an error saying the volume doesn't exist. When does a volume actually become available when using docker-compose?
Here's my docker-compose.yml:
version: "3.7"
services:
app-api:
image: api-dev
container_name: api
build:
context: .
dockerfile: ./app-api/Dockerfile.dev
ports:
- "5000:5000"
volumes:
- ../library:/app/library
environment:
ASPNETCORE_ENVIRONMENT: Development
I also need to have the volume available when creating my container because I use it in my dotnet restore command.
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS api-env
#list volume contents
RUN ls -al /app/library
WORKDIR /app/app-api
COPY ./app-api/*.csproj .
#need to have volume created before this command
RUN dotnet restore --source https://api.nuget.org/v3/index.json --source /app/library
#copies all files into current directory
COPY ./app-api/. .
RUN dotnet run Api.csproj
EXPOSE 5000
RUN echo "'dotnet running'"
I thought that by adding volumes: ... to docker-compose.yml, the volume would be created automatically. Do I still need to add a create-volume command in my Dockerfile?
TL;DR:
The commands you give in RUN are executed before mounting volumes.
The CMD will be executed after mounting the volumes.
Longer answer
The Dockerfile is used when building the container image. The image is then used in a docker-compose.yml file to start a container, to which a volume is attached. The RUN commands you are executing run when the image is built, so they do not have access to the volume.
You would normally issue a set of RUN commands, which would prepare the container image. Finally, you would define a CMD command, which would tell what program should be executed when a container starts, based on this image.
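A minimal illustration of that timing, as a sketch (reusing the image from the question; the || true is only there so the build doesn't abort):

FROM mcr.microsoft.com/dotnet/core/sdk:3.1
# Build time: no volume is mounted yet, so this lists an empty or missing directory.
RUN ls -al /app/library || true
# Container start: by now docker-compose has mounted ../library at /app/library.
CMD ["ls", "-al", "/app/library"]

If files are needed at build time, as with dotnet restore here, they have to be brought in with COPY from the build context rather than through a volume; in this layout that may mean widening the build context so that ../library falls inside it.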

Why is a file created in a Dockerfile RUN not on the host, when I already mount a host dir for the service?

I am using docker-compose to run my golang app.
Here is my Dockerfile
FROM golang:1.13
WORKDIR /app
COPY go.mod ./
RUN go mod download
COPY . .
RUN go build -o main .
CMD ["/app/main"]
and my docker-compose.yml
version: '3.7'
services:
app:
build: ./myapp
container_name: myapp
volumes:
- ./myapp:/app
When I run docker-compose build, the main file does not appear in the myapp dir.
docker-compose up myapp does not work, because the main file is not found. But docker run myapp works. How can I build main.go in the Dockerfile and keep main on my host?
A docker-compose build step runs to completion, ignoring everything else in your docker-compose.yml file. The resulting image is then executed taking these options into account. So the sequence you show does two things:
1. Build the image, COPYing its source code in and producing a /app/main binary inside the image; nothing on the host system is affected.
2. Run a container based on that image, but mounting the current ./myapp directory and its contents over /app in the container, hiding anything there.
In this sequence of steps nothing is ever copied out of the container; you are only ever pushing things into Docker space. If you'd like to run the binary you built, delete the volumes: mount so that the binary in the image can run.
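If you do want to pull the compiled binary out of the image and onto the host, one option (a sketch; the tmp container name is arbitrary) is to create a stopped container and copy the file out:

docker build -t myapp ./myapp
docker create --name tmp myapp
docker cp tmp:/app/main ./main
docker rm tmp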
In comments you suggest your goal is to just build the Go program and get a binary out. You don't want Docker for that; the Go Getting Started page has instructions for how to install a Go toolchain, and you can just go install your program.
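For the non-Docker route, assuming the module lives in ./myapp, something like:

cd myapp
go build -o main .
./main

go build writes main straight to the host filesystem, which is what you were after.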

Docker container not using latest composer.json file

I'm going crazy here.
I've been working on a Dockerfile and docker-compose.yml file for my project. I recently updated my project's dependencies. When I build the project outside of a container using composer install, it builds with the correct dependencies. However, when I build the project inside a docker container, it downloads and installs the latest dependencies, but then somehow runs the application using obsolete dependencies!
First of all, this is what my Dockerfile looks like:
FROM composer
# Set the working directory within the docker container
WORKDIR /app
# Copy in the app, then install dependencies.
COPY . /app
RUN composer install
I have excluded the composer.lock file and the vendor directory in my .dockerignore:
vendor
composer.lock
Here's my docker-compose.yml:
version: "3"
services:
app:
build: .
volumes:
- app:/app
webserver:
image: richarvey/nginx-php-fpm
volumes:
- app:/var/www/html
volumes:
app:
Note that the build output ends up inside the app volume. I don't think this should be part of the problem, as I run docker system prune each time to purge all existing volumes.
This is what I do to run the container. While troubleshooting, I have been running these commands to eliminate any cached files before starting the container:
$ docker system prune
$ docker-compose build --no-cache
$ docker-compose up --force-recreate
As I watch the dependencies install and download, I can see that it is downloading and installing the right versions! So it must have the correct composer.json file at some point in the process.
Yet somehow, once the build is complete and the application starts, I get the same old warnings about obsolete dependencies, and sure enough, the composer.json inside the container is obsolete!
So my questions are:
How TF is the composer.json file in the container obsolete?
WHERE is it getting the obsolete file from, since it no longer exists in any image or cache??
How TF is it managing to install the latest dependencies with this obsolete composer.json file, but then not using them, and in fact reverting the composer.json file and the dependencies??
I think the problem is that you copy your local files into the app container and run composer install on this copy. Since this does not affect your host system, your webserver, which actually serves your project, will still use the outdated local version instead of the copy from your other image.
You could try using multi-stage builds or something like this:
COPY --from=app:latest /app /var/www/html
This will copy the artifact from your "build container" (your project with the installed dependencies in app) into the container that actually runs the code (webserver). Unfortunately, I don't think this will work well with your setup, where you mount a volume into that location.
Well, I finally fixed this issue, although parts of my original problem still confuse me.
Here's what I learned:
The docker-compose up process goes in this order:
1. If an image already exists, use it, even if the Dockerfile (or the files it uses) has changed. (This can be avoided with docker-compose up --build.)
2. If there is no existing image, build the image from the Dockerfile.
3. Mount the volumes specified in the docker-compose file.
A huge part of my problem was that I thought that the volumes were mounted before the build process, and that my application would be installed into this volume as a result of these commands:
COPY . /app
RUN composer install
However, these files were later overwritten when the volume was mounted at the same location within the container (/app).
Now, since I was not mounting a host directory, just an ephemeral, named volume, the /app directory should have been empty. I still didn't understand why it wasn't, considering I was clearing my existing Docker volumes with docker system prune before each build. (In hindsight, the likely explanation: docker system prune does not remove volumes unless given the --volumes flag, so the old app volume, still holding the obsolete files, survived each rebuild and was mounted over /app.)
In the end, I used @dbrumann's solution. It was simpler, did not require any Docker volumes, and avoids leaving a live composer container around after the build has completed (which would be bad for production). My Dockerfile now looks like this:
# Install dependencies using the composer image
FROM composer AS composer
# Set the working directory within the docker container
WORKDIR /app
# Copy in the app, then install dependencies.
COPY . .
RUN composer install
# Start the nginx server
FROM richarvey/nginx-php-fpm
# Copy over files from the composer image, which is then discarded automatically
WORKDIR /var/www/html
COPY --from=composer /app .
And the new docker-compose.yml:
version: "3.7"
services:
webserver:
build: .
tty: true
ports:
- "80:80"
- "443:443"

Docker COPY issues and setting a host env variable

I am new to Docker.
I would like to understand the following questions. I have been searching but I can't find the answers.
Why do I always get a wrong path when I try to copy a file?
Does that mean I can only copy files into the Docker image from the same directory as my Dockerfile? Is there a way to COPY files from other directories on the host?
Is there a way to pass the host's environment variables directly into the Dockerfile without using ARG and the --build-arg flag?
Below is what I currently have
file structure is like this:
/home/user1/docker
|__ Dockerfile
In the Dockerfile:
FROM
ARG BLD_DIR=/tmp
RUN mkdir /workdir
WORKDIR /workdir
COPY ${BLD_DIR}/a.file /workdir
I ran:
root#localhost> echo $BLD_DIR
/tmp/build    <-- BLD_DIR is a custom variable, so it's different on each dev env
docker build --build-arg BLD_DIR=${BLD_DIR} -t docker-test:1.0 -f Dockerfile .
I always got an error like:
COPY failed: stat
/var/lib/docker/tmp/docker-builder035089075/tmp/build/a.file: no such file
or directory
In a Dockerfile, you can only copy files that are available in the current Docker build context.
By default, all files in the directory where you run your docker build command are sent to the Docker daemon as the build context.
So when you use the ADD or COPY instructions, all your paths are in fact relative to the build context, as the documentation states:
Multiple resources may be specified but the paths of files and directories will be interpreted as relative to the source of the context of the build.
This is deliberate: building an image with docker build should not depend on auxiliary files on your system, and the same Docker image should not come out different when built on two different machines.
However, you can have a directory structure like this:
/home/user1/
|___file1
|___docker/
|___|___ Dockerfile
If you run docker build -t test -f docker/Dockerfile . in the /home/user1 folder, your build context will be /home/user1, so you can COPY file1 in your Dockerfile.
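With that layout, a hypothetical Dockerfile (paths resolve against /home/user1, the build context, not against the docker/ folder):

FROM alpine
WORKDIR /workdir
# file1 sits at the root of the build context, one level above the Dockerfile
COPY file1 /workdir/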
For the very same reason, you cannot use host environment variables directly in a Dockerfile. The idea is that your docker build command should "pack" all the information needed to generate the same image on two different systems.
However, you can work around it using docker-compose, as explained here: Pass host environment variables to dockerfile.
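For example, a minimal sketch of that docker-compose approach: compose substitutes ${BLD_DIR} from the host environment and hands it to the build as a build arg (paired with the ARG BLD_DIR line already in the Dockerfile above):

version: "3.7"
services:
  app:
    build:
      context: .
      args:
        BLD_DIR: ${BLD_DIR}

Running docker-compose build then behaves like passing --build-arg BLD_DIR=$BLD_DIR by hand.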
