I'm relatively new to Docker. I have a docker-compose.yml file that creates a volume. In one of my Dockerfiles I check to see the volume is created by listing the volume's contents. I get an error saying the volume doesn't exist. When does a volume actually become available when using docker compose?
Here's my docker-compose.yml:
version: "3.7"
services:
app-api:
image: api-dev
container_name: api
build:
context: .
dockerfile: ./app-api/Dockerfile.dev
ports:
- "5000:5000"
volumes:
- ../library:/app/library
environment:
ASPNETCORE_ENVIRONMENT: Development
I also need to have the volume available when creating my container because I use it in my dotnet restore command.
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS api-env
#list volume contents
RUN ls -al /app/library
WORKDIR /app/app-api
COPY ./app-api/*.csproj .
#need to have volume created before this command
RUN dotnet restore --source https://api.nuget.org/v3/index.json --source /app/library
#copies all files into current directory
COPY ./app-api/. .
RUN dotnet run Api.csproj
EXPOSE 5000
RUN echo "'dotnet running'"
I thought that by adding the volumes: section to docker-compose.yml, the volume would be created automatically. Do I still need to add a command to create the volume in my Dockerfile?
TL;DR:
The commands you give in RUN are executed before mounting volumes.
The CMD will be executed after mounting the volumes.
Longer answer
The Dockerfile is used to build the container image. That image is then referenced in a docker-compose.yml file to start a container, to which the volume is attached. The RUN instructions you are executing run while the image is being built, so they have no access to the volume.
You would normally issue a set of RUN commands that prepare the container image, and finally define a CMD command that tells Docker what program to execute when a container starts from this image.
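As a rough sketch (assuming the ../library bind mount from the compose file above), you could defer the restore until the container starts, when the mount is already in place:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS api-env
WORKDIR /app/app-api
# copy the project into the image as before
COPY ./app-api/ .
EXPOSE 5000
# /app/library does not exist at build time; restore when the container starts,
# after docker compose has mounted the volume
CMD ["sh", "-c", "dotnet restore --source https://api.nuget.org/v3/index.json --source /app/library && dotnet run --no-restore --project Api.csproj"]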
Related
I would like to have the files created during the build phase stored on my local machine.
I have this Dockerfile
FROM node:17-alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
RUN npm i -g @angular/cli
COPY . .
RUN ng build foo --prod
RUN touch test.txt #This is just for test
CMD ["ng", "serve"] #Just for let the container running
I also created a shared volume via docker compose
services:
  client:
    build:
      dockerfile: Dockerfile.prod
      context: ./foo
    volumes:
      - /app/node_modules
      - ./foo:/app
If I attach a shell to the running container and run touch test.txt, the file is created on my local machine.
I can't understand why the files are not created during the build phase...
If I use a multi-stage Dockerfile (just adding the following to the Dockerfile), the dist folder is created in the container, but I still can't see it on my local machine:
FROM nginx
EXPOSE 80
COPY --from=builder /app/dist/foo /usr/share/nginx/html
I can't understand why the files are not created during the build phase...
That's because the build phase doesn't involve volume mounting.
Volume mounting only occurs when containers are created, not when images are built. If you map a volume onto an existing file or directory, Docker "overrides" the image's path, much like a traditional Linux mount. This means that before the container is created, your image has everything from /app/* pre-packaged, which is why you're able to copy the contents in the multi-stage build.
However, as you defined a volume with the - ./foo:/app config in your docker-compose file, the container won't have those files anymore, and instead the /app folder will have the current contents of your ./foo directory.
If you wish to copy the contents of the image to a mounted volume, you'll have to do it in the ENTRYPOINT, as it runs upon container instantiation, and after the volume mounting.
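As a minimal sketch (the script name and the /build and /output paths are placeholders, not taken from the question), an entrypoint script could copy output baked into the image into the mounted directory and then hand off to the main command:
#!/bin/sh
# entrypoint.sh (hypothetical): runs after volumes are mounted
# copy the build output stored in the image into the bind-mounted directory
cp -r /build/dist/. /output/
# then hand control to the container's CMD (e.g. "ng serve")
exec "$@"
In the Dockerfile you would set ENTRYPOINT ["./entrypoint.sh"] and keep the existing CMD; note this only works if the build output lives at a path the bind mount does not hide.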
I want to build a multi container docker app with docker compose. My project structure looks like this:
docker-compose.yml
...
webapp/
...
Dockerfile
api/
...
Dockerfile
Currently, I am just trying to build and run the webapp via docker compose up with the correct build context. When building the webapp container directly via docker build, everything runs smoothly.
However, with my current specifications in the docker-compose.yml the line COPY . /webapp/ in webapp/Dockerfile (see below) copies the whole parent project to the container, i.e. the directory which contains the docker-compose.yml, and not just the webapp/ sub directory.
For some reason the line COPY requirements.txt /webapp/ works as expected.
What is the correct way of specifying the build context in docker compose? Why is the . in the Dockerfile interpreted as relative to the docker-compose.yml, while requirements.txt is resolved relative to the Dockerfile as expected? What am I missing?
Here are the contents of the docker-compose.yml:
version: "3.8"
services:
frontend:
container_name: "pc-frontend"
volumes:
- .:/webapp
env_file:
- ./webapp/.env
build:
context: ./webapp
ports:
- 5000:5000
and webapp/Dockerfile:
FROM python:3.9-slim
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# set working directory
WORKDIR /webapp
# copy dependencies
COPY requirements.txt /webapp/
# install dependencies
RUN pip install -r requirements.txt
# copy project
COPY . /webapp/ # does not work as intended
# add entrypoint to app
# ENTRYPOINT ["start-gunicorn.sh"]
CMD [ "ls", "-la" ] # for debugging
# expose port
EXPOSE 5000
The COPY directive is (probably) working the way you expect. But, you have volumes: that are overwriting the image content with something else. Delete the volumes: block.
The image build sequence is working exactly the way you expect. build: { context: ./webapp } uses the webapp subdirectory as the build context and sends it to the Docker daemon. When the Dockerfile says, for example, COPY requirements.txt ., the file comes out of this directory. If you, for example, docker-compose run frontend pip freeze, you should see the installed Python packages.
After the image is built, Compose starts a container, and at that point volumes: take effect. When you say volumes: ['.:/webapp'], here the . before the colon refers to the directory containing the docker-compose.yml file (and not the webapp subdirectory), and then it hides everything in the /webapp directory in the container. So you're replacing the image's /webapp (which had been built from the webapp subdirectory) with the current directory on the host (one directory higher).
You should usually be able to successfully combine an ordinary host-based development environment and a Docker deployment setup. Use a non-Docker Python virtual environment to build the application and run its unit tests, then use docker-compose up --build to run integration tests and the complete application. With a setup like this, you don't need to deal with the inconveniences of the Python runtime being "somewhere else" as you're developing, and you can safely remove the volumes: block.
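For illustration, a version of the service without the bind mount (a sketch derived from the compose file above) would look like:
services:
  frontend:
    container_name: "pc-frontend"
    env_file:
      - ./webapp/.env
    build:
      context: ./webapp
    ports:
      - 5000:5000
With this, the container runs exactly what was built into the image from the webapp subdirectory.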
I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". Well I've seen it before but what I'm curious about here is what's the purpose of it? I just can't get it.
Why don't we just build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
web:
  # <<<<
  # Why not building the image and using it here like "image: my/django"?
  # <<<<
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
You might say: "well, do as you wish!" Why I'm asking is because I think there might be some benefits that I'm not aware of.
PS:
I mostly use Docker for bringing up services (DNS, monitoring, etc.); I've never used it for development.
I have already read this: What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between building an image yourself with docker build and referencing it with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build will be more convenient than remembering all of the individual docker build commands you need to run.
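For instance, a build: block that spells out a custom Dockerfile path and build arguments (the service and argument names below are illustrative) replaces a long docker build command line:
services:
  app:
    image: my/app
    build:
      context: .
      dockerfile: path/Dockerfile
      args:
        SOME_ARG: some-value
With this in place, docker-compose build is roughly equivalent to docker build -f path/Dockerfile --build-arg SOME_ARG=some-value -t my/app .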
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.
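For context, a sketch of the two Dockerfiles in that layout (the image names match the commands above; the installed package is just an example):
# base/Dockerfile
FROM python:3
RUN pip install requests   # hypothetical dependencies shared by several apps

# app/Dockerfile
FROM my/base
WORKDIR /code
COPY . /code/
CMD ["python", "app.py"]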
I have a local project directory structure like:
config
    test
        docker-compose.yaml
        Dockerfile
        pip-requirements.txt
src
    app
        app.py
I'm trying to use Docker to spin up a container to run app.py. Simple in concept, but this has proven extraordinarily difficult. I'm keeping my Docker files in a separate sub-folder because I plan on having a large number of different environments, and I don't want to clutter my top-level folder with dozens of files like Dockerfile.1, Dockerfile.2, etc.
My docker-compose.yaml looks like:
version: '3'
services:
  worker:
    image: myname:mytag
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - ./src/app:/usr/local/myproject/src/app
My Dockerfile looks like:
FROM python:2.7
# Set the working directory.
WORKDIR /usr/local/myproject/src/app
# Copy the current directory contents into the container.
COPY src/app /usr/local/myproject/src/app
COPY pip-requirements.txt pip-requirements.txt
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
# Define environment variable
ENV PYTHONUNBUFFERED 1
CMD ["./app.py"]
If I run from the top-level directory of my project:
docker-compose -f config/test/docker-compose.yaml up
it succeeds in building the image, but fails when attempting to run the image with the error:
ERROR: for worker Cannot start service worker: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./app.py\": stat ./app.py: no such file or directory": unknown
If I inspect the image's filesystem with:
docker run --rm -it --entrypoint=/bin/bash myname:mytag
it correctly dumps me into /usr/local/myproject/src/app. However, this directory is empty, explaining the runtime error. Why is this empty? Shouldn't the COPY statement and volumes have populated the image with my application code?
For one, you're clobbering the data set by including the content during the build stage and then using docker-compose to overlay a directory on top of it. Let's first discuss the differences between the Dockerfile (image) and docker-compose (runtime).
Normally, you would use the COPY directive in the Dockerfile to copy part of your local directory into the image so that it is immutable. In most application deployments, this means we bundle the entire application into the directory and prepare it to run. It is therefore not dynamic (changes you make to the code afterwards are not visible in the container), but that is a gain in terms of security.
docker-compose is a runtime specification, meaning: "Once I have an image, I want to programmatically define how it runs". By defining a volume here, you're saying "I want the local directory (from the perspective of the compose file) ./src/app to be overlaid onto /usr/local/myproject/src/app".
Thus anything you built into the image doesn't really matter. You're adding another layer on top of the image which takes precedence over what was built into it.
It may also have something to do with you already specifying the WORKDIR and then using a ./ reference in the CMD. It would be worth trying it as just CMD ["app.py"].
What happens if you:
Build the image: docker build -t test .
Run the image manually: docker run --rm -it test
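As a sketch (adjust the -f path and build context to wherever your Dockerfile actually lives relative to where you run the command), that check could look like:
docker build -t test -f config/test/Dockerfile .
docker run --rm -it test ls -la /usr/local/myproject/src/app
If the directory is empty here too, the problem is in the image build; if it is populated, the problem is the volume overlay described above.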
What I am trying to do is use a Docker image I found online timwiconsulting/ionic-v1.3, and run my ionic project within Docker. I want to mount my ionic project in Docker and forward my ports so I can run the emulator in a browser.
I want to ask how do I create a docker-compose.yml file for an existing container?
I found a Docker image timwiconsulting/ionic-v1.3 that I want to run, which has the correct version of the tools that I want.
Now I want to create a compose file to forward the ports to my computer, and mount the project files. I create this docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    ports:
      - "8100:8100"
      - "35729:35729"
    volumes:
      - /Users/leetcat/project/:/project
But every time I try to do docker-compose up I get the error:
~/user: docker-compose up
Building web
Step 1/6 : FROM timwiconsulting:ionic-v1.3
ERROR: Service 'web' failed to build: pull access denied for timwiconsulting, repository does not exist or may require 'docker login
I am doing something wrong. I think I want to be creating a docker-compose.yml file for the container timwiconsulting/ionic-v1.3. Feel free to tell me I am totally off the mark with what docker is.
Here is my Dockerfile:
# Use an official Python runtime as a parent image
FROM timwiconsulting:ionic-v1.3
# Set the working directory to /app
WORKDIR /project
# Copy the current directory contents into the container at /app
ADD . /project
# Install any needed packages specified in requirements.txt
# RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 8100
EXPOSE 35729
# Define environment variable
ENV NAME World
# Run app.py when the container launches
# CMD ["python", "app.py"]
# docker exec -it <container_hash> /bin/bash/