Docker unable to COPY with relative path in WSL2

I have a project with a docker-compose.yml set up to run it locally for development. It runs great on Linux (natively) and macOS (using Docker Desktop). I am just finishing getting it running on Windows using WSL2 and Docker Desktop 2.3.0.3 (which has proper WSL2 support). The problem is that my Dockerfile does a COPY ./from /to command and Docker doesn't seem to be able to find the file. I have set up a minimal test to recreate the problem.
I have the project set up with this directory structure:
docker/
    nginx/
        Dockerfile
        nginx.conf
docker-compose.yml
The nginx Dockerfile contains:
FROM nginx:1.17.9-alpine
# Add nginx configs
COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
# Copy source code for things like static assets
COPY . /application
# Expose HTTP/HTTPS ports
EXPOSE 80 443
And the docker-compose.yml file contains:
version: "3.1"
services:
nginx:
build: docker/nginx
working_dir: /application
volumes:
- .:/application
ports:
- "80:80"
This is pretty basic - it's just copying the nginx.conf configuration file to /etc/nginx/nginx.conf inside the container.
When I run docker-compose up for this project, from the project root, inside WSL, I receive the following error:
Building nginx
Step 1/4 : FROM nginx:1.17.9-alpine
---> 377c0837328f
Step 2/4 : COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
ERROR: Service 'nginx' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder502655363/docker/nginx/nginx.conf: no such file or directory
This is not what I expect (and not what happens on Linux/macOS systems), but I assume it's failing because of the relative path specified in the Dockerfile? Is this a Docker Desktop bug specific to WSL, and does anybody know a workaround in the meantime? Thank you!

The paths in a Dockerfile COPY instruction are resolved relative to the build context, not the directory you run docker-compose from. Since the context here is docker/nginx, the COPY source should be just nginx.conf.
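For illustration, a sketch of the two ways to apply this; the second is needed if COPY . /application is really meant to copy the whole project rather than just the docker/nginx directory:
# Option 1: keep the context as docker/nginx and trim the COPY source
COPY nginx.conf /etc/nginx/nginx.conf

# Option 2: widen the build context to the project root in docker-compose.yml,
# so the Dockerfile's original COPY paths resolve as written
# (the dockerfile: path is relative to the context)
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/Dockerfile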

Related

How to share prepared files on build stage between containers with docker compose

I have 2 services: nginx and web
When I build the web image, the frontend is built via the command npm install && npm run build.
But I need the built files in both containers: in web and in nginx.
How do I share files between the containers (images)? I can't simply use volumes, because they are only mounted at runtime.
The Dockerfile COPY directive can copy files out of an arbitrary image. While it's most commonly used between the stages of a multi-stage build, it works with any image, even one you built yourself.
Say your docker-compose.yml file looks like:
version: '3.8'
services:
  web:
    build: .
    image: my/web
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports: ['8000:80']
Note that we've explicitly given the web image a name; also notice that there are no volumes: in this setup.
In the proxy image, we can then copy files out of that image:
# Dockerfile.nginx
FROM nginx
COPY --from=my/web /app/static /usr/share/nginx/html
The only complication here is that Compose doesn't know that one image is built off of the other. You'll probably have to manually tell it to rebuild the application image so that it gets built before the proxy image.
docker-compose build web   # build the application image first
docker-compose build       # now the proxy image can COPY --from it
docker-compose up -d
You can use this in a more production-oriented setup to deploy the application without having the code directly available. Create a base docker-compose.yml that names an image: for both containers, then add a separate docker-compose.override.yml file that contains the build: blocks. After running docker-compose build twice as above, you can docker-compose push the built images and then run this container stack on your production system, pulling the images from the registry, with no local copy of the source tree and no volumes.
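A minimal sketch of that split, assuming a hypothetical registry name registry.example.com:
# docker-compose.yml (deployed everywhere; names images only)
version: '3.8'
services:
  web:
    image: registry.example.com/my/web
  nginx:
    image: registry.example.com/my/nginx
    ports: ['8000:80']

# docker-compose.override.yml (present only on the build machine)
version: '3.8'
services:
  web:
    build: .
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
Compose reads docker-compose.override.yml automatically when it sits next to docker-compose.yml, so the build machine sees the merged file while production only needs the base file.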

Docker container purely for frontend files

My web-application consists of a vue frontend (purely client-side), a .NET backend and a postgres db. For hosting I'm using docker and docker-compose (my first time).
The setup consists of 4 containers:
- postgres db
- .net backend
- vue frontend (not running, just the built files)
- nginx instance
The nginx container serves as a reverse proxy for my backend and also serves the static files for the frontend. I'm using a single container for both since I'm planning to host on a Raspberry Pi with limited resources, and I also wanted to avoid coupling Vue and nginx.
To achieve this, I mount a named volume frontend-volume into the nginx container to read the frontend files from; the same volume is first mounted in the frontend image over the static files it built. I have copied (hopefully all) the relevant parts of the docker-compose file and the frontend Dockerfile below. The full files are on GitHub:
docker-compose.yml
frontend/Dockerfile
Now my setup works fine initially, but when I update some frontend code, the changes never appear in the container, since the volume containing the frontend files already exists and holds data (my assumption). I've tried docker-compose up --build and docker-compose up --build --force-recreate. Building manually with docker-compose build --no-cache frontend and then docker-compose up --force-recreate doesn't work either.
I had hoped the old files would just be overwritten, but apparently that's not the case. The only way I found to get the frontend to update correctly is to delete the volumes with docker-compose down -v and then run the up command again. Since I also have a volume for my database, this is unfortunately not a feasible solution.
My goal was a setup that lets me do a git pull on the Raspberry Pi followed by docker-compose up --build to bring all the containers up to date while retaining the volumes containing the database data. But that goal itself might be wrong; I just want something comparable.
So my question: How can I create a file-only container for the frontend without having my files "frozen"?
Alternatively: what's the correct way of doing this (is it just wrong on every level)?
Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
COPY --from=build-stage /app/dist /app
VOLUME [ "/app" ]
docker-compose.yml:
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    ports:
      - "5001:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - frontend-volume:/app:ro
  frontend:
    container_name: frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - frontend-volume:/app
volumes:
  frontend-volume:
I also tried this Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
VOLUME /app
# RUN rm -R /app/* uncommenting this doesn't work either, it fails with 'rm: can't remove '/app/*': No such file or directory'
COPY --from=build-stage /app/dist /app
A container, first and foremost, wraps a process; a "file-only container" doesn't really make sense as a concept.
Once you compile your Vue application, as far as the Nginx process is concerned, it's just a bunch of files to be served. You can compile these into the Nginx image. A multi-stage build would be a very common approach to this. I wouldn't really consider this "coupling" different parts of the application together; you have one step that uses one set of tools to build the application, and a second step that serves it as static files.
# frontend/Dockerfile
# First stage: build the Vue app. (Probably exactly what you have now.)
FROM node:14 as build-stage
WORKDIR /app
...
RUN npm run build
# Final stage: build an image that can serve the application.
# (Not just a bunch of files, an actual server.)
FROM nginx
COPY --from=build-stage /app/dist /usr/share/nginx/html
# (The base image provides a correct CMD already)
Then in your docker-compose.yml file, there isn't a separate container for the built files; they are already included in the image.
version: '3.8'
services:
  nginx:
    build: ./frontend
    restart: always
    ports:
      - "5001:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      # no volumes: entry for the code; it's built into the image
# no separate frontend container
As a general rule, you shouldn't put your code or other outputs from your build process in volumes. As you already note in the question, Docker will only copy content into a named volume the very first time a container runs, so using a volume here causes any updates to the application to be ignored (or to static files, or your node_modules directory, or ...). This approach also doesn't work in other container environments like Kubernetes, where getting a volume that can be shared between containers is actually a little tricky, and where the container system won't automatically copy anything into a volume for you.
First and foremost, you should know that a container should run a single main process. If saving resources is on your mind, consider that running two kinds of application in one container would require a special base image that is hard to maintain, both feature- and security-wise; a more general-purpose image may well end up consuming more resources than two small, concise, tailor-made images.
As for not being tied to nginx for your frontend: the beauty of containers is that you don't have to install different pieces of software, or different versions of them, directly on your machine. Switching from node 14 to node 16, for example, is as easy as changing the build-stage base image, so I wouldn't worry about it, especially since there are plenty of guides if you ever want to move away from nginx and need a production Dockerfile in a pinch.
My advice (because I got a bit confused by your setup) is to build your frontend image in two stages: first, the build stage as you've done; then, in the production stage, copy the static files produced by the build stage into the appropriate nginx html folder (which I believe is /usr/share/nginx/html), copy nginx.conf to its location as well, and configure nginx to proxy requests starting with /api to the backend URL.
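A hedged sketch of that combined image; it assumes nginx.conf has been moved into the frontend build context, and the backend hostname backend and port 5000 are made-up placeholders:
# frontend/Dockerfile
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
# static files from the build stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
# site config; proxies /api to the backend
COPY nginx.conf /etc/nginx/conf.d/default.conf

# nginx.conf (assumed contents)
server {
    listen 80;
    root /usr/share/nginx/html;
    location /api/ {
        proxy_pass http://backend:5000/;
    }
}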
On the other hand, if you want to iterate quickly right now with locally mounted volumes, you can skip the build stage, run its commands on your local machine, and bind-mount the resulting build files into the nginx html folder (again /usr/share/nginx/html), along with the nginx configuration file, at run time.
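A sketch of that run-time-mount variant; the ./frontend/dist output path is an assumption:
services:
  nginx:
    image: nginx:latest
    ports:
      - "5001:80"
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro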
Running like this lets you debug quickly without messing around with stages and configuration; when you're finished, switch back to the better option, the full multi-stage pipeline, which will "freeze" the files into the image.

Mount folder to docker container via dockerfile or docker-compose.yml file?

I need to edit the nginx.conf file in the /etc/nginx/ folder of a service from within a docker container. Is there a way to do this through a Dockerfile or a docker-compose.yml file? All the solutions I have come across only mention using the docker run command.
There are multiple ways. I assume you want your docker container to have specific files in place while running, right? Then I would recommend a COPY in the Dockerfile, like this:
COPY nginx.conf /etc/nginx/
I would suggest the COPY approach, because the file then lives inside the image itself.
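For context, a minimal sketch of where that line sits; the nginx:latest base image is an assumption based on the question:
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf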
Or you can mount it via docker-compose, like this (when bind-mounting a single file, the target must be the full file path, not a directory):
services:
  frontend:
    build: ./nginx
    container_name: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf

Dockerized nginx image fetches the config file of an entirely different app

I'm using Docker for two apps I'm currently working on, each in a different folder, sharing the same parent:
projects/
    app_one/
        docker-compose.yml
        config/
            .Dockerfile-nginx
            nginx.conf
    app_two/
        docker-compose.yml
        nginx/
            .Dockerfile-nginx
            nginx.conf
I usually start working on app_one:
projects/app_one$ sudo docker-compose build && docker-compose up
then shut it off (projects/app_one$ docker-compose down) and start working on app_two: projects/app_two$ sudo docker-compose build && docker-compose up.
When I do this, the nginx container of app_two still has the nginx.conf file of app_one. I found this out when I checked the dockerized nginx /etc/nginx/ directory, because my django app refused to load app_two's static files.
Here's the nginx part of my docker-compose of my app_two:
nginx:
  build:
    context: ./nginx
    dockerfile: .Dockerfile-nginx
And here's its Dockerfile:
FROM nginx:latest
COPY ./nginx.conf /etc/nginx/conf.d
Can someone tell me what's causing this behavior that, in my opinion, defies the very purpose of Docker?
update
I renamed app_one's nginx.conf, but this has no effect. app_two still gets the "old" nginx.conf of app_one.
I was finally able to find out the cause of this issue. It's in my app_one's docker-compose.yml:
nginx:
  image: nginx:latest
  build:
    context: ./config
    dockerfile: Dockerfile-nginx
It turns out that specifying both the image and build directives makes Docker build the image and then tag it with the name given under image (here, nginx:latest).
In this case, app_two's FROM nginx:latest was therefore basing its image on the one built by app_one instead of the official image. Removing the image directive from app_one's docker-compose file fixed the issue.
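A sketch of the corrected app_one service: either drop the image: line entirely, or tag the build with a unique name so it can never shadow the official nginx:latest (the name app_one/nginx is a made-up example):
nginx:
  image: app_one/nginx    # unique tag; or omit image: altogether
  build:
    context: ./config
    dockerfile: Dockerfile-nginx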

Can't copy project folder on Windows 10 to docker container

I'm learning about Docker and trying to bring up a container using PHP, Apache and the Lumen framework. When I execute the command to build the container, it succeeds.
The problem is that when I open http://localhost:8080, the page shows me a 403 - Forbidden from Apache. I accessed the container and looked in the folder /srv/app/, and there are no files. I think the problem is the mapping of the project root folder on the Windows host machine.
I'm using Windows 10.
Can anyone help me?
My Dockerfile:
FROM php:7.2-apache
LABEL maintainer="rIckSanchez"
COPY docker/php/php.ini /usr/local/etc/php/
COPY . /srv/app
COPY docker/apache/vhost.conf /etc/apache2/site-available/000-default.conf
My docker-compose file
version: '3'
services:
  phpinfo:
    build: .
    ports:
      - "8080:80"
