Docker is not copying files to a shared volume during build

I would like to have the files created during the build phase stored on my local machine.
I have this Dockerfile
FROM node:17-alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
RUN npm i -g @angular/cli
COPY . .
RUN ng build foo --prod
RUN touch test.txt # just for testing
CMD ["ng", "serve"] # just to keep the container running
I also created a shared volume via docker compose
services:
  client:
    build:
      dockerfile: Dockerfile.prod
      context: ./foo
    volumes:
      - /app/node_modules
      - ./foo:/app
If I attach a shell to the running container and run touch test.txt, the file is created on my local machine.
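For reference, "attaching a shell" here means something like the following, where client is the service name from the compose file above:
docker compose exec client sh
/app # touch test.txt   # appears immediately in ./foo on the host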
I can't understand why the files are not created during the build phase...
If I use a multi-stage Dockerfile, the dist folder is created in the container (just by adding the stage below to the Dockerfile), but I still can't see it on my local machine:
FROM nginx
EXPOSE 80
COPY --from=builder /app/dist/foo /usr/share/nginx/html

I can't understand why the files are not created during the build phase...
That's because the build phase doesn't involve volume mounting.
Mounting volumes only occurs when creating containers, not when building images. If you map a volume to an existing file or directory, Docker "overrides" the image's path, much like a traditional Linux mount. This means that before the container is created, your image has everything from /app/* pre-packaged, and that's why you're able to copy the contents in the multi-stage build.
However, since you defined a volume with the - ./foo:/app config in your docker-compose file, the container won't have those files anymore; instead, the /app folder will have the current contents of your ./foo directory.
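To see the shadowing in action, you could run a quick experiment like this (the client-img tag is invented for the demo; the paths match the compose file above):
# Build the image from the question's Dockerfile
docker build -t client-img -f foo/Dockerfile.prod foo
# Without a mount, /app contains the files baked in at build time, including test.txt
docker run --rm client-img ls /app
# With the bind mount, /app shows the host's ./foo contents instead
docker run --rm -v "$PWD/foo:/app" client-img ls /app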
If you wish to copy the contents of the image to a mounted volume, you'll have to do it in the ENTRYPOINT, as it runs upon container instantiation, after the volumes are mounted.
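A minimal sketch of that idea (the entrypoint.sh name and the /output mount point are invented here; note the mounted volume must not shadow the directory you copy from, so this assumes a compose volume like ./dist:/output instead of one covering /app):
#!/bin/sh
# entrypoint.sh: copy the build output baked into the image onto the volume
# mounted at /output, then hand control to the container's main command (CMD)
cp -r /app/dist/foo/. /output/
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["ng", "serve"]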

Related

Dockerfile for angular development not updating node_modules

I'm using the following Dockerfile for development of an Angular project:
FROM node:18-alpine
WORKDIR /code
COPY package*.json /code/
RUN npm ci --quiet
It gets started with docker compose. My code folder is mounted as a volume so the development server inside the container detects changes when editing and keeps live updates going:
version: "3"
services:
  ui:
    build: ./PathOnHostWithProjectRepo
    command: sh -c "npm start"
    ports:
      - 4200:4200
    volumes:
      - ./PathOnHostWithProjectRepo:/code
      - node_modules:/code/node_modules
volumes:
  node_modules:
node_modules gets created when the image is built and, to my understanding, would only update if my package.json changed. However, today I updated package.json with a new dependency and it is not being installed inside the volume. I have tried everything I can think of: docker compose down, docker system prune -a -f, and rebuilding. Every time the container starts there is an error that it cannot find the newly added dependency. If I step into the container and inspect the node_modules folder, the library isn't there. It is present on my host machine if I run npm install locally without Docker, so I know the package and imports must be correct.
With this setup your node_modules will never be updated: Docker completely ignores any changes in your package.json file, because you've told it that this directory contains user data that must not be modified.
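A named volume is populated from the image only the first time it is created; after that, its existing contents win. That also explains why the cleanup attempts above didn't help: docker system prune leaves volumes alone unless given --volumes. One way to force a reseed (a sketch, assuming the compose file above):
docker compose down -v   # removes the containers AND the node_modules named volume
docker compose build     # re-runs npm ci with the updated package.json
docker compose up        # the recreated volume is seeded from the new image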
For the setup you show you don't need Docker at all. It's straightforward to install Node, and OS package managers like Debian/Ubuntu APT or macOS Homebrew generally have a prepackaged version. If you use Node directly then you won't have problems like this; everything will work normally.
If you must use Docker here, the most straightforward thing to do is to make sure all of your application code is in a subdirectory; then you can mount only the subdirectory containing the code and leave the image's node_modules directory intact.
$ ls -F
Dockerfile
docker-compose.yml
node_modules/
package.json
package-lock.json
src/
# Dockerfile
FROM node:lts
WORKDIR /code
COPY package*.json ./
RUN npm ci
COPY src/ ./src/
# RUN npm build
CMD ["npm", "start"]
# docker-compose.yml
version: '3.8'
services:
  ui:
    build: .
    ports:
      - '4200:4200'
    volumes:
      - ./src:/code/src
Mounting only the src subdirectory avoids the trouble of storing node_modules in a named volume (or an anonymous one). If you change your package.json file you will need to re-run docker-compose build, but since you're using the library tree directly from the image, it will in fact get updated.
If you're going to deploy this image somewhere, remember to delete the volumes: block during your local integration testing so that you're actually running the image you're going to deploy, and not a hybrid of an image and your potentially-modified local code.
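One convenient way to manage that (a sketch; docker-compose.override.yml is standard Compose behavior, though it is not part of the original answer) is to keep the development-only mount in an override file that docker-compose merges in automatically, and to skip it when testing the deployable image:
# docker-compose.override.yml -- picked up automatically by docker-compose up
version: '3.8'
services:
  ui:
    volumes:
      - ./src:/code/src
# For integration testing, use only the base file:
#   docker-compose -f docker-compose.yml up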

Docker-compose volumes sharing issue with /app folder

I'm trying to build a web application based on Flask and Vue.js, using docker containers.
I use volume sharing in docker-compose and I'm facing an issue with the container structure.
I'd like to share the application folder from the host with the /app container folder. To do so the docker-compose is set up as
volumes:
  - type: bind
    source: ./
    target: /app
Inspecting the container shows that the data from the host is placed inside the folder /app/app and not inside /app as expected. The working directory is set inside the docker container:
FROM continuumio/miniconda3:latest
WORKDIR /app
COPY dependency.yml .
RUN conda env create -f dependency.yml
COPY setup.py .
RUN pip install -e .
In an attempt to understand the relative/absolute path handling, I tried changing the target volume to /data in the docker-compose file. In this case the application files are installed in /app and the host files are copied to the /data folder, as expected.
The question is: why, when I use the absolute /app path as the target in the container, does the system treat it as relative to the WORKDIR, and why does this happen only when the WORKDIR has the same name as the target folder?

cannot find modules on docker even if I added a node_modules as volume when docker run

I'm trying to run docker but it can't find modules
I already have a docker image after the build step, and here's some information.
docker run --rm -v $(pwd):/code -v /code/node_modules -w /code $dockerBuilderImage npm run dist-admin-web
package.json has a dist-admin-web script that runs rm -rf bin && tsc.
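For reference, the relevant part of package.json would look something like this (reconstructed from the description above):
{
  "scripts": {
    "dist-admin-web": "rm -rf bin && tsc"
  }
}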
My docker file looks like
FROM node:12.18.2
COPY package.json ./
COPY package-lock.json ./
RUN npm install
(... some global installations)
As I said, when I run the docker run command above, it doesn't work! FYI, I have a docker-compose setup for local development and it works with that image.
My docker-compose is the following (I've deleted unnecessary information like env vars):
webpack_dev_server:
  build:
    context: ./
    dockerfile: Dockerfile
  image: nodewebpack_ts:12.18.2-ts3.5.2
  ports:
    - "3000:3000"
  volumes:
    - ./:/ROOT/
    - /ROOT/node_modules
  working_dir: /ROOT
As far as I know, I have to add node_modules to the volumes because of this. That's why the docker-compose setup works.
The above code works just fine; the issue is that you forgot to set WORKDIR in your Dockerfile:
WORKDIR /code
Without it, you are copying those package.json files into the root directory, and node_modules will also be installed there (once npm install runs). Then you change the working directory when you run the container (-w /code), and since you are using volumes, you see some strange behavior (things are not exactly where they should be).
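Put together, a corrected Dockerfile might look like this (a minimal sketch; the global installations from the original are elided):
FROM node:12.18.2
# Install dependencies in the same directory the container will use at runtime
WORKDIR /code
COPY package.json ./
COPY package-lock.json ./
RUN npm install
(... some global installations)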
Also, while the trick with the additional volume works (node_modules inside of container are not obscured by the mount) there are drawbacks to this approach.
You are creating a new unnamed volume each time you run the container. If you don't take care of the old volumes, your system will soon be filled with a lot of dangling volumes.
You are preventing node_modules from syncing, which is not exactly convenient during development. If you try to install additional packages once the container is running, you will need to stop the container, build a new image, and run it again, because the container is using the old node_modules that were created at build time.
I guess this is a matter of taste, but I prefer to sync local node_modules with the container via a bind mount to avoid the above-mentioned problems.
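If you do keep the anonymous-volume approach, the leftover volumes can at least be cleaned up periodically (both are standard Docker CLI commands):
docker volume ls -qf dangling=true   # list volumes no longer used by any container
docker volume prune                  # remove them (prompts for confirmation)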

docker-compose not mounting folder for test results

After running unit tests as part of a docker-compose build, a file created in the container is not showing up on my local filesystem.
I have the following Dockerfile:
# IDM.Test/Dockerfile
FROM microsoft/aspnetcore-build:2.0
WORKDIR /src
# Variables
ENV RESTORE_POLICY --no-restore
ENV IGNORE_WARNINGS -nowarn:msb3202,nu1503
# Restore
COPY IDM.sln ./
# Copying and restoring other projects...
COPY IDM.Test/IDM.Test.csproj IDM.Test/
RUN dotnet restore IDM.Test/IDM.Test.csproj $IGNORE_WARNINGS
# Copy
COPY . .
# Test
RUN dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml"
RUN ls -alR
When running RUN ls -alR I can see that the file /src/IDM.Test/TestResults/test-results.xml is produced within the container. So far so good.
I'm using docker-compose -f docker-compose.test.yml build to start building.
The docker-compose looks like this:
version: '3'
services:
idm.webapi:
image: idmwebapi
build:
context: .
dockerfile: IDM.Test/Dockerfile
volumes:
- ./IDM.Test/TestResults:/IDM.Test/TestResults/
I have created the folder IDM.Test/TestResults locally, but nothing appears after successfully running the docker-compose build command.
Any clues?
Maybe with this explanation we can solve it. Let me state some obvious things step by step, to avoid confusion. Container creation has two steps:
docker build / docker-compose build -> Creates image
docker run / docker compose up / docker-compose run -> Creates container
Volumes are created in the SECOND step (container creation), while your command dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml" is executed in the first one (image creation).
If you create a folder inside the container at the same path where you've mounted a volume, the data in this new folder will only be available locally inside the container.
In short, my recommendations can be summarized in the following points:
Check that the destination folder of the mounted volume is not created during the build phase, i.e. that there is no RUN mkdir /IDM.Test/TestResults/ in your Dockerfile.
Another small recommendation, not mandatory: I like to define volumes with absolute paths in the docker-compose file.
Don't execute commands in the Dockerfile with RUN if they produce data you want outside the container; specify them as ENTRYPOINT or CMD instead.
In a Dockerfile, ENTRYPOINT or CMD (or command: in docker-compose) specify commands executed after building, when the container starts.
Try with this Dockerfile:
# IDM.Test/Dockerfile
FROM microsoft/aspnetcore-build:2.0
WORKDIR /src
# Variables
ENV RESTORE_POLICY --no-restore
ENV IGNORE_WARNINGS -nowarn:msb3202,nu1503
# Restore
COPY IDM.sln ./
# Copying and restoring other projects...
COPY IDM.Test/IDM.Test.csproj IDM.Test/
RUN dotnet restore IDM.Test/IDM.Test.csproj $IGNORE_WARNINGS
# Copy
COPY . .
# Test
CMD dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml"
Or this docker-compose:
version: '3'
services:
  idm.webapi:
    image: idmwebapi
    build:
      context: .
      dockerfile: IDM.Test/Dockerfile
    volumes:
      - ./IDM.Test/TestResults:/IDM.Test/TestResults/
    command: >
      dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml"
After the container is created, you can check your generated files with ls.
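With either variant, the flow would look something like this (file names taken from the question):
docker-compose -f docker-compose.test.yml build   # image creation: restore and copy only
docker-compose -f docker-compose.test.yml up      # container creation: volume is mounted, tests run
ls ./IDM.Test/TestResults                         # test-results.xml should now be on the host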

Docker compose relative folder from host volume [duplicate]

This question already has answers here:
Docker-compose volume mount before run
(2 answers)
Closed 5 years ago.
I am trying to mount a folder from the host as a volume in the container. I am wondering at what step this mounting process is done; I would expect it to be during the build phase, but it seems that my folder is not mounted.
The folder structure:
app/
  requirements.txt
docker/
  web/
    Dockerfile
docker-compose.yml
docker-compose.yml contains the following:
version: '2'
services:
  web:
    build: ./docker/web
    volumes:
      - './app:/myapp'
The Dockerfile of the container:
FROM ubuntu:latest
WORKDIR /myapp
RUN ls
I am mounting the app directory from the host into /myapp inside the container; the build process sets the working directory and runs ls to see the contents, and I am expecting my requirements.txt file to be there.
What am I doing wrong?
docker-compose v1.16.1, docker v1.16.1. I am using Docker for Windows.
Your requirements.txt file isn't copied into the image at build time because it's not part of the build context, so it isn't available when the Dockerfile runs.
Volumes are mounted at container creation time, not at build time. If you want your file to be available inside your Dockerfile (e.g. at build time) you need to include it in the context and COPY it to make it available.
From the Docker documentation here: https://docs.docker.com/engine/reference/builder/
The first thing a build process does is send the entire context (recursively) to the daemon
There are two issues:
You aren't sending your requirements file with your build context, because your Dockerfile is in a separate directory structure, so requirements.txt is not available at build time.
You aren't copying the file into the image before you run the ls command (COPY ./app/requirements.txt /myapp/).
If you can change your directory structure to make requirements.txt available at build time, and add a COPY command to your Dockerfile before you run your ls, you should see the behavior you expect during the build.
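A sketch of one such restructuring (this assumes you move the build context to the project root so app/ is inside it; the context/dockerfile split is standard Compose syntax):
# docker-compose.yml
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    volumes:
      - './app:/myapp'

# docker/web/Dockerfile
FROM ubuntu:latest
WORKDIR /myapp
COPY app/requirements.txt /myapp/
RUN ls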
