Module not found after attaching volume in Docker

This is my Dockerfile:
FROM node:15
# set the working directory to /app
WORKDIR /app
# copy package.json to /app folder
COPY package.json .
RUN npm install
# Copy all files from the current directory into the container's working directory (/app)
COPY . ./
EXPOSE 3000
CMD ["node","index.js"]
I am using this command in PowerShell to run the image in a container:
docker run -v ${pwd}:/app -p 3000:3000 -d --name node-app node-app-image
${pwd} returns the current directory.
But as soon as I hit enter, somehow node_modules isn't available in the container and I get an "express not found" error in the log.
[Screenshot of the Docker log: https://i.stack.imgur.com/4Fifu.png]
I can't verify whether node_modules is missing because the container won't stay up long enough to run the docker exec -it command.
I was following a freeCodeCamp tutorial and it works on the instructor's PC. I've also tried this command in Command Prompt, replacing ${pwd} with %cd%.
This used to work fine before I added the volume flag in the command.

Your problem is that you build the image (which runs npm install into /app) and then bind-mount a host folder over /app at run time; the mount hides everything that was installed into /app during the build, including node_modules.
Keep the project laid out like this and build and run it from that folder, without the bind mount:
MyFolder/
|_ all-required-files
|_ all-required-folders
|_ Dockerfile
docker build -t node-app-image .
docker run -p 3000:3000 -d --name node-app node-app-image
Simplified Dockerfile
FROM node:15
# set the working directory to /app
WORKDIR /app
# Copy all files from the build context into /app
COPY . ./
RUN npm install
EXPOSE 3000
CMD ["node","index.js"]

Related

Docker image is not rebuilt automatically on file change

I am running Docker containers with WSL2. When I make changes to my files in the /client directory, the changes are not reflected and I have to do docker compose stop client, docker compose build client and docker compose start client. If I cat a file after changing something, I can see the change.
Here is my Dockerfile:
FROM node:16.17.0-alpine
RUN mkdir -p /client/node_modules
RUN chown -R node:node /client/node_modules
RUN chown -R node:node /root
WORKDIR /client
# Copy Files
COPY . .
# Install Dependencies
COPY package.json ./
RUN npm install --force
USER root
I also have a /server directory with the following Dockerfile, and the automatic image rebuild happens on file change there just fine:
FROM node:16.17.0-alpine
RUN mkdir -p /server/node_modules
RUN chown -R node:node /server/node_modules
WORKDIR /server
COPY . .
# Install Dependencies
COPY package.json ./
RUN npm install --force --verbose
USER root
Any help is appreciated.
Solved by adding the following to my docker-compose.yml:
environment:
  WATCHPACK_POLLING: "true"
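In context, the relevant part of the compose file looks roughly like this (the client service name comes from the question; the build path is an assumed placeholder):
services:
  client:
    build: ./client
    environment:
      # make webpack's watcher poll for changes, which works across WSL2 bind mounts
      WATCHPACK_POLLING: "true"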
Docker does not take care of hot reload by itself.
You should look into the hot-reload documentation of the tools you are building with.

Where is the new folder created when using a Dockerfile

Please help me understand where the new folder is created. When I docker exec -it <mycontainer> bash into the container, the created folder is not there.
Dockerfile:
FROM python:3.7-alpine
WORKDIR /app
RUN pip install -r requirements.txt
RUN mkdir -p /new_folder
COPY . .
CMD ["gunicorn", "-w 4", "main:app"]
I also tried copying the local stuff before creating a new folder, still can't see the folder created in the container.
Your working directory is /app and you are copying files from your current directory to /app. When your container is running, do this:
docker exec -it <container_id> pwd
You'll see /app in the output.
But you are creating new_folder at the filesystem root, so you can't see it inside /app.
To list the root directory from inside the container you can run:
docker exec -it <container_id> /bin/sh -c "ls -lah .."
Also, my Dockerfile is this:
FROM python:3.7-alpine
WORKDIR /app
RUN mkdir -p /new_folder
COPY . .
CMD ["python" , "-c" , "import time; time.sleep(10000)"]
Your folder is created at the filesystem root: /.
Try docker exec -it <mycontainer> bash and then, in the container, run ls /.
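If the intent was to have the folder inside /app, a minimal sketch based on the question's Dockerfile is to create it under the working directory instead of at the root:
FROM python:3.7-alpine
WORKDIR /app
COPY . .
# RUN executes in the current WORKDIR, so this creates /app/new_folder
RUN mkdir -p new_folder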

Dockerfile COPY does not copy all the files

I do
git clone https://github.com/openzipkin/zipkin.git
cd zipkin
Then create a Dockerfile as below:
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY ./ .
ENTRYPOINT ["sleep", "1000000"]
then
docker build -t abc .
docker run abc
I then run docker exec -it CONTAINER_ID bash.
pwd returns /app, which is expected,
but when I ls I see that the files are not copied;
only the directories and the XML file are copied into the /app directory.
What is the reason? How do I fix it?
Also I tried
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY . /app
ENTRYPOINT ["sleep", "1000000"]
That repository contains a .dockerignore file which excludes everything except an explicit list of paths it selects.
That repository's docker directory also contains several build scripts for the official images, and you may find it easier to start your custom image FROM openzipkin/zipkin rather than trying to reinvent it.
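For reference, an allow-list style .dockerignore follows this pattern (the paths here are hypothetical, not zipkin's actual entries):
# ignore everything by default
*
# then re-include only what the build needs
!pom.xml
!src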

Mount docker volume to host machine path

I need to access the container's test result files from the host. I know that I need to create a volume mapping between host and container, like below, but nothing gets written to the host.
docker run --rm -it -v <host_directory_path>:<container_path> imagename
Dockerfile:
FROM microsoft/dotnet:2.1-sdk AS builder
WORKDIR /app
COPY ./src/MyApplication.Program/MyApplication.Program.csproj ./src/MyApplication.Program/MyApplication.Program.csproj
COPY nuget.config ./
WORKDIR ./src/MyApplication.Program/
RUN dotnet restore
WORKDIR /app
COPY ./src ./src
WORKDIR ./src/MyApplication.Program/
RUN dotnet build MyApplication.Program.csproj -c Release
FROM builder as tester
WORKDIR /app
COPY ./test/MyApplication.UnitTests/MyApplication.UnitTests.csproj ./test/MyApplication.UnitTests/MyApplication.UnitTests.csproj
WORKDIR ./test/MyApplication.UnitTests/
RUN dotnet restore
WORKDIR /app
COPY ./test ./test
WORKDIR ./test/MyApplication.UnitTests/
RUN dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura
ENTRYPOINT ["dotnet", "reportgenerator", "-reports:coverage.cobertura.xml", "-targetdir:codecoveragereports", "-reportTypes:htmlInline"]
The command at the entry point is working correctly. It is writing the output to the MyApplication.UnitTests/codecoveragereports directory, but not to the host directory.
My docker run looks as follows:
docker run --rm -it -v /codecoveragereports:/app/test/MyApplication.UnitTests/codecoveragereports routethink.tests:latest
What could I be doing wrong?
Looks like a permission issue.
-v /codecoveragereports:/app/***/codecoveragereports is mounting a directory directly under the root /, which is dangerous and for which you may not have permission.
It's better to mount a local path, like -v $PWD/codecoveragereports:/app/***/codecoveragereports, where $PWD is an environment variable equal to the current working directory.
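Applied to the run command from the question, that would look something like this, with the host path created under the current directory (adjust as needed):
docker run --rm -it -v $PWD/codecoveragereports:/app/test/MyApplication.UnitTests/codecoveragereports routethink.tests:latest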

Will the commands in the Dockerfile run as follows?

docker build Dockerfile . // am I running it correctly?
1.) I have noted in the comments what each command will do; is that a correct reading of this Dockerfile?
2.) These commands will be used to make the image when I run docker build, so:
[ec2-user@ip-xx-xx-xx-xx ~]$ cd /project/p1
[ec2-user@ip-xx-xx-xx-xx p1]$ ls
Dockerfile a b c d
My Dockerfile consists of the following commands.
Dockerfile
node 8.1.0 //pulls the image from hub
RUN mkdir -p /etc/x/y //make directory in the host at path /etc/x/y
RUN mkdir /app //make directory in the host at path /app
COPY . /app //copy all the files that is
WORKDIR /app //cd /app; now the working directory will be /app for next commands i.e npm install.
RUN npm install
EXPOSE 3000 //what this will do?
Question 1: how to run docker build?
docker build Dockerfile . # am I running it correctly.
No. You run it with docker build . and Docker will automatically look for a file named Dockerfile in the current directory. Or you use docker build -f Path_to_the_docker_file/Dockerfile ., where you explicitly specify the path to the Dockerfile; the trailing . is still the build context.
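For example (my-image and the Dockerfile path are placeholders; the trailing . is the build context in both cases):
docker build -t my-image .
docker build -f path/to/Dockerfile -t my-image .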
Question 2: Fixing errors and clarifying commands
There are a few mistakes in the Dockerfile; check the edited comments:
# pulls the image from dockerhub : YES
# Needs to be preceded by FROM, with the version given as an image tag
FROM node:8.1.0
# all directories are made inside the docker image
# make directory in the image at path /etc/x/y : YES
RUN mkdir -p /etc/x/y
# make directory in the image at path /app : YES
RUN mkdir /app
COPY . /app # copy all the files that is : YES
WORKDIR /app # cd /app; now the working directory will be /app for next commands i.e npm install. : YES
RUN npm install
EXPOSE 3000 # what this will do? => documents that containers built from this image listen on port 3000; it does not publish the port by itself.
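To reach the application from the host you still publish the port at run time, for example (the image name is a placeholder):
docker run -p 3000:3000 my-node-image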
