I am trying to run webpack inside a Docker container for a Node app. I get the following error:
sh: 1: webpack: Permission denied
The Dockerfile works fine on a normal build.
FROM node
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3001
# This launches webpack, which fails.
CMD [ "npm", "start" ]
I had the same issue when migrating an existing project to Docker. I resolved it by not copying the entire project contents (the COPY . /usr/src/app in your Dockerfile) and instead copying only the files and directories actually required.
In my case, the unnecessary directories pulled in by copying the whole project were, among other things, node_modules, the build directory, and the entire .git directory.
I still don't know exactly why copying the entire directory doesn't work, but my guess is that the host's node_modules overwrites the one npm install just created inside the image, leaving .bin launchers with the wrong permissions or binaries built for the wrong platform. Either way, copying only what you need is better for image size anyway.
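A minimal sketch of that approach, with hypothetical src/ and public/ directories (substitute whatever your project actually needs):
COPY package.json /usr/src/app/
RUN npm install
COPY src/ /usr/src/app/src/
COPY public/ /usr/src/app/public/
COPY webpack.config.js /usr/src/app/
Alternatively, keep COPY . and exclude node_modules, build, and .git via a .dockerignore file next to the Dockerfile.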
This question has been asked before, yet after reviewing the answers I am still not able to apply the solution.
I am still new to Docker. After watching tutorials and following articles I was able to create a Dockerfile for an existing GitHub repository.
I started by using the nearest available image as a base, then added what I need.
From what I read, the problem is in the WORKDIR and CMD commands.
This is the error message:
python: can't open file 'save_model.py': [Errno 2] No such file or directory
This is my Dockerfile:
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY /home/pc/Desktop/yolo4_deep .
# command to run on container start
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
src/
    save_model.py
    object_tracker.py
    ...
requirements.txt
Dockerfile
I tried using the WORKDIR command to set the absolute path (WORKDIR /home/pc/Desktop/yolo4_Deep_sort_nojupitor); the result was the same error.
I see multiple issues in your Dockerfile.
COPY /home/pc/Desktop/yolo4_deep .
The COPY command copies files from your local machine into the container. The path on your local machine must be a path relative to your build context. The build context is the path you pass in when you run docker build .; in this case the . (the current directory) is the build context. Also, the local machine path can only reference files located under the build context: paths containing .. (parent directory) or starting at / (root directory) are not allowed.
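For example, assuming you build from the project root shown in the question, the source path is written relative to that directory (the image tag here is just a placeholder):
docker build -t yolo4-deep .
and in the Dockerfile:
COPY src/ .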
WORKDIR app
WORKDIR sets the path inside the container, not on your local machine. So WORKDIR /app means that all subsequent commands (RUN, CMD, ENTRYPOINT) will be executed from the /app directory.
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
As mentioned above, WORKDIR /app causes all operations to be executed from the /app directory, so ./app/save_model.py actually resolves to /app/app/save_model.py.
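So with the files copied into /app, the extra app/ prefix has to go. Note also that python runs a single script and passes any further paths to it as arguments, so starting both scripts would need a small wrapper. A sketch of the corrected pieces (directory names follow the structure shown in the question):
WORKDIR /app
COPY src/ .
CMD ["python", "./save_model.py"]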
Thanks for the help, everyone.
As I mentioned earlier, I'm a beginner in the Docker world. I solved the issue by editing the COPY command.
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /home/pc/Desktop/yolo4_deep
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
ENTRYPOINT ["./start.sh"]
I am trying to create a Docker image, but I am getting this error (Couldn't find a pages directory. Please create one under the project root) in the npm run build step. I do have that directory in my application root folder. On my local server it runs fine and creates the .next folder. My folder structure is, for example: app/pages/index.js
I don't know why it is failing in the docker build. Can you guys help me with this?
Below is my Dockerfile
FROM node:14-alpine
RUN mkdir -p /usr/src/next-website
WORKDIR /usr/src/next-website
COPY package*.json ./
RUN npm i
RUN npm run build
COPY . .
EXPOSE 94
CMD ["node", "server.js"]
Thanks in advance.
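For what it's worth, the likely cause is the instruction order: RUN npm run build comes before COPY . ., so when the build runs the image contains only package*.json and node_modules, and pages/ has not been copied in yet. A sketch with the copy moved before the build (otherwise unchanged):
FROM node:14-alpine
WORKDIR /usr/src/next-website
COPY package*.json ./
RUN npm i
COPY . .
RUN npm run build
EXPOSE 94
CMD ["node", "server.js"]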
I am working on creating a Docker image with the following Dockerfile:
FROM node:lts-alpine
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start"]
I am confused about the following lines:
# Bundle app source
COPY . .
What exactly is meant here by bundling? Copying everything? If that is the case, why is it copying the package.json file beforehand?
I was confused about the exact same thing.
The solution is the distinction between your application and its dependencies: Running npm install after copying package.json only installs the dependencies (creating the node_modules folder), but your application code is still not inside the container. That's what COPY . . does. I don't think the word "bundle" has any special meaning here, since it is just a normal file copy operation.
Separating those steps means the result of npm install (i.e. the image layer produced by that command) can be cached by Docker, so the install doesn't have to run again every time a part of the application code changes. This makes builds and deploys faster.
PS: When talking about making deploys faster, have a look at npm ci: https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable
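A minimal sketch of that pattern with npm ci (it requires a package-lock.json; the --only=production flag is per npm 6, newer npm versions spell it --omit=dev):
COPY package*.json ./
RUN npm ci --only=production
COPY . .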
To bundle your app's source code inside the Docker image, use the COPY instruction:
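COPY . .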
I have a simple web application that I would like to place in a Docker container. The Angular application lives in the frontend/ folder, which is within the application/ folder.
When the Dockerfile is in the application/ folder and reads as follows:
FROM node
ADD frontend/ frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
everything runs correctly.
However, when I move the Dockerfile into the frontend/ folder and change it to read
FROM node
ADD . frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
no files are copied and the project does not run.
How can I add every file and folder recursively in the current directory to my docker image?
The Dockerfile that ended up working was
FROM node
ADD . / frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
Shoutout to @Matt for the lead on . / ./, but I think the only reason that didn't work is that, for some reason, my application will only run when it is inside a directory, not in the root. This might have something to do with @VonC's observation that the node image doesn't have a WORKDIR.
First, try COPY just to test if the issue persists.
Second, verify that no files are copied by changing your CMD to an ls frontend.
I do not see a WORKDIR in node/7.5/Dockerfile, so frontend could be in /frontend: check ls /frontend too.
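A throwaway CMD for that check, assuming sh is available in the image (it is in the node image):
CMD ["sh", "-c", "ls frontend; ls /frontend"]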
I've got a node_modules folder which is 120 MB+, and I'm wondering if we can somehow push the node_modules folder only when it has changed.
This is what my docker file looks like at the moment:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
CMD export NODE_ENV=production
EXPOSE 80:7000
# EXPOSE 7000
CMD [ "npm", "start" ]
So what I want is to push the node_modules folder only when it has changed. I don't mind manually flagging that it has changed, whether by passing a flag and using an if statement or some other mechanism.
Use case:
I only made changes to my application code and didn't add any new packages.
I added some packages and require the node_modules folder to be pushed.
Edit:
So I tried the following Dockerfile, which borrows some logic from
http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
When I run docker build -t <name> . with the below Dockerfile and then gcloud docker -- push <url>, it still tries to push my whole directory to the registry?!
FROM node:6.2.0
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
# Create app directory
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app/
WORKDIR /usr/src/app
# Install app dependencies
# COPY package.json /usr/src/app/
# RUN npm install
# Bundle app source
ADD . /usr/src/app
CMD export NODE_ENV=production
EXPOSE 80:7000
# EXPOSE 7000
CMD [ "npm", "start" ]
Output from running gcloud docker -- push etc...:
f614bb7269f3: Pushed
658140f06d81: Layer already exists
be42b5584cbf: Layer already exists
d70c0d3ee1a2: Layer already exists
5f70bf18a086: Layer already exists
d0b030d94fc0: Layer already exists
42d0ce0ecf27: Layer already exists
6ec10d9b4afb: Layer already exists
a80b5871b282: Layer already exists
d2c5e3a8d3d3: Layer already exists
4dcab49015d4: Layer already exists
f614bb7269f3 is always being pushed and I can't figure out why (I'm new to Docker). It's trying to push the whole directory my app is in!?
Any ideas?
This blog post explains how to cache your dependencies in subsequent builds of your image by creating a layer that can be cached as long as the package.json file hasn't changed - http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
This is a link to the gist code snippet - https://gist.github.com/dweinstein/9468644
Worked wonders for our node app in my organization.
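As for the layer that is always pushed: ADD . /usr/src/app produces a new layer whenever anything under the build context changes, and it also sweeps in the host's node_modules, overwriting the copy made from /tmp. My suggestion (an assumption about your setup) would be a .dockerignore next to the Dockerfile, along these lines:
node_modules
.git
build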