I have a simple web application that I would like to place in a Docker container. The Angular application lives in the frontend/ folder, which is within the application/ folder.
When the Dockerfile is in the application/ folder and reads as follows:
FROM node
ADD frontend/ frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
everything runs correctly.
However, when I move the Dockerfile into the frontend/ folder and change it to read
FROM node
ADD . frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
no files are copied and the project does not run.
How can I add every file and folder recursively in the current directory to my docker image?
The Dockerfile that ended up working was
FROM node
ADD . /frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
Shoutout to @Matt for the lead on ADD . ./, but I think the only reason that didn't work is that, for some reason, my application only runs when it is inside a directory, not in the root. This might have something to do with @VonC's observation that the node image doesn't set a WORKDIR.
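For comparison, here is a sketch that sets a WORKDIR explicitly, so neither an ADD destination folder nor the cd subshells are needed (this restructures the accepted Dockerfile; it is not the original poster's exact solution):

```dockerfile
FROM node
# The node base image sets no WORKDIR, so set one explicitly;
# later relative paths and the CMD both run from here
WORKDIR /frontend
COPY . .
RUN npm install
CMD ["npm", "start"]
```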
First, try COPY just to test whether the issue persists.
Second, verify that no files are copied by changing your CMD to ls frontend.
I do not see a WORKDIR in node/7.5/Dockerfile, so frontend could end up in /frontend: check ls /frontend too.
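A throwaway debug Dockerfile along those lines might look like this (a sketch; the CMD is temporary and only lists both candidate locations to see where the files actually landed):

```dockerfile
FROM node
ADD . frontend/
# Temporary debug CMD: without a WORKDIR, "frontend" resolves
# relative to /, so check both possible destinations
CMD ls frontend; ls /frontend
```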
Related
I am trying to create a Docker image, but the "npm run build" step fails with this error: "Couldn't find a pages directory. Please create one under the project root". I do have that directory in my application's root folder, and on my local server it runs fine and creates the .next folder. My folder structure is, for example: app/pages/index.js
I don't know why it is failing in the Docker build. Can you guys help me with this?
Below is my Dockerfile
FROM node:14-alpine
RUN mkdir -p /usr/src/next-website
WORKDIR /usr/src/next-website
COPY package*.json ./
RUN npm i
RUN npm run build
COPY . .
EXPOSE 94
CMD ["node", "server.js"]
Thanks in advance.
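For what it's worth, one likely culprit in the Dockerfile above is ordering: RUN npm run build executes before COPY . ., so at build time only package*.json is in the image and pages/ does not exist yet. A reordered sketch (not a confirmed fix, just the build moved after the source copy):

```dockerfile
FROM node:14-alpine
WORKDIR /usr/src/next-website
# Copy the manifests first so the npm i layer stays cached
# while only application sources change
COPY package*.json ./
RUN npm i
# Copy the sources (including pages/) before building
COPY . .
RUN npm run build
EXPOSE 94
CMD ["node", "server.js"]
```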
I have this very simple image:
FROM node:11-alpine
WORKDIR /app
COPY src /app/src
RUN cd src \
&& npm i --no-cache \
&& npm run build
CMD cd src \
&& npm run start
Everything is ok during the build, e.g. a simple ls -R / reveals the following tree:
/:
app/
/app:
src/
/app/src:
package.json ...
But when I try to start it I find the following structure:
/:
app/
/app/:
src/
/app/src/:
src/ ... more files from the context dir that I never COPYed
/app/src/src/:
package.json ...
If I RUN ls -R / just after npm run build I get the 'good' tree, even running ls -R / just one layer before CMD I get the same 'good' tree, but any layer after CMD (including CMD itself) gets me the 'wrong' tree, e.g:
CMD ls -R / && cd src && npm run start
It shows /app/src/src, just as if it was taking all the contents of the context dir and putting them below the WORKDIR/src (i.e. /app/src)
Why is docker doing this?
I'm running
Docker version 18.09.3, build 774a1f4
docker-compose version 1.23.2, build 1110ad0
After fiddling around in a somewhat "cleaner" environment at home, I found out that, as a comment suggested, the culprit was a stale volume: Docker was still mounting ./myapp:/app/src even though that entry was no longer in the volumes section of my docker-compose.yaml file. A simple yes|docker system prune did the trick.
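For anyone debugging something similar, two commands worth knowing (a sketch; the container name is a placeholder):

```
# Show what is actually mounted into a running container,
# regardless of what the current docker-compose.yaml says
docker inspect --format '{{json .Mounts}}' <container-name>

# Remove stopped containers, dangling images and unused networks
# (the fix that worked above)
yes | docker system prune
```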
I've got a node_modules folder which is 120 MB+, and I'm wondering if we can somehow push the node_modules folder only when it has changed?
This is what my docker file looks like at the moment:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
CMD export NODE_ENV=production
EXPOSE 80:7000
# EXPOSE 7000
CMD [ "npm", "start" ]
So what I want to do is push the node_modules folder only when it has changed! I don't mind manually specifying when the node_modules folder has changed, e.g. by passing a flag and using an if statement, I don't know.
Use case:
I only made changes to my application code and didn't add any new packages.
I added some packages and require the node_modules folder to be pushed.
Edit:
So I tried the following docker file which brought in some logic from
http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
When I run docker build -t <name> . with the below Dockerfile and then gcloud docker -- push <url>, it still tries to push my whole directory to the registry?!
FROM node:6.2.0
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
# Create app directory
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app/
WORKDIR /usr/src/app
# Install app dependencies
# COPY package.json /usr/src/app/
# RUN npm install
# Bundle app source
ADD . /usr/src/app
CMD export NODE_ENV=production
EXPOSE 80:7000
# EXPOSE 7000
CMD [ "npm", "start" ]
Output from running gcloud docker -- push etc...:
f614bb7269f3: Pushed
658140f06d81: Layer already exists
be42b5584cbf: Layer already exists
d70c0d3ee1a2: Layer already exists
5f70bf18a086: Layer already exists
d0b030d94fc0: Layer already exists
42d0ce0ecf27: Layer already exists
6ec10d9b4afb: Layer already exists
a80b5871b282: Layer already exists
d2c5e3a8d3d3: Layer already exists
4dcab49015d4: Layer already exists
f614bb7269f3 is always being pushed and I can't figure out why (I'm new to Docker). It's trying to push the whole directory my app is in!?
Any ideas?
This blog post explains how to cache your dependencies in subsequent builds of your image by creating a layer that can be cached as long as the package.json file hasn't changed - http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
This is a link to the gist code snippet - https://gist.github.com/dweinstein/9468644
Worked wonders for our node app in my organization.
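A likely reason the whole directory layer keeps getting pushed is that ADD . /usr/src/app re-copies everything, including the host's node_modules, into a layer that changes on every build. A .dockerignore sketch that keeps those out of the build context (entries are examples; adjust to the project):

```
# .dockerignore (place next to the Dockerfile)
node_modules
.git
npm-debug.log
```

With node_modules excluded, the ADD . layer only changes when application files change, while the dependency layer built from package.json stays cached.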
I'm inside my docker container:
This is my working directory
root@19b84a014662:/usr/src/ghost#
I have a script in:
root@19b84a014662:/
I'm able to cd to / and execute the script. But I need to execute the script from my docker-compose file. I tried
./test.sh
But this actually means it searches for the script in /usr/src/ghost/ instead of /.
How can I execute the script, in the / of my container?
Example: I ssh into my container:
root@19b84a014662:/usr/src/ghost# ls
Gruntfile.js LICENSE PRIVACY.md README.md config.example.js config.js content core index.js node_modules npm-shrinkwrap.json package.json
I have a script in the root of my container. I want to execute it with:
./test.sh
Then it shows me only the folders/scripts that are in /usr/src/ghost, not those in /:
root@19b84a014662:/usr/src/ghost# ./
content/ core/ node_modules/
Just replace your ./test.sh with
/test.sh
Because ./ means you're starting from the current directory (which is the working dir /usr/src/ghost/ in this case), whereas / means you're starting from the root directory, and that's what you want to do.
Alternatively, you could switch to the root dir and execute your script in one command using the && operator, as below. But I'd recommend the approach above.
cd / && ./test.sh
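In a docker-compose file, the same absolute path works as the service's command; a sketch (the service and image names are assumptions based on the /usr/src/ghost path above):

```yaml
version: "3"
services:
  ghost:
    image: ghost
    # An absolute path resolves regardless of the image's WORKDIR
    command: /test.sh
```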
I am trying to run webpack inside a docker container for a node app. I get the following error.
sh: 1: webpack: Permission denied
The Dockerfile works fine on a normal build.
FROM node
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3001
#This launches webpack which fails.
CMD [ "npm", "start" ]
I had the same issue as I was migrating an existing project to Docker. I resolved it by not copying the entire project contents (COPY . /usr/src/app in your Dockerfile) and instead copying only the files and directories actually required.
In my case, the unnecessary directories added when copying the whole project were, among other things, node_modules, the build directory and the entire .git repo directory.
I still don't know why copying the entire directory doesn't work (does something conflict? does something have incorrect permissions?), but copying only what you need is better for image size anyway.
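A sketch of that selective copy, assuming the app's sources live in src/ and the webpack config sits at the project root (both names are examples, not from the original project):

```dockerfile
FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
# Copy only what the build needs; the host's node_modules,
# build output and .git directory never enter the image
COPY src/ ./src/
COPY webpack.config.js ./
EXPOSE 3001
CMD [ "npm", "start" ]
```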