Nodemon doesn't reload my app.js in my Docker container

I decided to make a server with Express in a container and installed nodemon to watch and reload my code on modification, but, for some reason, nodemon in the container doesn't reload when I modify my code. How can I fix that?
My Dockerfile:
FROM node:14-alpine
WORKDIR /usr/app
RUN npm install -g nodemon
COPY . .
EXPOSE 3000
CMD ["npm","start"]
My package.json:
{
  "name": "prog-web-2",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "nodemon app.js"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/willonf/prog-web-2.git"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "bugs": {
    "url": "https://github.com/willonf/prog-web-2/issues"
  },
  "homepage": "https://github.com/willonf/prog-web-2#readme",
  "dependencies": {
    "express": "^4.17.1"
  }
}
My app.js:
const express = require("express")
const app = express()
app.get("/", (req, res) => {
res.end("Hello, World!")
});
app.listen(3000);

You are copying all of your files into the Docker container at build time:
COPY . .
So when you modify a file locally, you are not modifying the file inside the container, and nodemon inside the container can't detect any changes.
You can get the behavior you want with Docker volumes: configure a bind mount so that the container shares the working directory with the host system. If you then change a file on the host, nodemon inside the container will detect the change.
The answer in this post shows an example of how to accomplish that.
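For example, a minimal sketch of such a bind mount with docker run, assuming the image built from the Dockerfile above is tagged my-express-app (the image name is just an illustration):
# Bind-mount the project directory over the image's WORKDIR (/usr/app)
# so edits on the host are immediately visible to nodemon in the container.
docker run -it --rm -p 3000:3000 -v "$(pwd)":/usr/app my-express-app
On Windows, replace "$(pwd)" with %cd%. Keep in mind that the bind mount hides whatever was copied into /usr/app at build time, so the host directory needs to contain node_modules (or you can add a separate anonymous volume for it).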

Related

Cannot get Cypress to work in a Docker container

I want to run Cypress in a Docker container. Following some guides and tutorials, I created an e2e directory that has this Dockerfile:
FROM cypress/base:14
WORKDIR /app
# dependencies will be installed only if the package files change
COPY package.json .
COPY cypress.json .
RUN npm install
RUN npx cypress open
RUN npx cypress verify
cypress.json
{
  "baseUrl": "http://localhost:3000",
  "reporter": "junit",
  "reporterOptions": {
    "mochaFile": "cypress/results/output.xml"
  }
}
package.json
{
  "name": "e2e",
  "version": "1.0.0",
  "description": "Cypress tests",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "private": true,
  "devDependencies": {
    "cypress": "9.5.0"
  }
}
and .npmrc
registry=http://registry.npmjs.org/
save-exact=true
progress=false
package-lock=true
I modified the docker-compose.yml at the root to be:
version: '3.8'
services:
  nextjsApp:
    # rest
  cypress:
    image: "cypress/included:3.2.0"
    depends_on:
      - nextjs
    environment:
      - CYPRESS_baseUrl=http://nextjs:3000
    working_dir: /e2e
    command: npx cypress run
    volumes:
      - ./:/e2e
networks:
  backend:
    external: true
Running docker-compose build does not cause or show any errors, but when I run docker-compose up I get:
cypress_1 | Could not find any tests to run.
cypress_1 |
cypress_1 | We looked but did not find a cypress.json file in this folder: /e2e
How can this be solved?
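One thing worth checking, as a hedged guess based only on the files shown above: the compose file mounts the project root (./) at /e2e, while cypress.json and package.json live inside the e2e directory, so /e2e inside the container would not contain them. A sketch of the volume line under that assumption, mounting the e2e folder itself:
    volumes:
      # mount the e2e directory (which holds cypress.json) at /e2e
      - ./e2e:/e2e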

Docker Volumes - Changes not reflected

Using Node.js.
I am having an issue with Docker volumes. I set up a volume in my docker-compose.yml file, but for some reason, changes I am making locally are not being reflected. Any idea why?
I have the following docker-compose.yml file:
version: "3"
services:
posts:
build:
dockerfile: Dockerfile.dev
context: ./posts
ports:
- "4000:4000"
volumes:
- /app/node_modules
- ./posts:/app
...// more services here
Excerpt from index.js in posts
app.get("/posts", (req, res) => {
console.log("Quackss!");
res.send(posts);
});
Now let's say I run
docker-compose up --build posts
When I make my first request via Postman to /posts,
I see "Quackss!" in my console.
Now when I change the code to
app.get("/posts", (req, res) => {
console.log("Double Quack");
res.send(posts);
});
and save, then make a request via Postman,
I still see "Quackss!" instead of "Double Quack".
I do have nodemon set up, so I didn't think that was the issue.
I ran docker ps to see the name of the container,
then ran docker exec -it <container name> sh,
then cat index.js.
If the volumes were set up correctly, I'd expect to see
app.get("/posts", (req, res) => {
  console.log("Double Quack");
  res.send(posts);
});
Instead, I saw the original
app.get("/posts", (req, res) => {
console.log("Quackss!");
res.send(posts);
});
Here is my posts package.json
{
  "name": "posts",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "nodemon"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "axios": "^0.19.2",
    "cors": "^2.8.5",
    "express": "^4.17.1",
    "nodemon": "^2.0.4"
  }
}
And Dockerfile.dev:
FROM node:alpine
WORKDIR /usr/app/
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]
Any idea why this is happening?
Your Dockerfile.dev sets WORKDIR to /usr/app/ and copies the source code into that directory, but your docker-compose.yml maps the volumes to /app instead.
One solution is to change the WORKDIR to /app/ and rebuild your image.
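A minimal sketch of Dockerfile.dev with that change (everything else, including the volume paths in docker-compose.yml, stays the same):
FROM node:alpine
# Match the /app path that docker-compose.yml mounts the volumes onto
WORKDIR /app/
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]
After rebuilding (docker-compose up --build posts), the ./posts:/app bind mount and the /app/node_modules volume line up with the directory nodemon is actually watching.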

Dockerfile can't find entrypoint.sh

My filesystem:
Dockerfile
entrypoint.sh
package.json
/shared_volume/
Dockerfile
FROM node:8
# Create and define the node_modules's cache directory.
RUN mkdir /usr/src/cache
WORKDIR /usr/src/cache
COPY . .
RUN npm install
# Create and define the application's working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# entrypoint to copy the node_modules and root files into /usr/src/app, to be shared with my local volume.
ENTRYPOINT ["/usr/src/cache/entrypoint.sh"]
package.json
{
  "name": "test1",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "echo hello world start"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "rimraf": "^3.0.1"
  }
}
entrypoint.sh
#!/bin/bash
cp -r /usr/src/cache/. /usr/src/app/.
Command line - bash script
If I run this command (note: using Windows 10 with cmder, hence %cd% not pwd):
docker run -it --rm -v %cd%/shared_volume:/app --privileged shared-volume-example bash
Error
standard_init_linux.go:211: exec user process caused "no such file or directory"
If I take out the reference to entrypoint, then the code works, so what is going on with the entrypoint?
Any suggestions?
Thanks.
OK, someone else posted an answer regarding line endings. I used Atom's line-ending-selector, set it to LF, resaved the file, and now it works.
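If changing the editor setting isn't an option, a hedged alternative is to normalize the line endings during the build instead; a sketch of an extra step for the Dockerfile above:
# Strip Windows carriage returns so the "#!/bin/bash" shebang in
# entrypoint.sh resolves correctly when the container starts.
RUN sed -i 's/\r$//' /usr/src/cache/entrypoint.sh \
    && chmod +x /usr/src/cache/entrypoint.sh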

lerna bootstrap command does not install the local yarn packages

I am trying to install my local npm packages (using Yarn workspaces) in my Docker container, but it does not seem to be installing the local packages, though it does install the global packages successfully.
Also, when I bash into the container and run "lerna bootstrap", that installs my local packages successfully. I have been searching all over the internet for why this is happening.
My Dockerfile's content looks like
FROM node:12.4.0-alpine
RUN apk add --no-cache bash
RUN apk add --no-cache yarn
WORKDIR "/app"
RUN yarn global add lerna
COPY . .
RUN yarn install
RUN lerna bootstrap
CMD ["npm", "run", "dev"]
My root package.json file looks like
{
  "name": "my-test",
  "version": "1.0.0",
  "description": "Test",
  "main": "app.js",
  "private": true,
  "workspaces": [
    "packages/**"
  ],
  "scripts": {
    "start": "node app.js",
    "dev": "nodemon -L app.js"
  },
  "author": "Phantom",
  "license": "ISC",
  "dependencies": {
    "config": "3.0.1",
    "dotenv": "7.0.0",
    "express": "4.16.4",
    "node-locale": "2.0.0"
  },
  "devDependencies": {
    "lerna": "^3.15.0",
    "@lerna/bootstrap": "3.8.5",
    "nodemon": "1.18.10"
  }
}
My lerna.json file looks like
{
  "version": "1.0.0",
  "npmClient": "yarn",
  "useWorkspaces": true,
  "packages": ["packages/*"]
}
As a workaround, I am running the following docker command once the container is up:
docker exec -w /app <my-container-name> lerna bootstrap
I know this is not a proper solution, so can someone please help me out?

docker: removing file/image dependencies on other docker images

How do you remove file dependencies from a Docker image? Every time I build and upload a new Docker image, it takes layers and information from past images. How do I remove this connection so that all Docker builds are independent of all other images?
E.g.: I had a working file, and when I re-uploaded it with no changes it stopped working, and since then I can't re-upload a working file.
Dockerfile:
FROM 10.119.222.132:5000/node:8.5.0-wheezy
ENV http_proxy=http://www-proxy.abc.ca:3128/
ENV https_proxy=http://www-proxy.abc.ca:3128/
ENV PORT 5000
RUN apt-get update
WORKDIR /usr/src/app
COPY package.json /usr/src/app
COPY . .
CMD ["node", "server.js"]
package.json:
{
  "name": "api-proxy",
  "version": "1.0.0",
  "description": "API Gateway",
  "main": "server.js",
  "dependencies": {
    "body-parser": "^1.17.2",
    "crypto": "^1.0.1",
    "cors": "^2.8.4",
    "express": "^4.15.4",
    "jsonwebtoken": "^7.4.2",
    "mongodb": "^2.2.31",
    "request": "^2.81.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node server.js"
  },
  "author": "",
  "license": "ISC"
}
You could build it using
docker build --no-cache -t my_new_image:v1.0.0 .
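If the goal is for the build to be independent of anything cached locally, a hedged addition is to also re-pull the base image:
# --no-cache ignores previously built layers; --pull re-downloads the base image
docker build --no-cache --pull -t my_new_image:v1.0.0 .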
