pm2-runtime ecosystem npm script fail - docker

I don't know why ecosystem.config.js is still being included in the npm args. In my ecosystem.config.js file, args only contains run and start, but when I build the Docker image, it looks like pm2 runs npm ecosystem.config.js run start.
Please tell me why.
Dockerfile:
FROM node:lts-alpine
RUN npm install pm2 -g
COPY . /usr/src/nuxt/
WORKDIR /usr/src/nuxt/
RUN npm install
EXPOSE 8080
RUN npm run build
# start the app
CMD ["pm2-runtime", "ecosystem.config.js"]
ecosystem.config.js:
module.exports = {
  apps: [
    {
      name: 'webapp',
      exec_mode: 'cluster',
      instances: 2,
      script: 'npm',
      args: ['run', 'start'],
      env: {
        HOST: '0.0.0.0',
        PORT: 8080
      },
      autorestart: true,
      max_memory_restart: '1G'
    }
  ]
}
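A side note on the config above: pm2's cluster mode is intended for Node scripts, and npm is a shell wrapper rather than a Node script, which is a common source of odd argument handling like this. One frequently suggested workaround (a sketch, assuming a Nuxt 2 project with nuxt installed as a local dependency) is to point pm2 at the Nuxt binary directly:
module.exports = {
  apps: [
    {
      name: 'webapp',
      exec_mode: 'cluster',
      instances: 2,
      // call the Nuxt binary directly instead of going through npm
      script: './node_modules/nuxt/bin/nuxt.js',
      args: ['start'],
      env: {
        HOST: '0.0.0.0',
        PORT: 8080
      },
      autorestart: true,
      max_memory_restart: '1G'
    }
  ]
}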

I struggled with ecosystem.config.js too, and ended up using the YAML format instead: create a process.yaml and enter your config:
apps:
  - script: /app/index.js
    name: 'app'
    instances: 2
    error_file: ./errors.log
    exec_mode: cluster
    env:
      NODE_ENV: production
      PORT: 12345
Then in the Dockerfile:
COPY ./dist/index.js /app/
COPY process.yaml /app/
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
# Expose the listening port of your app
EXPOSE 12345
CMD [ "pm2-runtime", "/app/process.yaml"]
Just change the directories and files to match the way you want things set up.
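For reference, building and running the image could then look like this (image name is a placeholder):
docker build -t my-pm2-app .
docker run -p 12345:12345 my-pm2-app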

Related

express is not loading static folder with docker

I'm running webpack on the client side and Express for the server, with Docker. The server runs fine, but Express won't load the static files.
Folder structure:
client
  docker
    Dockerfile
  src
    css
    js
  public
server
  docker
    Dockerfile
  src
    views
Client Dockerfile:
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY . .
EXPOSE 8080
CMD ["pnpm", "start"]
Server Dockerfile:
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY . .
EXPOSE 8081
CMD ["pnpm", "start"]
docker-compose.yml:
version: '3.8'
services:
  api:
    image: server
    ports:
      - "8081:8081"
    volumes:
      - ./server/:/usr/src/app
      - /usr/src/app/node_modules
  client:
    image: client
    stdin_open: true
    ports:
      - "8080:8080"
    volumes:
      - ./client/:/usr/src/app
      - /usr/src/app/node_modules
Express server:
import path from 'path'
import { fileURLToPath } from 'url'
import express from 'express'

const __dirname = path.dirname(fileURLToPath(import.meta.url))
const app = express()
const port = 8081

// view engine
app.set("views", path.join(__dirname, 'views'));
app.set("view engine", "pug");
app.locals.basedir = app.get('views')

// Middlewares
app.use(express.static(path.resolve(__dirname, '../../client/public/')))

app.get('/', (req, res) => {
  res.render('pages/home')
})

app.listen(port)
The closest thing that comes to mind is that the public folder is not being copied by Docker, since that folder is only generated once I run the webpack dev server. What might be causing this issue?
The issue is going to be that you are not adding the folder client/public to the server Docker container.
Because of your folder structure, you could add the following line to the server Dockerfile (see the note on the build context after the next snippet):
COPY client/public ./client/public
then you would need to update your path statement in express.js:
import fs from 'fs' // needed for the existsSync check below

let p = path.resolve(__dirname, '../../client/public/');
if (!fs.existsSync(p)) {
  p = path.resolve(__dirname, './client/public/');
}
app.use(express.static(p))
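Note that Docker cannot COPY files from outside the build context, so the COPY client/public line only resolves if the image is built with the project root as the context, for example (the -f path follows the folder structure shown in the question):
docker build -f server/docker/Dockerfile -t server .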
The other option you have is to copy the whole project into both Docker images and set the working directory; however, this method is not preferred. For example, your server Dockerfile would become:
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY ../../ ./
WORKDIR /usr/src/app/server/src
EXPOSE 8081
CMD ["pnpm", "start"]
You can also inspect the file/folder structure by using docker exec.
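For example (the container name is a placeholder):
# open a shell inside the running container
docker exec -it server-container sh
# or list the tree directly
docker exec server-container ls -R /usr/src/app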

Docker container works from Dockerfile but get next: not found from docker-compose container

I am having an issue with my docker-compose configuration file. My goal is to run a Next.js app with a docker-compose file and enable hot reload.
Running the Next.js app from its Dockerfile works, but hot reload does not.
Running the Next.js app from the docker-compose file triggers an error: /bin/sh: next: not found, and I was not able to figure out what's wrong...
Dockerfile: (taken from Next.js' documentation website)
[Notice it's a multi-stage build; however, I am only referencing the builder stage in the docker-compose file.]
# Install dependencies only when needed
FROM node:18-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install # --frozen-lockfile
# Rebuild the source code only when needed
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3001
ENV PORT 3001
CMD ["node", "server.js"]
docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD}
backend:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
DATABASE_USERNAME: ${MYAPP_DATABASE_USERNAME}
DATABASE_PASSWORD: ${POSTGRESQL_PASSWORD}
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: builder
command: yarn dev
volumes:
- ./frontend:/app
expose:
- "3001"
ports:
- "3001:3001"
depends_on:
- backend
environment:
FRONTEND_BUILD: ${FRONTEND_BUILD}
PORT: 3001
package.json:
{
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "latest",
    "react": "^18.1.0",
    "react-dom": "^18.1.0"
  }
}
When calling yarn dev from docker-compose.yml it actually calls next dev, and that's when it triggers the error /bin/sh: next: not found. However, running the container straight from the Dockerfile works and does not lead to this error.
[Update]:
If I remove the volumes attribute from my docker-compose.yml file, I don't get the /bin/sh: next: not found error and the container runs; however, I now don't get the hot reload feature I am looking for. Any idea why the volume is messing with the /bin/sh next command?
This is happening because your local filesystem is being mounted over what is in the docker container. Your docker container does build the node modules in the builder stage, but I'm guessing you don't have the node modules available in your local file system.
To see if this is what is happening, on your local file system, you can do a yarn install. Then try running your container via docker again. I'm predicting that this will work, as yarn will have installed next locally, and it is actually your local file system's node modules that will be run in the docker container.
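A quick way to test that theory (a sketch; frontend is the service name from the compose file above):
cd frontend && yarn install && cd ..
docker compose up --build frontend  # or docker-compose, depending on your installation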
One way to fix this is to volume mount everything except the node modules folder. Details on how to do that: Add a volume to Docker, but exclude a sub-folder
So in your case, I believe you can add a line to your compose file:
frontend:
  ...
  volumes:
    - ./frontend:/app
    - /app/node_modules # <-- try adding this! (an anonymous volume on the container path)
  ...
That should keep the docker container's node_modules from being overwritten by the volume mount.
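One caveat: if the container was already created once, Compose may reuse the old anonymous volume; recreating anonymous volumes on the next run can help (the flag exists in recent Compose versions):
docker compose up --renew-anon-volumes frontend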

vue3 and vite.js, docker build production failed "Error: Could not resolve entry module (index.html)."

I am trying to build a Vue 3 project with Vite. I want to build it in a Dockerfile, but I get the following error:
vite v2.9.5 building for production...
✓ 0 modules transformed.
Could not resolve entry module (index.html).
error during build:
Error: Could not resolve entry module (index.html).
at error (/panda-planner/frontend-planner/node_modules/rollup/dist/shared/rollup.js:198:30)
at ModuleLoader.loadEntryModule (/panda-planner/frontend-planner/node_modules/rollup/dist/shared/rollup.js:22480:20)
at async Promise.all (index 0)
Error response from daemon: The command '/bin/sh -c npm run build' returned a non-zero code: 1
Failed to deploy '<unknown> Dockerfile: Dockerfile': Can't retrieve image ID from build stream
I have been looking for information on Rollup, but I don't understand what it is. Also, my command npm run build works perfectly on my computer.
Can someone help me, please?
My vite.config.js:
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
import eslintPlugin from "vite-plugin-eslint";
const path = require("path");

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue(), eslintPlugin()],
  resolve: {
    alias: {
      "@": path.resolve(__dirname, "src"),
    },
  },
});
My Dockerfile:
# Build backend application
FROM node:14.19.1-alpine AS builder
WORKDIR /panda-planner/backend-planner/
COPY /backend-planner/package*.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "run", "start" ]
# Build frontend application
FROM builder as frontend
WORKDIR /panda-planner/frontend-planner/
COPY /frontend-planner/package*.json .
RUN npm install --legacy-peer-deps
COPY . .
RUN npm run build
# Setup nginx server for frontend
FROM nginx:stable-alpine as nginx
COPY --from=frontend /frontend-planner/dist /usr/share/nginx/html
#COPY ./default.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;" ]
My bad -_-!
I made a mistake in the COPY paths. With COPY . ., each stage copied the whole build context (which contains both backend-planner/ and frontend-planner/) into its WORKDIR, so index.html was not at the root Vite expected, and the nginx stage copied from /frontend-planner/dist instead of the full path /panda-planner/frontend-planner/dist.
This solution works:
# Build backend application
FROM node:14.19.1-alpine AS builder
WORKDIR /panda-planner/backend-planner/
COPY /backend-planner/package*.json .
RUN npm install
COPY /backend-planner/ .
RUN npm run build
EXPOSE 1337
CMD ["npm", "run", "start" ]
# Build frontend application
FROM builder AS frontend
WORKDIR /panda-planner/frontend-planner/
COPY /frontend-planner/package*.json .
RUN npm install --legacy-peer-deps
COPY /frontend-planner/ .
RUN npm run build
# Setup nginx server for frontend
FROM nginx:stable-alpine AS nginx
COPY --from=frontend /panda-planner/frontend-planner/dist/ /usr/share/nginx/html/
#COPY ./default.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;" ]

Problem connecting between containers in Pod

I have a pod with 3 containers in it: client, server, mongodb (MERN).
The pod has a port mapped to the host and the client listens on it -> 8184:3000.
The website comes up and is reachable. The server log says that it has connected to the mongodb and is listening on port 3001, as I have assigned.
It seems that the client cannot connect to the server side and therefore cannot check the credentials for login, which leads to "wrong password or user" all the time.
The whole program works locally on my Windows machine.
Am I missing some part in Docker or in creating the pod? As far as I understood, the containers in a pod should communicate as if they were running on a local network.
This is the .gitlab-ci.yml:
stages:
  - build

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
  TAG_NAME_Client: gitlab.comp.com/sdx-licence-manager:$CI_COMMIT_REF_NAME-client
  TAG_NAME_Server: gitlab.comp.com/semdatex/sdx-licence-manager:$CI_COMMIT_REF_NAME-server

cache:
  paths:
    - client/node_modules/
    - server/node_modules/

build_pod:
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman pod rm -f -a
    - podman pod create --name lm-pod-$CI_COMMIT_SHORT_SHA -p 8184:3000

build_db:
  image: mongo:4.4
  tags:
    - sdxuser-pod-shell
  stage: build
  script:
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA -v ~/lmdb_volume:/data/db:z --name mongo -d mongo

build_server:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd server
    - podman build -t $TAG_NAME_Server .
    - podman run -dt --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Server

build_client:
  image: node:16.6.1
  stage: build
  tags:
    - sdxuser-pod-shell
  script:
    - cd client
    - podman build -t $TAG_NAME_Client .
    - podman run -d --pod lm-pod-$CI_COMMIT_SHORT_SHA $TAG_NAME_Client
Server Dockerfile:
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 3001
CMD [ "npm", "run", "start" ]
Client Dockerfile:
FROM docker.io/library/node:16.6.1
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN npm install -g npm@7.21.0
COPY . ./
EXPOSE 3000
# start app
CMD [ "npm", "run", "start" ]
Snippet from index.js on the client side, trying to reach the server side to check login credentials:
function Login(props) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');

  async function loginUser(credentials) {
    return fetch('http://127.0.0.1:3001/login', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(credentials),
    })
      .then((data) => data.json());
  }
}
Actually it has nothing to do with podman. Sorry about that. I added a proxy to my package.json and it redirected the requests correctly:
"proxy": "http://localhost:3001"

VueJs docker image not loading in the browser

QUESTION: (edited: solution is added at the end of this post)
I have a VueJS project (developed with webpack) that I want to dockerize.
My Dockerfile looks like:
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "dev"]
which is basically following the flow from this post.
I also have a .dockerignore file, into which I copied the same entries as my .gitignore; it looks like:
.DS_Store
node_modules/
/dist/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
.git/
I have created a docker image with the command:
docker build -t test/my-image-name .
and then run it into a container with the command:
docker run -it -p 8080:8080 --rm --name my-container-name test/my-image-name
As a result of this last command, I got the same terminal output (the one webpack / VueJS normally shows while debugging) as when I run the app locally:
BUT: in the end, the app is not loaded in the browser window.
If I run the commands docker images and docker ps, I can see that the image and the container are there, and I did not get any error messages while creating them.
I found this post and made a few attempts at changing the Dockerfile:
Option 1
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
Option 2
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
EXPOSE 8080
CMD ["npm", "run", "dev"]
But it seems neither of them works.
By the way, my package.json file looks like:
"scripts": {
  "dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
  "start": "npm run dev",
  "build": "node build/build.js"
}
So I'm wondering: how do I get the app from the Docker image to open in the browser?
SOLUTION: Not sure if this was the reason for the fix, but I did two things. As mentioned, I'm working with VueJS and webpack, so inside the file named config/index.js, which initially looked like:
module.exports = {
  dev: {
    // Paths
    assetsSubDirectory: 'static',
    assetsPublicPath: '/',
    proxyTable: {},

    // Various Dev Server settings
    host: 'localhost', // <---- this one
    port: 8080,
I changed the host property from 'localhost' to '0.0.0.0', removed the EXPOSE 8080 line from the Dockerfile (the initial Dockerfile from my question above) since I noticed that the port from the config file is used by default, and also restarted the installed Docker tool on my local machine.
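The host change is likely the important part: inside a container, a dev server bound to localhost only accepts connections from within that container, so Docker's published port never reaches it; binding to 0.0.0.0 accepts connections on all interfaces. The relevant part of config/index.js after the change (a sketch):
module.exports = {
  dev: {
    // bind to all interfaces so Docker's port mapping can reach the dev server
    host: '0.0.0.0',
    port: 8080,
  },
}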
