Express is not loading the static folder with Docker

I'm running webpack on the client side and Express for the server, both with Docker. The server runs fine, but Express won't load the static files.
folder structure
client
  docker
    Dockerfile
  src
    css
    js
  public
server
  docker
    Dockerfile
  src
    views
Client Dockerfile
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY . .
EXPOSE 8080
CMD ["pnpm", "start"]
Server Dockerfile
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY . .
EXPOSE 8081
CMD ["pnpm", "start"]
docker compose
version: '3.8'
services:
  api:
    image: server
    ports:
      - "8081:8081"
    volumes:
      - ./server/:/usr/src/app
      - /usr/src/app/node_modules
  client:
    image: client
    stdin_open: true
    ports:
      - "8080:8080"
    volumes:
      - ./client/:/usr/src/app
      - /usr/src/app/node_modules
express
import path from 'path'
import { fileURLToPath } from 'url'
import express from 'express'
const __dirname = path.dirname(fileURLToPath(import.meta.url))
const app = express()
const port = 8081
// view engine
app.set("views", path.join(__dirname, 'views'));
app.set("view engine", "pug");
app.locals.basedir = app.get('views')
// Middlewares
app.use(express.static(path.resolve(__dirname, '../../client/public/')))
app.get('/', (req, res) => {
  res.render('pages/home')
})
app.listen(port)
The closest thing that comes to mind is that the public folder is not being copied by Docker, since this folder is only generated once I run the webpack server. What might be causing this issue?

The issue is going to be that you are not adding the folder /client/public to the server docker container.
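To see why, you can resolve the same expression the Express code uses. Assuming the server code runs from /usr/src/app/src inside its container (per the Dockerfile's WORKDIR plus the src folder; this location is an assumption about your layout), the static path points at a directory that is never created in the server image:

```javascript
import path from 'path'

// Assumed location of the Express file inside the server container
// (WORKDIR /usr/src/app, code under src/)
const dirname = '/usr/src/app/src'

// Same expression as in app.use(express.static(...)) from the question
const staticDir = path.resolve(dirname, '../../client/public/')
console.log(staticDir) // → /usr/src/client/public, which doesn't exist in this image
```

Nothing in the server Dockerfile copies anything to /usr/src/client/public, so express.static serves an empty (nonexistent) directory.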
Because of your folder structure, you could add a line like the following to the server Dockerfile. Note that COPY cannot reference files outside the build context, so this only works if the build context is the project root rather than the server folder:
COPY client/public ./client/public
Then you would need to update the path logic in your Express app (note the added fs import):
import fs from 'fs'
let p = path.resolve(__dirname, '../../client/public/')
if (!fs.existsSync(p)) {
  p = path.resolve(__dirname, './client/public/')
}
app.use(express.static(p))
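Alternatively, since your compose file already bind-mounts the source folders, you could mount the client's public folder into the server container at the path the original Express code resolves to. A sketch (not in the original answer; assumes the server code lives under /usr/src/app/src so that '../../client/public' resolves to /usr/src/client/public):

```yaml
api:
  image: server
  ports:
    - "8081:8081"
  volumes:
    - ./server/:/usr/src/app
    - /usr/src/app/node_modules
    - ./client/public:/usr/src/client/public  # where '../../client/public' resolves
```

With this mount, no Dockerfile change or path fallback is needed, and files webpack writes into ./client/public on the host appear in the server container immediately.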
The other option you have is to copy the whole project into both images and set the working directory accordingly. This method is not preferred, and it also requires the build context to be the project root, because COPY cannot reach above the context. For example, your server Dockerfile would become:
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY . ./
WORKDIR /usr/src/app/server/src
EXPOSE 8081
CMD ["pnpm", "start"]
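Either way, the server's build context must include the client folder. A sketch of the compose build section, assuming the Dockerfile stays at server/docker/Dockerfile (adjust the path to your actual layout):

```yaml
api:
  build:
    context: .                            # project root, so client/public is inside the context
    dockerfile: server/docker/Dockerfile
  ports:
    - "8081:8081"
```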
You can also inspect the file / folder structure by using docker exec
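For example, assuming the server container is named server (substitute the actual name from docker ps):

```shell
# list what actually made it into the running container
docker exec -it server ls -R /usr/src/app
docker exec -it server ls /usr/src/client/public
```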

Related

2 docker builds into a multi-build

I have one Dockerfile with two stages: the first stage is a Node server serving some data, and the second stage is a React app. I use a docker-compose file to run the Dockerfile.
I am able to access the React app via port 3000, but the second-stage server isn't running, so I can't access the data.
Any idea how to solve it?
FROM node:12.6
WORKDIR /usr/src/app
COPY package.json .
COPY . .
# node server
EXPOSE 5500
CMD ["npm", "run", "server"]

FROM node:12.6
WORKDIR /usr/src/app
COPY package.json .
RUN npm i
COPY . .
# react app
EXPOSE 3000
CMD ["npm", "run", "dev"]
version: "3.9"
services:
  testingapp:
    container_name: testingApp
    build: .
    volumes:
      - ./src:/app/src:delegated
    ports:
      - "3000:3000"
I have read various docs online.
You're trying to run the front- and back-ends in the same container. A container only runs one process, though; if you need two separate processes from the same code base then you can run two separate containers off the same image, overriding the command: on one of them.
So reduce the Dockerfile to copy the code base in, and declare one process or the other as the main container command:
FROM node:12.6
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci
COPY ./ ./
EXPOSE 3000
CMD ["npm", "run", "server"]
Now in your Compose file, declare two separate containers. For the second, override the command: with the alternate program to run. Both can build: the same image; the second build will come entirely from the Docker layer cache and be all but free. The code is built into the image and you don't need to replace it using volumes:.
version: '3.8'
services:
  express:
    build: .
    ports: ['5500:3000']
  react:
    build: .
    command: npm run dev
    ports: ['3000:3000']

Dockerfile permission for volume

FROM --platform=$BUILDPLATFORM maven:3.8.5-eclipse-temurin-17 AS builder
WORKDIR /server
COPY pom.xml /server/pom.xml
RUN mvn dependency:go-offline
COPY src /server/src
RUN mvn install
# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD ["mvn", "spring-boot:run"]
FROM builder as prepare-production
RUN mkdir -p target/dependency
WORKDIR /server/target/dependency
RUN jar -xf ../*.jar
FROM eclipse-temurin:17-jre-focal
EXPOSE 8080
VOLUME /app
ARG DEPENDENCY=/server/target/dependency
COPY --from=prepare-production ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=prepare-production ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=prepare-production ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.server.backend.BackendApplicaiton"]
and I need to save the files in /app to the absolute directory /opt/containers/backend. The code below is my docker-compose file.
version: "3.9"
services:
  backend:
    container_name: "backend"
    build: backend
    environment:
      - ${MSSQL_PASSWORD}
    ports:
      - 3000:8080
    volumes:
      - /opt/containers/backend:/app
    networks:
      - backend
networks:
  backend:
    name: backend
    driver: bridge
    internal: false
If I run this and let Docker create the volume, everything works and the files are saved inside the Docker volume. But when I set an absolute path as in the docker-compose file above, the directory is empty and the app does not run. I am sure the error is in permissions, but I can't figure out where, and I could not find any solutions :(
Thank you for all your replies and help.

Docker container works from Dockerfile but get next: not found from docker-compose container

I am having an issue with my docker-compose configuration file. My goal is to run a Next.js app with a docker-compose file and enable hot reload.
Running the Next.js app from its Dockerfile works, but hot reload does not.
Running the Next.js app from the docker-compose file triggers the error /bin/sh: next: not found, and I was not able to figure out what's wrong...
Dockerfile: (taken from Next.js' documentation website)
[Notice it's a multistage build however, I am only referencing the builder stage in the docker-compose file.]
# Install dependencies only when needed
FROM node:18-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install # --frozen-lockfile
# Rebuild the source code only when needed
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3001
ENV PORT 3001
CMD ["node", "server.js"]
docker-compose.yml:
version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD}
  backend:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      DATABASE_USERNAME: ${MYAPP_DATABASE_USERNAME}
      DATABASE_PASSWORD: ${POSTGRESQL_PASSWORD}
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: builder
    command: yarn dev
    volumes:
      - ./frontend:/app
    expose:
      - "3001"
    ports:
      - "3001:3001"
    depends_on:
      - backend
    environment:
      FRONTEND_BUILD: ${FRONTEND_BUILD}
      PORT: 3001
package.json:
{
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "latest",
    "react": "^18.1.0",
    "react-dom": "^18.1.0"
  }
}
When calling yarn dev from docker-compose.yml, it actually calls next dev, and that's when it triggers the error /bin/sh: next: not found. However, running the container straight from the Dockerfile works and does not lead to this error.
[Update]:
If I remove the volume attribute from my docker-compose.yml file, I don't get the /bin/sh: next: not found error and the container runs; however, I now don't get the hot reload feature I am looking for. Any idea why the volume is messing up the /bin/sh next command?
This is happening because your local filesystem is being mounted over what is in the docker container. Your docker container does build the node modules in the builder stage, but I'm guessing you don't have the node modules available in your local file system.
To see if this is what is happening, on your local file system, you can do a yarn install. Then try running your container via docker again. I'm predicting that this will work, as yarn will have installed next locally, and it is actually your local file system's node modules that will be run in the docker container.
One way to fix this is to volume mount everything except the node modules folder. Details on how to do that: Add a volume to Docker, but exclude a sub-folder
So in your case, I believe you can add a line to your compose file:
frontend:
  ...
  volumes:
    - ./frontend:/app
    - /app/node_modules # <-- anonymous volume over the container path; try adding this!
  ...
That should allow the docker container's node_modules to not be overwritten by any volume mount.

Dockerfile returns built dist folder

I have a simple vue.js app, and I wanted to use a Dockerfile to build it:
FROM node:14.14.0-stretch
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . ./
RUN env
RUN npm run generate
Is it possible now, using docker-compose, to not build it but use the already-prepared image, and get the dist folder from a volume so I could copy it to nginx?
I assume you want to serve the dist folder using nginx.
You should use a multi-stage build for this: https://docs.docker.com/develop/develop-images/multistage-build/
Dockerfile
FROM node:14.14.0-stretch as build
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . ./
RUN env
RUN npm run generate
# create cdn stage from nginx image
FROM nginx:stable-alpine as cdn
# copy nginx config to serve files from /data/www
COPY nginx.conf /etc/nginx/nginx.conf
# copy built files from build stage to /data/www
COPY --from=build /app/dist /data/www
# nginx listens on port 80
EXPOSE 80
# run nginx in the foreground so the container does not exit
CMD ["nginx", "-g", "daemon off;"]
nginx.conf
events {
  worker_connections 1024;
}

http {
  server {
    location / {
      root /data/www;
    }
  }
}
docker-compose.yml
version: '2.4'
services:
  nginx-cdn:
    build:
      context: path/to/Dockerfile
      target: cdn
    ports:
      - '80:80'

Docker Compose build command using Cache and not picking up changed files while copying to docker

I have a docker-compose.yml file comprising two services (each built from a Dockerfile). I built the images once (docker-compose build) and they were up and running once I ran docker-compose up.
I then changed the source code used in one of the services; however, when I rebuilt the images (docker-compose build), the code changes were not reflected once I ran the services again (docker-compose up).
docker-compose.yml
version: '2'
services:
  serviceOne:
    build:
      context: ./ServerOne
      args:
        PORT: 4000
    ports:
      - "4000:4000"
    env_file:
      - ./ServerOne/.env
    environment:
      - PORT=4000
  serviceTwo:
    build:
      context: ./serviceTwo
      args:
        PORT: 3000
    ports:
      - "3000:3000"
    env_file:
      - ./serviceTwo/.env
    environment:
      - PORT=3000
      - serviceOne_URL=http://serviceOne:4000/
    depends_on:
      - serviceOne
serviceOne/DockerFile
FROM node:8.10.0
RUN mkdir -p /app
WORKDIR /app
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app/
RUN npm build
EXPOSE ${ACC_PORT}
CMD [ "npm", "start" ]
serviceTwo/DockerFile
FROM node:8.10.0
RUN mkdir -p /app
WORKDIR /app
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app/
RUN npm build
EXPOSE ${ACC_PORT}
CMD [ "npm", "start" ]
Following is the output of docker-compose when it is run for the second time.
It is somehow using the cached layers again when the COPY and npm build commands are run.
How could the Dockerfile or docker-compose file be changed so that the new source code is deployed?
You can force the build to ignore the cache by adding the --no-cache option to docker-compose build.
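For example:

```shell
# rebuild every service without the layer cache, then recreate the containers
docker-compose build --no-cache
docker-compose up --force-recreate
```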
