Dockerfile returns built dist folder - docker

I have a simple Vue.js app, and I wanted to use a Dockerfile to build it:
FROM node:14.14.0-stretch
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . ./
RUN env
RUN npm run generate
Is it possible now, using docker-compose, to not build the image again but use the already prepared image, and get the dist folder out of a volume, so I could copy it to Nginx?

It sounds like you want to serve the dist folder using Nginx.
You should use a multi-stage build for this: https://docs.docker.com/develop/develop-images/multistage-build/
Dockerfile
FROM node:14.14.0-stretch as build
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . ./
RUN env
RUN npm run generate
# create cdn stage from nginx image
FROM nginx:stable-alpine as cdn
# copy nginx config to serve files from /data/www
COPY nginx.conf /etc/nginx/nginx.conf
# copy built files from build stage to /data/www
COPY --from=build /app/dist /data/www
# nginx listen on port 80
EXPOSE 80
# run in the foreground so the container doesn't exit immediately
CMD [ "nginx", "-g", "daemon off;" ]
nginx.conf
events {
  worker_connections 1024;
}

http {
  server {
    location / {
      root /data/www;
    }
  }
}
docker-compose.yml
version: '2.4'
services:
  nginx-cdn:
    build:
      # context must be the directory that contains the Dockerfile
      context: path/to
      target: cdn
    ports:
      - '80:80'
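With that in place you can build and run just the cdn stage with docker-compose. And if you really do want to pull the dist folder out of an already-built image instead of rebuilding, you can copy it out of a throwaway container; a sketch, where my-vue-build is a placeholder for your prepared image:
# build and start the nginx service defined above
docker-compose up --build nginx-cdn

# alternative: extract dist from a prepared image via a throwaway container
docker create --name dist-tmp my-vue-build
docker cp dist-tmp:/app/dist ./dist
docker rm dist-tmp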

Related

express is not loading static folder with docker

I'm running webpack on the client side and Express for the server with Docker. The server runs fine, but Express won't load the static files.
folder structure
client/
  docker/
    Dockerfile
  src/
    css/
    js/
  public/
server/
  docker/
    Dockerfile
  src/
    views/
Client Dockerfile
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY . .
EXPOSE 8080
CMD ["pnpm", "start"]
Server Dockerfile
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
COPY . .
EXPOSE 8081
CMD ["pnpm", "start"]
docker-compose.yml
version: '3.8'
services:
  api:
    image: server
    ports:
      - "8081:8081"
    volumes:
      - ./server/:/usr/src/app
      - /usr/src/app/node_modules
  client:
    image: client
    stdin_open: true
    ports:
      - "8080:8080"
    volumes:
      - ./client/:/usr/src/app
      - /usr/src/app/node_modules
express
import path from 'path'
import { fileURLToPath } from 'url'
import express from 'express'
const __dirname = path.dirname(fileURLToPath(import.meta.url))
const app = express()
const port = 8081
// view engine
app.set("views", path.join(__dirname, 'views'));
app.set("view engine", "pug");
app.locals.basedir = app.get('views')
// Middlewares
app.use(express.static(path.resolve(__dirname, '../../client/public/')))
app.get('/', (req, res) => {
  res.render('pages/home')
})
app.listen(port)
The closest thing that comes to my mind is that the public folder is not being copied by Docker, since this folder is only generated once I run the webpack server. What might be causing this issue?
The issue is going to be that you are not adding the folder client/public to the server Docker container.
Because of your folder structure, you could add the following line to server/docker/Dockerfile. Note that COPY cannot reference paths outside the build context, so the image has to be built with the project root as the context (a compose sketch for that follows the next snippet); the path below is relative to that root:
COPY client/public ./client/public
then you would need to update your path statement in express.js (adding the fs import it relies on):
import fs from 'fs';

let p = path.resolve(__dirname, '../../client/public/');
if (!fs.existsSync(p)) {
  p = path.resolve(__dirname, './client/public/');
}
app.use(express.static(p));
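As mentioned above, here is a sketch of a docker-compose build stanza that makes the project root the build context so the COPY can see client/public (service name and port taken from the compose file in the question):
api:
  build:
    context: .                          # project root as build context
    dockerfile: server/docker/Dockerfile
  ports:
    - "8081:8081"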
The other option you have is to copy the whole project into both images and set the working directory, although this method is not preferred. For example, your server Dockerfile would become
FROM node:19-bullseye
WORKDIR /usr/src/app
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
COPY package*.json ./
RUN pnpm install
# copy the whole project (again, build with the project root as the context)
COPY . ./
WORKDIR /usr/src/app/server/src
EXPOSE 8081
CMD ["pnpm", "start"]
You can also inspect the resulting file/folder structure inside a running container by using docker exec.
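For example (the container name is a placeholder):
# list what actually ended up inside the server container
docker exec -it <server-container> ls -R /usr/src/app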

Upload file to dockerized app on the fly, then serve

I've got a Python web app that runs inside two Docker containers, one for the FastAPI backend, the other for the Vue.js front-end (plus a third one for the Postgres DB). Now my task is to upload a file from the front-end client to the server, store it permanently, and serve it, so that I can use static URLs in my img tags.
A possible similar question is here: How to upload file outside Docker container in Flask app. But it concerns an approach where the server must be restarted upon upload to reflect the changes. I need to do everything on the fly, while the app is running.
Dockerfile for front-end:
FROM node:16.14.2 as builder
WORKDIR /admin
COPY package*.json ./
COPY vite.config.js ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.21
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /admin/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Dockerfile for backend:
FROM tiangolo/uvicorn-gunicorn:python3.9
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
WORKDIR /app
COPY . /app
I can certainly transfer a file (as a raw byte stream) from the front-end to the backend via Axios and receive it on the backend Python side. I can then process it however I like. But how can I store it in a location where the front-end container can read it and serve it statically?
UPDATE (Use Docker Volumes)
As suggested in this question, I've tried to use a shared volume for my purpose. My compose file now looks like this:
version: "3.8"
services:
use_frontend:
container_name: 'use_frontend'
# --> ADDED <--
volumes:
- 'myshare:/etc/nginx'
- 'myshare:/usr/share/nginx/html'
build:
context: ./admin
dockerfile: Dockerfile
restart: always
depends_on:
- use_backend
ports:
- 8090:80
use_db:
container_name: use_db
image: postgres:14.2
# etc etc...
use_backend:
container_name: 'use_backend'
volumes:
# --> ADDED <--
- 'myshare:/usr/share/nginx/html'
build:
context: ./api
dockerfile: Dockerfile
restart: always
depends_on:
- use_db
# etc etc...
# --> ADDED <--
volumes:
myshare:
driver: local
The Dockerfile for use_frontend hasn't changed (must it?)
FROM node:16.14.2 as builder
WORKDIR /admin
# copy out files for npm
COPY package*.json ./
COPY vite.config.js ./
# install and build Vue.js
RUN npm install
COPY . .
RUN npm run build
# Nginx image
FROM nginx:1.21
# copy Nginx conf file to VOLUME mounted folder
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
# clean everything in VOLUME mounted folder (app entry point)
RUN rm -rf /usr/share/nginx/html/*
# copy compiled app to VOLUME mounted folder (app entry point)
COPY --from=builder /admin/dist /usr/share/nginx/html
# expose port 80 for HTTP access
EXPOSE 80
# run Nginx
CMD ["nginx", "-g", "daemon off;"]
The Nginx conf file hasn't changed either:
events {}

http {
  server {
    listen 80;

    # app entry point
    root /usr/share/nginx/html;

    # MIME types
    include /etc/nginx/mime.types;

    client_max_body_size 20M;

    location / {
      try_files $uri /index.html;
    }

    # etc etc...
  }
}
But after doing
docker compose build
docker compose up
I'm getting file not found errors from Nginx:
use_frontend | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
use_frontend | /docker-entrypoint.sh: Configuration complete; ready for start up
use_frontend | 2022/07/24 23:21:32 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
use_frontend | nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
What exactly am I doing wrong with the Docker volume mounting?
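For what it's worth, the error itself points at the volume layout: a named volume mounted at /etc/nginx masks the nginx.conf that the image COPYed there at build time, and mounting the same myshare volume at several paths exposes one shared directory at all of them, so /etc/nginx ends up holding whatever the volume was first populated with. A sketch of an alternative that shares only an uploads folder between the two services (the uploads paths are assumptions, not from the original setup):
use_frontend:
  volumes:
    - 'myshare:/usr/share/nginx/html/uploads'
  # ...rest unchanged
use_backend:
  volumes:
    - 'myshare:/app/uploads'
  # ...rest unchanged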

Nuxt Docker: Exit code 0

I am building a Dockerfile for my Nuxt app. Whenever the container starts, it immediately exits with code 0.
Here is my Dockerfile:
# Builder image
FROM node:16-alpine as builder
# Set up the working directory
WORKDIR /app
# Copy all files (Nuxt app) into the container
COPY ../frontend .
# Install dependencies
RUN npm install
# Build the app
RUN npm run build
# Serving image
FROM node:16-alpine
# Set up the working directory
WORKDIR /app
# Copy the built app
COPY --from=builder /app ./
# Specify the host variable
ENV HOST 0.0.0.0
# Expose the Nuxt port
ENV NUXT_PORT=3000
EXPOSE 3000
CMD ["npm", "run", "start"]
my docker-compose.yml file has:
frontend:
  container_name: frontend
  build:
    context: .
    dockerfile: ./docker/nuxt/Dockerfile
  ports:
    - "3000:3000"
  networks:
    - app-network
When I check the container's logs, they only show this, which doesn't help me:
> frontend#1.0.0 start
> nuxt start
OK, I needed to add a .dockerignore file:
frontend/.nuxt/
frontend/dist/
frontend/node_modules/
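Presumably this helps because those directories no longer enter the build context, so the COPY step can't overwrite the image's own npm install and npm run build output with stale, host-built artifacts.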

Using docker to deploy Vue frontend and .net backend, frontend not resolving backend container name

I am trying to deploy a Vue app on the frontend and a .NET Core API on the backend. Using a docker-compose file, I have been able to spin up the network and containers, but I am struggling to get them to communicate. I am pretty new to Docker, but I do understand that the Vue app's Dockerfile uses an Nginx base image to serve the web app, and that seems to be affecting network communication, as the frontend does not resolve the backend container name, even though they are in the same network with the default bridge driver. When executing a bash shell in the frontend container and doing a curl to the backend container, I do get the correct response.
I did try exposing the backend container to the host and then using the server IP to reach it from the frontend, and that does work. However, I was hoping not to have to expose my API this way, and wanted to make it work through the Docker network if possible.
Example URLs I am trying in the frontend, which run into a name-not-resolved error: littlepiggy-api/api or littlepiggy-api:5000/api.
Frontend Dockerfile
FROM node:14.18-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Backend Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 5000
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["LittlePiggy.Api/LittlePiggy.Api.csproj", "LittlePiggy.Api/"]
RUN dotnet restore "LittlePiggy.Api/LittlePiggy.Api.csproj"
COPY . .
WORKDIR "/src/LittlePiggy.Api"
RUN dotnet build "LittlePiggy.Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "LittlePiggy.Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "LittlePiggy.Api.dll"]
Docker-compose file
version: '3'
services:
  front:
    container_name: littlepiggy-front
    image: josh898/angie-app-front:latest
    ports:
      - 80:80
    networks:
      - littlepiggy
    depends_on:
      - api
  api:
    container_name: littlepiggy-api
    image: josh898/angie-app-backend:latest
    networks:
      - littlepiggy
networks:
  littlepiggy:
    driver: bridge
You need to configure Nginx to pass requests that match the /api route on to the backend service. The container name resolves fine from inside the Docker network, but your Vue code executes in the user's browser, which knows nothing about Docker's DNS, so the cross-container call has to happen server-side through a proxy.
If you create an Nginx configuration file like the one below, called nginx.conf, and place it in your frontend directory,
server {
  listen 80;

  location / {
    index index.html;
    root /usr/share/nginx/html;
    try_files $uri $uri/ $uri.html =404;
  }

  location /api/ {
    proxy_pass http://littlepiggy-api/;
  }
}
Then copy it into your frontend container by changing your frontend Dockerfile to
FROM node:14.18-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Then requests to http://localhost/api/xxxx should be passed on to the backend and requests to http://localhost/index.html should be served by the Nginx container directly.
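One detail worth noting: the trailing slash in proxy_pass http://littlepiggy-api/; makes Nginx replace the matched /api/ prefix, so /api/piggies is forwarded to the backend as /piggies (drop the slash if the backend expects the /api prefix). And if the API doesn't listen on port 80 inside the container, add the port, e.g. http://littlepiggy-api:5000/.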

How to avoid node_modules folder being deleted

I'm trying to create a Docker container to act as a test environment for my application. I am using the following Dockerfile:
FROM node:14.4.0-alpine
WORKDIR /test
COPY package*.json ./
RUN npm install .
CMD [ "npm", "test" ]
As you can see, it's pretty simple. I only want to install all dependencies but NOT copy the code, because I will run that container with the following command:
docker run -v `pwd`:/test -t <image-name>
But the problem is that the node_modules directory is gone when I mount the volume with -v. Is there any workaround to fix this?
When you bind mount the test directory with $PWD, the container's test directory is overridden by the contents of $PWD, so your node_modules are no longer visible in it.
To fix this issue you have two options.
You can run npm install in a separate directory like /node, mount your code into the test directory, and point Node at the modules via the NODE_PATH environment variable, e.g. NODE_PATH=/node/node_modules.
The Dockerfile would then look like this:
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
# resolve modules from /node/node_modules even though the code is mounted at /test
ENV NODE_PATH=/node/node_modules
WORKDIR /test
CMD [ "npm", "test" ]
Or you can write an entrypoint.sh script that copies the node_modules folder into the test directory at container runtime.
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
WORKDIR /test
COPY Entrypoint.sh ./
# make sure the script is executable and referenced by path
RUN chmod +x Entrypoint.sh
ENTRYPOINT ["./Entrypoint.sh"]
and Entrypoint.sh is something like (note /bin/sh, since the alpine base image has no bash):
#!/bin/sh
cp -r /node/node_modules /test/.
npm test
Approach 1
A workaround is to run the install at container start:
CMD npm install && npm run dev
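Note that this reinstalls dependencies into the bind-mounted directory on every container start, which is slow but keeps node_modules consistent with the mounted code.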
Approach 2
Have Docker install node_modules during docker-compose build and run the app with docker-compose up.
docker-compose.yml
version: '3.5'
services:
  api:
    container_name: $CONTAINER_FOLDER
    build: ./$LOCAL_FOLDER
    hostname: api
    volumes:
      # map local to remote folder, exclude node_modules
      - ./$LOCAL_FOLDER:/$CONTAINER_FOLDER
      - /$CONTAINER_FOLDER/node_modules
    expose:
      - 88
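The bare - /$CONTAINER_FOLDER/node_modules entry declares an anonymous volume at that path, so the bind mount on the line above it doesn't shadow the node_modules that npm install created inside the image.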
Dockerfile
FROM node:14.4.0-alpine
WORKDIR /test
COPY ./package.json .
RUN npm install
# run command
CMD npm run dev
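Then, assuming the compose file above, the workflow is roughly:
docker-compose build
docker-compose up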
