Dockerfile contains:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY public ./public
COPY src/ ./src
RUN npm run-script build
FROM caddy:2.2.0-alpine
WORKDIR /app
COPY --from=builder /app/build build
COPY Caddyfile .
EXPOSE 3000
CMD ["/docker-entrypoint.sh"]
docker-compose.yml says:
version: '3.7'
services:
  frontend:
    container_name: frontend
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
And after starting the container, I see some additional port bindings that seem to come out of nowhere, or at least not from anything I defined in the configuration files:
CONTAINER ID   IMAGE               COMMAND                  CREATED         STATUS         PORTS                                                                   NAMES
3eae13fc3a1b   frontend_frontend   "/docker-entrypoint.…"   5 minutes ago   Up 5 minutes   80/tcp, 443/tcp, 2019/tcp, 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp   frontend
Where might these 80, 443 and 2019 bindings be coming from?
Those ports are EXPOSEd in the caddy base image's Dockerfile. EXPOSE instructions are inherited from the base image, which is why they show up in docker ps even though your own Dockerfile never declares them.
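You can confirm this by inspecting the base image's metadata (the exact port list may differ between caddy versions, so treat this as a quick check):

docker image inspect caddy:2.2.0-alpine --format '{{json .Config.ExposedPorts}}'
# typically prints something like {"2019/tcp":{},"443/tcp":{},"80/tcp":{}}

Keep in mind that EXPOSE only advertises a port in docker ps; nothing is actually published on the host unless it is mapped under ports: in docker-compose.yml.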
I am building a Dockerfile for my Nuxt app. Whenever the container starts, it immediately exits with code 0.
Here is my Dockerfile:
# Builder image
FROM node:16-alpine as builder
# Set up the working directory
WORKDIR /app
# Copy all files (Nuxt app) into the container
COPY ../frontend .
# Install dependencies
RUN npm install
# Build the app
RUN npm run build
# Serving image
FROM node:16-alpine
# Set up the working directory
WORKDIR /app
# Copy the built app
COPY --from=builder /app ./
# Specify the host variable
ENV HOST 0.0.0.0
# Expose the Nuxt port
ENV NUXT_PORT=3000
EXPOSE 3000
CMD ["npm", "run", "start"]
my docker-compose.yml file has:
frontend:
  container_name: frontend
  build:
    context: .
    dockerfile: ./docker/nuxt/Dockerfile
  ports:
    - "3000:3000"
  networks:
    - app-network
When I check the container's logs, they only show this, which doesn't help me:
> frontend#1.0.0 start
> nuxt start
OK, I needed to add a .dockerignore file:
frontend/.nuxt/
frontend/dist/
frontend/node_modules/
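Note that .dockerignore is read from the root of the build context (the directory given as context: in docker-compose.yml), not from the directory that contains the Dockerfile, which is presumably why the entries above are prefixed with frontend/. After adding it, a rebuild without the cache makes sure the previously copied host artifacts are not reused (standard docker-compose flags, using the frontend service name from the compose file above):

docker-compose build --no-cache frontend
docker-compose up frontend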
I'm trying to build a Docker setup that consists of two containers: one for nginx, and another for Storybook (UI docs).
My prod.yml file:
version: '3.7'
services:
  storybook_container:
    image: app_prod_storybook:latest
    build:
      context: ../
      target: builder
      dockerfile: Dockerfile
    container_name: app_prod_storybook
    ports:
      - "8080:8080"
  storybook_nginx:
    image: app_prod_storybook_nginx:latest
    build:
      context: ../
      target: production-build
      dockerfile: Dockerfile
    container_name: app_prod_storybook_nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - storybook_container
And my Dockerfile:
FROM node:lts-alpine as builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "build-storybook"]
FROM nginx:stable-alpine as production-build
COPY nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /usr/src/app/storybook-static /usr/share/nginx/html/docs/storybook-static
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
This build only works if I first execute two commands locally:
npm i
npm run storybook build
Otherwise, the /usr/src/app/storybook-static directory does not exist. The builder container does run and then shuts down, and before it shuts down I can see the storybook-static directory with all the necessary files inside it.
What am I doing wrong?
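One likely explanation (a sketch, not a guaranteed fix): a CMD in the builder stage only runs when a container is started from that stage, never during docker build, so the image produced for target: builder does not contain storybook-static at build time, and COPY --from=builder cannot find it. If the static build is done with RUN instead, the files end up in an image layer that the production stage can copy from:

FROM node:lts-alpine as builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# RUN executes during "docker build", so storybook-static exists in the image itself
RUN npm run build-storybook

With that change, the COPY --from=builder ... storybook-static step in the production stage works without anything having to be built locally first.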
I am trying to use Docker to set up a simple virtual host serving static files.
However, after executing docker compose up, the localhost page always shows "Welcome to nginx!". I wonder which part I did wrong.
Here's my code:
(1) Dockerfile:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
RUN npm run build
FROM nginx
EXPOSE 80
WORKDIR /usr/share/ngnix/html
COPY --from=builder /app/build .
ENTRYPOINT [ "nginx", "-g", "daemon off;" ]
(2) docker-compose
version: '3'
services:
  deployment-production:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '80:80'
(3) folder structure:
Thanks for any help!!
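Two details in the Dockerfile above are worth double-checking (a hedged observation, not a confirmed diagnosis): the WORKDIR path spells ngnix instead of nginx, and the official nginx image serves files from /usr/share/nginx/html by default, so the build output has to land exactly there. A corrected sketch of the second stage:

FROM nginx
EXPOSE 80
# default document root of the official nginx image
COPY --from=builder /app/build /usr/share/nginx/html
ENTRYPOINT [ "nginx", "-g", "daemon off;" ]

With the typo, the files are copied into /usr/share/ngnix/html, a directory nginx never serves from, so the default "Welcome to nginx!" page keeps showing.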
This is my Dockerfile, located at vDocker/Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
RUN apk add --no-cache bash
COPY ./vDocker/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I also have a docker-compose.yml located at the root directory.
version: '3'
services:
  web_client:
    build:
      context: .
      dockerfile: ./vDocker/Dockerfile
    container_name: web_client
    restart: unless-stopped
    tty: true
    volumes:
      - /var/www/app/ssl/certbot/conf:/etc/letsencrypt
      - /var/www/app/ssl/certbot/www:/var/www/certbot
    ports:
      - 80:80
      - 443:443
After running docker-compose build, it gives me the following error: Service 'web_client' failed to build: COPY failed: stat /var/lib/docker/overlay2/67b326c995a1ce52fb3ee2a792d84ffe9bc403aa5962755a2b89f1ab925a1242/merged/app/dist: no such file or directory
Any idea why?
You don't need to name the second stage.
What your build looks like depends on how you set it up, and I don't know that. But what you can do is:
run the first stage as a separate Dockerfile (see the sketch below)
after the last RUN, add RUN ls -lart -> this should print the contents of the directory, so you can check whether /app/dist really exists
For the rest, your code looks good.
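A sketch of that debugging approach, using the stage name and paths from the files above (docker build --target builds only up to the named stage):

# build only the first stage, matching the compose file's context (.) and dockerfile path
docker build --target build-stage -f ./vDocker/Dockerfile -t web_client_builder .

Then, as a temporary line in the Dockerfile right after RUN npm run build:

RUN ls -lart /app

If /app/dist is missing in that listing, the problem is in the build itself (for example, the build writing its output to a different directory), not in the COPY --from=build-stage step.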
I have a docker-compose.yml file comprising two services (both built from a Dockerfile). I built the images once (docker-compose build), and they were up and running once I ran docker-compose up.
I then had to change the source code used by one of the services; however, when I rebuilt the images (docker-compose build), the code changes were not reflected when I ran the services again (docker-compose up).
docker-compose.yml
version: '2'
services:
  serviceOne:
    build:
      context: ./ServerOne
      args:
        PORT: 4000
    ports:
      - "4000:4000"
    env_file:
      - ./ServerOne/.env
    environment:
      - PORT=4000
  serviceTwo:
    build:
      context: ./serviceTwo
      args:
        PORT: 3000
    ports:
      - "3000:3000"
    env_file:
      - ./serviceTwo/.env
    environment:
      - PORT=3000
      - serviceOne_URL=http://serviceOne:4000/
    depends_on:
      - serviceOne
serviceOne/DockerFile
FROM node:8.10.0
RUN mkdir -p /app
WORKDIR /app
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app/
RUN npm build
EXPOSE ${ACC_PORT}
CMD [ "npm", "start" ]
serviceTwo/DockerFile
FROM node:8.10.0
RUN mkdir -p /app
WORKDIR /app
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app/
RUN npm build
EXPOSE ${ACC_PORT}
CMD [ "npm", "start" ]
Following is the output of docker-compose when it is run for the second time.
It is somehow using the cached layers again when the COPY and npm build commands are run.
How could the Dockerfile or docker-compose file be changed so that the new source code is deployed?
You can force the build to ignore the cache by adding the --no-cache option to docker-compose build.
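For example (standard docker-compose commands; the service names come from the compose file above):

# rebuild a single service from scratch, ignoring cached layers
docker-compose build --no-cache serviceTwo

# or rebuild everything and start the containers again
docker-compose build --no-cache
docker-compose up

docker-compose up --build is a convenient way to rebuild and start in one step, but it still uses the layer cache, so run a --no-cache build first if stale layers are the problem.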