Nuxt Docker: Exit code 0 - docker

I am building a Dockerfile for my Nuxt app. Whenever the container starts, it immediately exits with code 0.
Here is my Dockerfile:
# Builder image
FROM node:16-alpine as builder
# Set up the working directory
WORKDIR /app
# Copy all files (Nuxt app) into the container
COPY ../frontend .
# Install dependencies
RUN npm install
# Build the app
RUN npm run build
# Serving image
FROM node:16-alpine
# Set up the working directory
WORKDIR /app
# Copy the built app
COPY --from=builder /app ./
# Specify the host variable
ENV HOST 0.0.0.0
# Expose the Nuxt port
ENV NUXT_PORT=3000
EXPOSE 3000
CMD ["npm", "run", "start"]
my docker-compose.yml file has:
frontend:
container_name: frontend
build:
context: .
dockerfile: ./docker/nuxt/Dockerfile
ports:
- "3000:3000"
networks:
- app-network
When I look at the container's log, it only shows this, which doesn't help me:
> frontend#1.0.0 start
> nuxt start

OK, I needed to add a .dockerignore file so that the host's build artifacts and dependencies are excluded from the build context:
frontend/.nuxt/
frontend/dist/
frontend/node_modules/
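With those entries in place, a clean rebuild discards the image layers that had copied the host's .nuxt, dist, and node_modules directories in. A minimal sketch, assuming the service is named frontend as in the compose file above:

```shell
# Rebuild without the layer cache so the stale copied artifacts are dropped
docker-compose build --no-cache frontend
# Start the service in the foreground to watch its logs
docker-compose up frontend
```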

Related

Docker container works from Dockerfile but gets next: not found from docker-compose container

I am having an issue with my docker-compose configuration file. My goal is to run a Next.js app with a docker-compose file and enable hot reload.
Running the Next.js app from its Dockerfile works but hot reload does not work.
Running the Next.js app from the docker-compose file triggers an error: /bin/sh: next: not found, and I was not able to figure out what's wrong...
Dockerfile: (taken from Next.js' documentation website)
[Notice it's a multistage build however, I am only referencing the builder stage in the docker-compose file.]
# Install dependencies only when needed
FROM node:18-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install # --frozen-lockfile
# Rebuild the source code only when needed
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3001
ENV PORT 3001
CMD ["node", "server.js"]
docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD}
backend:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
DATABASE_USERNAME: ${MYAPP_DATABASE_USERNAME}
DATABASE_PASSWORD: ${POSTGRESQL_PASSWORD}
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: builder
command: yarn dev
volumes:
- ./frontend:/app
expose:
- "3001"
ports:
- "3001:3001"
depends_on:
- backend
environment:
FRONTEND_BUILD: ${FRONTEND_BUILD}
PORT: 3001
package.json:
{
"private": true,
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start"
},
"dependencies": {
"next": "latest",
"react": "^18.1.0",
"react-dom": "^18.1.0"
}
}
When calling yarn dev from docker-compose.yml it actually calls next dev and that's when it triggers the error /bin/sh: next: not found. However, running the container straight from the Dockerfile works and does not lead to this error.
[Update]:
If I remove the volume attribute from my docker-compose.yml file, I don't get the /bin/sh: next: not found error and the container runs; however, I then don't get the hot reload feature I am looking for. Any idea why the volume is interfering with the next command?
This is happening because your local filesystem is being mounted over what is in the docker container. Your docker container does build the node modules in the builder stage, but I'm guessing you don't have the node modules available in your local file system.
To see if this is what is happening, on your local file system, you can do a yarn install. Then try running your container via docker again. I'm predicting that this will work, as yarn will have installed next locally, and it is actually your local file system's node modules that will be run in the docker container.
One way to fix this is to volume mount everything except the node modules folder. Details on how to do that: Add a volume to Docker, but exclude a sub-folder
So in your case, I believe you can add a line to your compose file:
frontend:
...
volumes:
- ./frontend:/app
- /app/node_modules # <-- try adding this!
...
A single, absolute path in the short syntax declares an anonymous volume at that container path, so the node_modules built inside the image is not hidden by the bind mount of ./frontend.
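To confirm which node_modules the container actually ends up seeing, one option (a hypothetical check, assuming the service name frontend) is to list the binaries from inside the container:

```shell
# Does a `next` binary exist in the node_modules visible at /app?
docker-compose run --rm frontend sh -c 'ls node_modules/.bin | grep next'
```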

I read the old questions but can't fix it: docker Error: Unable to access jarfile

Hi, I am new to Docker and am trying to containerize a simple Spring Boot application. The Dockerfile is below.
Versions:
Windows 11
Docker Desktop: newest version
Dockerfile
FROM openjdk:8-jre-alpine
RUN mkdir app
WORKDIR /app
# Copy the jar to the production image from the builder stage.
COPY target/taco-cloud-*.jar app/taco-cloud.jar
# Run the web service on container startup.
EXPOSE 9090
CMD ["java", "-jar", "taco-cloud.jar"]
docker-compose
version: '2.4'
services:
mysql:
container_name: test-data
image: mysql:latest
networks:
- kell-network
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=taco_cloud
- MYSQL_USER=kell
- MYSQL_PASSWORD=dskell0502
volumes:
- mysql-data:/var/lib/mysql
- ./schema.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "3307:3306"
web:
container_name: test-web
image: test:ver1
ports:
- "9090:9090"
depends_on:
- mysql
networks:
- kell-network
volumes:
mysql-data:
networks:
kell-network:
driver: bridge
When I try to run docker-compose, I get "Error: Unable to access jarfile taco-cloud.jar":
test-web | Error: Unable to access jarfile taco-cloud.jar
I tried to edit the Dockerfile, but it still doesn't work:
FROM maven:latest
RUN mkdir /app
WORKDIR /app
COPY . .
EXPOSE 8080
CMD ["mvn", "spring-boot:run"]
and
# Use the official maven/Java 8 image to create a build artifact: https://hub.docker.com/_/maven
FROM maven:3.5-jdk-8-alpine as builder
# Copy local code to the container image.
RUN mkdir app
WORKDIR /app
COPY pom.xml .
COPY src ./src
# Build a release artifact.
RUN mvn package -DskipTests
# Use the Official OpenJDK image for a lean production stage of our multi-stage build.
# https://hub.docker.com/_/openjdk
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:8-jre-alpine
# Copy the jar to the production image from the builder stage.
COPY --from=builder target/taco-cloud-*.jar app/taco-cloud.jar
# Run the web service on container startup.
EXPOSE 9090
CMD ["java", "-jar", "taco-cloud.jar"]
WORKDIR /app
COPY target/taco-cloud-*.jar app/taco-cloud.jar
Actually, COPY does support * wildcards (Go filepath.Match patterns), so the glob is not the problem. The problem is the destination path: WORKDIR is /app, so COPY target/taco-cloud-*.jar app/taco-cloud.jar puts the jar at /app/app/taco-cloud.jar, while java -jar taco-cloud.jar runs in /app and looks for /app/taco-cloud.jar. Change the destination to ./taco-cloud.jar (or the absolute /app/taco-cloud.jar). In the multi-stage variant, it is also safest to use the builder's absolute path as the source: COPY --from=builder /app/target/taco-cloud-*.jar ./taco-cloud.jar.
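Putting that together, a corrected single-stage sketch (assuming the jar has already been built on the host with mvn package) might look like:

```dockerfile
FROM openjdk:8-jre-alpine
WORKDIR /app
# Destination is relative to WORKDIR, so the jar lands at /app/taco-cloud.jar
COPY target/taco-cloud-*.jar ./taco-cloud.jar
EXPOSE 9090
CMD ["java", "-jar", "taco-cloud.jar"]
```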

Docker: Shared volume when build

I have this files:
docker-compose.yml (shortened):
version: '3.7'
services:
php-fpm:
build:
context: .
dockerfile: docker/php/Dockerfile
target: dev
volumes:
- .:/app
frontend:
build:
context: .
dockerfile: docker/php/Dockerfile
target: frontend
volumes:
- .:/app
docker/php/Dockerfile (shortened):
FROM alpine:3.13 AS frontend
WORKDIR /app
COPY . .
RUN apk add npm
RUN npm install
RUN npx webpack -p --color --progress
FROM php:7.4-fpm AS dev
ENTRYPOINT ["docker-php-entrypoint"]
WORKDIR /app
COPY ./docker/php/www-dev.conf /usr/local/etc/php-fpm.d/www.conf
CMD ["php-fpm"]
I want to use everything that is built in frontend in the php-fpm container (as I understand it, volumes are not available at build time), but I get something like this: file_get_contents(/app/static/frontend.version): failed to open stream.
How can I do this? I don't understand Docker very well, and the only solution I have is to move the build script into the php-fpm container.
You need to delete the volumes: in your docker-compose.yml file. They replace the entire contents of the image's /app directory with content from the host, which means everything that gets done in the Dockerfile gets completely ignored.
The Dockerfile you show uses a setup called a multi-stage build. The important thing you can do with this is build the first part of your image using Node, then COPY --from=frontend the static files into the second part. You do not need to declare a second container in docker-compose.yml to run the first stage, the build sequence runs this automatically. This at a minimum looks like
COPY --from=frontend /app/build ./static
You will also need to COPY the rest of your application code into the image.
If you move the Dockerfile up to the top of your project's source tree, then the docker-compose.yml file becomes as simple as
version: '3.8'
services:
php-fpm:
build: . # default Dockerfile, default target (last stage)
# do not overwrite application code with volumes:
# no separate frontend: container
But you've put a little bit more logic in the Dockerfile. I might write:
# use a prebuilt Node image
FROM node:lts AS frontend
WORKDIR /app
# install dependencies first to save time on rebuilds
COPY package*.json ./
RUN npm install
# (or a more specific subdirectory?)
COPY . .
RUN npx webpack -p --color --progress

FROM php:7.4-fpm AS dev
WORKDIR /app
# (or a more specific subdirectory?)
COPY . .
COPY --from=frontend /app/build ./static
COPY ./docker/php/www-dev.conf /usr/local/etc/php-fpm.d/www.conf
# don't need to repeat unmodified ENTRYPOINT/CMD from the base image
(Note the comments are on their own lines: Dockerfile does not support trailing comments on instruction lines.)
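After rebuilding, you can sanity-check that the webpack output actually landed in the image (this assumes webpack emits to /app/build, as the COPY --from line assumes):

```shell
docker-compose build php-fpm
docker-compose run --rm php-fpm ls -la /app/static
```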

docker multistage build fails on vue.js

This is my Dockerfile, located at vDocker/Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
RUN apk add --no-cache bash
COPY ./vDocker/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I also have docker-compose located at root directory.
version: '3'
services:
web_client:
build:
context: .
dockerfile: ./vDocker/Dockerfile
container_name: web_client
restart: unless-stopped
tty: true
volumes:
- /var/www/app/ssl/certbot/conf:/etc/letsencrypt
- /var/www/app/ssl/certbot/www:/var/www/certbot
ports:
- 80:80
- 443:443
After running docker-compose build, it gives me the following error: Service 'web_client' failed to build: COPY failed: stat /var/lib/docker/overlay2/67b326c995a1ce52fb3ee2a792d84ffe9bc403aa5962755a2b89f1ab925a1242/merged/app/dist: no such file or directory
Any idea why?
You don't need to name the second stage.
What your build produces depends on how you set it up, and I don't know that. But what you can do is:
run the first stage as a separate Dockerfile
after the last RUN, add RUN ls -lart -> this should print the contents of the directory, and you can check whether /app/dist really exists
For the rest, your code looks good.
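Instead of splitting the Dockerfile, you can also build just the first stage with --target and inspect it directly (a debugging sketch; the debug tag name is arbitrary):

```shell
# Build only the build-stage target from the same Dockerfile and context
docker build -f vDocker/Dockerfile --target build-stage -t web_client:debug .
# Check whether /app/dist was actually produced
docker run --rm web_client:debug ls -la /app/dist
```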

Docker Compose build command using Cache and not picking up changed files while copying to docker

I have a docker-compose.yml file comprising two services (each built from a Dockerfile). I built the images once (using docker-compose build) and they were up and running once I ran docker-compose up.
I then had to change the source code used for one of the services; however, when I rebuilt the images (docker-compose build), the code changes were not reflected once I ran the services (docker-compose up).
docker-compose.yml
version: '2'
services:
serviceOne:
build:
context: ./ServerOne
args:
PORT: 4000
ports:
- "4000:4000"
env_file:
- ./ServerOne/.env
environment:
- PORT=4000
serviceTwo:
build:
context: ./serviceTwo
args:
PORT: 3000
ports:
- "3000:3000"
env_file:
- ./serviceTwo/.env
environment:
- PORT=3000
- serviceOne_URL=http://serviceOne:4000/
depends_on:
- serviceOne
serviceOne/Dockerfile
FROM node:8.10.0
RUN mkdir -p /app
WORKDIR /app
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app/
RUN npm build
EXPOSE ${ACC_PORT}
CMD [ "npm", "start" ]
serviceTwo/Dockerfile
FROM node:8.10.0
RUN mkdir -p /app
WORKDIR /app
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app/
RUN npm build
EXPOSE ${ACC_PORT}
CMD [ "npm", "start" ]
When docker-compose build is run a second time, it somehow uses the cached layers again when the COPY and npm build commands run, so the new source code never makes it into the images.
How could the Dockerfile or docker-compose file be changed so that the new source code is deployed?
You can force the build to ignore the cache by adding the --no-cache option to docker-compose build.
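For example (docker-compose up --build is a convenient shorthand for rebuilding with cache and then starting):

```shell
# Full rebuild, ignoring all cached layers, then start
docker-compose build --no-cache
docker-compose up
# Or, rebuild (with cache) and restart in one step:
docker-compose up --build
```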
