Why can't I build and then run my container locally - docker

I have a multi-stage Dockerfile:
# Base Build
FROM alpine:3.7 AS base
RUN apk add --no-cache nodejs
WORKDIR /root/app
COPY . .
ARG TARGET_ENV
COPY .env.$TARGET_ENV .env
RUN rm .env.*
RUN npm set progress=false && npm config set depth 0
RUN npm install --only=production
RUN cp -R node_modules prod_node_modules
RUN npm install
RUN npm run build
# Prod Build
FROM base AS release
COPY --from=base /root/app/prod_node_modules ./node_modules
COPY --from=base /root/app/package.json .
COPY --from=base /root/app/package-lock.json .
COPY --from=base /root/app/dist .
CMD npm start
EXPOSE 3000
I'd like to build my container and then run it locally.
It builds just fine, but when I run it only a hash is printed, and the container is not running.
docker build --build-arg TARGET_ENV=local -t express-app .
docker run -d -p 3000:3000 -it express-app

Your container is probably crashing on start. With -d, docker run detaches and prints only the container ID, which is the hash you are seeing.
Check the output of $ docker run -p 3000:3000 -it express-app (without -d) for error messages.
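If you prefer to keep -d, you can still see why the container died after the fact. A quick sketch with the standard docker CLI (image name from the question, container name made up):

```shell
docker run -d -p 3000:3000 --name express-test express-app
docker ps -a --filter name=express-test   # STATUS shows "Exited (N)" if it crashed
docker logs express-test                  # stdout/stderr from npm start, including the error
```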

Related

How to pass Prisma database url at build time with docker secrets?

I am trying to dockerize an Express app that uses Prisma and Supabase, but I am unable to get Prisma to read the Supabase URL at build time with Docker. Prisma expects the URL to be in an environment variable named DATABASE_URL. To avoid exposing the URL, I am passing it as a secret to Docker and trying to set it as an environment variable, but I cannot get it to work. Here are two different approaches I've tried with my Dockerfile:
Dockerfile
# syntax=docker/dockerfile:1.2
FROM node:18.9.0
WORKDIR /app
COPY package*.json .
COPY yarn.lock .
COPY prisma .
RUN --mount=type=secret,id=_env,dst=/etc/secrets/.env \
export $(egrep -v '^#' /etc/secrets/.env | xargs) \
&& yarn install
RUN --mount=type=secret,id=_env,dst=/etc/secrets/.env \
export $(egrep -v '^#' /etc/secrets/.env | xargs) \
&& yarn prisma generate
COPY . .
RUN yarn build
CMD ["yarn", "start:dev"]
Dockerfile2
# syntax=docker/dockerfile:1.2
FROM node:18.9.0
WORKDIR /app
COPY package*.json .
COPY yarn.lock .
COPY prisma .
RUN --mount=type=secret,id=dburl \
DATABASE_URL="$(cat /run/secrets/dburl)" \
&& yarn install
RUN --mount=type=secret,id=dburl \
DATABASE_URL="$(cat /run/secrets/dburl)" \
&& yarn prisma generate
COPY . .
RUN yarn build
CMD ["yarn", "start:dev"]
The build commands for each one are the following respectively:
docker build --progress=plain --no-cache --secret id=_env,src=.env .
docker build --progress=plain --no-cache --secret id=dburl,src=dburl.txt .
Whenever I try this, the first call to the Prisma client inside the app produces a segmentation fault and crashes the app, whereas it works fine outside of the Docker container.
Any help would be greatly appreciated.
Turns out either method actually works. The segmentation fault issue is another problem entirely:
https://github.com/prisma/prisma/issues/10649
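For reference, the export $(egrep -v '^#' …) line from the first Dockerfile can be exercised outside Docker. A minimal sketch of what it does (file path and values here are made up); note it breaks on values containing spaces or shell metacharacters:

```shell
#!/bin/sh
# Strip comment lines from a .env-style file, then export each KEY=value pair.
cat > /tmp/demo.env <<'EOF'
# comments are dropped
DATABASE_URL=postgres://user:pass@localhost:5432/app
NODE_ENV=production
EOF
export $(grep -v '^#' /tmp/demo.env | xargs)
echo "$DATABASE_URL"
echo "$NODE_ENV"
```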

Dockerfile for Meteor 2.2 project

I have been trying for almost three weeks to build and run a Meteor app (bundle) using Docker. I have tried all the major recommendations in the Meteor forums, Stack Overflow, and the official documentation with no success. First I tried to make the bundle and copy it into the Docker image, with awful results; then I realized what I need to do is make the bundle inside Docker using a multi-stage Dockerfile. Here is the one I am using right now:
FROM chneau/meteor:alpine as meteor
USER 0
RUN mkdir -p /build /app
RUN chown 1000 -R /build /app
WORKDIR /app
COPY --chown=1000:1000 ./app .
COPY --chown=1000:1000 ./app/packages.json ./packages/
RUN rm -rf node_modules/*
RUN rm -rf .meteor/local/*
USER 1000
RUN meteor update --packages-only
RUN meteor npm ci
RUN meteor build --architecture=os.linux.x86_64 --directory /build
FROM node:lts-alpine as mid
USER 0
RUN apk add --no-cache python make g++
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=meteor /build/bundle .
USER 1000
WORKDIR /app/programs/server
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm i
FROM node:lts-alpine
USER 0
ENV TZ=America/Santiago
RUN apk add -U --no-cache tzdata && cp /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=mid /app .
USER 1000
ENV LC_ALL=C.UTF-8
ENV ROOT_URL=http://localhost/
ENV MONGO_URL=mongodb://locahost:21027/meteor
ENV PORT=3000
EXPOSE 3000
ENTRYPOINT [ "/usr/local/bin/node", "/app/main.js" ]
If I build with docker build -t my-image:v1 . and then run my app with docker run -d --env-file .dockerenv --publish 0.0.0.0:3000:3000 --name my-bundle my-image:v1, it exposes port 3000, but when I navigate to http://127.0.0.1:3000 in my browser it redirects to https://localhost. If I do docker exec -u 0 -it my-bundle sh, then apk add curl, then curl 127.0.0.1:3000, I can see the Meteor app running inside the container.
Has anyone had this issue before? Maybe I am missing some configuration. The bundle also works fine outside Docker with node main.js in the bundle folder, and I can visit http://127.0.0.1:3000 with my browser.
Thanks in advance
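One way to narrow down a redirect like this is to check what the container actually has in its environment, since an https:// ROOT_URL (or the force-ssl package) can make Meteor redirect plain-HTTP requests. A sketch with the standard docker CLI, using the container name from the question; note that --env-file values override the ENV defaults baked into the image:

```shell
docker exec my-bundle sh -c 'env | grep -E "ROOT_URL|MONGO_URL|PORT"'
```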

docker: cannot start xvfb-run with docker run, but can start with docker exec -it

I'm trying to run a node app with xvfb-run, here is my Dockerfile
FROM node:lts-alpine
RUN apk --no-cache upgrade && apk add --no-cache chromium coreutils xvfb xvfb-run
ENV CHROME_BIN="/usr/bin/chromium-browser"\
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD="true" \
UPLOAD_ENV="test"
WORKDIR /app
COPY package.json .
COPY .npmrc .
RUN npm install
COPY . .
# EXPOSE 9999
ENTRYPOINT xvfb-run -a npm run dev
I can successfully build the image, but when I run it with docker run it gets stuck without any log output.
When I open an interactive shell and run the ENTRYPOINT command manually, it works.
How do I fix it?
You should add --init to docker run, for example:
docker run --init --rm -it $IMAGE$ xvfb-run $COMMAND$
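For context: --init makes Docker run its bundled minimal init (tini) as PID 1, which forwards signals and reaps the zombie processes that xvfb-run and Xvfb tend to leave behind, a common reason such commands appear to hang. If you would rather not pass the flag on every run, a sketch of baking the same init into the question's image (assumes tini from the Alpine repositories; replaces the final ENTRYPOINT line):

```dockerfile
RUN apk add --no-cache tini
ENTRYPOINT ["tini", "--", "xvfb-run", "-a", "npm", "run", "dev"]
```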

Build NextJS Docker image with nginx server

I am new to Docker and trying to learn it from its documentation. I need to create a Next.js build served from an nginx Docker image, so I followed the process below:
Install nginx.
Map port 80 to 3000 in the default config.
Symlink the out directory to the base nginx directory.
Use CMD to take care of the production build and the symlinking of the out directory.
FROM node:alpine AS deps
RUN apk add --no-cache libc6-compat git
RUN apt-get install nginx -y
WORKDIR /sample-app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:alpine AS builder
WORKDIR /sample-app
COPY . .
COPY --from=deps /sample-app/node_modules ./node_modules
RUN yarn build
FROM node:alpine AS runner
WORKDIR /sample-app
ENV NODE_ENV production
RUN ls -SF /sample-app/out /usr/share/nginx/html
RUN -p 3000:80 -v /sample-app/out:/usr/share/nginx/html:ro -d nginx
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /sample-app/out
USER nextjs
CMD ["nginx -g daemon=off"]
When I run the build with sudo docker build . -t sample-app, it throws the error The command '/bin/sh -c apt-get install nginx -y' returned a non-zero code: 127
I do not have much experience with Alpine images, but you have to use apk (Alpine Package Keeper) to install packages; exit code 127 means "command not found", and apt-get does not exist in Alpine-based images.
Try apk add nginx instead of apt-get install nginx -y.
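Applied to the deps stage from the question, that would look roughly like this (rest of the Dockerfile unchanged):

```dockerfile
FROM node:alpine AS deps
# apk is Alpine's package manager; apt-get is not available in this image.
RUN apk add --no-cache libc6-compat git nginx
WORKDIR /sample-app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
```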

Why do I get "curl: not found" inside my node:alpine Docker container?

My api-server Dockerfile is the following:
FROM node:alpine
WORKDIR /src
COPY . .
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN yarn install
CMD yarn start:dev
After docker-compose up -d
I tried
$ docker exec -it api-server sh
/src # curl 'http://localhost:3000/'
sh: curl: not found
Why is the command curl not found?
My host is Mac OS X.
The node:alpine image doesn't come with curl. You need to add the installation instruction to your Dockerfile:
RUN apk --no-cache add curl
A full example based on your Dockerfile would be:
FROM node:alpine
WORKDIR /src
COPY . .
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
