SvelteKit - npm run build - how to run the application for production? (Dockerized, if possible)

I built a Svelte app with SvelteKit. After running npm run build, I get a .svelte-kit folder containing several subfolders. There is an output folder with subfolders called client and server, which looks promising. I am also trying to Dockerize my app.
I am able to run npm run preview, but that does not seem suited for production. What is the way to actually run my Svelte application? I have seen other repos where the app is run with the following command: CMD ["node", "build/index.js"]
I currently have this:
FROM node:18-alpine AS builder
WORKDIR /app
COPY ./frontend/package*.json ./
RUN npm install
COPY ./frontend .
RUN npm run build
FROM node:18-alpine
RUN mkdir /app
COPY --from=builder /app/package.json .
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/.svelte-kit/output ./output
EXPOSE 5577
# See if increasing the libuv thread pool size makes performance better
# The default value is 4
# http://docs.libuv.org/en/v1.x/threadpool.html
ENV UV_THREADPOOL_SIZE=32
CMD ["node", "/output/server/index.js"]
The command itself works, but running the index.js file just runs through to completion without starting a server, and without any error.
How do I actually run my app after creating the production build?
Thanks in advance
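For reference, repos that run CMD ["node", "build/index.js"] are typically using @sveltejs/adapter-node, which makes npm run build emit a standalone Node server (by default under build/) instead of the raw .svelte-kit/output folders. A minimal sketch of the resulting Dockerfile, assuming that adapter is configured in svelte.config.js and that the port stays at 5577 via the PORT variable adapter-node reads:
# Build stage: compile the SvelteKit app with adapter-node configured
FROM node:18-alpine AS builder
WORKDIR /app
COPY ./frontend/package*.json ./
RUN npm ci
COPY ./frontend .
RUN npm run build
# Runtime stage: adapter-node output plus production dependencies
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/build ./build
COPY --from=builder /app/package.json ./
COPY --from=builder /app/node_modules ./node_modules
ENV PORT=5577
EXPOSE 5577
CMD ["node", "build/index.js"]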

Related

Playwright won't launch browser in Docker container

I am trying to deploy my Playwright automation framework in a Docker container. However, I suspect that the browser isn't launching (I don't have any logs to confirm it).
When I run my tests locally in VS Code, each line of the test output is prefixed with the browser name. When I run them in the Docker container, the [Google Chrome] or [chromium] prefix at the beginning of each line is missing, so I assume the browser is not getting launched.
My dockerfile looks like this:
# playwright:bionic has everything to run playwright (node, npm, chromium, dependencies)
#FROM mcr.microsoft.com/playwright:bionic
#COPY .. .
FROM node:14
FROM mcr.microsoft.com/playwright:focal
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json /app/
#COPY features/ /app/features/
COPY src/ /app/src/
#COPY cucumber.js /app/
#COPY tsconfig.json /app/
#COPY reports/ /app/reports/
COPY *.config.json /app/
RUN npm install
RUN npx playwright install
CMD npm run test
#ENTRYPOINT ["npm run test"]
Any ideas how to get the tests to run in a container?
If you are not using the mcr.microsoft.com/playwright:bionic image with all the dependencies, the browser binaries that RUN npx playwright install downloads to /root/.cache/ms-playwright/ must also end up in the final image; in a multi-stage build, copy them across stages after that step:
COPY --from=<install-stage> /root/.cache/ms-playwright/ /root/.cache/ms-playwright/
This problem was fixed by adding:
FROM mcr.microsoft.com/playwright:bionic
which added all the needed dependencies.
You can start from the provided image mcr.microsoft.com/playwright:v1.16.2-focal:
FROM mcr.microsoft.com/playwright:v1.16.2-focal
# copy project files
COPY . /e2e
WORKDIR /e2e
# Install dependencies
RUN npm install
RUN npx playwright install
# Run playwright test
CMD [ "npx", "playwright", "test", "--reporter=list" ]
The --reporter=list option prints a line for each test being executed.
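A hypothetical build-and-run invocation for this image (the tag is illustrative; Playwright's documentation recommends --ipc=host when running Chromium so it does not run out of shared memory):
docker build -t e2e-tests .
docker run --rm --ipc=host e2e-tests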

Dockerfile copy from build failing for create-react-app

I have a React app, based on create-react-app, that I'm trying to dockerize for production. To run the app locally, I go to the app's root folder and run npm start, which works. I built the app with npm run build. Then I try to create the Docker image with docker build . -t app-name. This fails because it can't find the folder I'm trying to copy the built app from (I think).
Here's what's in my Dockerfile:
FROM node:13.12.0-alpine as build
WORKDIR /src
ENV PATH /node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
COPY . ./
RUN npm run build
FROM nginx:alpine
COPY --from=build build /usr/share/nginx/html
EXPOSE 80
CMD ["npm", "start"]
I'm pretty sure I've got something wrong on the COPY --from line.
The app structure is like this, if it matters
-app-name (folder)
  -src (folder)
  -build (folder)
  -dockerfile
  -other stuff, but I think I listed what matters
The error I get is failed to compute cache key: "/build" not found: not found
I'm running my commands in windows powershell.
What do I need to change?
You were almost correct. The path where the build folder is generated is /src/build, not /build, hence the error you see. Where does the /src come from? From the WORKDIR /src instruction.
So this should work: COPY --from=build /src/build /usr/share/nginx/html
Besides, since you are using the nginx server to serve the built static files, you don't need to (and can't) run npm start with CMD. Just leave it out, and you can access the application at port 80, nginx's default.
So a possible working Dockerfile would be:
FROM node:13.12.0-alpine as build
WORKDIR /src
ENV PATH /node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install --silent
COPY . ./
RUN npm run build
FROM nginx:alpine
COPY --from=build /src/build /usr/share/nginx/html
EXPOSE 80
This is in keeping with the Dockerfile from the question above; in some specific cases, advanced configuration might be required.
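For completeness, a hypothetical build-and-run invocation for the fixed image (the tag mirrors the docker build command from the question; 80 is the port nginx listens on by default):
docker build . -t app-name
docker run --rm -p 80:80 app-name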

Yarn install errors with "ENOENT: no such file or directory"

I have a Dockerfile, and when I run it locally everything works fine. However, my build through GitHub Actions fails with the error:
error An unexpected error occurred: "ENOENT: no such file or directory, stat '/home/runner/work/akira/akira/README.md'".
I tried removing the yarn.lock, but without success. A full log of the failing build can be found here; my Dockerfile is below:
Dockerfile:
FROM node:14.0.0 AS base
WORKDIR /usr/src/app
FROM base as builder
COPY ./lerna.json .
COPY ./package.json .
COPY ./tsconfig.json .
COPY ./yarn.lock .
COPY ./packages/akira/prisma ./packages/akira/prisma
COPY ./packages/akira/src ./packages/akira/src
COPY ./packages/akira/types ./packages/akira/types
COPY ./packages/akira/package*.json ./packages/akira/
COPY ./packages/akira/tsconfig.json ./packages/akira
RUN yarn install --frozen-lockfile
RUN yarn build
FROM builder as migrate
RUN yarn workspace akira prisma migrate up --experimental
FROM base AS app
COPY --from=builder /usr/src/app/yarn.lock .
COPY --from=builder /usr/src/app/packages/akira/dist ./dist
COPY --from=builder /usr/src/app/packages/akira/prisma ./prisma
COPY --from=builder /usr/src/app/packages/akira/package.json .
RUN yarn install --production
USER node
ENV NODE_ENV=production
EXPOSE 4000
CMD ["node", "dist/index.js"]
If you look at your GitHub Actions workflow, or the log from the failing build that you linked, it is running the yarn commands outside of Docker.
It looks like yarn is struggling with the README symlink (not sure why), but since you seem to want to build with Docker, I would try the following:
Replace this part of the yaml
- name: Use Node.js
  uses: actions/setup-node@master
  with:
    node-version: 14.4.0
- name: Install dependencies
  run: yarn --frozen-lockfile
- name: Build packages
  run: yarn build
with something like
- name: Build docker image
  run: docker build .
Edit:
As pointed out in the comment below, the Dockerfile has a side effect of deploying database migrations.
If you don't want to run everything in the Dockerfile during the build pipeline, you can leverage multi-stage builds and stop at a specific stage. I.e., move the migrations into their own stage:
FROM node:14.0.0 AS base
WORKDIR /usr/src/app
FROM base as builder
COPY ./lerna.json .
<< lines omitted >>
RUN yarn install --frozen-lockfile
RUN yarn build
FROM builder AS migr
RUN yarn workspace akira prisma migrate up --experimental
FROM base AS app
COPY --from=builder /usr/src/app/yarn.lock .
<< lines omitted >>
Then you can stop after the builder stage with
docker build --target builder .
Edit 2:
Or you could keep the build pipeline and Dockerfile as they are, and instead fix the broken symlink, i.e. revert commit 0c87fa3

Flask and React App in single Docker Container

Good day SO,
I know this is bad practice and that I am supposed to have one app per container, but is there a way for me to have two services running concurrently in the same container, and how would I go about writing the Dockerfile for it?
My current Dockerfile for the Flask (Backend) App:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
My React (Frontend) Dockerfile:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
My attempt to launch both apps within the same Docker container was to merge the two Dockerfiles, but the resulting container does not have the data from the first Dockerfile, and I am unsure how to proceed.
My merged Dockerfile:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
I am a beginner with Docker, and hence I foresee several problems with this method, such as communication between the two apps (the backend uses port 5000). Any guidance will be greatly appreciated!
A React application doesn't usually have a server per se (development-only Docker setups aside). Instead, you run a tool like Webpack to compile it down to static files, which you can then serve to the browser, which then runs them.
On your host system you'd run something like
yarn build
which produces a dist directory; then you'd copy this into your Flask static directory.
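Spelled out as shell commands, that might look like the following (a sketch; the react_frontend and flask_backend paths come from this question, while the static/ destination is an assumption about where this Flask app serves static files from):
cd react_frontend
yarn build
cp -r dist/ ../flask_backend/static/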
If you do this entirely ahead-of-time, then you can run your application out of a Python virtual environment, which will be a much easier development and test setup, and the Dockerfile you show won't change.
If you want to build this entirely in Docker (for example to take advantage of a more Docker-native automated build system) a multi-stage build matches well here. You can use a first stage to build the front-end application, and then COPY that into the final application in the second stage. That looks roughly like:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
COPY ./react_frontend ./
RUN npm run build
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./flask_backend ./
COPY --from=build /app/react_frontend/dist/ ./static/
CMD python3 app/webapp/app.py
This approach is not compatible with setups that overwrite Docker image contents using bind mounts. A non-Docker host Node and Python setup will be a much easier development environment, and for this particular setup isn't likely to be substantially different from the Docker setup.
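A hypothetical build-and-run invocation for the combined image (the tag is illustrative; 5000 is Flask's default port, which the question says the backend uses):
docker build -t flask-react .
docker run --rm -p 5000:5000 flask-react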

Does automatically updating my package.json at commit time prevent docker build from reusing the cache?

I've just started to dig deep into Dockerfile syntax.
Here is the one I currently use:
FROM node:12-alpine as install
WORKDIR /Backend-graphql
COPY ./src ./src
COPY ./index.js ./index.js
COPY ./schema.graphql ./schema.graphql
COPY ./package.json ./
COPY ./package-lock.json ./package-lock.json
RUN npm install
FROM node:12-alpine as prismawork
WORKDIR /PrismaWork
COPY --from=install /Backend-graphql .
COPY ./datamodel.prisma ./datamodel.prisma
COPY ./prisma.yml ./prisma.yml
RUN npx prisma deploy
RUN npx prisma generate
FROM node:12-alpine
#curl needed for healthcheck
RUN apk --update --no-cache add curl
WORKDIR /app
COPY --from=prismawork /PrismaWork .
ENTRYPOINT ["npm", "start"]
EXPOSE 4000
From personal tests and documentation found online, I've respected the following advice:
Use multi-stage builds
But I noticed something: Docker does not reuse the cache after the first COPY layer that differs, in the current and all following build stages. I think this is a problem, because I use an automatic version-bump git hook, based on semantic-versioning commit messages, that modifies my package.json. So at each commit, docker build re-runs npm install and all subsequent layers.
First of all, have I understood Docker's cache layering system?
Secondly, should I use another file for the automatic version bump and COPY it at the very end of my Dockerfile?
First of all, have I understood Docker's cache layering system?
Yes, you have: if any step changes, like a change in package.json, Docker will rebuild that step and all the following ones.
There is no need to copy from the same image multiple times, though. In the Dockerfile below, npm install is also done after the unrelated COPY steps, so that those earlier steps stay cached.
FROM node:12-alpine
#curl needed for healthcheck
RUN apk --update --no-cache add curl
WORKDIR /app
COPY ./src ./src
COPY ./index.js ./index.js
COPY ./schema.graphql ./schema.graphql
COPY ./datamodel.prisma ./datamodel.prisma
COPY ./prisma.yml ./prisma.yml
COPY ./package.json ./
COPY ./package-lock.json ./package-lock.json
RUN npm install
RUN npx prisma deploy
RUN npx prisma generate
ENTRYPOINT ["npm", "start"]
EXPOSE 4000
Multi-stage builds are useful when you need to build across multiple images, like in this example:
FROM node:12-alpine
RUN npm install -g gzipper
WORKDIR /build
ADD . .
RUN npm install
ARG CONFIGURATION
RUN npm run build:${CONFIGURATION}
RUN gzipper --gzip-level=6 ./dist
FROM nginx:latest
WORKDIR /usr/share/nginx/html
COPY --from=0 /build/dist .
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
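For reference, the CONFIGURATION build argument in the example above is supplied at build time, e.g. (the production value is hypothetical and assumes a matching build:production npm script; the image tag is illustrative):
docker build --build-arg CONFIGURATION=production -t frontend .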
