Run npx webpack as part of Cloud Run deployment - docker

I have a web application that's a hybrid of JS/NPM/Webpack for the frontend and Python/Django for the backend. The backend code and the frontend source code are stored in the repository, but the "compiled" frontend code is not, since the expectation is that Webpack will build it after deployment.
Currently, I have the following package.json:
{
  "name": "Name",
  "description": "...",
  "scripts": {
    "start": "npx webpack --config webpack.config.js"
  },
  "engines": {
    "npm": ">=8.11.0",
    "node": ">=16.15.1"
  },
  "devDependencies": {
    [...]
  },
  "dependencies": {
    [...]
  }
}
The app is deployed to Google Cloud Run via the deploy command, specifically:
/gcloud/google-cloud-sdk/bin/gcloud run deploy [SERVICE-NAME] --source . --region us-west1 --allow-unauthenticated
However, the command npx webpack --config webpack.config.js is apparently never executed as the built files are not generated. Django returns the error:
Error reading /app/webpack-stats.json. Are you sure webpack has generated the file and the path is correct?
What's the most elegant/efficient way to execute the build command in production? Should I include it in the Dockerfile via RUN npx webpack --config webpack.config.js? I'm not even sure that would work.
Edit 1:
My current Dockerfile:
# Base image is one of Python's official distributions.
FROM python:3.8.13-slim-buster
# Declare generic app variables.
ENV APP_ENVIRONMENT=Dev
# Update and install libraries.
RUN apt update
RUN apt -y install \
    sudo \
    curl \
    install-info \
    git-all \
    gnupg \
    lsb-release
# Install nodejs.
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
RUN sudo apt install -y nodejs
RUN npx webpack --config webpack.config.js
# Copy local code to the container image. This is necessary
# for the installation on Cloud Run to work.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Handle requirements.txt first so that we don't need to re-install our
# python dependencies every time we rebuild the Dockerfile.
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
# Note that the $PORT variable is available by default on Cloud Run.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 --chdir project/ backbone.wsgi:application

According to your error message (Error reading /app/webpack-stats.json), there is a reference to the /app directory in webpack.config.js. That could be the problem, because at the point where npx webpack runs, that directory does not exist yet. Try running the npx webpack command after WORKDIR /app, once the source has been copied into it.
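For example, the build-related steps from the Dockerfile above could be reordered so that webpack runs after the source is in place (a sketch; it also adds an npm install step, which the original Dockerfile is missing, and assumes package.json and webpack.config.js sit at the repository root):

```dockerfile
ENV APP_HOME /app
WORKDIR $APP_HOME

# Copy the source, including package.json and webpack.config.js.
COPY . ./

# Install the JS dependencies, then build the frontend bundle so that
# /app/webpack-stats.json exists before Django starts.
RUN npm install
RUN npx webpack --config webpack.config.js
```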

Related

Dockerizing Nuxt 3 app for development purposes

I'm trying to dockerize a Nuxt 3 app, but I'm running into a strange issue.
This Dockerfile is working with this docker run command:
docker run -v /Users/my_name/developer/nuxt-app:/app -it -p 3000:3000 nuxt-app
# Dockerfile
FROM node:16-alpine3.14
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependency
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
# RUN npm run build
EXPOSE 3000
# ENV NUXT_HOST=0.0.0.0
# ENV NUXT_PORT=3000
CMD [ "npm", "run", "dev"]
I don't understand why it works despite me mounting the code at the /app folder in the container while the Dockerfile declares /usr/src/nuxt-app as the working directory.
When I try to make the two paths match, I get this error:
ERROR (node:18) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 3) 20:09:42
(Use `node --trace-warnings ...` to show where the warning was created)
✔ Nitro built in 571 ms nitro 20:09:43
ERROR [unhandledRejection]
You installed esbuild for another platform than the one you're currently using.
This won't work because esbuild is written with native code and needs to
install a platform-specific binary executable.
Specifically the "@esbuild/darwin-arm64" package is present but this platform
needs the "@esbuild/linux-arm64" package instead. People often get into this
situation by installing esbuild on Windows or macOS and copying "node_modules"
into a Docker image that runs Linux, or by copying "node_modules" between
Windows and WSL environments.
If you are installing with npm, you can try not copying the "node_modules"
directory when you copy the files over, and running "npm ci" or "npm install"
on the destination platform after the copy. Or you could consider using yarn
instead of npm which has built-in support for installing a package on multiple
platforms simultaneously.
If you are installing with yarn, you can try listing both this platform and the
other platform in your ".yarnrc.yml" file using the "supportedArchitectures"
feature: https://yarnpkg.com/configuration/yarnrc/#supportedArchitectures
Keep in mind that this means multiple copies of esbuild will be present.
Another alternative is to use the "esbuild-wasm" package instead, which works
the same way on all platforms. But it comes with a heavy performance cost and
can sometimes be 10x slower than the "esbuild" package, so you may also not
want to do that.
at generateBinPath (node_modules/vite/node_modules/esbuild/lib/main.js:1841:17)
at esbuildCommandAndArgs (node_modules/vite/node_modules/esbuild/lib/main.js:1922:33)
at ensureServiceIsRunning (node_modules/vite/node_modules/esbuild/lib/main.js:2087:25)
at build (node_modules/vite/node_modules/esbuild/lib/main.js:1978:26)
at runOptimizeDeps (node_modules/vite/dist/node/chunks/dep-3007b26d.js:42941:26)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
I have absolutely no clue what is going on here, other than an architecture mismatch (which doesn't seem to be the case with the working version; I'm on a MacBook Air M1).
The second issue is that mounting doesn't update the page.
Okay, I found the solution. The issue was with Vite: its HMR module uses port 24678, and because I didn't expose it, the page couldn't be reloaded. This is how it should look:
docker run --rm -it \
    -v /path/to/your/app/locally:/usr/src/app \
    -p 3000:3000 \
    -p 24678:24678 \
    nuxt-app
Dockerfile
FROM node:lts-slim
WORKDIR /usr/src/app
CMD ["sh", "-c", "npm install && npm run dev"]
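The esbuild error from the earlier attempt is the classic symptom of copying the host's node_modules into a Linux image. A minimal .dockerignore (a sketch; entries are illustrative) keeps such platform-specific artifacts out of the build context:

```
node_modules
.nuxt
dist
```

With node_modules excluded, npm install inside the container resolves the correct platform-specific esbuild binary.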

Permission error at WSL2 files mounted into Docker container

I want to make a development setup of a Blitz.js app with Docker (because it will be deployed and tested with it, too). I am developing on Windows, the code resides within WSL2.
After starting up, the container exits with:
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
Environment variables loaded from .env
Prisma schema loaded from db/schema.prisma
Prisma Studio is up on http://localhost:5555
info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5
[Error: EACCES: permission denied, unlink '/home/node/app/.next/server/blitz-db.js'] {
errno: -13,
code: 'EACCES',
syscall: 'unlink',
path: '/home/node/app/.next/server/blitz-db.js'
}
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
This is what my Dockerfile looks like:
# Create a standard base image that has all the defaults
FROM node:16-slim as base
ENV NODE_ENV=production
ENV PATH /home/node/app/node_modules/.bin:$PATH
ENV TINI_VERSION v0.19.0
WORKDIR /home/node/app
RUN apt-get update && apt-get install -y openssl --no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& chown -R node:node /home/node/app
# Blitz.js recommends using tini, see why: https://github.com/krallin/tini/issues/8
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
USER node
COPY --chown=node:node package*.json yarn.lock* ./
RUN yarn config list && yarn install --frozen-lockfile && yarn cache clean --force
# Create a development image
FROM base as dev
ENV NODE_ENV=development
USER node
COPY --chown=node:node . .
RUN yarn config list && yarn install && yarn cache clean --force
CMD ["bash", "-c", "yarn dev"]
Within WSL2, I run docker-compose up -d to make use of the following docker-compose.yml:
version: "3.8"
services:
  app:
    container_name: itb_app
    build: .
    image: itb_app:dev
    ports:
      - 3000:3000
    volumes:
      # Only needed during development: Container gets access to app files on local development machine.
      # Without access, changes made during development would only be reflected
      # every time the container's image is built (hence on every `docker-compose up`).
      - ./:/home/node/app/
The file in question (blitz-db.js) is generated by yarn dev (see Dockerfile). I checked its owner within WSL2: it seems to be root. But I wouldn't know how to change it under these circumstances, let alone to which user.
I wonder how I can mount the WSL2 directory into my container for Blitz.js to use it.
The issue is that the .next directory and its contents (the code of the compiled Blitz.js app) were created by the host system before the Docker container was introduced. So the host system's user owned the directory and its files, the container's user did not have write permission, and the container couldn't compile its own version of the app into the .next directory, raising the error above.
The solution is to delete the .next folder from the host system and restart the container, giving it the ability to compile the app itself.
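In practice (a sketch, assuming the compose setup above and Blitz.js's default .next output directory):

```shell
# Run from the project root on the WSL2 host.
# Remove the host-owned build output so the container's user can
# recreate it with its own ownership ('rm -rf' is a no-op if absent).
rm -rf .next
```

Then restart the container (e.g. docker-compose up -d) so it compiles the app into a fresh .next directory that it owns.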

Deploying flask app to Cloud Run with Pytorch

I am trying to deploy a Flask app to cloud run using the cloud shell editor. I am getting the following error:
Failed to build the app. Error: unable to stream build output: The command '/bin/sh -c pip3 install torch==1.8.0' returned a non-zero code: 1
This is the docker file I am using:
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.9-slim
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip3 install torch==1.8.0
RUN pip3 install sentence-transformers==2.0.0
RUN pip3 install ultimate-sitemap-parser==0.5
RUN pip3 install Flask-Cors==3.0.10
RUN pip3 install firebase-admin
RUN pip3 install waitress==2.0.0
RUN pip3 install Flask gunicorn
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
This is my first time deploying to cloud run and I am very inexperienced using Docker. Can you give me any suggestions of what I might be doing wrong?
I fixed this by changing:
FROM python:3.9-slim
To
FROM python:3.8
The issue is with your torch installation. Check that all of torch's requirements are covered in your Dockerfile, or go with a stable version of torch.
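If pinning a different base image doesn't help, another option (a sketch; it assumes a CPU-only deployment is acceptable) is to install the much smaller CPU-only wheel from PyTorch's stable wheel index, which avoids pulling the large default CUDA build during Cloud Build:

```dockerfile
# CPU-only torch wheel (much smaller than the default CUDA build).
RUN pip3 install torch==1.8.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
```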

sh: curl: not found even after installing curl inside k8s pod

It might be a simple question, but I could not find a proper solution.
I have a Docker image as below. What I would like to do is simply run the curl command inside a Kubernetes pod, but I received the error below. I was also not able to exec in via bash.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
mand in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
#Install curl
RUN apk --no-cache add curl -> did not work
RUN apk update && apk add curl curl-dev bash -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages; this is called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
In this case the second FROM node:12-alpine stage will contain only the basic Alpine packages, the Node tools, and the /usr/src/app directory you copied from the first stage.
If you want to have curl in your final image you need to install curl in second stage (after second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Do not install
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
#Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As mentioned in the comments, you can test this by running the Docker container directly; there is no need to run a pod in a k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use multi-stage build for applications implemented in compiled programming languages.
In the first stage you install all necessary dev tools and compilers and then compile sources into a binary file. Since you don't need and probably don't want sources and developer's tools in a production image you should create a new stage.
In the second stage you copy compiled binary file and run it as CMD or ENTRYPOINT. This way your image contains only executable code, which makes them smaller.
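As a sketch of that pattern (a hypothetical Go application; all names and versions are illustrative):

```dockerfile
# Stage 1: build the binary with the full toolchain.
FROM golang:1.20 AS builder
WORKDIR /src
COPY . .
# Static binary so it runs on the minimal base image below.
RUN CGO_ENABLED=0 go build -o /out/server .

# Stage 2: ship only the compiled binary.
FROM alpine:3.18
COPY --from=builder /out/server /usr/local/bin/server
ENTRYPOINT ["server"]
```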
Alternatively, curl can be installed at runtime inside the running pod using apk:
apk add curl

AWS lambda container not picking up system dependency

I am building a lambda to denoise audio files. Python soundfile uses the libsndfile system dependency, which I am installing via apt in my Dockerfile. The container runs fine locally, but when I run it after deploying to Lambda it says [ "errorMessage": "sndfile library not found",
"errorType": "OSError" ]. Here is my Dockerfile:
# Define global args
ARG FUNCTION_DIR="/home/app"
ARG RUNTIME_VERSION="3.7"
# Stage 1 - bundle base image + runtime
# Grab a fresh copy of the image and install GCC if not installed ( In case of debian its already installed )
FROM python:${RUNTIME_VERSION} AS python-3.7
# Stage 2 - build function and dependencies
FROM python-3.7 AS build-image
# Install aws-lambda-cpp build dependencies ( In case of debian they're already installed )
RUN apt-get update && apt-get install -y \
    g++ \
    make \
    cmake \
    unzip \
    libcurl4-openssl-dev
# Include global args in this stage of the build
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function
COPY app/requirements.txt ${FUNCTION_DIR}/app/requirements.txt
# Optional – Install the function's dependencies
RUN python${RUNTIME_VERSION} -m pip install -r ${FUNCTION_DIR}/app/requirements.txt --target ${FUNCTION_DIR}
# Install Lambda Runtime Interface Client for Python
# RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}
# Stage 3 - final runtime image
# Grab a fresh copy of the Python image
FROM python-3.7
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
# Install librosa system dependencies
RUN apt-get update -y && apt-get install -y \
    libsndfile1 \
    ffmpeg
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
# ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
# COPY entry.sh /
COPY app ${FUNCTION_DIR}/app
ENV NUMBA_CACHE_DIR=/tmp
RUN ln -s /usr/lib/x86_64-linux-gnu/libsndfile.so.1 /usr/local/bin/libsndfile.so.1
# enable below for local testing
# COPY events ${FUNCTION_DIR}/events
# COPY .env ${FUNCTION_DIR}
# RUN chmod 755 /usr/bin/aws-lambda-rie /entry.sh
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler.lambda_handler" ]
Below is my lambda config
{
  "FunctionName": "DenoiseAudio",
  "FunctionArn": "arn:aws:lambda:us-east-1:xxxx:function:DenoiseAudio",
  "Role": "arn:aws:iam::xxxx:role/lambda-s3-role",
  "CodeSize": 0,
  "Description": "",
  "Timeout": 120,
  "MemorySize": 128,
  "LastModified": "2021-01-25T13:41:00.000+0000",
  "CodeSha256": "84ae6e6e475cad50ae5176d6176de09e95a74d0e1cfab3df7cf66a41f65e4e19",
  "Version": "$LATEST",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "RevisionId": "43c6e7c4-27a8-4c6d-8c32-c1e074d40a62",
  "State": "Active",
  "LastUpdateStatus": "Successful",
  "PackageType": "Image",
  "ImageConfigResponse": {}
}
It's resolved now. It turns out that the error message was misleading: the problem was that the Lambda function was not assigned enough memory. As soon as I assigned enough memory, the problem went away.
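For reference, the memory setting can be raised from the CLI (a sketch; 1024 MB is an illustrative value, and the function name is taken from the config above):

```shell
aws lambda update-function-configuration \
    --function-name DenoiseAudio \
    --memory-size 1024
```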
It looks like you are not building from one of the AWS managed base images which include the runtime interface client. It also looks like you are not adding it manually as well. My recommendation is to start with the Python3.7 or Python3.8 base image (both found here). Then add the other libraries as needed. I would also encourage you to look at the AWS SAM implementation of container image support for Lambda functions. It will make your life easier :). You can find a video demonstration of it here: https://youtu.be/3tMc5r8gLQ8?t=1390.
If you still want to build an image from scratch, take a look at the documents that will guide you through the requirements, found here.
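If you do go with a managed base image, a minimal sketch might look like this (the public.ecr.aws/lambda/python:3.8 base already ships the runtime interface client; the yum package name for libsndfile is an assumption and may need adjusting):

```dockerfile
FROM public.ecr.aws/lambda/python:3.8

# System dependency for python soundfile (package name assumed; verify
# it exists in the Amazon Linux repositories for this base image).
RUN yum install -y libsndfile

# Install Python dependencies into the image.
COPY app/requirements.txt .
RUN pip install -r requirements.txt

COPY app ${LAMBDA_TASK_ROOT}/app
CMD [ "app.handler.lambda_handler" ]
```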
