I am trying to deploy a Flask app to Cloud Run using the Cloud Shell Editor. I am getting the following error:
Failed to build the app. Error: unable to stream build output: The command '/bin/sh -c pip3 install torch==1.8.0' returned a non-zero code: 1
This is the Dockerfile I am using:
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.9-slim
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip3 install torch==1.8.0
RUN pip3 install sentence-transformers==2.0.0
RUN pip3 install ultimate-sitemap-parser==0.5
RUN pip3 install Flask-Cors==3.0.10
RUN pip3 install firebase-admin
RUN pip3 install waitress==2.0.0
RUN pip3 install Flask gunicorn
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
This is my first time deploying to Cloud Run and I am very inexperienced with Docker. Can you give me any suggestions about what I might be doing wrong?
I fixed this by changing:
FROM python:3.9-slim
To
FROM python:3.8
The issue is with your torch installation. Check that all of torch's requirements are covered in your Dockerfile, or go with a stable version of torch.
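Pinning everything in a requirements.txt and installing it in a single layer also makes it easier to control exactly which torch build you get. A minimal sketch, reusing the versions from the Dockerfile above (the torch pin is whatever stable version you settle on):
# requirements.txt
torch==1.8.0
sentence-transformers==2.0.0
ultimate-sitemap-parser==0.5
Flask-Cors==3.0.10
firebase-admin
waitress==2.0.0
Flask
gunicorn
# In the Dockerfile, replace the individual pip3 install lines with:
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt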
I'm trying to dockerize a Nuxt 3 app, but I have a strange issue.
This Dockerfile is working with this docker run command:
docker run -v /Users/my_name/developer/nuxt-app:/app -it -p 3000:3000 nuxt-app
# Dockerfile
FROM node:16-alpine3.14
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependency
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
# RUN npm run build
EXPOSE 3000
# ENV NUXT_HOST=0.0.0.0
# ENV NUXT_PORT=3000
CMD [ "npm", "run", "dev"]
I don't understand why it works despite the code being mounted to the /app folder in the container while the Dockerfile declares /usr/src/nuxt-app as the working directory.
When I try to match them then I get this error:
ERROR (node:18) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 3) 20:09:42
(Use `node --trace-warnings ...` to show where the warning was created)
✔ Nitro built in 571 ms nitro 20:09:43
ERROR [unhandledRejection]
You installed esbuild for another platform than the one you're currently using.
This won't work because esbuild is written with native code and needs to
install a platform-specific binary executable.
Specifically the "#esbuild/darwin-arm64" package is present but this platform
needs the "#esbuild/linux-arm64" package instead. People often get into this
situation by installing esbuild on Windows or macOS and copying "node_modules"
into a Docker image that runs Linux, or by copying "node_modules" between
Windows and WSL environments.
If you are installing with npm, you can try not copying the "node_modules"
directory when you copy the files over, and running "npm ci" or "npm install"
on the destination platform after the copy. Or you could consider using yarn
instead of npm which has built-in support for installing a package on multiple
platforms simultaneously.
If you are installing with yarn, you can try listing both this platform and the
other platform in your ".yarnrc.yml" file using the "supportedArchitectures"
feature: https://yarnpkg.com/configuration/yarnrc/#supportedArchitectures
Keep in mind that this means multiple copies of esbuild will be present.
Another alternative is to use the "esbuild-wasm" package instead, which works
the same way on all platforms. But it comes with a heavy performance cost and
can sometimes be 10x slower than the "esbuild" package, so you may also not
want to do that.
at generateBinPath (node_modules/vite/node_modules/esbuild/lib/main.js:1841:17)
at esbuildCommandAndArgs (node_modules/vite/node_modules/esbuild/lib/main.js:1922:33)
at ensureServiceIsRunning (node_modules/vite/node_modules/esbuild/lib/main.js:2087:25)
at build (node_modules/vite/node_modules/esbuild/lib/main.js:1978:26)
at runOptimizeDeps (node_modules/vite/dist/node/chunks/dep-3007b26d.js:42941:26)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
I have absolutely no clue what is going on here other than an architecture mismatch (which doesn't seem to be the case with the working version; I'm on a MacBook Air M1).
The second issue is that mounting doesn't update the page.
Okay, I found the way. The issue was with Vite: its HMR module uses port 24678, and I hadn't published that port, so it couldn't reload the page. This is how it should look:
docker run --rm -it \
-v /path/to/your/app/locally:/usr/src/app \
-p 3000:3000 \
-p 24678:24678 \
nuxt-app
Dockerfile
FROM node:lts-slim
WORKDIR /usr/src/app
CMD ["sh", "-c", "npm install && npm run dev"]
I have a web application that's a hybrid of JS/NPM/Webpack for the frontend and Python/Django for the backend. The backend code and the source code for the frontend are stored in the code repository, but the "compiled" frontend code is not, as the expectation is that Webpack will build it after deployment.
Currently, I have the following package.json:
{
"name": "Name",
"description": "...",
"scripts": {
"start": "npx webpack --config webpack.config.js"
},
"engines": {
"npm": ">=8.11.0",
"node": ">=16.15.1"
},
"devDependencies": {
[...]
},
"dependencies": {
[...]
}
}
The app is deployed to Google Cloud Run via the deploy command, specifically:
/gcloud/google-cloud-sdk/bin/gcloud run deploy [SERVICE-NAME] --source . --region us-west1 --allow-unauthenticated
However, the command npx webpack --config webpack.config.js apparently never executes, as the built files are not generated. Django returns the error:
Error reading /app/webpack-stats.json. Are you sure webpack has generated the file and the path is correct?
What's the most elegant/efficient way to execute the build command in production? Should I include it in the Dockerfile via RUN npx webpack --config webpack.config.js? I'm not even sure this would work.
Edit 1:
My current Dockerfile:
# Base image is one of Python's official distributions.
FROM python:3.8.13-slim-buster
# Declare generic app variables.
ENV APP_ENVIRONMENT=Dev
# Update and install libraries.
RUN apt update
RUN apt -y install \
sudo \
curl \
install-info \
git-all \
gnupg \
lsb-release
# Install nodejs.
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
RUN sudo apt install -y nodejs
RUN npx webpack --config webpack.config.js
# Copy local code to the container image. This is necessary
# for the installation on Cloud Run to work.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Handle requirements.txt first so that we don't need to re-install our
# python dependencies every time we rebuild the Dockerfile.
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
# Note that the $PORT variable is available by default on Cloud Run.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 --chdir project/ backbone.wsgi:application
According to your error message (Error reading /app/webpack-stats.json), webpack.config.js refers to the /app directory. That could be the problem: at the point where you run npx webpack, that directory does not exist yet and the source has not been copied in. Try running the npx webpack command after the WORKDIR /app and COPY . ./ steps.
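A minimal sketch of how the relevant part of the Dockerfile could be reordered (running npm install before the build is an assumption; webpack needs node_modules in place, and webpack.config.js is only available after the COPY):
# Copy local code to the container image first.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Build the frontend bundle so /app/webpack-stats.json exists at runtime.
RUN npm install
RUN npx webpack --config webpack.config.js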
I have a simple FastAPI project called toyrest that runs a trivial API. The code looks like this.
from fastapi import FastAPI
__version__ = "1.0.0"
app = FastAPI()
#app.get("/")
def root():
return "hello"
I've built the usual Python package infrastructure around it. I can install the package. If I run uvicorn toyrest:app the server launches on port 8000 and everything works.
Now I'm trying to get this to run in a Docker image. I have the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3
# Create a user.
RUN useradd --user-group --system --create-home --no-log-init user
USER user
ENV PATH=/home/user/.local/bin:$PATH
# Install the API.
WORKDIR /home/user
COPY --chown=user:user . ./toyrest
RUN python -m pip install --upgrade pip && \
pip install -r toyrest/requirements.txt
RUN pip install toyrest/ && \
rm -rf /home/user/toyrest
CMD ["uvicorn", "toyrest:app"]
I build the Docker image and run it, forwarding port 8000 to the running container.
docker run -p 8000:8000 toyrest:1.0.0
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
When I try to connect to http://127.0.0.1:8000/ I get no response.
Presumably I am doing the port forwarding incorrectly. I've tried various permutations of the port forwarding argument (e.g. -p 8000, -p 127.0.0.1:8000:8000) to no avail.
This is such a basic Docker command that I can't see how I'm getting it wrong, but somehow I am. What am I doing wrong?
Try changing the CMD in your Dockerfile to this:
CMD ["uvicorn", "toyrest:app","--host", "0.0.0.0"]
I am trying to build a Flask docker image. I get the error:
zsh: command not found: flask
I followed this old tutorial to get things working: https://medium.com/@rokinmaharjan/running-a-flask-application-in-docker-80191791e143
In order to just learn how to start a Flask website with Docker, I have kept everything simple. My Docker image should just serve a Hello World front page.
My example.py:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello World'
if __name__ == '__main__':
app.run()
My Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-pip -y
RUN pip install flask
COPY example.py /home/example.py
ENTRYPOINT FLASK_APP=/home/example.py flask run --host=0.0.0.0
I run
sudo docker build . -t flask-app
to build the image.
When I run
docker run -p 8080:5000 flask-app
I get the error:
zsh: command not found: flask
What am I missing here?
Well, indeed you're following a really old tutorial.
I'm not going to go into detail about whether using Flask directly without a WSGI server is something you should do, so I'm just going to focus on your question.
Concise answer: the modules installed by pip are not in your PATH, so of course you cannot invoke them. Flask is one of those modules.
Extended answer: keep reading.
First of all, with that base image you're downloading an old version of both Python and pip; besides, you don't need a fully fledged operating system in order to run a Flask application. There are already Python base images, like python:3.9.10-slim-buster, with far fewer dependencies and potential vulnerabilities than an old Ubuntu 16.04 image.
FROM python:3.9.10-slim-buster
Second, you shouldn't rely on what the base image happens to provide; use a virtual environment (venv) for your application, where you can install Flask and any other dependency of the application, which should be listed in requirements.txt. You should also choose which working directory to place your code in (/usr/src/app is a common choice).
Indicating which port you are exposing is also a good thing to do (even though everyone knows that Flask uses port 5000 by default).
FROM python:3.9.10-slim-buster
WORKDIR /usr/src/app
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python3 -m pip install flask
COPY example.py .
ENTRYPOINT FLASK_APP=example flask run --host=0.0.0.0
EXPOSE 5000
and as a result:
❯ docker run -p 8080:5000 flask-app
* Serving Flask app 'example' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
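As noted above, application dependencies are better listed in a requirements.txt; a minimal sketch of how the Dockerfile could install from one instead of installing flask directly (the file name and its contents here are an assumption):
# requirements.txt
flask
# In the Dockerfile, replace the "RUN python3 -m pip install flask" line with:
COPY requirements.txt .
RUN python3 -m pip install -r requirements.txt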
This question was closed as a duplicate of: In Docker, apt-get install fails with "Failed to fetch http://archive.ubuntu.com/ ... 404 Not Found" errors. Why? How can we get past it?
I was really happy when I first discovered Docker, but I keep running into this issue: a build succeeds initially, yet when I try to re-build the container a few months later it fails, even though I made no changes to the Dockerfile and docker-compose.yml files.
I suspect that external dependencies (in this case, some mariadb package needed by cron) become inaccessible over time and break the Docker build. Has anyone else encountered this problem? Is this a common problem with Docker? How do I get around it?
Here's the error message.
E: Failed to fetch http://deb.debian.org/debian/pool/main/m/mariadb-10.3/mariadb-common_10.3.29-0+deb10u1_all.deb 404 Not Found [IP: 151.101.54.132 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Fetched 17.6 MB in 0s (48.6 MB/s)
ERROR: Service 'python_etls' failed to build: The command '/bin/sh -c apt-get install cron -y' returned a non-zero code: 100
This is my docker-compose.yml.
## this docker-compose file is for development/local environment
version: '3'
services:
python_etls:
container_name: python_etls
restart: always
build:
context: ./python_etls
dockerfile: Dockerfile
This is my Dockerfile.
#FROM python:3.8-slim-buster # debian gnutls28 fetch doesn't work in EC2 machine
FROM python:3.9-slim-buster
RUN apt-get update
## install
RUN apt-get install nano -y
## define working directory
ENV CONTAINER_HOME=/home/projects/python_etls
## create working directory
RUN mkdir -p $CONTAINER_HOME
## create directory to save logs
RUN mkdir -p $CONTAINER_HOME/logs
## set working directory
WORKDIR $CONTAINER_HOME
## copy source code into the container
COPY . $CONTAINER_HOME
## install python modules through pip
RUN pip install snowflake snowflake-sqlalchemy sqlalchemy pymongo dnspython numpy pandas python-dotenv xmltodict appstoreconnect boto3
# pip here is /usr/local/bin/pip, as seen when running 'which pip' in command line, and installs python packages for /usr/local/bin/python
# https://stackoverflow.com/questions/45513879/trouble-running-python-script-cron-import-error-no-module-named-tweepy
## changing timezone
ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
## scheduling with cron
## reference: https://stackoverflow.com/questions/37458287/how-to-run-a-cron-job-inside-a-docker-container
# install cron
RUN apt-get install cron -y
# Copy crontable file to the cron.d directory
COPY crontable /etc/cron.d/crontable
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/crontable
# Apply cron job
RUN crontab /etc/cron.d/crontable
# Create the log file to be able to run tail
#RUN touch /var/log/cron.log
# Run the command on container startup
#CMD cron && tail -f /var/log/cron.log
CMD ["cron", "-f"]
EDIT: Changing the Docker base image from python:3.9-slim-buster to python:3.10-slim-buster in the Dockerfile allowed a successful build. However, I'm still stumped as to why it broke in the first place and what I should do to mitigate this problem in the future.
You may have the misconception that Docker will generate the same image every time you build from the same Dockerfile. It doesn't, and that's not Docker's problem space.
When you build an image, you most likely reach out to external resources: packages, repositories, registries, etc. All of these could be reachable today and missing tomorrow. A security update may be released and some old packages deleted as a consequence, or a package manager may resolve to the newest version of a specific library, so the release of an update impacts the reproducibility of your image.
You can try to minimise this effect by pinning the versions of as many dependencies as you can. There are also tools, like Nix, that let you reach full reproducibility. If that's important to you, I'd point you in that direction.
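A minimal sketch of what that could look like in the Dockerfile above (the pip version numbers are illustrative placeholders, not recommendations; combining apt-get update and apt-get install in one RUN also keeps a cached update layer from pointing apt at package files that no longer exist, which is the usual cause of the 404 in the question):
# Update and install in the same layer so the package lists are always fresh.
RUN apt-get update && apt-get install -y \
    nano \
    cron \
 && rm -rf /var/lib/apt/lists/*
# Pin python modules so a rebuild months later resolves the same versions.
# (Versions below are placeholders; pin to the ones you actually test with.)
RUN pip install --no-cache-dir \
    sqlalchemy==1.4.46 \
    pymongo==4.3.3 \
    pandas==1.5.3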