Problem with my Docker image not running Flask

I am trying to build a Flask docker image. I get the error:
zsh: command not found: flask
I followed this old tutorial to get things working: https://medium.com/@rokinmaharjan/running-a-flask-application-in-docker-80191791e143
In order to learn how to start a Flask website with Docker, I have kept everything simple. My Docker image should just serve a Hello World front page.
My example.py:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World'

if __name__ == '__main__':
    app.run()
My Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-pip -y
RUN pip install flask
COPY example.py /home/example.py
ENTRYPOINT FLASK_APP=/home/example.py flask run --host=0.0.0.0
I run
sudo docker build . -t flask-app
to build the image.
When I run
docker run -p 8080:5000 flask-app
I get the error:
zsh: command not found: flask
What am I missing here?

Well, indeed you're following a really old tutorial.
I'm not going to go into whether you should run Flask directly without a WSGI server in front of it; I'm just going to focus on your question.
Concise answer: the commands that pip installs (flask among them) are not on your PATH, so the shell cannot invoke them.
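A quick way to check this against your image, reusing the flask-app tag from your build command (the --entrypoint override replaces your ENTRYPOINT just for this inspection):

docker run --rm --entrypoint sh flask-app -c 'command -v flask || echo "flask is not on PATH"'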
Extended answer: keep reading.
First of all, with that base image you're downloading old versions of both Python and pip; moreover, you don't need a fully fledged operating system to run a Flask application. There are Python base images like python:3.9.10-slim-buster with far fewer dependencies and potential vulnerabilities than an old Ubuntu 16.04 image.
FROM python:3.9.10-slim-buster
Second, you shouldn't rely on whatever the base image happens to provide: use a virtual environment (venv) for your application, and install Flask and every other dependency into it, listed in a requirements.txt. You should also choose the working directory where your code will live (/usr/src/app is a common choice).
Declaring which port the container listens on with EXPOSE is also good practice (even though everyone knows Flask defaults to port 5000).
FROM python:3.9.10-slim-buster
WORKDIR /usr/src/app

# Create a virtual environment and put it first on the PATH,
# so "python", "pip" and "flask" all resolve to the venv copies.
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN python3 -m pip install flask

COPY example.py .

EXPOSE 5000
ENTRYPOINT FLASK_APP=example flask run --host=0.0.0.0
and as a result:
❯ docker run -p 8080:5000 flask-app
* Serving Flask app 'example' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)

Related

Docker Port Forwarding for FastAPI REST API

I have a simple FastAPI project called toyrest that runs a trivial API. The code looks like this.
from fastapi import FastAPI

__version__ = "1.0.0"

app = FastAPI()

@app.get("/")
def root():
    return "hello"
I've built the usual Python package infrastructure around it. I can install the package. If I run uvicorn toyrest:app the server launches on port 8000 and everything works.
Now I'm trying to get this to run in a Docker image. I have the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3
# Create a user.
RUN useradd --user-group --system --create-home --no-log-init user
USER user
ENV PATH=/home/user/.local/bin:$PATH
# Install the API.
WORKDIR /home/user
COPY --chown=user:user . ./toyrest
RUN python -m pip install --upgrade pip && \
    pip install -r toyrest/requirements.txt
RUN pip install toyrest/ && \
    rm -rf /home/user/toyrest
CMD ["uvicorn", "toyrest:app"]
I build the Docker image and run it, forwarding port 8000 to the running container.
docker run -p 8000:8000 toyrest:1.0.0
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
When I try to connect to http://127.0.0.1:8000/ I get no response.
Presumably I am doing the port forwarding incorrectly. I've tried various permutations of the port forwarding argument (e.g. -p 8000, -p 127.0.0.1:8000:8000) to no avail.
This is such a basic Docker command that I can't see how I'm getting it wrong, but somehow I am. What am I doing wrong?
Uvicorn binds to 127.0.0.1 by default, and inside the container that loopback address is unreachable from the host, so the published port has nothing to forward to. Add --host 0.0.0.0 to the CMD in your Dockerfile:
CMD ["uvicorn", "toyrest:app", "--host", "0.0.0.0"]

Deploying flask app to Cloud Run with Pytorch

I am trying to deploy a Flask app to cloud run using the cloud shell editor. I am getting the following error:
Failed to build the app. Error: unable to stream build output: The command '/bin/sh -c pip3 install torch==1.8.0' returned a non-zero code: 1
This is the docker file I am using:
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.9-slim
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip3 install torch==1.8.0
RUN pip3 install sentence-transformers==2.0.0
RUN pip3 install ultimate-sitemap-parser==0.5
RUN pip3 install Flask-Cors==3.0.10
RUN pip3 install firebase-admin
RUN pip3 install waitress==2.0.0
RUN pip3 install Flask gunicorn
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
This is my first time deploying to cloud run and I am very inexperienced using Docker. Can you give me any suggestions of what I might be doing wrong?
I fixed this by changing:
FROM python:3.9-slim
to
FROM python:3.8
The issue is with your torch installation. Check that all of torch's requirements are satisfied in your Dockerfile, or go with a stable torch version that ships a prebuilt wheel for your base image's Python version, so pip doesn't fail trying to build it.
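If the service only needs CPU inference on Cloud Run, one option (a sketch; the +cpu build and index URL are the ones PyTorch published for releases of this era) is to install the CPU-only wheel, which avoids source builds and keeps the image much smaller:

RUN pip3 install torch==1.8.0+cpu -f https://download.pytorch.org/whl/torch_stable.html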

Expose Both Ports 8080 and 3000 For Cloud Run Deployment

TL;DR - I am trying to deploy my MERN stack application to GCP's Cloud Run, and I'm struggling with what I believe is a port issue.
My React application is in a client folder inside of my Node.js application.
Here is my one Dockerfile to run both the front-end and back-end:
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
# Installing components for be connector
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
... and here is my entrypoint.sh file:
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
docker-compose up works locally, and docker run -p 8080:8080 -p 3000:3000 <image_id> runs the image I built. Port 8080 is for Node and port 3000 for the React app. However, on Cloud Run, the app does not work. When I visit the app deployed to Cloud Run, the frontend initially loads for a split second, but then the app crashes as it attempts to make requests to the API.
In the Advanced Settings, there is a container port which defaults to 8080. I've tried changing this to 3000, but neither works. I cannot enter 8080,3000, as the field takes valid integers only for the port. Is it possible to deploy React + Node at the same time to Cloud Run like this? How can I have Cloud Run listen on both 8080 and 3000, as opposed to just 1 of the 2?
Thanks!
It's not currently possible: Cloud Run routes traffic to a single container port. You can still run multiple processes inside the container, but you need nginx (or another reverse proxy) in front of them to route requests by URL, similar to what's recommended in this answer.
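A minimal sketch of such an nginx config, assuming nginx itself listens on Cloud Run's single port 8080, the Node API is moved to an internal port such as 8081, and all API routes live under /api (each of those is an assumption about your app):

server {
    listen 8080;                            # the one port Cloud Run routes traffic to

    location /api/ {
        proxy_pass http://127.0.0.1:8081;   # Node backend, assumed moved off 8080
    }

    location / {
        proxy_pass http://127.0.0.1:3000;   # React frontend
    }
}

For production you'd more likely build the React app to static files and let nginx serve them directly rather than proxying to npm start.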

How to run both neo4j and flask webapplication from the same docker container

I want to run a whole application from a single docker container. The application has three components:
a neo4j database, which must be accessible via a localhost port, say bolt port 7687
a flask application, which must access the database and make its output available on a localhost port, say 5000
a web page, index.html, that acts as the front end of the flask application and accesses it via port 5000
I need the first two components to run from the same container.
I got the flask application containerized, but could not get both running.
I use neo4j-community, not a neo4j docker image, so in order to run it we must execute neo4j start from the neo4j-community/bin folder.
The Dockerfile is below:
FROM python:3.7
VOLUME ./:app/
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY . /app/
WORKDIR /app
RUN cd neo4j-community-3.5.3/bin/
CMD ["neo4j start"]
RUN cd ../../
RUN cd flask_jan_24/
RUN pip install -r requirements.txt
CMD ["flask_jan_24/app_flask.py"]
EXPOSE 5000
The issue is with how you try to start Neo4j: RUN statements execute at build time (and a cd in one RUN doesn't persist into the next), and when a Dockerfile contains several CMD instructions only the last one takes effect, so your CMD ["neo4j start"] is simply discarded.
Instead, you should have a shell script that launches the required services (Neo4j here) in the background and then, at the end, launches the actual flask application in the foreground.
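A minimal sketch, reusing the paths from your Dockerfile (the script name start.sh is an assumption):

#!/bin/sh
# start.sh: launch Neo4j in the background, then run Flask in the foreground
/app/neo4j-community-3.5.3/bin/neo4j start
exec python /app/flask_jan_24/app_flask.py

and a Dockerfile tail that uses it instead of the cd/double-CMD lines:

COPY . /app/
WORKDIR /app
RUN pip install -r flask_jan_24/requirements.txt
RUN chmod +x start.sh
EXPOSE 5000 7687
CMD ["./start.sh"]

neo4j start daemonizes and returns, so the exec'ed Flask process becomes the container's foreground process; when it exits, the container stops.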

Flask running in Docker. Need to rebuild every time

I'm trying to get my Flask Docker build to be able to switch from running uWSGI + Nginx for production to simply running flask run for local development.
I have Flask running in development mode:
Here's my Dockerfile:
FROM python:3
RUN apt-get -qq update
RUN apt-get install -y supervisor nginx
RUN pip install uwsgi
# Expose dev port
EXPOSE 5000
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY nginx.conf /etc/nginx/sites-enabled/default
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN pip install --editable .
CMD [ "supervisord" ]
I expose port 5000 and run it with this docker-compose.yml:
version: '3'
services:
  web:
    build: ./
    environment:
      DATABASE_URL: postgresql://postgres@db/postgres
      FLASK_APP: app.main
      FLASK_ENV: development
    command: flask run --host=0.0.0.0
    ports:
      - "5000:5000"
    links:
      - db
    volumes:
      - .:/app
  db:
    image: postgres
    restart: always
    ports:
      - 5433:5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
    driver: local
I can build this and run it fine at http://0.0.0.0:5000/, but if I change any code in a view or a template, nothing gets refreshed/reloaded on the server and I'm forced to rebuild.
I thought that specifying FLASK_ENV: development in my docker-compose.yml would enable auto-reloading of the Flask server. Any ideas on how to debug this?
Debug it on the host, outside Docker. If you need access to databases that are running in Docker, you can set environment variables like MYSQL_HOST or PGHOST to point at localhost. You don't need root privileges at all (whereas any docker command implies unrestricted root access). Your IDE won't be upset at your code running "somewhere else" with a different filesystem layout. If you have a remote debugger, you won't need to traverse the Docker divide to reach it.
Once you have it running reasonably on the host and your py.test tests pass, then run docker build.
(In the particular case of server-rendered views, you should be able to see almost exactly the same thing via the Flask debug server without having the nginx proxy in front of it, and you can control the library versions that get installed via requirements in your setup.py file. So the desktop environment will be much simpler, and you should be able to convince yourself it's very close to what would be running in the Docker world.)
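With this compose file, that might look like (a sketch: the URL reuses the db service's published port 5433 and assumes the passwordless postgres superuser from your docker-compose.yml):

export DATABASE_URL=postgresql://postgres@localhost:5433/postgres
export FLASK_APP=app.main
export FLASK_ENV=development
flask run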
When you run the flask app from docker-compose, it is running the version you installed with pip install --editable . in the Dockerfile. The problem is that this is NOT the version you're editing from outside Docker in the volume mounted to '/app' (rather it is a different version in /usr/src/app from the COPY . . command).
COPY and ADD happen at build time, not at run time, so for the container to serve your live code, the flask run command can't be running from the pip-installed package.
You have a few choices:
Move the pip install command into the docker-compose command (possibly by calling a bash script that does the pip install as well as the flask run); see the sketch at the end of this answer.
Run the flask run command without pip installing the package. If you're in the /app folder, I believe flask will recognize the current folder as holding the code (given the FLASK_APP variable), so the pip install is unnecessary. This works for me for a simple app where I encountered the same problem, but I imagine if you have complicated imports or other things relying on the installed package this won't work.
I suppose that if you reconcile the /usr/src/app and /app folders, so that you pip install --editable at build time and then mount the volume over that same path, you can trick Python into looking at the locally mounted code, but I haven't quite gotten this to work: when I do, following the Flask logs (docker logs -f web) shows it notices when a file changes, yet the behavior doesn't actually change. I don't know why exactly, but I suspect pip is upset about the folder swap.
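For the first option, a minimal sketch of the compose override (working_dir: /app is an assumption so the editable install targets the mounted code; the rest of the web service stays as in the question):

web:
  build: ./
  working_dir: /app   # match the volume mount so the install points at live code
  command: sh -c "pip install --editable . && flask run --host=0.0.0.0"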
