I have a simple FastAPI project called toyrest that runs a trivial API. The code looks like this.
from fastapi import FastAPI

__version__ = "1.0.0"

app = FastAPI()

@app.get("/")
def root():
    return "hello"
I've built the usual Python package infrastructure around it. I can install the package. If I run uvicorn toyrest:app the server launches on port 8000 and everything works.
Now I'm trying to get this to run in a Docker image. I have the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3
# Create a user.
RUN useradd --user-group --system --create-home --no-log-init user
USER user
ENV PATH=/home/user/.local/bin:$PATH
# Install the API.
WORKDIR /home/user
COPY --chown=user:user . ./toyrest
RUN python -m pip install --upgrade pip && \
pip install -r toyrest/requirements.txt
RUN pip install toyrest/ && \
rm -rf /home/user/toyrest
CMD ["uvicorn", "toyrest:app"]
I build the Docker image and run it, forwarding port 8000 to the running container.
docker run -p 8000:8000 toyrest:1.0.0
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
When I try to connect to http://127.0.0.1:8000/ I get no response.
Presumably I am doing the port forwarding incorrectly. I've tried various permutations of the port forwarding argument (e.g. -p 8000, -p 127.0.0.1:8000:8000) to no avail.
This is such a basic Docker command that I can't see how I'm getting it wrong, but somehow I am. What am I doing wrong?
Try adding the `--host` option to the `CMD` in your Dockerfile, so uvicorn listens on all interfaces instead of only the container's loopback:
CMD ["uvicorn", "toyrest:app", "--host", "0.0.0.0"]
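The underlying issue: uvicorn's default bind address is 127.0.0.1, which inside a container means the container's own loopback interface, unreachable through Docker's port forwarding. A minimal sketch of the difference, using nothing but plain Python sockets (no Docker assumed):

```python
import socket

# A socket bound to 127.0.0.1 only accepts connections arriving on the
# loopback interface of the machine (or container) it runs in.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))

# A socket bound to 0.0.0.0 accepts connections on every interface,
# including the bridge interface Docker forwards published ports to.
wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(("0.0.0.0", 0))

print(loopback.getsockname()[0])  # 127.0.0.1
print(wildcard.getsockname()[0])  # 0.0.0.0

loopback.close()
wildcard.close()
```

So the `-p 8000:8000` flag was never the problem; the connection died inside the container because nothing was listening on the interface Docker forwards to.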
Related
I have a FastAPI Python script that works on Codespaces when I use the following command:
uvicorn main:fast_API_app --reload
The following output appears and my APIs work fine:
INFO: Will watch for changes in these directories: ['/workspaces/WebAPI']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [3229] using WatchFiles
INFO: Started server process [3241]
INFO: Waiting for application startup.
INFO: Application startup complete.
Running it in GitHub Codespaces works fine.
However, when I turn this into a docker container, running it results in a 502 Bad Gateway
terminal:
docker container run <username>/<container name>:v0.0.1
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Whether I select port 8000 to be public or private in GitHub Codespaces makes no difference.
Below is my Dockerfile which is used to build the image.
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10-slim
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["uvicorn", "main:fast_API_app", "--host", "0.0.0.0", "--port","8000"]
It results in the 502 error described above (the browser shows only a generic error page; no error code is given).
What I already tried (but potentially also did wrong):
- Exposing different ports
- Running gunicorn instead of uvicorn
- Searching Stack Overflow for "docker codespaces bad gateway"
- Toggling the port forward between public and private
- Changing the port protocol to HTTPS
- Rebuilding the container several times
I built an API using FastAPI that calls some bash commands. Now I want to make a Docker container for my app, but I've encountered the following issue: inside the container, the app won't run the bash commands. I guess I would need to get out of the container to run them, but I am not sure that is possible. Any suggestions? Apologies in advance if my question is confusing.
Here is my Docker File
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip3 install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
# CMD ["python", "./app/main.py"]
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
and here is an example of how I run a bash command (it is actually a docker command):

@app.post("/stop-camera")
async def stop_camera(info: Request):
    req_info = await info.json()
    file_name = str(req_info["id"])
    result1 = subprocess.run([str(env_dictionary["STOP"]) + file_name], shell=True)
    result2 = subprocess.run([str(env_dictionary["REMOVE"]) + file_name], shell=True)
    return {
        "status": "SUCCESS",
        "stop": result1,
        "rm": result2,
    }
Here's a very simple example of UDP communication between something running on a docker host and something running inside a container.
On the host, start a simple docker container passing it a way to get the host's IP address:
docker run -it --add-host host.docker.internal:host-gateway alpine:latest ash
Then, still outside the container and on the host, wait for a command on UDP port 65000 from the container. Note I am using netcat here, but you would likely use Python since you have that already:
# Listen on UDP port 65000
nc -u -l 65000
Obviously you could run this in a loop to wait for multiple commands, parse the different commands that arrive and react differently to each, check the source of the commands, or encrypt them for some level of security.
Inside the container, I quickly install netcat, but you would probably use Python again:
# Install netcat
apk update && apk add netcat-openbsd
# Send command to host via UDP on port 65000
echo STOP | nc -u host.docker.internal 65000
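If you'd rather keep both ends in Python, as suggested above, the same exchange can be sketched with the standard library's socket module. The host name and port are the same assumptions as in the netcat version:

```python
import socket

def listen_once(port=65000):
    """Host side: wait for one UDP datagram (equivalent of `nc -u -l 65000`)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    data, addr = sock.recvfrom(1024)
    sock.close()
    return data.decode().strip()

def send_command(command, host="host.docker.internal", port=65000):
    """Container side: send one command (equivalent of `echo STOP | nc -u host.docker.internal 65000`)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto((command + "\n").encode(), (host, port))
    sock.close()
```

The FastAPI handler in the question could then call something like send_command("STOP " + file_name) instead of subprocess.run, with the host-side listener performing the actual docker stop/rm.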
I am trying to build a Flask docker image. I get the error:
zsh: command not found: flask
I followed this old tutorial to get things working: https://medium.com/@rokinmaharjan/running-a-flask-application-in-docker-80191791e143
In order to just learn how to start flask website with Docker I have made everything simple. My Docker image should just open a Hello world front page.
My example.py:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World'

if __name__ == '__main__':
    app.run()
My Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-pip -y
RUN pip install flask
COPY example.py /home/example.py
ENTRYPOINT FLASK_APP=/home/example.py flask run --host=0.0.0.0
I run
sudo docker build . -t flask-app
to build the image.
When I run
docker run -p 8080:5000 flask-app
I get the error:
zsh: command not found: flask
What am I missing here?
Well, indeed you're following a really old tutorial.
I'm not going to go into detail about whether using Flask directly without a WSGI server is something you should do, so I'll just focus on your question.
Concise answer: the scripts pip installs are not on your PATH, so of course you cannot invoke them; flask is one of those scripts.
Extended answer: keep reading.
First of all, with that base image you're downloading old versions of both Python and pip; moreover, you don't need a fully fledged operating system to run a Flask application. There are base images with Python, like python:3.9.10-slim-buster, with far fewer dependencies and potential vulnerabilities than an old Ubuntu 16.04 image.
FROM python:3.9.10-slim-buster
Second, you shouldn't rely on whatever the base image happens to provide; use a virtual environment (venv) for your application, where you can install Flask and any other dependency, all of which should be listed in requirements.txt. Also choose a working directory for your code (/usr/src/app is a common choice).
Indicating which port you expose by default is also good practice (even though everyone knows that Flask serves on port 5000).
FROM python:3.9.10-slim-buster
WORKDIR /usr/src/app
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python3 -m pip install flask
COPY example.py .
ENTRYPOINT FLASK_APP=example flask run --host=0.0.0.0
EXPOSE 5000
and as a result:
❯ docker run -p 8080:5000 flask-app
* Serving Flask app 'example' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
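As a sanity check that the application itself is fine, independent of Docker, the corrected example.py (with the @ decorator, which this page's formatting tends to mangle into #) can be exercised with Flask's built-in test client, assuming Flask is installed locally:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World'

# Exercise the route without starting a server.
with app.test_client() as client:
    response = client.get('/')
    print(response.status_code)             # 200
    print(response.get_data(as_text=True))  # Hello World
```

If this works but the container doesn't, the problem is in the image, not the code.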
I have a Flask app in Python which is built into an image. When I try to view its logs using the command
docker logs -f f7e2cd41c0706b7a26d9ff5821aa1d792c685826d1c9707422a2a5dfa2e33796
it does not show any logs. It should at least show that the Flask app has started, right? Note that I am able to hit the Flask API from the host and it is working. There are many print statements in the code that must have executed for this API to work, so those statements should have appeared in the logs. Am I missing something here?
Dockerfile is :
FROM python:3.6.8
WORKDIR /app
COPY . /app
#RUN apt-get update -y
#RUN apt-get install python-pip -y
RUN pip install -r requirements.txt
EXPOSE 5001
WORKDIR Flask/
RUN chmod -x main.py ;
CMD ["python", "main.py"]
You can get the logs from your host machine. The default logging driver writes a JSON-structured file on local disk at
/var/lib/docker/containers/[container-id]/[container-id]-json.log
Or the same way as you did it will also work:
sudo docker ps -a
sudo docker logs -f container-id
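One more thing worth ruling out (an assumption about this setup, since the Dockerfile above doesn't address it): Python buffers stdout when it isn't attached to a terminal, so print output can sit in the buffer instead of reaching docker logs. Setting PYTHONUNBUFFERED=1 in the Dockerfile, running python -u, or flushing explicitly avoids that:

```python
import sys

# When stdout is not a TTY (as inside a container), print output is
# block-buffered and may not appear until the buffer fills or the
# process exits. flush=True pushes it out immediately.
print("flask app started", flush=True)

# Equivalent explicit flush:
sys.stdout.write("handling request\n")
sys.stdout.flush()
```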
I want to run a whole application from a single docker container; the application has three components:
a neo4j database that must be accessible via a localhost port, say bolt port 7687
a flask application that must access the database, with its results or output available on a localhost port, say 5000
a web application page index.html that acts as the front end of the flask application; this will access the flask application via port 5000
I need the first two components to run from the same container.
I got the flask application containerised but could not get both running.
I use the neo4j-community version and not a neo4j docker image, so in order to run it we must execute neo4j start from the neo4j-community/bin directory.
the docker file is stated below
FROM python:3.7
VOLUME ./:app/
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY . /app/
WORKDIR /app
RUN cd neo4j-community-3.5.3/bin/
CMD ["neo4j start"]
RUN cd ../../
RUN cd flask_jan_24/
RUN pip install -r requirements.txt
CMD ["flask_jan_24/app_flask.py"]
EXPOSE 5000
The issue is that only the last CMD in a Dockerfile takes effect, so your first CMD (which would start Neo4j) is simply ignored; the RUN cd ... statements, in turn, execute during the build and do not persist into later layers or into the running container.
What you actually want is a shell script that launches all the required services (like neo4j or anything else) in the background and then, at the end, launches the actual flask application in the foreground.
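A sketch of that approach, under the assumption that the paths from the question's Dockerfile are correct (entrypoint.sh is a hypothetical script you would add next to the code):

```dockerfile
FROM python:3.7
RUN apt-get update -y && apt-get install -y build-essential
COPY . /app/
WORKDIR /app
RUN pip install -r flask_jan_24/requirements.txt
EXPOSE 5000 7687
# entrypoint.sh would contain something like:
#   #!/bin/sh
#   neo4j-community-3.5.3/bin/neo4j start   # background service
#   exec python flask_jan_24/app_flask.py   # foreground main process
RUN chmod +x entrypoint.sh
CMD ["./entrypoint.sh"]
```

Note that running several services in one container works, but Docker only supervises the foreground process; splitting the database and the app into two containers on a shared network is the more common layout.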