I am trying to deploy a Python App using Docker on Google Cloud
After typing the command gcloud run deploy --image gcr.io/id/name, I get this error:
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
Logs explorer:
TEST_MODE = input()
EOFError: EOF when reading a line
I know this error is caused by trying to take in user input, and with Docker this command solves the error:
docker run -t -i
Any idea how to run this with gcloud?
Your example does not run a server and so it's not accepted by Cloud Run.
Cloud Run expects a server to be running on PORT (generally this evaluates to 8080 but you should not assume this).
While it's reasonable to want to run arbitrary containers on Cloud Run, the service expects something to respond over HTTP.
One option would be to add an HTTP server to your container that listens on PORT and run your Python app alongside it, but that is awkward to do from a single Python process, and running multiple processes in one container is considered an anti-pattern anyway.
Therefore I propose the following:
Rewrite your app to return the input as an HTTP GET:
main.py:
from flask import Flask

app = Flask(__name__)

@app.route('/hello/<name>')
def hello(name):
    return "Hello {name}".format(name=name)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
Test it:
python3 main.py
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
NOTE Flask is listening on localhost (127.0.0.1), port 8080. We need to change the host to 0.0.0.0 when we run it in a container.
NAME="Freddie"
curl http://localhost:8080/hello/${NAME}
Hello Freddie
Or browse: http://localhost:8080/hello/Freddie
Containerize this:
Dockerfile:
FROM python:3.10-rc-slim
WORKDIR /app
RUN pip install flask gunicorn
COPY main.py .
ENV PORT 8080
CMD exec gunicorn --bind 0.0.0.0:$PORT main:app
NOTE ENV PORT 8080 sets the environment variable PORT to a value of 8080 unless we specify otherwise (we'll do that next)
NOTE The image uses gunicorn as a runtime host for Flask. This time the service is bound to 0.0.0.0, which permits it to be accessed from outside the container (which we need), and it uses the value of PORT
Then:
# Build
docker build --tag=66458821 --file=./Dockerfile .
# Run
PORT="9999"
docker run \
--rm --interactive --tty \
--env=PORT=${PORT} \
--publish=8080:${PORT} \
66458821
[INFO] Starting gunicorn 20.0.4
[INFO] Listening at: http://0.0.0.0:9999 (1)
NOTE Because of --env=PORT=${PORT}, the service now runs on 0.0.0.0:9999 inside the container, but --publish=8080:${PORT} maps it back to 8080 on the host. This is just to show how the PORT variable is now used by the container
Test it (using the commands as before)!
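For example, using the same request as before (the host side of the published mapping is 8080, even though the container listens on 9999):
curl http://localhost:8080/hello/Freddie
Hello Freddie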
Publish it and gcloud run deploy ...
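A minimal sketch of those steps, assuming the image tag from above; ${PROJECT}, the service name, and the region are placeholders for your own values:
# Push the image to Container Registry
docker tag 66458821 gcr.io/${PROJECT}/66458821
docker push gcr.io/${PROJECT}/66458821
# Deploy; Cloud Run sets the PORT environment variable for the container
gcloud run deploy hello-service \
--image=gcr.io/${PROJECT}/66458821 \
--platform=managed \
--region=us-central1 \
--allow-unauthenticated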
Test it!
I'm hosting a Flask application in a Docker container on AWS EC2, which displays the contents of the file that is uploaded to it.
I'm facing a strange issue: my application is not exposed to the external world, but it works on localhost. When I try to hit the machine's public IP, it shows "refused to connect" (Note: security groups are fine).
The docker ps command shows my container running fine on the expected port number. Can you please let me know how I can resolve this? Thanks in advance
Docker version 20.10.7, build f0df350
Here is my Dockerfile,
FROM ubuntu:latest
RUN mkdir /testing
WORKDIR /testing
ADD . /testing
RUN apt-get update && apt-get install -y pip
RUN pip install -r requirements.txt
CMD flask run
My requirements.txt file includes Flask so it gets installed in the image.
Here is my flask code and html file,
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def file():
    return render_template("index.html")

@app.route("/upload", methods=['POST', 'GET'])
def test():
    print("test")
    if request.method == 'POST':
        print("test3")
        file = request.files['File'].read()
        return file

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
index.html:
<html>
<body>
<form action = "http://<publicIPofmachine>:5000/upload" method = "POST" enctype =
"multipart/form-data">
<input type = "file" name = "File" />
<input type = "submit" value = "Submit" />
</form>
</body>
</html>
docker logs:
Serving Flask app 'app' (lazy loading)
Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
Debug mode: on
Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
Here are the docker commands I used:
docker build -t <name> .
docker run -d -t -p 5000:80 <imagename>
You have the host and container ports swapped in your docker run command:
Background
That is what docs say:
-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a
range of ports. When specifying ranges for both, the
number of container ports in the range must match the
number of host ports in the range, for example:
-p 1234-1236:1234-1236/tcp
So you are publishing ports in the format hostPort:containerPort.
Solution
Instead of your code:
docker run -d -t -p 5000:80 <imagename>
You should do
docker run -d -t -p 80:5000 <imagename>
It's also good practice to define an EXPOSE instruction in your Dockerfile :)
EXPOSE has two roles:
It's a form of documentation: whoever reads your Dockerfile knows which port the application inside is expected to listen on.
You can use the --publish-all | -P flag to publish all exposed ports.
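For example (a sketch; <imagename> is a placeholder), with EXPOSE 5000 in the Dockerfile you can let Docker pick the host port:
docker run -d -t -P <imagename>
docker port <container-id>   # shows which host port was bound to 5000/tcp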
I have a project which I had previously successfully deployed to Google Cloud Run, and set up with a trigger such that upon pushing to the repo's main branch on Github, it would automatically deploy. It worked great.
Then I tried to rename the github repo, which meant deleting and creating a new trigger, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
which should use the PORT environment variable if set and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
#Copy files into docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080 - if I dig into the response, I see this nested under Response.spec.container.0 :
ports: [
  0: {
    containerPort: 8080
    name: "http1"
  }
]
All that said, if I look at the logs, it shows "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on Port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The Trigger does not give an option to set the port so I'm not sure how that caused this error.
You have mixed two things up. The flask run command's default port is indeed 5000. If you want to change it, you need to add the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, the flask run command uses the Flask runtime and completely ignores the standard Python entrypoint if __name__ == "__main__":. If you want that entrypoint to be used, run the app with the Python runtime instead:
CMD ["python", "<main file>.py"]
I have a Flask app running on Docker. The host is 0.0.0.0 and the port is 5001. If I run the container without network=host I'm able to connect to 0.0.0.0 on port 5001, but with network=host I can't. I need network=host because I need to connect to a DB outside of the Docker network cluster (which I'm able to connect to).
How can I do that? I tried running the docker with proxy but that didn't help.
Steps to reproduce:
Install Python and Flask (I'm using 3.8.6 and 1.1.2). In your main file, create a Flask app and expose one REST endpoint (the URL doesn't really matter).
This is my flask_main file:
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def route():
    return "got to route"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5001)
Dockerfile:
FROM python:3.8.6
WORKDIR /
ADD . /.
RUN pip --proxy ***my_proxy*** install -r requirements.txt
EXPOSE 5001
CMD ["python", "flask_main.py"]
requirements.txt:
Flask==1.1.2
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
psycopg2==2.8.6
Werkzeug==1.0.1
click==7.1.2
I tried running the Dockerfile with and without the proxy, and tried removing host='0.0.0.0', but nothing helps.
I managed to solve the problem by removing network=host and specifying my company's DNS server instead.
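For example (a sketch; the DNS address and image name are placeholders):
docker run --dns 10.0.0.2 -p 5001:5001 flask-main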
I need to start two services/commands in Docker. From Google I learned that I can use ENTRYPOINT and CMD to pass different commands, but when I start the container only the ENTRYPOINT script runs and CMD seems not to run. Since I am new to Docker, can you help me with how to run two commands?
Dockerfile :
FROM registry.suse.com/suse/sle15
ADD repolist/*.repo /etc/zypp/repos.d/
RUN zypper refs && zypper refresh
RUN zypper in -y bind
COPY docker-entrypoint.d/* /docker-entrypoint.d/
COPY --chown=named:named named /var/lib/named
COPY --chown=named:named named.conf /etc/named.conf
COPY --chown=named:named forwarders.conf /etc/named.d/forwarders.conf
ENTRYPOINT [ "./docker-entrypoint.d/startbind.sh" ]
CMD ["/usr/sbin/named","-g","-t","/var/lib/named","-u","named"]
startbind.sh:
#! /bin/bash
/usr/sbin/named.init start
Thanks & Regards,
Mohamed Naveen
You can use the supervisor tool to manage multiple services inside a single Docker container.
Check out the example below (running Redis and a Django server using a single CMD):
Dockerfile:
# Base Image
FROM alpine
# Installing required tools
RUN apk --update add nano supervisor python3 redis
# Adding Django Source code to container
ADD /django_app /src/django_app
# Adding supervisor configuration file to container
ADD /supervisor /src/supervisor
# Installing required python modules for app
RUN pip3 install -r /src/django_app/requirements.txt
# Exposing container port for binding with host
EXPOSE 8000
# Using Django app directory as home
WORKDIR /src/django_app
# Initializing Redis server and Gunicorn server from supervisors
CMD ["supervisord","-c","/src/supervisor/service_script.conf"]
service_script.conf file
## service_script.conf
[supervisord] ## This is the main process for the Supervisor
nodaemon=true ## This setting is to specify that we are not running in daemon mode
[program:redis_script] ## This is the part where we give the name and add config for our 1st service
command=redis-server ## This is the main command to run our 1st service
autorestart=true ## This setting specifies that the supervisor will restart the service in case of failure
stderr_logfile=/dev/stdout ## This setting specifies that the supervisor will log the errors in the standard output
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout ## This setting specifies that the supervisor will log the output in the standard output
stdout_logfile_maxbytes = 0
## same setting for 2nd service
[program:django_service]
command=gunicorn --bind 0.0.0.0:8000 django_app.wsgi
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
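To try it out, something like this should work (a sketch; the tag is a placeholder):
docker build -t multi-service .
docker run -d -p 8000:8000 multi-service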
Final output: Redis and Gunicorn running as services in the same Docker container.
You can read my complete article on this; the link is given below:
Link for complete article
Options for running more than one service within a container are described really well in this official Docker article:
multi-service_container.
I'd recommend reviewing why you need two services in one container (shared data volume, init, etc.), because by properly separating the services you'll have a ready-to-scale architecture, more useful logs, easier lifecycle/resource management, and easier testing.
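As a sketch of that separation (image names are placeholders), the Redis/Django pair from the answer above could run as two containers on a shared network instead:
docker network create app-net
docker run -d --name redis --network app-net redis
docker run -d --name web --network app-net -p 8000:8000 my-django-image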
Within startbind.sh
you can do:
#! /bin/bash
# start the second service here and push it to the background:
/usr/sbin/secondservice.init start &
# then run the last command:
/usr/sbin/named.init start
Your /usr/sbin/named.init start (the last command in the entrypoint) must NOT go into the background; you need to keep it in the foreground.
If this last command is not kept in the foreground, the container will exit.
You can add both service starts to startbind.sh. Note that the Dockerfile RUN instruction won't help here: RUN executes commands at image build time, not when the container starts. If it doesn't work, you can ask me to keep helping you.
I'm trying to run a test Flask app in Docker, but for some reason I can't connect to it from the host machine. The layout of files is:
pyweb/
|   app.py
|   Dockerfile
|___app/
    |   __init__.py
The files content:
Dockerfile:
FROM python:2.7
COPY . /pyweb
WORKDIR /pyweb
RUN pip install flask
RUN export FLASK_APP=app.py
ENTRYPOINT ["python"]
CMD ["app.py"]
app.py:
from flask import Flask
from app import app
app/__init__.py:
from flask import Flask

app = Flask(__name__)

@app.route('/')
@app.route('/index')
def index():
    return "Hello, World!"

app.run(host='0.0.0.0')
After building the image and starting the container with docker run -d -p 5000:5000, the output is
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
But I can't connect from the host machine; it doesn't respond.
Ok so I had that same issue during an interview a bit more than a year ago and it still puzzles me to this day.
I don't know why it's not working as expected when running the flask app with app.run().
Somehow it works fine when starting the app with the flask command line directly.
The Dockerfile would look like this:
FROM python:2.7
COPY . /pyweb
WORKDIR /pyweb
RUN pip install flask
ENV FLASK_APP=app.py
CMD ["flask", "run", "--host", "0.0.0.0"]
And you can drop app.run(host='0.0.0.0') from the __init__.py file.
I'll probably spend some time later trying to understand why your original implementation doesn't work as expected. I don't know much about flask but I don't see anything wrong in your code.
You should specify the port in the Flask app and set debug=True:
app.run(host='0.0.0.0', port=5000, debug=True)
Then publish container port 5000 to a host port:
docker run -p 3000:5000
Now you can access your app at http://localhost:3000/
This might not answer the OP's question, but it might help others who come here.
For me, the Flask output showed that the application was running after launching the Docker container. It showed this line, for example:
Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
So, I clicked this link and expected it to work.
However, that is the container's internal address. Going to http://172.17.0.2:5000/ tries to reach the Flask app inside the container directly, which generally won't work from outside the Docker host. You need to go through the port Docker published on the host instead: http://0.0.0.0:5000/ (or http://localhost:5000/), which connects to the Docker container, which forwards to the Flask app.
So going to http://0.0.0.0:5000/ worked.
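In other words (a sketch; the image name is a placeholder):
docker run -d -p 5000:5000 my-flask-image
curl http://localhost:5000/   # goes through the port Docker published on the host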