Why is Docker not running my Flask app on 0.0.0.0? - docker

I am trying to dockerize my Flask app. I understand that Docker should make it listen on 0.0.0.0, but I am not getting that result. My Dockerfile:
#!/bin/bash
FROM python:3.8
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
#ENTRYPOINT [ "./main_class.py" ]
#CMD [ "flask", "run", "-h", "0.0.0.0", "-p", "5000" ]
CMD [ "python", "main_class.py", "flask", "run", "-h", "0.0.0.0", "-p", "5000" ]
When I build it, it builds successfully. When I run it with docker run -p 5000:5000 iris, it runs successfully and says Running on http://172.17.0.2:5000/ (Press CTRL+C to quit).
However, when I open 172.17.0.2:5000 in the browser it does not work, but 127.0.0.1:5000 does. How can I get 0.0.0.0:5000? In main_class.py I am using
if __name__ == '__main__':
    app.run(debug=True, threaded=True, host="0.0.0.0")
When I type docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90a8a1ee3625 iris "python main_class.p…" 23 minutes ago Up 21 minutes 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp thirsty_lalande
Is there anything wrong I am doing in the Dockerfile?
Moreover, when I use ENTRYPOINT it shows me the error standard_init_linux.go:228: exec user process caused: exec format error, that's why I am not using ENTRYPOINT.

The flask run -h 0.0.0.0 will tell your Flask process to bind to every local IP address, as mentioned in the comments. This will also allow Flask to respond to your Docker port forwarding (5000->5000), i.e. to answer incoming traffic from the Docker bridge.
On the Docker level your effective port forwarding is 0.0.0.0:5000->5000/tcp, as shown by docker ps, so on your host system port 5000 on any local IP address will be forwarded to your container's Flask process.
Please note that 0.0.0.0 isn't a real IP address, but a placeholder meaning "bind to every local IP address".
You can access your Flask application on any IP that routes to your host system, port 5000. That is why 127.0.0.1:5000 works in your browser: it is one of your host's local addresses, and Docker forwards it to the container.
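As a minimal sketch (assuming main_class.py already calls app.run(host="0.0.0.0") as shown in the question), the CMD only needs one of the two forms it currently mixes:
# Option A: rely on app.run(host="0.0.0.0") inside main_class.py
CMD [ "python", "main_class.py" ]
# Option B: use the Flask CLI instead (requires FLASK_APP to point at the module)
# ENV FLASK_APP=main_class.py
# CMD [ "flask", "run", "--host=0.0.0.0", "--port=5000" ]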

Related

Flask application is not exposing to external world in docker

I'm hosting a Flask application in a Docker container on AWS EC2, which displays the contents of the file that is uploaded to it.
I'm facing a strange issue: my application is not exposed to the external world, but it works on localhost. When I try to hit the public IP of the machine, it shows "refused to connect" (Note: the security groups are fine).
The docker ps command shows my container is running fine on the expected port number. Can you please let me know how I can resolve this to make it work? Thanks in advance.
Docker version 20.10.7, build f0df350
Here is my Dockerfile,
FROM ubuntu:latest
RUN mkdir /testing
WORKDIR /testing
ADD . /testing
RUN apt-get update && apt-get install -y pip
RUN pip install -r requirements.txt
CMD flask run
In my requirements.txt file I have Flask listed, so it gets installed in Docker.
Here is my flask code and html file,
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def file():
    return render_template("index.html")

@app.route("/upload", methods=['POST', 'GET'])
def test():
    print("test")
    if request.method == 'POST':
        print("test3")
        file = request.files['File'].read()
        return file

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
index.html:
<html>
  <body>
    <form action="http://<publicIPofmachine>:5000/upload" method="POST" enctype="multipart/form-data">
      <input type="file" name="File" />
      <input type="submit" value="Submit" />
    </form>
  </body>
</html>
docker logs:
Serving Flask app 'app' (lazy loading)
Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
Debug mode: on
Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
Here are my docker commands that is used,
docker build -t <name> .
docker run -d -t -p 5000:80 <imagename>
You have a typo in the docker run command:
Background
That is what docs say:
-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a
range of ports. When specifying ranges for both, the
number of container ports in the range must match the
number of host ports in the range, for example:
-p 1234-1236:1234-1236/tcp
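Concretely, each format from the quote above looks like this (the image name nginx and the port numbers are just illustrative):
docker run -p 127.0.0.1:80:8080 nginx    # ip:hostPort:containerPort
docker run -p 127.0.0.1::8080 nginx      # ip::containerPort (host port chosen by Docker)
docker run -p 80:8080 nginx              # hostPort:containerPort
docker run -p 8080 nginx                 # containerPort only (host port chosen by Docker)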
So you are publishing the port in the format hostPort:containerPort.
Solution
Instead of your code:
docker run -d -t -p 5000:80 <imagename>
You should do
docker run -d -t -p 80:5000 <imagename>
But it's good practice to define an EXPOSE instruction in your Dockerfile :)
EXPOSE has two roles:
It serves as documentation: whoever reads your Dockerfile knows which port the application is supposed to listen on.
You can use the --publish-all | -P flag to publish all exposed ports.
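Applied to the question, a minimal sketch (assuming the Flask dev server stays on its default port 5000 inside the container):
# In the Dockerfile
EXPOSE 5000
# Publish host port 80 to container port 5000 explicitly ...
docker run -d -t -p 80:5000 <imagename>
# ... or publish every EXPOSEd port to a random host port
docker run -d -t -P <imagename>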

gcloud Docker error caused by taking user input

I am trying to deploy a Python App using Docker on Google Cloud
After typing the command gcloud run deploy --image gcr.io/id/name, I get this error:
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
Logs explorer:
TEST_MODE = input()
EOFError: EOF when reading a line
I know this error is caused by taking user input, and with plain Docker this command solves the error:
docker run -t -i
Any idea how to run this with gcloud?
Your example does not run a server, so it's not accepted by Cloud Run.
Cloud Run expects a server to be listening on PORT (generally this evaluates to 8080, but you should not assume that).
While it's reasonable to want to run arbitrary containers on Cloud Run, the service expects something that responds via HTTP.
One option would be to simply jam an HTTP server into your container that listens on PORT and then run your Python app alongside it but, since Python is single-threaded, that's not easy to do. Plus, it's considered an anti-pattern to run multiple processes in a single container.
Therefore I propose the following:
Rewrite your app to return the input as an HTTP GET:
main.py:
from flask import Flask

app = Flask(__name__)

@app.route('/hello/<name>')
def hello(name):
    return "Hello {name}".format(name=name)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
Test it:
python3 main.py
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
NOTE Flask is running on localhost (127.0.0.1); we need to change this when we run it in a container. It's listening on port 8080.
NAME="Freddie"
curl http://localhost:8080/hello/${NAME}
Hello Freddie
Or browse: http://localhost:8080/hello/Freddie
Containerize this:
Dockerfile:
FROM python:3.10-rc-slim
WORKDIR /app
RUN pip install flask gunicorn
COPY main.py .
ENV PORT 8080
CMD exec gunicorn --bind 0.0.0.0:$PORT main:app
NOTE ENV PORT 8080 sets the environment variable PORT to a value of 8080 unless we specify otherwise (we'll do that next).
NOTE The image uses gunicorn as a runtime host for Flask. This time the Flask service is bound to 0.0.0.0, which permits it to be accessible from outside the container (which we need), and it uses the value of PORT.
Then:
# Build
docker build --tag=66458821 --file=./Dockerfile .
# Run
PORT="9999"
docker run \
--rm --interactive --tty \
--env=PORT=${PORT} \
--publish=8080:${PORT} \
66458821
[INFO] Starting gunicorn 20.0.4
[INFO] Listening at: http://0.0.0.0:9999 (1)
NOTE Because of --env=PORT=${PORT}, Flask now runs on 0.0.0.0:9999, but we map that back to port 8080 on the host. This is just to show how the PORT variable is now used by the container.
Test it (using the commands as before)!
Publish it and gcloud run deploy ...
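A hedged sketch of that publish-and-deploy step (PROJECT_ID, the service name hello, and the region are placeholders, not values from the question):
# Build and push the image
gcloud builds submit --tag gcr.io/${PROJECT_ID}/hello
# Deploy to Cloud Run; Cloud Run sets PORT itself, so don't override it here
gcloud run deploy hello \
  --image gcr.io/${PROJECT_ID}/hello \
  --region us-central1 \
  --allow-unauthenticated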
Test it!

Can't connect to a Flask app running in Docker when network=host

I have a Flask app running in Docker. The host is 0.0.0.0 and the port is 5001. If I run the container without network=host I'm able to connect to it on port 5001, but with network=host I can't. I need network=host because I have to connect to a database outside of the Docker network (which I am able to reach).
How can I do that? I tried running Docker with a proxy but that didn't help.
Steps to reproduce:
Install Python and Flask (I'm using 3.8.6 and 1.1.2). In your main, create a Flask app and expose one REST endpoint (the URL doesn't really matter).
This is my flask_main file:
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def route():
    return "got to route"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Dockerfile:
FROM python:3.8.6
WORKDIR /
ADD . /.
RUN pip --proxy ***my_proxy*** install -r requirements.txt
EXPOSE 5001
CMD ["python", "flask_main.py"]
requirements.txt:
Flask==1.1.2
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
psycopg2==2.8.6
Werkzeug==1.0.1
click==7.1.2
I tried running the Dockerfile with and without the proxy, and tried removing host=0.0.0.0, but nothing helped.
I managed to solve the problem by removing network=host and specifying my company's DNS server instead.
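For reference, a minimal sketch of that workaround (the DNS address 10.0.0.2 and the image name flask-app are hypothetical, not from the question):
docker run -p 5001:5001 --dns 10.0.0.2 flask-app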

Application not loading on host port - Docker

The Docker service is unable to load on the exposed port.
I have created a simple Dockerfile and built the image "sample-image" from it inside a Docker swarm, but whenever I run a container by creating a Docker service and exposing it on the respective port, it doesn't load in my browser. Kindly help.
First I initialised the Docker swarm and created the image "sample-image" from the Dockerfile. After that I created the overlay network "sample-network" inside the swarm and created a service to run the container on it: docker service create --name sample-service -p 8010:8001 --network sample-network sample-image. The service gets created but the page doesn't load in a browser, even though I have allowed port 8010 in ufw.
DOCKERFILE:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8001
CMD [ "npm", "start" ]
server.js:
'use strict';

const express = require('express');

const PORT = 8001;
const HOST = '0.0.0.0';

const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I expect my web browser to display "Hello world" on the exposed port number.
Did you check the docker logs? That should tell you what is going wrong.
$ docker logs <containerid>
Could you try curl -v http://127.0.0.1:8081 and paste the output here, please? Also, did you manipulate ufw after the service or dockerd started?
I think your listening port inside Docker is 8010 and outside Docker is 8001; can you try changing it to -p 8001:8001? If that works, try -p 8001:8010.
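If it still fails, a short diagnostic sketch along the lines of the comments above (sample-service is the service name from the question; the container ID comes from docker ps):
# Is the service task actually running?
docker service ps sample-service
# Application output of the service
docker service logs sample-service
# From the swarm node, hit the published port directly
curl -v http://127.0.0.1:8010/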

Docker Container Listening on http://[::]:80

I am working on setting up two Docker containers using Docker for Windows: a simple Node-based web app and a dotnet core API application. I am starting both containers with "docker-compose up". The Node app starts up perfectly and I can hit the exposed URL, but the dotnet app doesn't seem to work.
The output of the docker-compose up command is below:
application.client_1 | INFO: Accepting connections at http://localhost:8080
application.api_1 | warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
application.api_1 | No XML encryptor configured. Key {cc83a8ac-e1de-4eb3-95ab-8c69a5961bf9} may be persisted to storage in unencrypted form.
application.api_1 | Hosting environment: Development
application.api_1 | Content root path: /app/application.Api
application.api_1 | Now listening on: http://[::]:80
application.api_1 | Application started. Press Ctrl+C to shut down.
The Docker file looks like the following:
FROM microsoft/dotnet AS build
WORKDIR /app
ENV PORT=8081
COPY application.Api/application.Api.csproj application.Api/
COPY application.Business/application.Business.csproj application.Business/
COPY application.DataAccess/application.DataAccess.csproj application.DataAccess/
COPY application.DataModel/application.DataModel.csproj application.DataModel/
WORKDIR /app/application.Api
RUN dotnet restore
WORKDIR /app/
COPY application.Api/. ./application.Api/
COPY application.Business/. ./application.Business/
COPY application.DataAccess/. ./application.DataAccess/
COPY application.DataModel/. ./application.DataModel/
WORKDIR /app/application.Api
RUN dotnet publish -c Release -o out
FROM microsoft/dotnet AS runtime
WORKDIR /app/application.Api
COPY --from=build /app/application.Api/out .
ENTRYPOINT ["dotnet", "application.Api.dll" ]
EXPOSE $PORT
I am unable to get an IP and thus hit the API URL. Any thoughts would be much appreciated, as I am pretty new to Docker.
UPDATE 1: Compose YML
version: '3.4'
services:
  tonquin.api:
    image: application.api
    ports:
      - 8081:5000
    build:
      context: .
      dockerfile: Dockerfile
  tonquin.client:
    image: application.client
    ports:
      - 8080:8080
    build:
      context: .
      dockerfile: ../application.Client/Dockerfile
As they've mentioned, it seems your container is running on port 80, so for whatever reason that's the port being exposed.
Maybe the EXPOSE $PORT is not resolving to 8081 as you expect?
When you run the container, unless you specify where to map it, it will only be available at the container's IP on the exposed port (80 in your case). Find that container IP easily by running docker inspect <container_id>.
Test your image by doing something like docker run -p 8080:80 yourimage. You'll see that, in addition to the port 80 that the image exposes, it is mapped to your local port 8080, so http://localhost:8080 should be reachable.
See this in case it helps you.
See this answer.
The base dotnet image overrides the default Kestrel port. Why, I don't know. Adding the environment declaration to my Dockerfile fixed the problem for me.
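The answer doesn't name the variable; a hedged sketch, assuming it refers to ASPNETCORE_URLS (the variable Kestrel reads for its listen address) and that the compose mapping 8081:5000 is kept:
# In the runtime stage of the Dockerfile
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000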
It's trying to use the IPv6 protocol on the network interface. Disable IPv6 and restart Docker. It also looks like you might have both apps trying to use port 80; you can only serve one thing on a given port for a given interface/IP. Try setting the API to use a different port number, like 8080.
