I'm hosting a Flask application in a Docker container on AWS EC2; it displays the contents of a file uploaded to it.
I'm facing a strange issue: the application is not reachable from the outside world, although it works on localhost. When I try to hit the machine's public IP, the browser shows "refused to connect". (Note: the security groups are fine.)
The docker ps command shows that my container is running fine on the expected port number. Can you please let me know how I can resolve this? Thanks in advance.
Docker version 20.10.7, build f0df350
Here is my Dockerfile,
FROM ubuntu:latest
RUN mkdir /testing
WORKDIR /testing
ADD . /testing
RUN apt-get update && apt-get install -y pip
RUN pip install -r requirements.txt
CMD flask run
My requirements.txt file lists flask, to be installed in the container.
Here are my Flask code and HTML file:
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def file():
    return render_template("index.html")

@app.route("/upload", methods=['POST', 'GET'])
def test():
    print("test")
    if request.method == 'POST':
        print("test3")
        file = request.files['File'].read()
        return file

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
index.html:
<html>
  <body>
    <form action="http://<publicIPofmachine>:5000/upload" method="POST" enctype="multipart/form-data">
      <input type="file" name="File" />
      <input type="submit" value="Submit" />
    </form>
  </body>
</html>
docker logs:
Serving Flask app 'app' (lazy loading)
Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
Debug mode: on
Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
Here are the docker commands I used:
docker build -t <name> .
docker run -d -t -p 5000:80 <imagename>
You have the host and container ports swapped in your docker run command:
Background
This is what the docs say:
-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a
range of ports. When specifying ranges for both, the
number of container ports in the range must match the
number of host ports in the range, for example:
-p 1234-1236:1234-1236/tcp
So you are publishing the port in the format hostPort:containerPort, with the host port first.
Solution
Instead of your code:
docker run -d -t -p 5000:80 <imagename>
You should do
docker run -d -t -p 80:5000 <imagename>
It's also good practice to define an EXPOSE instruction in your Dockerfile :)
EXPOSE has two roles:
It is a form of documentation: whoever reads your Dockerfile knows which port the application is supposed to listen on.
You can use the --publish-all | -P flag to publish all exposed ports.
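For illustration, here is a minimal sketch of the question's Dockerfile with an EXPOSE instruction added (the --host flag is my addition, since flask run binds to 127.0.0.1 by default, which is unreachable from outside the container):
FROM ubuntu:latest
WORKDIR /testing
ADD . /testing
RUN apt-get update && apt-get install -y pip
RUN pip install -r requirements.txt
# Documents the port the app listens on and enables docker run -P
EXPOSE 5000
# flask run defaults to 127.0.0.1; bind to all interfaces explicitly
CMD flask run --host=0.0.0.0
With that in place, docker run -d -P <imagename> publishes the exposed port 5000 to a random port on the host.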
Related
I am a docker newbie and this is a newbie question :)
I am running Docker Engine on Ubuntu 18.04 in WSL2. When I run my container, I cannot connect to it from a WSL2 bash.
Here is my Dockerfile
FROM node:18.4-alpine3.15
WORKDIR /usr/src/app
# see https://nodejs.org/en/docs/guides/nodejs-docker-webapp/#creating-a-dockerfile
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD [ "node", "./src/main.js" ]
This is how I (successfully) build my image out of it
docker build . -t gb/test
This is how I run my container
docker run --rm -p 3000:3000 --name gbtest gb/test
# tried also this one with no luck
#docker run --rm -P --name gbtest gb/test
My server starts successfully, but I cannot understand how to reach it via curl. I tried all of the following with no success
curl http://localhost:3000
curl http://127.0.0.1:3000
curl http://172.17.0.1:3000 <- gathered with ifconfig
I also tried what was suggested by this answer, but it didn't work.
# start container with --add-host
docker run --rm -p 3000:3000 --add-host host.docker.internal:host-gateway --name gbtest gb/test
# try to connect with that host
curl http://host.docker.internal:3000
Note that the Node.js server I am developing is listening on the right port:
it states in its output that it listens on http://127.0.0.1:3000
when I run it directly, without Docker, I can connect with just curl http://localhost:3000
Should you need any further info, just let me know.
EDIT 1
Here is my main.js file, basically the base example from Fastify.
import Fastify from "fastify";

// Require the framework and instantiate it
const fastify = Fastify({ logger: true });

// Declare a route
fastify.get("/", async (request, reply) => {
  return { hello: "world" };
});

// Run the server!
const start = async () => {
  try {
    await fastify.listen({ port: 3000 });
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
};
start();
Here are my container logs
$ docker logs gbtest
{"level":30,"time":1655812753498,"pid":1,"hostname":"8429b2ed314d","msg":"Server listening at http://127.0.0.1:3000"}
I also tried to use the host network, as shown below. It works, but I am not sure it is the way to go. How am I supposed to expose a container to the internet? Is it OK to share the whole host network? Does it come with any security concerns?
docker run --rm --network host --name gbtest gb/test
Changing await fastify.listen({ port: 3000 }); to await fastify.listen({ host: "0.0.0.0", port: 3000 }); did the trick.
As you guys pointed out, I was binding to the container-private localhost by default, while I want my container to respond to external connections.
New output of docker logs gbtest is
{"level":30,"time":1655813850993,"pid":1,"hostname":"8e2460ac00c5","msg":"Server listening at http://0.0.0.0:3000"}
I am trying to deploy a Python App using Docker on Google Cloud
After typing the command gcloud run deploy --image gcr.io/id/name, I get this error:
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
Logs explorer:
TEST_MODE = input()
EOFError: EOF when reading a line
I know this error is caused by trying to read user input, and with Docker this command solves the error:
docker run -t -i
Any idea how to run this with gcloud?
Your example does not run a server and so it's not accepted by Cloud Run.
Cloud Run expects a server to be running on PORT (generally this evaluates to 8080 but you should not assume this).
While it's reasonable to want to run arbitrary containers on Cloud Run, the service expects something to respond via HTTP.
One option would be to simply jam an HTTP server into your container that listens on PORT and run your Python app alongside it, but Python is single-threaded, so that's less easy to do. Plus, running multiple processes in a single container is considered an anti-pattern.
Therefore I propose the following:
Rewrite your app to take its input from an HTTP GET:
main.py:
from flask import Flask

app = Flask(__name__)

@app.route('/hello/<name>')
def hello(name):
    return "Hello {name}".format(name=name)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
Test it:
python3 main.py
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
NOTE Flask is running on localhost (127.0.0.1); we will need to change this when we run it in a container. It is listening on port 8080.
NAME="Freddie"
curl http://localhost:8080/hello/${NAME}
Hello Freddie
Or browse: http://localhost:8080/hello/Freddie
Containerize this:
Dockerfile:
FROM python:3.10-rc-slim
WORKDIR /app
RUN pip install flask gunicorn
COPY main.py .
ENV PORT 8080
CMD exec gunicorn --bind 0.0.0.0:$PORT main:app
NOTE ENV PORT 8080 sets the environment variable PORT to a value of 8080 unless we specify otherwise (we'll do that next)
NOTE The image uses gunicorn as a runtime host for Flask. This time the Flask service is bound to 0.0.0.0, which permits it to be accessible from outside the container (which we need), and it uses the value of PORT.
Then:
# Build
docker build --tag=66458821 --file=./Dockerfile .
# Run
PORT="9999"
docker run \
--rm --interactive --tty \
--env=PORT=${PORT} \
--publish=8080:${PORT} \
66458821
[INFO] Starting gunicorn 20.0.4
[INFO] Listening at: http://0.0.0.0:9999 (1)
NOTE Because of --env=PORT=${PORT}, Flask now runs on 0.0.0.0:9999, but we remap this port to 8080 on the host. This is just to show how the PORT variable is now used by the container.
Test it (using the commands as before)!
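For example, with the mapping above (host port 8080 to container port 9999):
NAME="Freddie"
curl http://localhost:8080/hello/${NAME}
Hello Freddie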
Publish it and gcloud run deploy ...
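A sketch of those publishing steps (the project ID and service name are placeholders, not values from the question):
# Push the image to Container Registry
docker tag 66458821 gcr.io/${PROJECT_ID}/66458821
docker push gcr.io/${PROJECT_ID}/66458821
# Deploy; Cloud Run itself sets PORT for the container
gcloud run deploy my-service --image gcr.io/${PROJECT_ID}/66458821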
Test it!
I have a Flask app running on Docker. The host is 0.0.0.0 and the port is 5001. If I run the container without network=host, I'm able to connect to 0.0.0.0 on port 5001, but with network=host I can't. I need network=host because I have to connect to a DB outside of the Docker network cluster (which I'm able to connect to).
How can I do that? I tried running Docker with a proxy, but that didn't help.
Steps to reproduce:
Install Python and Flask (I'm using 3.8.6 and 1.1.2). In your main file, create a Flask app and expose one REST endpoint (the URL doesn't really matter).
This is my flask_main file:
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def route():
    return "got to route"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5001)
Dockerfile:
FROM python:3.8.6
WORKDIR /
ADD . /.
RUN pip --proxy ***my_proxy*** install -r requirements.txt
EXPOSE 5001
CMD ["python", "flask_main.py"]
requirements.txt:
Flask==1.1.2
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
psycopg2==2.8.6
Werkzeug==1.0.1
click==7.1.2
I tried building and running with and without the proxy, and tried removing host='0.0.0.0', but nothing helped.
I managed to solve the problem by removing network=host and specifying my company's DNS server instead.
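For reference, a per-container DNS server can be set with docker run's --dns flag; a sketch with a placeholder address:
docker run -d -p 5001:5001 --dns=10.0.0.2 <imagename>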
I am trying to start an ASP.NET Core container hosting a website.
It does not expose the ports when using the following command line
docker run my-image-name -d -p --expose 80
or
docker run my-image-name -d -p 80
Upon startup, the log will show :
Now listening on: http://[::]:80
So I assume the application is not bound to a specific address.
But it does work when using the following docker compose file
version: '0.1'
services:
  website:
    container_name: "aspnetcore-website"
    image: aspnetcoredocker
    ports:
      - '80:80'
    expose:
      - '80'
You need to make sure to pass all options (-d -p 80) to the docker command before naming the image as described in the docker run docs. The notation is:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
So please try the following:
docker run -d -p 80 my-image-name
Otherwise the parameters are used as the command and args inside the container: your image's entrypoint runs with the additional params -d -p 80 instead of them being passed to the docker command itself. So in your example, the Docker daemon is simply not receiving the params -d and -p 80 and thus not mapping the port to the host. You can also notice this because, without -d being applied, the command runs in the foreground and you see the logs in your terminal.
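Note that -p 80 publishes container port 80 to a random ephemeral port on the host. If you want it reachable on host port 80 specifically, use the hostPort:containerPort form:
docker run -d -p 80:80 my-image-name
# docker port <container-id> shows which host port was actually bound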
I'm trying to publish a tmpnb server, but am stuck. Following the Quickstart at http://github.com/jupyter/tmpnb, I can run the server locally and access it at 172.17.0.1:8000.
However, I can't access the server remotely. I've tried adding -p 8000:8000 when I create the proxy container with the following command:
docker run -it -p 8000:8000 --net=host -d -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
I tried to access the server by browsing to the machine's IP address on port 8000, but my browser still returns "This site can't be reached."
The logs for proxy are:
docker logs --details 45d836f98450
08:33:20.981 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
08:33:20.988 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
To verify that I can access other servers running on the same machine, I tried the following command: docker run -d -it --rm -p 8888:8888 jupyter/minimal-notebook and was able to access it remotely at the machine's IP address on port 8888.
What am I missing?
I'm working on an Ubuntu 16.04 machine with Docker 17.03.0-ce
Thanks
Create a file named docker-compose.yml with the following content, then launch the containers with docker-compose up. Since the images are pulled directly, image errors will be avoided.
httpproxy:
  image: jupyter/configurable-http-proxy
  environment:
    CONFIGPROXY_AUTH_TOKEN: 716238957362948752139417234
  container_name: tmpnb-proxy
  net: "host"
  command: --default-target http://127.0.0.1:9999
  ports:
    - 8000:8000

tmpnb_orchestrate:
  image: jupyter/tmpnb
  net: "host"
  container_name: tmpnb_orchestrate
  environment:
    CONFIGPROXY_AUTH_TOKEN: $TOKEN$
  volumes:
    - /var/run/docker.sock:/docker.sock
  command: python orchestrate.py --command='jupyter notebook --no-browser --port {port} --ip=0.0.0.0 --NotebookApp.base_url=/{base_path} --NotebookApp.port_retries=0 --NotebookApp.token="" --NotebookApp.disable_check_xsrf=True'
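Then bring the services up and check from another machine (the IP placeholder stands for your machine's address, as in the question):
docker-compose up -d
curl http://<machine-ip>:8000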
A solution is available from the github.com/jupyter/tmpnb README.md file. At the end of the file under the heading "Development" three commands are listed:
git clone https://github.com/jupyter/tmpnb.git
cd tmpnb
make dev
These commands clone the tmpnb repository, cd into it, and run the "dev" target from the makefile contained in the repository. On my machine, entering those commands created a notebook on a temporary server that I could access remotely. Beware that the "make dev" command deletes potentially conflicting docker containers as part of the launching process.
Some insight into how this works can be gained by looking inside the makefile. When the configurable-http-proxy image is run on Docker, both ports 8000 and 8001 are published, and the tmpnb image is run with CONFIGPROXY_ENDPOINT=http://proxy:8001.
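For the curious, that boils down to something like the following (a rough sketch rather than the makefile's exact invocation; the token line follows the tmpnb README):
export TOKEN=$( head -c 30 /dev/urandom | xxd -p )
docker run -d -p 8000:8000 -p 8001:8001 -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
docker run -d --link proxy:proxy -e CONFIGPROXY_AUTH_TOKEN=$TOKEN -e CONFIGPROXY_ENDPOINT=http://proxy:8001 -v /var/run/docker.sock:/docker.sock jupyter/tmpnb python orchestrate.py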