How do I get my docker container running gunicorn / FastAPI server to respond to outside traffic?
This is how my container runs
docker run --detach --net host -v "/path/to/app/app":"/app" -it me/app:appfastapi_latest /start.sh
cat start.sh
#! /usr/bin/env sh
set -e
# Start Gunicorn
exec gunicorn -k "uvicorn.workers.UvicornWorker" -c /app/gunicorn_conf.py "main:app"
cat ./app/gunicorn_conf.py
...
host = "0.0.0.0"
port = "8000"
bind = f"{host}:{port}"
...
docker logs container_id
...
[2022-02-15 05:40:10 +0000] [1] [INFO] Listening at: http://127.0.0.1:8000 (1)
^^^ this was before a fix in the conf; now it's
0.0.0.0:8000
...
Curl container from host
curl localhost:8000/hw
{"message":"Hello World"}
This is how it should be. But when I do
curl domain:8000/hw
curl: (7) Failed to connect to domain port 8000: Connection refused
I do not know how to troubleshoot this. In the FastAPI main I have
ORIGINS = [
    "http://127.0.0.1:8000",
    "http://localhost:8000",
    "http://domain:8000",
]
app = FastAPI(title="MY API", root_path=ROOT_PATH, docs_url="/")
app.add_middleware(
    CORSMiddleware,
    allow_origins=ORIGINS,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
and I have the firewall open (I believe)
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 172.17.0.2 anywhere tcp dpt:mysql
ACCEPT tcp -- anywhere anywhere tcp dpt:8000
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
I opened port 8000 with:
sudo iptables -A INPUT -p tcp --dport 8000 -j ACCEPT
The system I am on is Debian 9.
docker --version
Docker version 19.03.15, build 99e3ed8919
Listening at: http://127.0.0.1:8000
means that Gunicorn is listening on the Docker container's localhost. A container's localhost is not accessible from the external network. You should bind to 0.0.0.0:8000 to make it reachable from outside.
Yes, you tried to set
host = "0.0.0.0"
port = "8000"
but a Gunicorn config file has no host or port parameters; Gunicorn only reads bind. Use bind = '0.0.0.0:8000' instead.
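For reference, a minimal gunicorn_conf.py along those lines might look like this (the worker count is illustrative, not from the question):
# gunicorn_conf.py - minimal sketch; Gunicorn reads `bind`,
# everything else here is ordinary Python
bind = "0.0.0.0:8000"
workers = 2  # illustrative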
And don't forget to publish the port with -p 8000:8000 when running the container. (With --net host, as in your run command, -p is ignored because the container already shares the host's network stack.)
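For example, a sketch based on the run command from the question, switched from --net host to the default bridge network with a published port:
docker run --detach -p 8000:8000 -v "/path/to/app/app":"/app" me/app:appfastapi_latest /start.sh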
Related
I am using a really simple docker-compose file from here:
https://github.com/brandonserna/flask-docker-compose
this is the docker compose file:
version: '3.5'
services:
  flask-app-service:
    build: ./app
    volumes:
      - ./app:/usr/src/app
      - .:/user/src
    ports:
      - 5555:9999
However, I can only reach the app from the outside network when I use port 80:
ports:
  - 80:9999
When I use, for example, port 8000, I can't reach the container from the outside network. From the local machine I can reach the app (tested with wget localhost:8000):
ports:
  - 8000:9999
iptables -L gives me this:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.2 tcp dpt:9999
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:http
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
I don't have enough reputation to comment, so I'm posting this as an answer.
From what it seems, it could be a firewall rule either on the host running the container or somewhere between that host and your machine.
To test which of the two it is, I'd use nmap with the --reason and --traceroute options. Since we have connectivity on another port, a complete block between your machine and the container is unlikely, so the traceroute probably won't reveal much, but it's worth checking just in case.
Also, if you have root access to the host machine, try stopping the iptables service to check whether it is the root cause of the block.
Also check with docker ps that the container port is bound to a port on the machine; it should look something like this:
0.0.0.0:<host-port>-><container-port>/tcp
where the placeholders are your port numbers.
If it doesn't, there may be a problem with the docker-compose up command, so try running the service with a simple docker run command instead.
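A minimal sketch of those checks, assuming the 8000:9999 mapping from the question (<host-ip> and <image> are placeholders):
# confirm Docker bound the host port
docker ps --format '{{.Names}}\t{{.Ports}}'
# probe from another machine and ask nmap to explain its verdict (--traceroute needs root)
nmap <host-ip> -p 8000 --reason --traceroute
# rule out docker-compose by running the service directly
docker run --rm -p 8000:9999 <image>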
Bear with me, I'm new to Docker...
I'm trying to get a Docker environment going on a Red Hat Linux server (7.6) and am having trouble accessing containers from a computer other than the host.
I got Docker installed no problem. Then, the first containers I installed were Portainer and the Portainer Agent:
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent
Seems peachy:
# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
973a685cfbe1 portainer/portainer "/portainer" 19 hours ago Up 2 minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp portainer
602537dc21ec portainer/agent "./agent" 45 hours ago Up 19 hours 0.0.0.0:9001->9001/tcp portainer_agent
And using # curl http://localhost:9000 connects just fine. However, the connection gets dropped when attempting to connect from another computer on the same network (in a different subnet, if that matters). I can connect to the server just fine (I'm managing it via SSH, and even tested netcat on port 9002 for good measure).
The iptables, if this helps:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:etlservicemgr
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:cslistener
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:irdmi
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
I've searched around a bit but keep finding conflicting answers (some suggesting that it should just work, and others suggesting that there's a lot more I've got left to learn and configure). I'm afraid that I'm fumbling in the dark. I gather that I need a route configured to forward host traffic to the container? Or an iptables rule? What exactly am I missing?
...Never mind.
On a lark, I tried connecting to the server from a device that's on premises, rather than from my computer, which is connected via VPN. The on-prem device connected fine, so the block was somewhere along the VPN path rather than in Docker.
I really can't understand why everything works fine on the same host, yet the ports show as filtered from outside the host (even from a virtual machine on the same host in bridged mode).
services:
  vpn:
    build: ./openvpn
    # cap_add, security_opt, and volume required for the image to function
    cap_add:
      - net_admin
    environment:
      OPENVPN_USERNAME: 'XXXXXX'
      OPENVPN_PASSWORD: 'XXXXXXXX'
      OPENVPN_PROVIDER: 'XXXXXXXXXXX'
      OPENVPN_CONFIG: 'Amsterdam'
      SQUID_EXT_PORT: "3001"
    networks:
      - dockerproxy
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    read_only: true
    tmpfs:
      - /run
      - /tmp
    restart: unless-stopped
    security_opt:
      - label:disable
    stdin_open: true
    tty: true
    ports:
      - "0.0.0.0:${SQUID_EXT_PORT:-3001}:3128"
    volumes:
      - /dev/net:/dev/net:z
      - /config
  squid:
    build: ./squid
    environment:
      SQUID_VERSION: '3.5.27'
      SQUID_CACHE_DIR: '/squid/var/cache/squid'
      SQUID_LOG_DIR: '/var/log/squid'
      SQUID_USER: 'proxy'
    tty: true
    network_mode: service:vpn
    volumes:
      - /srv/docker/squid/cache:/squid/var/cache/squid
    restart: unless-stopped
networks:
  dockerproxy:
    external:
      name: dockerproxy
I have checked that ports are open
netstat -tulpn | grep 3001
tcp6 0 0 :::3001 :::* LISTEN -
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cba39f7e94dc amsterdam_squid "/sbin/entrypoint.sh" 9 minutes ago Up 9 minutes amsterdam_squid_1
2856f2bb2b7c amsterdam_vpn "/usr/local/bin/open…" 9 minutes ago Up 9 minutes (healthy) 0.0.0.0:3001->3128/tcp amsterdam_vpn_1
I suspect it could be the Docker daemon's iptables configuration, which I have not changed since I'm not very confident with iptables.
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:domain /* managed by anbox-bridge */
ACCEPT udp -- anywhere anywhere udp dpt:domain /* managed by anbox-bridge */
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps /* managed by anbox-bridge */
ACCEPT udp -- anywhere anywhere udp dpt:bootps /* managed by anbox-bridge */
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere /* managed by anbox-bridge */
ACCEPT all -- anywhere anywhere /* managed by anbox-bridge */
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (3 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.28.0.2 tcp dpt:3128
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (3 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Can anyone more able than me see a reason why I can use the proxy on the same host:
nmap localhost -p 3001
Starting Nmap 7.60 ( https://nmap.org ) at 2020-01-10 17:06 CET
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00023s latency).
PORT STATE SERVICE
3001/tcp open nessus
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
but can't from another host:
nmap 192.168.1.14 -p 3001
Starting Nmap 7.80 ( https://nmap.org ) at 2020-01-10 10:54 EST
Nmap scan report for 192.168.1.14
Host is up (0.00076s latency).
PORT STATE SERVICE
3001/tcp filtered nessus
Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
I haven't noticed this behaviour before, and I have always been able to access all my docker services on the same machine except this proxy-vpn one.
The problem is that the VPN container does not really know about the network where it is hosted.
In other words, in order to get this working you have to add a route in the VPN container (since that container alone handles networking, thanks to the network_mode: service:vpn directive) telling it where to send reply packets, usually via the Docker host gateway. Otherwise your packets are simply dropped, which is typically what nmap's filtered state indicates. Oddly enough, the packets never even reach the squid server, so there are no logs on that side. This led me off road for a long time, but the packets really are not reaching the squid server, so I suppose I have only myself to blame for being misled.
An effective way to add the route that allows your packets to come back is:
/sbin/ip r a "${localNet}" via "${GW}" dev "${INT}"
I took this from the great docker-transmission-openvpn project:
https://github.com/haugene/docker-transmission-openvpn
where you can find the meaning of the variables in the line above.
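As a concrete illustration, with assumed values for a typical setup (192.168.1.0/24 for the LAN, as in the nmap output above, and 172.28.0.1 as the container's gateway on the dockerproxy bridge, inferred from the 172.28.0.2 address in the iptables dump):
# inside the VPN container: send replies for the LAN back through
# the Docker bridge gateway instead of the VPN tunnel
/sbin/ip route add 192.168.1.0/24 via 172.28.0.1 dev eth0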
Using Docker v17.03.1-ce on a Linux Mint machine, I'm unable to reach the container's web server (container port 5000) with my browser (localhost port 9000) on the host.
The container is launched with the command:
sudo docker run -d -p 9000:5000 --name myContainer imageName
I started by checking that the server (Flask) in my container was properly launched. It is.
I wanted to check that the server was working properly, so in the container, using curl, I sent a GET request to localhost, port 5000. The server returned the web page.
So the server is working, and the issue therefore lies somewhere in the communication between container and host.
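For reference, that in-container check can be run from the host like this (container name taken from the docker ps output below; assumes curl is available in the image):
docker exec -it jboost curl -s http://localhost:5000/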
I checked iptables, but am not sure what to make of it:
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:5000
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
sudo iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:5000
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9000 to:172.17.0.2:5000
Expected result: using my browser with the URL localhost:9000, I receive the homepage served from the container through port 5000.
Edit: adding docker logs and docker ps output.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59a20248c5b2 apptest "python3 src/jboos..." 12 hours ago Up 12 hours 0.0.0.0:9000->5000/tcp jboost
sudo docker logs jboost
* Serving Flask app "jboost_app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 310-292-856
127.0.0.1 - - [03/Jul/2019 04:12:54] "GET / HTTP/1.1" 200 -
Edit 2: adding the result of curl localhost:9000 on the host machine.
Connecting with my web browser doesn't work either, but curl gives a more specific message:
curl localhost:9000
curl: (56) Recv failure: Connection reset by peer
I found the solution in this post: https://devops.stackexchange.com/questions/3380/dockerized-flask-connection-reset-by-peer
The Docker networking and port forwarding were working correctly. The problem was with my Flask server: by default it is configured to accept requests from localhost only.
When launching your Flask server with the run command, you must specify host='0.0.0.0' so that it serves requests from any IP.
if __name__ == "__main__":
app.run(host='0.0.0.0')
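For completeness, a minimal self-contained sketch (the route and message are illustrative, not from the original project):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # placeholder homepage; the real app serves its own content
    return "Hello from the container"

if __name__ == "__main__":
    # bind to all interfaces so Docker's port mapping can reach the server
    app.run(host="0.0.0.0", port=5000)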
My application needs to access a web service that requires a whitelisted IP.
Since I have many app servers, I ended up with a solution like this:
iptables -t nat -A OUTPUT -p tcp --dport 80 -d a.webservices.dimension.abc.com -j DNAT --to 52.1.2.3:9999
That means outgoing connections to a.webservices.dimension.abc.com are routed to 52.1.2.3:9999 (a proxy server).
It works.
Now I have moved my application server (an EC2 VM) into a Docker container, but after doing this the iptables rule no longer works.
Here are the resulting rules:
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
DNAT tcp -- anywhere 1.2.3.4 tcp dpt:http to:52.1.2.3:9999
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- ip-172-17-0-0.us-west-2.compute.internal/16 anywhere
MASQUERADE tcp -- ip-172-17-0-2.us-west-2.compute.internal ip-172-17-0-2.us-west-2.compute.internal tcp dpt:http
Chain DOCKER (2 references)
target prot opt source destination
LOG all -- anywhere anywhere limit: avg 2/min burst 5 LOG level warning prefix "DOCKER CHAIN "
RETURN all -- anywhere anywhere
DNAT tcp -- anywhere anywhere tcp dpt:9090 to:172.17.0.2:80
As a result, the following command
docker exec -it some-nginx curl -iL 1.2.3.4
doesn't work
BUT
curl -iL 1.2.3.4
works
I don't know why. Proxying outgoing connections from a Docker container should be common, but I haven't found an existing solution.
Please help.
UPDATE
The following link shed some light:
https://serverfault.com/questions/734526/whitelisting-outgoing-traffic-from-docker-containers
This rule makes it work (traffic from containers enters the host on the docker0 interface, so it traverses the nat PREROUTING chain rather than OUTPUT, which only sees locally generated packets):
iptables -t nat -A PREROUTING -i docker0 -p tcp -d a.webservices.dimension.abc.com --dport 80 -j DNAT --to 52.1.2.3:9999
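With the rule in place, the check from inside the container should now behave like the one on the host:
docker exec -it some-nginx curl -iL 1.2.3.4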