I have this Dockerfile:
FROM nginx:latest
COPY devops/nginx_proxy.conf /etc/nginx/conf.d/default.conf
EXPOSE 8080
and a devops/nginx_proxy.conf:
server {
    listen 8080;
    client_max_body_size 32M;
    underscores_in_headers on;
}
Running the image built from this Dockerfile with docker run -p 8080:80 test and then testing with curl http://localhost/, I see this error:
curl: (7) Failed to connect to localhost port 80: Connection refused
Even more curious, curl http://localhost:8080/ returns this:
curl: (52) Empty reply from server
Why am I getting these errors?
With Docker you can bind container ports to host ports using the -p option.
General rule
docker run -p HOST_PORT:CONTAINER_PORT
Bind container port 8080 to port 80 of the host
docker run -p 80:8080 test
Ports which are not bound to a specific host interface (i.e., -p 80:80 instead of -p 127.0.0.1:80:80) are accessible from the outside.
Bind the port, limiting access to localhost
docker run -p 127.0.0.1:80:8080 test
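To verify, with the container started as above (a quick sketch; it assumes the image from the question is tagged test), curl the host port you published rather than the container port:
curl http://localhost/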
Below are the important details:
Dockerfile for nginx build
FROM nginx:latest
EXPOSE 443
COPY nginx.conf /etc/nginx/nginx.conf
nginx.conf
events {}
http {
    client_max_body_size 1000M;
    server {
        server_name _;
        location / {
            proxy_pass http://127.0.0.1:8000/;
            proxy_set_header Host $host;
        }
        listen 443 ssl;
        ssl_certificate cert/name.crt;
        ssl_certificate_key cert/name.key;
    }
}
Nginx docker command
docker run -dit -p 0.0.0.0:443:443 -v /etc/cert/:/etc/nginx/cert <MY NGINX CONTAINER> nginx -g 'daemon off;'
Docker command to start gunicorn server
docker run -dit -p 127.0.0.1:8000:8000 <My FASTAPI CONTAINER> gunicorn -w 3 -k uvicorn.workers.UvicornWorker -b 127.0.0.1:8000 server:app
Other details:
I expose port 8000 in my FastAPI container docker build
I run nginx docker command right before the gunicorn docker command
I am currently testing with the Python requests library and have set verify=False for the SSL configuration
Edit:
My issue relates most directly to this post:
From inside of a Docker container, how do I connect to the localhost of the machine?
Binding to 0.0.0.0:8000 for my gunicorn run and adding the flag --network="host" to my docker run nginx command solved my issue.
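For reference, a sketch of the adjusted commands described in this edit, keeping the image names from the question (with --network="host" the nginx container shares the host's network stack, so the -p 443:443 mapping is no longer needed):
docker run -dit --network="host" -v /etc/cert/:/etc/nginx/cert <MY NGINX CONTAINER> nginx -g 'daemon off;'
docker run -dit -p 127.0.0.1:8000:8000 <My FASTAPI CONTAINER> gunicorn -w 3 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 server:app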
I am trying to set up two Docker containers (yes, separate, without docker-compose): one with nginx and one with uWSGI running a basic Flask app.
I run the containers in the same Docker network.
My nginx site config, added/linked to sites-enabled (everything else is default):
server {
    listen 80;
    server_name 127.0.0.1;
    location / {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:8080;
    }
}
My uwsgi.ini
[uwsgi]
module = app:app
master = true
processes = 2
socket = 0.0.0.0:8080
The uWSGI entry point in Docker looks like
.local/bin/uwsgi --ini uwsgi.ini
The containers run fine on their own: uWSGI receives requests on 8080 and nginx receives the expected requests. However, when I try to access 127.0.0.1 I get a 502 status code and nginx logs this error:
1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.4.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "uwsgi://0.0.0.0:8080", host: "127.0.0.1"
By googling I found suggestions to either use a single container with a some_socket.sock file, or to use docker-compose. Apparently it is a problem with permissions, but I do not know how to diagnose or solve it.
I launch containers with these commands:
docker run --network app_network --name nginx --rm -p 80:80 my_nginx
docker run --network app_network --name flaskapp --rm -p 8080:8080 my_uwsgi
EDIT
You can simply use the hostname of the docker container in the uwsgi_pass directive as both docker containers are on the same subnet.
location / {
    include uwsgi_params;
    uwsgi_pass flaskapp:8080;
}
0.0.0.0 isn't the IP address of the server; it essentially tells the server to listen on every IP address the device has allocated.
To connect to it from nginx, you will need to use the IP address of the container instead.
You can find the IP address of the container running uWSGI with the following command:
docker inspect CONTAINER_ID
Where CONTAINER_ID is the ID of the container you started uwsgi in.
From here you can update the nginx config as follows:
uwsgi_pass IP_ADDRESS:8080;
Where IP_ADDRESS is the one you found from the command above
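If you only need the address itself, docker inspect also accepts a --format template; for a container on the default bridge network this would be (CONTAINER_ID is a placeholder, and on a user-defined network the address sits under .NetworkSettings.Networks instead):
docker inspect --format '{{ .NetworkSettings.IPAddress }}' CONTAINER_ID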
You can also set the IP address of the container when you start it with the following option
--ip <ip>
Be careful, however, to ensure that the IP address you set is in the same subnet as the IPs normally assigned.
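Note that --ip only takes effect on a user-defined network with a configured subnet; a minimal sketch, with the network name and addresses made up for illustration:
docker network create --subnet 172.20.0.0/16 app_network_static
docker run --network app_network_static --ip 172.20.0.10 --name flaskapp --rm my_uwsgi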
Following the tutorial on https://docs.docker.com/get-started/part2/.
I start my docker container with docker run -p 4000:80 friendlyhello
and see
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:8088/ (Press CTRL+C to quit)
But it's inaccessible from the expected path of localhost:4000.
$ curl http://localhost:4000/
curl: (7) Failed to connect to localhost port 4000: Connection refused
$ curl http://127.0.0.1:4000/
curl: (7) Failed to connect to 127.0.0.1 port 4000: Connection refused
Okay, so maybe it's not on my local host. Getting the container ID I retrieve the IP with
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 7e5bace5f69c
and it returns 172.17.0.2 but no luck! curl continues to give the same responses. I can confirm something is running on 4000....
lsof -i :4000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 94812 travis 18u IPv6 0x7516cbae76f408b5 0t0 TCP *:terabase (LISTEN)
I'm pulling my hair out on this. I've read through the troubleshooting guide and can confirm
* not on a proxy
* don't use a custom DNS
* I'm having issues connecting to Docker, not Docker connecting to my pip server.
Running app.py with python app.py, the server starts and I'm able to hit it. What am I missing?
Did you accidentally put port=8088 at the bottom of your app.py file? When you run this, the last line of your output says that your Python app is exposed on port 8088, not 80.
To confirm, you can either modify the app.py file and rebuild the image, or alternatively run docker run -p 4000:8088 friendlyhello, which maps your local port 4000 to port 8088 in the container.
Try to run it using:
docker run -p 4000:8088 friendlyhello
As you can see from the logs, your app starts on port 8088, but you connect host port 4000 to container port 80, where nothing is actually listening.
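Once the container is started with -p 4000:8088 as suggested above, the published port can be checked from the host (a quick sanity check, nothing more):
curl http://localhost:4000/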
I'm setting up my AWS EC2 instance. I want to access that instance via HTTPS, but I get a connection refused error.
This is what I tried:
Run docker pull abiosoft/caddy
Put the Caddyfile in the home folder
Run mkdir -p $HOME/caddycerts; chmod ugo+rwx $HOME/caddycerts;
Run docker run -d -e "CADDYPATH=/etc/caddycerts" -v $HOME/Caddyfile:/etc/Caddyfile -v $HOME/caddycerts:/etc/caddycerts -p 443:443 abiosoft/caddy
Run docker restart *dockerName*
My Caddyfile looks like this:
some-domain-name.com {
tls myemail
proxy / 172.17.0.1:9001 {
header_upstream Host {host}
header_upstream X-Real-IP {remote}
header_upstream X-Forwarded-Proto {scheme}
}
}
Error: curl: (7) Failed to connect to some-domain-name.com port 443: Connection refused
EC2 instance's security group has https enabled for port 443
When you use AWS, make sure that the port you are using is allowed and that you have the right to use it.
AWS security groups and ACLs don't give connection refused; they silently drop the packet. From the connection refused message, it seems the service isn't running or the server isn't listening on port 443.
Have you tried to telnet to it locally? Does it work?
telnet localhost 443
Error: curl: (7) Failed to connect to some-domain-name.com port 443: Connection refused
The above error message means that your web server is not running on the specified port 443. You can simply validate that via telnet (which I see in James's answer above).
From your Caddyfile, the proxy points to port 9001. The first line of the Caddyfile is always the address of the site to serve.
Without seeing the Dockerfile it's hard to pinpoint, but I'd say there's nothing configured to listen on 443 in your application.
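A quick way to check whether anything is listening on 443 on the instance at all (a hedged sketch, run on the EC2 host itself):
sudo ss -ltn | grep 443
docker ps --format '{{.Names}}: {{.Ports}}'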
Getting this error while curling the application IP:
curl: (56) Recv failure: Connection reset by peer - when hitting the Docker container
Do a small check by running:
docker run --network host -d <image>
If curl works well with this setting, make sure that:
You're mapping the host port to the container's port correctly:
docker run -p host_port:container_port <image>
Your service application (running in the container) is listening on 0.0.0.0 and not on something like 127.0.0.1 (localhost)
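As a concrete sketch of those two checks, with the image name and ports as placeholders:
docker run -p 8080:8080 <image>
curl http://localhost:8080/
The curl should succeed only if the application inside the container binds to 0.0.0.0 on the container port you mapped.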
I got the same error:
umesh@ubuntu:~/projects1$ curl -i localhost:49161
curl: (56) Recv failure: Connection reset by peer
In my case it was due to a wrong port number:
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 3000
--------|package.json
Then I was running:
docker run -p 49160:8080 -d umesh1/node-web-app1
Since the application was actually running on port 3000 (per index.js), nothing was listening on container port 8080, so I got the same error you are getting.
To solve the problem,
I deleted the last container/image that was created with the wrong port
and just changed the port number in index.js:
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 8080
--------|package.json
Then built the new image:
docker build -t umesh1/node-web-app1 .
and ran the image in detached mode with the exposed port:
docker run -p 49160:8080 -d umesh1/node-web-app1
Thus my application was running without any error, listening on port 49161.
I had the same error when binding to a port that is not listened on by any service inside the container.
So check the -p option:
-p 9200:9265
-p <host port>:<container port to bind to>
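To double-check an existing container, the published mappings can also be listed with (container name is a placeholder):
docker port <container>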