docker nginx-proxy "Bad Gateway"

I have what I think is exactly the setup prescribed in the documentation. Easy peasy, development-only, no SSL ... But I'm getting "Bad Gateway."
docker exec ... cat /etc/nginx/conf.d/default.conf
... seems to correctly identify the internal IP address of the other container of interest ... which means that scanning for the VIRTUAL_HOST environment variable obviously worked:
upstream my_site.local {
[...]
server 172.16.238.5:80; # CORRECT!
}
When I do docker logs app_server I see ... silence. The server isn't being contacted.
When I do docker logs nginx_proxy I see this:
failed (111: connection refused) while connecting to upstream, client 172.16.238.1 [...] upstream: "172.16.238.5:80/"
The other container specifies EXPOSE 80 ... so, why is the connection being refused and who is refusing it?

Well, as I said above, I realized the error of my ways and did this:
VIRTUAL_PROTO=fastcgi
VIRTUAL_ROOT=/var/www
... and within the Dockerfile of the app container I apparently did need to EXPOSE 9000. (This being the default port used by php-fpm for FastCGI purposes.)
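For anyone hitting the same wall, here is a minimal docker-compose sketch of that fix. The nginxproxy/nginx-proxy image and the app image name are assumptions; the VIRTUAL_* variables and port 9000 are the pieces described above:

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy watch for VIRTUAL_HOST containers
  app:
    image: my-php-fpm-app                          # hypothetical image name
    expose:
      - "9000"                                     # php-fpm's default FastCGI port
    environment:
      - VIRTUAL_HOST=my_site.local
      - VIRTUAL_PROTO=fastcgi                      # tell nginx-proxy to speak FastCGI, not HTTP
      - VIRTUAL_ROOT=/var/www                      # document root handed to the FastCGI backend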

Related

GitLab can't reach PlantUML in Docker container

So I have GitLab EE server (Omnibus) installed and set up on Ubuntu 20.04.
Next, following the official GitLab PlantUML integration documentation, I started PlantUML in a Docker container with the following command:
docker run -d --name plantuml -p 8084:8080 plantuml/plantuml-server:tomcat
Next, I also edited the /etc/gitlab/gitlab.rb file and added the following line for the redirection, since my GitLab server uses SSL:
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
In the GitLab admin panel, under Settings -> General, when I expand PlantUML, I set the value of PlantUML URL in two ways:
1st approach:
https://HOSTNAME:8084/-/plantuml
Then, when trying to reach it in the browser at this address (https://HOSTNAME:8084/-/plantuml), I get:
This site can’t provide a secure connection.
HOSTNAME sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
2nd approach:
I also tried a different value in Settings -> General -> PlantUML -> PlantUML URL:
https://HOSTNAME/-/plantuml
Then, when trying to reach it in the browser at this address (https://HOSTNAME/-/plantuml), I get:
502
Whoops, GitLab is taking too much time to respond
In both cases when I trace logs with gitlab-ctl tail I get the same errors:
[crit] *901 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: CLIENT_IP, server: 0.0.0.0:443
[error] 1123593#0: *4 connect() failed (113: No route to host) while connecting to upstream
My question is which of the above two ways is correct to access PlantUML with the above configuration and is there any configuration I am missing?
I believe the issue is that you are running PlantUML in a Docker container and then trying to reach it from GitLab (on localhost) by name.
To check whether that is the issue, please change
proxy_pass http://plantuml:8080/
to
proxy_pass http://localhost:8080/
and try again with the first approach.
Your second approach seems to be missing the container port in the URL.
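For reference, a minimal sketch of the adjusted gitlab.rb line. Note that the Omnibus nginx runs on the host, so the port it can reach is the one published by the docker run command above (host port 8084 maps to container port 8080):

nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://localhost:8084/; \n}\n"

After editing gitlab.rb, run gitlab-ctl reconfigure for the change to take effect.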

NGINX and Spring Boot in Docker containers get 502 Bad Gateway

I deployed my Spring Boot project in a Docker container exposing port 8080, as well as an nginx server exposing port 80.
When I use
curl http://localhost:8080/heya/index
it returns normally
But when I use
curl http://localhost/heya/index
hoping I could reach it through the nginx proxy, but it failed. I checked the log, and it says:
24#24: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /heya/index HTTP/1.1", upstream: "http://127.0.0.1:8080/heya/index", host: "localhost"
Here is my nginx.conf
[nginx.conf was posted as a screenshot; the relevant proxy_pass line is quoted in the answer below]
I cannot figure it out and need help.
I finally got the answer!!
I ran the nginx container and the webapp container in host network mode, and it worked.
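A minimal sketch of that workaround, with hypothetical names; with --network host both processes share the host's network stack, so nginx reaches the app on localhost:8080 (note that -p port mappings are ignored in this mode):

docker run -d --network host --name webapp my-springboot-app   # hypothetical image
docker run -d --network host --name nginx-proxy nginx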
(111: Connection refused) while connecting to upstream
is saying Nginx can't connect to the upstream server.
Your
proxy_pass http://heya;
is telling Nginx that the upstream is talking the HTTP protocol [on the default port 80] on the hostname heya. Unless you're running multiple containers in the same Compose network, it's unlikely that the hostname would be heya.
If the Java application is running on port 8080 inside the same container, talking the HTTP protocol, the correct proxy_pass would be
proxy_pass http://localhost:8080;
(since localhost in the container's view is the container itself).
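For context, here is how that directive might sit in the nginx config; the /heya/ location path is an assumption based on the curl commands above:

location /heya/ {
    # the Spring Boot app listens on 8080 in the same network namespace;
    # with no URI part on proxy_pass, the original /heya/index path is passed through unchanged
    proxy_pass http://localhost:8080;
}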

Connection refused when uwsgi and nginx are in different containers

I am trying to set up two Docker containers (yes, separate, without docker-compose): one with nginx and one with uWSGI running a basic Flask app.
I run the containers in the same Docker network.
My nginx config for the site, added/linked to sites-enabled (everything else is default):
server {
    listen 80;
    server_name 127.0.0.1;

    location / {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:8080;
    }
}
My uwsgi.ini
[uwsgi]
module = app:app
master = true
processes = 2
socket = 0.0.0.0:8080
The uwsgi entry point in Docker looks like:
.local/bin/uwsgi --ini uwsgi.ini
The containers run fine on their own - uwsgi receives requests on 8080 and nginx serves requests as expected. However, when I try to access 127.0.0.1 I get a 502 status code, and nginx logs this error:
1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.4.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "uwsgi://0.0.0.0:8080", host: "127.0.0.1"
By googling I found suggestions to either use a single container with a some_socket.sock file, or to use docker-compose. Apparently the problem is with permissions, but I do not know how to diagnose or solve it.
I launch containers with these commands:
docker run --network app_network --name nginx --rm -p 80:80 my_nginx
docker run --network app_network --name flaskapp --rm -p 8080:8080 my_uwsgi
EDIT
You can simply use the hostname of the docker container in the uwsgi_pass directive as both docker containers are on the same subnet.
location / {
    include uwsgi_params;
    uwsgi_pass flaskapp:8080;
}
0.0.0.0 isn't the IP address of the server; it essentially tells the server to listen on every IP address the device has.
To connect to it from nginx, you will need to use the IP address of the container instead.
You can find the IP address of the container running uWSGI with the following command:
docker inspect CONTAINER_ID
Where CONTAINER_ID is the ID of the container you started uwsgi in.
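To pull out just the address, docker inspect also accepts a Go-template format string; a sketch, assuming the container was named flaskapp as in the run commands above:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' flaskapp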
From here you can update the nginx config as follows:
uwsgi_pass IP_ADDRESS:8080;
Where IP_ADDRESS is the one you found from the command above
You can also set the ip address of the container when you start it with the following option
--ip <ip>
Be careful, however, to ensure that the IP address you set is in the same subnet as the IPs Docker normally assigns.
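A sketch of that approach; --ip only works on a user-defined network with an explicit subnet, so the subnet below is an assumption for illustration:

docker network create --subnet 172.25.0.0/16 app_network
docker run --network app_network --ip 172.25.0.10 --name flaskapp --rm my_uwsgi

That said, relying on Docker's built-in DNS (uwsgi_pass flaskapp:8080; as in the EDIT above) is usually less brittle than pinning IP addresses.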

Docker refusing connection on port 443

I'm setting up my AWS EC2 instance. I wanted to let that instance be accessed via HTTPS, but I get a connection refused error.
This is what I tried:
Run docker pull abiosoft/caddy
Put Caddyfile in home folder
Run mkdir -p $HOME/caddycerts; chmod ugo+rwx $HOME/caddycerts;
Run docker run -d -e "CADDYPATH=/etc/caddycerts" -v $HOME/Caddyfile:/etc/Caddyfile -v $HOME/caddycerts:/etc/caddycerts -p 443:443 abiosoft/caddy
Run docker restart *dockerName*
My Caddyfile looks like this:
some-domain-name.com {
    tls myemail
    proxy / 172.17.0.1:9001 {
        header_upstream Host {host}
        header_upstream X-Real-IP {remote}
        header_upstream X-Forwarded-Proto {scheme}
    }
}
Error: curl: (7) Failed to connect to some-domain-name.com port 443: Connection refused
EC2 instance's security group has https enabled for port 443
When you use AWS, make sure that the port you are using is allowed and that you have the right to use it.
AWS security groups and ACLs don't give connection refused; they silently drop the packet. From the connection refused message it seems the service isn't running, or the server isn't listening on port 443.
Have you tried to telnet to it locally? Does it work?
telnet localhost 443
Error: curl: (7) Failed to connect to some-domain-name.com port 443: Connection refused
The above error message means that your web server is not running on the specified port 443. You can simply validate via telnet (which I see in James's answer above).
Your Caddyfile proxies to port 9001; the first line of a Caddyfile is always the address of the site to serve.
Without seeing the Dockerfile it's hard to pinpoint, but I'd say there's nothing configured to run on 443 in your application.
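A quick way to check from the instance itself whether anything is listening on 443 (a sketch; the container name is whatever you passed to --name):

sudo ss -tlnp | grep ':443'      # is any process listening on 443?
docker ps --filter publish=443   # is a container actually running and publishing 443?
docker logs <containerName>      # did caddy exit on startup, e.g. a Caddyfile or certificate error?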

docker nginx proxy nginx connect() failed (111: Connection refused) while connecting to upstream

I'm trying to run an nginx container as the main entry point for all of my websites and web services. I managed to run Portainer as a container, and I'm able to reach it from the internet. Right now I'm trying to reach a static website hosted by another nginx container, but I fail to do so - when I go to the URL, I get
502 Bad Gateway
I've tried adding the upstream section to my main nginx's config, but nothing changed (after every config change, I reload my main nginx service inside the container).
On the other hand, adding upstream is something I'd like to avoid if it's possible because spawning multiple different applications would require adding an upstream for each application - and that's much more work than I'd expect.
Here is my main nginx's configuration file:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location /portainer/ {
            proxy_pass http://portainer:9000/;
        }

        location /helicon/ {
            proxy_pass http://helicon:8001/;
        }
    }
}
Here is how I start my main nginx container:
docker run -p 80:80 --name nginx -v /var/nginx/conf:/etc/nginx:ro --net=internal-net -d nginx
Here is my static website's nginx configuration file:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name helicon;
        root /var/www/html/helicon;

        error_log /var/log/nginx/localhost.error.log;
        access_log /var/log/nginx/localhost.access.log;
    }
}
Here is the docker-compose file used to create and start that container:
version: '3.5'

services:
  helicon:
    build: .
    image: helicon
    ports:
      - "127.0.0.1:8001:80"
    container_name: helicon
    networks:
      - internal-net

networks:
  internal-net:
    external: true
I'm using the internal-net network to keep all apps on the same network, instead of the deprecated --link option for docker run.
When I go to http://my.server.ip.address/helicon I get a 502. Then I check the logs with docker logs nginx, and there is this information:
2018/06/24 11:15:28 [error] 848#848: *466 connect() failed (111: Connection refused) while connecting to upstream, client: Y.Y.Y.Y, server: , request: "GET /helicon/ HTTP/1.1", upstream: "http://172.18.0.2:8001/", host: "X.X.X.X"
The helicon container indeed has an IP address of 172.18.0.2.
What am I missing? Maybe my approach should be completely different from using networks?
Kind regards,
Daniel
To anyone coming across this page, here is my little contribution to your understanding of Docker networking.
I would like to illustrate with an example scenario.
We are running several containers with docker-compose, such as the following:
Docker container client
Docker container nginx reverse proxy
Docker container service1
Docker container service2
etc ...
To make sure you are set up correctly, check the following:
All containers are on the same network!
First, run docker network ls to find the network name for your stack.
Second, run docker network inspect [your_stack_network_name].
Note that the ports you expose in docker-compose have nothing to do with nginx reverse proxying!
Any ports you publish in your docker-compose file are available on your actual host machine, i.e. your laptop or PC, via your browser - BUT for proxying purposes you must point nginx at the actual ports of your services.
A walkthrough:
Let's say service1 runs inside container 1 on port 3000, and you mapped port 8080 in your docker-compose file like so: "8080:3000". With this configuration, on your local machine you can access the container via your browser on port 8080 (localhost:8080). BUT for the nginx reverse-proxy container, when trying to proxy to service1, port 8080 is not relevant! The dockerized nginx reverse proxy will use Docker DNS to resolve service1 to its IP within the Docker network, entirely independently of your local host.
To the nginx reverse proxy inside the Docker network, service1 only listens on port 3000!!!
So make sure to point nginx to the correct port!
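To make that concrete, a minimal sketch with placeholder names: the compose file publishes 8080 for your browser, while the nginx config targets the container-internal port 3000:

services:
  service1:
    image: my-service1          # hypothetical image
    ports:
      - "8080:3000"             # host:container - only the host side uses 8080
  proxy:
    image: nginx:latest
    ports:
      - "80:80"

And inside the proxy's nginx config:

location /service1/ {
    proxy_pass http://service1:3000/;   # container-internal port, resolved via Docker DNS
}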
Solved. I was working on this for hours thinking it was an nginx config issue. I modified nginx.conf endlessly but couldn't fix it. I was getting 502 Bad Gateway, and the error description was:
failed (111: Connection refused) while connecting to upstream
I was looking in the wrong place. It turns out that the HTTP server in my index.js file was listening on the hostname 'localhost'.
httpServer.listen(PORT, 'localhost', async err => {
This works fine on your development machine, but when running inside a container it must listen on the container's own hostname. My containers are networked, and in my case the container is named 'backend'.
I changed the url from 'localhost' to 'backend' and everything works fine.
httpServer.listen(PORT, 'backend', async err => {
I was getting the same error. In docker-compose.yml my service port was mapped to a different port (e.g. 1234:8080) and I was using the mapped port number (1234) inside nginx.conf.
However, inside the docker network, the containers do not use their mapped port numbers. In order to solve this problem, I changed the proxy_pass statement to use the correct port number (8080).
To make it clear, the working configuration is like this (check the port number used in nginx.conf!):
docker-compose.yml
version: '3.8'

services:
  ...
  web1:
    ports:
      - 1234:8080
    networks:
      - net1
  ...
  proxy:
    image: nginx:latest
    networks:
      - net1
  ...

networks:
  net1:
    driver: bridge
nginx.conf
...
location /api {
    ...
    proxy_pass http://web1:8080/;
}
I must thank user8458126 for pointing me in the right direction.
For me, I had overwritten my default.conf nginx file but mistyped the destination path for it in my Dockerfile, which meant nginx wasn't listening on the correct port and instead defaulted to port 80.
Long story short, make sure you're copying to the correct path.
What I had:
COPY ./default.conf ./etc/nginx/default.conf
Correct:
COPY ./default.conf ./etc/nginx/conf.d/default.conf
Hope this saves someone a few hours of racking their brain.
