Nginx as a reverse proxy inside a Docker container

I'm trying to use nginx as a reverse proxy inside a container, pointing to a separate PHP application container.
My PHP container receives requests on external port 8080 and forwards them to internal port 80. I want nginx to listen on port 80 and forward requests to the PHP container on port 8080, but I'm having trouble redirecting the request.
My nginx Dockerfile:
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
My nginx default.conf:
server {
    listen 80;
    error_page 497 http://$host:80$request_uri;
    client_max_body_size 32M;
    underscores_in_headers on;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://php-container:8080;
        proxy_read_timeout 90;
        proxy_http_version 1.1;
    }
}
I've tried deploying it via docker-compose, but got the same error when curling to nginx.
When I curl http://localhost:8080 (the PHP application) and http://localhost:80 (nginx), there is log output in the docker-compose logs.
But when I curl through nginx, I get the error above:

You have a misconfiguration here:
    nginx (host:80 -> container:8080)
    php-app (host:8080 -> container:80)
Nginx can't reach another container via "localhost", because each container has its own network namespace.
I suggest you create a user-defined network (docker network create) and attach both containers to it. Then, in the nginx config, you can refer to the php-app container by name:
    proxy_read_timeout 90;
    proxy_redirect http://localhost:80/ http://php-container:8080/;
Besides, you can publish only the nginx port, and your backend will be safe.
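A minimal docker-compose sketch of this setup (image names are placeholders); compose attaches both services to a shared default network, where they resolve each other by service name:

```yaml
# docker-compose.yml sketch; image names are placeholders.
services:
  nginx:
    build: ./nginx            # the nginx Dockerfile above
    ports:
      - "80:80"               # only the proxy is published on the host
  php-container:
    image: my-php-app         # hypothetical PHP application image
    # no ports: entry, so it is reachable only from the compose network
```

Note that on the shared network nginx talks to the container's internal port, so the proxy_pass target would be http://php-container:80 rather than the host-mapped 8080.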

Related

Docker Gitlab container with nginx container

I have set up a GitLab container and nginx with proxy_pass, but it's not working.
For example, when I type example.com/gitlab, it proxy_passes to port 8086.
It successfully displays the login page, but without images, and the buttons don't work.
I found that if I add the port number back, it works normally: http://example.com:8086/projects/new
But through the proxy the address is http://example.com/projects/new, which cannot find the file and returns 404.
location /gitlab {
    proxy_pass http://example.com:8086;
}
how can I handle this case?
http://example.com/projects/new
http://example.com:8086/projects/new
Pass the GITLAB_HOST env into the container:
    docker run -e GITLAB_HOST=http://example.com/gitlab ....
and pass the request headers and the proxy port on to the proxied server in the nginx config:
location /gitlab {
    proxy_pass http://example.com:8086;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

gitlab container proxy pass by nginx doesn't work with push request

I have 3 containers on my Docker host, and I want to have GitLab as a subdomain.
My GitLab container's ports are:
    443/tcp, 0.0.0.0:10022->22/tcp, 0.0.0.0:10080->80/tcp
gitlab container has created with this command:
docker run --detach --name gitlab --restart=always \
    --publish 10022:22 --publish 10080:80 \
    --network nginx_network \
    --volume /srv/gitlab/config:/etc/gitlab \
    --volume /srv/gitlab/logs:/var/log/gitlab \
    --env 'EXTERNAL_URL=https://develop.domain.com' \
    --volume /srv/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest
my nginx config is:
upstream isa_fire {
    server isa_fire:8000;
}
upstream gitlab {
    server gitlab:80;
}
upstream gedata {
    server geoserver:8080;
}
server {
    listen 80;
    server_name domain.com www.domain.com;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        proxy_pass http://isa_fire;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
    location /static/ {
        alias /isa_fire/static/;
    }
    location /files/ {
        alias /isa_fire/;
    }
}
server {
    listen 80;
    server_name develop.domain.com www.develop.domain.com;

    location / {
        proxy_pass http://gitlab;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
server {
    listen 80;
    server_name geoserver.domain.com www.geoserver.domain.com;

    location / {
        proxy_pass http://gedata;
    }
}
client_max_body_size 240M;
Everything works fine in the browser with my GitLab, but when I try to push:
    git push -u origin master
I get this error after a few minutes:
    ssh: connect to host develop.domain.com port 22: Connection timed out
    fatal: Could not read from remote repository.
    Please make sure you have the correct access rights
    and the repository exists
There are two ways to solve this:
Change the Docker host's SSH port from 22 to something else, then recreate the GitLab container with the ports 22:22, 10080:80, 443 instead.
Or edit your project's .git/config file and add port 10022 at the end of the URL.
By the way, you can pull and push using the project's HTTP(S) URL and skip SSH entirely 😁
I think you need to expose port 22 too if you want to use SSH.
That means your nginx config must be extended with a second server that listens on port 22 and proxy-passes it to your GitLab container.
Port 22 must also be forwarded/opened in your router settings!
I hope this helps!
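As a sketch, forwarding SSH through nginx needs the stream module rather than an http server block, since SSH is raw TCP (the container name and network come from the question):

```nginx
# stream {} lives at the top level of nginx.conf, outside http {}.
stream {
    server {
        listen 22;              # the host's own sshd must move off port 22
        proxy_pass gitlab:22;   # gitlab container on nginx_network
    }
}
```

With this in place, the port 10022 mapping on the GitLab container is no longer needed, since nginx forwards the connection over the shared Docker network.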

Broken UI and multiple console errors when deploying Nexus3 on Docker and an nginx reverse-proxy

I am running Nexus3 in Docker as well as an nginx reverse-proxy on Docker. I have my own ssl certificate. I followed the instructions from Sonatype on how to properly handle using a reverse proxy and use the Nexus3 certificates. Problem is that this is what I see when I go to my repository:
These are my errors:
What could cause this? This is my nginx.conf:
server {
    listen 443 ssl http2;
    ssl_certificate /etc/ssl/confidential.com/fullchain.cer;
    ssl_certificate_key /etc/ssl/confidential.com/*.confidential.com.key;
    server_name internal.confidential.com;

    location /test {
        proxy_pass http://nexus:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto "https";
    }
}

How to implement (Certbot) ssl using Docker with Nginx image

I'm trying to implement SSL in my application using Docker with the nginx image. I have two apps: one for the back end (api) and the other for the front end (admin). It's working over HTTP on port 80, but I need to use HTTPS. This is my nginx config file...
upstream ulib-api {
    server 10.0.2.229:8001;
}
server {
    listen 80;
    server_name api.ulib.com.br;

    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    client_max_body_size 100M;
}
upstream ulib-admin {
    server 10.0.2.229:8002;
}
server {
    listen 80;
    server_name admin.ulib.com.br;

    location / {
        proxy_pass http://ulib-admin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    client_max_body_size 100M;
}
I found some tutorials, but they all use docker-compose. I need to set it up with a Dockerfile. Can anyone shed some light?
... I'm using an ECS instance on AWS, and the project is built with CI/CD.
This is just one of the possible ways:
First, issue a certificate using certbot. You will end up with a couple of *.pem files.
There are good tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on the host serving your domain, because certbot verifies that you own the domain through a number of challenges.
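For example, the standalone flow could look like this (domains taken from the question; port 80 on the host must be reachable from the internet while the HTTP-01 challenge runs):

```shell
# certonly obtains the certificates without touching any nginx config;
# in standalone mode certbot answers the challenge itself on port 80.
certbot certonly --standalone -d api.ulib.com.br -d admin.ulib.com.br
```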
Second, create the nginx containers. You will need a proper nginx.conf and the certificates linked into the containers; I use Docker volumes, but that is not the only way.
My nginx.conf looks like following:
http {
    server {
        listen 443 ssl;
        ssl_certificate /cert/<yourdomain.com>/fullchain.pem;
        ssl_certificate_key /cert/<yourdomain.com>/privkey.pem;
        ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ...
    }
}
Last, run nginx with the proper volumes connected:
    docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem
Inside nginx.conf you refer to these certificates with the ssl_* directives
You should publish port 443 to be able to connect
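Since the question asks for a Dockerfile rather than docker-compose, a minimal sketch (file names are assumptions) that bakes the config into the image and expects the certificates to be mounted at run time:

```dockerfile
FROM nginx:1.15-alpine
# nginx.conf contains the ssl_* directives shown above
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 443
```

Build it with docker build -t my-nginx . and run it with docker run -d -v $PWD/cert:/cert:ro -p 443:443 my-nginx, mirroring the docker run line above.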

failed (113: No route to host) while connecting to upstream

I want to use nginx (in a Docker container) as a reverse proxy. However, I ran into some problems.
Issue context:
    CentOS version: 7.4.1708
    nginx version: 1.13.12
    Docker version: 1.13.1
With the firewall open and port 80 exposed:
    nginx reverse proxy in a Docker container: failed (113: No route to host) while connecting to upstream
    nginx reverse proxy on the host: functions normally
nginx configuration:
server {
    listen 80;
    server_name web.pfneo.geo;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://172.18.0.249:88;
    }
    access_log logs/web.tk_access.log;
}
With the firewall closed:
    nginx reverse proxy in a Docker container: functions normally
    nginx reverse proxy on the host: functions normally
With the firewall open and the port not exposed:
    nginx app service in a Docker container (port 88): functions normally
It seems that this problem is caused by Docker?
Can Docker ignore the host firewall?
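No answer is recorded here, but on CentOS 7 this symptom is often firewalld dropping traffic on the Docker bridge. A hedged sketch of one common fix (the interface name docker0 is an assumption; user-defined networks get br-<id> names):

```shell
# Trust the Docker bridge interface so the containerized nginx can
# reach the upstream on port 88 through the host firewall.
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
```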
