I'm looking for a way to configure Nginx to access hosted services through a subdomain of my server. Those services and Nginx are instantiated with Docker-compose.
In short, when typing jenkins.192.168.1.2, I should reach the Jenkins instance hosted on 192.168.1.2, routed through the Nginx proxy.
Here is a quick look at what I currently have. It doesn't work without a top-level domain name, so it works fine on play-with-docker.com, but not locally with, for example, 192.168.1.2.
server {
server_name jenkins.REVERSE_PROXY_DOMAIN_NAME;
location / {
proxy_pass http://jenkins:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
To see what I am aiming for: https://github.com/Ivaprag/devtools-compose
My overall goal is to access remote docker containers without modifying clients' DNS service.
Unfortunately you can't have sub-domains of an IP address like that, so nginx has nothing to match against.
You would either have to modify the clients' hosts file (which you said you didn't want to do)...
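For completeness, that would mean an entry like the one below on every client machine, plus an nginx server_name matching the same name. This is only a sketch; jenkins.mylab.local is a made-up hostname here:
# /etc/hosts on each client machine
192.168.1.2   jenkins.mylab.local
nginx would then match it with server_name jenkins.mylab.local; instead of the IP-based subdomain.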
Or you can just set up nginx to proxy based on the path, like so:
location /jenkins {
proxy_pass http://jenkins:8080;
...
}
location /other-container {
proxy_pass http://other-container:8080;
}
which would allow you to access jenkins at 192.168.1.2/jenkins
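One caveat with the path-prefix approach: Jenkins itself has to know it is being served under /jenkins, otherwise its redirects and asset URLs break. A hedged sketch of what that could look like, assuming the official jenkins/jenkins image (which reads JENKINS_OPTS):
# docker-compose.yml (sketch)
jenkins:
  image: jenkins/jenkins:lts
  environment:
    - JENKINS_OPTS=--prefix=/jenkins

# and in nginx, keep the prefix when proxying:
location /jenkins/ {
    proxy_pass http://jenkins:8080/jenkins/;
}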
Or you can try serving your different containers on different ports, e.g.:
server {
listen 8081;
location / {
proxy_pass http://jenkins:8080;
...
}
}
server {
listen 8082;
location / {
proxy_pass http://other-container:8080;
...
}
}
And then access jenkins from 192.168.1.2:8081/
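If nginx itself runs as a compose service, remember that those extra listen ports also have to be published on the host, roughly like this (sketch):
nginx:
  image: nginx
  ports:
    - "80:80"
    - "8081:8081"
    - "8082:8082"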
If you are already using docker-compose I recommend using the jwilder nginx-proxy container.
https://github.com/jwilder/nginx-proxy
This allows you to add an unlimited number of web service containers behind the defined nginx proxy, for example:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- "/etc/nginx/vhost.d"
- "/usr/share/nginx/html"
- "/var/run/docker.sock:/tmp/docker.sock:ro"
- "nginx_certs:/etc/nginx/certs:rw"
nginx:
build:
context: ./docker/nginx/
dockerfile: Dockerfile
volumes_from:
- data
environment:
VIRTUAL_HOST: www.host1.com
nginx_2:
build:
context: ./docker/nginx_2/
dockerfile: Dockerfile
volumes_from:
- data
environment:
VIRTUAL_HOST: www.host2.com
apache_1:
build:
context: ./docker/apache_1/
dockerfile: Dockerfile
volumes_from:
- data
environment:
VIRTUAL_HOST: www.host3.com
The nginx-proxy container mounts the host's Docker socket in order to get information about the other running containers; if any of them has the environment variable VIRTUAL_HOST set, it is added to the generated nginx configuration.
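Since the original goal was to avoid touching the clients' DNS, note that you can still exercise this name-based routing from any machine by overriding the Host header. The hostname here is just the VIRTUAL_HOST value from the example above:
# ask the proxy on 192.168.1.2 for a specific virtual host without touching DNS
curl -H "Host: www.host1.com" http://192.168.1.2/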
I was trying to configure subdomains in nginx (on the host), for two virtual hosts in one LXC container.
The way it worked for me:
For apache (in the container), I created two virtual hosts: one on port 80 and the other on port 90.
To enable port 90 in apache2 (container), it was necessary to add the line "Listen 90" below "Listen 80" in /etc/apache2/ports.conf.
For nginx (host machine), I configured two domains, both on port 80, by creating independent .conf files in /etc/nginx/sites-available and a symbolic link for each file in /etc/nginx/sites-enabled.
In the first nginx file, myfirstdomain.conf, redirect to http://my.contai.ner.ip:80.
In the second nginx file, myseconddomain.conf, redirect to http://my.contai.ner.ip:90.
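For reference, a minimal sketch of what the first file could contain; myfirstdomain.com and the container IP are placeholders, as above:
# /etc/nginx/sites-available/myfirstdomain.conf (sketch)
server {
    listen 80;
    server_name myfirstdomain.com;

    location / {
        proxy_pass http://my.contai.ner.ip:80;
        proxy_set_header Host $host;
    }
}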
That was it for me!
Related
I've searched a lot of online material, but I wasn't able to find a solution to my problem. I'll try to make it as clear as possible. I think I'm missing something, and maybe someone with more experience on the server configuration side has the answer.
I have a MERN stack app and I'm trying to deploy it on a DigitalOcean droplet, using Docker. All good so far, everything runs as it should, except for the fact that I'm not able to access my app by its domain. It works perfectly if I use the IP of the droplet.
What I've checked so far:
checked my ufw status and I have both HTTP and HTTPS enabled
the domain is from GoDaddy and it's live, linked with the proper nameservers from DigitalOcean.
in the Domains section on DigitalOcean everything is set as it should be; I have the proper CNAME records pointing to the IP of my droplet
a direct ping to my domain works fine (it returns the correct IP)
also checked DNS LookUp tools and everything seems to be linked just fine
When it comes to the Docker containers, I have 3 of them: client, backend and nginx.
This is what my docker-compose file looks like:
version: '3'
services:
nginx:
container_name: jtg-nginx
depends_on:
- backend
- client
restart: always
image: host-of-my-image-nginx:latest
networks:
- local-net
ports:
- '80:80'
backend:
container_name: jtg-backend
image: host-of-my-image-backend:latest
ports:
- "5000:5000"
volumes:
- logs:/app/logs
- uploads:/app/uploads
networks:
- local-net
env_file:
- .env
client:
container_name: jtg-client
stdin_open: true
depends_on:
- backend
image: host-of-my-image-client:latest
networks:
- local-net
env_file:
- .env
networks:
local-net:
driver: bridge
volumes:
logs:
driver: local
uploads:
driver: local
I have two instances of Nginx. One is used inside the client container and the other runs in its own container.
This is the default.conf from the client:
server {
listen 3000;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
}
Now comes the most important part. This is the default.conf used inside the main Nginx container:
upstream client {
server client:3000;
}
upstream backend {
server backend:5000;
}
server{
listen 80;
server_name my-domain.com www.my-domain.com;
location / {
proxy_pass http://client;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /backend {
rewrite /backend/(.*) /$1 break;
proxy_pass http://backend;
}
}
I really don't understand what's wrong with this configuration, and I think it's something very small that I'm missing.
Thank you!
If you want to set up a domain name in front, you'll need a webserver instance that proxy_passes requests for your hostname to your container.
So this is what you may want to do:
server{
listen 80;
server_name my-domain.com www.my-domain.com;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /backend {
rewrite /backend/(.*) /$1 break;
proxy_pass http://backend;
}
}
The mystery was solved. After adding an SSL certificate everything works as it should.
I have a containerized app that uses nginx as a reverse proxy. If I map nginx ports as 1337:80 I am only able to reach my website at <MY_INSTANCE_IP>:1337. If I instead map nginx ports as 80:80 I am able to reach my website at <MY_INSTANCE_IP>. Changing the ports in my docker-compose file worked but I'd like to know why.
My docker-compose config:
version: '3.7'
services:
web:
build:
context: .
dockerfile: ./compose/production/flask/Dockerfile
image: flask_web
command: /start
volumes:
- .:/app
expose:
- 5000
env_file:
- .env/.prod
environment:
- FLASK_APP=app
nginx:
build: ./compose/production/nginx
ports:
- 80:80
depends_on:
- web
My nginx config:
upstream flask-app {
server web:5000;
}
server {
listen 80;
server_name <MY_INSTANCE_IP>;
location / {
proxy_pass http://flask-app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
# client_max_body_size 20M;
}
}
So, you have nginx set to listen on port 80 (the default HTTP port). When you set the port for your nginx service in docker-compose, the first number is the port on which Docker will "publish" the service on the host, and the second number, after the colon (:), is the port the server is listening on "inside" the container. See:
https://docs.docker.com/config/containers/container-networking/#published-ports for more detail.
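To make the direction of the mapping concrete, this is roughly how the two variants from the question behave (the IP is the placeholder from the question):
# with ports: - "1337:80"
curl http://<MY_INSTANCE_IP>:1337/    # works: host port 1337 -> container port 80
curl http://<MY_INSTANCE_IP>/         # fails: nothing is published on host port 80

# with ports: - "80:80"
curl http://<MY_INSTANCE_IP>/         # works: host port 80 (implicit for http) -> container port 80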
If I want to set up nginx with my docker containers, one option is to set up the nginx instance in my docker-compose.yml and link the nginx container to all application containers.
The drawback of this approach, however, is that the docker-compose.yml becomes server-level, since only one nginx container can expose ports 80/443 to the internet.
I'm interested in being able to define several docker-compose.yml files on the same server, but still easily expose the public-facing containers in each compose file via a single server-specific nginx container.
I feel this should be pretty easy, but I haven't been able to find a good resource or example for this.
First, you need to create a network for nginx and the proxied containers:
docker network create nginx_network
Next, configure the nginx container in a compose file like this:
services:
nginx:
image: your_nginx_image
ports:
- "80:80"
- "443:443"
networks:
- nginx_network
networks:
nginx_network:
external: true
After that you can run proxied containers:
services:
webapp1:
image: ...
container_name: mywebapp1
networks:
- nginx_network # proxy and app must be in same network
- webapp1_db_network # you can use additional networks for some stuff
database:
image: ...
networks:
- webapp1_db_network
networks:
nginx_network:
external: true
webapp1_db_network: ~ # this network won't be accessible from outside
Also, to make this work you need to configure your nginx properly:
server {
listen 80;
server_name your_app.example.com;
# Docker DNS
resolver 127.0.0.11;
location / {
# hack to prevent nginx from resolving the container's host at start-up
set $docker_host "mywebapp1";
proxy_pass http://$docker_host:8080;
}
}
You need to tell nginx to use Docker's DNS, so it will be able to access containers by their names.
But note that if you start the nginx container before the others, nginx will try to resolve the other containers' hostnames at startup and fail, because those containers are not running yet. The hack of placing the host into a variable works around this: with it, nginx won't try to resolve the host until it receives a request.
With this combination you can have nginx always up, while starting and stopping proxied applications independently.
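In practice that means something like the following, run from each project directory (the directory names here are just examples):
# one-time setup
docker network create nginx_network

# start the proxy (stays up permanently)
cd nginx-proxy && docker-compose up -d

# start and stop applications independently; nginx keeps running
cd ../webapp1 && docker-compose up -d
docker-compose down    # later, when webapp1 is no longer needed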
Update:
If you want a more dynamic solution, you can modify the nginx config as follows:
server {
listen 80;
resolver 127.0.0.11;
# define server_name with regexp which will read subdomain into variable
server_name ~^(?<webapp>.+)\.example\.com;
location / {
# use variable from regexp to pass request to desired container
proxy_pass http://$webapp:8080;
}
}
With this configuration, a request to webapp1.example.com will be passed to the container "webapp1", webapp2.example.com to "webapp2", etc. You only need to add the DNS records and run the app containers with the right names.
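For example, to expose a new app at webapp2.example.com you only need the DNS record and a container whose name matches the subdomain; a sketch with a hypothetical image name:
docker run -d --name webapp2 --network nginx_network your_webapp_image
# a request with Host: webapp2.example.com now resolves $webapp to "webapp2"
# and is proxied to http://webapp2:8080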
The accepted answer is great, but since I am in the trenches with this right now, I'm going to expand on it with my debugging steps in the hope that it helps someone (my future self?).
Docker-compose projects often use nginx as a reverse proxy to route HTTP traffic to the other Docker services. nginx was a service in my projectfolder/docker-compose.yml and was connected to two Docker networks.
One was the default network created when I ran docker-compose up on projectfolder/docker-compose.yml (it is named projectfolder_default, and services connect to it by default UNLESS you give a service a networks property listing another network; in that case, make sure you add - default to the list). When I ran docker network ls I saw projectfolder_default in the list, and when I ran docker network inspect projectfolder_default I saw the nginx container, so everything was good.
The other was a network called my_custom_network that I set up myself. I had a startup script that created it if it did not exist, using https://stackoverflow.com/a/53052379/13815107. I needed it in order to talk to the web service in otherproject/docker-compose.yml. I had correctly added my_custom_network to:
nginx service's networks list of projectfolder/docker-compose.yml
bottom of projectfolder/docker-compose.yml
web service's networks in otherproject/docker-compose.yml
bottom of otherproject/docker-compose.yml
The network showed up in docker network ls and had the right containers according to docker network inspect my_custom_network.
However, I assumed that the proxy_pass to http://web in my server.conf would map to the Docker service web in projectfolder_default. I was mistaken. To test this I opened a shell on the nginx container (docker exec -it nginx sh). When I ran ping web (you may need apt-get update, apt-get install iputils-ping) it succeeded, but it printed a hostname containing my_custom_network, which is how I figured out the mistake.
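The same debugging steps as commands, for anyone following along (the resolved name is what gave the mistake away):
# open a shell in the running nginx container
docker exec -it nginx sh
# inside the container (install ping first if it is missing)
apt-get update && apt-get install -y iputils-ping
ping web    # succeeds, but the resolved name shows my_custom_network, not projectfolder_default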
Update: I tried using http://web.my_custom_network in server.conf.template and it routed great, but my webapp (Django-based) choked on underscores in the URL. I renamed web to web2 in otherproject/docker-compose.yml and then used something like docker stop otherproject_web and docker rm otherproject_web to get rid of the bad one.
projectfolder/docker-compose.yml
services:
# http://web did NOT map to this service!! Use http://web.projectfolder_default or change the names
web:
...
nginx:
...
links:
- web
networks:
- default
- my_custom_network
...
networks:
  my_custom_network:
    external: true
otherproject/docker-compose.yml
services:
# http://web connected to this service instead. You could use http://web.my_custom_network to call it out instead
web:
...
networks:
- default
- my_custom_network
...
networks:
  my_custom_network:
    external: true
projectfolder/.../nginx/server.conf.template (next to Dockerfile)
...
server {
...
location /auth {
internal;
# This routed to the wrong 'web'
proxy_pass http://web:9001;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
}
location / {
alias /data/dist/;
}
location /robots.txt {
alias /robots.txt;
}
# Project Folder backend
location ~ ^/(api|login|logout)/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300s;
proxy_read_timeout 300s;
# This routed to the wrong 'web'
proxy_pass http://web:9001;
}
# Other project UI
location /other-project {
alias /data/other-project-client/dist/;
}
# Other project Django server
location ~ ^/other-project/(rest)/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300s;
proxy_read_timeout 300s;
# nginx will route this, but it won't work with some frameworks like Django (they don't like underscores in hostnames)
# I would rename the service web2 in the yml and use http://web2 instead
proxy_pass http://web.my_custom_network:8000;
}
}
I have a DOCKER_HOST specified by:
DOCKER_HOST=tcp://g3-docker-1:2375
secured by TLS. On this host I can have quite a few "jboss/wildfly" containers in different configurations, loaded with different apps. They can be started on request by people for software testing purposes. The following docker-compose file is used:
version: '2'
services:
wildfly:
build:
dockerfile: Dockerfile.wildfly
context: .
ports:
- "8080:8080"
depends_on:
- logvolume
- mariadb
volumes_from:
- logvolume
mariadb:
image: mariadb:latest
ports:
- "3307:3307"
environment:
- MYSQL_ROOT_PASSWORD=secret
logvolume:
build:
dockerfile: Dockerfile.logvolume
context: .
volumes:
- /opt/jboss/wildfly/standalone/log:/opt/jboss/wildfly/standalone/log
I am planning to build quite a few containers, each one with different preloaded data and different webapps inside WildFly.
When I start these containers, each one is assigned an IP address inside the <dirname>_default network (bridged). JBoss is reachable from the outside world at $DOCKER_HOST:8080 and mariadb is reachable as well; so far, so good...
But what if I have a couple of these? Do I have to map different ports to the different WildFly instances, or is there another way to reach the dockerized WildFly instances from the outside, e.g. via the container id?
I am now using nginx as a reverse proxy in order to decide, based on the URL, which WildFly instance to talk to.
This needs an additional service in docker-compose.yml like this:
reverseproxy:
build:
dockerfile: Dockerfile.nginx
context: .
ports:
- 80:80
depends_on:
- wildfly
and the following nginx.conf:
worker_processes 1;
events { worker_connections 1024; }
http {
sendfile on;
upstream docker-wildfly {
server wildfly:8080;
}
server {
listen 80;
location /wildfly/ {
proxy_pass http://docker-wildfly/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
Each WildFly instance gets its own location block, as sketched below.
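For a second WildFly container the pattern simply repeats; here is a rough sketch, where wildfly2 is a hypothetical second service in the compose file:
upstream docker-wildfly2 {
    server wildfly2:8080;
}

# inside the same server block:
location /wildfly2/ {
    proxy_pass http://docker-wildfly2/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}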
I'm running nginx via lets-nginx in the default nginx configuration (as per the lets-nginx project) in a docker swarm:
services:
ssl:
image: smashwilson/lets-nginx
networks:
- backend
environment:
- EMAIL=sas#finestructure.co
- DOMAIN=api.finestructure.co
- UPSTREAM=api:5000
ports:
- "80:80"
- "443:443"
volumes:
- letsencrypt:/etc/letsencrypt
- dhparam_cache:/cache
api:
image: registry.gitlab.com/project_name/image_name:0.1
networks:
- backend
environment:
- APP_SETTINGS=/api.cfg
configs:
- source: api_config
target: /api.cfg
command:
- run
- -w
- tornado
- -p
- "5000"
api is a flask app that runs on port 5000 on the swarm overlay network backend.
When services are initially started up everything works fine. However, whenever I update the api in a way that makes the api container move between nodes in the three node swarm, nginx fails to route traffic to the new container.
I can see in the nginx logs that it sticks to the old internal IP, for instance 10.0.0.2, when the new container is now on 10.0.0.4.
In order to make nginx 'see' the new IP I need to either restart the nginx container or docker exec into it and kill -HUP the nginx process.
Is there a better and automatic way to make the nginx container refresh its name resolution?
Thanks to #Moema's pointer I've come up with a solution to this. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes:
resolver 127.0.0.11 ipv6=off valid=10s;
set $upstream http://${UPSTREAM};
proxy_pass $upstream;
This uses docker swarm's resolver with a TTL and sets a variable, forcing nginx to refresh name lookups in the swarm.
Remember that when you use set like this, you need to build the entire upstream URL yourself.
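Put together, the relevant part of the lets-nginx server block would look roughly like this. This is only a sketch; the real generated config also carries the certificate directives, and api:5000 is the UPSTREAM value from the compose file above:
server {
    listen 443 ssl;
    # ... certificate directives generated by lets-nginx ...

    resolver 127.0.0.11 ipv6=off valid=10s;

    location / {
        set $upstream http://api:5000;   # ${UPSTREAM} after substitution
        proxy_pass $upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}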
I was using nginx in a compose file to proxy a Zuul gateway:
location /api/v1/ {
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_pass http://rs-gateway:9030/api/v1/;
}
location /zuul/api/v1/ {
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_pass http://rs-gateway:9030/zuul/api/v1/;
}
Now with Swarm it looks like this:
location ~ ^(/zuul)?/api/v1/(.*)$ {
set $upstream http://rs-gateway:9030$1/api/v1/$2$is_args$args;
proxy_pass $upstream;
# Set headers
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
}
Regexes are handy, but don't forget to append the GET params to the generated URL yourself ($is_args$args).