I use an nginx container with this config:
set $ui http://ui:9000/backend;
resolver 127.0.0.11 valid=5m;
proxy_pass $ui;
This is needed because the "ui" container won't necessarily be up when nginx starts. This avoids the "host not found in upstream..." error.
But now I get a 404 even when the ui container is up and running (they are both in the same network defined in the docker-compose.yml). When I proxy_pass without the variable and without the resolver, and start the ui container first, everything works.
Now I am trying to find out why docker is failing to resolve it. Could I maybe manually add a fake route to http://ui that gets replaced when the ui container starts? Where would that go? Or can I fix the resolver?
The answer is basically the same as in this post:
https://stackoverflow.com/a/52319161/3093499
The only change is putting the resolver and set variable into the server body instead of the location.
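For reference, here is a minimal sketch of that variant, with the resolver and the variable moved into the server body (same ui service and Docker's embedded DNS as in the question; the valid=5m is carried over from there):

http {
    server {
        listen 80;

        # Docker's embedded DNS server; re-resolve the name every 5 minutes
        resolver 127.0.0.11 valid=5m;

        # using a variable defers the DNS lookup to request time,
        # so nginx starts even when "ui" is not up yet
        set $ui http://ui:9000/backend;

        location / {
            proxy_pass $ui;
        }
    }
}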
First you need to make sure that the port is exposed in the ui backend Dockerfile with EXPOSE 9000. Then you're going to want this as your config:
http {
    upstream ui {
        server ui:9000;
    }

    server {
        # whatever port your nginx reverse proxy is listening on
        listen 80;

        location / {
            proxy_pass http://ui/backend;
        }
    }
}
http {
    server {
        listen 443 ssl;

        ssl_certificate /etc/tls/tls.crt;
        ssl_certificate_key /etc/tls/tls.key;

        resolver 127.0.0.11;
        resolver_timeout 10s;

        access_log /var/log/nginx/access_log.log;

        location / {
            set $upstream_app homer;
            set $upstream_port 8080;
            set $upstream_proto http;
            # build the target from the variables so the resolver is actually used
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }
    }
}
It worked for me too.
I have an application running on port 4343. This is a single page app, so hitting http://myApp:4343 will dynamically redirect me to somewhere like http://myApp:4343/#/pageOne.
Both the nginx container and the myApp container are running on the same docker network, so they can resolve each other via container name.
I'm trying to proxy this via nginx with:
server {
    listen 80;
    server_name localhost;

    location /myApp {
        proxy_pass http://myApp:4343;
    }
}
How do I wildcard the rule?
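For reference, a minimal sketch of one common approach: trailing slashes on both the location and the proxy_pass, so everything under /myApp/ is forwarded with the prefix stripped (the #/pageOne fragment is resolved client-side and never reaches nginx):

server {
    listen 80;
    server_name localhost;

    location /myApp/ {
        # /myApp/anything is proxied as /anything
        proxy_pass http://myApp:4343/;
    }
}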
I'm trying to configure nginx to work as a reverse proxy for the ProGet application. Everything works fine if I use the IP in the browser. Unfortunately, for some reason it doesn't work with a domain name like example.com. I host the applications on a DigitalOcean droplet. I have DNS configured there too.
Nginx configuration below:
upstream proget {
    server proget;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://proget;
    }
}
I created the other containers according to the documentation: https://docs.inedo.com/docs/proget/installation/installation-guide/linux-docker
I ran into a similar problem in a k8s cluster before, and I fixed it by adding a resolver directive to my nginx config.
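A minimal sketch of that fix, assuming the cluster's DNS service IP is 10.96.0.10 and the service lives in the default namespace (both vary per cluster; check with kubectl get svc -n kube-system):

server {
    listen 80;

    # cluster DNS (assumed IP); don't cache lookups forever
    resolver 10.96.0.10 valid=10s;

    location / {
        # a variable forces per-request resolution
        set $backend http://proget.default.svc.cluster.local;
        proxy_pass $backend;
    }
}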
I am using Nginx as a reverse proxy. It is running as a containerized service in a Swarm cluster.
A while ago I discovered this weird behavior and I'm trying to wrap my head around it.
On my host I have three subdomains set up:
one.domain.com
two.domain.com
three.domain.com
In my Nginx server config I am specifying that the server_name I am targeting is three.domain.com, so I am expecting Nginx to only respond to requests targeting that subdomain.
events { worker_connections 1024; }

http {
    upstream service {
        server node:3000;
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name three.domain.com;

        [...... ssl settings here.......]

        location / {
            proxy_pass http://service;
            proxy_set_header Host $host;
        }
    }
}
What happens instead is that, rather than responding only to requests sent to three.domain.com, it responds to one.domain.com and two.domain.com as well (it routes them to three.domain.com).
If I add multiple server blocks specifically targeting subdomains one and two, it works as expected; it routes the requests where they belong.
That being said, the ideal behavior would be to respond only to the subdomains listed in the server_name section of a server block.
Nginx tests the request's "Host" header field (or the SNI hostname in the case of https) to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, nginx routes the request to the default server for this port. In your configuration above, the default server is the first (and only) one, which is nginx's standard default behaviour. If there are multiple server blocks, you can also set explicitly which server should be the default, with the default_server parameter in the listen directive.
So, you need to add another server block that is marked default_server and rejects unmatched requests (444 is nginx's non-standard "close the connection" status):
server {
    listen 443 ssl default_server;
    server_name default.example.net;
    ...
    return 444;
}
I have a reverse proxy with nginx set up using docker compose. It is fully working when I run all services together with docker-compose up. However, I want to be able to run individual containers, starting (docker-compose up service1) and stopping them independently of the proxy container. Here is a snippet from my current nginx config:
server {
    listen 80;

    location /service1/ {
        proxy_pass http://service1/;
    }

    location /service2/ {
        proxy_pass http://service2/;
    }
}
Right now, if I run service1, service2, and the proxy together, all is well. However, if I run the proxy and only service2, for example, I get the following error: host not found in upstream "service1" in /etc/nginx/conf.d/default.conf:13. The behavior I want here is for nginx to just return some HTTP error while a service is down, and to route to it appropriately once it comes up.
Is there any way to get this behavior?
Your issue is with nginx. It will fail to start if it cannot resolve one of the upstream hostnames.
In your case the docker service name will be unresolvable if the service is not up.
Try one of the solutions here, such as resolving at the location level.
(edit) The below example works for me:
events {
    worker_connections 4096;
}

http {
    server {
        location /service1 {
            resolver 127.0.0.11;
            set $upstream http://service1:80;
            proxy_pass $upstream;
        }

        location /service2 {
            resolver 127.0.0.11;
            set $upstream2 http://service2:80;
            proxy_pass $upstream2;
        }
    }
}
Sounds like you need to use load balancing. With load balancing, nginx shares the load across the listed servers/services; if one goes down, it should automatically use the others.
Example
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
Docs: http://nginx.org/en/docs/http/load_balancing.html
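For reference, a sketch of how that failover behavior can be tuned (max_fails and fail_timeout are standard parameters of the server directive; the hostnames are the same placeholders as above):

upstream myapp1 {
    # take a server out of rotation for 30s after 3 failed attempts
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com max_fails=3 fail_timeout=30s;
    server srv3.example.com max_fails=3 fail_timeout=30s;
}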
I'm trying to keep a Jenkins container (docker) behind an nginx reverse proxy. It works fine with the root path, https://example.com/, but it returns 502 Bad Gateway when I add a path segment, https://example.com/jenkins.
The docker container for jenkins is run like this
docker container run -d -p 127.0.0.1:8080:8080 jenkins/jenkins
Here is my config:
server {
    listen 80;
    root /var/www/html;
    server_name schoolcloudy.com www.schoolcloudy.com;

    location / {
        proxy_pass http://localhost:8000;
    }
}

# Virtual Host configuration for example.com
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name jenkins;

    location /jenkins {
        proxy_pass http://jenkins;
        proxy_redirect 127.0.0.1:8080 https://schoolcloudy.com/jenkins;
    }
}
Specify the Jenkins container's network with the --network=host flag when you run the container. This way the container will be able to interact with the host network; alternatively, use the container's IP explicitly in the nginx conf.
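For example, a sketch of that invocation (with --network=host the -p mapping is dropped, since the container shares the host's network stack; note that host networking works this way on Linux hosts):

docker container run -d --network=host jenkins/jenkins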
Good practice in such questions is to use the official documentation:
wiki.jenkins.io
I've configured Jenkins behind an Nginx reverse proxy several times; the wiki has worked fine for me each time.
P.S.: it looks like the proxy_pass value in your config should be changed to http://127.0.0.1:8080.
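A minimal sketch of what the corrected block might look like, assuming Jenkins stays published on 127.0.0.1:8080 as in the docker run command above and is started with the --prefix=/jenkins option so it serves under the sub-path (see the wiki for details):

server {
    listen 80;
    server_name schoolcloudy.com www.schoolcloudy.com;

    location /jenkins/ {
        proxy_pass http://127.0.0.1:8080/jenkins/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}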