I have an uwsgi ini file that should redirect traffic to a different host.
[uwsgi]
route = .* http:somehost:8000
Unfortunately, the hostname somehost cannot be resolved by uwsgi. However, it is listed in /etc/hosts and dnsmasq is running.
Is there a way to configure uwsgi to use dnsmasq to resolve the name?
You can resolve a domain name in the config using the resolve directive.
Here is how it worked out for me:
resolve = api_backend=api_backend
route = ^/api/ http:%(api_backend):9000
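Applied to the original example, a minimal sketch might look like this (the placeholder name backend is arbitrary; somehost and the port are taken from the question):

[uwsgi]
; resolve the name once at startup and store the IP in %(backend)
resolve = backend=somehost
route = .* http:%(backend):8000

Because resolve uses the system resolver at startup, /etc/hosts and dnsmasq should apply, and routing no longer depends on uwsgi's own runtime name resolution.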
Here is what I want to achieve:
On Server A there is Docker installed. There are, let's say, 3 containers:
Container 1: App1, IP: 172.17.0.2, network: mynet, a simple HTML welcome page, accessible on port 80
Container 2: App2, IP: 172.17.0.3, network: mynet, a wiki system (DokuWiki), accessible on port 8080
Container 3: App3, IP: 172.17.0.4, network: mynet, something else
As you can see, every container is in the same Docker network, and the containers are accessible on different ports.
The clients on the same network need to access all of the containers. I can't use DNS in this case (reverse proxy via vhost), because I do not control the DNS. My goal:
Container 1 : accessible via http://myserver.home.local/app1/
Container 2 : accessible via http://myserver.home.local/app2/
Container 3 : accessible via http://myserver.home.local/app3/
What I did to solve this is the following: add another container with nginx, and proxy_pass to the other containers. I use the official nginx image (docker pull nginx), then I mount my custom config into the /etc/nginx/conf.d directory. My config looks as follows:
server {
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    location /app1/ {
        proxy_pass http://app1/;
    }
    location /app2/ {
        proxy_pass http://app2:8080/;
    }
    location /app3/ {
        proxy_pass http://app3/;
    }
}
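For reference, a rough sketch of how such a proxy container can be started (the host path to the config file is an assumption; the network and port mapping follow the setup above):

# assumption: proxy.conf is the config shown above, stored at /srv/nginx/
docker run -d --name reverse-proxy --network mynet -p 80:80 \
    -v /srv/nginx/proxy.conf:/etc/nginx/conf.d/default.conf:ro \
    nginx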
app1 works. app2 does not: it prints some ugly HTML output, and in the browser web console I see a lot of 404s. I guess that has something to do with reverse proxying / rewriting in nginx, because app2 is DokuWiki. I also added the nginx equivalent of Apache's ProxyPassReverse (proxy_redirect), without success.
I just do not know what to do in this case, or where to start. How can I know what needs to be rewritten? I hope someone can help me.
As mentioned in the comments:
As soon as I use the DokuWiki basedir / baseurl config, the proxy works as expected. To do so, edit the dokuwiki.php configuration file located in the conf folder:
conf/dokuwiki.php
Change the following settings to match your environment:
$conf['basedir'] = '/dokuwiki';
$conf['baseurl'] = '';
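For the layout in the question above, where the wiki is proxied under /app2/, the corresponding values would presumably be:

// assumption: matches the /app2/ location prefix used in the nginx config
$conf['basedir'] = '/app2/';
$conf['baseurl'] = '';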
I have a docker-compose file that contains nginx, PHP-FPM, and Varnish.
My nginx works this way:
User connects to the website => Nginx (443) => Varnish (80) => Nginx (8080) => PHP-FPM (9000) or others...
My project works well with the "depends_on" config inside docker-compose.
But with Docker Swarm, depends_on is ignored.
That's where my problems start...
My Varnish container needs nginx to be running, or it crashes, due to the hostname defined at the top of the configuration file:
# varnish config file
backend default {
    .host = "nginx";
    .port = "8080";
    .connect_timeout = 10s;
    .first_byte_timeout = 10s;
    .between_bytes_timeout = 10s;
}
And my nginx needs Varnish to be running, or it crashes too...
# pass to varnish
location / {
    proxy_pass http://varnish;
}
upstream varnish {
    server varnish:80;
}
Soooo, Varnish crashes because nginx is not up, and nginx crashes because Varnish is not up.
Is there any solution for this problem?
The Varnish container fails because the nginx hostname cannot be resolved to an IP address.
It is possible in docker-compose.yml to assign static IP addresses to the various containers. Consider using fixed IP addresses and putting one of them in the .host property of your Varnish backend.
This way you avoid the cyclical dependency. Even if the IP address doesn't exist yet, Varnish won't complain: Varnish only connects to the backend when a cache miss or cache bypass occurs.
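A rough sketch of what that can look like in docker-compose.yml (the subnet and address are assumptions; pick values that fit your environment):

version: "3"
services:
  nginx:
    image: nginx
    networks:
      mynet:
        ipv4_address: 172.28.0.10   # assumed fixed address for the nginx service
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.28.0.0/16     # assumed subnet

The Varnish backend would then use .host = "172.28.0.10"; instead of the nginx hostname.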
Similar questions appear on this site, but I cannot figure this one out. I am running a dockerized config. I can hit my site at benweaver-VirtualBox:3000/dev/test/rm successfully. But I want to be able to hit the site without the port: benweaver-VirtualBox/dev/test/rm.
The port does not seem to be handled in my proxy_redirect. I tried commenting out the default nginx configuration, to no effect; because I am running a dockerized config, I thought the default config might not be relevant anyhow. It is true that netstat -tlpn | grep :80 does not find nginx, but the docker-compose config has nginx on port 80 both inside the container and as the published port. The config:
server {
    listen 80;
    client_max_body_size 200M;
    location /dev/$NGINX_PREFIX/rm {
        proxy_pass http://$PUBLIC_IP:3000/dev/$NGINX_PREFIX/rm;
    }
}
PUBLIC_IP is set to the hostname of the box: benweaver-VirtualBox. This hostname is defined in /etc/hosts:
127.0.0.1 benweaver-VirtualBox
I suspect the problem lies with my hostname.
What config of my hostname, benweaver-VirtualBox, is preventing a successful proxy_pass from a portless URL to benweaver-VirtualBox (127.0.0.1):3000, where my app is running?
I got things to work. Here are some takeaways:
(1) If you use an address that includes a port, such as my benweaver-VirtualBox:3000/dev/test/rm, you might not be hitting nginx at all! Your first step is to make certain you are hitting nginx.
(2) Know how your hostnames are associated with IP addresses in the /etc/hosts file. It is fine to associate two or more hostnames with the same numerical IP address.
(3) Learn about the use of trailing forward slashes in nginx location and proxy_pass expressions. There are two "styles" of writing a URL proxy. In one, the writer puts a trailing forward slash (a URI part) on the proxy_pass URL; if they want the location path to appear in the proxied URL, they must replicate it themselves in the proxy_pass line. Omitting the trailing forward slash ensures that the location path is appended onto the proxied URL automatically.
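To illustrate point (3), compare the two styles (host and port here follow the setup above):

# With a URI part ("/") on proxy_pass, the matched location prefix is replaced:
location /dev/test/rm/ {
    proxy_pass http://benweaver-VirtualBox:3000/;    # /dev/test/rm/x -> /x
}

# Without a URI part, the full original URI is passed through unchanged:
location /dev/test/rm {
    proxy_pass http://benweaver-VirtualBox:3000;     # /dev/test/rm/x -> /dev/test/rm/x
}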
I am using nginx to do TCP forwarding based on hostname as discussed here: Nginx TCP forwarding based on hostname
When the upstream containers are taken down for a short period of time (5 or so minutes) and then brought back up, nginx doesn't seem to re-resolve them (I continue to get a "111: connection refused" error).
I've attempted to put a resolver in the server block of the nginx config:
server {
    listen 443;
    resolver x.x.x.x valid=30s;
    proxy_pass $name;
    ssl_preread on;
}
I still get the same behaviour with this in place.
Like BMitch says, you can scale to 0 to ensure DNS remains available to nginx.
But really, if you're using nginx in Swarm, I recommend using a Swarm-aware proxy solution that dynamically updates the nginx/haproxy config based on services that have the proper labels. In those cases, when a service is removed, its config is also removed from the proxy. Ones I've used include:
Traefik
Docker Flow Proxy
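As an illustration of the label-driven approach, here is a sketch of a Swarm service that Traefik would pick up (assuming Traefik v2; the service name, host rule, and port are placeholders):

services:
  whoami:
    image: traefik/whoami   # placeholder demo service
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        - traefik.http.services.whoami.loadbalancer.server.port=80

Remove the service from the stack and Traefik drops the route automatically, which is exactly the behaviour described above.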
I have a question concerning the correct setup of Debian's resolv.conf file.
I have 3 domains:
a-domain.com,
b-domain.com and
c-domain.com
the server has a static IP.
Do I have to add all of the domains (one by one) to resolv.conf?
Currently none of them are in resolv.conf. I get something like DNS timeouts: the page is only available after 10-20 seconds!
The same issue occurs with SSH.
What did I forget, or what is misconfigured?
Thanks in advance for any hints.
They go in /etc/hosts like so:
192.168.0.1 a-domain.com
192.168.0.2 b-domain.com
192.168.0.3 c-domain.com
(Use the actual IP addresses, of course. And I'm assuming this is on a private subnet -- otherwise why aren't you using DNS?)
resolv.conf is there to set up your DNS servers.
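For example, a typical resolv.conf contains only nameserver (and optionally search) entries. A sketch, with a placeholder address for your actual DNS server:

# placeholder: replace with your actual DNS server address
nameserver 192.168.0.53
# optional: default search domain
search home.local

Individual domain-to-IP mappings do not belong in resolv.conf; they go in /etc/hosts (as above) or in a DNS zone.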