Here is what I want to achieve:
On server A, Docker is installed. There are, let's say, 3 containers:
Container 1: App1, IP 172.17.0.2, network mynet, a simple HTML welcome page, accessible on port 80
Container 2: App2, IP 172.17.0.3, network mynet, a wiki system (DokuWiki), accessible on port 8080
Container 3: App3, IP 172.17.0.4, network mynet, something else
As you can see, all containers are on the same Docker network, and they are reachable on different ports.
The clients on the same network need to access all of the containers. I can't use DNS-based virtual hosts for the reverse proxy in this case, because I don't control the DNS. My goal:
Container 1 : accessible via http://myserver.home.local/app1/
Container 2 : accessible via http://myserver.home.local/app2/
Container 3 : accessible via http://myserver.home.local/app3/
What I did to solve this is the following: add another container running nginx, and proxy_pass to the other containers. I use the official nginx image (docker pull nginx) and mount my custom config into the /etc/nginx/conf.d directory. My config looks like the following:
server {
    listen 80;

    location / {
        root  /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /app1/ {
        proxy_pass http://app1/;
    }

    location /app2/ {
        proxy_pass http://app2:8080/;
    }

    location /app3/ {
        proxy_pass http://app3/;
    }
}
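For reference, this is roughly how such a proxy container can be started (a sketch; the file name proxy.conf is an assumption, and the app containers are assumed to be named app1, app2 and app3 on the mynet network so nginx can resolve them by name):

docker run -d --name proxy --network mynet -p 80:80 \
  -v "$(pwd)/proxy.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx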
App1 works. App2 does not: it prints some ugly HTML output, and in the browser web console I see a lot of 404s. I guess this has something to do with reverse proxying / rewriting in nginx, because app2 is DokuWiki. I also added the nginx equivalent of Apache's ProxyPassReverse (proxy_redirect), without success.
I just don't know what to do in this case, or where to start. How can I find out what needs to be rewritten? I hope someone can help me.
As mentioned in the comments:
As soon as I use the DokuWiki basedir / baseurl config, the proxy works as expected. To do so, edit the dokuwiki.php configuration file located in the conf folder:
conf/dokuwiki.php
and change the following settings to match your environment:
$conf['basedir'] = '/dokuwiki';
$conf['baseurl'] = '';
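For the setup in the question, where DokuWiki is proxied under /app2/, the value would presumably be (hypothetical, adjust to your path):

$conf['basedir'] = '/app2';
$conf['baseurl'] = '';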
Related
Recently I have been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx service to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (found via docker inspect default_dash_nginx), and the nginx server is connected to the same network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website as reported for that network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually an Nginx Proxy Manager container (I don't know if that was unclear above or simply not important).
The nginx container can actually ping the website container and also fetch the HTML files from it on port 80.
So it seems like nginx itself isn't working like it should.
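(For what it's worth, this is roughly how that can be verified; the container name nginx-proxy is a placeholder, and it assumes ping and curl are available in the image:)

docker exec nginx-proxy ping -c 1 172.20.0.4
docker exec nginx-proxy curl -s -o /dev/null -w "%{http_code}\n" http://172.20.0.4:80/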
The first answer got no results (I tried saving the config as each of the mentioned files here).
Did I miss something, or am I just not smart enough?
Here is an nginx config to try and understand:
server {
listen 80;
server_name api1.getr.me;
location / {
proxy_pass http://localhost:8081;
}
}
server {
listen 80;
server_name api2.getr.me;
location / {
proxy_pass http://localhost:8082;
}
}
server {
listen 80;
server_name some.getr.me;
location / {
proxy_pass http://localhost:XXXX;
}
}
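One caveat: if nginx itself runs in a container (as with the Nginx Proxy Manager setup above), localhost refers to the nginx container itself, not to the host or to the other containers. In that case the proxy_pass targets should be the container/service names on the shared network. A sketch for one of the blocks, assuming the API container is named api1 (a hypothetical name):

server {
    listen 80;
    server_name api1.getr.me;
    location / {
        # "api1" must resolve on the shared Docker network
        proxy_pass http://api1:8081;
    }
}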
I am trying to build an architecture like this:
docker_container_1 nginx: exposed to the public (network_mode: "bridge", port 80)
docker_container_2 web_serverI: internal service (network_mode: "host", port 8080)
docker_container_3 web_serverII: internal service (network_mode: "host", port 8081)
upstream server-i {
    # 172.17.0.1 is the default docker0 bridge gateway, i.e. the host,
    # where the host-networked services are listening
    server 172.17.0.1:8080;
}
upstream server-ii {
    server 172.17.0.1:8081;
}
server {
listen 80;
server_name localhost;
location /service-i {
proxy_pass http://server-i;
}
location /service-ii {
proxy_pass http://server-ii;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I have already set services I and II to network_mode: "host".
Then I used "docker ps" to check: only the nginx PORTS column shows
0.0.0.0:80->80/tcp, and the others show nothing.
I also used "docker stats" to check: in the NET I/O column only nginx has a value; the others are zero.
And I found that server I and server II can still be accessed from outside via http://ip:port (8080 and 8081).
What can I do? Did I miss something?
If you specify network_mode: host, it disables all of Docker's networking functionality for that container. You can't remap a container port or prevent it from being visible on the host; you can't use the normal Docker inter-container networking to connect between containers. Your host-networking containers, from a network point of view, are indistinguishable from a non-Docker process running directly on the host.
Host networking is almost never necessary. In some very unusual cases -- if your service has thousands of ports it listens on, if you've measured the Docker NAT overhead to be significant with very-high-volume traffic -- it can get around some limitations of the Docker networking system. In almost all practical cases you should use the default (bridged) networking, and publish specific ports if you need them to be accessible from outside Docker space.
You should:
Disable host networking on all of your containers; use the standard bridge networking instead. (In a non-Compose context, you will need to docker network create a network with default settings.)
Update your Nginx configuration to use the other containers' names as host names, e.g. server web_serverI:8080; in the upstream block (see the Compose sketch after this list). See for example Networking in Compose for additional details.
If you don't want the back-end containers to be visible from outside of Docker space, remove their Compose ports: configuration or docker run -p option. Inter-container communication will still work without this.
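A minimal Compose sketch of that setup (the image names are placeholders; the back-end services get no ports: entry, so they are reachable from the proxy but not from outside Docker):

services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # the reverse-proxy config shown above, with
      # "server web_serverI:8080;" etc. in the upstream blocks
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
  web_serverI:
    image: my-web-server-i    # placeholder image
  web_serverII:
    image: my-web-server-ii   # placeholder image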
Similar questions appear on this site, but I cannot figure this one out. I am running a dockerized config. I can hit my site at benweaver-VirtualBox:3000/dev/test/rm successfully, but I want to be able to hit the site without the port: benweaver-VirtualBox/dev/test/rm.
The port does not seem to be handled in my proxy_redirect. I tried commenting out the default nginx configuration to no effect; because I am running a dockerized config, I thought the default config might not be relevant anyhow. It is true that netstat -tlpn | grep :80 does not find nginx, but the docker-compose config maps nginx to port 80 both inside the container and on the host. The config:
server {
    listen 80;
    client_max_body_size 200M;

    location /dev/$NGINX_PREFIX/rm {
        proxy_pass http://$PUBLIC_IP:3000/dev/$NGINX_PREFIX/rm;
    }
}
PUBLIC_IP is set to the hostname of the box: benweaver-VirtualBox. This hostname is defined in /etc/hosts:
127.0.0.1 benweaver-VirtualBox
I suspect the problem lies with my hostname.
What configuration of my hostname, benweaver-VirtualBox, is preventing a successful proxy_pass from a portless URL to benweaver-VirtualBox (127.0.0.1):3000, where my app is running?
I got things to work. Here are some take-aways:
(1) If you use an address that includes a port, such as my benweaver-VirtualBox:3000/dev/test/rm, you might not be hitting NGINX at all! Your first step is to make certain you are hitting NGINX.
(2) Know how your hosts are associated with IP addresses in the /etc/hosts file. It is fine to associate two or more hostnames with the same numerical IP address.
(3) Learn about the use of trailing slashes in NGINX location expressions. There are two "styles" of writing a URL proxy. In one, the writer appends a trailing slash to the location path; should they wish to keep the location path in the proxied URL, they must replicate it themselves by appending the path elements in the proxy_pass line. Omitting the trailing slash (and any URI in proxy_pass) ensures that the location path is appended to the proxied URL automatically.
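To illustrate point (3), a short sketch with the paths from this question (both variants forward /dev/test/rm requests to the app on port 3000):

# Variant 1: trailing slash; the matched prefix is replaced by the URI
# given in proxy_pass, so the path must be repeated by hand
location /dev/test/rm/ {
    proxy_pass http://127.0.0.1:3000/dev/test/rm/;
}

# Variant 2: no trailing slash, no URI in proxy_pass; the original
# request URI (including /dev/test/rm) is passed through unchanged
location /dev/test/rm {
    proxy_pass http://127.0.0.1:3000;
}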
I have a docker compose file with 2 services: joomla and phpmyadmin.
I need a reverse proxy which behaves as follows:
path: / --> joomla service
path: /managedb --> phpmyadmin service
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://joomla;
}
location /managedb {
proxy_pass http://phpmyadmin;
}
}
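For context, a minimal Compose sketch of this layout (the service names joomla and phpmyadmin match the config above; the image tags and the config mount are assumptions):

services:
  proxy:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./proxy.conf:/etc/nginx/conf.d/default.conf:ro
  joomla:
    image: joomla
  phpmyadmin:
    image: phpmyadmin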
Everything works well. However, I need to add load balancing to spread the work across my 3 machines in a Docker swarm.
They are all VMs on the same LAN with the static IPs 192.168.75.11/12/13.
The Nginx way to add load balancing would be the following:
upstream joomla_app {
server 192.168.75.11;
server 192.168.75.12;
server 192.168.75.13;
}
upstream phpmyadmin_app {
server 192.168.75.11;
server 192.168.75.12;
server 192.168.75.13;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://joomla_app;
}
location /managedb {
proxy_pass http://phpmyadmin_app;
}
}
However, since the only exposed port is Nginx's port 80 (because I need it as a reverse proxy too), the code above obviously does not work.
So how can I add load balancing in this scenario?
Thank you in advance!
In Docker swarm you don't need your own load balancer; it has a built-in one. Simply scale your services and that's all. The swarm name resolver resolves joomla and phpmyadmin either to a virtual IP that acts as the swarm load balancer for that service or, if you configure the service to work in dnsrr mode, via DNS round-robin when resolving the service name to container IPs.
However, if you want to distribute services across the nodes in the swarm, that's a different thing. In that case you can set placement constraints for each service, or make them "global" instead of replicated - see https://docs.docker.com/engine/swarm/services/#control-service-placement
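A sketch of what scaling could look like in the stack file (replica counts are arbitrary; endpoint_mode is optional):

services:
  joomla:
    image: joomla
    deploy:
      replicas: 3
      # endpoint_mode: dnsrr   # optional: DNS round-robin instead of the default VIP
  phpmyadmin:
    image: phpmyadmin
    deploy:
      replicas: 2

An already-running service can also be scaled on the fly, e.g. docker service scale mystack_joomla=3.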
I am reading a lot these days about how to set up and run a Docker stack, but one of the things I keep missing is how to set up containers so that they respond to access through their domain name, and not just their container name via the Docker DNS.
What I mean is: say I have a microservice which is accessible externally, for example users.mycompany.com; requests to it should go through to the microservice container which handles the users API.
Then when I access customer-list.mycompany.com, it should go through to the microservice container which handles the customer lists.
Of course, using the Docker DNS I can access the containers and link them into a Docker network, but this only really works container-to-container, not internet-to-container.
Does anybody know how I should do that, or the best way to set it up?
You need to use the concept of port publishing, so that a port from your container is accessible via a port on your host. Using this, you can set up a simple proxy_pass in an Nginx that forwards users.mycompany.com to myhost:1337 (assuming you published your port to 1337).
So, if you want to do this, you'll need to set up your container to publish a certain port using:
docker run -d -p 5000:5000 training/webapp # publish image port 5000 to host port 5000
You can then curl localhost:5000 from your host to access the container:
curl -X GET localhost:5000
If you want to set up a domain name in front, you'll need a webserver instance that proxies the hostname through to your container, i.e. in Nginx:
server {
listen 80;
server_name users.mycompany.com;
location / {
proxy_pass http://localhost:5000;
}
}
I would advise you to follow this tutorial, and maybe check the docker run reference.
As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. Essentially you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamically assigned IPs. You could give a try to the following:
Deploy one of the Docker-aware DNS solutions (I suggest SkyDNSv1 / SkyDock);
Configure your host to work with this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local).
You can skip step #2 and run your browser inside a container too; it will discover the web application container by hostname.
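A sketch of step #3, with a placeholder name and image following the naming scheme above:

# "users" and "mycompany/webapp" are placeholders for your
# container name and image
docker run -d --name users --hostname users.webapp.dev.skydns.local mycompany/webapp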
Here is one solution with nginx and docker-compose:
users.mycompany.com is served from the nginx container on port 8097
customer-list.mycompany.com is served from the nginx container on port 8098
Nginx configuration:
server {
    listen 0.0.0.0:8097;
    root /root/for/users.mycompany.com;
    ...
}
server {
    listen 0.0.0.0:8098;
    root /root/for/customer-list.mycompany.com;
    ...
}
server {
listen 0.0.0.0:80;
server_name users.mycompany.com;
location / {
proxy_pass http://0.0.0.0:8097;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
server {
listen 0.0.0.0:80;
server_name customer-list.mycompany.com;
location / {
proxy_pass http://0.0.0.0:8098;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
Docker compose configuration:
services:
nginx:
container_name: MY_nginx
build:
context: .docker/nginx
ports:
- '80:80'
...
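Until real DNS entries for these names exist, the vhost routing can be tested from another machine by pointing curl at the Docker host and setting the Host header explicitly, for example:

curl -H "Host: users.mycompany.com" http://<docker-host-ip>/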