Hugo theme link refers to container port in Docker/Nginx

I've got a simple static site, generated with Hugo, that I'm building into a Docker container running Nginx. Nginx is listening on port 90. I'm encountering strange behavior where certain links try to open the container port rather than the host port (which, for localhost, is 8000). So for example, this link:
Docs
...when moused over shows that it will attempt to open localhost:8000/documents, which is correct, but when clicked it instead attempts to open http://localhost:90/documents/. (If I manually change the URL in the browser to http://localhost:8000/documents/, it responds fine.)
What makes this even stranger:
Only certain links, specifically in the header menu, do this.
I've used dozens of Hugo themes, and I've only encountered this issue with one of them: ZDoc. Could it be specific to this theme? That seems strange to me.
What could be causing this? I'm struggling to even know what this phenomenon is called. "Host/container port confusion"?
I'm certain it's not a misconfiguration of Nginx or Docker. I'm exposing port 90 properly in my Dockerfile:
EXPOSE 90
nginx.conf is set to listen on that port:
http {
    include /etc/nginx/mime.types;
    sendfile on;

    server {
        root /usr/share/nginx/html/;
        index index.html;
        server_name localhost;
        listen 90;
    }
}
And I'm starting the Docker container with the host port 8000 forwarding to the port Nginx is listening on:
docker run --name my-simple-site -p 8000:90 -d simple-site
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de9cd1526034 simple-site "nginx -g 'daemon of…" 41 minutes ago Up 41 minutes 0.0.0.0:8000->90/tcp my-simple-site
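Everything on the Docker side does look right. One way to confirm what is actually happening (my assumption being that nginx itself answers the extensionless directory URL with a redirect) is to look at the response headers without following the redirect:
# Request the directory URL and inspect the redirect target, if any.
# If nginx is generating the redirect, the Location header should show port 90,
# e.g. something like http://localhost:90/documents/
curl -sI http://localhost:8000/documents | grep -iE '^(HTTP|Location)'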

Strangely, the fix for this was to change the link to point directly to the file: Docs
I'm unclear on why, and would love some insight into this. Does Nginx infer a port when a link points to a directory?
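For what it's worth, and assuming the redirect really does come from nginx rather than the theme: when nginx redirects a directory request to the trailing-slash form, it builds the Location header from the port it is listening on (90 here). The port_in_redirect and absolute_redirect directives control that behavior, so a sketch of the server block might be:
server {
    root /usr/share/nginx/html/;
    index index.html;
    server_name localhost;
    listen 90;

    # Don't put the listen port into nginx-generated redirects...
    port_in_redirect off;
    # ...or, on nginx 1.11.8+, emit relative redirects altogether.
    absolute_redirect off;
}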

Related

Docker containers with Nginx share the same network but can't reach each other

Recently I've been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serves the website
api1.getr.me --> serves API1
api2.getr.me --> serves API2
For that I created a network "default_dash_nginx".
I edited the nginx service to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (found via docker inspect default_dash_nginx), and the nginx server is also connected to the network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website as received from the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually an NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from it on port 80.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried saving it as each of the mentioned files here).
Have I missed something, or am I just not smart enough?
nginx config, try it and understand:
server {
    listen 80;
    server_name api1.getr.me;
    location / {
        proxy_pass http://localhost:8081;
    }
}
server {
    listen 80;
    server_name api2.getr.me;
    location / {
        proxy_pass http://localhost:8082;
    }
}
server {
    listen 80;
    server_name some.getr.me;
    location / {
        proxy_pass http://localhost:XXXX;
    }
}
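One caveat worth noting (my assumption, not part of the original answer): if nginx itself runs in a container, localhost inside these server blocks refers to the nginx container, not to the host or to the other containers. Since everything is attached to the user-defined network default_dash_nginx, Docker's embedded DNS resolves service/container names, so a variant of one of the blocks might look like this (the name api1 is a placeholder for whatever the API container is actually called):
server {
    listen 80;
    server_name api1.getr.me;
    location / {
        # Reach the API container by its name on the shared Docker network
        # instead of localhost, which points back at the nginx container.
        proxy_pass http://api1:8081;
    }
}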

NGINX Server to Redirect to Docker Container

Here is what I want to achieve:
On server A, Docker is installed. There are, let's say, 3 containers:
Container 1: App1, IP: 172.17.0.2, network: mynet, a simple HTML welcome page, accessible on port 80
Container 2: App2, IP: 172.17.0.3, network: mynet, a wiki system (DokuWiki), accessible on port 8080
Container 3: App3, IP: 172.17.0.4, network: mynet, something else
As you can see, every container is in the same Docker network. The containers are accessible on different ports.
The clients on the same network need to access all of the containers. I can't use DNS in this case (reverse proxy via vhosts), because I don't control the DNS. My goal:
Container 1: accessible via http://myserver.home.local/app1/
Container 2: accessible via http://myserver.home.local/app2/
Container 3: accessible via http://myserver.home.local/app3/
What I did to solve this is the following: add another container with nginx and proxy_pass to the other containers. I use the official nginx image (docker pull nginx), then mount my custom config into the /etc/nginx/conf.d directory. My config looks like the following:
server {
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    location /app1/ {
        proxy_pass http://app1/;
    }
    location /app2/ {
        proxy_pass http://app2:8080/;
    }
    location /app3/ {
        proxy_pass http://app3/;
    }
}
App1 works. App2 does not: it prints some ugly HTML output, and in the browser's web console I see a lot of 404s. I guess that has something to do with reverse proxying / rewriting in nginx, because app2 is DokuWiki. I also added the nginx equivalent of Apache's ProxyPassReverse, without success.
I just don't know what to do in this case, or where to start. How can I know what needs to be rewritten? I hope someone can help me.
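For reference, the nginx counterpart of Apache's ProxyPassReverse mentioned above is proxy_redirect, which rewrites the Location headers the upstream sends back. A sketch for the /app2/ location (this only fixes redirects, not the links inside the generated HTML, which is why the DokuWiki base-URL setting in the answer below is the real fix):
location /app2/ {
    proxy_pass http://app2:8080/;
    # Rewrite upstream redirects such as http://app2:8080/doku.php
    # back to the /app2/ prefix the browser actually requested.
    proxy_redirect http://app2:8080/ /app2/;
}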
As mentioned in the comments:
As soon as I use the DokuWiki basedir/baseurl config, the proxy works as expected. To do so, edit the dokuwiki.php configuration file located in the conf folder:
conf/dokuwiki.php
Change the following settings to match your environment:
$conf['basedir'] = '/dokuwiki';
$conf['baseurl'] = '';

Nginx Reverse Proxy to Docker 502 Bad Gateway

Spent all week on this one and tried every related stackoverflow post. Thanks for being here.
I have an Ubuntu VM running nginx with reverse proxies pointing to various docker daemons concurrently running on different ports. All my static sites work flawlessly. However, I have one container running an expressjs app.
I get responses for about an hour after restarting the server. Then I get 502 Bad Gateway. A refresh brings the site back up for approximately 5 seconds, until it permanently goes down. This is reproducible.
The docker container has express listening on 0.0.0.0:8090 inside the container
The container is running
02e1917991e6 docker/express-site "docker-entrypoint.s…" About an hour ago Up About an hour 127.0.0.1:8090->8090/tcp express-site
The 8090 port is EXPOSEd in the Dockerfile.
I tried other ports.
When down, I can curl the site from within the container when inspecting.
When down, curling the site from within the VM yields
curl: (52) Empty reply from server
Memory and CPU usage within the container and within the VM barely reach 5%.
Site usually has SSL but tried http as well.
Tried various nginx proxy settings (see config below)
Using out-of-the box nginx.conf
Considering that it might be related to a timeout or docker network settings...
My sites-available config file looks like:
server {
    server_name example.com www.example.com;

    location / {
        proxy_pass http://127.0.0.1:8090;
        #proxy_set_header Host $host;
        #proxy_buffering off;
        #proxy_buffer_size 16k;
        #proxy_busy_buffers_size 24k;
        #proxy_buffers 64 4k;
    }

    listen 80;
    listen [::]:80;
    #listen 443 ssl; # managed by Certbot
    #ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
    #ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
    #include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
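Since a timeout is one of the suspects above, here is the kind of proxy timeout/keep-alive tuning one might try in that location block. This is a sketch with arbitrary values, not something the original post confirms as a fix:
location / {
    proxy_pass http://127.0.0.1:8090;
    proxy_http_version 1.1;
    # Clear the Connection header so keep-alive can be used toward the upstream.
    proxy_set_header Connection "";
    # Generous timeouts while debugging intermittent 502s.
    proxy_connect_timeout 60s;
    proxy_send_timeout    60s;
    proxy_read_timeout    60s;
}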
Nginx Error Log shows:
2021/01/02 23:50:00 [error] 13901#13901: *46 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: ***.**.**.***, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8090/favicon.ico", host: "www.example.com", referrer: "http://www.example.com"
Anyone else have ideas?
Didn't get much feedback, but I did more research and the issue is now stable so I wanted to post my findings.
I have isolated the issue with the docker container. Nginx works fine with the same app running on the VM directly.
I updated my docker container image from node:12-alpine to node:14-alpine. The site has been up for 42 hours without issue.
If it randomly fails again, then it's probably due to load.
I hope this solves someone's issue.
Update 2021-10-24
The same issue started again, and I've narrowed it down to the port and/or Docker on my version of Ubuntu. May I recommend...
changing the port
rebooting your PC
installing the latest OS and docker updates

Issue with nginx config using docker

I've started doing a little test using Docker in order to set up my own server, and I'm running into a few issues.
I use the nginx-php-fpm image, which has most of the services I need to set up my server.
This is the Dockerfile I wrote to set up a basic server. It built without any issues and I named the image nginx-custom-server.
Dockerfile
FROM "richarvey/nginx-php-fpm"
ADD /conf/simple-project.conf /etc/nginx/sites-available/simple-project.conf
RUN mkdir /srv/www/
RUN mkdir /LOGS/
RUN ln -s /etc/nginx/sites-available/simple-project.conf /etc/nginx/sites-enabled/simple-project.conf
RUN rm /etc/nginx/sites-enabled/default.conf
CMD ["/start.sh"]
I ran it using the following command in the terminal:
docker run --name=server-stack -v /home/ismael/Documentos/docker-nginx/code:/srv/www -v /home/ismael/Documentos/docker-nginx/logs:/LOGS -p 80:80 -d nginx-custom-server:stack
In the /srv/www folder I have a simple hello-world PHP file. I want to make changes to my code on my local machine and sync them with the Docker container using the shared code folder.
The nginx logs are empty, so I don't know what is wrong. I set up logs in my conf, but nginx didn't create them, so I think there is a problem with the general nginx conf, I guess.
Here is the conf I'm using for my hello world. I also mapped this server name in the hosts file of the host machine.
simple-project.conf
server {
    listen 0.0.0.0:80;
    server_name simple-project.olive.com;
    root /srv/www/simple-project/;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        # the ubuntu default
        fastcgi_pass /var/run/php-fpm.sock:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param APPLICATION_ENV int;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    location ~ \.php$ {
        return 404;
    }

    error_log /LOGS/custom_error.log;
    access_log /LOGS/custom_access.log;
}
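As an aside (an observation of mine, not something raised in the original question or answer): fastcgi_pass normally takes either a TCP address or a unix: socket path, not a mix of the two as in the config above. The two valid forms look like:
# TCP form, if php-fpm listens on a port:
fastcgi_pass 127.0.0.1:9000;
# Unix-socket form, if php-fpm listens on a socket:
fastcgi_pass unix:/var/run/php-fpm.sock;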
EDIT: Error when I try to access the server from inside the Docker container:
bash-4.4# wget localhost:80 > /tmp/output.html
--2019-03-27 12:33:11-- http://localhost/ Resolving localhost... 127.0.0.1, ::1 Connecting to localhost|127.0.0.1|:80... failed: Connection refused. Connecting to localhost|::1|:80... failed: Address not available. Retrying.
From what I can tell, there are two reasons why you can't access the server.
The first is that you don't forward any ports from the container to the host. You should include the -p 80:80 argument to your docker run command.
The second is that you're attempting to listen on what I assume to be the IP of the container itself, which is not static (by default). In the nginx config, you should replace listen 172.17.0.2:80; with listen 0.0.0.0:80;.
With these two modifications in place, you should be able to access your server.
A different approach (but not recommended) would be to start the container with the --network=host parameter. This way, the host's network is actually visible from within the container. In this scenario, you would only need to set the nginx config to listen on a valid address.
However, if the problem persists, a good approach would be to run docker exec -it {$container_id} bash while the container is running and see if you can access the server from within the container itself. That would mean the server is running correctly but that, for other reasons, the port is not being correctly forwarded to the host.
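A minimal sketch of that last check, assuming the container name server-stack from the question (wget is used because the edit above shows it is already available in the image):
docker exec -it server-stack bash
# From inside the container:
wget -qO- http://localhost:80/
# If this works inside the container but the same request fails from the host,
# the problem is port forwarding rather than nginx itself.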

Docker services only available to the local host

Simple use case: a production server running Nginx (listening on 0.0.0.0:80 and 443), responsible for reading the SSL certificates and, if they are valid, redirecting to a "hidden" service in a Docker container. This service runs Gunicorn (so, a Django website) and listens on port 8000. That sounds standard, simple, tested, almost too beautiful to be true... but of course, it's not working as I'd like.
Because Gunicorn, running in its little Docker container, is accessible from the Internet. If you go to my hostname on port 8000, you get the Gunicorn website. Obviously it's ugly, but worse, it completely bypasses Nginx and the SSL certificates. So why would a Docker container be accessible from the Internet? I know that for some time it was the other way around with Docker. We need a proper balance!
On further inspection of the problem: I do have a firewall and it's configured to be extremely restrictive. It only allows ports 22 (for ssh, waiting to be remapped), 80 and 443. So 8000 should absolutely not be allowed. But ufw uses iptables, and Docker adds iptables rules that bypass this configuration when a container publishes that port.
I tried a lot of stupid things (that's part of the job): in my docker-compose.yml file, where the ports to bind are specified, I tried removing them (of course, if I do, nginx can't reach my hidden service). I tried binding to a specific IP (it seems that's allowed):
ports:
  - "127.0.0.1:8000:8000"
This had a weird result: Nginx wasn't able to connect, but Gunicorn was still visible from the Internet. So, exactly the contrary of what I want. I tried manually changing the Docker service to add flags (not good) and tried adding a configuration file at /etc/docker/daemon.json (changing the "ip" setting to "127.0.0.1" again). I'm running short of ideas. If anyone has a pointer... I wouldn't think this is an extremely rare use of Docker, after all.
Specifics: I don't run containers with docker run directly. I have a docker-compose.yml file and deploy the services using docker stack, so in a swarm (although I only have one machine at the moment). This could be related, though again, I would think not.
System: Debian 9.
Docker version: 18.09.
docker-compose version: 1.22
nginx (/etc/nginx/sites-available/example.com)
upstream gunicorn {
    server localhost:8000;
}

server {
    server_name example.com www.example.com;

    location / {
        proxy_pass http://gunicorn;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate ...;
    ssl_certificate_key ...;
    include ...;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}
docker-compose.yml
version: '3.7'

services:
  gunicorn:
    image: gunicorn:latest
    command: gunicorn mysite.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./:/usr/src/app/
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - webnet
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - webnet

networks:
  webnet:

volumes:
  postgres_data:
Note: the gunicorn image has been built beforehand, but there's no trick, just a python:3.7-slim image with everything set up for gunicorn and a Django website under mysite/. It doesn't EXPOSE any port (not that I think that makes any difference here).
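One detail that may explain the weird result with "127.0.0.1:8000:8000" above (my reading, not something from the original post): because the stack is deployed with docker stack in swarm mode, published ports go through the ingress routing mesh, which binds on all host interfaces, and as far as I know the host-IP prefix of the short ports syntax is not honored there. The long ports syntax with mode: host bypasses the mesh and behaves more like a plain docker run -p binding; a sketch:
services:
  gunicorn:
    ports:
      # Long syntax: publish directly on the node running the task,
      # bypassing the swarm ingress routing mesh (untested here).
      - target: 8000
        published: 8000
        protocol: tcp
        mode: host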
Okay, after some digging, here's what I found: Docker makes sure to create iptables rules so that containers can be reached from outside the host. Asking Docker not to touch iptables at all is not a good strategy, as it needs them to forward connections from the containers to the outside world. So the documentation recommends creating an iptables rule in the DOCKER-USER chain to prevent external access to the Docker daemon. That's what I did. Of course, the given command (slightly modified to completely forbid external access) didn't persist, so I had to create a service just to add this rule. Probably not the best option, so don't hesitate to comment if you have a better choice to offer. Here's the iptables rule I added:
iptables -I DOCKER-USER -i ext_if ! -s 127.0.0.1 -j DROP
This rule forbids external access to the Docker daemon but still allows connections to individual containers. It didn't seem to solve anything for me, since it wasn't exactly an answer to my question. To forbid access from the Internet to my Docker container running on port 8000, I added yet another rule in the same script:
iptables -I DOCKER-USER -p tcp --destination-port 8000 -j DROP
This rule is a bit extreme: it completely forbids traffic to port 8000 over the Docker network. The host (localhost) is excluded from this rule, since a previous rule should allow local traffic no matter what (if you do your iptables config by hand, you will have to add that rule yourself; it's not a given, which is one reason I switched to simpler solutions like ufw). If you look at the firewall rules with iptables -L, you will see that the DOCKER-USER chain is evaluated before any Docker rule. You might find another rule allowing traffic on the same port, but because we forbid traffic in a higher-priority rule, port 8000 is effectively hidden from the outside.
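Since the rules had to be re-added at boot via a service, here is a minimal sketch of what such a unit could look like. The unit name is hypothetical and the ext_if interface comes from the rule above; this is an assumption about the setup, not the script actually used:
# /etc/systemd/system/docker-user-firewall.service (hypothetical name)
[Unit]
Description=Add DOCKER-USER rules hiding container ports from the outside
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/iptables -I DOCKER-USER -i ext_if ! -s 127.0.0.1 -j DROP
ExecStart=/usr/sbin/iptables -I DOCKER-USER -p tcp --destination-port 8000 -j DROP
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target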
This solution, while it seems to solve the problem, is not exactly elegant or intuitive. Again, I can't help but wonder if I was the first ever to use Docker to host "hidden" services while keeping a different service in front. I guess the usual approach is to run nginx in a Docker container itself, but that created other issues for me that I frankly decided outweighed the advantages.
