Running numerous Docker containers right now on a new build for a homelab server and trying to make sure everything is locked down and secure. I use the server for a variety of things, some requiring access from the outside world (Nextcloud) and some that I will only access from my internal network (Plex). Of course the server is behind a router that limits open ports, but I'm looking for additional security: I would like to restrict the containers that I only want to access via the internal network to 192.168.0.0/24. That way, if somehow a port became open on my router, they would not be exposed (am I being too paranoid?).
Currently my docker-compose files publish ports via:
....
ports:
  - 8989:8989
....
This of course works fine, but it is accessible to the world should I open the port on my router. I know I can bind to localhost via
....
ports:
  - 127.0.0.1:8989:8989
....
But that doesn't help me when I'm trying to access the container from my internal network. I've read numerous articles regarding Docker networks and various flags, and I've also read about a possible iptables solution.
Any guidance is much appreciated.
Thanks,
Simply do not declare any ports in docker-compose; containers on the same network can reach each other automatically.
I use an Elasticsearch container in this way, and a separate Kibana container can connect to it by the service name declared in the yml.
if somehow a port became open on my router, it would not be exposed
Using this procedure the ports are never visible outside the Docker environment (where "outside" includes your local network).
If your concern is that ports are published on your LAN when doing what I described: they are not.
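For example, a minimal sketch of that Elasticsearch/Kibana pairing (image tags are illustrative; the point is the absence of any ports: section):

services:
  elasticsearch:
    image: elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    # no 'ports:' entry, so nothing is published on the host or the LAN
  kibana:
    image: kibana:7.17.0
    environment:
      # the service name resolves on the default compose network
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200

Kibana still reaches Elasticsearch on port 9200 over the network docker-compose creates, but nothing is reachable from the host or the LAN.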
You are actually very close with
ports:
- 127.0.0.1:8989:8989
as with this it is accessible locally on your server. Funnily enough, your bind-to-localhost trick is exactly what I was looking for in my own setup xD
From this point there are actually a couple of ways to set it up so that you can access it on your local network.
SSH Tunneling
The first one is the one I'm using on my own setup: SSH forwarding.
You can, if you haven't already, set up an .ssh/config file to forward localhost ports to your computer. Taking your example into account, the syntax is as follows:
Host some-hostname
    HostName 192.168.x.x
    User user-of-server
    LocalForward 8989 127.0.0.1:8989
some-hostname is a shortname you can choose, user-of-server is the actual user you set up to log in with, and 192.168.x.x is the actual local IP address of your server; you can also include an IdentityFile /path/to/ssh/key line. With this you can run ssh some-hostname to SSH into your server from any computer on your local network, and your server will be available at localhost:8989 on that specific computer.
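If you'd rather not touch .ssh/config, the same tunnel works as a one-off command from any machine on your LAN (same placeholder user and address as above):

ssh -N -L 8989:127.0.0.1:8989 user-of-server@192.168.x.x
# -N: don't run a remote command, just hold the tunnel open
# -L: forward local port 8989 to 127.0.0.1:8989 as seen from the server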
Reverse Proxy
The second is a reverse proxy like nginx. This too can be run in a Docker container, and you could bind it to any port, say for example 6443, and mount its config directory into the container with:
    volumes:
      - 'config:/etc/nginx/conf.d'
    ports:
      - 6443:443

volumes:
  config:
    driver: local
    driver_opts:
      type: none
      o: bind
      # note: the local driver generally requires an absolute path here,
      # i.e. something like /home/you/config rather than "./config"
      device: "./config"
Then in ./config/default.conf you could set up something like:
server {
    listen 443 ssl http2;
    server_name 192.168.x.x;

    ssl_certificate /etc/letsencrypt/signed_chain.crt;
    ssl_certificate_key /etc/letsencrypt/domain.key;
    include /etc/nginx/includes/ssl.conf;

    location / {
        ### force timeouts if one of the backends dies ##
        ### such died, many backend, very timeouts ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        proxy_buffering off;
        # note: inside the nginx container 127.0.0.1 is the container itself;
        # either run nginx with network_mode: host, or proxy_pass to the
        # app's service name on a shared docker network instead
        proxy_pass http://127.0.0.1:8989;
    }
}
Then it should be available on, and only on, 192.168.x.x:6443.
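One caveat worth spelling out, since it's close to what the question asked: compose port mappings also accept a host IP, so you can publish straight to a specific interface (192.168.x.y below is a placeholder for your server's LAN address):

ports:
  # binds the published port to one interface instead of 0.0.0.0
  - "192.168.x.y:8989:8989"

Note this only controls which interface Docker binds on the server itself; a router port-forward aimed at that LAN address would still get through, which is why binding to 127.0.0.1 plus a tunnel or proxy is the safer pattern here.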
Related
I've got several Rails websites running in Docker dev containers. Docker is running in WSL (Ubuntu 20.04) on Windows 11. Nginx is running in Ubuntu as a reverse proxy; IIS is turned off in Windows. The Ubuntu /etc/hosts file is automatically populated from the hosts file in Windows. It is set up like this because others on the team are running Linux on Macs, but I switch between Rails and .Net development.
An example website is mysite1.localhost, which is exposed on port 8081 by Docker, and there is an entry of '127.0.0.1 mysite1.localhost' in both hosts files.
The problem I have is that browsing (Chrome on Windows) to localhost:8081 returns 200 from the website, great, but using the hostname mysite1.localhost returns 502 Bad Gateway.
I am assuming Nginx doesn't know about Docker or something like that?
Here is the mysite1.conf for Nginx:
server {
    listen 80;
    listen [::]:80;
    server_name mysite1.localhost;
    resolver 127.0.0.1;

    location ~* "^/shared-nav" {
        proxy_set_header Accept-Encoding "";
        proxy_pass http://localhost:3000/stuff$is_args$args;
    }

    location / {
        ssi on;
        ssi_silent_errors off;
        log_subrequest on;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8081;
        add_header Cache-Control "no-cache";

        if ($request_filename ~* ^.*?/([^/]*?)$) {
            set $filename $1;
        }
        if ($filename ~* \.(eot|ttf|woff|woff2)$) {
            add_header Access-Control-Allow-Origin *;
        }
    }
}
I can see two problems in nginx/error.log:
2023/02/02 08:50:10 [warn] 2841#2841: conflicting server name "mysite1.localhost" on 0.0.0.0:80, ignored
2023/02/02 08:50:12 [error] 2845#2845: *52 connect() failed (111: Connection refused) while connecting to upstream, client: ::1, server: mysite1.localhost, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "mysite1.localhost"
It doesn't seem to matter whether or not Docker is running.
For the conflicting server name warning, I've tried looking for temporary files that need deleting but cannot find anything.
Most of the other questions I've looked at involve solving problems with containerized Nginx, whereas this is sitting in WSL.
Please let me know if I can better explain the problem, thanks for any help.
One thing I didn't take into consideration was that using VS Code to develop in the containers means the ports were forwarded to Windows: although the containers are running in WSL, VS Code is running in Windows.
So, I could either turn IIS back on and use it as a reverse proxy, or try Nginx for Windows. I've opted for the latter as it means I can share the same config files as the Linux guys, and I'll see how it works out; for now I can browse the websites by hostname.
If anyone else needs to work with this set up, I'm happy to pass on my experiences.
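One debugging aid for the conflicting server name warning above, assuming nginx 1.9.2 or later for the -T flag:

sudo nginx -T | grep -n "mysite1.localhost"
# -T validates the config and dumps the full merged configuration,
# so every included file defining that server_name shows up in the output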
I have 2 Docker containers running on my EC2 instance:
Docker1: a WordPress website running with a PHP server, mapped to port 8081 of the EC2 instance.
Docker2: a portal created in Angular, running with NGINX, mapped to port 8082 of the EC2 instance.
I want to use the same EC2 instance for my domain and subdomain, xyz.com and portal.xyz.com, on the same port 80.
Ideally, if the request comes from xyz.com, it should redirect to Docker1 running on 8081 and if it is from portal.xyz.com, it should be redirected to Docker2 running on 8082.
Is it feasible, and if yes, how? I do not want to spawn 2 EC2 instances for this, and both sites have to be served over HTTP on port 80.
Using multiple load balancers and target groups can solve your problem: https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-services-now-support-multiple-load-balancer-target-groups/
You can set up both load balancers to listen on HTTP and target your one EC2 instance on different ports. After that, setting up the records in Route53 will be straightforward.
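If you go the Route53 route, each name just needs an alias record pointing at its load balancer; with the AWS CLI that might look like the sketch below (the zone IDs and load balancer DNS name are placeholders):

aws route53 change-resource-record-sets \
  --hosted-zone-id <YOUR_ZONE_ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "portal.xyz.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<ELB_HOSTED_ZONE_ID>",
          "DNSName": "<portal-load-balancer-dns>.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'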
I had done something similar on a VPS server; technically it should work on an EC2 instance as well.
I created a new Docker network, 'proxy-network'. (Note: you can do without creating a network and just proxy to localhost:8081 and localhost:8082; the network is just cleaner.)
Launch all the application servers in that network with proper names (e.g. wordpress, angular). Use --name in the run command or container_name in docker-compose.
Launch a new nginx server and map host ports 80 and 443 (if you need HTTPS to work). I used the nginx:latest image, created a new default.conf, and replaced /etc/nginx/conf.d/default.conf in the container.
A sample proxy.conf should look like this:
server {
    listen 80;
    server_name domain1.com www.domain1.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://wordpress;
    }
}

server {
    listen 80 default_server;
    server_name domain2.com www.domain2.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://angular;
    }
}
Once you update the alias records at your domain registrar, it works like a charm. Hope it helps. Good luck.
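For reference, the three steps above might look like this in docker-compose form (the two app images are stand-ins; the key points are the shared network, the service names nginx proxies to, and the mounted default.conf):

services:
  wordpress:
    image: wordpress                 # stand-in for the actual WordPress setup
    networks: [proxy-network]
  angular:
    image: my-angular-nginx-image    # placeholder for the built Angular image
    networks: [proxy-network]
  proxy:
    image: nginx:latest
    ports:
      - '80:80'
      - '443:443'
    volumes:
      # replaces the default vhost with the proxy.conf shown above
      - ./proxy.conf:/etc/nginx/conf.d/default.conf
    networks: [proxy-network]

networks:
  proxy-network:

The service names double as DNS names inside proxy-network, which is what makes proxy_pass http://wordpress; resolve.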
My Goal
I want to use a single NGinx docker container as a proxy.
I want it to respond to traffic on my domain: "sub.domain.com" and listen on port 80 and 443.
When traffic comes in on /admin I want it to direct all traffic to one docker container (say... admin_container:6000).
When traffic comes in on /api I want it to direct all traffic to another docker container (say... api_container:5500).
When traffic comes in on any other path (/anything_else) I want it to direct all traffic to another docker container (say... website_container:5000).
Some Helpful Context
Real quick, let me provide some context in case it's helpful. I have a NodeJS website running in a docker container. I'd like to also have an Admin section, created with ASP.NET Core that runs in a second docker container. I'd like both of these websites to share and make use of a single ASP.NET Core Web Api project, running in a third docker container. So, one NodeJS project and two ASP.NET Core projects, that all live on a single subdomain:
sub.domain.com/
Serves the main website
sub.domain.com/admin
Serves the Admin website
sub.domain.com/api
Serves API endpoints and handles Database connectivity
What I Have So Far
So far I have the NGinX reverse proxy set up and a single docker container for the NodeJS application. Currently all traffic on :80 is redirected to 443. All traffic on 443 is directed to the NodeJS docker container, running privately on :5000. I'll admit I'm not great with NGinx and don't fully understand how this works.
The NGinx.conf file
worker_processes 2;
events { worker_connections 1024; }
http {
    sendfile on;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    upstream docker-nodejs {
        server nodejs_prod:5000;
    }

    server {
        listen 80;
        server_name sub.domain.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name sub.domain.com;
        ssl_certificate /etc/nginx/ssl/combined.crt;
        ssl_certificate_key /etc/nginx/ssl/mysecretkeyfile.key;

        location / {
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-Ssl on;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_pass http://docker-nodejs;
        }
    }
}
Quick Note: In the above code, "nodejs_prod:5000" refers to the container whose name is "nodejs_prod", listening on port 5000. I don't fully understand how that works, but it does: Docker creates DNS entries on the private network, one for each container name. I'm actually using docker-compose.
My Actual Question: How will this NGinx.conf file look when I have 2 more websites (each its own docker container)? It's important that /admin and /api are sent to the correct docker containers, and not handled by the "catch all" location. I'm imagining that I'll have a "catch all" location which captures all traffic that DOESN'T START WITH /admin OR /api.
Thank you!
In your nginx config you want to add location rules, such as
location /admin {
    proxy_pass http://admin_container:6000/;
}

location /api {
    proxy_pass http://api_container:5500/;
}
This proxies /admin to admin_container on port 6000, and /api to api_container on port 5500. The trailing slash on each proxy_pass URL makes nginx strip the matched prefix, so the backend sees /whatever rather than /admin/whatever.
docker-compose creates a network between all its containers, which is why http://api_container:5500/ points to the container named api_container on port 5500. This can be used for communication between containers. You can read more about it here: https://docs.docker.com/compose/networking/
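Putting it together with the config from the question, the 443 server block might look like this sketch (nginx matches the longest prefix, so the catch-all location / cannot shadow /admin or /api):

server {
    listen 443 ssl;
    server_name sub.domain.com;
    ssl_certificate /etc/nginx/ssl/combined.crt;
    ssl_certificate_key /etc/nginx/ssl/mysecretkeyfile.key;

    # longest-prefix match wins, so these are consulted before /
    location /admin {
        proxy_pass http://admin_container:6000/;
    }

    location /api {
        proxy_pass http://api_container:5500/;
    }

    # catch-all for everything that doesn't start with /admin or /api
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://docker-nodejs;
    }
}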
I am reading a lot these days about how to set up and run a docker stack. But one of the things I keep missing is how to set things up so that particular containers respond to access through their domain name, and not just their container name via Docker DNS.
What I mean is: say I have a microservice which is accessible externally, for example users.mycompany.com; requests to it should go through to the microservice container which is handling the users API.
Then when I try to access customer-list.mycompany.com, it should go through to the microservice container which is handling the customer lists.
Of course, using Docker DNS I can access them and link them into a docker network, but this only really works for container-to-container access, not internet-to-container.
Does anybody know how I should do that? Or the best way to set that up.
So, you need to use the concept of port publishing, so that a port from your container is accessible via a port on your host. Using this, you can set up a simple proxy_pass in Nginx that will redirect users.mycompany.com to myhost:1337 (assuming that you published your port to 1337).
So, if you want to do this, you'll need to set up your container to publish a certain port using:
docker run -d -p 5000:5000 training/webapp # publish container port 5000 to host port 5000
You can then curl localhost:5000 from your host to access the container.
curl -X GET localhost:5000
If you want to setup a domain name in front, you'll need to have a webserver instance that allows you to proxy_pass your hostname to your container.
i.e. in Nginx:
server {
    listen 80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://localhost:5000;
    }
}
I would advise you to follow this tutorial, and maybe check the docker run reference.
As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. In essence you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamic IPs. So you could give a try to:
Deploy one of the Docker-aware DNS solutions (I suggest SkyDNS v1 / SkyDock);
Configure your host to work with this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local).
You can skip step #2 and run your browser inside a container too; it will discover the web application container by hostname.
Here is one solution with nginx and docker-compose:
users.mycompany.com is served by the nginx container on port 8097
customer-list.mycompany.com is served by the nginx container on port 8098
Nginx configuration:
server {
    listen 0.0.0.0:8097;
    root /root/for/users.mycompany.com;
    ...
}

server {
    listen 0.0.0.0:8098;
    root /root/for/customer-list.mycompany.com;
    ...
}

server {
    listen 0.0.0.0:80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://127.0.0.1:8097;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 0.0.0.0:80;
    server_name customer-list.mycompany.com;

    location / {
        proxy_pass http://127.0.0.1:8098;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Docker compose configuration:
services:
  nginx:
    container_name: MY_nginx
    build:
      context: .docker/nginx
    ports:
      - '80:80'
    ...
I have installed Gerrit 2.12.3 on my Ubuntu Server 16.04 system.
Gerrit is listening on http://127.0.0.1:8102, behind an nginx server which is listening on https://SERVER1:8102.
Some contents of the etc/gerrit.config file are as follows:
[gerrit]
    basePath = git
    canonicalWebUrl = https://SERVER1:8102/
[httpd]
    listenUrl = proxy-https://127.0.0.1:8102/
And some contents of my nginx settings are as follows:
server {
    listen 10.10.20.202:8102 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/server1.crt;
    ssl_certificate_key /etc/nginx/ssl/server1.key;

    location / {
        # Allow for large file uploads
        client_max_body_size 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;
    }
}
Nearly all of Gerrit's functions work very well now, but one problem I cannot solve is this:
The URL generated in the notification email is https://SERVER1:8102/11, which seems right, but when I click the link it redirects to https://SERVER1/#/c/11/ instead of https://SERVER1:8102/#/c/11/.
Can anyone tell me how to solve it?
Thanks.
That the ports of gerrit.canonicalWebUrl and httpd.listenUrl match makes no sense.
Specify as gerrit.canonicalWebUrl the URL that is accessible to your users through the Nginx proxy, e.g., https://gerrit.example.com.
This vhost in Nginx (listening on port 443) is in turn configured to connect to the backend specified in httpd.listenUrl, e.g. port 8102, on which Gerrit would be listening in your case.
The canonicalWebUrl is just used so that Gerrit knows its own host name, e.g. for sending email notifications, IIRC.
You might also just follow the Gerrit documentation and stick to the ports as described there.
EDIT: I just noticed that you want the proxy AND Gerrit both to listen on port 8102, on a public interface and on 127.0.0.1 respectively. While this would work, provided you really make sure that Nginx is not binding to 0.0.0.0, I think it makes no sense at all. Don't you want your users to connect via HTTPS on port 443?
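A sketch of the layout recommended above, with gerrit.example.com standing in for your real host name (all values illustrative):

# etc/gerrit.config
[gerrit]
    basePath = git
    # the URL users actually see, i.e. the proxy's address
    canonicalWebUrl = https://gerrit.example.com/
[httpd]
    # backend, reachable only by the proxy on localhost
    listenUrl = proxy-https://127.0.0.1:8102/

# nginx vhost
server {
    listen 443 ssl;
    server_name gerrit.example.com;
    ssl_certificate /etc/nginx/ssl/server1.crt;
    ssl_certificate_key /etc/nginx/ssl/server1.key;

    location / {
        client_max_body_size 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;
    }
}

With canonicalWebUrl matching what users actually type, the links in notification emails and the post-click redirects end up on the same URL.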