I am able to block sites fine when I install Squid proxy onto my EC2 instance and proxy with the EC2 IP address and default port. When I dockerized the process, I can no longer deny any sites, even with https_access deny all. Is there something special I am missing from my squid.conf, or perhaps my Dockerfile? I see all traffic go through my access.log, but I am unable to block anything. Thanks in advance.
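For reference, the kind of setup being described might look roughly like this (the blocked domain, base image, image tag and port below are placeholders, not taken from the question):

# squid.conf (sketch)
http_port 3128
acl blocked_sites dstdomain .example.org
http_access deny blocked_sites
# permissive catch-all, for testing only
http_access allow all

# Dockerfile (sketch)
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y squid && rm -rf /var/lib/apt/lists/*
COPY squid.conf /etc/squid/squid.conf
EXPOSE 3128
CMD ["squid", "-N", "-d", "1"]

The container would then be run with the proxy port published, e.g. docker run -p 3128:3128 my-squid (my-squid being a placeholder image tag), so clients can point at the host's IP on port 3128.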
I have the following problem. I'm running multiple Docker containers on one host, all listening on SMTP port 25.
However, each container should be reachable via its own hostname (domain name), and this doesn't work through NGINX because I can't use virtual hosts for SMTP.
Does anyone have a hint? Is there a special module for NGINX, or another proxy? Or is it just not possible because SMTP doesn't support hostnames?
I already tried the "stream" module, but that only works with a single instance, not with multiple containers, and the "map" module is an HTTP module.
Example:
I have a domain name https://example.com that points to a VPS on Amazon Lightsail. I have several applications I want to run. The apps are written in Vue.js and some in Spring, and I am using nginx as the web server.
The landing page is basically an app running on port 3000, but a reverse proxy serves it at the root of example.com on port 80.
I would like to run other apps at:
example.com/one, example.com/two and example.com/three, where one, two and three are applications each running inside a Docker container.
How would I go about configuring my apps this way, keeping in mind that the apps run separately inside Docker?
I highly suggest using Caddy for this type of setup.
Nginx is awesome and you could use that for the same purpose.
But for what you want to do, Caddy will work perfectly.
Just make sure to run each container on a different port.
Then use Caddy as a reverse proxy to each container:
https://medium.com/bumps-from-a-little-front-end-programmer/caddy-reverse-proxy-tutorial-faa2ce22a9c6
Let's say you have containers running on ports 5000, 8800 and 9000.
Then you could do:
example.com

# the * lets sub-paths such as /one/foo match as well
reverse_proxy /one* localhost:5000
reverse_proxy /two* localhost:8800
reverse_proxy /three* localhost:9000
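If the apps also serve sub-paths (assets, API routes and so on) and don't expect the /one prefix, a sketch using handle_path, which strips the matched prefix before proxying, could look like this (this assumes Caddy v2; the ports are the same placeholders as above):

example.com {
    handle_path /one/* {
        reverse_proxy localhost:5000
    }
    handle_path /two/* {
        reverse_proxy localhost:8800
    }
    handle_path /three/* {
        reverse_proxy localhost:9000
    }
}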
Caddy is cool because it will also set up SSL via Let's Encrypt.
I didn't have time or a server to test this now, but let me know if it works.
God bless :)
Docker can only map ports. It cannot route to a container based on an HTTP path.
You need a reverse proxy (RP).
You have two options:
Install the RP on the host
You can install the RP on your host machine. There are many pros: for example, you can use certbot for automatic Let's Encrypt certificates, and you keep the option of adding more Docker containers. For this you have to publish the containers' ports to your host machine.
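For example, publishing a container's port to the host in docker-compose might look like this (the service name, image and port numbers are illustrative):

services:
  app-one:
    image: myorg/app-one        # illustrative image
    ports:
      - "5000:3000"             # host port 5000 -> container port 3000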
Use your Docker nginx as the RP
You can also use your frontend nginx container as the RP. Just put your Docker containers on a shared Docker network and add the proxy configuration to your nginx.
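A rough sketch of that second option, assuming the app containers are named app-one, app-two and app-three and all listen on port 3000 internally (names and ports are illustrative), and that they share a Docker network with the nginx container:

server {
    listen 80;
    server_name example.com;

    location /one/ {
        # the trailing slash on proxy_pass strips the /one/ prefix before forwarding
        proxy_pass http://app-one:3000/;
    }
    location /two/ {
        proxy_pass http://app-two:3000/;
    }
    location /three/ {
        proxy_pass http://app-three:3000/;
    }
}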
After finding a solution to that problem, I have another question: I am running a Flask app in a Docker container (my web map), and on this map I want to show tiles served by a (Flask-based) Terracotta tile server running in another Docker container. The two containers are on the same Docker network and can talk to each other; however, only the port where my web server is running is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the tile server's port? Maybe by setting up some redirects or something?
The main reason for this is that I need someone else to open ports for me, which takes ages.
If you are running your Docker containers on a remote machine like EC2, then you need not worry about a port being open to the public, as by default ports are closed on EC2 and similar services. You just need to open the port on which your app is running; you can use the AWS console for that.
If you are running your Docker container locally, or on a server for which you don't have console access, then you can use some kind of firewall to open or close a port. I personally prefer UFW on Ubuntu systems. You can allow a certain port with a simple command such as sudo ufw allow 9000 to allow incoming TCP packets on port 9000. Similarly, you can deny incoming packets to a port. You can also allow access from a certain IP (like your own) using sudo ufw allow from <ip address>.
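Putting those together, a few concrete examples (the port numbers and the IP are placeholders):

sudo ufw allow 9000/tcp                                 # open TCP port 9000 to everyone
sudo ufw allow from 203.0.113.10 to any port 8080       # open port 8080 only to one IP
sudo ufw status numbered                                # review the active rules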
I use docker-compose stacks to run things on my personal VPS. Right now, I have a stack that's composed of:
Nginx (exposed port 443)
Ghost (blogging)
MySQL (for Ghost)
Adminer (for MySQL, exposed port 8080)
I wanted to try out Matomo analytics software, but I didn't want to add that to my current stack until I was happy with it, so I decided to create a second docker-compose stack for just the Matomo analytics:
Nginx (exposed port 444)
Matomo
MariaDB (for Matomo)
Adminer (for MariaDB, exposed port 8081)
With both stacks running, I can access everything at its appropriate port, but only by IP address. If I try to use my domain, it can only connect to the first Nginx, the one exposing port 443. If I try https://www.example.com:444 in my browser, it isn't able to connect. If I try https://myip:444 in my browser, it connects to the second Nginx instance exposing port 444, warning me that the SSL certificate has issues (since I'm connecting to my IP, not my domain), and then lets me through.
I was wondering if anyone knew why this behavior was happening. I'm admittedly new to setting up VPSs, using Nginx to proxy to other hosted services, etc. If it turns out Nginx cannot be used this way, I'd love to hear recommendations on how else I could arrange this. (Do I have to only have one Nginx instance running at a time, and I must proxy through it to everything else? etc)
Thank you!
I was able to fix this by troubleshooting my Cloudflare setup. I posted this question while waiting for my domain to switch from Cloudflare's name servers to my VPS's. When that finished, I tested it and requests did get through at https://example.com:444, which proved it was Cloudflare blocking me.
I found this page, which explains that the free Cloudflare plan only supports a handful of ports, and port 444 is not among them. If I upgraded to a Pro plan, I would have that option.
Therefore, I can conclude that the solution to my problem is to either upgrade my Cloudflare plan or merge the two docker-compose stacks so that I can accept requests for everything on just port 443.
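For the merge option, a rough sketch of what a single nginx fronting both hostnames on port 443 could look like (the container names, second hostname and certificate paths are placeholders; Ghost listens on 2368 by default and the matomo:apache image serves on 80):

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;       # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://ghost:2368;
    }
}

server {
    listen 443 ssl;
    server_name analytics.example.com;
    ssl_certificate     /etc/nginx/certs/analytics.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/analytics.example.com.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://matomo:80;
    }
}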
I'm new to docker and my question is similar to:
My websites running in docker containers, how to implement virtual host?
But I don't actually need to host multiple sites with different virtual hosts. I just need the server to respond to a particular virtual host name, e.g. myhost.mysite.com.
Right now the site works fine via IP, but it won't respond when I use the hostname. Since I only have the one site/hostname, do I have to set up a proxy as described in that question?
I've tried adding a -h 'myhost.mysite.com' to my docker run command but that didn't seem to make any difference.
P.S. The hostname does resolve correctly in DNS to the IP address of the Docker server.
That really depends on the web server running inside the container.
Apache: use ServerName and possibly ServerAlias
Nginx: use server_name
Django: use ALLOWED_HOSTS
Really, Docker doesn't need to know. The HTTP server software needs to know.
The question you linked to deals with multiple sites, which is why a proxy was needed. If you are only running one site, a proxy is not necessary (at least, not for this purpose). Just let Docker listen on port 80 and/or 443 itself, and let the server software running inside decide what hostname(s) are valid for the site.
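For the nginx case, for instance, a minimal sketch (the hostname comes from the question; the root path is a placeholder, and a proxy_pass to the app could stand in for it):

server {
    listen 80;
    server_name myhost.mysite.com;      # respond to requests for this hostname

    root /usr/share/nginx/html;         # or proxy_pass to the application instead
}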