Hostname for website running inside Docker

I'm new to docker and my question is similar to:
My websites running in docker containers, how to implement virtual host?
But I don't actually need to host multiple sites with different virtual hosts. I just need to get the server to respond to a particular virtual host name, e.g. myhost.mysite.com.
Right now the site works fine via IP but won't respond when I use the host name. Since I only have the one site/hostname, do I have to set up a proxy as described in that question?
I've tried adding -h 'myhost.mysite.com' to my docker run command, but that didn't seem to make any difference.
P.S. The hostname does correctly resolve via DNS to the IP address of the Docker server.

That really depends on the web server running inside the container.
Apache: use ServerName and possibly ServerAlias
Nginx: use server_name
Django: use ALLOWED_HOSTS
Really, Docker doesn't need to know. The HTTP server software needs to know.
The question you linked to deals with multiple sites, which is why a proxy was needed. If you are only running one site, a proxy is not necessary (at least, not for this purpose). Just let Docker listen on port 80 and/or 443 itself, and let the server software running inside decide what hostname(s) are valid for the site.
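For example, with Nginx this is just the server_name directive in the site's server block (a minimal sketch; the hostname comes from the question, everything else is a generic default):

    server {
        listen 80;
        server_name myhost.mysite.com;
        root /usr/share/nginx/html;
        # Requests whose Host header matches server_name are served here.
        # Docker only needs the port published, e.g. docker run -p 80:80 ...
    }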

How do I connect to other computers via Host Name on Ubuntu?

I have a Docker container that currently runs on Windows and accesses database resources via host name (e.g. Desktop1, Desktop2, etc.). The container uses a bridge network that was created specifically for this system.
What I notice on Windows is that I can ping or connect to those resources simply via host name; I do not need to remember each computer's IP address.
This also works even though I don't have a DNS server running locally (I think?).
However, when I run the container on an Ubuntu host, I keep getting connection errors and timeouts.
I have tried editing /etc/hosts and /etc/hostname to include the proper host name of the PC and the fixed wired IP I am using.
I have also tried a test database on the same Ubuntu system, but I cannot connect to it via its host name. At best, I am able to connect via something like Desktop1.local, but that only solves one case: the responses I receive from the other systems on the network contain only the bare hostname (e.g. http://Desktop2/api/..., ws://Desktop3/api/..., etc.).
I was wondering if there is a configuration I am missing to get the same behavior as on Windows? Do I need to change my code to handle this kind of situation, or do something else at the OS level?
My command for creating the docker container is along these lines:
docker create -p 172.16.0.1:50000:80/tcp --env MongoDatabaseSettings__ConnectionString="mongodb://desktop1:27017/?uuidRepresentation=standard" --env ConnectionStrings__MySQLConnection="server=desktop2;database=DB;user=user;password=password" --name container1 registry.gitlab.com/group/image:latest
Contents of my /etc/hosts
127.0.0.1 localhost
172.16.0.1 desktop1
If it were me, I would probably set up a reverse proxy server.
Step 1
Choose your proxy server (I recommend Nginx).
Step 2
Forward the traffic.
For example, if your Docker service lives at 192.168.1.2:8080, you can make 127.0.0.1:80 (or any port you want) forward to it. Then you only need to access 127.0.0.1:80 and the proxy will forward the traffic to the Docker service.
I don't know if that is what you actually want to do.
Oh, by the way, if you still want to access it via host name, just edit the hosts file as root (map a custom domain to 127.0.0.1).
I don't know why editing the hosts file did not work for you, but pointing a name at 127.0.0.1 in the hosts file always works for me.
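For illustration, a minimal Nginx forwarding configuration for the example above (a sketch; the backend address 192.168.1.2:8080 comes from this answer's example, and myapp.local is a hypothetical name mapped to 127.0.0.1 in the hosts file):

    server {
        listen 80;
        server_name myapp.local;

        location / {
            # Forward all traffic to the Docker service
            proxy_pass http://192.168.1.2:8080;
            proxy_set_header Host $host;
        }
    }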

Two Nginx instances, listening to different ports, only one reachable by domain

I use docker-compose stacks to run things on my personal VPS. Right now, I have a stack that's composed of:
Nginx (exposed port 443)
Ghost (blogging)
MySQL (for Ghost)
Adminer (for MySQL, exposed port 8080)
I wanted to try out Matomo analytics software, but I didn't want to add that to my current stack until I was happy with it, so I decided to create a second docker-compose stack for just the Matomo analytics:
Nginx (exposed port 444)
Matomo
MariaDB (for Matomo)
Adminer (for MariaDB, exposed port 8081)
With both stacks running, I can access everything at its appropriate port, but only by IP address. If I try to use my domain, it can only connect to the first Nginx, the one exposing port 443. If I try https://www.example.com:444 in my browser, it isn't able to connect. If I try https://myip:444 in my browser, it connects to the second Nginx instance exposing port 444, warning me that the SSL certificate has issues (since I'm connecting to my IP, not my domain), and then lets me through.
I was wondering if anyone knew why this is happening. I'm admittedly new to setting up VPSs, using Nginx to proxy to other hosted services, etc. If it turns out Nginx cannot be used this way, I'd love to hear recommendations on how else I could arrange this. (Do I have to run only one Nginx instance at a time and proxy through it to everything else? etc.)
Thank you!
I was able to fix this by troubleshooting my Cloudflare setup. I posted this question while waiting for my domain to switch from Cloudflare's name servers to my VPS's. When that finished, I tested again and the request did get through at https://example.com:444. This proved it was Cloudflare blocking me.
I found a page which explained that the free Cloudflare plans only support a few ports, which do not include port 444. If I upgraded to a Pro plan, I would have that option.
Therefore, I can conclude that the solution to my problem is to either upgrade my Cloudflare plan or merge the two docker-compose stacks so that I can accept requests for everything on just port 443.
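For the merge option, a single front Nginx can tell the two apps apart by Host header on port 443 alone (a minimal sketch; the upstream service names ghost and matomo and the stats subdomain are hypothetical, and the certificate directives are omitted for brevity):

    server {
        listen 443 ssl;
        server_name www.example.com;
        # ssl_certificate / ssl_certificate_key omitted
        location / {
            proxy_pass http://ghost:2368;   # Ghost's default port
        }
    }

    server {
        listen 443 ssl;
        server_name stats.example.com;      # hypothetical Matomo subdomain
        # ssl_certificate / ssl_certificate_key omitted
        location / {
            proxy_pass http://matomo:80;
        }
    }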

Setup Nginx reverse-proxy

I am absolutely new to the self-hosting community. I want to set up a home server that grants access to different applications. Using Docker I got a WordPress and a Nextcloud app up and running. Now I want to add Bitwarden and make it accessible via vault.myhosting.xx. Later I want to add SSL via Let's Encrypt.
I am using jwilder/nginx-proxy, which makes it very easy to add a new virtual host by making minor changes in the docker-compose.yml of the specific app.
I wanted to do the same for Bitwarden (I hit some bugs when editing the docker-compose.yml, see issue 188). The Bitwarden developer suggested adjusting the reverse proxy instead, but I do not know how to do this. I tried to add a new virtual host in the reverse-proxy container, but I do not understand how to link it to the Bitwarden container.
A warning: English is not my native language, so I am not sure I understand your problem exactly.
Roughly, how it works:
Any domain name is an alias for a real IP address.
Special name servers (DNS) keep these domain-IP pairs.
When a user enters an address, it is resolved through DNS.
The DNS server may be public, like Google DNS (8.8.8.8), or specific to your internet provider.
How Docker works:
Out of the box, Docker uses several network drivers: bridge, host, null, overlay...
Take bridge, for example:
Your machine running Docker is the HOST.
Any container can publish its ports to host ports.
In docker-compose it looks like this (docker-compose.yml):
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
This means host port 8080 becomes an alias for port 80 of the container. Only one application at a time can listen on a given port; if you try to add another, the container will fail to start, so use a different port. For ports below 1024 you may need root access.
Now that the app publishes a port, you can access it at an address like localhost:8080.
If you need access from the internet to this port, you may run into these problems:
1) NAT on your router -> most routers have a setting to open a host port to the internet. The way to do it is vendor-specific, but not very hard to set up.
2) Dynamic IP -> this can be worked around with a service like http://GoDaddy.com or http://noip.com. You could also buy a domain name.
Also, read or watch something about "docker network".
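For reference, the plain docker run equivalent of the compose snippet above (a sketch using the same image and ports):

    # Publish container port 80 on host port 8080
    docker run -d -p 8080:80 wordpress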

Make docker machine available under host name in Windows

I'm trying to make a docker machine available to my Windows host by host name. After creating it like
docker-machine create -d virtualbox mymachine
and setting up a docker container that exposes the port 80, how can I give that docker machine a host name such that I can enter "http://mymachine/" into my browser to load the website? When I change "mymachine" to the actual IP address then it works.
There is an answer to this question but I would like to achieve it without an entry in the hosts file. Is that possible?
You might want to refer to the Docker documentation:
https://docs.docker.com/engine/userguide/networking/#exposing-and-publishing-ports
You expose ports using the EXPOSE keyword in the Dockerfile or the --expose flag to docker run. Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports. Exposing ports is optional.
You publish ports using the --publish or --publish-all flag to docker run. This tells Docker which ports to open on the container's network interface. When a port is published, it is mapped to an available high-order port (higher than 30000) on the host machine, unless you specify the port to map to on the host machine at runtime. You cannot specify the port to map to on the host machine when you build the image (in the Dockerfile), because there is no way to guarantee that the port will be available on the host machine where you run the image.
I also suggest reviewing the -P flag as it differs from the -p one.
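For example (a sketch, assuming an image such as nginx that EXPOSEs port 80):

    # -p maps a specific host port to a container port
    docker run -d -p 8080:80 nginx

    # -P publishes every EXPOSEd port on a random high host port
    docker run -d -P nginx
    docker port <container>   # shows which host port was chosen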
I also suggest you try "Kitematic" for Windows or Mac, https://kitematic.com/. It's much simpler (but don't forget to commit after any changes!).
Now, concerning the network in your company: it has nothing to do with Docker. As long as you're using Docker locally on your computer, it won't matter what configuration your company has set. You don't even have to change any VM network config in order to expose things to your local host; it all comes by default if you're using VirtualBox (adapter 1 => NAT & adapter 2 => host-only).
Hope this is what you're looking for.
If the goal is to keep it as simple as possible for multiple developers, localhost will be your best bet. As long as the ports you're exposing and publishing are available on host, you can just use http://localhost in the browser. If it's a port other than 80/443, just append it like http://localhost:8080.
If you really don't want to go the /etc/hosts or localhost route, you could also purchase a domain and have it route to 127.0.0.1. This article lays out the details a little bit more.
Example:
dave-mbp:~ dave$ traceroute yoogle.com
traceroute to yoogle.com (127.0.0.1), 64 hops max, 52 byte packets
1 localhost (127.0.0.1) 0.742 ms 0.056 ms 0.046 ms
Alternatively, if you don't want to purchase your own domain, and all developers are on the same network, and you are able to control DHCP/DNS, you can set up your own DNS server with a private record pointing back to 127.0.0.1. It's a similar concept to the public DNS option, but a little more brittle, since you might allow your devs to work remotely, outside of a controlled network.
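With dnsmasq, for instance, a single line is enough to answer every query under a private zone with 127.0.0.1 (a sketch; dev.example.com is a hypothetical zone name):

    # /etc/dnsmasq.conf
    # Resolve dev.example.com and all of its subdomains to localhost
    address=/dev.example.com/127.0.0.1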
Connecting by hostname requires that you go through hostname to IP resolution. That's handled by the hosts file and falls back to DNS. This all happens before you ever touch the docker container, and docker machine itself does not have any external hooks to go out and configure your hosts file or DNS servers.
With newer versions of Docker on Windows, you run containers with Hyper-V, and networking automatically maps ports to localhost so you can connect to http://localhost. This won't work with docker-machine, since it spins up VirtualBox VMs without the localhost mapping.
If you don't want to configure your hosts file or DNS, and can't use a newer version of Docker, you're left with connecting by IP. What you can do is use a free wildcard DNS service like http://xip.io/ that maps any name you want, along with your IP address, back to that same IP address. This lets you use things like a hostname-based reverse proxy to connect to multiple containers inside of Docker behind the same port.
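To illustrate the xip.io scheme (a sketch; 192.168.99.100 is just docker-machine's common default VM IP, yours may differ):

    curl http://app.192.168.99.100.xip.io     # resolves to 192.168.99.100
    curl http://other.192.168.99.100.xip.io   # same IP, different Host header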
One last option is to run your docker host VM with a static IP. Docker-machine doesn't support this directly yet, so you can either rely on luck to keep the same IP from a given range, or use another tool like Vagrant to spin up the docker host VM with a static IP on the laptop. Once you have a static IP, you can modify the hosts file once, create a DNS entry for every dev, or use the same xip.io URL, to access the containers each time.
If you're on a machine with Multicast DNS (that's Bonjour on a Mac), then the approach that's worked for me is to fire up an Avahi container in the Docker Machine VirtualBox VM. This lets me refer to VM services at <docker-machine-vm-name>.local. No editing /etc/hosts, no crazy networking settings.
I use different Virtualbox VMs for different projects for my work, which keeps a nice separation of concerns (prevents port collisions, lets me blow away all the containers and images without affecting my other projects, etc.)
Using docker-compose, I just put an Avahi instance at the top of each project:
version: '2'
services:
  avahi:
    image: 'enernoclabs/avahi:latest'
    network_mode: 'host'
Then if I run a webserver in the VM with a docker container forwarding to port 80, it's just http://machine-name.local in the browser.
You can add a domain name entry in your hosts file:
X.X.X.X mymachine # Replace X.X.X.X by the IP of your docker machine
You could also set up a DNS server on your local network, if your app is meant to be reachable by your coworkers at your workplace and your Windows machine is meant to remain up as a server.
That would require making your VM accessible from the local network, though; port forwarding could then be a simple solution if your app is the only web service running on your Windows host. (Note that you could just as well set up a Linux server to avoid using docker-machine on Windows, but you would still have to give this server a static IP to ensure that your domain name resolution keeps working.)
You could also buy your own domain name (or get a free one) and assign it to your docker machine's IP if you don't have the rights to write to your hosts file.
But these solutions may stop working after some time if the app host doesn't have a static IP and your docker-machine's IP changes. Not setting up a static IP doesn't mean it will automatically change, though; it should persist as long as you don't erase the machine and create a new one, but that isn't guaranteed either.
Also note that if you set up a DNS server, you'd have to host it on a device with a static IP as well. Your coworkers would then have to configure their machines to use it.
I suggest nginx-proxy. This is what I use all the time. It comes in especially handy when you are running different containers that are all supposed to answer on the same port (e.g. multiple web services).
nginx-proxy runs separately from your service and listens to Docker events to update its own configuration. After you spin up your service and query the port nginx-proxy is listening on, you will be redirected to your service. Therefore you either need to start nginx-proxy with the DEFAULT_HOST flag or send the desired host as a header with the request.
As I am running this only with plain Docker, I don't know if it works with docker-machine, though.
If you go for this option, you can decide on a certain domain (e.g. .docker) to be completely resolved to localhost. This can be done company-wide by DNS, locally with the hosts file, or with an intermediate resolver (the specific solution depends on your OS, of course). If you then try to reach http://service1.docker, nginx-proxy will route to the container that has ENV VIRTUAL_HOST=service1.docker. This is really convenient, because it only needs a one-time setup and is dynamic from then on.
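A minimal sketch of that setup with plain docker commands (my-web-app is a hypothetical image; the nginx-proxy invocation follows its documented usage):

    # nginx-proxy watches the Docker socket and reconfigures itself
    # whenever containers start or stop
    docker run -d -p 80:80 \
        -v /var/run/docker.sock:/tmp/docker.sock:ro \
        jwilder/nginx-proxy

    # Any container started with VIRTUAL_HOST is picked up automatically
    docker run -d -e VIRTUAL_HOST=service1.docker my-web-app

    # With service1.docker resolving to localhost (hosts file or DNS),
    # nginx-proxy routes by Host header:
    curl -H "Host: service1.docker" http://localhost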

Squid Proxy doesn't work when running as Docker Container

I am able to block sites fine when I install Squid proxy on my EC2 instance and proxy using the EC2 IP address and default port. When I dockerized the process, I could no longer deny any sites, even with https_access deny all. Is there something special I am missing from my squid.conf or perhaps my Dockerfile? I see all traffic go through my access.log, but I am unable to block anything. Thanks in advance.
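For reference, a minimal containerized Squid setup looks roughly like this (a sketch, not the asker's actual files; note that Squid's access-control directive is http_access, which also governs HTTPS CONNECT tunnels, and the proxy port has to be published on the host):

    # squid.conf (minimal deny-all sketch)
    http_port 3128
    acl blocked_sites dstdomain .example.com
    http_access deny blocked_sites
    http_access deny all

    # Dockerfile
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y squid
    COPY squid.conf /etc/squid/squid.conf
    EXPOSE 3128
    CMD ["squid", "-N"]

    # Build and run, publishing the proxy port
    docker build -t mysquid .
    docker run -d -p 3128:3128 mysquid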
