Flask in docker, access other flask server running locally - docker

After finding a solution for this problem, I have another question: I am running a flask app in a docker container (my web map), and on this map I want to show tiles served by a (flask-based) Terracotta tile server running in another docker container. The two containers are on the same docker network and can talk to each other; however, only the port my web server runs on is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the port of the tile server? Maybe by setting up some redirects or something?
Main reason for this is that I need someone else to open ports for me, which takes ages.

If you are running your docker containers on a remote machine like EC2, then you need not worry about a port being open to the public, as ports are closed by default in EC2 and similar services. You just need to open the port on which you are running your app; you can use the AWS console for that.
If you are running your docker container locally, or on some server for which you don't have console access, then you can use some kind of firewall to open or close a port. I personally prefer UFW on Ubuntu systems. You can allow a port with a simple command such as sudo ufw allow 9000, which lets incoming TCP packets reach port 9000. Similarly you can deny incoming packets to a port. You can also open a port only to a certain IP (like your own) using sudo ufw allow from <ip address>.
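As a minimal sketch of that UFW approach, assuming the web map is published on port 8000 and the tile server on port 5000 (both port numbers are placeholders here):

sudo ufw allow 8000/tcp      # keep the public web-server port reachable
sudo ufw deny 5000/tcp       # refuse direct access to the tile server's published port
sudo ufw status verbose      # confirm which rules are active

One caveat: Docker publishes container ports by inserting its own iptables rules, which can bypass UFW, so it is worth checking from another machine that the deny rule actually takes effect; if the tile server's port is not published at all (no -p flag), there is nothing to block in the first place.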

Related

Securing docker containers between 2 servers

One of my RPis (a 3B+, 192.168.0.3) is running out of memory, so I want to remove the NginxProxyManager docker container from it to save some memory.
I have put a couple of the containers running on the RPi (192.168.0.3) behind another NginxProxyManager running on my main server (192.168.0.2). So far so good.
The only problem with this solution is that you can access the containers via the RPi's IP and port number from any device on the same network, and, if I think correctly, the data between the NPM on my main server and the RPi containers is not encrypted (some containers do not use HTTPS).
The connection is on my local LAN, so it should be secure and there should not be any snooping, but I would still like to create some kind of direct tunnel between 192.168.0.2 and 192.168.0.3 (certain ports and containers only).
What would be the proper way to allow ONLY my main server access to certain ports on my RPi?
Or am I worrying too much? ;-)
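For the port-restriction part, a minimal sketch with UFW on the RPi, using the allow ... from form mentioned earlier (8080 stands in for whichever container port you want to protect):

sudo ufw allow ssh                                            # don't lock yourself out
sudo ufw allow from 192.168.0.2 to any port 8080 proto tcp    # only the main server may connect
sudo ufw deny 8080/tcp                                        # everyone else is refused
sudo ufw enable

Note that Docker's own iptables rules can bypass UFW for published ports, so verify from another device that the rules really take effect. Also, this only controls who can connect; traffic between the two hosts stays unencrypted unless the services use HTTPS or you add a tunnel (e.g. SSH or a VPN) between 192.168.0.2 and 192.168.0.3.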

Map Google Cloud VM docker port to HTTPS

I have a Google Cloud VM which runs a docker image. The docker image runs a specific Java app on port 1024. I have pointed my domain's DNS to the VM's public IP.
This works: I can go to mydomain.com:1024 and access my app, since Google Cloud directly exposes the docker port as a public port. However, I want to access the app through https://mydomain.com (port 443), so basically map port 443 to port 1024 on my VM.
Note that my docker image starts an nginx service. Previously I configured the Java app to run on port 443; the nginx service then listened on 443, Google Cloud exposed this HTTPS port, and everything worked fine. But I cannot use port 443 for my app anymore, for specific reasons.
Any ideas? Can I configure nginx somehow to map to this port? Or do I set up a load balancer to proxy the traffic (which seems rather complex, as this is all pretty new to me)?
PS: in Google Cloud you cannot use "docker run -p 443:1024 ...", which would basically do the same thing if I am right, but the containerized VMs do not allow this.
Container Optimized OS maps ports one to one. Port 1000 in the container is mapped to 1000 on the public interface. I am not aware of a method to change that.
For your case, use Compute Engine with Docker or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just publish HTTP within your application. Google can then manage your SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
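To make the Compute Engine suggestion concrete: on a plain Compute Engine VM with Docker installed (rather than the containerized-VM feature), the host-port mapping from the question becomes available again. A rough sketch, with the image name as a placeholder and TLS still to be terminated either by the nginx inside the image or by a load balancer in front:

docker run -d --name java-app -p 443:1024 my-java-app-image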

Exposing a docker container to the internet

I deployed a ghost blogging platform on my server using docker. Now I want to expose it to the internet but I'm having some difficulties doing so.
I opened port 8000 on my router and forwarded it to port 32769, which is the one assigned to that container. Using port 32769 inside my network I can access the website fine, but when I try to access it from the internet it gives a "took too long to respond" error.
Local IP + port: http://10.0.0.140:32769/ (screenshots of the Docker port config, a port tester, and the router settings were attached to the original post)
This post was also added to Super User, since it was suggested it would get better answers there.
Let's say your application inside docker is now working on port 8000, and you want to expose it to the internet.
The request would go: internet -> router -> physical computer (host machine) -> docker.
First, make your application reachable from your host machine. You can document the port with an EXPOSE 8000 instruction in the Dockerfile, but the port actually has to be published: when starting your docker image as a container, add the -p parameter, such as
sudo docker run -d -it -p 8000:8000 --name docker_container_name docker_image_name
From now on, your docker application can be accessed from your host machine, i.e. your physical computer.
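A quick way to verify this step before touching the router (assuming the app answers plain HTTP on port 8000):

curl http://localhost:8000/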
Next, forward the port from your router to your host machine. This is the part you already described in your question.
Finally, access your application from the internet. If I am thinking correctly, the IP address 10.0.0.140 is just your computer's LAN address; it is not accessible from the internet. You can only connect to your app via a public IP. To find it, check your router for the WAN IP address assigned to it by your internet service provider, or just google "what is my IP".
What works for me, more or less, is setting up Apache2 as reverse proxy, redirecting a path in Apache2 to the port of the Docker container. This probably could also be done for example with NGINX.
This way the traffic from the net gets proxied to the container and back to the net, and I see the WordPress site. So regarding the OP's question, the docker container is now exposed to the internet.
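As a rough sketch of the same idea in NGINX terms (the server name and the /blog/ path are placeholders; 32769 is the container port from the question), a server block like this proxies the path to the container:

server {
    listen 80;
    server_name example.com;

    location /blog/ {
        proxy_pass http://127.0.0.1:32769/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The trailing slash on proxy_pass strips the /blog/ prefix before the request reaches the container.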
However 1: This still doesn't explain why I don't get return traffic from the Docker container if I access it directly from the net.
However 2: Not all the URLs in the WordPress site are correct, but that seems to be a WordPress issue and not a Docker / routing issue.

Docker app not available with host IP when using automatic port mapping

I am deploying a Eureka server on a VM (say the host's external IP is a.b.c.d) as a docker image. I am trying this in two ways.
1. I run the docker image without explicit port mapping: docker run -p 8671 test/eureka-server
Running the docker ps command then shows the port mapping as 0.0.0.0:32769->8761/tcp.
Trying to access the Eureka server from outside the VM with http://a.b.c.d:32769, it is not available.
2. I run the docker image with explicit port mapping: docker run -p 8761:8761 test/eureka-server
Running the docker ps command then shows the port mapping as 0.0.0.0:8761->8761/tcp.
Trying to access the Eureka server from outside the VM with http://a.b.c.d:8761, it is available.
Why, in the first case, is the Eureka server not available from outside the host machine, even though docker assigned a random port (32769)?
Is explicit port mapping necessary to make a docker app available from an external network?
Since you're looking for access from the outside world to the host via the mapped port, you'll need to ensure that the source traffic is allowed to reach that port on the host for the given protocol. I'm not a network security specialist, but I'd suggest that opening up an entire range of ports simply because you don't know which port docker will pick would be a bad idea. If you can, pick a port, explicitly map it, and ensure the firewall allows access to that port from the appropriate source address(es), e.g. ALLOW TCP/8671 in from 10.0.1.2/32 as an example - obviously your specific address range will vary with your network configuration.
Docker compose may help you keep this consistent (as will other orchestration technologies like Kubernetes). In addition, if you use cloud hosting services like AWS, you may be able to leverage VPC security groups to whitelist source traffic to the port without knowing all possible source IP addresses ahead of time.
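Put into commands, that suggestion is just the explicit mapping from the question plus one firewall rule; UFW stands in here for whatever firewall you actually use, and the source address is the example from the answer, not something to copy literally:

docker run -d -p 8761:8761 test/eureka-server
sudo ufw allow from 10.0.1.2/32 to any port 8761 proto tcp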
Either you have a firewall blocking this port, or outgoing traffic to certain destination ports is disabled wherever you are making the requests from, so your requests never leave your machine.
Some companies do this: they leave ports 80, 443, and a couple more open for their intranet, and disable all other destination ports.

Make docker machine available under host name in Windows

I'm trying to make a docker machine available on my Windows machine under a host name. After creating it like
docker-machine create -d virtualbox mymachine
and setting up a docker container that exposes the port 80, how can I give that docker machine a host name such that I can enter "http://mymachine/" into my browser to load the website? When I change "mymachine" to the actual IP address then it works.
There is an answer to this question but I would like to achieve it without an entry in the hosts file. Is that possible?
You might want to refer to the docker documentation:
https://docs.docker.com/engine/userguide/networking/#exposing-and-publishing-ports
You expose ports using the EXPOSE keyword in the Dockerfile or the --expose flag to docker run. Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports. Exposing ports is optional.
You publish ports using the --publish or --publish-all flag to docker run. This tells Docker which ports to open on the container's network interface. When a port is published, it is mapped to an available high-order port (higher than 30000) on the host machine, unless you specify the port to map to on the host machine at runtime. You cannot specify the port to map to on the host machine when you build the image (in the Dockerfile), because there is no way to guarantee that the port will be available on the host machine where you run the image.
I also suggest reviewing the -P flag as it differs from the -p one.
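Roughly, the difference looks like this (the image name is a placeholder):

docker run -d --expose 8080 my-image      # documents the port only; nothing is published
docker run -d -p 8080:80 my-image         # publishes container port 80 on host port 8080
docker run -d -P my-image                 # publishes every EXPOSEd port on a random high host port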
Also, I suggest you try "Kitematic" for Windows or Mac, https://kitematic.com/ . It's much simpler (but don't forget to commit after any changes!).
Now, concerning the network in your company: it has nothing to do with docker. As long as you're using docker locally on your computer, it won't matter what configuration your company has set. You don't even have to change any VM network config in order to expose things to your local host; it all comes by default if you're using VirtualBox (adapter 1 => NAT & adapter 2 => host-only).
Hope this is what you're looking for.
If the goal is to keep it as simple as possible for multiple developers, localhost will be your best bet. As long as the ports you're exposing and publishing are available on the host, you can just use http://localhost in the browser. If it's a port other than 80/443, just append it, like http://localhost:8080.
If you really don't want to go the /etc/hosts or localhost route, you could also purchase a domain and have it route to 127.0.0.1. This article lays out the details a little bit more.
Example:
dave-mbp:~ dave$ traceroute yoogle.com
traceroute to yoogle.com (127.0.0.1), 64 hops max, 52 byte packets
1 localhost (127.0.0.1) 0.742 ms 0.056 ms 0.046 ms
Alternatively, if you don't want to purchase your own domain, all developers are on the same network, and you are able to control DHCP/DNS, you can set up your own DNS server to include a private record resolving to 127.0.0.1. Similar concept to the public DNS option, but a little more brittle, since you might allow your devs to work remotely, outside of a controlled network.
Connecting by hostname requires that you go through hostname to IP resolution. That's handled by the hosts file and falls back to DNS. This all happens before you ever touch the docker container, and docker machine itself does not have any external hooks to go out and configure your hosts file or DNS servers.
With newer versions of Docker on windows, you run containers with HyperV and networking automatically maps ports to localhost so you can connect to http://localhost. This won't work with docker-machine since it's spinning up virtualbox VM's without the localhost mapping.
If you don't want to configure your hosts file, DNS, and can't use a newer version of docker, you're left with connecting by IP. What you can do is use a free wildcard DNS service like http://xip.io/ that maps any name you want, along with your IP address, back to that same IP address. This lets you use things like a hostname based reverse proxy to connect to multiple containers inside of docker behind the same port.
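For example (a sketch; the VirtualBox address shown is a typical default and will differ on your machine, and the myapp prefix is arbitrary):

docker-machine ip mymachine                  # e.g. 192.168.99.100
curl http://myapp.192.168.99.100.xip.io/     # the wildcard DNS name resolves back to that same IP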
One last option is to run your docker host VM with a static IP. Docker-machine doesn't support this directly yet, so you can either rely on luck to keep the same IP from a given range, or use another tool like Vagrant to spin up the docker host VM with a static IP on the laptop. Once you have a static IP, you can modify the host file once, create a DNS entry for every dev, or use the same xip.io URL, to access the containers each time.
If you're on a machine with Multicasting DNS (that's Bonjour on a Mac), then the approach that's worked for me is to fire up an Avahi container in the Docker Machine vbox. This lets me refer to VM services at <docker-machine-vm-name>.local. No editing /etc/hosts, no crazy networking settings.
I use different Virtualbox VMs for different projects for my work, which keeps a nice separation of concerns (prevents port collisions, lets me blow away all the containers and images without affecting my other projects, etc.)
Using docker-compose, I just put an Avahi instance at the top of each project:
version: '2'
services:
  avahi:
    image: 'enernoclabs/avahi:latest'
    network_mode: 'host'
Then if I run a webserver in the VM with a docker container forwarding to port 80, it's just http://machine-name.local in the browser.
You can add a domain name entry in your hosts file:
X.X.X.X mymachine # Replace X.X.X.X by the IP of your docker machine
You could also set up a DNS server on your local network if your app is meant to be reachable from your coworkers at your workplace and if your windows machine is meant to remain up as a server.
That would require making your VM accessible from the local network, though; port forwarding could then be a simple solution if your app is the only web service running on your Windows host. (Note that you could just as well set up a Linux server to avoid using docker-machine on Windows, but you would still have to give that server a static IP so that your domain name resolution keeps working.)
You could also buy your own domain name (or get a free one) and assign it to your docker-machine's IP if you don't have the rights to write to your hosts file.
But these solutions may stop working after some time if the app host doesn't have a static IP and your docker-machine IP changes. Not setting up a static IP doesn't mean the IP will automatically change, though; there should be some persistence as long as you don't erase the machine and create a new one, but that isn't guaranteed either.
Also note that if you set up a DNS server, you'd have to host it on a device with a static IP as well. Your coworkers would then have to configure their machine to use this one.
I suggest nginx-proxy. This is what I use all the time. It comes in especially handy when you are running different containers that are all supposed to answer to the same port (e.g. multiple web-services).
nginx-proxy runs separately from your service and listens to docker events to update its own configuration. After you spin up your service and query the port nginx-proxy is listening on, you will be redirected to your service. Therefore you either need to start nginx-proxy with the DEFAULT_HOST flag or send the desired host as a header with the request.
As I am running this only with plain docker, I don't know if it works with docker-machine, though.
If you go for this option, you can decide on a certain domain (e.g. .docker) to be completely resolved to localhost. This can be done company-wide by DNS, locally with the hosts file, or with an intermediate resolver (the specific solution depends on your OS, of course). If you then try to reach http://service1.docker, nginx-proxy will route the request to the container that has ENV VIRTUAL_HOST=service1.docker. This is really convenient, because it only needs a one-time setup and is dynamic from then on.
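A minimal sketch of that setup with plain docker commands (the proxy image is commonly published as jwilder/nginx-proxy; the service image name and the VIRTUAL_HOST value are placeholders):

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=service1.docker my-service-image

nginx-proxy watches the docker socket and regenerates its configuration whenever a container with a VIRTUAL_HOST variable appears, so requests arriving with Host: service1.docker are routed to the second container.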
