Link docker containers and use wildcard subdomains

I have two docker containers (container_one and container_two); one is linked to the other: container_one >>link>> container_two.
When I run a curl command from within container_one using the address http://container_two/index.php, the curl command executes successfully, as expected.
However, I would like to introduce a wildcard subdomain so that I can attach any number of subdomains to container_two (e.g. site1.container_two, site2.container_two, *.container_two, etc.). Obviously, running a curl command from container_one against http://site1.container_two/index.php does not work with linking alone.
Does anyone know how this would be possible with a docker run command or perhaps some other way?

Basically, you cannot do this with just --link flags, because --link adds an entry to the /etc/hosts file to facilitate this inter-container communication, and /etc/hosts files do not support wildcard entries.
However, you could set up a DNS server on container_one, configure a wildcard host (or individual subdomain records) on that DNS server to point to container_two, forward all other hostnames to your actual DNS, and then specify --dns=127.0.0.1 in your docker run command for container_one. This seems a bit hacky, but the effect is that container_one will query 127.0.0.1 (localhost) whenever it encounters a hostname it does not recognize from /etc/hosts; the local DNS server then answers with container_two's address for those subdomains and forwards every other request to your external DNS infrastructure.
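A minimal sketch of that setup, assuming dnsmasq is installed and running inside container_one, that container_two resolves to 172.17.0.3, and that container_one is built from a hypothetical my_image (all illustrative values):

# /etc/dnsmasq.conf inside container_one (illustrative)
address=/container_two/172.17.0.3   # answers container_two and *.container_two with this IP
server=8.8.8.8                      # forward every other hostname to a real resolver

docker run --link container_two:container_two --dns=127.0.0.1 --name container_one my_image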
You can find more information about this in the documentation. Good luck!

Related

Can (Should) I Run a Docker Container with Same host name as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running a mixed economy (some docker containers, some regular applications). We need a setup where the server runs as a docker application on a single VM, and that server will be accessed by non-docker clients (as well as docker clients not running on the same docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding an entry for the server hostname to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host, and that has mostly worked, but I've seen cases where a docker container running on the same docker network as the server had issues connecting to it.
I realize this is not an ideal docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process, and our company has lots of applications.
I'm really curious whether anyone has advice or lessons learned from this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts approach we're already using.)
You can do port mapping with -p 8080:80.
How do you build and run your container? With a shell command, a Dockerfile, or a yml file?
Check this:
docker port
Call [SERVER IP]:[PORT ON THE DOCKER HOST] and it will work.
To work with hostnames you need DNS, or you have to use the hosts file.
The hosts file solution is not a good idea; that's how the internet worked in its early days ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your existing "add entries to /etc/hosts" process. You can use configuration management, like Ansible/Chef/Puppet, so you only have to update one location and then distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this: you need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database; hence the "service discovery" part.
Implementing that database is the hardest part; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which one fits best.
Normally, running an internal DNS server like dnsmasq or BIND will get you most of the way, but if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation.
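As a rough sketch of the "lazy mode is DNS" option, assuming dnsmasq on a central host reachable at 10.0.0.5, with purely illustrative names and addresses:

# /etc/dnsmasq.d/internal.conf on the DNS host (illustrative)
host-record=serverhostname,10.0.0.20   # the name the server advertises in its URLs
host-record=dockerhostname,10.0.0.20   # the VM that actually runs the container

# clients then point at that DNS server instead of carrying their own /etc/hosts entries:
echo "nameserver 10.0.0.5" >> /etc/resolv.conf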

How do I connect to other computers via Host Name on Ubuntu?

I have a docker container that is running on Windows currently and it is accessing database resources via the host name (e.g Desktop1, Desktop2, etc...). The docker container is using a bridge network that was created new for the purpose of the system.
What I notice on Windows is that I can ping or connect to those resources simply via the host name and I do not need to remember the IP address of the computer.
I also notice that this can be done even when I don't have a DNS server running locally (I think?).
However, when I run the container on an Ubuntu host, I keep getting connection errors and timeouts.
I have tried to edit the /etc/hosts and /etc/hostname to include the proper host name of the PC and the fixed wired IP I am using.
I have also tried a test database on the same Ubuntu system, but I cannot connect to it via its host name either. At best, I am able to connect via something like Desktop1.local, but that only solves one piece of the problem: the responses I receive from the other systems on the network reference only the bare hostname (e.g. http://Desktop2/api/..., ws://Desktop3/api/..., etc.).
I was wondering if there is a configuration I am missing to get the same behaviour as on Windows. Do I need to change my code to handle this kind of situation, or do I need to do something at the OS level?
My command for creating the docker container is along these lines:
docker create -p 172.16.0.1:50000:80/tcp --env MongoDatabaseSettings__ConnectionString="mongodb://desktop1:27017/?uuidRepresentation=standard" --env ConnectionStrings__MySQLConnection="server=desktop2;database=DB;user=user;password=password" --name container1 registry.gitlab.com/group/image:latest
Contents of my /etc/hosts
127.0.0.1 localhost
172.16.0.1 desktop1
If it were me, I would probably try building a reverse proxy server.
Step 1
Choose your proxy server (I recommend Nginx).
Step 2
Forward the traffic.
For example, if your docker service is at 192.168.1.2:8080, you can make 127.0.0.1:80 (or any port you want) forward to it.
Then you just need to access 127.0.0.1:80, and the server will forward the traffic to the docker service.
I don't know if that is what you actually want to do.
Oh, by the way, if you still want access via host name, just edit the hosts file as root (make 127.0.0.1 resolve to a custom domain).
I don't know why you cannot set the hosts file, but setting 127.0.0.1 in the hosts file has always worked for me.
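A minimal Nginx sketch of that forwarding, assuming the docker service listens on 192.168.1.2:8080 and desktop2 is the host name clients will use (both illustrative):

# /etc/nginx/conf.d/docker-forward.conf (illustrative values)
server {
    listen 80;
    server_name desktop2;                    # the host name clients request
    location / {
        proxy_pass http://192.168.1.2:8080;  # the docker service behind the proxy
        proxy_set_header Host $host;         # pass the original host header through
    }
}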

CoreOS security

I'm playing with CoreOS and DigitalOcean, and I'd like to start allowing internal communication between my containers.
I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
I've explored a lot of options for this, but none of them seem satisfactory:
Using '-p', I can ensure docker binds to the local interface, but this has two downsides:
I can't easily test services by SSHing in, because that traffic originates from localhost
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
I tried using flannel, but it doesn't make the traffic private (or I didn't set it up right)
I considered using iptables on the containers to prevent external access, but that doesn't seem as secure
I tried using iptables on the CoreOS hosts, but ... it's tricky, and I couldn't get it working.
When I tried to configure iptables on the host, I used the method here: https://docs.docker.com/articles/networking/#communication-between-containers-and-the-wider-world, adding a DROP rule to the DOCKER chain, but it didn't work and packets still got through.
So what's the best approach? I'll invest the time to make it work.
Overall, I guess I need to find something that I can:
Roll out to all the hosts reliably
Something that is reasonably flexible going forward
Something that allows for 'edge machines' which are accessible from the wider internet.
Solution
I'll go into how I ended up solving this. Thanks to larsks for their help; in the end, their approach was the correct one. It's tricky on CoreOS because there aren't really stable addresses of the kind larsks assumes. The whole point of CoreOS is to be able to forget about IP addresses.
I solved this by finding a not-too-bad way to inject the IP address into the command in the service file. The tricky thing is that systemd doesn't really support a lot of the shell features I expected. What I wanted to do was assign the machine's IP address to a variable and then inject it into the command:
ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');
/usr/bin/docker run -p $ip:7000:7000 ...
But, as mentioned, that doesn't work. So what to do? Get the shell!
ExecStart=/usr/bin/sh -c "\
export ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');\
echo $ip;\
/usr/bin/docker run -p $ip:7000:7000"
I hit a few problems along the way.
I'm pretty sure the line continuations mean there are no real newlines in that command, so I had to add the ';' separators.
When you test the above sh -c command in a shell, it behaves very differently from when systemd runs it: in the shell you need to escape the '$' characters, while in systemd unit files you don't.
I included the echo so that I could see what the command thought the IP was.
While doing all this, I actually added a small webserver to the docker image so that I could test it with curl.
The downsides of this approach are that it's tied to the way ifconfig formats its output, and to IPv4. In fact, it doesn't work on my Linux Mint laptop, where ifconfig produces differently formatted output. The important lesson here is to have tools output YAML or JSON, so that shell JSON tools can pull values out more easily.
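For example, a sketch of a more robust variant, assuming a reasonably recent iproute2 (with JSON output) and jq available on the host, which is not a given on an older CoreOS image:

ip=$(ip -j -4 addr show eth1 | jq -r '.[0].addr_info[0].local')   # structured output instead of grep-ping ifconfig
/usr/bin/docker run -p "$ip":7000:7000 ...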
Instead of grep-ping the IP address, you can use the environment files to get the IP address (both public and private) of the host the service gets scheduled on. This allows you to bind your container ports to either public or private ports in a simple way.
Like so:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/docker run --name myservice \
-p ${COREOS_PUBLIC_IPV4}:80:80 \
-p ${COREOS_PRIVATE_IPV4}:3306:3306 \
ubuntu /bin/bash
I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
This is exactly the behavior that you get with the -p option when you specify an ip address. Let's say I have a host with two external interfaces, eth0 (with address 10.0.0.10) and eth1 (with address 192.168.0.10), and the docker0 bridge at 172.17.42.1/16.
If I start a container like this:
docker run -p 192.168.0.10:80:80 -d larsks/mini-httpd
This will start a container that is accessible over the eth1 interface at 192.168.0.10, port 80. This service is also accessible -- from the host on which the container is located -- at the address assigned to the container on the docker0 network. This would be something like 172.17.0.39, port 80.
This seems to meet your goals:
The container port is exposed over the "private" eth1 interface.
The container port is accessible from the host.
I can't easily test services by SSHing in, because that traffic originates from localhost.
If you were running ssh inside a container, you would ssh to it at the "internal" address assigned by Docker. But if you are running ssh inside your containers, you may want to consider not doing that and rely on tools like docker exec instead.
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
With this solution, there is no need to inject the machine ip into the container.

Route traffic to a docker container based on subdomain

I have wildcard dns pointed to my server e.g. *.domain.com
I'd like to route each subdomain to its own docker container.
So that box1.domain.com goes to the appropriate docker container.
This should work for any traffic primarily HTTP and SSH.
Or perhaps the port can be part of the subdomain e.g. 80.box1.domain.com.
I will have lots of docker containers so the solution should be dynamic not hard-coded for every container.
Another solution would be to use https://github.com/jwilder/nginx-proxy.
This tool automatically forwards requests to the appropriate container (based on subdomain via the VIRTUAL_HOST container environment variable).
For instance, if you want to redirect box1.domain.com to a container, simply set the VIRTUAL_HOST container environment variable to "box1.domain.com".
Here is a detailed tutorial I wrote about it: http://blog.florianlopes.io/host-multiple-websites-on-single-host-docker.
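For reference, a minimal sketch of that setup (the proxy invocation follows the nginx-proxy README; my-backend-image is a hypothetical image):

# start the proxy, mounting the docker socket so it can watch containers come and go
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# start a backend; requests for box1.domain.com get routed to it
docker run -d -e VIRTUAL_HOST=box1.domain.com my-backend-image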
I went with interlock to route http traffic using the nginx plugin.
I settled on using a random port for each SSH connection, as I couldn't get it to work using the subdomain alone.
The easiest solution would be to use the Apache mod_rewrite RewriteMap method. It's very performant when used against a text file, but it can call a script if desired. There is another StackOverflow answer that covers the script variant pretty well.
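A rough sketch of the RewriteMap approach, assuming mod_rewrite and mod_proxy are enabled and that the map file path and backend addresses are purely illustrative:

# httpd.conf (illustrative)
RewriteEngine On
RewriteMap backends txt:/etc/apache2/backends.map
RewriteRule ^/(.*)$ http://${backends:%{HTTP_HOST}}/$1 [P,L]

where /etc/apache2/backends.map maps each subdomain to a container address, one pair per line:

box1.domain.com 172.17.0.2:80
box2.domain.com 172.17.0.3:80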
If you want to avoid Apache, the good folks over at dotCloud created Hipache to do the routing for their PaaS services. They even documented the different things they tried before building their own solution. I found a reference to tsuru.io using hipache exactly for routing to docker containers, so that definitely validates it for this purpose.
My answer may come too late, but when you use docker you don't really need ssh to connect to your containers; with the docker exec command, you can run shell commands directly in a running container.
Here is my advice: use the nginx proxy container mentioned above to configure the sub-domains, and run Portainer on your host to get a visual overview of your containers, images and logs, and even execute commands in them, all through the Portainer GUI.
I used Apache's ProxyPreserveHost:
ProxyPreserveHost On
ProxyPass "/" "http://localhost:4533/"
ProxyPassReverse "/" "http://localhost:4533/"

Docker container linking via port forwarding?

It seems that the preferred way to expose services to other Docker containers is container linking, which sets some environment variables that you then have to use in your application code to look up host names and port numbers:
psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT
Is there a reason this is not done via port forwarding in a way that is transparent to the application? So that in the same way that I can just run my web server inside the container on standard port 80 and have Docker figure out what actual port to use, I could just be doing
psql -h 0.0.0.0 # no -p necessary, we use the default port
The port forwarding would be set up when I start docker, just like with server ports.
This is possible! It has actually been proposed by the CoreOS team; you can read more in the following blog post:
http://coreos.com/blog/Jumpers-and-the-software-defined-localhost/
Docker will soon allow starting a container that shares the network namespace of another container; that will help with these scenarios (and in the short term, it will let you do what you suggest very easily).
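In current Docker this is exposed as --net=container:<name>. A minimal sketch, assuming the official postgres image and a hypothetical my-app-image:

docker run -d --name pg postgres               # database container (official image)
docker run -d --net=container:pg my-app-image  # joins pg's network namespace
# inside my-app-image, postgres is now reachable at localhost:5432, with no -p mapping or linking needed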
Project Atomic is also following this approach:
http://www.projectatomic.io/docs/inter-container-networking/
Geard uses iptables to enable containers to connect to each other. Network namespaces allow adding iptables rules to the network namespace of a container. The basic idea is to make remote endpoints appear as if they were local to a container; for example, the database container could be made to appear to be running locally inside the application container.
