docker containers communication on dev machine - docker

I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch. I am confused as to how I can create a container that can be used in production and on my local machine (mac). How are people providing configuration like this these days?
So far I have come up with having my process take environment variables as arguments, which I can pass to the container with docker run -e. It seems unlikely that I would be doing this type of thing in production.

I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch
If elasticsearch is running in its own container on the same host (managed by the same Docker daemon), then you can link it to your own container (at the docker run stage) with the --link option (which sets environment variables):
docker run --link elasticsearch:elasticsearch --name <yourContainer> <yourImage>
See "Linking containers together"
In that case, your container config can be static and known/written in advance, as it will refer to the Elasticsearch host simply as 'elasticsearch'.
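For reference, here is a minimal sketch of what that link gives you inside the container (container and image names are placeholders, and it assumes the elasticsearch image exposes port 9200, as the official one does):
docker run -d --name elasticsearch elasticsearch
docker run --rm --link elasticsearch:elasticsearch myimage env | grep ELASTICSEARCH_
# prints e.g. ELASTICSEARCH_PORT_9200_TCP_ADDR and ELASTICSEARCH_PORT_9200_TCP_PORT;
# the alias 'elasticsearch' also resolves via /etc/hosts inside the container,
# so a static config pointing at http://elasticsearch:9200 works on any host.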

How about writing it into the configuration file of your application and mounting the configuration directory into your container with -v?
To make it more organized, I use Ansible for orchestration. This way you can have a template of the configuration file for your application, while the actual parameters live in the variable file of the corresponding Ansible playbook at a centralized location. Ansible is in charge of copying the template to the desired location and doing the variable substitution for you. It has also recently enhanced its Docker support.
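A rough sketch of that approach (paths and file names are made up): Ansible renders a template such as config.yml.j2, with the elasticsearch address filled in from its variable file, into a host directory, and the container mounts that directory read-only:
docker run -d -v /srv/myservice/conf:/etc/myservice:ro --name myservice myimage
# The application reads /etc/myservice/config.yml at startup; changing the
# address only requires re-running the playbook and restarting the container.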

Environment variables are absolutely fine (we use them all the time for this sort of thing) as long as you're using service names, not IP addresses. Even with IP addresses you'd have no problem as long as you only have one ES and you're willing to restart your service every time the ES IP address changes.
You should really ask someone who knows for sure how these things are resolved in your production environment, because you're unlikely to be the only person in your org who has had this problem -- connecting to a database poses the same problem.
If you have no constraints at all, then you should check out something like Consul from HashiCorp. It'll help you a lot with this problem, if you are allowed to use it.
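A minimal sketch of the environment-variable approach, using a hypothetical variable name and a service name instead of an IP:
docker run -d -e ELASTICSEARCH_URL=http://elasticsearch:9200 --name myservice myimage
# In production the same variable is injected by whatever starts the container
# (a compose file, a systemd unit, an orchestrator), so the image stays
# identical across environments.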

Related

Deploy Docker services to a remote machine via ssh (using Docker Compose with DOCKER_HOST var)

I'm trying to deploy some Docker services from a Compose file to a Vagrant box. The Vagrant box does not have a static IP. I'm using the DOCKER_HOST environment variable to set up the target engine.
This is the command I use: DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d. The BOX_IP and BOX_USER vars contain the correct IP address and username (obtained at runtime from the Vagrant box).
I can connect and deploy services this way, but the SSH connection always asks whether I want to trust the machine. Since the VM gets a dynamic IP, my known_hosts file gets polluted with lines I only used once, and they might cause trouble in the future if the IP is taken again.
Assigning a static IP results in error messages stating that the machine does not match my known_hosts entry.
And setting StrictHostKeyChecking=no is not an option either, because it opens the door to a lot of security issues.
So my question is: how can I deploy containers to a remote Vagrant box without the issues mentioned above? Ideally I could start a Docker container that handles the deployments. But I'm open to any other idea as well.
The reason why I don't just use a bash script while provisioning the VM is that this VM acts as a testing ground for a physical machine. The scripts I use are the same as for the real machine, and I test them regularly and automatically inside a Vagrant box.
UPDATE: I'm using Linux
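One possible way to avoid both the prompt and the known_hosts pollution (not suggested in the post itself; assumes you trust the network at the moment the box comes up) is to pin the box's current key at provision time instead of answering interactively:
ssh-keygen -R "$BOX_IP"                          # drop any stale entry for this IP
ssh-keyscan -H "$BOX_IP" >> ~/.ssh/known_hosts   # record the box's current host key
DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d
# Re-running the keyscan on every "vagrant up" keeps the entry current, so no
# interactive trust question and no stale lines accumulating.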

Can (Should) I Run a Docker Container with the Same Hostname as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some Docker containers, some regular applications). We need a setup where the server runs as a Docker application on a single VM, and that server will be accessed by non-Docker clients (as well as Docker clients not running on the same Docker network).
So you have a server hostname (the hostname of the Docker container) and a Docker host hostname (the hostname of the VM running Docker).
The client's initial connection is to: dockerhostname:1234, but when the server sends URLs to the client, it sends: serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding an entry for "serverhostname" to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server Docker container to the same name as the Docker host, and it has mostly worked, but I've seen cases where a Docker container running on the same Docker network as the server had issues connecting to the server.
I realize this is not an ideal Docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping: -p 8080:80
How do you build and run your container?
With a shell command, a Dockerfile, or a YAML file?
Check this:
docker port <yourContainer>
Then call the service like this and it will work:
[SERVER IP]:[PORT MAPPED ON THE DOCKER HOST]
To work with hostnames you need DNS, or you can use the hosts file.
The hosts file solution is not a good idea; that's how the internet was run in the early days ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to the /etc/hosts file" process. You can use configuration management like Ansible/Chef/Puppet so you only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing the database is the hardest part of this; there are a bunch of different ways, and you'll need to spend some time with your team to figure out which is best.
Normally, running an internal DNS server like dnsmasq or BIND should get you most of the way, but if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation.
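As a sketch of the "lazy mode is DNS" idea, a single dnsmasq instance can publish the advertised name to every client (the name and address below are made up):
dnsmasq --address=/serverhostname/192.0.2.10
# Clients point their resolver at this dnsmasq host instead of each carrying
# their own /etc/hosts entry; in practice you would put
# address=/serverhostname/192.0.2.10 in /etc/dnsmasq.conf rather than on the
# command line.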

Read host's ifconfig in the running Docker container

I would like to read the host's ifconfig output while the Docker container is running, so that I can parse it, get the OpenVPN interface (tap0) IP address, and process it within my application.
Unfortunately, passing this value via the environment doesn't work in my case, because the IP address can change while the container is running, and I don't want to restart my application container each time just to see the new value.
My current working solution is a cron job on the host which writes the IP into a file on a shared volume, and the container reads from it. But I am looking for a better solution, as this seems like a workaround. There was also a plan to create a new container with network: host, which can see the host's interfaces. It works, but it also looks like a workaround, since it involves many steps and probably raises security issues.
So my question is: is there any valid and cleaner way to achieve my goal, i.e. read the host's ifconfig in a Docker container in real time?
A specific design goal of Docker is that containers can’t directly access the host’s network configuration. The workarounds you’ve identified are pretty much the only ways to do this.
If you’re trying to modify the host’s network configuration in some way (you’re trying to actually run a VPN, for example) you’re probably better off running it outside of Docker. You’ll still need root permission either way, but you won’t need to disable a bunch of standard restrictions to do what you need.
If you’re trying to provide some address where the service can be reached, using configuration like an environment variable is required. Even if you could access the host’s configuration, this might not be the address you need: consider a cloud environment where you’re running on a cloud instance behind a load balancer, and external clients need the load balancer; that’s not something you can directly know given only the host’s network configuration.
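For completeness, a sketch of the shared-volume workaround described in the question (paths are hypothetical): a host cron job writes the tap0 address to a file, and the container mounts that directory read-only:
# host crontab entry:
# * * * * * ip -4 -o addr show tap0 | awk '{print $4}' | cut -d/ -f1 > /var/run/hostinfo/tap0_ip
docker run -d -v /var/run/hostinfo:/hostinfo:ro --name myapp myimage
# The application re-reads /hostinfo/tap0_ip whenever it needs the current value,
# so no container restart is required when the address changes.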

Isolated Docker environments via SSH

I am setting up a series of Linux command line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done of their infrastructure, they set things up as follows:
All ssh-based games on OverTheWire run in Docker containers. When you
login with SSH to one of the games, a fresh Docker container is
created just for you. Noone else is logged in into your container, nor
are there any files from other players lying around. We opted for this
setup to provide each player with a clean environment to experiment
and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout) so that simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but am unsure how to set up a similar system - particularly how to spawn new Docker containers on SSH login to a server and then destroy them on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here: first, understand what Docker really does and how it works; second, the system that orchestrates the whole setup.
Let me give a brief introduction. I won't go into details, but Docker is essentially a platform that works like system virtualization: it lets you isolate a process, an operating system, or a whole application without any kind of hypervisor. The container shares the kernel of the host system, and everything it contains is isolated from the host and from the rest of the containers.
So the basic principle you are looking for is a system that orchestrates containers, each running an SSH server with port 22 open. There are many ways to reach this goal; one of them is with this Docker sshd server image:
docker run -itd --rm rastasheep/ubuntu-sshd bash
Docker needs a process to keep the container alive. By using -it you create an interactive session with the "bash" interpreter, which keeps the container alive and lets you start a bash terminal inside an isolated Ubuntu server.
--rm: removes the container once you exit it.
rastasheep/ubuntu-sshd: the Docker image name.
As you can see, what is still missing is a system that connects your application to this Docker platform. One approach would be the Python library that drives the Docker client programmatically. As a piece of advice, I would recommend installing Docker on your computer, creating a couple of Ubuntu containers with an SSH server, and connecting to them from your host. It will help you see whether you really need an sshd server at all and, if so, which network setup you will need to route all the clients into their containers. Read the official Docker network documentation.
With the example I described, a fresh terminal is started and there is no need to connect to the container via SSH. This way you won't need to route the traffic, identify free host ports to connect clients to the containers, or check and shut down the container once the connection has finished; otherwise the container would stay alive.
There are many ways your system could be built, and I would strongly recommend starting by creating some containers with the Docker CLI to understand how it works.
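One common pattern for the "fresh container per SSH login" part (the user, image, and script path below are hypothetical, and the login user needs access to the Docker daemon, which is effectively root, so treat this as a sketch): make the player's login shell a wrapper that drops the session straight into a throwaway container.
# /usr/local/bin/docker-shell -- wrapper installed as the player's login shell
#!/bin/sh
exec docker run --rm -it --network none ubuntu:22.04 bash

chmod +x /usr/local/bin/docker-shell
useradd -m -s /usr/local/bin/docker-shell player1
# sshd runs the wrapper on login; --rm destroys the container when the player
# logs out, so each session starts from a clean image.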

Use Eureka despite having random external port of docker containers

I am writing an application that is composed of a few Spring Boot based microservices with a Zuul-based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use Docker for the services; this doesn't seem to be possible right now.
Normally you would have a fixed "internal" port and randomized ports at the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use Eureka with Docker-based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example https://localhost:8080/ but due to dynamic port assignment it is really only accessible via https://localhost:54321/
So eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it works for me...
When you start Docker with "--net=host" (host networking), you use the host's network stack directly. Then I just use 0 as the port for Spring Boot and Spring randomizes the port for me; as it's using the host's networking stack, there is no translation to a different port (and IP).
There are some drawbacks though:
When you use host networking you can't use the link feature for these containers, either as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
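For reference, a sketch of that workaround (the image name is hypothetical; SERVER_PORT=0 is Spring Boot's relaxed binding for server.port=0, i.e. a random port):
docker run -d --net=host -e SERVER_PORT=0 my-org/my-spring-service
# With host networking the randomly chosen port is directly reachable on the
# host, so the port the service registers with Eureka matches reality. See the
# update below for why a fixed port per container is usually the better choice.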
A lot of time has passed and I think I should elaborate this a little bit further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier (sketched below).
If you have a public-facing service then you would use a fixed port anyway.
For local starts via Maven or the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs surrounding random ports and service registration).
If for whatever reason you want or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
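A sketch of that recommended setup (network, image, and service names are made up; EUREKA_CLIENT_SERVICEURL_DEFAULTZONE is Spring's relaxed binding for eureka.client.serviceUrl.defaultZone):
docker network create spring-net
docker run -d --net spring-net --name eureka -p 8761:8761 my-org/eureka-server
docker run -d --net spring-net --name users-service \
  -e SERVER_PORT=8080 \
  -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka:8761/eureka/ \
  my-org/users-service
# Every service container uses the same internal port 8080; since each gets its
# own IP on spring-net, nothing collides and the registered address is reachable
# by the other containers on that network.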
You can set up a directory for each docker instance and share it between the host and the instance and then write the port and IP address to a file in that directory.
$ instanceName=$(generate random instance name)
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run --name $instanceName -v ${dirName}:/mnt/metadata ...
$ echo $(get port number and host IP) > ${dirName}/external-address
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
