Docker - Accessing host from private network

My team and I have a lot of small services that work with each other to get the job done. We came up with some internal tooling and some solutions that work for us; however, we are always trying to improve.
We have created a docker setup where we do something like this:
That is, we have a private network in place where the services can call each other by name and that works.
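A minimal sketch of such a setup with Docker Compose (the image names and the 6001-6003 ports are assumptions drawn from the description below) could look like:
version: '3'
services:
  service-a:
    image: example/service-a   # placeholder image
    ports:
      - "6001:6001"            # host:container
  service-b:
    image: example/service-b   # placeholder image
    ports:
      - "6002:6002"
  service-c:
    image: example/service-c   # placeholder image
    ports:
      - "6003:6003"
On the default network Compose creates, service-a reaches service-b at http://service-b:6002, while the host reaches it at http://localhost:6002.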
In this situation, if service-a called service-b at http://localhost:6002, it wouldn't work.
We have a scenario where we want to work on one module and run the rest on Docker. So we could run service-a directly from our IDE, for example, and leave service-b and service-c on Docker. The last service (service-c) we would then reference as localhost:6003 from the host network.
This works fine!
However, things can get "out of hand" the further we go down the line. In the example we only have 3 services, but our longest chain is about 6 services. Supposing I want to work on the one before the last, I have to start all services that come before it in order to simulate a complete chain. (In most cases we work through APIs, which renders the point moot, but bear with me.) Something like this:
The QUESTION
Is there a way to allow a situation such as this one?
In order to run part of the services as docker containers and maybe one service from my IDE for example?
Or would it be necessary to put all of them on the host network so that they can all call each other through localhost?
I appreciate any help!

You can use --add-host service-b:$(hostname -i) to push the host IP address into a container, so that the container doesn't need to know which services are not running in Docker.
You could set up your Docker Compose file to accept arguments for WHICH service you want to run on the host...
extra_hosts:
  - "${HOST_SVC:-fakehost}:${HOST_IP}"
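For illustration, a hedged sketch of how that could be wired together (the service and image names are placeholders):
services:
  service-a:
    image: example/service-a
    extra_hosts:
      # map the service that runs outside Docker to the host's IP
      - "${HOST_SVC:-fakehost}:${HOST_IP}"
# then start the stack, telling it that service-b lives on the host:
# HOST_SVC=service-b HOST_IP=$(hostname -i) docker-compose up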
But honestly, the easiest solution is probably just to set up the service you want to work on with its source code mounted in, run them all in Docker, and restart that container as needed.
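For that approach, a sketch of a compose override that bind-mounts your working copy (assuming the code lives in ./service-a and the image expects it at /app) might be:
services:
  service-a:
    volumes:
      - ./service-a:/app   # local source shadows the code baked into the image
After editing, a plain docker-compose restart service-a picks up the changes.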

Related

Docker namespace, docker on virtualbox, mirror environment

Let's assume a scenario: I'm using a set of CLI docker run commands to create a whole environment of containers and networks (bridge type in my case) and to connect containers to particular networks.
Everything works well as long as I only want one such environment on a single machine.
But what if I want to have, on the same machine, a similar environment to the one I've just created, but for a different purpose (testing)? I run into name collisions, since I can't create and start containers and networks with the same names.
So far I have tried to start the second environment the same way I did the first, but with a prefix on all container and network names. That worked but had a flaw: in the running application, all requests to URIs were broken, since they had the structure
<scheme>://<container-name>:<port-number>
and the application was not able to reach <prefix-container-name>.
What I want to achieve is to have an exact copy of the first environment running on the same machine as the second environment that I could use to perform the application tests etc.
Is there any concept of namespaces or something similar to it in Docker?
A command that I could put before all the docker run etc. commands I use to create the environment, so that I end up with just two bash scripts that differ only by the namespace command at their beginning?
Can using a virtual machine, i.e. Oracle VirtualBox, be the solution to my problem? Create a VM for the second environment? Isn't that overkill, and will it add an additional set of troubles?
Perhaps there is a kind of --hostname for the docker run command that would allow other containers to access the container by that name? Unluckily, --hostname only gives the ability to access the container by that name from the container itself, not from any other. Perhaps there is an option or command that can create an alias, virtual host, or whatever magic common name I could put into the apps' URIs, <scheme>://<magic-name>:<port-number>, so that creating a second environment with different container and network names causes no problems, as long as that magic name is available on the environment's network.
My need for an exact copy of the environment comes from tests I want to run, to check whether they also fail at the dependency level; I think this is quite a simple scenario from the continuous integration process. Are there any dedicated open-source solutions to what I want to achieve? I don't use Docker Compose, but a bash script with all the docker CLI commands to get the whole environment up and running.
Thank you for your help.
Is there any concept of namespaces or something similar to it in Docker?
Not really, no (but keep reading).
Can using virtual machine [...] be the solution to my problem? ... Isn't that an overkill, will it add an additional set of troubles?
That's a pretty reasonable solution. That's especially true if you want to further automate the deployment: you should be able to simulate starting up a clean VM and then running your provisioning script on it, then transplant that into your real production environment. Vagrant is a pretty typical tool for trying this out. The biggest issue will be network connectivity to reach the individual VMs, and that's not that big a deal.
Perhaps there is a kind of --hostname for docker run command that will allow to access the container from other container by using this name?
docker run --network-alias is very briefly mentioned in the docker run documentation and has this effect. docker network connect --alias is slightly more documented and affects a container that's already been created.
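For example (the network and alias names here are just placeholders):
# give the container an extra DNS name on a user-defined network
docker network create testnet
docker run -d --name test-db --network testnet --network-alias db mysql:8
# or, for a container that already exists:
docker network connect --alias db testnet existing-container
Other containers on testnet can then reach it as db, regardless of its real container name.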
Are there any dedicated open source solutions to what I want to achieve?
Docker Compose mostly manages this for you, if you want to move off of your existing shell-script solution: it puts a name prefix on all of the networks and volumes it creates, and creates network aliases for each container matching its name in the YAML file. If your host volume mounts are relative to the current directory, then that content is fairly isolated too. The one thing you can't easily do is launch each copy of the stack on different host ports, so you have to resolve those conflicts yourself.
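For example, the same compose file can be brought up twice under different project names, which prefixes all container, network and volume names (only the published host ports still have to differ):
docker-compose -p env1 up -d
docker-compose -p env2 up -d   # a second, isolated copy of the same stack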
Kubernetes has a concept of a namespace which is in fact exactly what you're asking for, but adopting it is a substantial investment and would involve rewriting your deployment sequence even more than Docker Compose would.
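As a rough illustration of that concept only (it assumes you already had Kubernetes manifests, which you don't today):
kubectl create namespace testing
kubectl apply -n testing -f ./manifests/   # the same manifests, isolated per namespace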

How to link multiple docker swarm services?

I'm a huge fan of the Docker philosophy (or at least I think I am). Even so, I'm still quite a novice, in the sense that I don't seem to grasp the intended way of using Docker.
As I see it currently, there are two ways of using docker.
Create a container with everything I need for the app in it.
For example, I would like something like a Drupal site. I would then put nginx, php, mysql and the code into one container. I could run this as a service in swarm mode and scale it as needed. If I need another Drupal site, I would then run a second container/service that holds nginx, php, mysql and (slightly) different code. I would now need two images to run containers or services from.
Pros - Easy, everything I need in a single container.
Cons - Cannot run each container on port 80 (so I need a reverse proxy or something). (Not sure, but I could imagine that) server load is higher, since there are multiple containers/services running nginx, php and mysql.
Create 4 separate containers. 1 nginx container, 1 php container, 1 mysql container and 1 code/data container.
For example, I would like the same Drupal site. I could now run them all as separate services and scale them across my servers as the number of code containers (Drupal sites or other sites) increases. I would only need one image per container/service instead of a separate image for each site.
Pros - Modular, single responsibility per service (1 for the database, 1 for the webserver, etc.), easy to scale only the area that needs scaling (scale the database if requests increase, nginx if traffic increases, etc.).
Cons - I don't know how to make this work :).
Personally I would opt to make a setup according to the second option. Have a database container/service, nginx container/service etc. This seems much more flexible to me and makes more sense.
I am struggling, however, with how to make this work. How would I make the nginx service find the php service, and point the nginx config to the code folder in the data service, etc.? I have read some things about an overlay network, but that does not make clear to me how nginx would look for php in a separate container/service.
I therefore have 2 (and a half) questions:
How is docker meant to be used (option 1 or 2 above or totally different)?
How can I link services together (make nginx look for php in a different service)?
(half) I know I am a beginner trying to grasp the concept but setting up a simple webserver and running websites seems like a basic task (at least, it is for me in conventional ways) but I can't seem to find my answers online anywhere. Am I totally off par in the way I think I would like to use docker or have I not been looking well enough?
How is docker meant to be used (option 1 or 2 above or totally different)?
It's up to you. I prefer using option #2, but I have at times used a mix of option #1 and option #2 as well. So it all depends on the use case and which option looks better for it. At one of our clients it was necessary to have SSH, Nginx and PHP all in the same container, so we mixed #1 and #2: MySQL and Redis in their own containers and the app in one container.
How can I link services together (make nginx look for php in a different service)?
Use docker-compose to define your services and docker stack to deploy them. You won't have to worry about the names of the services.
version: '3'
services:
  web:
    image: nginx
  db:
    image: mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
Now deploy using
docker stack deploy --compose-file docker-compose.yml myapp
In your nginx container you can reach mysql by using its service name, db. So linking happens automatically and you need not worry.
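As a quick sanity check (assuming the stack was deployed as myapp above, and that getent is available in the web image), you can resolve the db name from inside a web container:
docker exec $(docker ps -q -f name=myapp_web) getent hosts db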
I know I am a beginner trying to grasp the concept but setting up a simple webserver and running websites seems like a basic task (at least, it is for me in conventional ways) but I can't seem to find my answers online anywhere. Am I totally off par in the way I think I would like to use docker or have I not been looking well enough
There are a lot of good resources available in the form of articles; you just need to look.

Running several apps via docker-compose

We are trying to run two apps via docker-compose. These apps are (obviously) in separate folders, each of them having its own docker-compose.yml. On the filesystem it looks like this:
dir/app1/
-...
-docker-compose.yml
dir/app2/
-...
-docker-compose.yml
Now we need a way to compose these guys together for they have some nitty-gritty integration via http.
The issue with the default docker-compose behaviour is that it treats all relative paths with respect to the folder it is run from. So if you go to dir from the example above and run
docker-compose -f app1/docker-compose.yml -f app2/docker-compose.yml up
you'll end up being out of luck if any of your docker-compose.yml's uses relative paths to env files or whatever.
Here's the list of ways that actually work, but have their drawbacks:
1. Run those apps separately, and use networks.
It is described in full at Communication between multiple docker-compose projects
I've tested that just now, and it works. Drawbacks:
you have to mention the network in docker-compose.yml and push that to the repository some day, rendering the entire app un-runnable without the app that publishes the network.
you have to come up with some clever way for those apps to actually wait for each other
2. Use absolute paths. Well, that is just bad and does not need any elaboration.
3. Expose the ports you need on the host machine and make the apps talk to the host without knowing a thing about each other. That, too, is obviously meh.
So, the question is: how can one manage the task with just docker-compose?
Thanks to everyone for your feedback. Within our team we have agreed to the following solution:
Use networks & override
Long story short, your original docker-compose.yml files should not change a bit. All you have to do is create a docker-compose.override.yml next to each one, which publishes the network and hooks your services into it.
So, whoever wants to have a standalone app runs
docker-compose -f docker-compose.yml up
But when you need to run the apps side by side, communicating with each other, you should go with
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
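A hedged sketch of what such an override could look like for app1 (the shared-net name and the app1 service name are placeholders; one simple variant is to create the network once by hand and mark it external in every override):
# dir/app1/docker-compose.override.yml
version: '3'
services:
  app1:                  # must match a service name from docker-compose.yml
    networks:
      - default          # keep the app's own network
      - shared-net
networks:
  shared-net:
    external: true       # created beforehand with: docker network create shared-net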

Linking containers in Docker

Docker allows you to link containers by name.
I have two questions on this:
Suppose A (client) is linked to B (service), and B's port is exposed dynamically (i.e. the actual host port is determined by Docker, not given by the user). What happens if B goes down and is restarted?
Does Docker update the environment variable on A?
Does Docker assign the very same port again to B?
Is A link to B broken?
…?
Besides that, it's quite clear that this works fine if both containers are run on the same host machine. Does linking containers also work across machine boundaries?
Have you looked into the ambassador pattern?
It's ideal for this scenario, where you may want the app server linked to the DB server, but if you take the DB server down, the app server needs to be restarted as well.
http://docs.docker.io/en/latest/use/ambassador_pattern_linking/
I would say: try ;).
At the moment, docker has no control whatsoever over the process once it has started, as it calls execve(3) without forking. It is not possible to update the environment; that's why the links need to be set up before the container runs and can't be edited afterward.
Docker will try to reassign the same port to B, but there is no guarantee, as another container could be using it.
What do you mean by 'broken'? If you disable the networking between unlinked containers, it should still work if you stop/start a container.
No, you can't link containers across hosts yet.

Is it feasible to control Docker from inside a container?

I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all be inside containers, and that I then use a special management container to manage the other containers.
The idea is that my host machine should be as dumb as absolutely possible (currently I use CoreOS with the only state being a systemd config starting my management container).
The management container is used as a push target for creating new containers based on the source code I send to it (using SSH, I think; at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and manages back-ups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this, I forward the Docker Unix socket using the -v option when starting the management container.
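Concretely, that kind of setup usually boils down to something like this (the image name is a placeholder):
# hand the host's Docker socket to the management container
docker run -d --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  example/management-image
Any Docker client inside the container then talks to the host's daemon through that socket.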
Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.
This is totally OK, and you're not the only one to do it :-)
Another example is to use the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSE'd TCP port, itself published with -p, and proxy requests to the UNIX socket.
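A minimal sketch of just the proxying part (with no authentication at all, which is the whole point of the management container and would have to be added on top), assuming socat is available in the image:
# inside the management container: forward TCP port 2375 to the mounted Docker socket
socat TCP-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock
The image would EXPOSE 2375 and the container be published with e.g. -p 127.0.0.1:2375:2375.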
As this question is still of relevance today, I want to answer with a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container. This allows the execution of any command inside the container. So, for example, if an intruder controls this container, he controls all other Docker containers.
If you expose the socket on a TCP port as suggested by jpetzzo, you will have the same problem, only worse, because now you won't even have to compromise the container but just the network. If you filter the connections (as suggested in his comment), the first problem remains.
TLDR;
You could do this and it will work, but then you have to think about security for a bit.
