Docker Distributed Application Bundle (DAB): no network creation options

Before the new version of Docker I was using docker-compose with -H :swarm_manager_port to distribute my services. In the new version Docker introduced the stack & bundle system to distribute apps; however, I couldn't find any option to set a fixed IP and subnet for containers via an overlay network, as I could with Compose and the classic Swarm. In my compose file I declare a network with the IPv4 and subnet options set.
So does anyone know how I can assign a constant IP (e.g. 10.1.2.3) to containers via a bundle, and why the bundle file does not support network options like "networks" in Compose?
Thanks

At the moment it does not appear that network creation is part of the DAB file spec. It's still in development/experimental, so that might change, but the README on DABs at the time of this answer states:
Networks that the service containers should be connected to. An entity deploying a bundle should create networks as needed.
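In practice that means creating the overlay network yourself before deploying the bundle, for example with a fixed subnet. A minimal sketch (the network name and subnet are examples; this pins the network's address range, not an individual container's IP):
$ docker network create --driver overlay --subnet 10.1.2.0/24 my-overlay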

Docker containers on WSL2 don't get added to the bridge network

Issue: My containers (all of which are webservers) can't communicate with each other by container name (the DNS lookup fails). I can make them communicate by creating a new network and adding each created container to that network, but I'd prefer not to have to do this manually.
Details: According to the docs, all new containers should automatically get added to the bridge network and be able to communicate with each other simply via container_name:port. However, on WSL2, even though the bridge network exists, the containers don't seem to be added to it, because they can't communicate with each other by name.
Workarounds that I've tried:
I am making it work right now by creating a network and adding the containers to that network. However, this is cumbersome and not feasible once I eventually have a large number of containers.
docker-compose is an option, but my integration test suite creates containers from inside it, so all my integration tests would stop working (and I'd have to switch to a new integration test suite entirely).
Is there a way that I can make new containers automatically join the bridge network (or my own network) without using docker-compose?
Docker Desktop version: 3.2.2 (61853)
Windows 10; Build 19042.928
Turns out my docker containers WERE getting added to the default bridge network. However, their inability to communicate with each other by name is by design: containers on the default bridge network can't talk to each other by hostname; they must use IP addresses to communicate.
You can attach a container to the default bridge network explicitly when you run it:
docker run --network="bridge" <mycontainer>
You can check exactly what is going on inside with:
docker inspect <containerID>
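By contrast, a user-defined bridge network gives you Docker's embedded DNS, so containers resolve each other by name. A minimal sketch (network and container names are examples):
docker network create webnet
docker run -d --name web1 --network webnet nginx
docker run --rm --network webnet busybox wget -qO- http://web1   # resolves "web1" by name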
I would go through these checks to isolate the issue (a sketch of the matching commands follows the list):
1. Check that the bridge network itself is working on the WSL system, as WSL2 is new and still has some issues.
2. Check the container itself; if it runs, Docker is creating the container correctly.
3. Try to reach the other container by IP; if the IP works but the name doesn't, it is purely a DNS issue.
4. Following on from point 3, check whether DNS resolution inside the container is functioning correctly.
If possible, could you share the exact error and the DNS status?
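A rough sketch of commands matching those checks (container names are examples; ping/nslookup must exist inside the image):
docker network inspect bridge                                   # 1. is the default bridge network healthy?
docker ps                                                       # 2. are the containers actually running?
docker inspect -f '{{json .NetworkSettings.Networks}}' web1     # find web1's IP address
docker exec web2 ping -c 1 <web1_ip>                            # 3. does plain IP connectivity work?
docker exec web2 nslookup web1                                  # 4. if IP works but this fails, it is a DNS issue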

Docker Swarm - Unable to connect to other containers because IP lookup fails

Say we provision an overlay network using docker swarm and create various containers with the following names:
Alice
Bob
Larry
John
Now if we try to ping any container from another it fails, because the container does not know how to do the IP lookup, i.e., alice does not know bob's IP and so on. We have been taking care of this by manually editing /etc/hosts on every container and entering the name/IP key-value pairs in that file, but this is becoming very tedious with every restart of our network. There ought to be a better way of handling this.
Services created using docker stack, for example, do not suffer from this problem. For various reasons we are stuck with creating containers using vanilla docker create. How can we make containers discover each other on the overlay network without the manual labor of editing /etc/hosts?
Below is the detailed workflow we currently have to follow (see the sketch after this list):
We first provision a docker swarm and an overlay network.
Then, for each container, we create it using the docker create command and start it using the docker start command. We use the --network flag to attach the container to the overlay network at creation time.
We then use docker container inspect to get the IP address of each container. This involves running n commands and noting down the IP addresses.
Then we log into each container and edit the /etc/hosts file by hand, entering the (name, IP) key-value pairs of the other containers. Summed across containers, this means entering n*(n-1) records by hand.
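For reference, a sketch of that manual workflow (names are examples; on swarm-mode Docker the overlay must be created as attachable for standalone containers):
$ docker network create --driver overlay --attachable my-overlay
$ docker create --name alice --network my-overlay alpine sleep 3600
$ docker start alice
$ docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' alice
# ...then repeat per container and copy each (name, IP) pair into every other container's /etc/hosts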
Not sure why docker create does not do all this automatically; Docker already knows (or can know) all the IP addresses. Containers provisioned using docker stack, for example, do not have to go through this manual process to "discover" each other. The reasons we cannot use docker stack are:
it does not allow us to specify the container name
we run various commands (mostly docker cp) before starting the container, and this is not possible with stack
You might have seen this already: DNS on user-defined networks.
Have you created your services as in the section "Attach a service to an overlay" in this doc?
It seems that the only thing needed is to refer to the containers by {name}.{network} instead of just {name}. There is no need to edit /etc/hosts, use the --add-host flag, or run an additional DNS server. See https://forums.docker.com/t/need-help-connecting-containers-in-swarm-mode/77944/6
Further details: the official Docker documentation does not mention anywhere the need to add the .{network} suffix to the {containername}. Indeed, on this link, in Step #7 of the walk-through, no .{network} suffix is used, so I am not sure why we need it. The version of Docker we are using is 18.06.1-ce for Linux.
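A minimal sketch of the suggested addressing (assuming containers named alice and bob attached to an overlay called my-overlay, and that ping exists in the images):
$ docker exec alice ping -c 1 bob.my-overlay
$ docker exec alice ping -c 1 bob              # depending on the Docker version, the bare name may work too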
I had a similar issue: I was following this official tutorial to create a docker swarm overlay network on two Raspberry Pi 3 boards, and ping was impossible until I found the answer on GitHub. As I understand it, the latest version of alpine (for a reason I don't know) is not suitable for the Raspberry Pi 3, so the solution was to use version 3.12.3, like this: sudo docker run -dit --name alpine1 --network test1 alpine:3.12.3
Hope this might help someone :)

Azure Service Fabric On-Premise docker network default ip range

We are running Windows containers on an on-premise Azure Service Fabric installation. We are building the fabric nodes from a corporate template (Windows 2016 with container support) that also contains an internal firewall product (which also controls the flows between internal networks on the node). The configuration of this firewall is centrally managed.
In order to configure the firewalls correctly, we need to control the IP range of the Docker network. To do this we created our own docker network (of type 'nat') and named it 'xyz', since the current docker-ee for Windows version does not accept the "fixed-cidr" parameter in the configuration file.
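Roughly, that custom network can be created along these lines (a sketch; the subnet is an example, and whether a given docker-ee for Windows version accepts it may vary):
docker network create -d nat --subnet 10.0.9.0/24 xyz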
When using containers in Service Fabric we ran into problems, because when the container is started by SF it tries to attach to a default network named 'nat'. Apparently it is not possible to name a custom network 'nat', or to pass Service Fabric the name of the network the container should attach to (either through a classic application package or a docker-compose file).
To solve the problem, any of the following would work:
Fix the IP segment address during Docker for Windows installation
Have the option to specify the name of the network the container should connect to when started by service fabric (when starting the container manually this can be done with the --network option)
????
Any suggestions?
The only thing you can do today is option 1.
We are working on 2 :-)

Docker linked containers, Docker Networks, Compose Networks - how should we now 'link' containers

I have an existing app that consists of 4 docker containers running on the same host. They have been linked together using the link command.
However, after some Docker upgrades, the link behaviour has been deprecated and, it seems, changed. We are now having issues where containers lose the link to each other.
So Docker says to use the new network feature instead of linked containers. But I can't see how this works.
If 2 containers are in the same network, are the same ENV vars automatically exposed on the containers as if they were linked?
Or is the hosts file updated with the correct container name / ip addresses ? Even after a docker restart ?
I can't see in the docs how a container can find the location of another in its network?
Also, compose looks to have a simple set up for linking containers, and may automate some of this - would compose be the way to go for defining multi container apps? Or is it too soon to run it in production?
Does compose support multiple host configuration as well?
At some point in the future we will probably need to move one of the containers to a different host....
If 2 containers are in the same network, are the same ENV vars automatically exposed on the containers as if they were linked?
No, you would now have to use the container names as their hostnames. The new network feature has no idea which ports will be used. Think of it as 2 computers plugged into the same network hub: both can address the other one by its hostname.
is the hosts file updated with the correct container name / ip addresses ? Even after a docker restart ?
Yes, the /etc/hosts files of all containers that are part of a network will be updated live by the docker engine.
I can't see in the docs how a container can find the location of another in its network?
Using the container name. See the Connect containers section of the Work with network commands doc:
Once connected, the containers can communicate using another container’s IP address or name.
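In practice the migration away from links looks roughly like this (a sketch; the network and container names, and the myapp image, are examples):
$ docker network create appnet
$ docker run -d --name db --network appnet postgres
$ docker run -d --name app --network appnet myapp   # instead of --link db:db; the app reaches the database simply as "db"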
Also, compose looks to have a simple set up for linking containers, and may automate some of this - would compose be the way to go for defining multi container apps? Or is it too soon to run it in production?
Compose supports the new network feature as beta by offering the --x-networking option. You should not use it in production yet (the current Compose version is 1.5).
Furthermore, the current implementation is a bit inconvenient, as we must use the full container name, which is composed of the project name + _ + service name + _1 (e.g. myproject_web_1). The documentation says the next version (the current one is 1.5) will improve this so that we should not have to worry about the project name when addressing containers.
Does compose support multiple host configuration as well?
Yes, in conjunction with Swarm, as detailed in the overlay network documentation.

How to link Docker services across hosts?

Docker allows servers in multiple containers to connect to each other via links and service discovery. However, from what I can see, this service discovery is host-local. I would like to implement a service that uses other services hosted on a different machine.
There have been several approaches to solving this problem in Docker, such as CoreOS's jumpers, host-local services that essentially proxy to the other machine, and a whole bunch of github projects for managing Docker deployments that appear to have attempted to support this use-case.
Given the pace of development it is hard to follow what current best practices are. Therefore my question is essentially:
What (if any) is the current predominant method for linking across hosts in Docker, and
Are there any plans for supporting this functionality directly in the Docker system?
Update
Docker has recently announced a new tool called Swarm for Docker orchestration.
Swarm allows you to "join" multiple docker daemons: you first create a swarm, start a swarm manager on one machine, and have the docker daemons "join" the swarm manager using the swarm's identifier. The docker client connects to the swarm manager as if it were a regular docker server.
When a container is started with Swarm, it is automatically assigned to a free node that meets any constraints that have been defined. The following example is taken from the blog post:
$ docker run -d -P -e constraint:storage=ssd mysql
One of the supported constraints is "node", which allows you to pin a container to a specific hostname. The swarm also resolves links across nodes.
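For illustration, pinning with the node constraint looks roughly like this in classic Swarm (the node name is an example):
$ docker run -d -e constraint:node==node-1 nginx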
In my testing I got the impression that Swarm doesn't yet work with volumes at a fixed location very well (or at least the process of linking them is not very intuitive), so this is something to keep in mind.
Swarm is now in beta phase.
Until recently, the Ambassador Pattern was the only Docker-native approach to remote-host service discovery. This pattern can still be used and doesn't require any magic beyond plain Docker in that the pattern consists of one or more additional containers that act as proxies.
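As a sketch of the idea, the ambassador is just a forwarding container; here using socat (the alpine/socat image, names, and port are examples/assumptions):
# host A: run the real service and publish it on the host
$ docker run -d --name redis -p 6379:6379 redis
# host B: an ambassador that forwards a local container port to host A
$ docker run -d --name redis_ambassador --expose 6379 \
    alpine/socat tcp-listen:6379,fork,reuseaddr tcp-connect:<hostA_ip>:6379
# host B: clients link to the ambassador as if redis were local
$ docker run --rm --link redis_ambassador:redis redis redis-cli -h redis ping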
Additionally, there are several third-party extensions to make Docker cluster-capable. Third-party solutions include:
Connecting the Docker network bridges on two hosts, lightweight and various solutions exist, but generally with some caveats
DNS-based discovery e.g. with skydock and SkyDNS
Docker management tools such as Shipyard, and Docker orchestration tools. See this question for an extensive list: How to scale Docker containers in production
UPDATE 3
Libswarm has been renamed as swarm and is now a separate application.
Here is the demo from the GitHub page to use as a starting point:
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the other nodes can reach it, it is fine.
$ swarm join --token=6856663cdefdec325839a4b7e1de38e8 --addr=<node_ip:2375>
# start the manager on any machine or your laptop
$ swarm manage --token=6856663cdefdec325839a4b7e1de38e8 --addr=<swarm_ip:swarm_port>
# use the regular docker cli
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ...
$ docker -H <swarm_ip:swarm_port> ps
$ docker -H <swarm_ip:swarm_port> logs ...
...
# list nodes in your cluster
$ swarm list --token=6856663cdefdec325839a4b7e1de38e8
http://<node_ip:2375>
UPDATE 2
The official approach is now to use libswarm; see a demo here.
UPDATE
There is a nice gist for openvswitch-based host communication in docker using the same approach.
To allow service discovery there is an interesting approach based on DNS called skydock.
There is also a screencast.
There is also a nice article using the same pieces of the puzzle, but additionally adding VLANs on top:
http://fbevmware.blogspot.it/2013/12/coupling-docker-and-open-vswitch.html
The patching has nothing to do with the robustness of the solution. Docker is actually only a sort of DSL on top of Linux Containers, and both solutions in these articles simply bypass some of Docker's automatic settings and fall back directly to Linux Containers.
So you can use the solutions safely and wait to be able to do it in a simpler way once Docker implements it.
Weave is a new Docker virtual network technology that acts as a virtual Ethernet switch over TCP/UDP - all you need is a Docker container running Weave on your host.
What's interesting here is:
Instead of links, use static IPs/hostnames in your virtual network
Hosts don't need full connectivity; a mesh is formed based on which peers are available, and packets will be routed multi-hop to where they need to go
This leads to interesting scenarios like:
Create a virtual network across the WAN; none of the Docker containers will know or care what actual network they sit in
Move your containers to different physical docker hosts; Weave will detect the peers accordingly
For example, there's an example guide on how to create a multi-node Cassandra cluster across your laptop and a few cloud (EC2) hosts with two commands per host. I launched a CoreOS cluster with AWS CloudFormation, installed weave on each node in /home/core, plus on my laptop's vagrant docker VM, and got a cluster up in under an hour. My laptop is firewalled, but Weave seemed to be okay with that; it just connects out to its EC2 peers.
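A rough sketch of getting two hosts talking with Weave (the exact CLI has varied across Weave versions; names are examples):
host1$ weave launch
host2$ weave launch <host1_ip>          # peer with host1
host2$ eval $(weave env)                # point the docker client at Weave's proxy
host2$ docker run -d --name web nginx   # now reachable by name from Weave-attached containers on host1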
Update
Docker 1.12 contains the so-called swarm mode and also adds a service abstraction. These probably aren't mature enough for every use case, but I suggest you keep them under observation. Swarm mode at least helps in a multi-host setup, though that doesn't necessarily make linking easier. The Docker-internal DNS server (since 1.11) should help you address containers by name, as long as the names are well known - meaning that the generated names in a Swarm context won't be so easy to address.
With the Docker 1.9 release you get built-in multi-host networking. They also provide an example script to easily provision a working cluster.
You'll need a K/V store (e.g. Consul), which allows state to be shared across the different Docker engines on every host. Every Docker engine needs to be configured with that K/V store, and you can then use Swarm to connect your hosts.
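For reference, the per-engine configuration amounts to pointing each daemon at the store, roughly like this (a sketch; the Consul address and interface are examples, and in the 1.9 era the daemon was started via "docker daemon" rather than dockerd):
$ docker daemon --cluster-store=consul://<consul_ip>:8500 --cluster-advertise=eth0:2376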
Then you create a new overlay network like this:
$ docker network create --driver overlay my-network
Containers can now be run with the network name as run parameter:
$ docker run -itd --net=my-network busybox
They can also be connected to a network when already running:
$ docker network connect my-network my-container
More details are available in the documentation.
The following article describes nicely how to connect docker containers on multiple hosts: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/
It is possible to bridge several Docker subnets together using Open vSwitch or Tinc. I have prepared Gists to show how to do it:
Open vSwitch: https://gist.github.com/noteed/8656989
Tinc: https://gist.github.com/noteed/11031504
The advantage I see in using this solution instead of the --link option and the ambassador pattern is that I find it more transparent: there is no need for additional containers and, more importantly, no need to expose ports on the host. Actually, I think of the --link option as a temporary hack until Docker gets a nicer story about multi-host (or multi-daemon) setups.
Note: I know there is another answer pointing to my first Gist but I don't have enough karma to edit or comment on that answer.
As mentioned above, Weave is definitely a viable solution for linking Docker containers across hosts. Based on my own experience with it, it is fairly straightforward to set up. It now also has a DNS service, which lets you address containers by their DNS names.
On the other hand, there are CoreOS's Flannel and Juniper's OpenContrail for wiring containers across hosts.
It seems that docker swarm 1.14 allows you to:
assign a hostname to a container using the --hostname flag, but I haven't been able to make it work; containers are not able to ping each other by their assigned hostnames.
assign services to a machine using --constraint 'node.hostname == <host>'
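A rough sketch of combining the two with a swarm service (the node name and image are examples; flag availability depends on the Docker version):
$ docker service create --name web \
    --hostname web \
    --constraint 'node.hostname == worker-1' \
    nginx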
