Docker networks in Kubernetes/Rancher

I've been trying to convert my SimpleLogin Docker containers to Kubernetes using Rancher. However, one of the steps requires me to create a network.
sudo docker network create -d bridge \
--subnet=240.0.0.0/24 \
--gateway=240.0.0.1 \
sl-network
I couldn't really find a way to do this on Kubernetes/Rancher.
How do I set up an equivalent network like the above command in Kubernetes?
If you want more information about what this network should do you can find it here.

You don't. Kubernetes has its own network ecosystem, which mostly acts as though every Pod and Service is on the same network. You can't create separate subnets within it, and there's no way to create a separate network per logical application. You also can't control the IP range of networks in Kubernetes (and it shouldn't usually be necessary in Docker either).
Generally you can communicate between Kubernetes Pods by putting a Service in front of each, and then using the Service's DNS name as a host name. If all of the parts were running in the same Namespace, and the Service in front of the database were named sl-db, then the webapp Pod could use sl-db as the host name part of the DB_URI setting.
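For illustration, a minimal sketch of such a Service (the app: sl-db selector label is an assumption; it must match the labels on your database Deployment's Pods):
apiVersion: v1
kind: Service
metadata:
  name: sl-db
spec:
  selector:
    app: sl-db          # assumed label; match your database Pods
  ports:
    - port: 5432        # port clients connect to
      targetPort: 5432  # port the database container listens on
The webapp could then use something like postgresql://user:password@sl-db:5432/simplelogin as its DB_URI (the credentials and database name here are placeholders).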
Reading through the documentation you link to, you will probably need to do some extra work to get the Postfix MTA set up. Note that it looks like it runs outside of Docker in this setup; either you will have to port the setup to run inside Kubernetes or configure its mynetworks settings to include the network that contains the Kubernetes nodes. You will also need to set up Kubernetes ConfigMaps and Secrets to hold the various configuration files and certificates this setup needs.
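As a hedged sketch of that last point, a configuration file can be held in a ConfigMap and mounted into a Pod like this (the file name, mount path, and image are placeholders, not taken from the SimpleLogin docs):
apiVersion: v1
kind: ConfigMap
metadata:
  name: sl-config
data:
  simplelogin.env: |
    URL=http://app.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: sl-app
spec:
  containers:
    - name: app
      image: simplelogin/app        # placeholder image name
      volumeMounts:
        - name: config
          mountPath: /code/.env     # placeholder mount path
          subPath: simplelogin.env  # mount just this one file
  volumes:
    - name: config
      configMap:
        name: sl-config
Secrets work the same way and are the better fit for certificates and keys.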

Related

How to add docker container hostname and network in Kubernetes Deployment?

I have Docker images for the different parts of an ELK stack, and I want them to communicate with each other. I have achieved this by creating a Docker network and accessing the containers via hostname. I want to know whether we can pass these properties in Kubernetes or not.
Can we create a Docker network there? And how do we pass these properties inside the Deployment YAML?
I have created a Docker network named "elk", and then passed it in the run arguments (as docker run --network=elk -h elasticsearch ...)
I am expecting to create this network in the Kubernetes cluster and then pass these properties to the Deployment YAML.
Kubernetes does not have Docker's notion of separate per-application isolated networks. You can't reproduce this Docker setup in Kubernetes and don't need to. Also see Services, Load Balancing, and Networking in the Kubernetes documentation.
In Kubernetes you usually do not communicate directly with Pods (containers). Instead, you also create a Service matching each Deployment, and then make calls to the Service name and port.
If you're currently deploying containers with docker run --net=... then you can ignore that option when migrating to Kubernetes. If you're using Compose, I'd suggest first trying to update the Compose setup to use only the Compose-provided default network, removing all of the networks: blocks.
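As a minimal sketch of that suggestion (image tags are placeholders, and the ELASTICSEARCH_HOSTS variable follows Kibana 7.x conventions), a Compose file with no networks: blocks lets each service reach the other by its service name:
version: "2"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # placeholder tag
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0                 # placeholder tag
    environment:
      # "elasticsearch" resolves over the Compose default network;
      # no --network or -h options are needed
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"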
For something like Elasticsearch, you probably want to run it in a StatefulSet, which can also manage the per-replica storage. This has specific requirements around corresponding Services, and it does provide a way to connect to a specific replica when you need to. Relevant to this question: if the StatefulSet is named elasticsearch, then the Pods will be named elasticsearch-0, elasticsearch-1, and so on, and these names will also be visible as the hostname(8) inside the container, matching the docker run -h option.
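A sketch of what that looks like, assuming a single-container Elasticsearch image and two replicas (both are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  clusterIP: None          # headless Service: per-Pod DNS names, no single virtual IP
  selector:
    app: elasticsearch
  ports:
    - port: 9200
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch    # links Pod DNS names to the headless Service
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # placeholder tag
With this in place, other Pods in the same Namespace can reach a specific replica at elasticsearch-0.elasticsearch.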

Set Docker Container Iptable Rules

I have a Linux server that kicks off a variety of Docker containers on boot. I'd like to implement firewall rules for those containers that are specific to each container. The containers connect to servers with known static IPs and ports. I've considered creating the rules inside the container's network namespace using netns, but the namespace is created via CNI, and I would like the rules to be applied when the namespace is created rather than configuring the firewall of an already created container.
Most of what I can find about setting up iptables via CNI seems to refer to configuring the host firewall. Does Docker have some way of defining container-specific firewall rules that get applied when a container is run? I'd prefer not to use 3rd-party products to accomplish this.
I made a tool a year ago for the same reason. There is no documentation, but if you run it with -h you will get a good indication of how it works. You can use it as a service on Linux, with a configuration file and your rules in YAML. I hope it is still up to date with the Docker API (I just made it public). You can look at the code; it's pretty simple.
here: https://github.com/tr4cks/docker-netns
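For reference, a rough sketch of what such a tool does under the hood: look up the container's main process and apply iptables rules inside its network namespace (<container> and the addresses are placeholders):
# find the container's main process
PID=$(docker inspect --format '{{.State.Pid}}' <container>)
# allow traffic to one known server, drop everything else (example policy)
nsenter --target "$PID" --net iptables -A OUTPUT -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT
nsenter --target "$PID" --net iptables -A OUTPUT -j DROP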

How to access local machine from a pod

I have a pod created on the local machine. I also have a script file on the local machine. I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
That script will update /etc/hosts of another pod. Is there a way I can update the /etc/hosts of one pod from another pod? The pods are created from two different Deployments.
I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
You can't do that. In a plain Docker context, one of Docker's key benefits is filesystem isolation, so the container can't see the host's filesystem at all unless parts of it are explicitly published into the container. In Kubernetes not only is there this restriction, but you also have limited control over which node you're running on, and there's potential trouble if one node has a given script and another doesn't.
Is there a way I can update the /etc/hosts of one pod from another pod?
As a general rule, you should avoid using /etc/hosts for anything. Setting up a DNS service keeps things consistent and avoids having to manually edit files in a bunch of places.
Kubernetes provides a DNS service for you. In particular, if you define a Service, then the name of that Service will be visible as a DNS name (within the cluster); one pod can reach the other via first-service-name.default.svc.cluster.local. That's probably the answer you're actually looking for.
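You can verify the name resolves from any other Pod; a quick way (the Service name here is the placeholder from above, and busybox is just a convenient image):
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup first-service-name.default.svc.cluster.local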
(If you really only have a single-node environment then Kubernetes adds a lot of complexity and not much benefit; consider plain Docker and Docker Compose instead.)
As an addition to David's answer, you can copy a script from your host to a Pod using kubectl cp:
kubectl cp [file-path] [pod-name]:/[path]
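For example (the Pod name and paths are placeholders):
kubectl cp ./update-hosts.sh my-pod:/tmp/update-hosts.sh
kubectl exec my-pod -- sh /tmp/update-hosts.sh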
About your question in the comment: you can do it by exposing a Deployment:
kubectl expose deployment/name
which will result in creating a Service. You can find more practical examples and approaches in this section.
Thus, after your specific Pod terminates, you can still reach the new Pods through the same Service and port. You can find more details here.
In the example from documentation you can see that nginx Pod has been created with a container port 80 and the expose command will have following effect:
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View the Service API object to see the list of supported fields in the service definition.
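For comparison, the Service generated by the expose command is roughly equivalent to this manifest (a sketch; the values follow the quoted documentation):
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    run: my-nginx        # targets Pods carrying this label
  ports:
    - port: 80           # abstracted Service port other Pods use
      targetPort: 80     # port the container accepts traffic on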
Other than that, it seems like David provided a really good explanation here; a good next step would be finding out more about FQDNs and DNS, which also ties in with Services.

Kubernetes - How to send data from a pod to another pod in Kubernetes

In Docker, I had two containers, Mosquitto and userInfo.
userInfo is a container which performs some logic and then sends the result to the Mosquitto container. The Mosquitto container then uses this information to send it to an IoT hub. To start these containers in Docker, I created a network and started both containers in the same network, so I can easily use the hostname of the Mosquitto container inside the userInfo container to send data. I need to do the same in Kubernetes.
So in Kubernetes, I deployed Mosquitto so its Pod was created, then I created its Service and used it inside the userInfo Pod to send data to Mosquitto. But this is not working.
I created the service by using
kubectl expose deployment mosquitto
I need to send data from userInfo to Mosquitto.
How can I achieve this?
Do I need to create a network as I was doing in Docker, or is there another way?
I also tried creating a pod with two containers, i.e. mosquitto & userInfo, but this was also not working.
Thanks
A Kubernetes pod may contain multiple containers. People generally run multiple containers in a pod when the two containers are tightly coupled, and it sounds like this is what you're looking for. These containers are guaranteed to be hosted on the same machine (they can contact each other via localhost), share the same port space, and can also use the same volumes.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod
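A minimal sketch of that idea for this case (image names are placeholders; 1883 is the standard MQTT port, so adjust it to your broker's configuration):
apiVersion: v1
kind: Pod
metadata:
  name: mqtt-demo
spec:
  containers:
    - name: mosquitto
      image: eclipse-mosquitto        # placeholder image
      ports:
        - containerPort: 1883
    - name: userinfo
      image: userinfo:latest          # placeholder for your application image
      # this container can reach the broker at localhost:1883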
Two containers, same Pod
If you are interested in communication between two containers belonging to the same Pod, there is a guide in the official documentation showing how to achieve this through shared volumes.
The primary reason that Pods can have multiple containers is to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers, and proxies. Helper and primary applications often need to communicate with each other. Typically this is done through a shared filesystem, as shown in this exercise, or through the loopback network interface, localhost.
Try to avoid placing two containers in the same Pod if you do not need to. Additional information can be found here: Multi-container pods and container communication in Kubernetes.
Two containers, two Pods
In this case (and you can do the same for the previous case) the best way to proceed is to expose the container's listening process through a Service.
This way you will always be able to rely on the same IP or DNS name (which you will be able to resolve only from within the cluster) and port.
For example, if you have a Service called "my-service" in the Kubernetes Namespace "my-ns", a DNS record for "my-service.my-ns" is created.
The networking part is managed by Kubernetes, so in a basic configuration you will not need to do anything beyond specifying, when creating the Service, the target port of the container and the port that clients should use; the mapping is automatic.
Once you have exposed a port and an IP, how you implement the communication and data transfer is no longer a Kubernetes question. You can implement it through a web server serving static content, through FTP, with a script sending SCP commands; there are basically infinite ways to do it.
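To make that concrete for this question, kubectl expose deployment mosquitto generates a Service along these lines (a sketch; the selector must match your Deployment's Pod labels, and 1883 is the standard MQTT port):
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  selector:
    app: mosquitto       # assumed label; match your mosquitto Pods
  ports:
    - port: 1883
      targetPort: 1883
The userInfo Pod can then send data to the host name mosquitto on port 1883.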

How to link Docker services across hosts?

Docker allows servers in multiple containers to connect to each other via links and service discovery. However, from what I can see, this service discovery is host-local. I would like to implement a service that uses other services hosted on a different machine.
There have been several approaches to solving this problem in Docker, such as CoreOS's jumpers, host-local services that essentially proxy to the other machine, and a whole bunch of github projects for managing Docker deployments that appear to have attempted to support this use-case.
Given the pace of development it is hard to follow what current best practices are. Therefore my question is essentially:
What (if any) is the current predominant method for linking across hosts in Docker, and
Are there any plans for supporting this functionality directly in the Docker system?
Update
Docker has recently announced a new tool called Swarm for Docker orchestration.
Swarm allows you to "join" multiple Docker daemons: you first create a swarm, start a swarm manager on one machine, and have Docker daemons "join" the swarm manager using the swarm's identifier. The Docker client connects to the swarm manager as if it were a regular Docker server.
When a container is started with Swarm, it is automatically assigned to a free node that meets any constraints that have been defined. The following example is taken from the blog post:
$ docker run -d -P -e constraint:storage=ssd mysql
One of the supported constraints is "node", which allows you to pin a container to a specific hostname. Swarm also resolves links across nodes.
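For example, pinning a container to a host might look like this (a hedged example in the legacy Swarm constraint syntax; the hostname is a placeholder):
$ docker run -d -e constraint:node==node-1 nginx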
In my testing I got the impression that Swarm doesn't yet work with volumes at a fixed location very well (or at least the process of linking them is not very intuitive), so this is something to keep in mind.
Swarm is now in beta phase.
Until recently, the Ambassador Pattern was the only Docker-native approach to remote-host service discovery. This pattern can still be used and doesn't require any magic beyond plain Docker in that the pattern consists of one or more additional containers that act as proxies.
Additionally, there are several third-party extensions to make Docker cluster-capable. Third-party solutions include:
Connecting the Docker network bridges on two hosts: lightweight and varied solutions exist, but generally with some caveats
DNS-based discovery e.g. with skydock and SkyDNS
Docker management tools such as Shipyard, and Docker orchestration tools. See this question for an extensive list: How to scale Docker containers in production
UPDATE 3
Libswarm has been renamed as swarm and is now a separate application.
Here is the github page demo to use as a starting point:
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the other nodes can reach it, it is fine.
$ swarm join --token=6856663cdefdec325839a4b7e1de38e8 --addr=<node_ip:2375>
# start the manager on any machine or your laptop
$ swarm manage --token=6856663cdefdec325839a4b7e1de38e8 --addr=<swarm_ip:swarm_port>
# use the regular docker cli
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ...
$ docker -H <swarm_ip:swarm_port> ps
$ docker -H <swarm_ip:swarm_port> logs ...
...
# list nodes in your cluster
$ swarm list --token=6856663cdefdec325839a4b7e1de38e8
http://<node_ip:2375>
UPDATE 2
The official approach is now to use libswarm; see a demo here.
UPDATE
There is a nice gist for Open vSwitch-based host-to-host communication in Docker using the same approach.
To allow service discovery there is an interesting approach based on DNS called skydock.
There is also a screencast.
Here is also a nice article using the same pieces of the puzzle, but also adding VLANs on top:
http://fbevmware.blogspot.it/2013/12/coupling-docker-and-open-vswitch.html
The patching has nothing to do with the robustness of the solution. Docker is really only a sort of DSL on top of Linux containers, and both solutions in these articles simply bypass some of Docker's automatic settings and fall back directly to Linux containers.
So you can use these solutions safely and wait to do it in a simpler way once Docker implements it.
Weave is a new Docker virtual network technology that acts as a virtual ethernet switch over TCP/UDP - all you need is a Docker container running Weave on your host.
What's interesting here is
Instead of links, use static IPs/hostnames in your virtual network
Hosts don't need full connectivity, a mesh is formed based on what peers are available, and packets will be routed multi-hop to where they need to go
This leads to interesting scenarios like:
Create a virtual network across the WAN; none of the Docker containers will know or care what actual network they sit in
Move your containers to different physical Docker hosts; Weave will detect the peers accordingly
For example, there's a guide on how to create a multi-node Cassandra cluster across your laptop and a few cloud (EC2) hosts with two commands per host. I launched a CoreOS cluster with AWS CloudFormation, installed Weave on each node in /home/core, plus on my laptop's Vagrant Docker VM, and got a cluster up in under an hour. My laptop is firewalled, but Weave seemed to be okay with that; it just connects out to its EC2 peers.
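As a rough sketch of the per-host setup (based on the 2014-era Weave CLI; the IP addresses are placeholders and the exact commands may differ between Weave versions):
host1$ weave launch
host2$ weave launch <host1-ip>
# start containers attached to the Weave network
host1$ weave run 10.2.1.1/24 -ti ubuntu
host2$ weave run 10.2.1.2/24 -ti ubuntu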
Update
Docker 1.12 contains the so-called swarm mode and also adds a service abstraction. These probably aren't mature enough for every use case, but I suggest you keep them under observation. Swarm mode at least helps in multi-host setups, though it doesn't necessarily make linking easier. The Docker-internal DNS server (since 1.11) should help you access container names if they are well known; the generated names in a Swarm context won't be so easy to address.
With the Docker 1.9 release you get built-in multi-host networking. They also provide an example script to easily provision a working cluster.
You'll need a K/V store (e.g. Consul) to share state across the different Docker engines on every host. Every Docker engine needs to be configured with that K/V store, and you can then use Swarm to connect your hosts.
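A hedged sketch of that daemon configuration (Docker 1.9-era flags; the Consul address and interface are placeholders):
$ docker daemon \
    --cluster-store=consul://consul.example.com:8500 \
    --cluster-advertise=eth0:2376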
Then you create a new overlay network like this:
$ docker network create --driver overlay my-network
Containers can now be run with the network name as run parameter:
$ docker run -itd --net=my-network busybox
They can also be connected to a network when already running:
$ docker network connect my-network my-container
More details are available in the documentation.
The following article describes nicely how to connect docker containers on multiple hosts: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/
It is possible to bridge several Docker subnets together using Open vSwitch or Tinc. I have prepared Gists to show how to do it:
Open vSwitch: https://gist.github.com/noteed/8656989
Tinc: https://gist.github.com/noteed/11031504
The advantage I see in using this solution instead of the --link option and the ambassador pattern is that I find it more transparent: there is no need for additional containers and, more importantly, no need to expose ports on the host. Actually, I think of the --link option as a temporary hack until Docker gets a nicer story for multi-host (or multi-daemon) setups.
Note: I know there is another answer pointing to my first Gist but I don't have enough karma to edit or comment on that answer.
As mentioned above, Weave is definitely a viable solution to link Docker containers across hosts. Based on my own experience with it, it is fairly straightforward to set up. It now also has a DNS service, so you can address containers by their DNS names.
On the other hand, there are CoreOS's Flannel and Juniper's OpenContrail for wiring containers across hosts.
Seems like Docker Swarm 1.14 allows you to:
assign a hostname to a container using the --hostname flag, but I haven't been able to make it work; containers are not able to ping each other by their assigned hostnames.
assign services to machines using --constraint 'node.hostname == <host>'
