Kubernetes pod application connectivity with a MySQL database that is not in a container (Docker)

Can I connect a Kubernetes pod to a non-containerized application? My Kubernetes pods are running on the 10.200.x.x subnet and my MySQL server is running on a plain Linux server, not in a container.
How can I connect to the database?
I work in an organization with many network restrictions, and I have to request ports and IPs to be opened.
Is it possible to connect a containerized application to a non-containerized database, given that the subnets and subnet masks are different too?

If you can reach MySQL from the worker node, then you should also be able to reach it from a pod running on that node.
Check your company firewall and make sure that packets from the worker nodes can reach the instance running MySQL. Also make sure that these networks are not separated in some other way.
Usually packets sent from your application pod to the MySQL instance will have their source IP set to the worker node's IP (so you want to allow traffic from the Kubernetes nodes to the MySQL instance).
This is because the Kubernetes network (with most CNIs) is a sort of virtual network that only the Kubernetes nodes are aware of, and for external traffic to be able to come back to the pod, the routers in your network need to know where to route it. This is why pod traffic going outside the Kubernetes network is NATed.
This is true for most CNIs that encapsulate internal traffic in Kubernetes, but remember that there are also CNIs that don't encapsulate traffic, which makes it possible to access pods directly from anywhere inside a private network and not only from the Kubernetes nodes (e.g. Azure CNI).
In the first case, with a NATed network, make sure that you allow access to the MySQL instance from all worker nodes, not just one, because when that specific node goes down and the pod gets rescheduled to another node, it won't be able to connect to the database.
In the second case, where you are using a CNI with direct networking (without NAT), it's more complicated, because when a pod gets rescheduled it gets a different IP every time; how to handle that depends on the specific CNI.
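On the Kubernetes side, one common way to give pods a stable in-cluster name for an external database is a selector-less Service backed by a manually defined Endpoints object. A minimal sketch, where the name external-mysql and the address 10.100.0.5 are placeholders for your actual names and MySQL host IP:

    apiVersion: v1
    kind: Service
    metadata:
      name: external-mysql        # hypothetical name
    spec:
      ports:
        - port: 3306              # MySQL's default port
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-mysql        # must match the Service name
    subsets:
      - addresses:
          - ip: 10.100.0.5        # placeholder - use your MySQL server's real IP
        ports:
          - port: 3306

Pods can then connect to external-mysql:3306 (or external-mysql.<namespace>.svc.cluster.local), and the firewall work reduces to allowing the worker nodes to reach the MySQL host on port 3306.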

Related

Deploying couchbase in a docker swarm environment

I'm trying to deploy Couchbase Community Edition in a Docker Swarm environment. I followed the steps suggested by Arun Gupta, though I'm not sure if a master-worker model is desired, as Couchbase doesn't have the notion of a master/slave model.
The following are the problems I encountered. I'm wondering if anyone is able to run Couchbase successfully in swarm mode.
Docker Swarm assigns a different IP address each time the service is restarted. Sometimes Docker moves the service to a new node, which again assigns a different IP address. It appears that Couchbase doesn't start if it finds a new IP address (the log says "address on which the service is configured is not up. Waiting for the interface to be brought up"). I'm using a host-mounted volume as the data folder (/opt/couchbase/var) to persist data across restarts.
I tried to read the overlay network address used internally and update the ip and ip_start files in a run script within the container. This doesn't help either; the server comes up as a new instance without loading the old data. This is a real problem, as production data can be lost if Docker Swarm moves services around.
Docker Swarm's internal router assigns an address from the overlay network in addition to other interfaces. I tried using localhost, master.overlaynet, the IP address of the overlay network, the private address assigned by Docker to the container, etc. as the server address in the Couchbase cluster configuration. While the cluster servers are able to communicate with each other, this created another problem with client connections. A client normally connects to an address/port exposed by the Swarm cluster, which is different from the cluster node address. In the case of a Python client, it reads the Couchbase cluster server addresses and tries to connect to them; if the overlay address is given as the server address at the time of joining the cluster, the client times out because the address is not reachable.
I might be able to add a network address constraint to the YAML file to ensure that the master node will come up with the same address. For example:
    networks:
      default:
        ipv4_address: 172.20.x.xx
The above approach may not work for worker nodes, as it will impact the ability to scale worker nodes based on load/growth.
In this model (master/worker), how does a worker get elected as leader if the master node goes down? Is master/worker the right approach for a Couchbase cluster in a Swarm environment?
It would be helpful if I could get some references to a Couchbase Swarm mode setup, or some suggestions on how to handle the IP address change.
We ran into the same problem (Couchbase Server 5.1.1) and our temporary solution is to use fixed IPs on a new Docker bridge network.
    networks:
      default:
        ipv4_address: 172.19.0.x
Although this works, it is not a good solution, as we lose auto-scaling as mentioned above. We had some learnings during setup. Just to let you know:
You can run a single-node Couchbase setup with a dynamic IP. You can stop/restart this container and update the couchbase-server version with no limitations.
When you add a second node, this initially works with a dynamic IP as well during setup. You can add the server and rebalance the cluster. But when you stop/restart/scale 0/1 a Couchbase container, it won't start up anymore due to the new IP provided by Docker (10.0.0.x with the default network).
Changing the "ip" or "ip_start" files (/opt/couchbase/var/lib/couchbase/config) to update the IP does NOT work. The server starts up as a "new" server when you change the IP in "ip" and "ip_start", but it still has all the data, so you can back up your data if you need to at that point. So even after you have "switched" to a fixed IP you can't restart the server directly; you need to cbbackup and cbrestore.
The documentation for using hostnames (https://docs.couchbase.com/server/5.1/install/hostnames.html) is a little misleading, as it only documents how to "find" a new server while configuring a cluster. If you specify hostnames, Couchbase still configures all nodes with the static IPs.
Starting your Docker Swarm with the host network might be a solution, but we run multiple instances of other containers on a single host, so we would like to avoid that.
So always have a backup of the node/cluster. We always make a file backup and a cluster backup with cbbackup, as restoring from a file backup is much faster.
There is a discussion of this issue at https://github.com/couchbase/docker/issues/82, but it involves using AWS for static IPs, which we don't use.
I am aware of the Couchbase Autonomous Operator for Kubernetes, but for now we would like to stay with Docker Swarm. If anybody has a nicer solution for this, e.g. how to configure Couchbase to use hostnames, please share.
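For reference, here is a minimal sketch of the fixed-IP workaround described above, written as a plain docker-compose file on a custom bridge network (as we read it, this runs outside Swarm services, which generally do not honour a fixed ipv4_address). The subnet, address, image tag and volume path are assumptions to adapt:

    version: "2.4"
    services:
      couchbase:
        image: couchbase:community-5.1.1        # assumed tag - use the version you actually run
        volumes:
          - ./couchbase-data:/opt/couchbase/var # host-mounted data folder for persistence
        networks:
          cbnet:
            ipv4_address: 172.19.0.10           # fixed address on the custom bridge
    networks:
      cbnet:
        driver: bridge
        ipam:
          config:
            - subnet: 172.19.0.0/24

With the address pinned, stop/start cycles no longer hand the container a new IP, at the cost of the scaling limitations already mentioned.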

How does a Kubernetes pod get an IP instead of the container, given that the CNI plugin works at the container level?

How does a Kubernetes pod get an IP instead of the container, given that the CNI plugin works at the container level?
How do all containers of the same pod share the same network stack?
Containers use a kernel feature called virtual network interfaces. A virtual network interface (let's name it veth0) is created and then assigned to a namespace. When a container is created, it is also assigned to a namespace; when multiple containers are created within the same namespace, only a single network interface, veth0, is created.
A pod is just the term used to specify a set of resources and features, one of them being the namespace and the containers running in it.
When you say the pod gets an IP, what actually gets the IP is the veth0 interface; container apps see veth0 the same way applications outside a container see a single physical network card on a server.
CNI is just the technical specification of how this should work so that multiple network plugins can work without changes to the platform. The process above should be the same for all network plugins.
There is a nice explanation in this blog post.
It's kube-proxy that makes everything work. One pod has one proxy which translates all the ports over one IP for the remaining containers. Only in specific cases is it recommended to have multiple containers in the same pod; it's not preferred, but it's possible. This is why they are called "tightly" coupled. Please refer to: https://kubernetes.io/docs/concepts/cluster-administration/proxies/
Firstly, let's dig deeper into the CNI aspect. In production systems, workload/pod network isolation is a first-class security requirement (a workload can be thought of as one or many containerized applications used to fulfill a certain function). Moreover, depending on how the infrastructure is set up, the routing plane might also need to be an attribute of either the workload (kubectl proxy), the host-level proxy (kube-proxy), or the central routing plane (apiserver proxy) that the host-level proxy exposes a gateway for.
For both service discovery and actually sending requests from a workload/pod, you don't want individual application developers to talk to the apiserver proxy, since it may incur overhead. Instead, you want them to communicate with other applications via either the kubectl or kube proxy, with those layers being responsible for knowing when and how to communicate with the apiserver plane.
Therefore, when spinning up a new workload, the kubelet can be passed --network-plugin=cni and a path to a configuration telling the kubelet how to set up the virtual network interface for this workload/pod.
For example, if you don't want the application containers in a pod to be able to talk to the host-level kube-proxy directly, because you want to do some infrastructure-specific monitoring, your CNI and workload configuration would be:
monitoring at the outermost container
the outermost container creates a virtual network interface for every other container in the pod
the outermost container is on a bridge interface (also a private virtual network interface) that can talk to the kube-proxy on the host
The IP that the pod gets is there to allow other workloads to send bytes to this pod via its bridge interface, since fundamentally other parties should be talking to the pod, not the individual work units inside the pod.
There is a special container called the 'pause container' that holds the network namespace for the pod. It does not do anything; its container process just goes to sleep.
Kubernetes creates one pause container for each pod to acquire that pod's IP address and set up the network namespace for all the other containers that are part of the pod. All containers in a pod can reach each other using localhost.
This means that your 'application' container can die and come back to life, and all of the network setup will still be intact.
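As an illustration of the shared network namespace, here is a minimal pod sketch (image names, container names and the port are placeholders) in which one container reaches the other over localhost:

    apiVersion: v1
    kind: Pod
    metadata:
      name: localhost-demo
    spec:
      containers:
        - name: web
          image: nginx:alpine              # serves on port 80 inside the shared namespace
        - name: client
          image: curlimages/curl
          # both containers share the pod's network namespace (held by the pause
          # container), so "localhost" here is the nginx container
          command: ["sh", "-c", "sleep 5; while true; do curl -s http://localhost:80 > /dev/null && echo reachable; sleep 30; done"]

Both containers report the same pod IP, and the client reaches nginx without any Service or extra networking.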

Kubernetes multi servers communication

I have a question regarding Kubernetes networking.
I know that in Docker Swarm, if I want to run different containers on different servers, I need to create an overlay network; then all the containers (from all the servers) will be attached to this network and they can communicate with each other (for example, I can ping from container A to container B).
I guess that in Kubernetes there isn't an overlay network - but another solution. For example, I would like to create 2 Linux containers on 2 servers (server 1: Ubuntu, server 2: CentOS 7), so how do the pods communicate with each other if there isn't an overlay network?
And another doubt - can I create a cluster which consists of Windows and Linux machines with Kubernetes? I mean, a multi-platform Kubernetes in which all the pods communicate with each other.
Thanks a lot!!
In Kubernetes, pods communicate with each other through Services. To access any pod within the cluster, it must be exposed using a ClusterIP Service. If you created the Service before creating the pods, you will have environment variables for each available Service inside the container. Using those, you can ping or access the Services and, in turn, the pods.
For example:
Suppose you have two pods U1 and C1 which are exposed by Services named U-SVC and C-SVC respectively.
So if you want to access C1 from U1, you will have the C-SVC Service environment variables (C_SVC_SERVICE_HOST, C_SVC_SERVICE_PORT; Kubernetes upper-cases the Service name and converts dashes to underscores) inside the container, which you can use for access.
Also, if a DNS server is set up for your cluster, you can access Services without the environment variables.
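A minimal sketch of such a ClusterIP Service for the hypothetical pod C1 (the name, label selector and port are assumptions; note that real Service names must be lower case, so "C-SVC" would actually be created as something like c-svc):

    apiVersion: v1
    kind: Service
    metadata:
      name: c-svc                 # hypothetical lower-case Service name
    spec:
      type: ClusterIP
      selector:
        app: c1                   # assumed label on the C1 pod
      ports:
        - port: 8080              # assumed application port
          targetPort: 8080

Pods created after this Service exists will see C_SVC_SERVICE_HOST and C_SVC_SERVICE_PORT, and with cluster DNS the name c-svc (or c-svc.<namespace>) resolves regardless of creation order.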

Access a VM in the same network as the nodes of my cluster from a pod

I have a Kubernetes cluster with some nodes, and a VM in the same network as the nodes. I need to execute a command via SSH from one of my pods on this VM. Is it even possible?
I do not control the cluster or the VM, I just have access to them.
Well, this is a network-level issue. When you have a Kubernetes cluster on the same network as your target, there is a potential issue that might or might not show up: the origin IP of the TCP connection. If your nodes MASQ/SNAT all of the outgoing traffic, then you are fine, but... for a VM in the same domain as the kube nodes, it might actually be excluded from that MASQ/SNAT. The reason is that kube nodes do know how to route traffic based on pod IP, because they have the overlay networking installed (Flannel, Calico, Weave, etc.).
To round this up, you need to either have the traffic to your destination node MASQ/SNATed at some point, or the target node has to be able to route traffic back to your pod, which usually means it needs the overlay networking installed (with the exception of setups implemented at a higher networking level than the nodes themselves, e.g. AWS VPC routing tables).

Kubernetes-How to send data from a pod to another pod in kubernetes

In Docker, I had two containers: Mosquitto and userInfo.
userInfo is a container which performs some logic and then sends the result to the Mosquitto container. The Mosquitto container then uses this information to send it to an IoT hub. To start these containers in Docker, I created a network and started both containers in the same network, so I can easily use the hostname of the Mosquitto container inside the userInfo container to send data. I need to do the same in Kubernetes.
So in Kubernetes, I deployed Mosquitto so its pod was created, then I created its Service and used it inside the userInfo pod to send data to Mosquitto. But this is not working.
I created the service by using
    kubectl expose deployment mosquitto
I need to send data from userInfo to Mosquitto.
How can I achieve this?
Do I need to create a network as I was doing in Docker, or is there another way?
I also tried creating a pod with two containers, i.e. mosquitto & userInfo, but this was also not working.
Thanks
A Kubernetes pod may contain multiple containers. People generally run multiple containers in a pod when the two containers are tightly coupled, and it sounds like this is what you're looking for. These containers are guaranteed to be hosted on the same machine (they can contact each other via localhost), share the same port space, and can also use the same volumes.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod
Two containers, same pod
If you are interested in the communication between two containers belonging to the same pod, there is a guide in the official documentation showing how to achieve this through shared volumes.
The primary reason that Pods can have multiple containers is to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers, and proxies. Helper and primary applications often need to communicate with each other. Typically this is done through a shared filesystem, as shown in this exercise, or through the loopback network interface, localhost.
Try to avoid placing two containers in the same pod if you do not need to. Additional information can be found here: Multi-container pods and container communication in Kubernetes. A sketch of the shared-volume pattern follows below.
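A minimal sketch of the shared-volume pattern mentioned above, with two placeholder busybox containers exchanging data through an emptyDir volume:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-volume-demo
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}              # scratch volume shared by both containers
      containers:
        - name: producer
          image: busybox
          # writes a timestamp into the shared volume every 5 seconds
          command: ["sh", "-c", "while true; do date > /data/out.txt; sleep 5; done"]
          volumeMounts:
            - name: shared-data
              mountPath: /data
        - name: consumer
          image: busybox
          # reads whatever the producer last wrote
          command: ["sh", "-c", "while true; do cat /data/out.txt 2>/dev/null; sleep 5; done"]
          volumeMounts:
            - name: shared-data
              mountPath: /data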
Two containers, two pods
In this case (you can do the same for the previous case as well), the best way to proceed is to expose the listening process of the container through a Service.
In this way you will always be able to rely on the very same IP or domain name (which you will be able to resolve internally) and port.
For example, if you have a Service called "my-service" in the Kubernetes namespace "my-ns", a DNS record for "my-service.my-ns" is created.
The network part is managed by Kubernetes, so you will not need to do anything (in the basic configurations); when creating the Service you merely specify the target port of the container and the port that the client should use, and the mapping is automatic.
Then, once you have exposed a port and an IP, how you implement the communication and the data transfer is no longer a Kubernetes question. You can implement it through a web server serving static content, through FTP, with a script sending SCP commands - basically there are infinite ways to do it.
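Applied to the original question, a minimal Service sketch for the Mosquitto deployment (the label selector and the MQTT port 1883 are assumptions; kubectl expose deployment mosquitto is meant to generate something equivalent from the deployment's own labels and ports):

    apiVersion: v1
    kind: Service
    metadata:
      name: mosquitto
    spec:
      selector:
        app: mosquitto            # assumed label on the Mosquitto pods
      ports:
        - port: 1883              # standard MQTT port - adjust to your broker's config
          targetPort: 1883

The userInfo pod can then publish to mosquitto:1883 (or mosquitto.<namespace>.svc.cluster.local), which plays the role the shared Docker network hostname played before.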
