API for service-to-service communication in Kubernetes

I'm a K8s newbie building two services, service A and service B, in a Kubernetes cluster. Service A needs to call service B as part of service A's code. What are my options for the API that service B should expose? Is a REST API, e.g., one built using Python Flask, typically used for this kind of service-to-service communication within a K8s cluster? What other options do I have?
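Plain HTTP between Services is the common starting point. Below is a minimal sketch of that approach; the Service name service-b, the port, and the /status route are all hypothetical, and it assumes service B's pods sit behind a ClusterIP Service named service-b:

```python
# Service B: a minimal Flask app (hypothetical names, port, and route).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/status")
def status():
    return jsonify(status="ok")

if __name__ == "__main__":
    # Bind to all interfaces so the pod is reachable inside the cluster.
    app.run(host="0.0.0.0", port=5000)
```

Service A can then call it through the cluster's DNS:

```python
# Service A side: the name "service-b" resolves via the cluster DNS
# (kube-dns/CoreDNS) to the Service's ClusterIP.
import json
import urllib.request

with urllib.request.urlopen("http://service-b:5000/status", timeout=5) as resp:
    print(json.load(resp))
```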

Related

service discovery in microservices Vs service discovery in docker

I am a little confused about the difference between these two.
Docker Swarm provides service discovery for the services that run in it.
In a microservice architecture, each microservice runs in its own container. Do I need a separate service discovery mechanism, such as one provided by an API gateway or a framework like Eureka or ZooKeeper?
Is there any added advantage to using a dedicated service discovery framework over what Docker Swarm provides?
Do I need a separate service discovery mechanism, such as one provided by an API gateway or a framework like Eureka or ZooKeeper?
If your microservices are deployed as Docker Swarm services within the same swarm, you don't need an additional service discovery mechanism.
Each Docker service can reach another by its service name.
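As a rough illustration (the service name billing, its port, and the /health route are hypothetical), any task on the same overlay network can call another swarm service by name:

```python
# Runs inside any container attached to the same swarm overlay network.
# "billing" is a hypothetical swarm service name; Docker's embedded DNS
# resolves it to a virtual IP that load-balances across the service's tasks.
import urllib.request

with urllib.request.urlopen("http://billing:8080/health", timeout=5) as resp:
    print(resp.status, resp.read().decode())
```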

Kubernetes multi servers communication

I have a question regarding Kubernetes networking.
I know that in Docker Swarm, if I want to run different containers on different servers, I need to create an overlay network; then all the containers (from all the servers) are attached to this network and can communicate with each other (for example, I can ping container B from container A).
I guess that in Kubernetes there isn't an overlay network but some other solution. For example, I would like to create 2 Linux containers on 2 servers (server 1: Ubuntu, server 2: CentOS 7), so how do the pods communicate with each other if there isn't an overlay network?
And another doubt: can I create a cluster which consists of Windows and Linux machines with Kubernetes? I mean, a multi-platform Kubernetes cluster in which all the pods communicate with each other.
Thanks a lot!!
In Kubernetes, pods typically communicate with each other through Services. To reach a pod from elsewhere in the cluster, expose it using a ClusterIP Service. If you create the Service before creating the pods, each container will have environment variables for every Service available at that time, which you can use to reach the Services and, in turn, the pods.
For example:
Suppose you have two pods, U1 and C1, exposed by Services named u-svc and c-svc respectively.
If you want to access C1 from U1, the U1 container will have the c-svc Service's environment variables (C_SVC_SERVICE_HOST, C_SVC_SERVICE_PORT), which you can use for access. (Kubernetes upper-cases the Service name and converts dashes to underscores.)
Also, if a DNS server is set up for your cluster, you can reach Services by name without the environment variables.
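A small sketch of the environment-variable route, assuming the Service is named c-svc and was created before U1's pod started (the request path is hypothetical):

```python
# Inside U1's container: build the URL for c-svc from the environment
# variables Kubernetes injected at pod start-up.
import os
import urllib.request

host = os.environ["C_SVC_SERVICE_HOST"]  # ClusterIP of the c-svc Service
port = os.environ["C_SVC_SERVICE_PORT"]  # port the Service listens on

with urllib.request.urlopen(f"http://{host}:{port}/", timeout=5) as resp:
    print(resp.status)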

Docker Swarm - Is it possible to restrict network access workers?

We built a Docker Swarm cluster spanning several cloud providers.
Everything works, but we have new constraints and need to restrict network communication between the cloud providers.
Is it possible to build a Docker Swarm cluster with "local load balancing"? What I mean by this question is: is it possible to use:
- one cloud provider for Swarm managers, with network access to Swarm workers;
- two cloud providers for Swarm workers, with network access to the Swarm managers, but no network access between these cloud providers?
In that case, would load balancing still work if someone sends a web request to one of the workers?
Please find below a drawing of the targeted architecture.

service discovery in docker without using consul

I'm new to docker and microservices. I've started to decompose my web-app into microservices and currently, I'm doing manual configuration.
After some study, I came across docker swarm mode which allows service discovery. Also, I came across other tools for service discovery such as Eureka and Consul.
My main aim is to replace the IP addresses in curl calls with service names and to load-balance between multiple instances of the same service,
e.g., turn curl http://192.168.0.11:8080/ into curl http://my-service.
I have to keep my services language-independent.
Please suggest: do I need to use Consul with Docker Swarm for service discovery, or can I do it without Consul? What are the advantages?
With the new "swarm mode", you can use Docker services to create clustered services across multiple swarm nodes. You can then access those services, load-balanced, by using the service name rather than a node name in your requests.
This only applies to nodes within the swarm's overlay network. If your client systems are part of the same swarm, discovery works out of the box with no need for any external solution.
On the other hand, if you want to be able to discover the services from systems outside the swarm, you have a few options:
- For stateless services, you could use Docker's routing mesh, which makes the service port available across all swarm nodes. That way you can point at any node in the swarm, and Docker will direct your request to a node that is running the service (regardless of whether the node you hit is running it or not).
- Use an actual load balancer in front of your swarm services if you need to control routing or deal with different states. This could either be another Docker service (e.g., HAProxy or NGINX) launched with the --mode global option to ensure it runs on all nodes, or a separate load balancer like a Citrix NetScaler. You would need to have your service containers reconfigure the LB through their startup scripts or via provisioning tools (or add them manually).
- Use something like Consul for external service discovery, possibly in conjunction with registrator to add services automatically. In this scenario you just configure your external clients to use the Consul server/cluster for DNS resolution, or query its API directly (see the sketch after this list).
You could of course just move your service consumers into the swarm as well. If you're separating the clients from the services in different physical VLANs (or VPCs, etc.), though, you would need to launch your client containers in separate overlay networks to ensure you don't effectively defeat the physical network segregation already in place.
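For the Consul option, here is a minimal sketch of querying the service catalog over Consul's HTTP API (the Consul address and the service name my-service are hypothetical; 8500 is Consul's default HTTP port):

```python
# Ask Consul which instances are registered for a service, then pick
# an address/port pair to call. ServiceAddress may be empty, in which
# case the node's Address is used instead.
import json
import urllib.request

CONSUL = "http://consul.example.internal:8500"  # hypothetical address

with urllib.request.urlopen(f"{CONSUL}/v1/catalog/service/my-service") as resp:
    for inst in json.load(resp):
        addr = inst.get("ServiceAddress") or inst.get("Address")
        print(addr, inst.get("ServicePort"))
```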
Service discovery (via DNS) has been built into Docker since version 1.12. When you create a custom network (a bridge network, or an overlay network if you have multiple hosts), containers can simply talk to each other by name as long as they are part of the same network. You can also give containers a network alias; DNS will round-robin across all containers that share the same alias. For a simple example, see:
https://linuxctl.com/docker-networking-options-bridge
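To see this from inside a container on the same custom network, you can resolve the name yourself; the alias api below is hypothetical:

```python
# Docker's embedded DNS answers for container names and aliases on the
# same user-defined network; every container sharing the alias shows
# up as an A record.
import socket

for info in socket.getaddrinfo("api", 8080, proto=socket.IPPROTO_TCP):
    print(info[4][0])  # one IP per container behind the alias
```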
As long as you are using a user-defined network and creating your containers inside that network, service discovery is available to you out of the box. (The default bridge network does not provide name resolution; a custom network does.)
You will need help from other tools once your infrastructure starts to span multiple servers with microservices distributed across them.
Swarm is a good tool to start with; however, I would stick with Consul for production workloads on an IaaS provider like Amazon.

What is the difference between running a separate service discovery server and integrating it into the cluster machines in Docker Swarm

I am having trouble understanding the need for a separate service discovery server when we could register the slave node with the master node at slave start-up through whatever protocol. Hosting another service seems redundant to me.
Docker Swarm is there to create a cluster of hosts running Docker and to schedule containers across the cluster.
It does not include service discovery itself; that is provided by a backend service such as etcd, Consul, or ZooKeeper.
The first problem: service registration and discovery is an infrastructure concern, not an application concern.
The second problem: implementing service registration and discovery when infrastructure and application implementation are mutually agnostic is tough.
DockerCon made that distinction clear this morning (Nov. 16th, 2015) with the "Docker Stack":
(Graphics from #laurelcomics)
Docker networking solves these problems by backing an interface (DNS) with pluggable infrastructure components that adhere to a common KV interface.
You can see consul.io used in:
"Easy routing and service discovery with Docker, Consul and nginx"
"Docker Overlay Networks: That was Easy"
"Docker DNS getaddrinfo ENOTFOUND"
That means:
Consul is a KV (Key/Value) store which can be plugged into Swarm in order to manage the service discovery aspect.
Swarm is the access layer, which is usually the layer that contains a gateway or routing component that allows others to actually reach your services.
(Image from the "Easy routing and service discovery with Docker, Consul and nginx" article written by Ladislav Gazo)
The goal is to isolate an infrastructure concern (the discovery service) in its own container, separate from a dev-tool concern (Swarm).
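As a concrete sketch of that separation, a container (or a sidecar such as registrator acting on its behalf) can register itself with the local Consul agent over Consul's HTTP API; the service name, address, and port below are hypothetical:

```python
# Register a service instance with a local Consul agent. Tools like
# registrator automate this step by watching the Docker events stream.
import json
import urllib.request

payload = json.dumps({
    "Name": "web",           # hypothetical service name
    "Address": "10.0.0.12",  # container/host address
    "Port": 8080,
}).encode()

req = urllib.request.Request(
    "http://localhost:8500/v1/agent/service/register",
    data=payload,
    method="PUT",
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 on success
```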
