Kubernetes - how to send a request to all the minions?

I have a pod whose purpose is to take incoming data and write it to the host volume. I'm running this pod on all the minions.
Now when I set up a NodePort service for these pods, traffic goes to one pod at a time.
But how do I send a request to all these pods on the different minions? How do I bypass the load balancing here? I want that data to be available in every minion's host volume.

A service uses a selector to identify the list of pods to proxy to (if they're in the Ready state). You could simply ask for the same list of pods with a GET request:
$ curl -G "$MASTER/api/v1/namespaces/$NAMESPACE/pods?labelSelector=$KEY=$VALUE"
And then manually send your request to each of the pod IP:port endpoints. If you need to be able to send the request from outside the cluster network, you could create a proxy pod (exposed to the external network through the standard means). The proxy pod could watch for pods with your label (similar to the above), and forward any requests it receives to the list of ready pods.
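For example, here's a minimal fan-out sketch in shell, assuming the same $MASTER, $NAMESPACE, $KEY and $VALUE as above, jq available, and that each pod listens on port 8080 (the port and the /ingest path are illustrative, not from the question):
# iterate over the pods' IPs and send the same request to each one directly
for ip in $(curl -sG "$MASTER/api/v1/namespaces/$NAMESPACE/pods?labelSelector=$KEY=$VALUE" | jq -r '.items[].status.podIP'); do
  curl -s "http://$ip:8080/ingest" -d @payload.json   # hypothetical endpoint and payload
done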
A similar effect could be achieved using hostPort and forwarding to nodes, but the use of hostPort is discouraged (see best practices).

Here's a method that works as long as you can send the requests from a container inside the k8s network (this may not match the OP's requirements exactly, but it may help someone googling this).
You have to look up the pods somehow. Here I'm finding all pods in the staging namespace with the label app=hot-app:
kubectl get pods -l app=hot-app -n staging -o json | jq -r '.items[].status.podIP'
This example uses the excellent jq tool to parse the resulting JSON and grab the pod IPs, but you can parse the JSON in other ways, including with kubectl itself.
This returns something like:
10.245.4.253
10.245.21.143
You can find the internal port like this (the example has just one container, so one unique port):
kubectl get pods -l app=hot-app -n staging -o json | jq -r '.items[].spec.containers[].ports[].containerPort' | sort | uniq
8080
Then get inside a container in your k8s cluster that has curl, combine the IPs and port from the previous commands, and hit the pods like this:
curl 10.245.4.253:8080/hot-path
curl 10.245.21.143:8080/hot-path
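Putting it together, a hedged loop over the same label, namespace and port as above:
for ip in $(kubectl get pods -l app=hot-app -n staging -o json | jq -r '.items[].status.podIP'); do
  curl -s "http://$ip:8080/hot-path"   # hits each pod directly, bypassing the service
done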

You need to define a hostPort for the container and address each pod on each node individually via the host IP.
See the caveats in the best-practices guide's Services section.
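As a hedged sketch of what that looks like (pod name, image and ports are illustrative), a hostPort binding applied from a heredoc:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: writer                 # hypothetical name
spec:
  containers:
  - name: writer
    image: my-writer:latest    # hypothetical image
    ports:
    - containerPort: 8080
      hostPort: 8080           # each node exposes this pod at <node-ip>:8080
EOF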

Related

Sysdig - get syscalls triggered by a k8s pod

I want to capture all system calls from a k8s pod.
Sysdig supports the -k flag for specifying a URL to the Kubernetes API server.
I exposed the API using the kubectl proxy command below:
kubectl proxy --port=8080 &
I want to filter system calls for a specific k8s pod called 'mypod':
sudo sysdig -k http://127.0.0.1:8080 k8s.pod.name=mypod
No events are captured using this filter. It is also worth noting that I am running this sysdig command from the master node, and that 'mypod' is running on a different worker machine that is part of the k8s cluster.
What am I missing?
Sysdig OSS must run on the same machine as the process/container you want to monitor.
Filtering syscalls that happen on another node is impossible, since a process never calls into another machine's kernel.
Sysdig OSS, like Falco, works at the kernel level to monitor syscalls. Monitoring K8s Audit events would be different, since those are sent to the plugin socket.
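A hedged way to line things up (finding the node and having SSH access to it are assumptions, not from the question):
kubectl get pod mypod -o wide   # the NODE column shows where mypod is scheduled
# SSH to that worker node, run kubectl proxy there (or point -k at the API server), then:
sudo sysdig -k http://127.0.0.1:8080 k8s.pod.name=mypod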

Docker networks in Kubernetes/Rancher

I've been trying to convert my SimpleLogin Docker containers to Kubernetes using Rancher. However one of the steps requires me to create a network.
sudo docker network create -d bridge \
  --subnet=240.0.0.0/24 \
  --gateway=240.0.0.1 \
  sl-network
I couldn't really find a way to do this in Kubernetes/Rancher.
How do I set up an equivalent network to the above command in Kubernetes?
If you want more information about what this network should do, you can find it here.
You don't. Kubernetes has its own network ecosystem, which mostly acts as though every Pod and Service is on the same network. You can't create separate subnets within it, and there's no way to create a separate network per logical application. You also can't control the IP range of networks in Kubernetes (it shouldn't usually be necessary in Docker either).
Generally you can communicate between Kubernetes Pods by putting a Service in front of each, and then using the Service's DNS name as a host name. If all of the parts were running in the same Namespace, and the Service in front of the database were named sl-db, then the webapp Pod could use sl-db as the host name part of the DB_URI setting.
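For instance, a hedged sketch (credentials and database name are placeholders; sl-db is the Service name from above):
DB_URI=postgresql://user:password@sl-db:5432/simplelogin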
Reading through the documentation you link to, you will probably need to do some extra work to get the Postfix MTA set up. Note that it looks like it runs outside of Docker in this setup; either you will have to port the setup to run inside Kubernetes or configure its mynetworks settings to include the network that contains the Kubernetes nodes. You will also need to set up Kubernetes ConfigMaps and Secrets to hold the various configuration files and certificates this setup needs.

How to access local machine from a pod

I have a pod created on the local machine. I also have a script file on the local machine. I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
That script will update /etc/hosts of another pod. Is there a way I can update the /etc/hosts of one pod from another pod? The pods are created from two different deployments.
I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
You can't do that. In a plain Docker context, one of Docker's key benefits is filesystem isolation, so the container can't see the host's filesystem at all unless parts of it are explicitly published into the container. In Kubernetes not only is there this restriction, but you also have limited control over which node you're running on, and there's potential trouble if one node has a given script and another doesn't.
Is there a way I can update the /etc/hosts of one pod from another pod?
As a general rule, you should avoid using /etc/hosts for anything. Setting up a DNS service keeps things consistent and avoids having to manually edit files in a bunch of places.
Kubernetes provides a DNS service for you. In particular, if you define a Service, then the name of that Service will be visible as a DNS name (within the cluster); one pod can reach the other via first-service-name.default.svc.cluster.local. That's probably the answer you're actually looking for.
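A minimal sketch of this pattern (deployment, Service, and pod names are illustrative):
kubectl expose deployment first-deployment --name=first-service-name --port=80
# from any other pod in the cluster:
kubectl exec other-pod -- nslookup first-service-name.default.svc.cluster.local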
(If you really only have a single-node environment then Kubernetes adds a lot of complexity and not much benefit; consider plain Docker and Docker Compose instead.)
As an addition to David's answer - you can copy a script from your host to a pod using kubectl cp:
kubectl cp [file-path] [pod-name]:/[path]
About your question in the comment. You can do it by exposing a deployment:
kubectl expose deployment/name
This will result in creating a Service; you can find more practical examples and approaches in this section.
Thus, even after your specific Pod terminates, you can still reach the new Pods via the same port and Service. You can find more details here.
In the example from the documentation you can see that an nginx Pod has been created with container port 80, and the expose command will have the following effect:
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View the Service API object to see the list of supported fields in the service definition.
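To make the quoted port/targetPort distinction concrete, here's a hedged example (names and ports are illustrative):
kubectl expose deployment/my-nginx --port=8080 --target-port=80
# other pods can now reach nginx's container port 80 via the abstracted Service port:
# curl http://my-nginx.default.svc.cluster.local:8080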
Other than that, David provided a really good explanation here; it would also be worth reading up on FQDNs and DNS, which tie in with Services.

Kubernetes: multiple pods in a node when each pod exposes a port

I was following along with the Hello, World example in the Kubernetes getting-started guide.
In that example, a cluster with 3 nodes/instances is created on Google Container Engine.
The container to be deployed is a basic nodejs http server, which listens on port 8080.
Now when I run
kubectl run hello-node --image <image-name> --port 8080
it creates a pod and a deployment, deploying the pod on one of the nodes.
Running the
kubectl scale deployment hello-node --replicas=4
command increases the number of pods to 4.
But since each pod exposes port 8080, won't that create a port conflict on a node where two pods are deployed?
I can see 4 pods when I do kubectl get pods, but what will the behaviour be in this case?
Got some help in the #kubernetes-users channel on Slack:
The port specified in kubectl run ... is that of a pod. And each pod has its unique IP address. So, there are no port conflicts.
The pods won’t serve traffic until and unless you expose them as a service.
Exposing a service by running kubectl expose ... assigns a NodePort (which is in the range 30000-32767 by default) on every node. This port must be unique for every service.
If a node has multiple pods kube-proxy balances the traffic between those pods.
Also, when I accessed my service from the browser, I was able to see logs in all 4 pods, so traffic was being served by all 4 of them.
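You can confirm there's no conflict by listing the pod IPs (this assumes the default run=hello-node label that kubectl run applies):
kubectl get pods -l run=hello-node -o wide   # each pod has its own IP, all listening on 8080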
There is a difference between the port that your pod exposes and the physical ports on your node. Those need to be linked by, for instance, a Kubernetes Service or a LoadBalancer, as discussed a bit further on in the hello-world documentation: http://kubernetes.io/docs/hellonode/#allow-external-traffic

IP variable expansion in Kubernetes Pod definition

I have a docker image that needs the container IP address (or hostname) to be passed on the command line.
Is it possible to expand the pod hostname or IP in the container command definition? If not, what is the best way to obtain it in a Kubernetes-deployed container?
In AWS I usually obtain it by contacting the EC2 metadata service; can I do something similar by contacting the Kubernetes API, as long as I can obtain the pod name/id?
Thanks.
Depending on your pod setup, you may be able to use hostname -i.
E.g.
$ kubectl exec ${POD_NAME} hostname -i
10.245.2.109
From man hostname
...
-i, --ip-address
Display the network address(es) of the host name. Note that this works only if the host name can be resolved. Avoid using this option; use hostname --all-ip-addresses instead.
-I, --all-ip-addresses
Display all network addresses of the host. This option enumerates all configured addresses on all network interfaces. The loopback interface and IPv6 link-local addresses are omitted. Contrary to option -i, this option does not depend on name resolution. Do not make any assumptions about the order of the output.
...
In v1.1 (releasing soon) you can expose the pod's IP as an environment variable through the downward API (note that the published documentation is for v1.0, which doesn't include the pod IP in the downward API).
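A minimal downward-API sketch (pod name and image are illustrative) that injects the pod IP as an environment variable:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: podip-demo             # hypothetical name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "echo my IP is $POD_IP && sleep 3600"]
    env:
    - name: POD_IP             # populated by the kubelet at pod start
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF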
Prior to v1.1, the best way to get this is probably by querying the API from the pod. See Accessing the API from a Pod for how to access the API. The pod name is your $HOSTNAME, and you can find the IP with something like:
wget -O - ${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1/namespaces/default/pods/${HOSTNAME} | grep podIP
Although I recommend using a JSON parser such as jq.
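For instance, a hedged one-liner using the same read-only service variables as above:
wget -qO- ${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1/namespaces/default/pods/${HOSTNAME} | jq -r '.status.podIP'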
EDIT:
Just wanted to add that the pod IP is not preserved across restarts, which is why the usual recommendation is to set up a Service pointing to your pod. If you do use a Service, the Service's IP will be fixed and act as a proxy to the pod, even across restarts. Service IPs are provided as environment variables, such as FOO_SERVICE_HOST and FOO_SERVICE_PORT.
