IP variable expansion in Kubernetes Pod definition - docker

I have a docker image that needs the container IP address (or hostname) to be passed by the command line.
Is it possible to expand the pod hostname or IP in the container command definition? If not, what is the best way to obtain it in a Kubernetes-deployed container?
In AWS I usually obtain it by contacting the EC2 metadata service. Can I do something similar by contacting the Kubernetes API, provided I can obtain the pod name/ID?
Thanks.

Depending on your pod setup, you may be able to use hostname -i.
E.g.
$ kubectl exec ${POD_NAME} -- hostname -i
10.245.2.109
From man hostname
...
-i, --ip-address
Display the network address(es) of the host name. Note that this works only if the host name can be resolved. Avoid using this option; use hostname --all-ip-addresses instead.
-I, --all-ip-addresses
Display all network addresses of the host. This option enumerates all configured addresses on all network interfaces. The loopback interface and IPv6 link-local addresses are omitted. Contrary to option -i, this option does not depend on name resolution. Do not make any assumptions about the order of the output.
...
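Per the man page's advice, the more robust form would be (though note the output order is not guaranteed):
$ kubectl exec ${POD_NAME} -- hostname -I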

In v1.1 (releasing soon) you can expose the pod's IP as an environment variable through the downward API (note that the published documentation is for v1.0, which doesn't include pod IP in the downward API).
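For reference, a minimal sketch of that downward API usage in a container spec (the variable name POD_IP is arbitrary):
env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP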
Prior to v1.1, the best way to get this is probably by querying the API from the pod. See Accessing the API from a Pod for how to access the API. The pod name is your $HOSTNAME, and you can find the IP with something like:
wget -O - ${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1/namespaces/default/pods/${HOSTNAME} | grep podIP
That said, I recommend using a proper JSON parser such as jq instead of grep.
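For example (assuming jq is available in the image):
wget -qO- ${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1/namespaces/default/pods/${HOSTNAME} | jq -r '.status.podIP'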
EDIT:
Just wanted to add that the pod IP is not preserved across restarts, which is why the usual recommendation is to set up a Service pointing to your pod. If you do use a Service, the Service's IP stays fixed and acts as a proxy to the pod, even across restarts. Service IPs are provided as environment variables, such as FOO_SERVICE_HOST and FOO_SERVICE_PORT.

Related

Docker networks in Kubernetes/Rancher

I've been trying to convert my SimpleLogin Docker containers to Kubernetes using Rancher. However one of the steps requires me to create a network.
sudo docker network create -d bridge \
--subnet=240.0.0.0/24 \
--gateway=240.0.0.1 \
sl-network
I couldn't really find a way to do this on Kubernetes/Rancher.
How do I set up an equivalent network like the above command in Kubernetes?
If you want more information about what this network should do, you can find it here.
You don't. Kubernetes has its own network ecosystem, which mostly acts as though every Pod and Service is on the same network. You can't create separate subnets within that; there's no way to create a separate network per logical application. You also can't control the IP range of networks in Kubernetes (it shouldn't usually be necessary in Docker either).
Generally you can communicate between Kubernetes Pods by putting a Service in front of each, and then using the Service's DNS name as a host name. If all of the parts were running in the same Namespace, and the Service in front of the database were named sl-db, then the webapp Pod could use sl-db as the host name part of the DB_URI setting.
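As a sketch, the Service in front of the database might look like this (assuming the database Pods carry an app: sl-db label and listen on PostgreSQL's default port 5432):
apiVersion: v1
kind: Service
metadata:
  name: sl-db
spec:
  selector:
    app: sl-db        # must match the labels on the database Pods
  ports:
  - port: 5432        # the port other Pods connect to
    targetPort: 5432  # the port the database container listens on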
Reading through the documentation you link to, you will probably need to do some extra work to get the Postfix MTA set up. Note that it looks like it runs outside of Docker in this setup; either you will have to port the setup to run inside Kubernetes or configure its mynetworks settings to include the network that contains the Kubernetes nodes. You will also need to set up Kubernetes ConfigMaps and Secrets to hold the various configuration files and certificates this setup needs.
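For example, hypothetical commands to load a Postfix configuration file and TLS material (the file and object names here are illustrative):
kubectl create configmap postfix-config --from-file=main.cf
kubectl create secret tls postfix-tls --cert=tls.crt --key=tls.key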

How to access local machine from a pod

I have a pod created on the local machine. I also have a script file on the local machine. I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
That script will update /etc/hosts of another pod. Is there a way I can update the /etc/hosts of one pod from another pod? The pods are created from two different deployments.
I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
You can't do that. In a plain Docker context, one of Docker's key benefits is filesystem isolation, so the container can't see the host's filesystem at all unless parts of it are explicitly published into the container. In Kubernetes not only is there this restriction, but you also have limited control over which node you're running on, and there's potential trouble if one node has a given script and another doesn't.
Is there a way I can update the /etc/hosts of one pod from another pod?
As a general rule, you should avoid using /etc/hosts for anything. Setting up a DNS service keeps things consistent and avoids having to manually edit files in a bunch of places.
Kubernetes provides a DNS service for you. In particular, if you define a Service, then the name of that Service will be visible as a DNS name (within the cluster); one pod can reach the other via first-service-name.default.svc.cluster.local. That's probably the answer you're actually looking for.
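For example, from inside any pod in the cluster (assuming a Service named first-service-name exists in the default namespace):
nslookup first-service-name.default.svc.cluster.local
curl http://first-service-name.default.svc.cluster.local/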
(If you really only have a single-node environment then Kubernetes adds a lot of complexity and not much benefit; consider plain Docker and Docker Compose instead.)
As an addition to David's answer: you can copy a script from your host to a pod using kubectl cp:
kubectl cp [file-path] [pod-name]:/[path]
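For example, with hypothetical names:
kubectl cp ./update-hosts.sh my-pod:/tmp/update-hosts.sh
kubectl exec my-pod -- sh /tmp/update-hosts.sh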
About your question in the comment: you can do it by exposing a Deployment:
kubectl expose deployment/name
Which will result in creating a Service; you can find more practical examples and approaches in this section.
Thus, after your specific Pod terminates, you can still reach the new Pods through the same port and Service. You can find more details here.
In the example from the documentation you can see that the nginx Pod has been created with container port 80, and the expose command has the following effect:
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View the Service API object to see the list of supported fields in the service definition.
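Following that nginx example, the expose step and a quick check might look like:
kubectl expose deployment/my-nginx --port=80
kubectl get service my-nginx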
Other than that, David provided a really good explanation here; it would also be worth reading up on FQDNs and DNS, which ties in with Services.

kubectl apply -f behind proxy

I am able to install Kubernetes successfully using the kubeadm method. My environment is behind a proxy. I applied the proxy settings to the system and to Docker, and I am able to pull images from Docker Hub without any issues. But at the last step, where we have to install the pod network (like Weave or Flannel), it's not able to connect via the proxy. It gives a timeout error. Is there an option for kubectl apply -f like curl's -x http:// flag? Until I perform this step the master says NotReady.
When you work behind a proxy for internet access, do not forget to configure the NO_PROXY environment variable, in addition to HTTP(S)_PROXY.
See this example:
NO_PROXY accepts a comma-separated list of hosts, IP addresses, or IP ranges in CIDR format:
For master hosts: the node host name, and the master IP or host name
For node hosts: the master IP or host name
For the Docker service: the registry service IP and host name
See also for instance weaveworks/scope issue 2246.
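kubectl, like most Go tools, honors the standard proxy environment variables, so a session might look like this (the proxy address and CIDRs below are placeholders; point NO_PROXY at your own service/pod CIDRs and master):
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16,master.example.com
kubectl apply -f <pod-network-manifest-url>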

Kubernetes (gke) get name of pod network interface

I'm trying to locate a performance issue that might be network related. Therefore I want to inspect all packets entering and leaving a pod.
I'm on Kubernetes 1.8.12 on GKE. I SSH to the host and I see the bridge called cbr0 that sees all the traffic. I also see a ton of interfaces named like vethdeadbeef@if3. I assume those are virtual interfaces that are created per container. Where do I look to find out which interface belongs to which container, so I can get a list of all the interfaces of a pod?
If you have cat available in the container, you can compare the interface index of the container's eth0 with those of the veth* devices on your host: the container's /sys/class/net/eth0/iflink holds the ifindex of the host-side veth peer. For example:
# grep ^ /sys/class/net/veth*/ifindex | grep ":$(docker exec aea243a766c1 cat /sys/class/net/eth0/iflink)"
/sys/class/net/veth1d431c85/ifindex:92
veth1d431c85 is what you are looking for.
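Once you have the interface name, you can capture the pod's traffic directly from the host, e.g. (assuming tcpdump is installed on the node):
sudo tcpdump -i veth1d431c85 -nn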

Kubernetes - how to send request to all the minions?

I have a pod whose purpose is to take incoming data and write it to the host volume. I'm running this pod on all the minions.
Now when I set up a NodePort Service for these pods, traffic goes to one pod at a time.
But how do I send a request to all these pods on the different minions? How do I bypass the load balancing here? I want that data to be available in every minion's host volume.
A service uses a selector to identify the list of pods to proxy to (if they're in the Ready state). You could simply ask for the same list of pods with a GET request:
$ curl -G "$MASTER/api/v1/namespaces/$NAMESPACE/pods?labelSelector=$KEY=$VALUE"
And then manually send your request to each of the pod ip:port endpoints. If you need to be able to send the request from outside the cluster network, you could create a proxy pod (exposed to the external network through the standard means). The proxy pod could watch for pods with your label (similar to above), and forward any requests it receives to the list of ready pods.
A similar effect could be achieved using hostPort and forwarding to nodes, but the use of hostPort is discouraged (see best practices).
Here's a method that works as long as you can send the requests from a container inside the k8s network (this may not match the OP's desire exactly, but I'm guessing this may work for someone googling this).
You have to look up the pods somehow. Here I'm finding all pods in the staging namespace with the label app=hot-app:
kubectl get pods -l app=hot-app -n staging -o json | jq -r '.items[].status.podIP'
This example uses the excellent jq tool to parse the resulting JSON and grab the pod IPs, but you can parse the JSON in other ways, including with kubectl itself (a jsonpath variant is shown at the end of this answer).
This returns something like:
10.245.4.253
10.245.21.143
You can find the internal port like this (the example has just one container, so one unique port):
kubectl get pods -l app=hot-app -n staging -o json | jq -r '.items[].spec.containers[].ports[].containerPort' | sort | uniq
8080
Then get inside a container in your k8s cluster, combine the IPs and port from the previous commands, and hit the pods with curl like this:
curl 10.245.4.253:8080/hot-path
curl 10.245.21.143:8080/hot-path
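Putting it together, a small loop can hit every pod; this variant uses kubectl's built-in jsonpath output instead of jq (label, namespace, port, and path taken from the example above):
for ip in $(kubectl get pods -l app=hot-app -n staging -o jsonpath='{.items[*].status.podIP}'); do
  curl "http://${ip}:8080/hot-path"
done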
You need to define a hostPort for the container and address each pod on each node individually via the host IP.
See caveats in the best-practice guide's Services section.
