Connecting to Kubernetes kops pod using docker daemon

I created a Kubernetes cluster with kops (on AWS), and I want to access one of my nodes as root. According to this post, it's only possible with a Docker command.
When I type docker image ls I get nothing. When I was using minikube I solved this with minikube docker-env, copying the last line of its output into a new CMD line: @FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i
(I'm using Windows 10.) With the above procedure in place, after typing docker image ls or docker ps I was able to see all the minikube images and containers. Is there any way to do the same for pods created with kops?
I'm able to do it by connecting to a Kubernetes node, installing docker on it, and then connecting to the pod with the -u root switch, but I wonder whether it's possible to do the same from the host machine (Windows 10 in my case).
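Concretely, the workaround that works for me today looks roughly like this (the node address, pod name, and container id are placeholders):
ssh admin@<node-ip>
docker ps | grep <pod-name>
docker exec -u root -it <container-id> /bin/bash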

It's a bit unclear what you're trying to do, so I'm going to give some general info here based on this scenario: you've created a K8s cluster on AWS using kops.
I want to access one of my AWS nodes as root
This has nothing to do with kops, nor with Docker. This is basic AWS management: you need to check your AWS management console to get all the info needed to connect to your node.
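For example, assuming the cluster was created with your default SSH key, something like the following should get you a root shell (the login user depends on the AMI; kops' default Debian images use admin):
kubectl get nodes -o wide          # the EXTERNAL-IP column lists each node's address
ssh -i ~/.ssh/id_rsa admin@<node-external-ip>
sudo -i                            # become root on the node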
I want to see all the docker image from my windows laptop
Again, this has nothing to do with kops. Kops is a Kubernetes distribution. In Kubernetes, the smallest unit of computing that can be managed is the pod; you cannot directly manage docker containers or images with Kubernetes.
So if you want to see your docker images, you'll need to somehow connect to your AWS node and then execute
docker image ls
In fact, that's what you're doing with your minikube example. You're just executing the docker command on the VM managed by minikube.
More info on what a pod is here.
I want to see all the pods created with kops
Well, assuming that you've successfully configured your system to access AWS with kops (more info on that here), you'll just have to execute the appropriate kubectl command. For example, to list all the pods located in the kube-system namespace:
kubectl -n kube-system get po
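To see every pod in the cluster regardless of namespace, you can widen that to:
kubectl get pods --all-namespaces -o wide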
Hope this helps !

That is not possible. A pod is an abstraction created and managed by Kubernetes; the docker daemon has no idea what a pod is. You can only see the containers using the docker command, but then again, you won't be able to tell which container is associated with which pod.
Answered by Marc ABOUCHACRA

Related

Kubernetes elasticsearch health check fails - but only in some containers

I've got a very strange networking problem trying to get elasticsearch working on a local Kubernetes cluster, and I'm completely stumped on what could be causing the issue. At this point, I don't think this is an Elasticsearch problem, I think there is something odd going on in the host machine, but I can't for the life of me figure out what it is.
TLDR version: "curl -X GET http://127.0.0.1:9200" works from inside some containers, but not others.
The details are as follows:
I have a 4 node Kubernetes cluster for testing on two different machines.
Both hosts have the same operating system (OpenSuse Leap 15.1)
They both have the same version of VirtualBox and Vagrant.
They both have a 4 node Kubernetes cluster created from the same Vagrantfile, using the same version of the same Vagrant base box (bento/centos-7).
Since the Vagrant boxes are the same, both of my environments will have the same version of Docker in the VMs.
I've installed Elasticsearch to each cluster using the same Helm chart, and they both use the same Elasticsearch Docker image.
But from there, I run into problems in one of my environments when I do the following:
I run kubectl get pods -A -o wide to find out where the elasticsearch master is running.
I run vagrant ssh to that node.
As root, I run docker ps to find out the id of the container running elasticsearch.
As root, I run docker exec -u root -it container_name /bin/bash to get a shell in the container.
I run curl -X GET http://127.0.0.1:9200/_cluster/health, which is what Kubernetes is using for a health check. In one environment, I get back JSON. In the other, I get "Connection refused"
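For reference, the exact sequence I run is (node and container names are placeholders):
kubectl get pods -A -o wide                         # locate the node hosting the elasticsearch master
vagrant ssh <node-name>
sudo docker ps | grep elasticsearch                 # find the container id
sudo docker exec -u root -it <container-id> /bin/bash
curl -X GET http://127.0.0.1:9200/_cluster/health   # JSON on one host, "Connection refused" on the other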
I can't figure out why the same docker image running in the same kind of virtual machine would produce a different result on a different host.
Can anyone shed some light on this situation?

Docker compose not available on docker swarm cluster

Question 1: I am new to docker swarm. I created a docker swarm cluster on my local machine and SSH'd into it. To my surprise, docker-compose was NOT installed on the manager node. Is that normal? Is there any workaround to get docker-compose up and running on the swarm manager node?
Question 2: How do I get all my code onto the manager node? Let's say I have my source code in a directory and I want to move it onto my docker swarm manager node. How can I do that?
It is common for docker-compose not to be installed on servers, unlike Docker Desktop clients, which come bundled with docker-compose and other tools.
You have to install it to use it on your local machine. https://docs.docker.com/compose/install/
That said, you can use the docker-compose installed on your local machine to work against a remote docker daemon by setting DOCKER_HOST:
https://docs.docker.com/engine/reference/commandline/cli/#environment-variables
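For example, assuming you can SSH to the manager node, something like this should point your local docker-compose at the remote daemon (host and user are placeholders; the ssh:// form needs Docker 18.09+):
export DOCKER_HOST=ssh://user@swarm-manager
docker-compose up -d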
You can copy your source code onto the manager node via scp: https://linuxize.com/post/how-to-use-scp-command-to-securely-transfer-files/
But rather than copying source code around, you would usually build images and deploy those.
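On a swarm, the usual flow is to build and push an image, then deploy it as a stack; a minimal sketch (registry, image, and stack names are placeholders):
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
docker stack deploy -c docker-compose.yml mystack    # run this on the manager node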

How to find the directory of Kafka in a pod on GCP?

I'm using Kubernetes on Google Cloud Platform; I installed the Kafka image in a pod, but when I try to get the producer and consumer communicating with kafkacat, nothing works.
I want to find the Kafka directory in the pod.
The containers running inside a pod are actually run by the docker daemon (assuming docker is the chosen container runtime for this Kubernetes deployment) of the host machine.
So in case of GCP the host machine will be the worker VM where the pod is scheduled by Kubernetes.
You can find out which worker VM by looking at the NODE column when you run:
kubectl get pod <pod-name> -o wide
Hence the image will be stored in the file system of the host machine. The exact path depends on the OS distribution of the host machine.
This is discussed here: Where are Docker images stored on the host machine?
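For example, assuming the node runs the Docker runtime with the default overlay2 storage driver, something like this would show where the image data lives (the node name comes from the kubectl output above):
gcloud compute ssh <node-name>
sudo docker info --format '{{.DockerRootDir}}'    # usually /var/lib/docker
sudo ls /var/lib/docker/overlay2                  # image layer data with the overlay2 driver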

No internet connectivity inside docker container running inside kubernetes with weave as networking

I have a kubernetes cluster that is running on AWS EC2 instances with Weave as the networking (CNI). I have disabled the docker networking (ip-masq and iptables) as it is managed by Weave (to avoid network conflicts).
I have deployed Jenkins on this cluster as a K8s pod, and this Jenkins uses the jenkins kubernetes plugin to spawn dynamic slaves based on a pod and container template which I have defined. These slave containers have a docker client in them which connects to the host docker engine via docker.sock.
So when I run any job in Jenkins it starts a slave, clones a git repo onto it, and starts building the Dockerfile present inside the repo.
My sample dockerfile looks like this:
FROM abc:123
RUN yum update
When the build reaches this step, it tries connecting to the Red Hat repositories to update the local package index and fails there. To debug, I logged in to the container, tried to wget/curl some packages, and found that there is no internet connectivity in this container.
I suspect that while building, docker starts intermediate containers, and those containers are not managed by Weave, so they do not have internet connectivity.
Need suggestions.
Related question: Internet connection inside Docker container in Kubernetes
OK, finally, after a lot of struggle, I found the solution.
Whenever K8s starts a pod, it first starts a sandbox ('pause') container whose role is basically to hold the network namespace for the pod's containers.
So while running docker build, if I pass that container's ID as the network, my intermediate containers start getting internet connectivity via it.
The change looks something like this:
docker build -t "some name" --network container:$(docker ps | grep $(hostname) | grep k8s_POD | cut -d" " -f1) -f infra/docker/Dockerfile .
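For reference, a variant of the same idea that filters on the labels the kubelet puts on the sandbox container instead of grepping (this assumes the dockershim runtime, where the slave's hostname equals its pod name):
docker build -t "some name" --network container:$(docker ps --filter "label=io.kubernetes.docker.type=podsandbox" --filter "name=$(hostname)" -q) -f infra/docker/Dockerfile .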
Hope this helps. :D
You can try to attach the Weave network dynamically as part of your build job. It is definitely possible to change the active network of a container on the fly with Weave.
Maybe you will need an additional container running the Weave Docker API Proxy, or you can use a different way to communicate with the Weave network on your nodes.
So, the main idea is just to attach the containers where you run builds to the Kubernetes pod network, where you have external access.
Also, and maybe this is better, you can create another Weave virtual network with access to the Internet and attach your build containers to it.
You're right - the docker build process runs in a different context, and Weave Net doesn't attach those automatically.
Even more complicated, Kubernetes connects via CNI whereas Docker has its own plugin API. I believe it's possible to have both on a machine at the same time, but it's rather complicated.
Maybe look at some of the ways to build images without using Docker?
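For example, kaniko builds images from inside an ordinary pod, so the build gets the pod's (Weave-attached) network for free; a rough sketch, with the repository and registry as placeholders:
kubectl run kaniko --image=gcr.io/kaniko-project/executor:latest --restart=Never -- \
  --dockerfile=Dockerfile \
  --context=git://github.com/<org>/<repo>.git \
  --destination=<registry>/<image>:<tag>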

Kubernetes pods not starting, running behind a proxy

I am running Kubernetes on minikube behind a proxy, so I had set the env variables (HTTP_PROXY & NO_PROXY) for docker in /etc/systemd/system/docker.service.d/http-proxy.conf.
I was able to do docker pull but when I run the below example
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
The pod never starts and I get the error:
desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0"
docker pull gcr.io/google_containers/echoserver:1.4 works fine
I ran into the same problem and am sharing what I learned after making a couple of wrong turns. This is with minikube v0.19.0. If you have an older version you might want to update.
Remember, there are two things we need to accomplish:
Make sure kubectl does not go through the proxy when connecting to minikube on your desktop.
Make sure that the docker daemon in minikube does go through the proxy when it needs to connect to image repositories.
First, make sure your proxy settings are correct in your environment. Here is an example from my .bashrc:
export {http,https,ftp}_proxy=http://${MY_PROXY_HOST}:${MY_PROXY_PORT}
export {HTTP,HTTPS,FTP}_PROXY=${http_proxy}
export no_proxy="localhost,127.0.0.1,localaddress,.your.domain.com,192.168.99.100"
export NO_PROXY=${no_proxy}
A couple things to note:
I set both lower and upper case. Sometimes this matters.
192.168.99.100 is from minikube ip. You can add it after your cluster is started.
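For example, once the cluster is up:
export no_proxy=${no_proxy},$(minikube ip)
export NO_PROXY=${no_proxy}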
OK, so that should take care of kubectl working correctly. Now we have the next issue, which is making sure that the Docker daemon in minikube is configured with your proxy settings. You do this, as mentioned by PMat, like this:
$ minikube delete
$ minikube start --docker-env HTTP_PROXY=${http_proxy} --docker-env HTTPS_PROXY=${https_proxy} --docker-env NO_PROXY=192.168.99.0/24
To verify that these settings have taken effect, do this:
$ minikube ssh -- systemctl show docker --property=Environment --no-pager
You should see the proxy environment variables listed.
Why the minikube delete? Because without it, the start won't update the Docker environment if you had previously created a cluster (say, without the proxy information). Maybe this is why PMat did not have success passing --docker-env to start (or maybe it was an older version of minikube).
I was able to fix it myself.
I had Docker on my host and there is Docker in Minikube.
Docker in Minikube had issues.
I had to ssh into the minikube VM and follow this post:
Cannot download Docker images behind a proxy
and it all works now.
There should be a better way of doing this. On starting minikube, I passed the docker env like below, which did not work:
minikube start --docker-env HTTP_PROXY=http://xxxx:8080 --docker-env HTTPS_PROXY=http://xxxx:8080 \
  --docker-env NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8 --extra-config=kubelet.PodInfraContainerImage=myhub/pause:3.0
I had to set the same env variables inside the Minikube VM to make it work.
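What I did inside the VM was essentially the same systemd drop-in as on the host (a sketch; the proxy address is the placeholder from above):
minikube ssh
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://xxxx:8080"
Environment="HTTPS_PROXY=http://xxxx:8080"
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker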
It looks like you need to add the minikube ip to no_proxy:
export NO_PROXY=$no_proxy,$(minikube ip)
see this thread: kubectl behind a proxy
