Kubernetes pods not starting, running behind a proxy - docker

I am running Kubernetes on minikube behind a proxy, so I had set the env variables (HTTP_PROXY & NO_PROXY) for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf.
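For reference, that drop-in looks roughly like this (the proxy host and port are placeholders for my real ones), followed by a sudo systemctl daemon-reload and a Docker restart:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"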
I was able to do docker pull, but when I run the example below
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
the pod never starts and I get the error
desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0"
docker pull gcr.io/google_containers/echoserver:1.4 works fine

I ran into the same problem and am sharing what I learned after making a couple of wrong turns. This is with minikube v0.19.0. If you have an older version you might want to update.
Remember, there are two things we need to accomplish:
Make sure kubectl does not go through the proxy when connecting to minikube on your desktop.
Make sure that the docker daemon in minikube does go through the proxy when it needs to connect to image repositories.
First, make sure your proxy settings are correct in your environment. Here is an example from my .bashrc:
export {http,https,ftp}_proxy=http://${MY_PROXY_HOST}:${MY_PROXY_PORT}
export {HTTP,HTTPS,FTP}_PROXY=${http_proxy}
export no_proxy="localhost,127.0.0.1,localaddress,.your.domain.com,192.168.99.100"
export NO_PROXY=${no_proxy}
A couple of things to note:
I set both lower and upper case. Sometimes this matters.
192.168.99.100 comes from minikube ip. You can add it after your cluster is started.
OK, so that should take care of kubectl working correctly. Now we have the next issue, which is making sure that the Docker daemon in minikube is configured with your proxy settings. You do this, as mentioned by PMat, like this:
$ minikube delete
$ minikube start --docker-env HTTP_PROXY=${http_proxy} --docker-env HTTPS_PROXY=${https_proxy} --docker-env NO_PROXY=192.168.99.0/24
To verify that these settings have taken, do this:
$ minikube ssh -- systemctl show docker --property=Environment --no-pager
You should see the proxy environment variables listed.
Why do the minikube delete? Because without it, the start won't update the Docker environment if you had previously created a cluster (say, without the proxy information). Maybe this is why PMat did not have success passing --docker-env to start (or maybe it was an older version of minikube).

I was able to fix it myself.
I had Docker on my host, and there is also Docker inside the minikube VM.
Docker in minikube had issues.
I had to SSH into the minikube VM and follow this post:
Cannot download Docker images behind a proxy
and now it all works.
There should be a better way of doing this. On starting minikube I passed the Docker env variables as below, which did not work:
minikube start --docker-env HTTP_PROXY=http://xxxx:8080 --docker-env HTTPS_PROXY=http://xxxx:8080 --docker-env NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8 --extra-config=kubelet.PodInfraContainerImage=myhub/pause:3.0
To make it work, I had to set the same env variables inside the minikube VM.
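Roughly, the steps from that post boil down to something like this inside the VM (assuming the VM's Docker runs under systemd; adjust the proxy address to yours):
minikube ssh
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://xxxx:8080"
Environment="HTTPS_PROXY=http://xxxx:8080"
Environment="NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker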

It looks like you need to add the minikube ip to no_proxy:
export NO_PROXY=$no_proxy,$(minikube ip)
see this thread: kubectl behind a proxy

Related

How does one control the locally accessible port for the registry addon when running minikube under docker?

I'm new to kubernetes and trying to get started. On my dev machine I'm running minikube using the docker driver. On startup I get the following spew:
😄 minikube v1.27.1 on Ubuntu 20.04 (amd64)
🎉 minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
🔎 Verifying Kubernetes components...
▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
▪ Using image k8s.gcr.io/metrics-server/metrics-server:v0.6.1
▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.2.1
▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                        │
│    Registry addon with docker driver uses port 32795 please use that instead of default port 5000     │
│                                                                                                        │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
📘 For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
▪ Using image docker.io/registry:2.8.1
🔎 Verifying ingress addon...
🔎 Verifying registry addon...
🌟 Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard, ingress, registry
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Note that the registry addon attempts to use port 5000 but, since that port is in use on my system, it assigns an available high-range port, in this example 32795. My (oversimplified) understanding is that minikube starts the various services in something like separate miniature containers within minikube, each service is addressable by a separate internally ranged IP (10.x.x.x), and the minikube deployment then maps exposed ports for those services to ports that can be addressed from the host machine (so svc/registry 80 is mapped such that minikube 5000 will hit it). But since minikube is itself running in Docker, there's an additional mapping, so the chain goes something like 127.0.0.1:32795 -> minikube:5000 -> svc/registry:80.
The assigned port changes whenever minikube starts. I can build and push docker images to the registry using this port:
$ docker build -t 127.0.0.1:32795/jenkins:2.303.2-lts -f kubernetes-ci-cd/applications/jenkins/Dockerfile kubernetes-ci-cd/applications/jenkins
$ docker push 127.0.0.1:32795/jenkins:2.303.2-lts
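To find the currently assigned port after each start, I can ask Docker which host port is mapped to the minikube container's 5000 (assuming the mapping chain described above is right):
$ docker port minikube 5000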
I would like to have this bound to a stable port. Changing the configuration (in the container under /etc/kubernetes/addons/registry-svc.yaml) doesn't work since that folder is not persisted and any changes to it just get blown away on startup. I've tried saving a local copy of the file and applying it after startup, but that doesn't seem to work.
$ kubectl apply -f ~/registry-svc.yaml
service/registry configured
Rebinding the port-forward
kubectl port-forward --namespace kube-system svc/registry 30400:80
Forwarding from 127.0.0.1:30400 -> 5000
Forwarding from [::1]:30400 -> 5000
changes the port binding for minikube, it looks like (it breaks pushing images to the registry, anyway, presumably because the old port is no longer the correct one). But minikube is running in a Docker container, and since 30400 wasn't exposed at startup and there's no way to expose a port on a running container, attempting to push to that port gets connection refused. I can probably come up with some sort of workaround, like persisting the /etc/kubernetes/addons folder, but that doesn't feel like the right solution: configuration changes from the defaults must be a common thing, and if they were, the configuration folder would presumably have been persisted by default. What is the "correct" solution for controlling which port services are bound to and exposed when running minikube under Docker (the registry now, but this will come up with other, non-addon services as soon as I solve this)?

Can't access Ingress from Kubernetes node

I have a CentOS machine where I created a Kubernetes cluster:
minikube start --driver=docker --addons ingress.
Inside the cluster, I installed a Harbor instance using a Helm chart using:
helm install harbor-release harbor/harbor --set expose.type=ingress
In the CentOS machine, I added an entry to my /etc/hosts pointing to the new ingress:
echo "$(minikube ip) core.harbor.domain" >> /etc/hosts
And with this, I can access to Harbor from this machine. I can login using Firefox and I'm able to push some custom images:
docker pull python
docker tag docker.io/python core.harbor.domain:443/library/python:latest
docker login https://core.harbor.domain --username admin --password Harbor12345
docker push core.harbor.domain:443/library/python:latest
And we are all happy. My problem starts when I try to deploy another Helm chart using those images: Kubernetes is not able to pull the images and times out. After some tries, I found out that my minikube node is not able to connect to Harbor.
I tried adding different IPs to /etc/hosts, like 127.0.0.1, the minikube ip, etc., without any results; Docker can never do a pull. If I use 127.0.0.1 I'm able to do a curl -k https://core.harbor.domain but not a docker login.
I also tried adding core.harbor.domain to docker insecure registries but without any luck.
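By that I mean, roughly, an entry like this in /etc/docker/daemon.json (plus a Docker restart), and the corresponding --insecure-registry flag on minikube start:
{
  "insecure-registries": ["core.harbor.domain"]
}
minikube start --driver=docker --addons ingress --insecure-registry="core.harbor.domain"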
Maybe I'm missing something and I shouldn't be able to access the ingress url from my minikube node in the first place.
What could I be doing wrong?
Do you think it's a good approach to put Harbor and the application pods in the same cluster?

Kubernetes elasticsearch health check fails - but only in some containers

I've got a very strange networking problem trying to get Elasticsearch working on a local Kubernetes cluster, and I'm completely stumped on what could be causing the issue. At this point, I don't think this is an Elasticsearch problem; I think there is something odd going on in the host machine, but I can't for the life of me figure out what it is.
TLDR version: "curl -X GET http://127.0.0.1:9200" works from inside some containers, but not others.
The details are as follows:
I have a 4 node Kubernetes cluster for testing on two different machines.
Both hosts have the same operating system (OpenSuse Leap 15.1)
They both have the same version of VirtualBox and Vagrant.
They both have a 4 node Kubernetes cluster created from the same Vagrantfile, using the same version of the same Vagrant base box (bento/centos-7).
Since the Vagrant boxes are the same, both of my environments will have the same version of Docker in the VMs.
I've installed Elasticsearch to each cluster using the same Helm chart, and they both use the same Elasticsearch Docker image.
But from there, I run into problems in one of my environments when I do the following:
I run kubectl get pods -A -o wide to find out where the elasticsearch master is running.
I run vagrant ssh to that node.
As root, I run docker ps to find out the id of the container running elasticsearch.
As root, I run docker exec -u root -it container_name /bin/bash to get a shell in the container.
I run curl -X GET http://127.0.0.1:9200/_cluster/health, which is what Kubernetes is using for a health check. In one environment, I get back JSON. In the other, I get "Connection refused"
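One extra data point I can gather the same way, in case it helps narrow things down, is the Elasticsearch log from the node itself (same container name as in the docker exec step above):
docker logs --tail 100 container_name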
I can't figure out why the same docker image running in the same kind of virtual machine would produce a different result on a different host.
Can anyone shed some light on this situation?

Connecting to Kubernetes kops pod using docker daemon

I created a Kubernetes cluster with kops (on AWS), and I want to access one of my nodes as root. According to this post, it's possible only with a Docker command.
When I type docker image ls I get nothing. When I was using minikube I solved this issue with minikube docker-env: from its output I just copied the last line into a new CMD window, @FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i
(I'm using Windows 10), and using the above procedure, after typing docker image ls or docker ps I was able to see all the minikube pods. Is there any way to do the same for pods created with kops?
I'm able to do it by connecting to a Kubernetes node, installing Docker on it, and then connecting to the pod with the -u root switch, but I wonder: is it possible to do the same from the host machine (Windows 10 in my case)?
It's a bit unclear what you're trying to do, so I'm going to give some general info here based on this scenario: you've created a K8S cluster on AWS using kops.
I want to access one of my AWS nodes as root
This has nothing to do with kops, nor with Docker. This is basic AWS management. You need to check your AWS management console to get all the info needed to connect to your node.
I want to see all the Docker images from my Windows laptop
Again, this has nothing to do with kops. Kops is a Kubernetes distribution. In Kubernetes, the smallest unit of computing that can be managed is the pod. You cannot directly manage Docker containers or images with Kubernetes.
So if you want to see your docker images, you'll need to somehow connect to your AWS node and then execute
docker image ls
In fact, that's what you're doing with your minikube example. You're just executing the docker command on the VM managed by minikube.
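Concretely, that would look something like this (the key is the one you gave kops, the SSH user depends on your AMI: admin for the default Debian images, ubuntu for Ubuntu-based ones, and <node-public-ip> is a placeholder for your node's address):
ssh -i ~/.ssh/id_rsa admin@<node-public-ip>
docker image ls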
More info on what a pod is here
I want to see all the pods created with kops
Well, assuming that you've successfully configured your system to access AWS with kops (more info on that here), then you'll just have to directly execute any kubectl command. For example, to list all the pods located in the kube-system namespace:
kubectl -n kube-system get po
Hope this helps!
That is not possible. A pod is an abstraction created and managed by Kubernetes. The Docker daemon has no idea what a pod is. You can only see the containers using the docker command, but then again, you won't be able to tell which container is associated with which pod.
Answered by #Marc ABOUCHACRA

kubectl: Connection to server was refused

When I run kubectl run ... or any command I get an error message saying
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What exactly is this error and how to resolve it?
In my case, working with minikube I had not started minikube. Starting minikube with
minikube start
fixed it.
In most cases, this means a missing kubeconfig file. kubectl is trying to use the default values when there is no $HOME/.kube/config.
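A quick way to see what kubectl is currently pointing at (with no config loaded it prints an essentially empty skeleton):
kubectl config view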
You must create or copy a valid config file to solve this problem.
For example if you are using kubeadm you can solve this with:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, you can export the KUBECONFIG variable like this:
export KUBECONFIG=/etc/kubernetes/admin.conf
I really don't know much about kubectl... But the various reasons for a connection refused to localhost that I know of are as follows:
1) Make sure you can resolve and ping your localhost by IP (127.x.x.x) and also as "localhost" if using DNS or a hosts file.
2) Make sure the service trying to access localhost has enough permissions to run as root if it needs to.
3) Check the ports with netstat and look for the appropriate flags you need amongst the "plantu" flags; look up the meaning of each flag as it applies to your situation (see the example after this list). If the service you are trying to access on localhost is listening on that port, netstat will let you know.
4) Check if you have admin or management settings in your application that need permission to access localhost in the application's configuration parameters.
5) Per the part of the message that says "did you specify the right host or port": kubectl may not be configured to use localhost but rather your primary DNS server hostname or IP. Check which host is configured to run your application and, as above, check for the appropriate ports. You can use telnet to check the port and troubleshoot further from there.
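For point 3, a concrete check looks something like this (8080 because that's the port in the error message; the -plantu flags are program, listening, all, numeric, tcp, udp):
sudo netstat -plantu | grep 8080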
My two cents!
Creating the cluster before running kubectl worked for me:
gcloud container clusters create k0
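If kubectl still can't see the new cluster, fetching the credentials points it at it (assuming your default zone/region is configured for gcloud):
gcloud container clusters get-credentials k0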
If swap is not disabled, the kubelet service will not start on the masters and nodes (for Platform9 Managed Kubernetes version 3.3 and above).
Turn off swap memory by running the command below:
sudo swapoff -a
To make it permanent,
go to /etc/fstab and comment out the swap line.
This works well.
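If you prefer a one-liner for the fstab edit, something like this comments out the swap entry (double-check the pattern against your fstab first):
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab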
I'm a newbie in k8s and came here while working with microk8s,
wanting to use kubectl with the microk8s cluster.
Run the command below:
microk8s config > ~/.kube/config
I got the solution from this link:
https://microk8s.io/docs/working-with-kubectl
Overall, kubectl needs a config file to work with a cluster (here, the microk8s cluster).
Thanks
I also experienced the same issue when I executed kubectl get pods. The reason was that Docker Desktop was not running. I started Docker Desktop, checked that Docker and Kubernetes were running, then ran kubectl get pods again and got the same output. Then I started minikube with minikube start, and everything went back to normal.
Try running it with sudo:
sudo kubectl run....
