Can't access Ingress from Kubernetes node - docker

I have a CentOS machine where I created a Kubernetes cluster:
minikube start --driver=docker --addons ingress.
Inside the cluster, I installed a Harbor instance using a Helm chart using:
helm install harbor-release harbor/harbor --set expose.type=ingress
In the CentOS machine, I added an entry to my /etc/hosts pointing to the new ingress:
echo "$(minikube ip) core.harbor.domain" >> /etc/hosts
And with this, I can access Harbor from this machine. I can log in using Firefox and I'm able to push some custom images:
docker pull python
docker tag docker.io/python core.harbor.domain:443/library/python:latest
docker login https://core.harbor.domain --username admin --password Harbor12345
docker push core.harbor.domain:443/library/python:latest
And we are all happy. My problem starts when I try to deploy another Helm chart using those images. Kubernetes is not able to pull the images and times out. After some attempts, I found out that my minikube node is not able to connect to Harbor.
I tried adding different IPs to /etc/hosts, like 127.0.0.1, the minikube IP, etc., without any results. Docker can never do a pull. If I use 127.0.0.1 I'm able to do a curl -k https://core.harbor.domain but not a docker login.
I also tried adding core.harbor.domain to Docker's insecure registries, but without any luck.
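For reference, a minimal sketch of making the name resolvable inside the minikube node itself - the pull is performed by the node's Docker daemon, not the host's, so the host's /etc/hosts never applies. The --insecure-registry flag is a real minikube option, but treating it as the fix for this exact case is an assumption:
# Map the hostname inside the node; $(minikube ip) is expanded on the host,
# and the ingress addon listens on ports 80/443 of that node IP:
minikube ssh -- "echo '$(minikube ip) core.harbor.domain' | sudo tee -a /etc/hosts"
# The insecure-registry flag only applies at cluster creation, so the
# cluster has to be recreated for it to take effect:
minikube delete
minikube start --driver=docker --addons ingress --insecure-registry="core.harbor.domain:443"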
Maybe I'm missing something and I shouldn't be able to access the ingress URL from my minikube node in the first place.
What could I be doing wrong?
Do you think it's a good approach to put Harbor and the application pods in the same cluster?

Related

How-To: Access local minikube cluster, with kubectl running inside VSCode Development Container?

I have a minikube cluster running on Windows WSL2, and I have a Dev Container (https://code.visualstudio.com/docs/remote/create-dev-container) running my React application and Kubernetes CLI tools. My goal is to containerize the application and run it on the minikube cluster.
So now I have exposed minikube's local configuration and certificates to my Dev Container, and I am using that as the default KUBECONFIG. I have a deployment and Docker image ready, so the next step is to apply the deployment and have it running on the cluster.
When I am running a kubectl command inside the Dev Container, I am getting error message like this:
The connection to the server 127.0.0.1:51515 was refused - did you specify the right host or port?
When I inspect the minikube container, I see that it's listening only on localhost:
gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 3 hours ago Up 3 hours 127.0.0.1:58892->22/tcp, 127.0.0.1:58893->2376/tcp, 127.0.0.1:58895->5000/tcp, 127.0.0.1:58896->8443/tcp, 127.0.0.1:58894->32443/tcp minikube
So as far as I know, these requests fail because a request from the Dev Container is not considered a localhost request (I am able to ping localhost). I am running the Dev Container with the --network=host flag.
So at least one way to get this setup working is to bind minikube's ports to 0.0.0.0 instead of localhost - is there any other way? How could I get that 0.0.0.0 bind working? I have a feeling this could be related to Docker Desktop settings - that I need to somehow change some default from 127.0.0.1 to 0.0.0.0.
Running minikube with this command didn't do the trick.
minikube start --driver=docker --listen-address='0.0.0.0'
Versions:
Docker Desktop 4.6.0 (75818)
Docker 20.10.13, build a224086
minikube v1.24.0
kubectl 1.21.5
Thank you in advance!
EDIT:
I also tried different alternatives to localhost, without changing the minikube configuration, using the same port as on the host computer - these didn't do the trick either. I can, however, ping every address from the container:
kubernetes.docker.internal, host.docker.internal, 192.168.49.2 (Minikube's IP on localhost), minikubeCA, control-plane.minikube.internal, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost
Here is my minikube KUBECONFIG:
- cluster:
    certificate-authority-data: Removed for Security.
    extensions:
    - extension:
        last-update: Mon, 28 Mar 2022 17:30:48 EEST
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: cluster_info
    server: https://localhost:58896
  name: minikube
I managed to solve this. Docker Desktop requires using host.docker.internal instead of localhost inside the kubeconfig YAML.
The problem is that this address is not covered by minikube's certificate, so kubectl commands need the --insecure-skip-tls-verify flag. For example
kubectl get nodes -A --insecure-skip-tls-verify
works with the setup defined above.
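Pulling those pieces together, a hedged sketch of the working setup (port 58896 is taken from the kubeconfig above and will differ per machine):
# Point the kubeconfig's cluster entry at the host gateway instead of localhost:
kubectl config set-cluster minikube --server=https://host.docker.internal:58896
# host.docker.internal is not among the certificate's SANs, so skip verification:
kubectl get nodes -A --insecure-skip-tls-verify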
I also found some documentation: https://github.com/Microsoft/vscode-dev-containers/tree/main/containers/kubernetes-helm

Connecting to a Kubernetes kops pod using the Docker daemon

I created a Kubernetes cluster with kops (on AWS), and I want to access one of my nodes as root. According to this post, it's possible only with a Docker command.
When I type docker image ls I get nothing. When I was using minikube I solved this by running minikube docker-env and copying the last line of its output into a new CMD window:
@FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i
(I'm using Windows 10.) Using the above procedure, after typing docker image ls or docker ps I was able to see all the minikube pods. Is there any way to do the same for pods created with kops?
I'm able to do it by connecting to a Kubernetes node, installing Docker on it, and then connecting to the pod with the -u root switch, but I wonder whether it's possible to do the same from the host machine (Windows 10 in my case).
It's a bit unclear what you're trying to do, so I'm going to give some general info here based on this scenario: you've created a K8s cluster on AWS using kops.
I want to access one of my AWS nodes as root
This has nothing to do with kops, nor with Docker. This is basic AWS management. You need to check your AWS management console to get all the info needed to connect to your node.
I want to see all the Docker images from my Windows laptop
Again, this has nothing to do with kops. kops is a Kubernetes distribution. In Kubernetes, the smallest unit of computing that can be managed is the pod. You cannot directly manage Docker containers or images with Kubernetes.
So if you want to see your Docker images, you'll need to somehow connect to your AWS node and then execute
docker image ls
In fact, that's what you're doing with your minikube example. You're just executing the docker command on the VM managed by minikube.
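As an illustration, a hedged sketch of doing that on a kops-provisioned node over SSH; the user name and key path are assumptions (kops typically uses the SSH public key supplied at cluster creation, and the default user depends on the node's AMI):
# Connect to the node (user and key are placeholders for your own setup):
ssh -i ~/.ssh/id_rsa ubuntu@<node-public-ip>
# Then, on the node itself:
docker image ls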
More info on what's a pod here
I want to see all the pods created with kops
Well, assuming that you've successfully configured your system to access AWS with kops (more info on that here), you'll just have to execute any kubectl command directly. For example, to list all the pods in the kube-system namespace:
kubectl -n kube-system get po
Hope this helps !
That is not possible. A pod is an abstraction created and managed by Kubernetes. The Docker daemon has no idea what a pod is. You can only see the containers using the docker command, but then again, you won't be able to tell which container is associated with which pod.
Answered by Marc ABOUCHACRA

Kubernetes pods not starting, running behind a proxy

I am running Kubernetes on minikube behind a proxy, so I set the env variables (HTTP_PROXY & NO_PROXY) for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf.
I was able to do a docker pull, but when I run the example below
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
the pod never starts and I get the error
desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\"
docker pull gcr.io/google_containers/echoserver:1.4 works fine
I ran into the same problem and am sharing what I learned after making a couple of wrong turns. This is with minikube v0.19.0. If you have an older version you might want to update.
Remember, there are two things we need to accomplish:
Make sure kubectl does not go through the proxy when connecting to minikube on your desktop.
Make sure that the docker daemon in minikube does go through the proxy when it needs to connect to image repositories.
First, make sure your proxy settings are correct in your environment. Here is an example from my .bashrc:
export {http,https,ftp}_proxy=http://${MY_PROXY_HOST}:${MY_PROXY_PORT}
export {HTTP,HTTPS,FTP}_PROXY=${http_proxy}
export no_proxy="localhost,127.0.0.1,localaddress,.your.domain.com,192.168.99.100"
export NO_PROXY=${no_proxy}
A couple things to note:
I set both lower and upper case. Sometimes this matters.
192.168.99.100 comes from minikube ip. You can add it after your cluster is started (see the sketch below).
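A minimal sketch of that step, assuming the cluster is already up:
# Append the live cluster IP to both proxy-exclusion variables:
export no_proxy="${no_proxy},$(minikube ip)"
export NO_PROXY="${no_proxy}"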
OK, so that should take care of kubectl working correctly. Now we have the next issue, which is making sure that the Docker daemon in minikube is configured with your proxy settings. You do this, as mentioned by PMat, like this:
$ minikube delete
$ minikube start --docker-env HTTP_PROXY=${http_proxy} --docker-env HTTPS_PROXY=${https_proxy} --docker-env NO_PROXY=192.168.99.0/24
To verify that these settings have taken effect, do this:
$ minikube ssh -- systemctl show docker --property=Environment --no-pager
You should see the proxy environment variables listed.
Why do the minikube delete? Because without it the start won't update the Docker environment if you had previously created a cluster (say, without the proxy information). Maybe this is why PMat did not have success passing --docker-env to start (or maybe it was an older version of minikube).
I was able to fix it myself.
I had Docker on my host, and there is also Docker inside Minikube.
The Docker inside Minikube had issues, so I had to SSH into the minikube VM and follow this post:
Cannot download Docker images behind a proxy
and now it all works.
There should be a better way of doing this. When starting minikube I passed the Docker env variables like below, which did not work:
minikube start --docker-env HTTP_PROXY=http://xxxx:8080 --docker-env HTTPS_PROXY=http://xxxx:8080
--docker-env NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8 --extra-config=kubelet.PodInfraContainerImage=myhub/pause:3.0
I had to set the same env variables inside the Minikube VM to make it work.
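For reference, a hedged sketch of what setting those variables inside the VM can look like, using the same systemd drop-in path mentioned in the question (the proxy address is a placeholder):
minikube ssh
# then, inside the VM:
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nEnvironment="HTTP_PROXY=http://xxxx:8080" "HTTPS_PROXY=http://xxxx:8080"\n' \
  | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
sudo systemctl daemon-reload && sudo systemctl restart docker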
It looks like you need to add the minikube ip to no_proxy:
export NO_PROXY=$no_proxy,$(minikube ip)
see this thread: kubectl behind a proxy

Cannot log in to Nexus 3 Docker registry

I have set up an AWS EC2 instance with Docker, Nexus3, and a Docker repository in Nexus with HTTP port 8123 and all the necessary settings so that I can see it from Docker. After lengthy research I added the right options to my Docker config file, so that when I run docker info I can see my insecure registry set to the right IP address. I can access the URL of the Nexus manager from my machine without any problems and I can create repositories, etc.
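For context, a hedged sketch of the kind of daemon configuration described above; the IP placeholder is kept from the question, and the file path is the standard Docker location on Linux:
# Register the registry as insecure, then restart so the daemon picks it up:
sudo tee /etc/docker/daemon.json <<'EOF'
{ "insecure-registries": ["my_ip_address:8123"] }
EOF
sudo systemctl restart docker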
I then try to do a docker login from within my EC2 instance like this:
docker login -u admin -p admin123 my_ip_address:8123
And after a while I get this:
Error response from daemon: Get http://my_ip_address/v1/users/: dial tcp my_ip_address:8123: i/o timeout
I have tried so many things to fix this and nothing seems to work. So far I have spent an entire day trying to understand why docker login cannot see my Nexus3 registry.
Any ideas?

Give access to Docker Swarm cluster

Okay, here is my situation:
I created a Docker Swarm cluster using Docker Machine. I can deploy any container, etc., so basically everything is working fine. My question now is how to give someone else access to the cluster. I want other people to deploy containers on that cluster using docker-compose.
Docker machine configures the docker engine on each node to be secured using TLS:
https://docs.docker.com/engine/security/https/
The client configuration can be seen by running the docker-machine config command; for example, the following settings are used to access the remote Docker host:
--tlsverify
--tlscacert="~/.docker/machine/certs/ca.pem"
--tlscert="~/.docker/machine/certs/cert.pem"
--tlskey="~/.docker/machine/certs/key.pem"
-H=tcp://....
It's the files under ~/.docker/machine/certs that are needed by other users who want to connect to your swarm.
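As a hedged illustration, another user could point their own Docker client at the swarm using copies of those files; docker-compose honours the same environment variables. The certificate directory, manager host name, and port below are assumptions:
# Environment-variable equivalent of the flags shown above:
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/swarm-certs              # copies of ca.pem, cert.pem, key.pem
export DOCKER_HOST=tcp://<swarm-manager-ip>:3376   # address/port from "docker-machine config" -H output
docker info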
I expect that docker will eventually create some form of user authentication and authorization.
