http_proxy and https_proxy setup for minikube - docker

I am a beginner and I don't know how to fill in the proxy name and port, or where to get those values, to solve the issue below.
After running minikube start --driver=docker and doing some googling, I found out that I need to add proxy environment variables so that the pods can access the internet.
But how do I know what to put in place of the proxy name and port?
I have scrolled through all the related GitHub issues, and also tried it in a virtual machine and with Hyper-V, but the issue still persists.
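From what I have gathered, the setup is supposed to look roughly like this, assuming a corporate proxy at proxy.example.com on port 3128 (placeholder values; the real ones normally come from your network administrator or from the proxy configured in your OS/browser settings):

# Hypothetical proxy host and port; replace with your organisation's values.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# Keep local, Docker, and in-cluster traffic off the proxy.
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24
# minikube picks these variables up and passes them to the Docker engine it manages.
minikube start --driver=docker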

Related

Unable to reach registry-1.docker.io from Kind cluster node on WSL2

I am setting up an Airflow k8s cluster using a kind deployment on a WSL2 setup. When I execute the standard helm install $RELEASE_NAME apache-airflow/airflow --namespace $NS it fails. Further investigation shows that the cluster worker node cannot connect to registry-1.docker.io.
Error log for one of the image pulls:
Failed to pull image "redis:6-buster": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/redis:6-buster": failed to resolve reference "docker.io/library/redis:6-buster": failed to do request: Head "https://registry-1.docker.io/v2/library/redis/manifests/6-buster": dial tcp: lookup registry-1.docker.io on 172.19.0.1:53: no such host
I can access all other websites from this node, e.g. google.com, yahoo.com, merriam-webster.com, etc.; even docker.com works. This issue is very specific to registry-1.docker.io.
All the searches and links seem to be about general internet connectivity issues.
Current solution:
If I manually change /etc/resolv.conf on the kind worker node to point to the nameserver IP from /etc/resolv.conf of the WSL2 Debian host, then it works.
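For reference, the manual fix amounts to something along these lines (the node name and the nameserver IP are placeholders; the real ones come from docker ps and from the WSL2 host's /etc/resolv.conf):

# Placeholder node name and nameserver IP.
docker exec kind-worker sh -c 'echo "nameserver 172.22.0.1" > /etc/resolv.conf'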
But this is a dynamic cluster and node, and I cannot do this every time. I am currently searching for a way to make it part of the cluster configuration, so that it works just by running kind create cluster and one can use kubectl or helm by default.
However, I am more interested in figuring out why this network setup fails specifically for registry-1.docker.io. Is there some configuration that can be done to avoid changing the DNS to the host IP or Google DNS? The current network configuration seems to work for pretty much the rest of the internet.
I have documented all the steps and investigation details, including some of the network configuration, in a GitHub repository. If you need any further information to help solve the issue, please let me know. I will keep updating the GitHub documentation as I make progress.
Setup:
Windows 11 with WSL2 without any Docker desktop
WSL2 image : Debian bullseye (11) with docker engine on linux
Docker version : 20.10.2
Kind version : 0.11.1
Kind image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9
I am not sure if this is an answer or not. After spending 2 days trying to find a solution, I thought of changing the node image version. The Kind release page lists 1.21 as the latest image for kind version 0.11.1. I had problems with 1.21 even starting the cluster, and 1.20 faced this strange DNS issue, so I went with 1.23. It all worked fine with this image.
However, to my surprise, when I changed the cluster configuration back to 1.20, the DNS issue was gone. So I do not know what changed due to the switch of the image, but I cannot reproduce the issue again! Maybe it will help someone else.
I think I have found the correct workaround for this bug: switching iptables to legacy mode fixed it for me.
https://github.com/docker/for-linux/issues/1406#issuecomment-1183487816
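On Debian-based systems that switch typically looks something like this (a sketch; binary paths can differ by distribution):

# Point the iptables alternatives at the legacy backend (Debian/Ubuntu).
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# Restart Docker and recreate the kind cluster afterwards.
sudo systemctl restart docker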

SSH config for Jenkins when using helm and k8s

So I have a k8s cluster and I am trying to deploy Jenkins using the following repo https://github.com/jenkinsci/helm-charts.
The main issue is that I am working behind a proxy, and when git tries to pull (using the SSH protocol) it fails.
I am able to get around this by building my own Docker image from the provided one, installing socat, and using the following .ssh/config in the container:
Host my.git.repo
  # LogLevel DEBUG
  StrictHostKeyChecking no
  # Tunnel the SSH connection to the Git server through the HTTP proxy via socat
  ProxyCommand /usr/bin/socat - PROXY:$HOST_PROXY:%h:%p,proxyport=3128
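For context, the image build is roughly along these lines (the base image, file names, and tag here are illustrative, not the exact ones I use):

# Illustrative Dockerfile: add socat and the ssh config on top of a stock agent image.
cat > Dockerfile <<'EOF'
FROM jenkins/inbound-agent:latest
USER root
# socat provides the HTTP CONNECT tunnel used by the ProxyCommand above.
RUN apt-get update && apt-get install -y --no-install-recommends socat \
    && rm -rf /var/lib/apt/lists/*
# ssh_config is the file shown above, saved next to this Dockerfile.
RUN mkdir -p /home/jenkins/.ssh && chown jenkins:jenkins /home/jenkins/.ssh
COPY --chown=jenkins:jenkins ssh_config /home/jenkins/.ssh/config
USER jenkins
EOF
docker build -t jenkins-agent-proxy .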
Is there a better way to do this? I was hoping to use the provided image and perhaps find a plugin that allows something similar, but everywhere I look I can't seem to find anything.
Thanks for the help.

Why can't I access my container from my internal or external IP within GAE?

I created a very simple docker practice script (Github link), and executed it via the Docker application on my macOS computer without any problems. I wanted to test it on Google Cloud's Compute Engine, so I created an instance and re-built the docker image & container via the SSH browser (using Debian GNU/Linux).
Everything seems to work fine, except when I try to access the container via localhost/external IP. Both give me the response "Site can't be reached".
I've adjusted the firewall settings many times and ended up with the same results as the screenshot provided. I ended up resetting the firewall settings to their defaults, just so I could bring this question here. Here are the default settings.
What makes me think I'm missing something is the fact that I can use curl http://localhost:5000 (the port I've chosen for exposure), and I get this as a response, which is all I had set the page to say once it's launched.
What am I missing that's causing the container to not be viewable via localhost/external IP?
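For completeness, the two pieces I understand need to line up here are the port publishing on the VM and a matching firewall rule; roughly (the image and rule names below are placeholders):

# Publish the container port on the VM (image name is a placeholder).
docker run -d -p 5000:5000 my-practice-image
# Allow inbound TCP traffic to that port on the instance (rule name is arbitrary).
gcloud compute firewall-rules create allow-port-5000 --allow=tcp:5000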

Access to internal infrastructure from Kubernetes

If I run Docker (Docker for Desktop, 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several ways, including switching the NAT from DockerNAT to Default Switch. That doesn't work without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start from the basics, following the official documentation:
Please make sure you meet all the prerequisites and that all other instructions were followed.
You can also use this guide; it has more details pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is the one of the Docker engine's host and not the one on which the client is running.
Try adding tmpnginx in docker-compose.
Try deleting the pki directory in C:\ProgramData\DockerDesktop (first stop Docker, delete the directory, and then start Docker). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
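Roughly, from an elevated Command Prompt (the service name can differ between Docker Desktop versions):

:: Stop Docker, remove the stale pki directory, start Docker again.
net stop com.docker.service
rmdir /s /q "C:\ProgramData\DockerDesktop\pki"
net start com.docker.service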
Please let me know if that helped.

Cannot get HDP public repo in Ambari UI

I have cloned the following project
https://github.com/sequenceiq/docker-ambari.
I have successfully managed to build the 3 ambari-docker containers and now I am trying to select an HDP version through the Ambari UI.
My problem is that each time it tries to get a public repo, the request returns with a 400 code (could not access base URL).
I tried to curl a repo from inside the ambari-server container but it returns could not resolve host.
I am running this inside a VM (Ubuntu 18.04) behind a company firewall.
I have no problem with curl inside the VM, but it does not work in the container.
I have already tried whatever I could find on proxy editing for docker, ambari, yum, etc., and since I am new to this I don't know what else to look for.
I expect to be able to choose a public repo to continue with the cluster installation wizard.
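For reference, the kind of proxy editing I have been attempting inside the containers looks roughly like this (the container name and proxy address are placeholders):

docker exec -it amb-server bash    # placeholder container name
# inside the container:
echo 'proxy=http://proxy.example.com:3128' >> /etc/yum.conf
echo 'export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128"' >> /var/lib/ambari-server/ambari-env.sh
ambari-server restart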
For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable the firewall (firewalld), as follows:
systemctl disable firewalld
service firewalld stop
