Kubectl logs returning TLS handshake timeout - Docker

kubectl -n namespace1 logs -f podname
returns the following error.
Error from server: Get https://ipaddress:10250/containerLogs/namespace1/podname-xxkb9/podname?follow=true: net/http: TLS handshake timeout
Proxies are unset.
unset http_proxy
unset https_proxy
But the issue still occurs.
Could anyone help me with this, please?

What I know for sure is that this is not a certificate issue. It is an API version mismatch problem, or something else related to the API. There were a few discussions about this on Stack Overflow in the past; I'll attach them at the end. I also ran into the same thing a few years ago, and at that time I resolved it with a kubeadm upgrade as well.
First of all, check the real error message by running kubectl logs with -v9 for the maximum verbosity level.
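For example, reusing the namespace and pod name from the question:
kubectl -n namespace1 logs -f podname --v=9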
Most probably you have already checked other commands like kubectl get pods, kubectl get nodes, etc. None of those commands require the apiserver to contact the kubelet, while kubectl logs does.
And @Kamos asked you exactly the right question about exec/attach/port-forward: there is a 99% chance they don't work for you either, because they also require the apiserver to contact the kubelet directly.
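For example (the local:remote ports in the port-forward are just placeholders):
kubectl -n namespace1 exec -it podname-xxkb9 -- sh
kubectl -n namespace1 port-forward podname-xxkb9 8080:80
If these hang or fail with the same TLS handshake timeout, the apiserver-to-kubelet path is the problem.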
There is a good chance you will fix the issue by following the official Upgrading kubeadm clusters guide.
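A rough sketch of the upgrade on a control-plane node (v1.x.y is a placeholder; follow the official guide for the exact package steps on your distro):
kubeadm upgrade plan
kubeadm upgrade apply v1.x.y        # replace v1.x.y with the target version shown by 'upgrade plan'
# then upgrade the kubelet/kubectl packages and restart the kubelet
systemctl daemon-reload && systemctl restart kubelet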
References:
1. Kubernetes - net/http: TLS handshake timeout when fetching logs (BareMetal)
2. Kubernetes logs command TLS handshake timeout (answer 1)
3. Kubernetes logs command TLS handshake timeout (answer 2)
4. kubectl logs failed with error: net/http: TLS handshake timeout #71343

Reinstalling Kubernetes without a proxy configured resolved this issue.
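A leftover proxy can also hide in the Docker daemon's systemd drop-in rather than in the shell environment; something like the following would reveal it (the drop-in path shown is the conventional one and may not exist on every setup):
systemctl show --property=Environment docker
cat /etc/systemd/system/docker.service.d/http-proxy.conf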

Related

Error on etcd health check while setting up RKE cluster

I'm trying to set up an RKE cluster. The connection to the nodes goes well, but when it starts to check etcd health it returns:
failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate
If you are trying to upgrade RKE and facing this issue, it could be because the kube_config_<file>.yml file is missing from the local directory when you perform rke up.
A similar issue was reported and reproduced in this git link. Can you try the workaround and reproduce it using the steps provided in the link, and let me know if it works?
Refer to this recent SO answer and the docs for more information.
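For illustration, assuming the cluster config file is cluster.yml (so RKE expects kube_config_cluster.yml and cluster.rkestate next to it in the working directory):
ls cluster.yml cluster.rkestate kube_config_cluster.yml
rke up --config cluster.yml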

cert-manager does not issue certificate after upgrading to AKS k8s 1.24.6

I have an automatic setup with scripts and helm to create a Kubernetes Cluster on MS Azure and to deploy my application to the cluster.
First of all: everything works fine when I create a cluster with Kubernetes 1.23.12. After a few minutes everything is installed, I can access my website, and there is a certificate issued by Let's Encrypt.
But when I delete this cluster completely and reinstall it, changing only the Kubernetes version from 1.23.12 to 1.24.6, I don't get a certificate any more.
I see that the acme challenge is not working. I get the following error:
Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://my.hostname.de/.well-known/acme-challenge/2Y25fxsoeQTIqprKNR4iI4X81jPoLknmRNvj9uhcOLk': Get "http://my.hostname.de/.well-known/acme-challenge/2Y25fxsoeQTIqprKNR4iI4X81jPoLknmRNvj9uhcOLk": dial tcp: lookup my.hostname.de on 10.0.0.10:53: no such host
After some time the error message changes to:
'Error accepting authorization: acme: authorization error for my.hostname.de:
400 urn:ietf:params:acme:error:connection: 20.79.77.156: Fetching http://my.hostname.de/.well-known/acme-challenge/2Y25fxsoeQTIqprKNR4iI4X81jPoLknmRNvj9uhcOLk:
Timeout during connect (likely firewall problem)'
10.0.0.10 is the cluster IP of kube-dns in my Kubernetes cluster. When I look at "Services and Ingresses" in the Azure portal, I can see the ports 53/UDP;53/TCP for the cluster IP 10.0.0.10.
And I can see there that 20.79.77.156 is the external IP of the ingress-nginx-controller (Ports: 80:32284/TCP;443:32380/TCP).
So I do not understand why the acme challenge cannot be performed successfully.
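(For reference, the failing in-cluster lookup can be reproduced with a throwaway pod; the busybox image tag here is just an example:)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my.hostname.de 10.0.0.10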
Here is some information about the version numbers:
Azure Kubernetes 1.24.6
helm 3.11
cert-manager 1.11.0
ingress-nginx helm-chart: 4.4.2 -> controller-v1.5.1
I have tried to find the same error on the internet, but it doesn't come up often and the solutions do not seem to fit my problem.
Of course I have read a lot about k8s 1.24.
It is not a dockershim problem, because I have tested the cluster with the Detector for Docker Socket (DDS) tool.
I have updated cert-manager and ingress-nginx to new versions (see above)
I have also tried it with Kubernetes 1.25.4 -> same error
I have found this on the cert-manager Website: "cert-manager expects that ServerSideApply is enabled in the cluster for all versions of Kubernetes from 1.24 and above."
I think I understood the difference between Server Side Apply and Client Side Apply, but I don't know if and how I can enable it in my cluster and if this could be a solution to my problem.
Any help is appreciated. Thanks in advance!
I've solved this myself recently, try this for your ingress controller:
ingress-nginx:
  rbac:
    create: true
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
k8s 1.24+ is using a different endpoint for health probes.
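The same annotation can also be set when installing the chart directly (the release and namespace names here are assumptions; the snippet above nests the values under an ingress-nginx: key as for an umbrella chart):
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz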

Unable to connect to OpenVPN client in Docker

I have tried to set up an OpenVPN client under docker, using the dperson/openvpn-client image. I get the following error:
UDPv6: Address not available (code=99)
When googling this problem, I've come across this discussion, but I'm not sure how to look at the client and server logs separately.
I have the full logs on pastebin here.
I am able to get the IP of my home address inside the container, but not the IP from the VPN. I appreciate any help!
The problem here is that the TLS handshake fails.
I extracted this from your logs:
TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
TLS Error: TLS handshake failed
Make sure your server is set up correctly.
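For example (the paths are placeholders), raising the client's verbosity and watching the server log at the same time usually shows which side drops the handshake:
openvpn --config /vpn/client.ovpn --verb 4
tail -f /var/log/openvpn/openvpn.log     # on the server; the log path varies by setup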

installing dashboard on Kubernetes

Hello, world.
I'm trying to install the dashboard in Kubernetes with this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
The reply looks like this:
Failed to pull image "kubernetesui/dashboard:v2.0.0-beta4": rpc error: code = Unknown desc = error pulling image configuration: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/68/6802d83967b995c2c2499645ede60aa234727afc06079e635285f54c82acbceb/data?verify=1568998309-bQcnrEV6vQpN4irzUtO2FEIv%2FkE%3D: dial tcp: lookup production.cloudflare.docker.com on 192.168.73.1:53: read udp 192.168.73.91:35778->192.168.73.1:53: i/o timeout
And a simple ping command said:
ping: unknown host https://production.cloudflare.docker.com
After that I checked the domain with the downforeveryoneorjustme service, and it told me that the server is down:
It's not just you! production.cloudflare.docker.com is down.
Googling the problem suggested that I need to configure the Docker proxy, but I have no proxy in my setup.
https://docs.docker.com/network/proxy/#configure-the-docker-client
Any thoughts? Thank you in advance.
First, check the Cloudflare status page:
There were multiple "DNS delays" and "Cloudflare API service issues" in the past few hours, which might have an effect on your installation.
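Once the outage clears, a quick way to confirm things are back is to check that the registry hostname resolves and to retry the pull by hand with the image from the error message:
nslookup production.cloudflare.docker.com
docker pull kubernetesui/dashboard:v2.0.0-beta4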

minikube start get error: "k8s-app=kube-proxy connection refused"

On Linux, with minikube v0.34.1, when I run minikube start --logtostderr, I get the following error:
I0227 18:25:12.625477 13250 kubernetes.go:121] error getting Pods with label selector "k8s-app=kube-proxy" [Get https://192.168.99.102:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy: dial tcp 192.168.99.102:8443: connect: connection refused]
And none of the following environment variables is set: $HTTP_PROXY, $HTTPS_PROXY, $NO_PROXY.
After searching via Google and checking the following posts, it is still unsolved:
minikube may fail with older VM's and apiserver.Authorization.Mode=RBAC: kube-proxy timeout #2948
HTTP_PROXY set: error getting Pods with label selector "k8s-app=kube-proxy" ... kube-proxy: Service Unavailable #2726
The following actions have been tried, with no luck:
minikube delete; minikube start
rm -rf ~minikube/
As a newbie to K8s, I really don't understand what this means. Any ideas?
Update - seems solved.
The solution has been moved to the answer section.
I just moved the solution posted by Eric Wang from the question to the answer section:
With the following steps it seems to be resolved:
Optionally, make a backup of ~/.minikube/cache/.
Otherwise, you will need to download those caches again.
Remove config & data via rm -rf ~/.minikube/
mkdir ~/.minikube/
Then restore .minikube/cache/, if you did a backup.
minikube stop
minikube delete
minikube start --logtostderr
Tips:
The --logtostderr flag is useful to get error info on the console.
Without it, the process can get stuck there without giving you any information.
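Put together as one sequence, in the same order as the steps above (the backup path is arbitrary):
cp -r ~/.minikube/cache ~/minikube-cache-backup    # optional backup
rm -rf ~/.minikube
mkdir ~/.minikube
cp -r ~/minikube-cache-backup ~/.minikube/cache    # restore, if you made the backup
minikube stop
minikube delete
minikube start --logtostderr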
