kubectl: Connection to server was refused - docker

When I run kubectl run ... or any command I get an error message saying
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What exactly does this error mean, and how do I resolve it?

In my case, working with minikube, I simply had not started minikube. Starting it with
minikube start
fixed it.

In most cases, this means a missing kubeconfig file: kubectl falls back to its default server address (localhost:8080) when there is no $HOME/.kube/config.
You must create or copy a valid config file to solve this problem.
For example, if you are using kubeadm, you can solve this with:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, you can export the KUBECONFIG variable like this:
export KUBECONFIG=/etc/kubernetes/admin.conf
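As a quick sanity check before copying anything, you can test whether a kubeconfig is present at all. A sketch (the path below is the default location kubectl looks at; the message is just illustrative):

```shell
# Check whether kubectl has any config to work with; without one it
# falls back to localhost:8080, which produces the "connection refused" error.
CFG="${KUBECONFIG:-$HOME/.kube/config}"
if [ ! -s "$CFG" ]; then
  echo "no kubeconfig at $CFG - kubectl will fall back to localhost:8080"
fi
```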

I don't know much about kubectl, but the usual reasons I know of for a connection refused to localhost are as follows:
1) Make sure you can resolve and ping your local host, both by IP (127.x.x.x) and by the name "localhost" if you are using DNS or a hosts file.
2) Make sure the service trying to bind to localhost has enough permissions; some services need to run as root.
3) Check the ports with netstat, using whichever of the -plantu flags you need (look up the meaning of each flag as it applies to your situation). If the service you are trying to reach on localhost is listening on that port, netstat will show it.
4) Check whether your application has admin or management settings that need permission to access localhost in its configuration parameters.
5) The message "did you specify the right host or port" suggests that kubectl may be configured to connect to localhost instead of your actual server's hostname or IP. Check which host your application is configured to use and, as above, which ports; you can use telnet to test the port and troubleshoot from there.
My two cents!
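For point 3, a sketch of what checking the port might look like (8080 is the port from the error message; the flag set is the mnemonic mentioned above):

```shell
# -p show the owning process, -l listening sockets, -a all sockets,
# -n numeric addresses, -t TCP, -u UDP
sudo netstat -plantu | grep ':8080'
# If nothing is printed, no service is listening on port 8080 on this host.
```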

Creating a cluster before running kubectl worked for me:
gcloud container clusters create k0

If swap is not disabled, the kubelet service will not start on the masters and nodes (for Platform9 Managed Kubernetes version 3.3 and above).
Turn off swap by running:
sudo swapoff -a
To make it permanent, open /etc/fstab and comment out the swap line.
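Commenting the fstab entry can also be scripted. A sketch (back up first, and review the result before rebooting):

```shell
sudo swapoff -a                      # turn swap off now
sudo cp /etc/fstab /etc/fstab.bak    # keep a backup
# Prepend '#' to any uncommented line mentioning swap:
sudo sed -i '/\bswap\b/ s/^\([^#]\)/#\1/' /etc/fstab
```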

I'm a newbie in k8s; I came here while working with microk8s and wanting to use kubectl with the microk8s cluster.
Run the command below:
microk8s config > ~/.kube/config
I got the solution from this link:
https://microk8s.io/docs/working-with-kubectl
Overall, kubectl needs a config file to work with a cluster (here, the microk8s cluster).
Thanks
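One detail worth noting: the redirect fails if ~/.kube does not exist yet, so create the directory first. A sketch:

```shell
mkdir -p ~/.kube                     # ensure the target directory exists
microk8s config > ~/.kube/config     # export the cluster's kubeconfig
kubectl get nodes                    # should now reach the microk8s cluster
```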

I also experienced the same issue when I executed kubectl get pods. The reason was that Docker Desktop was not running. I started Docker Desktop, confirmed that Docker and k8s were running, then ran kubectl get pods again:
same output. Then I started minikube with minikube start, and everything went back to normal.

Try running the command with sudo:
sudo kubectl run ...

Related

docker compose with remote context gives ssh error connection refused

I have a problem with docker-compose when I try to run this command from my local machine on my remote server using docker context:
docker-compose --context remote up -d
I get this:
ssh: connect to host xxx.xxx.xxx.xxx port 22: Connection refused
I only get this error with this command; everything else works fine (like the ps or logs commands). Also, a regular ssh connection works fine, so I don't think there is anything wrong with my ssh configuration.
Running the command with verbose gives no useful information, as far as I can see.
I had this exact issue. These 3 commands would run fine:
docker --context remote ps
docker --context remote run hello-world
ssh root@my-ubuntu-host
However, with more complicated docker-compose.yml files, I would get a connection refused error. Running with sudo did not fix the issue for me.
Turns out, ufw (which was pre-installed and pre-configured on my digital ocean droplet), was rate-limiting new SSH connections. And it seems docker-compose doesn't run everything in a single session, and instead runs each command separately.
The solution for me was to remove the rate-limit on SSH connections.
ufw delete limit 22/tcp
ufw reload
I'm about a year too late with this answer but hopefully it will help someone out there.
For me, using sudo fixed the issue. I've no idea why, but it worked.
Set the environment variable COMPOSE_PARAMIKO_SSH: 1 in your .gitlab-ci.yml.

Kubernetes failed to discover supported resources: getsockopt: connection refused

I am going through the Kubernetes tutorial at Udacity. When I run the nginx image using the following command
kubectl run nginx --image=nginx:1.10.0
It given me the error
error: failed to discover supported resources: Get http://localhost:8080/apis/extensions/v1beta1: dial tcp 127.0.0.1:8080: getsockopt: connection refused
If I try to get the pods using the following command
kubectl get pods
it says
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The nginx server is running; I can tell because I get the appropriate output by running curl http://127.0.0.1
I am not able to figure out what the issue is, and there are not a lot of resources on the internet for this problem. Can anyone please tell me how to resolve it?
This issue often occurs when kubectl can not find the configuration credential for the intended cluster.
Check $HOME/.kube/config for cluster configuration. If configuration is empty or configuration is set for a wrong cluster, regenerate configuration by running,
gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE>
This will update the configuration credential in $HOME/.kube/config.
Now, everything should work as expected.
Reference: https://github.com/googlecodelabs/feedback/issues/537
Check your kubectl config file (~/.kube/config)
For testing purposes, you can use the admin one:
kubectl --kubeconfig /etc/kubernetes/admin.conf get po
Or (again, for testing)
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
You can see more suggestions in kubernetes/kubernetes issue 23726
As commented below, that requires kubernetes to be installed, for the node to be able to join a cluster:
sudo kubeadm join --token TOKEN MASTER_IP:6443
The solution was simple: as @VonC suggested, I did not have kubernetes installed. I followed this tutorial, and now I can proceed with my work.
failed to discover supported resources ...
The kubectl command-line tool connects to the kube-apiserver (at port 8443 for minikube) for its operations.
To probe whether the apiserver is up, try curl https://192.168.99.100:8443
If it fails, it means kube-apiserver is not running.
Most probably minikube would not be running.
So try:
minikube status
minikube start
OR
restart the VM
In some cases, it is simply because you need to run the kubectl command as root (e.g. with sudo).
You need to set up the zone first:
gcloud config set compute/zone us-central1-b
then add a cluster there:
gcloud container clusters create io
Now you can run the commands.
Let me know if you find a problem there :)

Cannot access docker website by domain

I need help with configuring docker on Debian 9.
I installed docker and docker-compose successfully.
I can access my host by IP (ex. 172.18.0.7), but cannot access by domain name (sitename.loc). I see an error "ERR_NAME_NOT_RESOLVED" or "DNS_PROBE_FINISHED_NXDOMAIN".
Commands
$ docker-compose up -d
$ docker ps
works fine.
I tried disabling the firewall; it didn't help.
What's wrong? iptables?
Thanks in advance.
You can add the IP and name to your hosts file, but the container IP can change every time you start it, so a better approach is to map the container's ports to your host and then add this mapping to your hosts file (IP first, then name):
127.0.0.1 sitename.loc
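Appending the entry from the command line might look like this (sitename.loc is the example domain from the question):

```shell
# Hosts-file format is IP first, then the name(s) that should resolve to it.
echo "127.0.0.1 sitename.loc" | sudo tee -a /etc/hosts
getent hosts sitename.loc   # verify the name now resolves
```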

Kubernetes pods not starting, running behind a proxy

I am running kubernetes on minikube, I am behind a proxy, so I had set the env variables(HTTP_PROXY & NO_PROXY) for docker in /etc/systemd/system/docker.service.d/http-proxy.conf.
I was able to do docker pull but when I run the below example
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
pod never starts and I get the error
desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\"
docker pull gcr.io/google_containers/echoserver:1.4 works fine
I ran into the same problem and am sharing what I learned after making a couple of wrong turns. This is with minikube v0.19.0. If you have an older version you might want to update.
Remember, there are two things we need to accomplish:
Make sure kubectl does not go through the proxy when connecting to minikube on your desktop.
Make sure that the docker daemon in minikube does go through the proxy when it needs to connect to image repositories.
First, make sure your proxy settings are correct in your environment. Here is an example from my .bashrc:
export {http,https,ftp}_proxy=http://${MY_PROXY_HOST}:${MY_PROXY_PORT}
export {HTTP,HTTPS,FTP}_PROXY=${http_proxy}
export no_proxy="localhost,127.0.0.1,localaddress,.your.domain.com,192.168.99.100"
export NO_PROXY=${no_proxy}
A couple things to note:
I set both lower and upper case. Sometimes this matters.
192.168.99.100 is from minikube ip. You can add it after your cluster is started.
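Appending the cluster address after the cluster is up can be done in one line; 192.168.99.100 here is the default minikube address, so substitute the output of minikube ip:

```shell
# Add the cluster IP to no_proxy (and mirror it in NO_PROXY) so kubectl
# talks to minikube directly instead of going through the proxy.
export no_proxy="${no_proxy:+$no_proxy,}192.168.99.100"
export NO_PROXY="$no_proxy"
```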
OK, so that should take care of kubectl working correctly. Now we have the next issue, which is making sure that the Docker daemon in minikube is configured with your proxy settings. You do this, as mentioned by PMat like this:
$ minikube delete
$ minikube start --docker-env HTTP_PROXY=${http_proxy} --docker-env HTTPS_PROXY=${https_proxy} --docker-env NO_PROXY=192.168.99.0/24
To verify that these settings have taken effect, do this:
$ minikube ssh -- systemctl show docker --property=Environment --no-pager
You should see the proxy environment variables listed.
Why the minikube delete? Because without it, start won't update the Docker environment if you had previously created a cluster (say, without the proxy information). Maybe this is why PMat did not have success passing --docker-env to start (or maybe it was an older version of minikube).
I was able to fix it myself.
I had Docker on my host, and there is also Docker inside Minikube.
Docker in Minikube had issues.
I had to ssh into the minikube VM and follow this post:
Cannot download Docker images behind a proxy
and it all works now.
There should be a better way of doing this. On starting minikube I had passed the docker env like below, which did not work:
minikube start --docker-env HTTP_PROXY=http://xxxx:8080 --docker-env HTTPS_PROXY=http://xxxx:8080
--docker-env NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8 --extra-config=kubelet.PodInfraContainerImage=myhub/pause:3.0
I had to set the same env variables inside the Minikube VM to make it work.
It looks like you need to add the minikube ip to no_proxy:
export NO_PROXY=$no_proxy,$(minikube ip)
see this thread: kubectl behind a proxy

Docker run connection timeout

While running
sudo docker pull centos
it gives a connection timeout. It is running behind a proxy, and http_proxy & https_proxy have been set. What could be the reason apart from the proxy (though it does seem like a proxy issue)? I checked LINK but in vain. If there are some other settings I am missing, please let me know.
2014/11/10 23:31:53 Get https://index.docker.io/v1/repositories/centos/images: dial tcp 162.242.195.84:443: connection timed out
I was getting timeouts on Windows 10 Docker 17.03.0-ce-rc1
To fix it I opened Settings / Network and then set the DNS server to 8.8.8.8
If you are running behind a proxy, add the following line to the /etc/default/docker file:
export http_proxy=<YOUR_PROXY>
Then restart the docker service and check:
# service docker restart
Alternatively, stop the service and start the daemon with the proxy set explicitly:
service docker stop
HTTP_PROXY=http://proxy_ip:port/ docker -d &
This should work.
On Ubuntu, you can add HTTP_PROXY and HTTPS_PROXY to /etc/default/docker
So yes, what worked for me in the end was setting the proxy, as mentioned in other answers.
I went to the icon tray --> right-clicked on Docker for Windows --> went to Settings --> set the proxy as ip:port.
To switch to a fast, open, and non-intrusive DNS on CentOS 7:
sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
add the line:
PEERDNS=no
and
sudo vi /etc/resolv.conf
keep only the line:
nameserver 9.9.9.9
If you run into these docker pull timeout issues on Docker Toolbox running on Windows 10 Home, piggybacking on an existing VirtualBox installation, check whether VirtualBox is open separately. If so, shut down the running machines and close VirtualBox (one or more of those machines were created by, and are being used by, Docker Toolbox). This heavy-handed way of going about things worked for me.
Generally, this kind of connection timeout means outbound internet access is restricted, so docker cannot reach the external image repositories.
To check this, try to download the image from another server, or from another machine with a different internet connection.
If you can copy the image over with scp, save it with sudo docker save -o /home/your_image.tar your_image_name, then load it on the target with sudo docker load -i your_image.tar
