I installed Docker Desktop on Ubuntu, and now kubectl can't connect to minikube, although it can connect to the docker-desktop cluster (shown below). If I run kubectl get all, I get this error:
$kubectl get all
Unable to connect to the server: dial tcp 192.168.49.2:8443: connect: no route to host
How can I fix this error?
Here are my versions:
$kubectl version --short
Client Version: v1.24.1
Kustomize Version: v4.5.4
$minikube version
minikube version: v1.25.2
docker-desktop v4.9.0
$docker compose version
Docker Compose version v2.4.1
And my kubectl configuration:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority: /home/release/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 May 2022 09:31:42 EDT
        provider: minikube.sigs.k8s.io
        version: v1.25.2
      name: cluster_info
    server: https://192.168.49.2:8443
  name: minikube
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Wed, 18 May 2022 09:31:42 EDT
        provider: minikube.sigs.k8s.io
        version: v1.25.2
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: minikube
  user:
    client-certificate: /home/release/.minikube/profiles/minikube/client.crt
    client-key: /home/release/.minikube/profiles/minikube/client.key
With Docker Desktop installed, I can switch to the docker-desktop context and connect to that cluster without any problem:
$ kubectl config use-context docker-desktop
Switched to context "docker-desktop".
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 157m
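As a first check (my own suggestion, not part of the original output): "no route to host" to 192.168.49.2:8443 usually just means the minikube node itself is down, for example because installing Docker Desktop switched the active Docker context away from the engine that runs the minikube container. Something like this should confirm whether the node is up:
$ kubectl config use-context minikube
$ minikube status
$ minikube ip          # should print 192.168.49.2 if the node is running
$ minikube start       # restart the cluster if status shows it is stopped
$ docker context ls    # Docker Desktop adds a desktop-linux context; the minikube container may live under the default one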
Related
I'm trying to follow this tutorial: https://www.youtube.com/watch?v=9EUyMjR6jSc. I'm working on Ubuntu 20.04 LTS. I installed k3d, and this is my ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://192.168.0.13:16443
  name: k3d-dev
contexts:
- context:
    cluster: k3d-dev
    user: admin@k3d-dev
  name: k3d-dev
current-context: k3d-dev
kind: Config
preferences: {}
users:
- name: admin@k3d-dev
  user:
    client-certificate-data: ...
    client-key-data: ...
Docker version is 20.10.2.
According to the tutorial, I need to run a halyard container, and inside that container I should be able to access the local Kubernetes cluster (in this case k3d). The halyard container comes with kubectl, so I just need to create a ~/.kube/config with the above info, but I still get the "Unable to connect to the server" message.
The cluster is up and running, since I get this output when I run kubectl cluster-info:
Kubernetes control plane is running at https://192.168.0.13:16443
CoreDNS is running at https://192.168.0.13:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.0.13:16443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Inside the halyard container
bash-5.0$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.10", GitCommit:"1bea6c00a7055edef03f1d4bb58b773fa8917f11", GitTreeState:"clean", BuildDate:"2020-02-11T20:13:57Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.0.13:16443: i/o timeout
bash-5.0$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.13:16443
  name: k3d-dev
contexts:
- context:
    cluster: k3d-dev
    user: admin@k3d-dev
  name: k3d-dev
current-context: k3d-dev
kind: Config
preferences: {}
users:
- name: admin@k3d-dev
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
bash-5.0$ kubectl config current-context
k3d-dev
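One way this is commonly handled (a sketch under my own assumptions; the image name and mount path below are not from the tutorial and may differ): run the halyard container on the host network and mount the same kubeconfig, so that 192.168.0.13:16443 is reached exactly as it is from the host instead of through the container's bridge network:
$ docker run -it --rm \
    --network host \
    -v ~/.kube/config:/home/spinnaker/.kube/config \
    gcr.io/spinnaker-marketplace/halyard:stable
If the default bridge network is kept, the i/o timeout suggests the container has no route (or a firewall in the way) to the host's LAN address, which is worth checking with a plain curl -k https://192.168.0.13:16443 from inside the container.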
I'm learning Kubernetes and want to set up a Docker registry to run within my cluster, deploy any custom code to this private registry, then have my nodes pull images from this private registry to create pods. I've described my setup in this StackOverflow question
Originally I was caught up trying to figure out SSL certificates, but for now I've postponed that and I'm trying to work with an insecure registry. To that end I've created the following pod to run my registry (I know it's a pod and not a replica set or deployment -- this is only for experimental purposes and I'll make it cleaner once it's working):
apiVersion: v1
kind: Pod
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  containers:
  - name: docker-registry
    image: registry:2
    ports:
    - containerPort: 80
      hostPort: 80
    env:
    - name: REGISTRY_HTTP_ADDR
      value: 0.0.0.0:80
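As a quick sanity check (my own addition, not part of the original steps), the registry pod can be reached directly with a port-forward and the standard registry catalog endpoint:
$ kubectl port-forward pod/docker-registry 8080:80
$ curl http://localhost:8080/v2/_catalog    # in a second terminal; an empty registry answers {"repositories":[]}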
I then created the following NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-external
  labels:
    app: docker-registry
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 32000
  selector:
    app: docker-registry
I have a load balancer set up in front of my Kubernetes cluster, which I configured to route traffic on port 80 to port 32000, so I can hit this registry at http://example.com.
I then updated my local /etc/docker/daemon.json as follows:
{
"insecure-registries": ["example.com"]
}
With this I was able to push an image to my registry successfully:
> docker pull ubuntu
> docker tag ubuntu example.com/my-ubuntu
> docker push example.com/my-ubuntu
The push refers to repository [example.com/my-ubuntu]
cc9d18e90faa: Pushed
0c2689e3f920: Pushed
47dde53750b4: Pushed
latest: digest: sha256:1d7b639619bdca2d008eca2d5293e3c43ff84cbee597ff76de3b7a7de3e84956 size: 943
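To double-check that the push landed in this registry rather than somewhere else (again an aside, not part of the original steps), the registry's HTTP API can be queried through the same load balancer:
$ curl http://example.com/v2/_catalog
$ curl http://example.com/v2/my-ubuntu/tags/list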
Now I want to try and pull this image when creating a pod. So I created the following ClusterIP service to make my registry accessible within my cluster:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-internal
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: docker-registry
Then I created a secret:
apiVersion: v1
kind: Secret
metadata:
  name: local-docker
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImluc2VjdXJlLXJlZ2lzdHJpZXMiOiBbImRvY2tlci1yZWdpc3RyeS1pbnRlcm5hbCJdCn0K
The base64 bit decodes to:
{
"insecure-registries": ["docker-registry-internal"]
}
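As an aside, and this is general knowledge rather than something from the post: a kubernetes.io/dockerconfigjson secret is expected to carry a Docker client config with an auths map of registry credentials, whereas insecure-registries is a daemon-side option that belongs in /etc/docker/daemon.json on each node and is ignored inside an image pull secret. A minimal sketch of the expected shape, with placeholder values:
{
    "auths": {
        "example.com": {
            "username": "<user>",
            "password": "<password>",
            "auth": "<base64 of user:password>"
        }
    }
}
Such a secret is usually generated with kubectl create secret docker-registry local-docker --docker-server=example.com --docker-username=<user> --docker-password=<password> rather than written by hand.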
Finally, I created the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-docker
  labels:
    name: test
spec:
  imagePullSecrets:
  - name: local-docker
  containers:
  - name: test
    image: docker-registry-internal/my-ubuntu
When I tried to create this pod (kubectl create -f test-pod.yml) and looked at my cluster, this is what I saw:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
test-docker 0/1 ErrImagePull 0 4s
docker-registry 1/1 Running 0 34m
> kubectl describe pod test-docker
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m33s default-scheduler Successfully assigned default/test-docker to pool-uqa-dev-3sli8
Normal Pulling 3m22s (x2 over 3m32s) kubelet Pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m22s (x2 over 3m32s) kubelet Failed to pull image "docker-registry-internal/my-ubuntu": rpc error: code = Unknown desc = Error response from daemon: pull access denied for docker-registry-internal/my-ubuntu, repository does not exist or may require 'docker login'
Warning Failed 3m22s (x2 over 3m32s) kubelet Error: ErrImagePull
Normal SandboxChanged 3m19s (x7 over 3m32s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal BackOff 3m18s (x6 over 3m30s) kubelet Back-off pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m18s (x6 over 3m30s) kubelet Error: ImagePullBackOff
It's clearly failing to find the host "docker-registry-internal", despite the ClusterIP service.
I tried inspecting a pod from the inside using a trick I found online:
> kubectl run -i --tty --rm debug --image=ubuntu --restart=Never -- bash
If you don't see a command prompt, try pressing enter.
root@debug:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.67 debug
It doesn't seem like ClusterIP services are added to the /etc/hosts file, so I'm not sure how services are supposed to find one another.
I've watched several Kubernetes tutorials on general service communication (e.g. an app pod communicating with a Redis pod), and every time all they did was supply the service name as the host and it magically connected. I'm not sure what I'm missing. Bear in mind I'm brand new to Kubernetes, so the internals are still mystical to me.
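A short sketch of how the discovery normally works (my own explanation, not from the post): service names are resolved through the cluster DNS add-on (CoreDNS/kube-dns) rather than /etc/hosts, so from the debug pod the service name should resolve like any other DNS name:
root@debug:/# getent hosts docker-registry-internal
root@debug:/# getent hosts docker-registry-internal.default.svc.cluster.local
Note, however, that image pulls are done by the container runtime on the node, not from inside a pod, and the first component of an image reference is only treated as a registry host if it contains a dot or a port; a bare docker-registry-internal/my-ubuntu is therefore interpreted as a Docker Hub repository, which matches the "pull access denied" error above.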
Errors while setting up a CI/CD environment on Ubuntu 18.04 inside Parallels Desktop:
There is an issue with the apiserver proxy and with running the nginx image.
I am trying to set up a Kubernetes CI/CD environment on Ubuntu, but I am getting a few errors related to the apiserver proxy, and the kubectl get pods command fails with an "unable to connect" message.
$sudo minikube start --memory 8000 --cpus 2 --kubernetes-version v1.11.10 --vm-driver none
Wait failed: waiting for k8s-app=kube-proxy: timed out waiting for the condition
$kubectl run nginx --image nginx --port 80
error: failed to discover supported resources: Get https://192.168.64.19:8443/apis/apps/v1?timeout=32s: net/http: TLS handshake timeout
Below are the Docker, kubectl, and minikube versions used:
$ docker --version
Docker version 18.09.7, build 2d0083d
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ minikube version
minikube version: v1.5.0
commit: d1151d93385a70c5a03775e166e94067791fe2d9
Content of ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/parallels/.minikube/ca.crt
    server: https://192.168.64.19:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: ""
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/parallels/.minikube/client.crt
    client-key: /home/parallels/.minikube/client.key
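Two things stand out here (my own reading, not from the logs): current-context is empty, which is exactly what produces the "connection to the server localhost:8080 was refused" line in the kubectl version output, and the TLS handshake timeout suggests the apiserver behind 192.168.64.19:8443 never became healthy. A minimal sketch of what I would try first:
$ kubectl config use-context minikube
$ kubectl get pods -A
$ minikube status
$ minikube logs        # if the apiserver is still timing out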
I have a very simple "Hello" Spring Boot application:
@RestController
public class HelloWorld {

    @RequestMapping("/")
    public String sayHello() {
        return "Hello Spring Boot!!";
    }
}
I packaged it with this Dockerfile:
FROM java:8
COPY ./springsimple-1.0-SNAPSHOT.jar /Users/a/Documents/dev/intellij/dockerImages/
WORKDIR /Users/a/Documents/dev/intellij/dockerImages/
EXPOSE 8090
CMD ["java", "-jar", "springsimple-1.0-SNAPSHOT.jar"]
and pushed it to my container registry and deployed it:
amhg$ kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
deployment.apps "testproject" created
amhg$ kubectl expose deployments testproject --port=5000 --type=LoadBalancer
service "testproject" exposed
The command kubectl get pods shows:
NAME READY STATUS RESTARTS AGE
testproject-bdf5b54d-gkk92 1/1 Running 0 41s
However, after starting kubectl proxy (which reports "Starting to serve on 127.0.0.1:8001"), I get this error when I try the following command:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
What is missing?
The description of the pod is
amhg$ kubectl describe pod testproject-bdf5b54d-gkk92
Name: testproject-bdf5b54d-gkk92
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 13:13:20 +0200
Labels: pod-template-hash=68916108
run=testproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"testproject-bdf5b54d","uid":"aa99808e-43c2-11e8-9537-0a58ac1f0f4...
Status: Running
IP: 10.244.0.40
Controlled By: ReplicaSet/testproject-bdf5b54d
Containers:
testproject:
Container ID: docker://6ed3878fa4476a5d2e56f0ba70908742702709c7505c7b19989efc6ff658ea55
Image: acontainerregistry.azurecr.io/hellospring:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/azure-vote-front#sha256:e2af252d275c99b802e21b3b469c75b256d7812ee71d7582cd759bd4faf5a6ec
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 19 Apr 2018 13:13:21 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 57m default-scheduler Successfully assigned testproject-bdf5b54d-gkk92 to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 57m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 57m kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/hellospring:v1" already present on machine
Normal Created 57m kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 57m kubelet, aks-nodepool1-39744669-0 Started container
Let's start from the beginning: it is always better to use YAML config files to do anything with Kubernetes. They will help you with debugging if something goes wrong and let you repeat your actions in the future.
First, you used this command to create the pod:
kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
The equivalent YAML looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
  - name: java-app
    image: acontainerregistry.azurecr.io/hellospring:v1
    ports:
    - containerPort: 8090
and you can apply it with this command:
kubectl apply -f ./pod.yaml
You get the same result as with your original command, but you also have a config file that can be reused in the future.
You're trying to expose your pod using the command:
kubectl expose deployments testproject --port=5000 --type=LoadBalancer
The YAML for your service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: java-service
  labels:
    name: test-app
spec:
  type: LoadBalancer
  ports:
  - port: 5000
    targetPort: 8090
    name: http
  selector:
    name: test-app
Doing the same thing with YAML lets you describe more and be sure you don't miss anything.
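For example (assuming the manifest above is saved as service.yaml):
kubectl apply -f ./service.yaml
kubectl get service java-service -w    # wait until an EXTERNAL-IP is assigned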
You tried to curl localhost, but I'm not sure what you expected from this command:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
After you create the service, run kubectl describe service $service_name and look for these lines in the output:
LoadBalancer Ingress: XX.XX.XX.XX
Port: http 5000/TCP
You can curl this address and receive the answer from your application.
curl -v XX.XX.XX.XX:5000
Don't forget to open the port on the Azure firewall.
I am a newbie in Kubernetes, and I know I am missing something small but cannot see what.
I am creating a pod from a file with kubectl create -f mysql.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 2
    image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      # change this
      value: TestingDB1
    ports:
    - containerPort: 3306
      name: mysql
and a service with: kubectl create -f mysql_service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  externalIPs:
  - 10.19.13.127
  ports:
  - port: 3306
  selector:
    name: mysql
Output of "kubectl version"
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"d33fd89e399396658aed4e48dfe7d5d8d50ac6e8", GitTreeState:"clean", BuildDate:"2017-05-26T17:08:24Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"d33fd89e399396658aed4e48dfe7d5d8d50ac6e8", GitTreeState:"clean", BuildDate:"2017-05-26T17:08:24Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Output of "kubectl cluster-info"
Kubernetes master is running at http://localhost:8080
Output of "kubectl get pods"
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 20m
Output of "kubectl get svc"
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 18h
mysql 10.254.129.206 10.19.13.127 3306/TCP 1h
Output of "kubectl get no"
NAME STATUS AGE
10.19.13.127 Ready 19h
Output of "docker ps"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74ea1fb2b383 mysql "docker-entrypoint.sh" 3 minutes ago Up 3 minutes k8s_mysql.ae7893ad_mysql_default_e58d1c09-4a8e-11e7-9baf-fa163ee3f5d9_793d8d7c
I can see the pod is created normally. Even when I connect to the container, I am able to log in to MySQL with the credentials.
My question is: how can I access/expose a port running on my Kubernetes node from my network? For example, I want to telnet from my PC to the Kubernetes node where the mysql pod is running.
Thank you!
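A minimal sketch of one common approach (the service name and nodePort below are my own placeholders, not from the question): expose the pod with a NodePort service, which makes every node listen on a port in the 30000-32767 range, then connect to <node-ip>:<nodePort> from the PC:
apiVersion: v1
kind: Service
metadata:
  name: mysql-nodeport
spec:
  type: NodePort
  selector:
    name: mysql
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30336
With that in place, something like telnet 10.19.13.127 30336 from the PC should reach the MySQL pod through the node.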
The command below verifies that the Redis server is running in the pod and shows which port it is listening on (it generally runs on port 6379):
kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
Output: 6379
The following command gives you the port a pod is listening on, so you can create a route or port forwarding to access the service:
kubectl get pod <pod_name> -o "go-template={{(index (index .spec.containers 0).ports 0).containerPort}}"
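For example, once the port is known, a quick way to reach the pod from a workstation is kubectl port-forward (a sketch, assuming the redis-master pod from above):
$ kubectl port-forward pod/redis-master 6379:6379
redis-cli -p 6379 on the local machine would then talk to the Redis server inside the pod.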