I have set up a Kubernetes cluster locally using minikube and want to copy a file from the minikube VM to my local machine.
I am able to ssh into minikube successfully and run commands, but the scp command is timing out.
Command used:
scp -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/.docker/config.json ~/.docker/newconfig.json
and I am getting the following error message:
ssh: connect to host 192.168.49.2 port 22: Operation timed out
Has anyone encountered this issue before or knows how to fix it?
Use kubectl cp to copy files and directories from a Kubernetes container (pod) to the local host and vice versa.
Copy /tmp/foo from a remote pod to /tmp/bar locally:
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
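Since the original goal was to copy a file off the minikube node itself (not out of a pod), another option that avoids scp entirely is to stream the file over minikube ssh; a sketch using the paths from the question:
minikube ssh "cat /home/docker/.docker/config.json" > ~/.docker/newconfig.json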
I am following a tutorial, where a pod is created using the below command:
kubectl run firstPod --image={image from dockerhub repository}
But I am getting the following error:
Error from server (Forbidden): pods "firstPod" is forbidden: error looking up service account default/default: serviceaccount "default" not found
The goal of the command is to pull a Docker image from my own repository and use it to create a pod. I have already seen some solutions that use a .yaml file (but I didn't like those answers). All I want is to run this command. I am using Windows 10 and Docker Desktop for a Kubernetes cluster (minikube etc.).
You can test it with network-multitool. It keeps a webserver running and ships with a lot of useful tools.
kubectl run multitool --image=praqma/network-multitool --replicas=1
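Note that newer kubectl releases removed --replicas from kubectl run (it now creates a single bare pod); on a recent cluster the rough equivalent would be:
kubectl create deployment multitool --image=praqma/network-multitool --replicas=1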
If that works, find the pod name:
kubectl get pods
Then you can exec into it with the name you found above
kubectl exec -it multitool-3822887632-pwlr1 bash
From inside the container/pod you can check that the webserver is running:
curl localhost
If the first command doesn't work, then something is wrong.
Check if the service account exists:
kubectl get sa
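The output should include the default service account, something like (values will vary):
NAME      SECRETS   AGE
default   1         26d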
Thanks for the answers. Now I realize that I forgot to start my local minikube cluster:
minikube start
Now the pod can be created without errors.
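A quick way to confirm the cluster is actually up before creating pods:
minikube status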
With minikube I created a simple deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) in Kubernetes. I'm sure the container must be running, because the Kubernetes pod started successfully and I can see the container running in Portainer. But I just can't enter the container!
(I could always do this with a simple pod; maybe something is wrong with the deployment.)
$ docker exec -it 01a7c90b4267 /bin/bash
rpc error: code = 2 desc = oci runtime error: exec failed: dial unix /tmp/pty870274210/pty.sock: connect: connection refused
Also, I found "Error syncing pod" in the container logs, but the container status is running.
bash isn't available in your container. Have you tried with sh?
$ docker exec -ti 01a7c90b4267 sh
Also, if you're attaching to a running container within Kubernetes, you probably want to kubectl exec instead of docker exec:
$ kubectl exec -ti <pod_id> sh
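On current kubectl versions the command has to be separated from the flags with --, e.g.:
kubectl exec -ti <pod_id> -- sh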
It seems the problem was caused by mounting into minikube's /tmp folder with minikube mount $TMP:/tmp. Without the mount I can exec /bin/bash in the containers with no problems.
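If the mount is still needed, a possible workaround (an untested sketch) is to mount onto a target directory other than /tmp, so the runtime's pty sockets under /tmp are not shadowed:
minikube mount $TMP:/minikube-host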
I have been trying to run a Tomcat container on port 5000 on a cluster using Kubernetes. But when I use kubectl create -f tomcat_pod.yaml, it creates the pod but docker ps does not give any output. Why is that?
Ideally, when a pod is running, it means it is running a container inside that pod, and that container is defined in the yaml file.
Why does docker ps not show any containers running?
I am following the below URLs:
http://containertutorials.com/get_started_kubernetes/k8s_example.html
https://blog.jetstack.io/blog/k8s-getting-started-part2/
How can I get it running and see Tomcat in the browser on port 5000?
The docker containers are running on the virtual machine. Since I only installed minikube on my local machine, I confirmed the following gives what you want:
minikube ssh
...
docker ps
In short, just try minikube ssh and run docker ps from inside the VM.
In Kubernetes, Docker containers run in Pods, Pods run on Nodes, and Nodes run on your machine (minikube/GKE).
When you run kubectl create -f tomcat_pod.yaml you basically create a pod, and it runs the docker container on that pod.
The node that holds this pod is basically a virtual instance; if you could SSH into that node, docker ps would work.
What you need is:
kubectl get pods <-- It is like docker ps; it shows you all the running pods (think of them as docker containers)
kubectl get nodes <-- view the host machines for your pods.
kubectl describe pods <pod-name> <-- view details and recent events for your pods.
kubectl logs <pod-name> <-- Will give you logs for the specific pod.
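To actually see Tomcat in the browser, the pod also has to be exposed; a sketch, assuming the pod is named tomcat and its container listens on port 5000:
kubectl expose pod tomcat --type=NodePort --port=5000
minikube service tomcat --url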
You can connect your terminal to the Docker daemon that is running inside your node/VM.
With this command in your terminal: eval $(minikube docker-env)
This only configures your current terminal window.
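For example:
eval $(minikube docker-env)            # point the docker CLI at the daemon inside minikube
docker ps                              # now lists the cluster's containers
eval $(minikube docker-env --unset)    # revert to the host daemon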
Maybe you are not using Docker as the container runtime. I faced the same issue, and I had forgotten that I switched to gVisor with runsc as the handler.
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock"
If so, you need to use the runsc command instead of docker.
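A quick way to confirm which runtime the cluster is using is the wide node listing; the CONTAINER-RUNTIME column will show e.g. docker://... or containerd://...:
kubectl get nodes -o wide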
I'm not sure where you are running the docker ps command, but if you are trying it from your host machine and the k8s cluster is located elsewhere (i.e. your machine is not a node in the cluster), docker ps will not return anything, since the containers are not tied to your Docker host.
Assuming your pod is running, kubectl get pods will display all of your running pods. To check further details, you can use kubectl describe pod <yourpodname> to check the status of each container in detail. To get the pod names, you should be able to use tab-completion with the Kubernetes CLI. Also, if your pod contains multiple containers, you will need to give the container name as well, which you can also tab-complete after you've selected your pod.
The output will look similar to:
kubectl describe pod comparison-api-dply-reborn-6ffb88b46b-s2mtx
Name: comparison-api-dply-reborn-6ffb88b46b-s2mtx
Namespace: default
Node: aks-nodepool1-99697518-0/10.240.0.5
Start Time: Fri, 20 Apr 2018 14:08:21 -0400
Labels: app=comparison-pod-reborn
pod-template-hash=2996446026
...
Status: Running
IP: *.*.*.*
Controlled By: ReplicaSet/comparison-api-dply-reborn-6ffb88b46b
Containers:
rabbit-mq:
...
Port: 5672/TCP
State: Running
...
If your containers and pods are already running, then you shouldn't need to troubleshoot them too much. To make them accessible from the public Internet, take a look at Services (https://kubernetes.io/docs/concepts/services-networking/service/), which give your API a fixed, easily reachable IP address.
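As a sketch (assuming the Deployment is named comparison-api-dply-reborn, matching the ReplicaSet above, and that you want to expose the rabbit-mq port shown), exposing it could look like:
kubectl expose deployment comparison-api-dply-reborn --type=LoadBalancer --port=5672
kubectl get service comparison-api-dply-reborn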
Have you tried docker ps -a to see if the container is dead? If it is there, you can see its logs with docker logs <container-id> and maybe this gives you a hint.
If your pod is running successfully, and you are looking for the container on the node where the pod is scheduled, the issue could be that Kubernetes is using a different container runtime.
Example
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl exec -it nginx-8586cf59-h92ct bash
root@nginx-8586cf59-h92ct:/# exit
exit
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-8586cf59-h92ct 1/1 Running 0 47s 10.20.0.3 renjith-laptop
root@renjith-laptop:/home/renjith/raspbery-k8s# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@renjith-laptop:/home/renjith/raspbery-k8s#
Here I am able to exec into the pod, and I am on the same node where the pod is scheduled, but docker ps doesn't show the container. In my case kubelet is using a different container runtime; one of the arguments to the kubelet service is --container-runtime-endpoint=unix:///var/run/cri-containerd.sock
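Since kubelet here talks CRI to cri-containerd rather than to Docker, crictl (assuming it is installed on the node) can list the containers that docker ps cannot see:
sudo crictl --runtime-endpoint unix:///var/run/cri-containerd.sock ps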
From the Kubernetes documentation, to list the container images running on your system:
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
Then you get back something like:
2 registry.k8s.io/coredns/coredns:v1.9.3
1 registry.k8s.io/etcd:3.5.4-0
1 registry.k8s.io/kube-apiserver:v1.25.1
1 registry.k8s.io/kube-controller-manager:v1.25.1
3 registry.k8s.io/kube-proxy:v1.25.1
1 registry.k8s.io/kube-scheduler:v1.25.1
I had a Docker container with a folder mounted into it from the host (noureldin.local.crt is a folder):
etc/ssl/CA/ICA01/keys/noureldin.local.crt:etc/ssl/samba.crt:ro
Then I deleted that folder from the host and created a file with exactly the same name in the same path (noureldin.local.crt is now a file), and restarted the container. Now the container cannot be started, because Docker says this is not a folder, with this error:
d241b7e25143187fbf8258a664f5d409d1abd4d9578f045cb493df26ed204d46
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/etc/ssl/CA/ICA01/keys/noureldin.local.crt\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay/8888974e268a54dafd22ccb2d05f9cd33da4bfa70d3ee1df0070fcc8c804c411/merged\\\\\\\" at \\\\\\\"/var/lib/docker/overlay/8888974e268a54dafd22ccb2d05f9cd33da4bfa70d3ee1df0070fcc8c804c411/merged/etc/ssl/samba.crt\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\"\n".
Here I tried to delete the path shown in the error from the overlay folder, but I always get the same error with newly created paths. (I know I should not have deleted anything manually.)
After that I tried again to restore the folders with the same names instead of the files (just like the first step), but now the container doesn't start and exits with error 126.
I tried to delete and then recreate the container, but I always get the same error (it is something related to the path I am mounting from the host).
Could someone help me solve this problem? (I want to keep the paths the same.)
I tried to reproduce this using Docker version 1.12.3 (see shell output below). Removing the directory and replacing it with a file caused the same error. However, once I removed the file and put the directory back I was able to restart the container. The directory was also re-created.
The only thing I could see being different between what you did and what I did is that it looks like you're using relative paths for your volume (which I didn't think was supported), or it could be a copy/paste issue where the leading / got dropped. The directory name/path is also different, but that shouldn't make a difference.
~/work ᐅ mkdir ttt
~/work ᐅ docker run -itd -v $(pwd)/ttt:/ttt/ssl/samba.crt:ro ubuntu /bin/bash
2dc4fe36b2d4bf73a019160437a9f64501b05bb54ed7dc74d5b5f6b487171f27
~/work ᐅ rm -rf ttt
~/work ᐅ touch ttt
~/work ᐅ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2dc4fe36b2d4 ubuntu "/bin/bash" 25 seconds ago Up 24 seconds mad_boyd
~/work ᐅ docker restart mad_boyd
Error response from daemon: Cannot restart container mad_boyd: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/home/roman/work/stackoverflow/volume-stuff/ttt\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/ddb94bac7c9f1fa43165b514e84f2584887040b04e42748cfa06011113514d30\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/ddb94bac7c9f1fa43165b514e84f2584887040b04e42748cfa06011113514d30/ttt/ssl/samba.crt\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\"\n"
~/work ᐅ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
~/work ᐅ rm ttt
~/work ᐅ docker start mad_boyd
mad_boyd
~/work ᐅ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2dc4fe36b2d4 ubuntu "/bin/bash" 56 seconds ago Up 1 seconds mad_boyd
~/work ᐅ ls
ttt
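Since a container's mount configuration is fixed when it is created, switching the host path from a directory to a file generally means the container has to be removed and recreated rather than just restarted. A sketch with placeholder names, binding the file directly:
docker rm <container-name>
docker run -d -v /etc/ssl/CA/ICA01/keys/noureldin.local.crt:/etc/ssl/samba.crt:ro <image>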