I have been trying to run a Tomcat container on port 5000 on a cluster using Kubernetes. But when I run kubectl create -f tmocat_pod.yaml, it creates the pod, yet docker ps does not give any output. Why is that?
Ideally, when a pod is running, it means a container is running inside that pod, and that container is defined in the yaml file.
Why does docker ps not show any containers running?
I am following the below URLs:
http://containertutorials.com/get_started_kubernetes/k8s_example.html
https://blog.jetstack.io/blog/k8s-getting-started-part2/
How can I get it running and see Tomcat in the browser on port 5000?
The docker containers are running on the virtual machine, not directly on your host. Since I only have minikube installed on my local machine, I confirmed that the following shows what you want:
minikube ssh
...
docker ps
Just try the kubernetes equivalent of minikube ssh.
In Kubernetes, Docker containers run inside Pods, Pods run on Nodes, and Nodes run on your machine (minikube/GKE).
When you run kubectl create -f tmocat_pod.yaml, you basically create a pod, and the docker container runs inside that pod.
The node that holds this pod is basically a virtual instance; if you could SSH into that node, docker ps would work.
What you need is:
kubectl get pods <-- like docker ps; shows all the pods (think of them as docker containers) that are running
kubectl get nodes <-- view the host machines for your pods
kubectl describe pod <pod-name> <-- view detailed status and events for a pod
kubectl logs <pod-name> <-- view the logs of a specific pod
You can connect your terminal to the docker daemon that is running inside your node/VM with this command:
eval $(minikube docker-env)
This only configures your current terminal window.
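For example (assuming minikube is running and your pod has started), the same terminal can then talk directly to the cluster's docker daemon:
eval $(minikube docker-env)
docker ps --filter name=tomcat
The --filter flag just narrows the output to containers whose name contains "tomcat".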
Maybe you are not using docker as the container runtime.
I faced the same issue, and I had forgotten that I switched to gVisor with runsc as the handler.
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock"
If so, you need to use the runsc command instead of docker.
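If kubelet points at a remote runtime like the above, one way to list the containers on that node is crictl, assuming it is installed there; the endpoint below is the one from the kubelet flags above:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps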
I'm not sure where you are running the docker ps command, but if you are trying to do that from your host machine and the k8s cluster is located elsewhere, i.e. your machine is not a node in the cluster, docker ps will not return anything since the containers are not tied to your docker host.
Assuming your pod is running, kubectl get pods will display all of your running pods. To check further details, you can use kubectl describe pod <yourpodname> to check the status of each container (in great detail). To get the pod names, you should be able to use tab-complete with the kubernetes cli. Also, if your pod contains multiple containers, you will need to give the container name as well, which you can use tab-complete for after you've selected your pod.
The output will look similar to:
kubectl describe pod comparison-api-dply-reborn-6ffb88b46b-s2mtx
Name:           comparison-api-dply-reborn-6ffb88b46b-s2mtx
Namespace:      default
Node:           aks-nodepool1-99697518-0/10.240.0.5
Start Time:     Fri, 20 Apr 2018 14:08:21 -0400
Labels:         app=comparison-pod-reborn
                pod-template-hash=2996446026
...
Status:         Running
IP:             *.*.*.*
Controlled By:  ReplicaSet/comparison-api-dply-reborn-6ffb88b46b
Containers:
  rabbit-mq:
    ...
    Port:       5672/TCP
    State:      Running
...
If your containers and pods are already running, then you shouldn't need to troubleshoot them too much. To make them accessible from the Public Internet, take a look at Services (https://kubernetes.io/docs/concepts/services-networking/service/) to make your API's IP address fixed and easily reachable.
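As a minimal sketch (the Service name and type here are assumptions; the selector and port come from the describe output above), a NodePort Service for that pod could look like:
apiVersion: v1
kind: Service
metadata:
  name: comparison-api-svc
spec:
  type: NodePort
  selector:
    app: comparison-pod-reborn
  ports:
  - port: 5672
    targetPort: 5672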
Have you tried "docker ps -a" to see if the container is dead? If it is there, you can see its logs with "docker logs <container-id>" and maybe this gives you a hint.
If your pod is running successfully, and you are looking for the container on the node where the pod is scheduled, the issue could be that kubernetes is using a different container runtime.
Example
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl exec -it nginx-8586cf59-h92ct bash
root@nginx-8586cf59-h92ct:/# exit
exit
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl get po -o wide
NAME                   READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-8586cf59-h92ct   1/1       Running   0          47s       10.20.0.3   renjith-laptop
root@renjith-laptop:/home/renjith/raspbery-k8s# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@renjith-laptop:/home/renjith/raspbery-k8s#
Here I am able to exec into the pod, and I am on the same node where the pod is scheduled, but docker ps doesn't show the container. In my case kubelet is using a different container runtime; one of the arguments to the kubelet service is --container-runtime-endpoint=unix:///var/run/cri-containerd.sock
From the Kubernetes documentation, to get the container images running on your system:
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
Then you get back something like:
2 registry.k8s.io/coredns/coredns:v1.9.3
1 registry.k8s.io/etcd:3.5.4-0
1 registry.k8s.io/kube-apiserver:v1.25.1
1 registry.k8s.io/kube-controller-manager:v1.25.1
3 registry.k8s.io/kube-proxy:v1.25.1
1 registry.k8s.io/kube-scheduler:v1.25.1
I am following a tutorial where a pod is created using the command below:
kubectl run firstPod --image={image from dockerhub repository}
But I am getting the following error:
Error from server (Forbidden): pods "firstPod" is forbidden: error looking up service account default/default: serviceaccount "default" not found
The goal of the command is to pull a docker image from my own repository and use it to create a pod. I have already seen some solutions that use a .yaml file (but I didn't like those answers). All I want is to run this command. I am using Windows 10 and Docker Desktop for the kubernetes cluster (minikube etc.).
You can test it with network-multitool. It keeps a webserver running and has a lot of great tools.
kubectl run multitool --image=praqma/network-multitool --replicas=1
If that works, find the pod name
kubectl get pods
Then you can exec into it with the name you found above
kubectl exec -it multitool-3822887632-pwlr1 bash
From inside the container/pod you can check that the webserver is running with
curl localhost
If the first command doesn't work, then something is wrong.
Check if the service account exists
kubectl get sa
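If kubectl get sa shows no default service account in the namespace, it should normally be created automatically by the controller manager, but as a fallback you can create it yourself:
kubectl create serviceaccount default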
Thanks for the answers. Now I realize that I forgot to start my local minikube cluster.
minikube start
Now the pod can be created.
I have a docker image I created that works in docker like this (local docker)...
docker run -p 4000:8080 jrg/hello-kerb
Now I am trying to run it as a Kubernetes pod. To do this I create the deployment...
kubectl create deployment hello-kerb --image=jrg/hello-kerb
Then I run kubectl get deployments, but the new deployment shows as unavailable...
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-kerb 1 1 1 0 17s
I was using this site for the instructions. It shows that the status should be available...
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-node 1 1 1 1 1m
What am I missing? Why is the deployment unavailable?
UPDATE
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-kerb-6f8f84b7d6-r7wk7 0/1 ImagePullBackOff 0 12s
If you are running a local image (from docker build), it is directly available to the docker daemon and can be executed. If you are using a remote daemon, e.g. in a kubernetes cluster, it will try to get the image from the default registry, since the image is not available locally. This is usually Docker Hub. I checked https://hub.docker.com/u/jrg/ and there seems to be no repository, and therefore no jrg/hello-kerb.
So how can you solve this? When using minikube, you can build (and provide) the image using the docker daemon that is provided by minikube.
eval $(minikube docker-env)
docker build -t jrg/hello-kerb .
You could also provide the image in a registry that is reachable from the container runtime in your kubernetes cluster, e.g. Docker Hub.
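For example, assuming you have a Docker Hub account named jrg (matching the image name in the question), pushing the locally built image would look like:
docker login
docker build -t jrg/hello-kerb .
docker push jrg/hello-kerb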
I solved this by running kubectl edit deployment hello-kerb, then finding "imagePullPolicy" (:/PullPolicy in the editor). Finally, I changed the value from "Always" to "Never". After saving this, when I run kubectl get pod it shows...
NAME READY STATUS RESTARTS AGE
hello-kerb-6f744b6cc5-x6dw6 1/1 Running 0 6m
And I can access it.
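If you prefer not to go through the interactive editor, a sketch of the same change with kubectl patch (assuming the container inside the deployment is also named hello-kerb) is:
kubectl patch deployment hello-kerb -p '{"spec":{"template":{"spec":{"containers":[{"name":"hello-kerb","imagePullPolicy":"Never"}]}}}}'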
I have configured a secret on Kubernetes, and inside the node I am able to pull an image with docker pull perfectly. But when kubectl tries to schedule a pod on the node, it shows an image pull backoff error. Is there any setting that needs to be done while bootstrapping? I am using a community AMI on AWS for the Kubernetes node.
Try this:
kubectl describe pod <pod-name> - see the event log at the end. It shows a series of events, starting from the initial image pull through subsequent attempts, and the pod may keep restarting in order to achieve the desired state per the deployment record.
In most scenarios, something erroring out within the container causes restarts, which is expected behavior by k8s. To check logs: kubectl logs <pod-name>
Try to keep the container running so you can peek inside it for more troubleshooting, using kubectl exec -it <pod-name> (if a single container) or kubectl exec -it <pod-name> -c <container-name>.
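Since the question mentions a configured secret, one common cause of an image pull backoff is that the pod spec never references that secret. A minimal sketch, where the secret name and image are placeholders you would replace with your own:
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: <your-private-image>   # placeholder
  imagePullSecrets:
  - name: <your-registry-secret>  # placeholder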
I have a Docker image with the CMD to run a Java application.
This application is being deployed as a container into Kubernetes. Since I am deploying it as a Docker image, I was expecting it to run as a Docker process, so I logged into the pod and tried "docker ps".
But I was surprised to find it running as a Java process and not a docker process. I can see the process with "ps -ef".
I am confused: how does this work internally?
As others stated, Kubernetes uses docker internally to deploy the containers. To explain in detail, consider a cluster which has 4 nodes: 1 master and 3 workers.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
******.mylabserver.com Ready master 13d v1.10.5
******.mylabserver.com Ready <none> 13d v1.10.5
******.mylabserver.com Ready <none> 13d v1.10.5
******.mylabserver.com Ready <none> 13d v1.10.5
I am deploying a pod that runs the alpine docker image.
$ cat pod-nginx.yml
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: alpine
    image: alpine
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
You can get the status of the pod as below:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
alpine 1/1 Running 0 21s 10.244.3.4 ******.mylabserver.com
Kube-scheduler will schedule the pod on one of the available nodes.
Now that the pod is deployed to a server, you can log in to that particular server and find the information you are looking for.
root@******:/home/user# docker ps
CONTAINER ID        IMAGE                                                                            COMMAND             CREATED             STATUS              PORTS               NAMES
6486de4410ad        alpine@sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f   "sleep 3600"        58 seconds ago      Up 57 seconds                           k8s_alpine_alpine_default_2e2b3016-79c8-11e8-aaab-
Run the docker exec command on that server to see the processes running inside.
root@******:/home/user# docker exec -it 6486de4410ad /bin/sh
/ # ps -eaf
PID USER TIME COMMAND
1 root 0:00 sleep 3600
7 root 0:00 /bin/sh
11 root 0:00 ps -eaf
/ #
https://kubernetes.io/docs/home/ - this can give you more info about pods and how deployments happen with pods/containers.
Hope this helps.
Using the yaml file that the user provides, Kubernetes deploys a pod (the smallest unit of Kubernetes deployment) with one or more containers in it.
You can access the containers inside the pod using the kubectl tool.
For example, in case your pod has one container you can open a shell inside it:
kubectl exec -ti <pod-name> -n <pod-namespace> bash
Through this shell, you can run ps commands and your output will be the isolated processes running inside your container.
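You can also run the command without opening an interactive shell, for example:
kubectl exec <pod-name> -n <pod-namespace> -- ps -ef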
In case you want to observe the Docker containers which Kubernetes has deployed in a node, you can connect to that node and run docker ps commands.
I have the following questions:
I am logged into a Kubernetes pod using the following command:
./cluster/kubectl.sh exec my-nginx-0onux -c my-nginx -it bash
The 'ip addr show' command shows it is assigned the IP of the pod. Since a pod is a logical concept, I am assuming I am logged into a docker container and not a pod, in which case the pod IP is the same as the docker container IP. Is that understanding correct?
From a Kubernetes node, I do sudo docker ps and then the following:
sudo docker exec 71721cb14283 -it '/bin/bash'
This doesn't work. Does someone know what I am doing wrong?
I want to access the nginx service I created from within the pod using curl. How can I install curl within this pod or container to access the service from inside? I want to do this to understand the network connectivity.
Here is how you get a curl command line within a kubernetes network to test and explore your internal REST endpoints.
To get a prompt of a busybox running inside the network, execute the following command. (A tip is to use one unique container per developer.)
kubectl run curl-<YOUR NAME> --image=radial/busyboxplus:curl -i --tty --rm
You may omit the --rm and keep the instance running for later reuse. To reuse it later, type:
kubectl attach <POD ID> -c curl-<YOUR NAME> -i -t
Using the command kubectl get pods you can see all running POD's. The <POD ID> is something similar to curl-yourname-944940652-fvj28.
EDIT: Note that you need to log in to Google Cloud from your terminal (once) before you can do this! Here is an example; make sure to put in your own zone, cluster and project:
gcloud container clusters get-credentials example-cluster --zone europe-west1-c --project example-148812
The idea of Kubernetes is that pods are assigned to a host, but nothing about that is guaranteed or permanent, so you should NOT try to look up the IP of a container or pod from your container, but rather use what Kubernetes calls a Service.
A Kubernetes Service is a path to a pod with a defined set of selectors, through the kube-proxy, which will load balance the request to all pods with the given selectors.
In short:
create a Pod with a label called 'name', for example name=mypod
create a Service with the selector name=mypod, called myservice for example, to which you assign port 9000
then you can curl from a pod to the pods served by this Service using
curl http://myservice:9000
This is assuming you have the DNS pod running of course.
If you ask for a LoadBalancer type of Service when creating it, and run on AWS or GKE, this service will also be available from outside your cluster. For an internal-only service, just set the flag clusterIP: None and it will not be load balanced on the outside.
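A minimal sketch of that setup (the nginx image and its container port 80 are assumptions here; the names and port 9000 follow the example above):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  containers:
  - name: web
    image: nginx   # assumed image listening on port 80
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    name: mypod
  ports:
  - port: 9000
    targetPort: 80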
see reference here:
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/tutorials/services/
Kubernetes uses the IP-per-pod model. All containers in the same pod share the same IP address as if they are running on the same host.
The command should follow docker exec [OPTIONS] CONTAINER COMMAND [ARG...]. In your case, sudo docker exec -it 71721cb14283 '/bin/bash' should work. If not, you should provide the output of your command.
It depends on what image you use. There is nothing special about installing software in a container. For nginx, try apt-get update && apt-get install curl
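For example, to install it from outside the pod in one shot (assuming a Debian-based image like the official nginx one):
kubectl exec -it <pod-name> -- sh -c 'apt-get update && apt-get install -y curl'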
There's an official curl team image these days:
https://hub.docker.com/r/curlimages/curl
Run it with:
kubectl run -it --rm --image=curlimages/curl curly -- sh