kubectl run does not create a replicationcontroller - docker

I'm a Kubernetes newbie using Google Cloud Container Engine. I'm just following these tutorials:
https://cloud.google.com/container-engine/docs/tutorials/http-balancer
http://kubernetes.io/docs/hellonode/#create-your-pod
According to these tutorials, I should get a replicationcontroller after running "kubectl run", but no replicationcontroller is created, so I cannot run "kubectl expose rc" to open a port.
Here is my result of the commands:
ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl expose rc nginx --target-port=80 --type=NodePort
Error from server: replicationcontrollers "nginx" not found
Here is my result when I run "kubectl get rc,svc,ingress,deployments,pods":
ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl get rc,svc,ingress,deployments,pods
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.3.240.1 <none> 443/TCP 12m
NAME RULE BACKEND ADDRESS AGE
basic-ingress - nginx:80 107.178.247.247 12m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 11m
NAME READY STATUS RESTARTS AGE
nginx-198147104-zgo7m 1/1 Running 0 11m
One solution is to create a YAML file that defines the replicationcontroller. But is there any way to create a replicationcontroller via the kubectl run command, as in the tutorials above?
Thanks,

Now that kubectl run creates a deployment, you specify that the type being exposed is a deployment rather than a replication controller:
kubectl expose deployment nginx --target-port=80 --type=NodePort
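To confirm the port that was opened, you can inspect the resulting service (these are just the standard inspection commands; the actual NodePort is allocated by the cluster):
kubectl get svc nginx
kubectl describe svc nginx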

The team might still be updating the docs to reflect 1.2. Note the output you got:
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
kubectl run now creates a deployment + replica set.
To view these you can run kubectl get deployment and kubectl get rs respectively.
Deployments are essentially a nicer way to perform rolling update server side, but there's a little more to it. See docs: http://kubernetes.io/docs/user-guide/deployments/
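For example, a server-side rolling update of the nginx deployment above could be triggered and followed like this (the 1.9.1 tag is only an illustration):
kubectl set image deployment/nginx nginx=nginx:1.9.1
kubectl rollout status deployment/nginx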

In version 1.15.0, it works as follows.
[root@k8smaster ~]# kubectl run guestbook --image=coolguy/k8s_guestbook:1.0 --port=8080 --generator=run/v1
kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create
instead.
replicationcontroller/guestbook created
In version 1.19.0:
[root@k8smaster ~]# kubectl run guestbook --image=dmsong2008/k8s_guestbook:1.0 --port=8080 --generator=run/v1
Flag --generator has been deprecated, has no effect and will be removed in the future.
pod/guestbook created
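In current versions, where kubectl run only creates a bare pod, the usual replacement for the run/v1 generator is kubectl create deployment followed by kubectl expose (a sketch reusing the image and port from the examples above):
kubectl create deployment guestbook --image=coolguy/k8s_guestbook:1.0
kubectl expose deployment guestbook --port=8080 --type=NodePort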

Related

Kubernetes CoreDNS in CrashLoopBackOff

I understand that this question has been asked dozens of times, but nothing I found while searching the internet has helped me.
My set up:
CentOS Linux release 7.5.1804 (Core)
Docker Version: 18.06.1-ce
Kubernetes: v1.12.3
Installed following the official guide and this one: https://www.techrepublic.com/article/how-to-install-a-kubernetes-cluster-on-centos-7/
CoreDNS pods are in Error/CrashLoopBackOff state.
kube-system coredns-576cbf47c7-8phwt 0/1 CrashLoopBackOff 8 31m
kube-system coredns-576cbf47c7-rn2qc 0/1 CrashLoopBackOff 8 31m
My /etc/resolv.conf:
nameserver 8.8.8.8
Also tried with my local DNS resolver (router):
nameserver 10.10.10.1
Setup and init:
kubeadm init --apiserver-advertise-address=10.10.10.3 --pod-network-cidr=192.168.1.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
I tried to solve this with:
Editing the coredns ConfigMap: [root@kub ~]# kubectl edit cm coredns -n kube-system
and changing
proxy . /etc/resolv.conf
directly to
proxy . 10.10.10.1
or
proxy . 8.8.8.8
Also tried to:
kubectl -n kube-system get deployment coredns -o yaml | sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | kubectl apply -f -
And still nothing helped.
Error from the logs:
plugin/loop: Seen "HINFO IN 7847735572277573283.2952120668710018229." more than twice, loop detected
The other thread - coredns pods have CrashLoopBackOff or Error state - didn't help at all, because I haven't hit any of the solutions described there. Nothing helped.
I got the same error and successfully fixed it with the steps below.
Note that you are missing 8.8.4.4:
sudo nano /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
Run the following commands to reload the daemon and restart the Docker service:
sudo systemctl daemon-reload
sudo systemctl restart docker
If you are using kubeadm, make sure you delete the entire cluster from the master and provision the cluster again:
kubectl drain <node_name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node_name>
kubeadm reset
Once you provision the new cluster, run:
kubectl get pods --all-namespaces
It should give the expected result below:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-gldlr 2/2 Running 0 24s
kube-system coredns-86c58d9df4-lpnj6 1/1 Running 0 40s
kube-system coredns-86c58d9df4-xnb5r 1/1 Running 0 40s
kube-system kube-proxy-kkb7b 1/1 Running 0 40s
kube-system kube-scheduler-osboxes 1/1 Running 0 10s
$ kubectl edit cm coredns -n kube-system
Delete 'loop', save and exit.
Restart the master node. It worked for me.
I faced the same issue in my local Kubernetes in Docker (kind) setup; the CoreDNS pod was getting a CrashLoopBackOff error.
Steps I followed to get the pod into the Running state:
As Tim Chan said in this post, and by referring to the GitHub issues link, I did the following:
kubectl -n kube-system edit configmaps coredns -o yaml
Modify the section: replace forward . /etc/resolv.conf with forward . 172.16.232.1 (in my case I set 8.8.8.8 for the time being).
Delete one of the CoreDNS pods, or wait for some time - the pods will come back to the Running state.
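A quick way to recreate the CoreDNS pods after changing the ConfigMap (assuming they carry the usual k8s-app=kube-dns label, as kubeadm and kind set by default):
kubectl -n kube-system delete pod -l k8s-app=kube-dns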
This usually happens when CoreDNS can't talk to the kube-apiserver:
Check that your kubernetes service is in the default namespace:
$ kubectl get svc kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 130d
Then (you might have to create a pod):
$ kubectl -n kube-system exec -it <any-pod-with-shell> sh
# ping kubernetes.default.svc.cluster.local
PING kubernetes.default.svc.cluster.local (10.96.0.1): 56 data bytes
Also, try hitting port 443 from the pod:
# telnet kubernetes.default.svc.cluster.local 443 # or
# curl kubernetes.default.svc.cluster.local:443
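If no running pod has a shell available, a throwaway pod can be used instead (busybox here is just an assumption; any image with a shell works):
kubectl run -it --rm debug --image=busybox --restart=Never -- sh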
The error I got was:
connect: no route to host","time":"2021-03-19T14:42:05Z"}
and the pod was in CrashLoopBackOff, as shown in the log output of kubectl -n kube-system logs coredns-d9fdb9c9f-864rz.
The issue is mentioned in https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters
tldr;
Reason: /etc/resolv.conf got updated somehow. The original one is at /run/systemd/resolve/resolv.conf:
e.g:
nameserver 172.16.232.1
Quick fix, edit Corefile:
$ kubectl -n kube-system edit configmaps coredns -o yaml
to replace forward . /etc/resolv.conf with forward . 172.16.232.1
e.g:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . 172.16.232.1 {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-03-18T15:58:07Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "49996"
  uid: 428a03ff-82d0-4812-a3fa-e913c2911ebd
Done. After that, you may need to restart Docker:
sudo systemctl restart docker
Update: it could be fixed by just running sudo systemctl restart docker.

Where is kube-apiserver located

Base question: When I try to use kube-apiserver on my master node, I get a "command not found" error. How can I install/configure kube-apiserver? Any link to an example will help.
$ kube-apiserver --enable-admission-plugins DefaultStorageClass
-bash: kube-apiserver: command not found
Details: I am new to Kubernetes and Docker and was trying to create a StatefulSet with volumeClaimTemplates. My problem is that the automatic PVs are not created and I get this message in the PVC events: "persistentvolume-controller waiting for a volume to be created". I am not sure whether I need to define a DefaultStorageClass, which is why I needed kube-apiserver.
Name: nfs
Namespace: default
StorageClass: example-nfs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner=example.com/nfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 3m (x2401 over 10h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
Here is get pvc result:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Pending example-nfs 10h
And get storageclass:
$ kubectl describe storageclass example-nfs
Name: example-nfs
IsDefaultClass: No
Annotations: <none>
Provisioner: example.com/nfs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
How can I troubleshoot this issue (e.g. logs for why the storage was not created)?
You are asking two different questions here, one about kube-apiserver configuration, one about troubleshooting your StorageClass.
Here's an answer for your first question:
kube-apiserver is running as a Docker container on your master node. Therefore, the binary is within the container, not on your host system. It is started by the master's kubelet from a manifest file located in /etc/kubernetes/manifests. The kubelet watches this directory and will start any Pod defined there as a "static pod".
To configure kube-apiserver command line arguments you need to modify /etc/kubernetes/manifests/kube-apiserver.yaml on your master.
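As an illustration, enabling the DefaultStorageClass admission plugin from the question would mean adding it to the kube-apiserver flags in that manifest. This is only a sketch of the relevant fragment; the rest of the file and the existing flag list depend on how your cluster was set up:
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,DefaultStorageClass
    # ...keep all the other existing flags unchanged
The kubelet watches the manifests directory, so saving the file is enough for it to restart the kube-apiserver static pod with the new flags.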
I'll refer to the question regarding the location of the api-server.
Basic answer (specific to the question title):
The kube apiserver is located on the master node (known as the control plane).
It can be executed:
1 ) Via the host's init system (like systemd).
2 ) As a pod (I'll explain below).
In both cases it will be located on the control plane.
If it's running under systemd you can run systemctl status api-server to see the path to the configuration (drop-in) file.
If it is running as a pod you can view it in the kube-system namespace with all the other control plane components (plus kube-proxy and maybe a network solution like weave):
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-lpdlc 1/1 Running 1 2d22h
coredns-f9fd979d6-vcs7g 1/1 Running 1 2d22h
etcd-my-master 1/1 Running 1 2d22h
kube-apiserver-my-master 1/1 Running 1 2d22h #<----Here
kube-controller-manager-my-master 1/1 Running 1 2d22h
kube-proxy-kh2lc 1/1 Running 1 2d22h
kube-scheduler-my-master 1/1 Running 1 2d22h
weave-net-59r5b 2/2 Running 3 2d22h
You can run:
kubectl describe pod/kube-apiserver-my-master -n kube-system
in order to get more details about the pod.
A bit more advanced answer:
(regarding the location of /etc/kubernetes/manifests)
Let's say we have no idea where to find the relevant path for the kube-apiserver config file.
But we need to remember two important things:
1 ) The kube-api-server is running on the master node.
2 ) The kubelet isn't running as a pod, and when the control plane components (plus kube-proxy) are executed as static pods, it is the kubelet on the master node that runs them.
So we can start our journey towards the manifests path by investigating the kubelet logs.
If the kubelet has been running for a long time the log will be very large and we'll need to dump it somewhere and go to the beginning - or, if the kubelet was started 5 minutes ago, we can run:
sudo journalctl -u kubelet --since -5m >> kubelet_5_minutes.log
And a quick search for "api-server" will bring us to the two lines below, where the path of the manifests is mentioned:
my-master kubelet[71..]: 00:03:21 kubelet.go:261] Adding pod path: /etc/kubernetes/manifests
my-master kubelet[71..]: 00:03:21 kubelet.go:273] Watching apiserver
We can also see that the kubelet is trying to create the kube-apiserver pod on the my-master node, inside the kube-system namespace:
my-master kubelet[71..]: 00:03:29.05 kubelet.go:1576] ..
Creating a mirror pod for "kube-apiserver-my-master_kube-system
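The quick search mentioned above can be done directly with grep on the dumped log file:
grep -i apiserver kubelet_5_minutes.log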
To make the storage class "example-nfs" the default, you need to run the command below:
kubectl patch storageclass example-nfs -p '{"metadata":
{"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

Kubernetes cannot access grafana and prometheus from Google cloud platform

I have followed this link to install Grafana/Prometheus on Google Cloud Kubernetes. It appears to be deployed successfully; please find the following output for reference.
Services created successfully:
kubectl --namespace=monitoring get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.27.249.8 <none> 3000:32703/TCP 1h
prometheus NodePort 10.27.249.233 <none> 9090:31939/TCP 1h
Namespace created successfully :
kubectl get namespaces
NAME STATUS AGE
default Active 19h
kube-public Active 19h
kube-system Active 19h
monitoring Active 1h
PODS response :
kubectl --namespace=monitoring get pods
NAME READY STATUS RESTARTS AGE
grafana-1323121529-8614m 1/1 Running 0 1h
node-exporter-lw8cr 0/1 CrashLoopBackOff 17 1h
node-exporter-nv85s 0/1 CrashLoopBackOff 17 1h
node-exporter-r2rfl 0/1 CrashLoopBackOff 17 1h
prometheus-3259208887-x2zjc 1/1 Running 0 1h
Now I am trying to expose an external IP for Grafana, but I keep getting the following exception: "Error from server (AlreadyExists): services "prometheus" already exists"
kubectl --namespace=monitoring expose deployment/prometheus --type=LoadBalancer
Error from server (AlreadyExists): services "prometheus" already exists
Edited
kubectl -n monitoring edit service prometheus
Edit cancelled, no changes made.
You have already deployed the Prometheus service manifest in the monitoring namespace, and now you are trying to deploy another service with the same name in the same namespace. That's not possible, as two services cannot co-exist in the same namespace with the same name.
Solutions for your problem
I would use the following command to edit the already deployed Service.
kubectl -n monitoring edit service prometheus
Then your favourite text editor will pop up; you just need to update
type: LoadBalancer
Basically, your service will be edited.
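After saving, the service should eventually get an external IP assigned (provisioning the load balancer can take a minute or two):
kubectl --namespace=monitoring get services prometheus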
Edited
If you are not able to use the above command, then follow these steps:
Edit the Prometheus service manifest file and update it with type: LoadBalancer (see the sketch after these steps).
Then apply it with kubectl apply -f prometheus-service.yaml
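A minimal sketch of such a manifest, assuming the 9090 port shown in your output and an app: prometheus selector; check both against the actual deployment from the tutorial:
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  type: LoadBalancer
  selector:
    app: prometheus
  ports:
  - port: 9090
    targetPort: 9090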

Flannel fails in kubernetes cluster due to failure of subnet manager

I am running etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on a master node, as well as kubelet and kube-proxy on a minion node, as follows (all kube binaries are from Kubernetes 1.7.4):
# [master node]
./etcd
./kube-apiserver --logtostderr=true --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.10.10.0/24 --insecure-port 8080 --secure-port=0 --allow-privileged=true --insecure-bind-address 0.0.0.0
./kube-scheduler --address=0.0.0.0 --master=http://127.0.0.1:8080
./kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080
# [minion node]
./kubelet --logtostderr=true --address=0.0.0.0 --api_servers=http://$MASTER_IP:8080 --allow-privileged=true
./kube-proxy --master=http://$MASTER_IP:8080
After this, if I execute kubectl get all --all-namespaces and kubectl get nodes, I get
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default svc/kubernetes 10.10.10.1 <none> 443/TCP 27m
NAME STATUS AGE VERSION
minion-1 Ready 27m v1.7.4+793658f2d7ca7
Then, I apply flannel as follows:
kubectl apply -f kube-flannel-rbac.yml -f kube-flannel.yml
Now, I see a pod is created, but with error:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-p8tcb 1/2 CrashLoopBackOff 4 2m
When I check the logs inside the failed container in the minion node, I see the following error:
Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
My question is: how do I resolve this? Is this an SSL issue? What step am I missing in setting up my cluster?
Maybe something is wrong with your flannel YAML file. You can try the following to install flannel.
Check the old ip link:
ip link
If it shows flannel, please delete it:
ip link delete flannel.1
Then install; its default pod network CIDR is 10.244.0.0/16:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
You could try to pass --etcd-prefix=/your/prefix and --etcd-endpoints=address to flanneld instead of --kube-subnet-mgr, so flannel gets its net-conf from the etcd server and not from the API server.
Keep in mind that you must push the net-conf to the etcd server first, for example as shown below.
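A sketch of pushing the net-conf with the etcd v2 API, assuming flannel's default /coreos.com/network prefix and the 10.244.0.0/16 pod network mentioned above (adjust the prefix to whatever you pass in --etcd-prefix):
etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'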
UPDATE
The problem (/var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory) can appear when the apiserver is executed without --admission-control=...,ServiceAccount,..., or if the kubelet is running inside a container (e.g. hyperkube), and the latter was my case. If you want to run k8s components inside a container you need to pass the 'shared' option to the kubelet volume:
/var/lib/kubelet/:/var/lib/kubelet:rw,shared
Furthermore, enable the same option for Docker in docker.service:
MountFlags=shared
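After editing the unit file, the standard systemd steps apply:
sudo systemctl daemon-reload
sudo systemctl restart docker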
Now the question is: is there a security hole with shared mount?

Google Container Engine: Kubernetes is not exposing external IP after creating container

I am trying to create a "Hello Node" sample application in Google Container Engine, following this tutorial
However, even after running the command kubectl expose rc hello-node --type="LoadBalancer", it is not exposing an external IP to access the port.
vagrant@docker-host:~/node-app$ kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node:v1 --port=8080
replicationcontroller "hello-node" created
vagrant@docker-host:~/node-app$ kubectl expose rc hello-node --type="LoadBalancer"
service "hello-node" exposed
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hello-node 10.163.248.xxx 8080/TCP run=hello-node 14s
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hello-node 10.163.248.xxx 8080/TCP run=hello-node 23s
After a few moments, the external IP of the load balancer is listed in
the IP(s) column of the service
Usually it takes 1-2 minutes. You were waiting only 23 seconds. Try waiting a few moments more and it'll be OK.
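You can also watch the service until the external IP shows up instead of polling manually:
kubectl get service hello-node --watch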
