Does kubernetes kubectl run with image create a deployment YAML file? - docker

I am trying to use Minikube and Docker to understand the concepts of Kubernetes architecture.
I created a Spring Boot application with a Dockerfile, created a tag and pushed it to Dockerhub.
In order to deploy the image in a K8s cluster, I issued the commands below:
# deployed the image
$ kubectl run <deployment-name> --image=<username/imagename>:<version> --port=<port the app runs>
# exposed the port as nodeport
$ kubectl expose deployment <deployment-name> --type=NodePort
Everything worked, and I am able to see 1 pod running with kubectl get pods.
The Docker image I pushed to Dockerhub didn't have any deployment YAML file.
The command below produced a YAML output.
Does the kubectl command create a deployment YAML file out of the box?
$ kubectl get deployments --output yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2019-12-24T14:59:14Z"
    generation: 1
    labels:
      run: hello-service
    name: hello-service
    namespace: default
    resourceVersion: "76195"
    selfLink: /apis/apps/v1/namespaces/default/deployments/hello-service
    uid: 90950172-1c0b-4b9f-a339-b47569366f4e
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        run: hello-service
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          run: hello-service
      spec:
        containers:
        - image: thirumurthi/hello-service:0.0.1
          imagePullPolicy: IfNotPresent
          name: hello-service
          ports:
          - containerPort: 8800
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2019-12-24T14:59:19Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2019-12-24T14:59:14Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: ReplicaSet "hello-service-75d67cc857" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

I think the easiest way to understand what's going on under the hood when you create kubernetes resources using imperative commands (as opposed to the declarative approach of writing and applying yaml definition files) is to run a simple example with 2 additional flags:
--dry-run
and
--output yaml
The names of these flags are rather self-explanatory, so there is no need for further comment on what they do. (Note that in kubectl 1.18 and later, the bare --dry-run flag is deprecated; use --dry-run=client instead.) You can simply try out the below examples and you'll see the effect:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
As you can see, it produces the appropriate yaml manifest without applying it or creating an actual deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx-example
    spec:
      containers:
      - image: nginx:latest
        name: nginx-example
        ports:
        - containerPort: 80
        resources: {}
status: {}
The same works with the expose command:
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml
produces the following output:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-example
  type: NodePort
status:
  loadBalancer: {}
And now the coolest part. You can use simple output redirection:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml > nginx-example-deployment.yaml
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml > nginx-example-nodeport-service.yaml
to save the generated Deployment and NodePort Service definitions, so you can modify them further if needed and apply them using either kubectl apply -f filename.yaml or kubectl create -f filename.yaml.
Btw. kubectl run and kubectl expose are generator-based commands. As you may have noticed when creating your deployment (you probably got the message: kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.), they use the --generator flag. If you don't specify it explicitly, it gets a default value, which for kubectl run is --generator=deployment/apps.v1beta1, so by default it creates a Deployment. You can override it by providing --generator=run-pod/v1, in which case a single Pod is created instead of a Deployment. Applied to our previous example, it may look like this:
kubectl run --generator=run-pod/v1 nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
I hope this answers your question and clarifies a bit the mechanism of creating kubernetes resources using imperative commands.

Yes, kubectl run creates a Deployment. If you look at the labels field, you can see run: hello-service. This label is later used in the Service's selector.
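To make that label-to-selector link concrete, here is a minimal sketch of the Service that kubectl expose would generate around that label (names and ports are taken from the Deployment output above; this is an illustration, not the verbatim generated manifest):

```yaml
# Sketch of the Service created by:
#   kubectl expose deployment hello-service --type=NodePort
# The selector reuses the `run: hello-service` label that `kubectl run`
# put on the Deployment's pod template; this is how traffic finds the pods.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    run: hello-service   # must match the pod template's labels
  ports:
  - port: 8800
    targetPort: 8800     # containerPort from the Deployment above
```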

Related

Issues with setting up kubernetes for local testing using docker image

I created a docker image of my app, which runs an internal server exposed on port 8080.
Then I tried to create a local kubernetes cluster for testing, using the following set of commands:
$ kubectl create deployment --image=test-image test-app
$ kubectl set env deployment/test-app DOMAIN=cluster
$ kubectl expose deployment test-app --port=8080 --name=test-service
I am using Docker Desktop on Windows to run kubernetes. This exposes my cluster at the external IP localhost, but I cannot access my app. I checked the status of the pods and noticed this issue:
$ kubectl get pods
NAME          READY   STATUS             RESTARTS   AGE
test-66-ps2   0/1     ImagePullBackOff   0          8h
test-6f-6jh   0/1     InvalidImageName   0          7h42m
May I know what could be causing this issue, and how can I make it work locally?
Thanks, I look forward to your suggestions!
My YAML file for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2021-10-13T18:00:15Z"
  generation: 4
  labels:
    app: test-app
  name: test-app
  namespace: default
  resourceVersion: "*****"
  uid: ************
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: test-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-app
    spec:
      containers:
      - env:
        - name: DOMAIN
          value: cluster
        image: C:\Users\test-image
        imagePullPolicy: Always
        name: e20f23453f27
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2021-10-13T18:00:15Z"
    lastUpdateTime: "2021-10-13T18:00:15Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2021-10-13T18:39:51Z"
    lastUpdateTime: "2021-10-13T18:39:51Z"
    message: ReplicaSet "test-66" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 4
  replicas: 2
  unavailableReplicas: 2
  updatedReplicas: 1
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-10-13T18:01:49Z"
  labels:
    app: test-app
  name: test-service
  namespace: default
  resourceVersion: "*****"
  uid: *****************
spec:
  clusterIP: 10.161.100.100
  clusterIPs:
  - 10.161.100.100
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 41945
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: test-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: localhost
The reason you are facing the ImagePullBackOff and InvalidImageName issues is that your app image does not exist on the kubernetes cluster; it only exists on your local machine. Note also that the image field contains a Windows file path (C:\Users\test-image), which is not a valid image reference, hence the InvalidImageName error.
To resolve this for testing purposes, you can either build the image with docker directly on the kubernetes cluster's nodes, or upload your image to Docker Hub and set your deployment to pull the image from Docker Hub.
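As a concrete sketch of the Docker Hub option (the repository name below is a placeholder, not from the original post), the container spec would change from the Windows file path to a valid image reference:

```yaml
spec:
  containers:
  - name: test-app
    # Hypothetical Docker Hub repository -- replace the placeholder with
    # your real account/repo after running `docker push`.
    image: <your-dockerhub-username>/test-image:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
```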

Kubernetes: Error from server (NotFound): deployments.apps "kube-verify" not found

I set up a Kubernetes cluster in my private network and managed to deploy a test pod.
Now I want to expose an external IP for the service, but when I run:
kubectl get deployments kube-verify
I get:
Error from server (NotFound): deployments.apps "kube-verify" not found
EDIT
OK, I'm trying a new approach:
I have created a namespace called verify-cluster.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: verify-cluster
  namespace: verify-cluster
  labels:
    app: verify-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: verify-cluster
  template:
    metadata:
      labels:
        app: verify-cluster
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
and service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: verify-cluster
  namespace: verify-cluster
spec:
  type: NodePort
  selector:
    app: verify-cluster
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
then I run:
kubectl create -f deployment.yaml
kubectl create -f service.yaml
then check with:
kubectl get all -n verify-cluster
but when I then want to check the deployment with:
kubectl get all -n verify-cluster
and get:
Error from server (NotFound): deployments.apps "verify-cluster" not found
I hope that's better for reproduction?
EDIT 2
When I deploy it to the default namespace it runs directly, so the issue must be something with the namespace.
I guess that you might have forgotten to create the namespace:
File my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
Then:
kubectl create -f ./my-namespace.yaml
First you need to get the deployment with:
$ kubectl get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
test-deployment   1/1     1            1           15m
If you have used a namespace, then:
$ kubectl get deployment -n your-namespace
Then use the exact name in further commands, for example:
kubectl scale deployment test-deployment --replicas=10
I have replicated the use case with the config files you provided. Everything works well on my end. Make sure that the namespace is created correctly, without any typos.
Alternatively, you can create the namespace using the command below:
kubectl create namespace <insert-namespace-name-here>
Refer to this documentation for detailed information on creating a namespace.
Another approach could be to apply your configuration directly to the requested namespace:
kubectl apply -f deployment.yml -n verify-cluster
kubectl apply -f service.yml -n verify-cluster

How to resolve ImagePullBackOff error in local?

I have a .NET Core application image and I am trying to create a deployment in local kubernetes.
I created the docker image as below:
docker tag microservicestest:dev microservicestest .
docker build -t microservicestest .
docker run -d -p 8080:80 --name myapp microservicetest
Then I created the deployment as below:
kubectl run microservicestest-deployment --image=microservicestest:latest --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort
Then when I run kubectl get pods I see the error below.
Below is the output when I run docker images.
Below is the deployment output:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-09-22T04:29:14Z"
  generation: 1
  labels:
    run: microservicestest-deployment
  name: microservicestest-deployment
  namespace: default
  resourceVersion: "17282"
  selfLink: /apis/apps/v1/namespaces/default/deployments/microservicestest-deployment
  uid: bf75410a-d332-4016-9757-50d534114599
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: microservicestest-deployment
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: microservicestest-deployment
    spec:
      containers:
      - image: microservicestest:latest
        imagePullPolicy: Always
        name: microservicestest-deployment
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: ReplicaSet "microservicestest-deployment-5c67d587b9" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 3
  unavailableReplicas: 3
  updatedReplicas: 3
I am not able to understand why my pods are not able to pull the image locally. Can someone help me identify the mistake I am making here? Any help would be appreciated. Thank you.
If you are using minikube, you first need to build the images against the docker daemon hosted inside the minikube machine, by running eval $(minikube docker-env) in your bash session (for Windows, check here).
Then you need to set the image pull policy to Never or IfNotPresent so that Kubernetes looks for local images:
spec:
  containers:
  - image: my-image:my-tag
    name: my-app
    imagePullPolicy: Never
Check here in the official documentation:
By default, the kubelet tries to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
As you are not using a yaml file, you can create the resources like this:
kubectl run microservicestest-deployment --image=microservicestest:latest --image-pull-policy=Never --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort

Need a working Kubectl binary inside an image

My goal is to have a pod with a working Kubectl binary inside.
Unfortunately, every kubectl image from docker hub that I booted using basic yaml resulted in CrashLoopBackOff or similar.
Has anyone got some yaml (deployment, pod, etc.) that would get me my kubectl?
I tried a bunch of images with this basic yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl-demo
  labels:
    app: deploy
    role: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy
      role: backend
  template:
    metadata:
      labels:
        app: deploy
        role: backend
    spec:
      containers:
      - name: kubectl-demo
        image: <SOME_IMAGE>
        ports:
        - containerPort: 80
Thx
Or, you can do this. It works in my context, with kubernetes on VMs, where I know where the kubeconfig file is. You would need to make the necessary changes to make it work in your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl
spec:
  replicas: 1
  selector:
    matchLabels:
      role: kubectl
  template:
    metadata:
      labels:
        role: kubectl
    spec:
      containers:
      - image: viejo/kubectl
        name: kubelet
        tty: true
        securityContext:
          privileged: true
        volumeMounts:
        - name: kube-config
          mountPath: /root/.kube/
      volumes:
      - name: kube-config
        hostPath:
          path: /home/$USER/.kube/
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
This is the result:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
kubectl-cb8bfc6dd-nv6ht 1/1 Running 0 70s
$ kubectl exec kubectl-cb8bfc6dd-nv6ht -- kubectl get no
NAME STATUS ROLES AGE VERSION
kubernetes-1-17-master Ready master 16h v1.17.3
kubernetes-1-17-worker Ready <none> 16h v1.17.3
As Suren already explained in the comments, kubectl is not a daemon, so it will run, exit, and cause the container to restart.
There are a couple of workarounds for this. One of them is to use the sleep command with the infinity argument. This keeps the Pod alive, prevents it from restarting, and allows you to exec into it.
Here's an example of how to do that:
spec:
  containers:
  - image: bitnami/kubectl
    command:
    - sleep
    - "infinity"
    name: kctl
Let me know if this helps.
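Putting that fragment together, a complete minimal Pod manifest might look like this (a sketch; the pod name is arbitrary, and the pod's service account still needs RBAC permissions before kubectl inside it can query the API):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-demo   # arbitrary name for this sketch
spec:
  containers:
  - image: bitnami/kubectl
    # Override the image entrypoint so the container stays alive
    # instead of running kubectl once and exiting.
    command:
    - sleep
    - "infinity"
    name: kctl
```

You could then run commands in it with, for example, kubectl exec -it kubectl-demo -- kubectl get pods.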

Minikube services work when run from command line, but applying through YAML doesn't work

Here's an image of my Kubernetes services.
Todo-front-2 is a working instance of my app, which I deployed from the command line:
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never
kubectl expose deployment todo-front --type=NodePort --port=3000
And it's working great. Now I want to move on and use a todo-front.yaml file to deploy and expose my service. The todo-front service is my current attempt at this. My deployment file looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: todo-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-front
  template:
    metadata:
      labels:
        app: todo-front
    spec:
      containers:
      - name: todo-front
        image: todo-front:v7
        env:
        - name: REACT_APP_API_ROOT
          value: "http://localhost:12000"
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: todo-front
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: todo-front
I deploy it using:
kubectl apply -f deployment/todo-front.yaml
Here is the output
But when I run
minikube service todo-front
It redirects me to a URL saying "Site can't be reached".
I can't figure out what I'm doing wrong. The ports should be OK, and my cluster should be OK, since I can get it working using only the command line without external YAML files. Both deployments are also using the same docker image. I have also tried changing all the "3000" ports to something different, in case they clash with the existing deployment todo-front-2, with no luck.
Here is also a screenshot of pods and their status:
Anyone with more experience with Kube and Docker cares to take a look? Thank you!
You can run the commands below to generate the yaml files without applying them to the cluster, then compare them with the yamls you created manually to see if there is a mismatch. Alternatively, instead of writing the yamls by hand, you can apply the generated yamls directly.
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never --dry-run -o yaml > todo-front-deployment.yaml
kubectl expose deployment todo-front --type=NodePort --port=3000 --dry-run -o yaml > todo-front-service.yaml
