Issues with setting up kubernetes for local testing using docker image - docker

I created a Docker image of my app, which runs an internal server exposed on port 8080.
Then I tried to deploy it to a local Kubernetes cluster for testing, using the following set of commands.
$ kubectl create deployment --image=test-image test-app
$ kubectl set env deployment/test-app DOMAIN=cluster
$ kubectl expose deployment test-app --port=8080 --name=test-service
I am using Docker Desktop on Windows to run Kubernetes. This exposes my cluster at the external IP localhost, but I cannot access my app. I checked the status of the pods and noticed this issue:
$ kubectl get pods
NAME          READY   STATUS             RESTARTS   AGE
test-66-ps2   0/1     ImagePullBackOff   0          8h
test-6f-6jh   0/1     InvalidImageName   0          7h42m
May I know what could be causing this issue, and how can I make it work locally?
Thanks, I look forward to your suggestions!
My YAML file for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2021-10-13T18:00:15Z"
  generation: 4
  labels:
    app: test-app
  name: test-app
  namespace: default
  resourceVersion: "*****"
  uid: ************
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: test-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-app
    spec:
      containers:
      - env:
        - name: DOMAIN
          value: cluster
        image: C:\Users\test-image
        imagePullPolicy: Always
        name: e20f23453f27
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2021-10-13T18:00:15Z"
    lastUpdateTime: "2021-10-13T18:00:15Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2021-10-13T18:39:51Z"
    lastUpdateTime: "2021-10-13T18:39:51Z"
    message: ReplicaSet "test-66" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 4
  replicas: 2
  unavailableReplicas: 2
  updatedReplicas: 1
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-10-13T18:01:49Z"
  labels:
    app: test-app
  name: test-service
  namespace: default
  resourceVersion: "*****"
  uid: *****************
spec:
  clusterIP: 10.161.100.100
  clusterIPs:
  - 10.161.100.100
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 41945
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: test-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: localhost

The reason you are facing the ImagePullBackOff and InvalidImageName issues is that your app image does not exist in a registry your Kubernetes cluster can pull from; it only exists on your local machine. In addition, image: C:\Users\test-image is a Windows file path rather than a valid image reference, which is what triggers InvalidImageName.
To resolve this for testing purposes, you can either mount your project workspace onto the node backing your Kubernetes cluster and build the image there with Docker, or upload your image to Docker Hub and point your deployment at the Docker Hub image.
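A minimal sketch of the Docker Hub route, assuming the image was built locally as test-image:latest, where <your-dockerhub-user> is a placeholder for your registry account (the repository must be public, or you will also need an imagePullSecret):
$ docker tag test-image:latest <your-dockerhub-user>/test-image:latest
$ docker push <your-dockerhub-user>/test-image:latest
$ kubectl set image deployment/test-app e20f23453f27=<your-dockerhub-user>/test-image:latest
Here e20f23453f27 is the container name from the deployment above; once the image reference is valid and pullable, the pods should come up.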

Related

Can't access my local kubernetes service over the internet

Implementation Goal
Expose Zookeeper instance, running on kubernetes, to the internet.
(configuration & version information provided at the bottom)
Implementation Attempt
I currently have a minikube cluster running on Ubuntu 14.04, backed by Docker containers.
I'm running a bare-metal k8s cluster, and I'm trying to expose a ZooKeeper service to the internet. Seeing as my cluster is not running on a cloud provider, I set up MetalLB in order to provide a network load-balancer implementation for my ZooKeeper service.
On startup everything looks good, an external IP is assigned and I can access it from the same host via a curl command.
$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-5c9894b5cd-9gh8m   1/1     Running   0          5h59m
speaker-j2z8q                 1/1     Running   0          5h59m
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
kubernetes   ClusterIP      10.xxx.xxx.xxx   <none>        443/TCP                         6d19h
zk-cs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2181:30035/TCP                  56m
zk-hs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2888:30664/TCP,3888:31113/TCP   6m15s
When I curl the above-mentioned external IPs, I get a valid response:
$ curl -D- "http://172.1.1.x:2181"
curl: (52) Empty reply from server
So far it all looks good; I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/networking knowledge gets me. I'm finding it impossible to expose this LB to the internet. I've tried running minikube tunnel, which I had high hopes for, only to be deeply disappointed.
Running a curl command from another node whilst minikube tunnel is running will just see the request time out.
$ curl -D- "http://172.1.1.x:2181"
curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out
At this point, as I mentioned before, I'm stuck.
Is there any way that I can get this service exposed to the internet without giving my soul to AWS or GCP?
Any help will be greatly appreciated.
Service Configuration
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    targetPort: 2888
    name: server
    protocol: TCP
  - port: 3888
    targetPort: 3888
    name: leader-election
    protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - name: client
    protocol: TCP
    port: 2181
    targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zookeeper
        imagePullPolicy: Always
        image: "library/zookeeper:3.6"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
        - name: zoo-config
          mountPath: /conf
      volumes:
      - name: zoo-config
        configMap:
          name: zoo-config
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=10
    syncLimit=4
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.1.1.1-172.1.1.10
minikube: v1.13.1
docker: 18.06.3-ce
You can do it with minikube, but the idea of minikube is just to test stuff in your local environment. So, by default, it does not have the correct iptables permissions, and yes, you can adjust that, but if your goal is to run without any cloud provider, I'd highly recommend you use kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
This tool will give you a very customizable cluster configuration, and you will be able to sort out your networking problems without headaches.
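For reference, a minimal sketch of bootstrapping such a cluster with kubeadm; the pod CIDR is just an example and depends on the CNI plugin you choose, and the <control-plane-ip>, <token> and <hash> placeholders come from the kubeadm init output:
# on the control-plane node
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# on each worker node, using the join command printed by kubeadm init
$ sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
With a bare-metal cluster like this, MetalLB can then hand out external addresses, but they are only reachable from the internet if the address pool you configure is itself routable from outside your network.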

How to resolve ImagePullBackOff error in local?

I have a .NET Core application image, and I am trying to create a deployment in local Kubernetes.
I created docker image as below.
docker tag microservicestest:dev microservicestest .
docker build -t microservicestest .
docker run -d -p 8080:80 --name myapp microservicetest
Then I created deployment as below.
kubectl run microservicestest-deployment --image=microservicestest:latest --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort
Then, when I run kubectl get pods, I see an ImagePullBackOff error on the pods (the screenshots of that output, and of the output when I run docker images, are not reproduced here).
Below is the deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-09-22T04:29:14Z"
  generation: 1
  labels:
    run: microservicestest-deployment
  name: microservicestest-deployment
  namespace: default
  resourceVersion: "17282"
  selfLink: /apis/apps/v1/namespaces/default/deployments/microservicestest-deployment
  uid: bf75410a-d332-4016-9757-50d534114599
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: microservicestest-deployment
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: microservicestest-deployment
    spec:
      containers:
      - image: microservicestest:latest
        imagePullPolicy: Always
        name: microservicestest-deployment
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: ReplicaSet "microservicestest-deployment-5c67d587b9" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 3
  unavailableReplicas: 3
  updatedReplicas: 3
I am not able to understand why my pods are not able to pull the image from local. Can someone help me identify the mistake I am making here? Any help would be appreciated. Thank you.
If you are using minikube, you first need to build the images against the Docker daemon hosted in the minikube machine, by running eval $(minikube docker-env) in your bash session (for Windows, check here).
Then you need to set the image pull policy to Never or IfNotPresent so Kubernetes looks for local images:
spec:
  containers:
  - image: my-image:my-tag
    name: my-app
    imagePullPolicy: Never
Check the official documentation here:
By default, the kubelet tries to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
As you are not using a YAML file, you can create the resources like this:
kubectl run microservicestest-deployment --image=microservicestest:latest --image-pull-policy=Never --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort
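Putting the pieces together, a minimal sketch of the build step on minikube, assuming the Dockerfile in the current directory produces microservicestest:latest:
# point the local docker CLI at minikube's Docker daemon
$ eval $(minikube docker-env)
# build the image inside the minikube VM so the kubelet can find it locally
$ docker build -t microservicestest:latest .
After that, create the deployment with the kubectl run command above (using --image-pull-policy=Never) and expose it as a NodePort service.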

(Kubernetes + Docker) Skaffold keeps terminating my deployment files : Error: could not stabilize within 2m0s: context deadline exceeded

I'm trying to deploy a MicroServices system on my local machine using Skaffold.
ingress-srv.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: ticketing.dot
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-srv
          servicePort: 3000
auth-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: ****MYDOCKERID****/auth
        env:
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
auth-mongo-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
      - name: auth-mongo
        image: mongo
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
  - name: db
    protocol: TCP
    port: 27017
    targetPort: 27017
I've followed the guidelines in the manual:
https://kubernetes.github.io/ingress-nginx/deploy/
and hit:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.0/deploy/static/provider/cloud/deploy.yaml
However, Skaffold keeps terminating the deployment:
Listing files to watch...
 - ****MYDOCKERID****/auth
Generating tags...
 - ****MYDOCKERID****/auth -> ****MYDOCKERID****/auth:683e8db
Checking cache...
 - ****MYDOCKERID****/auth: Found Locally
Tags used in deployment:
 - ****MYDOCKERID****/auth -> ****MYDOCKERID****/auth:3c4bb66ff693320b5fac3fde91906768f8b54b968813b226822d057d1dd3a995
Starting deploy...
 - deployment.apps/auth-depl created
 - service/auth-srv created
 - deployment.apps/auth-mongo-depl created
 - service/auth-mongo-srv created
 - ingress.extensions/ingress-service created
Waiting for deployments to stabilize...
 - deployment/auth-depl:
 - deployment/auth-mongo-depl:
 - deployment/auth-depl: waiting for rollout to finish: 0 of 1 updated replicas are available...
 - deployment/auth-mongo-depl: waiting for rollout to finish: 0 of 1 updated replicas are available...
 - deployment/auth-mongo-depl is ready. [1/2 deployment(s) still pending]
 - deployment/auth-depl failed. Error: could not stabilize within 2m0s: context deadline exceeded.
Cleaning up...
 - deployment.apps "auth-depl" deleted
 - service "auth-srv" deleted
 - deployment.apps "auth-mongo-depl" deleted
 - service "auth-mongo-srv" deleted
 - ingress.extensions "ingress-service" deleted
exiting dev mode because first deploy failed: 1/2 deployment(s) failed
How can we fix this annoying issue?
EDIT 9:44 AM ISRAEL TIME :
C:\Development-T410\Micro Services - JAN>kubectl get pods
NAME                                 READY   STATUS                       RESTARTS   AGE
auth-depl-645bbf7b9d-llp2q           0/1     CreateContainerConfigError   0          115s
auth-depl-c6c765d7c-7wvcg            0/1     CreateContainerConfigError   0          28m
auth-mongo-depl-6b594c4847-4kzzt     1/1     Running                      0          115s
client-depl-5888f95b59-vznh6         1/1     Running                      0          114s
nats-depl-7dfccdf5-874vm             1/1     Running                      0          114s
orders-depl-74f4d48559-cbwlp         0/1     CreateContainerConfigError   0          114s
orders-depl-78fc845b4-9tfml          0/1     CreateContainerConfigError   0          28m
orders-mongo-depl-688676d675-lrvhp   1/1     Running                      0          113s
tickets-depl-7cc7ddbbff-z9pvc        0/1     CreateContainerConfigError   0          113s
tickets-depl-8574fc8f9b-tm6p4        0/1     CreateContainerConfigError   0          28m
tickets-mongo-depl-b95f45947-hf6wq   1/1     Running                      0          113s
C:\Development-T410\Micro Services>kubectl logs auth-depl-c6c765d7c-7wvcg
Error from server (BadRequest): container "auth" in pod "auth-depl-c6c765d7c-7wvcg" is waiting to start: CreateContainerConfigError
Looks like your auth-depl deployment is failing. Possibly the container is crashing or erroring out. To debug, you can view the pod logs:
$ kubectl logs auth-depl-xxxxxxxxxx-xxxxx
Make sure you run skaffold with the --cleanup=false option so that you can debug. For example,
$ skaffold dev --cleanup=false
Update:
Based on the logs, it looks like it's an issue with your Kubernetes Secret and how it's defined, possibly its format or the YAML format. This answer sheds some light on what the problem may be: Pod status as `CreateContainerConfigError` in Minikube cluster
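In this setup the jwt-secret referenced by auth-depl is the usual suspect; a minimal sketch of creating it, where <some-secret-value> is a placeholder you would replace with your own signing key:
$ kubectl create secret generic jwt-secret --from-literal=JWT_KEY=<some-secret-value>
Once the secret exists in the same namespace as the deployment, the CreateContainerConfigError pods should start on their next retry.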
You should add the environment variables for the mongo image in your deployment file:
env:
- name: MONGO_INITDB_ROOT_USERNAME
  value: root
- name: MONGO_INITDB_ROOT_PASSWORD
  value: "rootuser"

Need a working Kubectl binary inside an image

My goal is to have a pod with a working Kubectl binary inside.
Unfortunately, every kubectl image from Docker Hub that I booted using basic YAML resulted in CrashLoopBackOff or similar.
Has anyone got some YAML (deployment, pod, etc.) that would get me my kubectl?
I tried a bunch of images with this basic YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl-demo
  labels:
    app: deploy
    role: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy
      role: backend
  template:
    metadata:
      labels:
        app: deploy
        role: backend
    spec:
      containers:
      - name: kubectl-demo
        image: <SOME_IMAGE>
        ports:
        - containerPort: 80
Thx
Or, you can do this. It works in my context, with Kubernetes on VMs, where I know where the kubeconfig file is. You would need to make the necessary changes to make it work in your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl
spec:
  replicas: 1
  selector:
    matchLabels:
      role: kubectl
  template:
    metadata:
      labels:
        role: kubectl
    spec:
      containers:
      - image: viejo/kubectl
        name: kubelet
        tty: true
        securityContext:
          privileged: true
        volumeMounts:
        - name: kube-config
          mountPath: /root/.kube/
      volumes:
      - name: kube-config
        hostPath:
          path: /home/$USER/.kube/
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
This is the result:
$ kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
kubectl-cb8bfc6dd-nv6ht   1/1     Running   0          70s
$ kubectl exec kubectl-cb8bfc6dd-nv6ht -- kubectl get no
NAME                     STATUS   ROLES    AGE   VERSION
kubernetes-1-17-master   Ready    master   16h   v1.17.3
kubernetes-1-17-worker   Ready    <none>   16h   v1.17.3
As Suren already explained in the comments, kubectl is not a daemon, so kubectl will run, exit and cause the container to restart.
There are a couple of workarounds for this. One of them is to use the sleep command with the infinity argument. This keeps the Pod alive, prevents it from restarting and allows you to exec into it.
Here's an example of how to do that:
spec:
  containers:
  - image: bitnami/kubectl
    command:
    - sleep
    - "infinity"
    name: kctl
Let me know if this helps.
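For completeness, a minimal sketch of a full Deployment built around that snippet; the names are placeholders, and what kubectl can actually do in-cluster still depends on the ServiceAccount and RBAC you attach to the Pod:
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubectl-demo
  template:
    metadata:
      labels:
        app: kubectl-demo
    spec:
      containers:
      - name: kctl
        image: bitnami/kubectl
        # keep the container alive so you can kubectl exec into it
        command: ["sleep", "infinity"]
EOF
$ kubectl exec <kubectl-demo-pod-name> -- kubectl version --client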

Kubernetes PetSet - FailedCreate of persistent volume

I'm trying to set up a Kubernetes PetSet as described in the documentation. When I create the PetSet, I can't seem to get the PersistentVolumeClaim to bind to the persistent volume. Here is my YAML file defining the PetSet:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: 'ml-nodes'
spec:
  serviceName: "ml-service"
  replicas: 1
  template:
    metadata:
      labels:
        app: marklogic
        tier: backend
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: 'ml'
        image: "192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1"
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
          name: ml8000
          protocol: TCP
        - containerPort: 8001
          name: ml8001
        - containerPort: 7997
          name: ml7997
        - containerPort: 8002
          name: ml8002
        - containerPort: 8040
          name: ml8040
        - containerPort: 8041
          name: ml8041
        - containerPort: 8042
          name: ml8042
        volumeMounts:
        - name: ml-data
          mountPath: /data/vol-data
        lifecycle:
          preStop:
            exec:
              # SIGTERM triggers a quick exit; gracefully terminate instead
              command: ["/etc/init.d/MarkLogic stop"]
      volumes:
      - name: ml-data
        persistentVolumeClaim:
          claimName: ml-data
      terminationGracePeriodSeconds: 30
  volumeClaimTemplates:
  - metadata:
      name: ml-data
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
If I do a 'describe' on my created PetSet I see the following:
Name: ml-nodes
Namespace: default
Image(s): 192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1
Selector: app=marklogic,tier=backend
Labels: app=marklogic,tier=backend
Replicas: 1 current / 1 desired
Annotations: <none>
CreationTimestamp: Tue, 20 Sep 2016 13:23:14 -0400
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen   LastSeen   Count   From        SubobjectPath   Type      Reason             Message
---------   --------   -----   ----        -------------   ----      ------             -------
33m         33m        1       {petset }                   Warning   FailedCreate       pvc: ml-data-ml-nodes-0, error: persistentvolumeclaims "ml-data-ml-nodes-0" not found
33m         33m        1       {petset }                   Normal    SuccessfulCreate   pet: ml-nodes-0
I'm trying to run this in a minikube environment on my local machine. Not sure what I'm missing here???
There is an open issue on minikube for this. Persistent volume provisioning support appears to be unfinished in minikube at this time.
For it to work with local storage, it needs the following flag on the controller manager and that isn't currently enabled on minikube.
--enable-hostpath-provisioner[=false]: Enable HostPath PV
provisioning when running without a cloud provider. This allows
testing and development of provisioning features. HostPath
provisioning is not supported in any way, won't work in a multi-node
cluster, and should not be used for anything other than testing or
development.
Reference: http://kubernetes.io/docs/admin/kube-controller-manager/
For local development/testing, it would work if you were to use hack/local-up-cluster.sh to start a local cluster, after setting an environment variable:
export ENABLE_HOSTPATH_PROVISIONER=true
You should be able to use PetSets in the latest version of minikube as it uses kubernetes v1.4.1 as the default version.
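If dynamic provisioning still isn't available in your environment, another option for local testing is to pre-create a hostPath PersistentVolume large enough for the claim template to bind to. A rough sketch, where the PV name and path are placeholders; depending on how your version handles the alpha storage-class annotation, you may also need to drop volume.alpha.kubernetes.io/storage-class from the claim template so the claim falls back to static binding:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-data-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  # hostPath is only suitable for single-node test clusters like minikube
  hostPath:
    path: /data/ml-data
EOF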
