Virtual Kubelet with AKS - docker

I followed the doc here
When I tried to create the Windows deployment on the virtual node, I got this error:
The Deployment "nanoserver-iis" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"nanoserver-iis"}: selector does not match template labels
kubectl get nodes
NAME                                               STATUS   ROLES   AGE   VERSION
aks-agentpool-27326293-0                           Ready    agent   15m   v1.11.3
virtual-kubelet-aci-connector-windows-westeurope   Ready    agent   9s    v1.11.2
virtual-kubelet-windows.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nanoserver-iis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aci-helloworld
  template:
    metadata:
      labels:
        app: nanoserver-iis
    spec:
      containers:
      - name: nanoserver-iis
        image: microsoft/iis:nanoserver
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: virtual-kubelet-aci-connector-windows-westeurope
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Equal
        value: azure
        effect: NoSchedule

Try updating the deployment definition with the following. There is an inconsistency in the YAML: the labels don't match. The labels under spec.selector.matchLabels and under spec.template.metadata.labels need to match, but in your deployment definition they are set to different values, aci-helloworld and nanoserver-iis respectively.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nanoserver-iis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nanoserver-iis
  template:
    metadata:
      labels:
        app: nanoserver-iis
    spec:
      containers:
      - name: nanoserver-iis
        image: microsoft/iis:nanoserver
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: virtual-kubelet-aci-connector-windows-westeurope
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Equal
        value: azure
        effect: NoSchedule
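To confirm the fix, you can apply the updated manifest and check that the pod lands on the virtual node. A quick sketch, assuming the file name and labels above:

kubectl apply -f virtual-kubelet-windows.yaml
kubectl get deployment nanoserver-iis
kubectl get pods -l app=nanoserver-iis -o wide   # NODE should be the virtual-kubelet node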

Related

Can't pull image from private registry

I get this error (rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/registryName/projectName:latest": failed to unpack image on snapshotter overlayfs: unexpected media type text/html) when trying to deploy with the following deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <projectName>-deployment
  namespace: <projectNamespace>
spec:
  selector:
    matchLabels:
      app: <projectname>
  replicas: 1
  template:
    metadata:
      labels:
        app: <projectName>
    spec:
      containers:
      - name: private-reg
        image: <registry>/<projectName>:latest
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: docker-hub-cred
I can't understand what's happening, because with a simple pod.yml it works:
apiVersion: v1
kind: Pod
metadata:
  name: <projectName>-pod
  namespace: <projectNamespace>
spec:
  containers:
  - name: <projectName>
    image: <registry>/<projectName>:latest
  imagePullSecrets:
  - name: docker-hub-cred
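For what it's worth, the unexpected media type text/html error usually means the registry answered with an HTML page (for example an error or login page) instead of an image manifest. A hedged way to narrow it down is to compare the image reference the Deployment actually uses with the one from the working Pod, and to read the pull events; <pod-name> below is a placeholder:

kubectl -n <projectNamespace> get pods
kubectl -n <projectNamespace> describe pod <pod-name>
kubectl -n <projectNamespace> get deployment <projectName>-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'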

Kubectl create error "could not find expected"

I'm using Kubernetes version 1.20.5 with Docker 19.03.8 on a virtual machine. I'm trying to create a test ELK cluster with Kubernetes. When I run kubectl create, I get the following error:
error parsing testserver.yaml: error converting YAML to JSON: yaml: line 17: could not find expected ':'
I keep checking but can't find where the missing ":" should be. I validated the YAML with a YAML linter and it comes back valid. The YAML file looks like this:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
  name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode1
  name: testnode1
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode1
  template:
    metadata:
      labels:
        app: testnode1
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "false"
        - name: node.name
          value: testnode1
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode1
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode1-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode1-claim0
        hostPath:
          path: /logtest/es1
          type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
  name: testnode1-service
  namespace: testlog
  labels:
    app: testnode1
spec:
  type: NodePort
  ports:
  - port: 9200
    nodePort: 9201
    targetPort: 9200
    protocol: TCP
    name: testnode1-9200
  - port: 9300
    nodePort: 9301
    targetPort: 9300
    protocol: TCP
    name: testnode1-9300
  selector:
    app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode1
  namespace: testlog
  labels:
    app: testnode1
spec:
  clusterIP: None
  selector:
    app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode2
  name: testnode2
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode2
  template:
    metadata:
      labels:
        app: testnode2
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode2
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode2
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode2-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode2-claim0
        hostPath:
          path: /logtest/es2
          type: DirectoryOrCreate
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode2
  namespace: testlog
  labels:
    app: testnode2
spec:
  clusterIP: None
  selector:
    app: testnode2
----
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode3
  name: testnode3
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode3
  template:
    metadata:
      labels:
        app: testnode3
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode3
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode3
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode3-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode3-claim0
        hostPath:
          path: /logtest/es3
          type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode3
  namespace: testlog
  labels:
    app: testnode3
spec:
  clusterIP: None
  selector:
    app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://testnode1:9200
        - name: ELASTICSEARCH_URL
          value: http://testnode1:9200
        image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
        name: kibana
      # restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: testlog
  labels:
    app: kibana
spec:
  clusterIP: None
  selector:
    app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
  name: kibana-service
  namespace: testlog
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    nodePort: 5602
    targetPort: 5601
    protocol: TCP
    name: kibana
  selector:
    app: kibana
----
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch-hq
  name: elasticsearch-hq
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch-hq
  template:
    metadata:
      labels:
        app: elasticsearch-hq
    spec:
      containers:
      - image: elastichq/elasticsearch-hq
        name: elasticsearch-hq
      # restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-hq-service
  namespace: testlog
  labels:
    app: elasticsearch-hq
spec:
  type: NodePort
  ports:
  - port: 8081
    nodePort: 8081
    targetPort: 5000
    protocol: TCP
    name: elasticsearch-hq
  selector:
    app: elasticsearch-hq
There are a couple of issues in the YAML file:
You have used four dashes (----) as a document separator in some places, whereas the separator is three dashes (---).
Once you fix the first issue, you'll see the following errors for the NodePort services, since the valid range for nodePort is 30000-32767:
Error from server (Invalid): error when creating "testserver.yaml": Service "testnode1-service" is invalid: spec.ports[0].nodePort: Invalid value: 9201: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "kibana-service" is invalid: spec.ports[0].nodePort: Invalid value: 5602: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "elasticsearch-hq-service" is invalid: spec.ports[0].nodePort: Invalid value: 8081: provided port is not in the valid range. The range of valid ports is 30000-32767
Fixing both of these resolves the YAML issues.
Below is the full working yaml file:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
  name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode1
  name: testnode1
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode1
  template:
    metadata:
      labels:
        app: testnode1
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "false"
        - name: node.name
          value: testnode1
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode1
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode1-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode1-claim0
        hostPath:
          path: /logtest/es1
          type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
  name: testnode1-service
  namespace: testlog
  labels:
    app: testnode1
spec:
  type: NodePort
  ports:
  - port: 9200
    nodePort: 31201
    targetPort: 9200
    protocol: TCP
    name: testnode1-9200
  - port: 9300
    nodePort: 31301
    targetPort: 9300
    protocol: TCP
    name: testnode1-9300
  selector:
    app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode1
  namespace: testlog
  labels:
    app: testnode1
spec:
  clusterIP: None
  selector:
    app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode2
  name: testnode2
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode2
  template:
    metadata:
      labels:
        app: testnode2
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode2
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode2
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode2-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode2-claim0
        hostPath:
          path: /logtest/es2
          type: DirectoryOrCreate
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode2
  namespace: testlog
  labels:
    app: testnode2
spec:
  clusterIP: None
  selector:
    app: testnode2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode3
  name: testnode3
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode3
  template:
    metadata:
      labels:
        app: testnode3
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode3
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode3
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode3-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode3-claim0
        hostPath:
          path: /logtest/es3
          type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode3
  namespace: testlog
  labels:
    app: testnode3
spec:
  clusterIP: None
  selector:
    app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://testnode1:9200
        - name: ELASTICSEARCH_URL
          value: http://testnode1:9200
        image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
        name: kibana
      # restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: testlog
  labels:
    app: kibana
spec:
  clusterIP: None
  selector:
    app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
  name: kibana-service
  namespace: testlog
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    nodePort: 31602
    targetPort: 5601
    protocol: TCP
    name: kibana
  selector:
    app: kibana
---
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch-hq
  name: elasticsearch-hq
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch-hq
  template:
    metadata:
      labels:
        app: elasticsearch-hq
    spec:
      containers:
      - image: elastichq/elasticsearch-hq
        name: elasticsearch-hq
      # restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-hq-service
  namespace: testlog
  labels:
    app: elasticsearch-hq
spec:
  type: NodePort
  ports:
  - port: 8081
    nodePort: 31081
    targetPort: 5000
    protocol: TCP
    name: elasticsearch-hq
  selector:
    app: elasticsearch-hq
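Side note: if the exact node port numbers don't matter to you, you can omit nodePort entirely and the API server will allocate a free port from the 30000-32767 range automatically. A minimal sketch for one of the services above:

apiVersion: v1
kind: Service
metadata:
  name: testnode1-service
  namespace: testlog
spec:
  type: NodePort
  ports:
  - port: 9200
    targetPort: 9200    # nodePort omitted: auto-allocated from 30000-32767
  selector:
    app: testnode1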

Kubernetes apply service but endpoints is none

When I apply a service to a pod, the endpoints are always none. Does anyone know a possible root cause? I already checked that the selector matches what is defined in the deployment.yaml. Below are the deployment and service files that I used, along with the output of kubectl describe svc.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gethnode
      env: dev1
  template:
    metadata:
      labels:
        app: gethnode
        env: dev1
    spec:
      containers:
      - name: gethnode
        image: myserver.th/bc/gethnode:1.1
        ports:
        - containerPort: 8550
        env:
        - name: TZ
          value: Asia/Bangkok
        tty: true
        stdin: true
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
      imagePullSecrets:
      - name: regcred-harbor
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  type: ClusterIP
  ports:
  - name: tcp
    port: 8550
    targetPort: 8550
    protocol: TCP
  selector:
    app: gethnode
    env: dev1
kubectl describe svc

Name:              gethnode
Namespace:         mynamespace
Labels:            app=gethnode
                   env=dev1
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"gethnode","env":"dev1"},"name":"gethnode","namespace":"c...
Selector:          app=gethnode,env=dev1
Type:              ClusterIP
IP:                192.97.37.19
Port:              tcp  8550/TCP
TargetPort:        8550/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
kubectl get pods -n mynamespace --show-labels

NAME                              READY   STATUS             RESTARTS   AGE    LABELS
console-bctest-6bff897bf4-xmch8   1/1     Running            0          6d3h   app=bctest,env=dev1,pod-template-hash=6bff897bf4
console-dev1-595c47c678-s5mzz     1/1     Running            0          20d    app=console,env=dev1,pod-template-hash=595c47c678
gethnode-7f9b7bbd77-pcbfc         1/1     Running            0          3s     app=gethnode,env=dev1,pod-template-hash=7f9b7bbd77
gotty-dev1-59dcb68f45-4mwds       0/2     ImagePullBackOff   0          20d    app=gotty,env=dev1,pod-template-hash=59dcb68f45
kubectl get svc gethnode -n mynamespace -o wide

NAME       TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE   SELECTOR
gethnode   ClusterIP   192.107.220.229   <none>        8550/TCP   64m   app=gethnode,env=dev1
Remove env: dev1 from the selector of the service
apiVersion: v1
kind: Service
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  type: ClusterIP
  ports:
  - name: tcp
    port: 8550
    targetPort: 8550
    protocol: TCP
  selector:
    app: gethnode
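After changing the selector, a quick way to confirm that the Service now matches the pod and that the Endpoints object gets populated (using the names from above):

kubectl -n mynamespace get pods -l app=gethnode --show-labels
kubectl -n mynamespace get endpoints gethnode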
I had the same issue, and what I did was delete the Deployment, the associated Secrets, the Service, and the Ingress to start fresh. Then I made sure that my Deployment's labels were consistent with my Service's, specifically the app.kubernetes.io/name label: I used to have just name in my deployment but app.kubernetes.io/name in my service, which caused the discrepancy. In any case, now I get endpoints populated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook
  namespace: apps
  labels:
    app.kubernetes.io/name: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: webhook
  template:
    metadata:
      labels:
        app.kubernetes.io/name: webhook
    spec:
      containers:
      - name: webhook
        image: registry.min.dev/minio/webhook:latest
        ports:
        - name: http
          containerPort: 23411
        env:
        - name: GH_TOKEN
          valueFrom:
            secretKeyRef:
              name: webhooksecret
              key: GH_TOKEN
      imagePullSecrets:
      - name: registry-creds

apiVersion: v1
kind: Service
metadata:
  name: webhook
  namespace: apps
  labels:
    app.kubernetes.io/name: webhook
spec:
  ports:
  - name: http
    port: 23411
  selector:
    app.kubernetes.io/name: webhook
And as a result:
$ k get ep webhook -n apps
NAME      ENDPOINTS              AGE
webhook   192.168.177.67:23411   4m15s
          |
          |___ Got populated!
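A generic way to double-check this kind of label/selector drift is to read the selector off the Service and query pods with exactly that selector; if nothing comes back, the Endpoints stay empty. A sketch using the names from this example:

kubectl -n apps get svc webhook -o jsonpath='{.spec.selector}'
kubectl -n apps get pods -l app.kubernetes.io/name=webhook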

Issue with Jenkins Deployment File: Unknown resource kind: Deployment

I'm struggling to figure out what the solution might be, so I thought I'd ask here. I'm trying to use the code below to deploy a Jenkins pod to Kubernetes, but it fails with an Unknown resource kind: Deployment error:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts-alpine
        ports:
        - name: http-port
          containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
The output of kubectl api-versions is:
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Does anyone know what the problem might be?
If this is an indentation issue, I'm failing to see it.
The apiVersion extensions/v1beta1 is deprecated for Deployment (and removed entirely in Kubernetes 1.16+). You can simply convert the manifest to the current apiVersion and apply that.
$ kubectl convert -f jenkins-dep.yml
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins-deployment
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: jenkins
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jenkins
    spec:
      containers:
      - image: jenkins/jenkins:lts-alpine
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: jenkins-home
status: {}
$ kubectl convert -f jenkins-dep.yml -oyaml > jenkins-dep-latest.yml
Change the apiVersion from extensions/v1beta1 to apps/v1, and use kubectl version to check that the kubectl client and the kube API server versions match and are not too old.
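If you want to confirm which API group/version actually serves Deployment on your cluster (and that apps/v1 is available), something like this should work:

kubectl api-resources --api-group=apps             # lists deployments.apps with its served version
kubectl explain deployment --api-version=apps/v1   # shows the schema the cluster serves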

k8s doesn't download docker container

When I run the command to apply modifications or just to create resources (pods, services, Deployments):
kubectl apply -f hello-kubernetes-oliver.yml
I don't get an error. But when I do docker ps to see if the container was downloaded from my private registry, there is nothing :(
If I pull docker-all.attanea.net/hello_world:latest directly with docker, it downloads the container.
I don't understand why the first command doesn't download my container. You will find my hello-kubernetes-oliver.yml below:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-oliver
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-oliver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
In order to pull images from a private registry, you need to create a Secret, which is then referenced in the Deployment manifest:
kubectl create secret docker-registry regcred --docker-server= --docker-username="your-name" --docker-password="your-pword" --docker-email="your-email"
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
Here regcred is the name of the Secret resource. Then you attach the regcred secret to your deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
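Also note that Kubernetes pulls the image on whichever node the pod is scheduled to, so docker ps only shows the container on that node. A quick way to verify the pull from the Kubernetes side (<pod-name> is a placeholder):

kubectl get pods -o wide          # NODE column shows where the pod landed
kubectl describe pod <pod-name>   # Events list the image pull attempts and any errors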
