I am creating a deployment in CircleCI that deploys my containerized application to a k3s server I have set up. I created a secret using the commands found here.
The secret is created using the command:
kubectl create secret docker-registry regkeyname --docker-server=https://index.docker.io/v1/ \
--docker-username=username \
--docker-password=password \
--docker-email=my@email.com \
--namespace=external
My secret is as follows when running kubectl get secret regkeyname --namespace=external --output=yaml:
apiVersion: v1
data:
  .dockerconfigjson: secretbase64stuff
kind: Secret
metadata:
  creationTimestamp: "2020-11-24T13:11:07Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:.dockerconfigjson: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-11-24T13:11:07Z"
  name: regkeyname
  namespace: external
  resourceVersion: "16929381"
  selfLink: /api/v1/namespaces/external/secrets/regkeyname
  uid: 51b87508-9cf2-490b-b871-0b5a342ab64c
type: kubernetes.io/dockerconfigjson
I'm using helm to deploy my application and the Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.labels.app }}
  labels:
    app: {{ .Values.labels.app }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.labels.app }}
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: {{ .Values.labels.app }}
        env: {{ .Values.labels.env }}
    spec:
      imagePullSecrets:
      - name: regkeyname
      containers:
      - name: my-service
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: {{ .Values.image.imagePullPolicy }}
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 5
          successThreshold: 1
After deploying, however, the images fail to pull, and it appears that my secret "regkeyname" is not used/mounted in the pods. The result is as follows:
Name: my-service-856454c6cd-qcp7w
Namespace: external
Priority: 0
Node: worker-2/192.168.1.13
Start Time: Tue, 24 Nov 2020 07:20:08 -0600
Labels: app=my-service
env=development
pod-template-hash=856454c6cd
Annotations: <none>
Status: Pending
IP: 10.42.2.196
Controlled By: ReplicaSet/my-service-856454c6cd
Containers:
auth-service:
Container ID:
Image: my-repo/my-service:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Readiness: http-get http://:8080/health delay=10s timeout=1s period=10s #success=1 #failure=5
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-l9b4k (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-l9b4k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-l9b4k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned external/auth-service-856454c6cd-qcp7w to worker-2
Normal Pulling 32m (x4 over 34m) kubelet, worker-2 Pulling image "my-repo/my-service:latest"
Warning Failed 32m (x4 over 34m) kubelet, worker-2 Failed to pull image "my-repo/my-service:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/my-repo/my-service:latest": failed to resolve reference "docker.io/my-repo/my-service:latest": failed to do request: Head https://registry-1.docker.io/v2/my-repo/my-service/manifests/latest: dial tcp: lookup registry-1.docker.io: Try again
Warning Failed 32m (x4 over 34m) kubelet, worker-2 Error: ErrImagePull
Warning Failed 31m (x6 over 34m) kubelet, worker-2 Error: ImagePullBackOff
Normal BackOff 3m54s (x127 over 34m) kubelet, worker-2 Back-off pulling image "my-repo/my-service:latest"
I had this working when running Kubernetes locally, so I assume the issue has something to do either with k3s or with the fact that the server is now remote rather than local. Any insight would be greatly appreciated. Thanks in advance!
The node is trying to pull the image from the official Docker registry:
failed to resolve reference "docker.io/my-repo/my-service:latest"
When creating the imagePullSecret, make sure you put the correct URL (i.e. the URL of your private registry) for authentication and image pulls.
$ cat ~/.docker/config.json
{
    "auths": {
        "https://index.docker.io/v1/": {   # <------ change here
            "auth": "..........="
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/19.03.5 (linux)"
    }
}
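To check which registry URL an existing secret actually contains, you can decode it and inspect the auths entry (using the secret name and namespace from the question above):
kubectl get secret regkeyname --namespace=external \
  --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode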
Related
I will preface this by saying that I know very little about how to manage Kubernetes clusters, so I am probably doing something dumb. Here goes:
I have created a Kubernetes cluster using GCP's Autopilot mode, and I (think I) registered the cluster to my Gitlab repository using the "Infrastructure->Kubernetes Clusters" menu (It shows as online).
Using Gitlab's CI/CD, I have a build stage which pushes an image to the repo's container registry (I can see the image is indeed there).
I then have a deploy stage with the following script (some data censored for privacy):
kubectl config use-context <my_username>/<my_reponame>:<my_reponame>
kubectl delete secret regcred --ignore-not-found
kubectl create secret docker-registry regcred --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD"
kubectl delete pod <my_reponame> --ignore-not-found
kubectl apply -f kube.yaml
My kube.yaml contains the following:
apiVersion: v1
kind: Pod
metadata:
  name: <my_reponame>
spec:
  containers:
  - name: <my_reponame>
    image: registry.gitlab.com/<my_username>/<my_reponame>:latest
  imagePullSecrets:
  - name: regcred
I also have a .gitlab/agents/<my_reponame>/config.yaml file with the following:
ci_access:
  projects:
  - id: <my_username>/<my_reponame>
This is the output of the deploy stage:
Running with gitlab-runner 14.8.0~beta.44.g57df0d52 (57df0d52)
on blue-2.shared.runners-manager.gitlab.com/default XxUrkriX
Preparing the "docker+machine" executor
00:12
Using Docker executor with image bitnami/kubectl:latest ...
Pulling docker image bitnami/kubectl:latest ...
Using docker image sha256:208d070e071a0165e48ad7bf20b30c054328bcaaad76b0c53a9270a5e8627480 for bitnami/kubectl:latest with digest bitnami/kubectl@sha256:51eb9cb7d811e74bba30f97700cb433424d4025aabe70e8ff80e1289a964ab9c ...
Preparing environment
00:01
Running on runner-xxurkrix-project-32365648-concurrent-0 via runner-xxurkrix-shared-1646113353-c926e727...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/<my_username>/<my_reponame>/.git/
Created fresh repository.
Checking out 23743f89 as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:13
Using docker image sha256:208d070e071a0165e48ad7bf20b30c054328bcaaad76b0c53a9270a5e8627480 for bitnami/kubectl:latest with digest bitnami/kubectl@sha256:51eb9cb7d811e74bba30f97700cb433424d4025aabe70e8ff80e1289a964ab9c ...
$ kubectl config use-context <my_username>/<my_reponame>:<my_reponame>
Switched to context "<my_username>/<my_reponame>:<my_reponame>".
$ kubectl delete secret regcred --ignore-not-found
secret "regcred" deleted
$ kubectl create secret docker-registry regcred --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD"
secret/regcred created
$ kubectl delete pod <my_reponame> --ignore-not-found
$ kubectl apply -f kube.yaml
Warning: Autopilot set default resource requests for Pod default/<my_reponame>, as resource requests were not specified. See http://g.co/gke/autopilot-defaults.
pod/<my_reponame> created
Cleaning up project directory and file based variables
00:00
Job succeeded
However, the deploy to the cluster did not actually work.
Here is some data from running in the gcloud shell:
<my_username>@cloudshell:~ (<my_reponame>-342620)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
<my_reponame> 0/1 ImagePullBackOff 0 13h
<my_reponame>-69775f4b-cft8w 0/1 ImagePullBackOff 0 13h
kubectl describe pod <my_reponame>
Name: <my_reponame>
Namespace: default
Priority: 0
Node: gk3-<my_reponame>-nap-73l1ao51-a8e3ee2d-wjd8/10.132.0.5
Start Time: Tue, 01 Mar 2022 19:11:18 +0000
Labels: <none>
Annotations: autopilot.gke.io/resource-adjustment:
{"input":{"containers":[{"name":"<my_reponame>"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requ...
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Pending
IP: 10.34.0.77
IPs:
IP: 10.34.0.77
Containers:
<my_reponame>:
Container ID:
Image: registry.gitlab.com/<my_username>/<my_reponame>:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wsd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-96wsd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s gke.io/optimize-utilization-scheduler Successfully assigned default/<my_reponame> to gk3-<my_reponame>-nap-73l1ao51-a8e3ee2d-wjd8
Warning Failed 14s kubelet Failed to pull image "registry.gitlab.com/<my_username>/<my_reponame>:latest": rpc error: code = Unknown desc = failed to pull and unpack image "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to copy: httpReaderSeeker: failed open: failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
Normal BackOff 14s kubelet Back-off pulling image "registry.gitlab.com/<my_username>/<my_reponame>:latest"
Warning Failed 14s kubelet Error: ImagePullBackOff
Normal Pulling 2s (x2 over 16s) kubelet Pulling image "registry.gitlab.com/<my_username>/<my_reponame>:latest"
Warning Failed 1s (x2 over 14s) kubelet Error: ErrImagePull
Warning Failed 1s kubelet Failed to pull image "registry.gitlab.com/<my_username>/<my_reponame>:latest": rpc error: code = Unknown desc = failed to pull and unpack image "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to resolve reference "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
kubectl get pod <my_reponame> -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    autopilot.gke.io/resource-adjustment: '{"input":{"containers":[{"name":"<my_reponame>"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"<my_reponame>"}]},"modified":true}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"<my_reponame>","namespace":"default"},"spec":{"containers":[{"image":"registry.gitlab.com/<my_username>/<my_reponame>:latest","name":"<my_reponame>"}],"imagePullSecrets":[{"name":"regcred"}]}}
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2022-03-01T19:11:18Z"
  name: <my_reponame>
  namespace: default
  resourceVersion: "1299911"
  uid: d8a636ee-1dfc-40e9-9ba4-26b5674d3259
spec:
  containers:
  - image: registry.gitlab.com/<my_username>/<my_reponame>:latest
    imagePullPolicy: Always
    name: <my_reponame>
    resources:
      limits:
        cpu: 500m
        ephemeral-storage: 1Gi
        memory: 2Gi
      requests:
        cpu: 500m
        ephemeral-storage: 1Gi
        memory: 2Gi
    securityContext:
      capabilities:
        drop:
        - NET_RAW
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-96wsd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: regcred
  nodeName: gk3-<my_reponame>-nap-73l1ao51-a8e3ee2d-wjd8
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: gke.io/optimize-utilization-scheduler
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-96wsd
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    message: 'containers with unready status: [<my_reponame>]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    message: 'containers with unready status: [<my_reponame>]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: registry.gitlab.com/<my_username>/<my_reponame>:latest
    imageID: ""
    lastState: {}
    name: <my_reponame>
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: 'rpc error: code = Unknown desc = failed to pull and unpack image
          "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to resolve reference
          "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to authorize: failed
          to fetch oauth token: unexpected status: 401 Unauthorized'
        reason: ErrImagePull
  hostIP: 10.132.0.5
  phase: Pending
  podIP: 10.34.0.77
  podIPs:
  - ip: 10.34.0.77
  qosClass: Guaranteed
  startTime: "2022-03-01T19:11:18Z"
From the last command's log, it looks like the node was not able to pull from the container registry. I assume I passed the credentials incorrectly, but I could not find any example or documentation on how to do this.
I'm of course willing to give any info if necessary.
Thanks in advance :)
Try https://chris-vermeulen.com/using-gitlab-registry-with-kubernetes/. Your cluster cannot pull from the registry because it has no access to it.
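For reference, a common pitfall with this setup is that $CI_REGISTRY_PASSWORD is a short-lived CI job token, so a pull secret built from it can stop working once the job finishes. A sketch of creating the secret from a longer-lived GitLab deploy token instead (the token placeholders are assumptions, not values from the question):
# Create a deploy token with the read_registry scope in GitLab
# (Settings > Repository > Deploy tokens), then:
kubectl create secret docker-registry regcred \
  --docker-server=registry.gitlab.com \
  --docker-username=<deploy_token_username> \
  --docker-password=<deploy_token_password>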
I recently evaluated Kubernetes with a simple test project, and I was able to update the image of a StatefulSet with a command like this:
kubectl set image statefulset/cloud-stateful-set cloud-stateful-container=ncccloud:v716
I'm now trying to get our real system to work in Kubernetes, and the pods don't do anything when I try to update the image, even though I'm using basically the same command.
It says:
statefulset.apps "cloud-stateful-set" image updated
And kubectl describe statefulset.apps/cloud-stateful-set says:
Image: ncccloud:v716
But kubectl describe pod cloud-stateful-set-0 and kubectl describe pod cloud-stateful-set-1 say:
"Image: ncccloud:latest"
ncccloud:latest is an image that doesn't work:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cloud-stateful-set-0 0/1 CrashLoopBackOff 7 13m
cloud-stateful-set-1 0/1 CrashLoopBackOff 7 13m
mssql-deployment-6cd4ff766-pzz99 1/1 Running 1 55m
Another strange thing is that every time I try to apply the StatefulSet it says configured instead of unchanged.
$ kubectl apply -f k8s/cloud-stateful-set.yaml
statefulset.apps "cloud-stateful-set" configured
Here is my cloud-stateful-set.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloud-stateful-set
  labels:
    app: cloud
    group: service
spec:
  replicas: 2
  # podManagementPolicy: Parallel
  serviceName: cloud-stateful-set
  selector:
    matchLabels:
      app: cloud
  template:
    metadata:
      labels:
        app: cloud
        group: service
    spec:
      containers:
      - name: cloud-stateful-container
        image: ncccloud:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        volumeMounts:
        - name: cloud-stateful-storage
          mountPath: /cloud-stateful-data
  volumeClaimTemplates:
  - metadata:
      name: cloud-stateful-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Mi
Here is full output of kubectl describe pod/cloud-stateful-set-1:
Name: cloud-stateful-set-1
Namespace: default
Node: docker-for-desktop/192.168.65.3
Start Time: Tue, 02 Jul 2019 11:03:01 +0300
Labels: app=cloud
controller-revision-hash=cloud-stateful-set-5c9964c897
group=service
statefulset.kubernetes.io/pod-name=cloud-stateful-set-1
Annotations: <none>
Status: Running
IP: 10.1.0.20
Controlled By: StatefulSet/cloud-stateful-set
Containers:
cloud-stateful-container:
Container ID: docker://3ec26930c1a81caa39d5c5a16c4e25adf7584f90a71e0110c0b03ecb60dd9592
Image: ncccloud:latest
Image ID: docker://sha256:394427c40e964e34ca6c9db3ce1df1f8f6ce34c4ba8f3ab10e25da6e89678830
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Tue, 02 Jul 2019 11:19:03 +0300
Finished: Tue, 02 Jul 2019 11:19:03 +0300
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gzxpx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloud-stateful-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cloud-stateful-storage-cloud-stateful-set-1
ReadOnly: false
default-token-gzxpx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gzxpx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned cloud-stateful-set-1 to docker-for-desktop
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "pvc-4c9e1796-9c9a-11e9-998f-00155d64fa03"
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-gzxpx"
Normal Pulled 17m (x5 over 19m) kubelet, docker-for-desktop Container image "ncccloud:latest" already present on machine
Normal Created 17m (x5 over 19m) kubelet, docker-for-desktop Created container
Normal Started 17m (x5 over 19m) kubelet, docker-for-desktop Started container
Warning BackOff 4m (x70 over 19m) kubelet, docker-for-desktop Back-off restarting failed container
Here is full output of kubectl describe statefulset.apps/cloud-stateful-set:
Name: cloud-stateful-set
Namespace: default
CreationTimestamp: Tue, 02 Jul 2019 11:02:59 +0300
Selector: app=cloud
Labels: app=cloud
group=service
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app":"cloud","group":"service"},"name":"cloud-stateful-set","names...
Replicas: 2 desired | 2 total
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cloud
group=service
Containers:
cloud-stateful-container:
Image: ncccloud:v716
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
Volumes: <none>
Volume Claims:
Name: cloud-stateful-storage
StorageClass:
Labels: <none>
Annotations: <none>
Capacity: 10Mi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-0 in StatefulSet cloud-stateful-set successful
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-1 in StatefulSet cloud-stateful-set successful
I'm using Docker Desktop on Windows, if it matters.
In my case imagePullPolicy was already set to Always. This command helped:
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
See the k8s docs: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
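Adapted to the names used in the question (a sketch, untested against your cluster):
kubectl patch statefulset cloud-stateful-set --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"ncccloud:v716"}]'
kubectl rollout status statefulset/cloud-stateful-set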
In the StatefulSet YAML, change
imagePullPolicy: Never
to
imagePullPolicy: Always
I'm having a few issues getting Ambassador to work correctly. I'm new to Kubernetes and just teaching myself.
I have successfully worked through the demo material Ambassador provides (e.g. the /httpbin/ endpoint works correctly), but when I try to deploy a Go service it falls over.
When hitting the 'qotm' endpoint, this is the response:
upstream request timeout
Pod status:
CrashLoopBackOff
From my research, it seems to be related to the YAML file not being configured correctly, but I'm struggling to find any documentation relating to this use case.
My cluster is running on AWS EKS and the images are being pushed to AWS ECR.
main.go:
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	var PORT string
	if PORT = os.Getenv("PORT"); PORT == "" {
		PORT = "3001"
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello World from path: %s\n", r.URL.Path)
	})
	http.ListenAndServe(":" + PORT, nil)
}
Dockerfile:
FROM golang:alpine
ADD ./src /go/src/app
WORKDIR /go/src/app
EXPOSE 3001
ENV PORT=3001
CMD ["go", "run", "main.go"]
test.yaml:
apiVersion: v1
kind: Service
metadata:
  name: qotm
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: qotm_mapping
      prefix: /qotm/
      service: qotm
spec:
  selector:
    app: qotm
  ports:
  - port: 80
    name: http-qotm
    targetPort: http-api
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: qotm
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: qotm
    spec:
      containers:
      - name: qotm
        image: ||REMOVED||
        ports:
        - name: http-api
          containerPort: 3001
        readinessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 30
          periodSeconds: 3
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
Pod description:
Name: qotm-7b9bf4d499-v9nxq
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: ip-192-168-89-69.eu-west-1.compute.internal/192.168.89.69
Start Time: Sun, 17 Mar 2019 17:19:50 +0000
Labels: app=qotm
pod-template-hash=3656908055
Annotations: <none>
Status: Running
IP: 192.168.113.23
Controlled By: ReplicaSet/qotm-7b9bf4d499
Containers:
qotm:
Container ID: docker://5839996e48b252ac61f604d348a98c47c53225712efd503b7c3d7e4c736920c4
Image: IMGURL
Image ID: docker-pullable://IMGURL
Port: 3001/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 17 Mar 2019 17:30:49 +0000
Finished: Sun, 17 Mar 2019 17:30:49 +0000
Ready: False
Restart Count: 7
Limits:
cpu: 100m
memory: 200Mi
Requests:
cpu: 100m
memory: 200Mi
Readiness: http-get http://:3001/health delay=30s timeout=1s period=3s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5bbxw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-5bbxw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5bbxw
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned default/qotm-7b9bf4d499-v9nxq to ip-192-168-89-69.eu-west-1.compute.internal
Normal Pulled 10m (x5 over 12m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Container image "IMGURL" already present on machine
Normal Created 10m (x5 over 12m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Created container
Normal Started 10m (x5 over 11m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Started container
Warning BackOff 115s (x47 over 11m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Back-off restarting failed container
In your Kubernetes deployment file you have exposed a readiness probe on port 5000, while your application listens on port 3001. Also, while running the container a few times I got OOMKilled, so I increased the memory limit. The deployment file below should work fine.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: qotm
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: qotm
    spec:
      containers:
      - name: qotm
        image: <YOUR_IMAGE>
        imagePullPolicy: Always
        ports:
        - name: http-api
          containerPort: 3001
        readinessProbe:
          httpGet:
            path: /health
            port: 3001
          initialDelaySeconds: 30
          periodSeconds: 3
        resources:
          limits:
            cpu: "0.1"
            memory: 200Mi
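If the pod keeps crash-looping after the probe port is fixed, the logs of the previous container attempt usually show why it exited with code 1 (pod name taken from the description above):
kubectl logs qotm-7b9bf4d499-v9nxq --previous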
I am trying to start RStudio in a Docker container via Kubernetes. All objects are created, but when I try to open RStudio using the following commands on Ubuntu 18:
kubectl create -f rstudio-ing.yml
IP=$(minikube ip)
xdg-open http://$IP/rstudio/
there is an error: "RStudio initialization error: unable to connect to service".
The usual Docker command works fine:
docker run -d -p 8787:8787 -e PASSWORD=123 -v /home/aabor/r-projects:/home/rstudio aabor/rstudio
The same operation in Kubernetes fails.
The rstudio-ing.yml file creates all objects fine, and RStudio is accessible if I do not mount any folder, but if I add folder mounts it produces the error. Any suggestions?
The content of the rstudio-ing.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: r-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /rstudio/
        backend:
          serviceName: rstudio
          servicePort: 8787
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rstudio
spec:
  replicas: 1
  selector:
    matchLabels:
      service: rstudio
  template:
    metadata:
      labels:
        service: rstudio
        language: R
    spec:
      containers:
      - name: rstudio
        image: aabor/rstudio
        env:
        - name: PASSWORD
          value: "123"
        volumeMounts:
        - name: home-dir
          mountPath: /home/rstudio/
      volumes:
      - name: home-dir
        hostPath:
          #RStudio initialization error: unable connect to service
          path: /home/aabor/r-projects
---
apiVersion: v1
kind: Service
metadata:
  name: rstudio
spec:
  ports:
  - port: 8787
  selector:
    service: rstudio
This is pod description:
Name: rstudio-689c4fd6c8-fgt7w
Namespace: default
Node: minikube/10.0.2.15
Start Time: Fri, 23 Nov 2018 21:42:35 +0300
Labels: language=R
pod-template-hash=2457098274
service=rstudio
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controlled By: ReplicaSet/rstudio-689c4fd6c8
Containers:
rstudio:
Container ID: docker://a6bdcbfdf8dc5489a4c1fa6f23fb782bc3d58dd75d50823cd370c43bd3bffa3c
Image: aabor/rstudio
Image ID: docker-pullable://aabor/rstudio@sha256:2326e5daa3c4293da2909f7e8fd15fdcab88b4eb54f891b4a3cb536395e5572f
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 23 Nov 2018 21:42:39 +0300
Ready: True
Restart Count: 0
Environment:
PASSWORD: 123
Mounts:
/home/rstudio/ from home-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mrkd8 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
home-dir:
Type: HostPath (bare host directory volume)
Path: /home/aabor/r-projects
HostPathType:
default-token-mrkd8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mrkd8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned rstudio-689c4fd6c8-fgt7w to minikube
Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "home-dir"
Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-mrkd8"
Normal Pulling 9s kubelet, minikube pulling image "aabor/rstudio"
Normal Pulled 7s kubelet, minikube Successfully pulled image "aabor/rstudio"
Normal Created 7s kubelet, minikube Created container
Normal Started 6s kubelet, minikube Started container
You have created a Service of type ClusterIP, which is only reachable from inside the cluster, not from outside. To make it available outside the cluster, change the Service type to LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: rstudio
spec:
  ports:
  - port: 8787
  selector:
    service: rstudio
  type: LoadBalancer
In that case, the LoadBalancer-type service doesn't need the Ingress, and you can get the URL with:
$ minikube service rstudio --url
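Note that on minikube a LoadBalancer service usually keeps its EXTERNAL-IP as <pending>; the minikube service command above works regardless, and minikube tunnel can assign an address if you want one (this describes general minikube behaviour, not something from the question):
kubectl get svc rstudio    # EXTERNAL-IP will likely show <pending> on minikube
minikube tunnel            # optional: run in a separate shell to expose LoadBalancer services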
Data shown as "xxx" has been masked.
Problem description:
Success scenario: when I make my image public in the Docker registry, my pod is created successfully.
Failure scenario: when I make my image private in the Docker registry, the image pull fails on the Kubernetes cluster.
Please see the details below and help.
I have my image published to the Docker registry.
The following is my Kubernetes secret:
c:\xxxxxxx\temp>kubectl get secret regcredx -o yaml
apiVersion: v1
data:
  .dockerconfigjson: xxxxxx
kind: Secret
metadata:
  creationTimestamp: 2018-10-25T21:38:18Z
  name: regcredx
  namespace: default
  resourceVersion: "1174545"
  selfLink: /api/v1/namespaces/default/secrets/regcredx
  uid: 49a71ba5-d89e-11e8-8bd2-005056b7126c
type: kubernetes.io/dockerconfigjson
Here is my pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: whatever
spec:
  containers:
  - name: whatever
    image: xxxxxxxxx/xxxxxx:123
    imagePullPolicy: Always
    command: [ "sh", "-c", "tail -f /dev/null" ]
  imagePullSecrets:
  - name: regcredx
Here is my pod config in cluster:
c:\Sharief\temp>kubectl get pod whatever -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 100.96.1.81/32
  creationTimestamp: 2018-10-26T20:49:11Z
  name: whatever
  namespace: default
  resourceVersion: "1302024"
  selfLink: /api/v1/namespaces/default/pods/whatever
  uid: 9783b81f-d960-11e8-94ca-005056b7126c
spec:
  containers:
  - command:
    - sh
    - -c
    - tail -f /dev/null
    image: xxxxxxxxx/xxxxxxx:123
    imagePullPolicy: Always
    name: whatever
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4db4c
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: regcredx
  nodeName: xxxx-pvt
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-4db4c
    secret:
      defaultMode: 420
      secretName: default-token-4db4c
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-10-26T20:49:33Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-10-26T20:49:33Z
    message: 'containers with unready status: [whatever]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-10-26T20:49:11Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: xxxxxxxxx/xxxxxxx:123
    imageID: ""
    lastState: {}
    name: whatever
    ready: false
    restartCount: 0
    state:
      waiting:
        message: Back-off pulling image "xxxxxxxxx/xxxxxxx:123"
        reason: ImagePullBackOff
  hostIP: xx.xxx.xx.xx
  phase: Pending
  podIP: xx.xx.xx.xx
  qosClass: BestEffort
  startTime: 2018-10-26T20:49:33Z
Here is my pod description:
c:\xxxxxxx\temp>kubectl describe pod whatever
Name: whatever
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: co2-vmkubwrk01company-pvt/xx.xx.xx.xx
Start Time: Fri, 26 Oct 2018 15:49:33 -0500
Labels: <none>
Annotations: cni.projectcalico.org/podIP=xxx.xx.xx.xx/xx
Status: Pending
IP: xxx.xx.x.xx
Containers:
whatever:
Container ID:
Image: xxxxxxxxx/xxxxxxx:123
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
tail -f /dev/null
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4db4c (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-4db4c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4db4c
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned whatever to xxx
Normal SuccessfulMountVolume 26m kubelet, co2-vmkubwrk01company-pvt MountVolume.SetUp succeeded for volume "default-token-4db4c"
Normal Pulling 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt pulling image "xxxxxxxxx/xxxxxxx:123"
Warning Failed 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt Failed to pull image "xxxxxxxxx/xxxxxxx:123": rpc error: code = Unknown desc = repository docker.io/xxxxxxxxx/xxxxxxx not found: does not exist or no pull access
Warning Failed 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt Error: ErrImagePull
Normal BackOff 16m (x41 over 26m) kubelet, co2-vmkubwrk01company-pvt Back-off pulling image "xxxxxxxxx/xxxxxxx:123"
Warning Failed 1m (x106 over 26m) kubelet, co2-vmkubwrk01company-pvt Error: ImagePullBackOff
Kubernetes could not find your repository; the image path is wrong, and you need to fix it:
image: xxxxxxxxx/xxxxxx:123
One thing you can try to test this assumption is to pre-pull the image on the node where the deployment is going to happen. Run docker images, note the correct uri/repo:tag, and update it in your deployment.
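A quick way to check this from any machine that is logged in to the registry (the image reference below is the masked one from the pod spec, so substitute your real repo and tag):
docker login
docker pull xxxxxxxxx/xxxxxxx:123   # the exact reference from the pod spec
docker images                       # note the correct repo:tag and update the deployment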