I have a simple .env file with content like this
APP_PORT=5000
I add the values from that file via Kustomize. When I deploy my Rails app, it crashes because it cannot find the environment vars:
I also tried placing a puts ENV['APP_PORT'] in application.rb, but that prints nil.
Rails Version & environment: 6.1.4.1 - development
! Unable to load application: KeyError: key not found: "APP_PORT"
Did you mean? "APP_HOME"
bundler: failed to load command: puma (/app/vendor/bundle/ruby/2.7.0/bin/puma)
KeyError: key not found: "APP_PORT"
Did you mean? "APP_HOME"
/app/config/environments/development.rb:2:in `fetch'
/app/config/environments/dev
When I change my image to image: nginx, the env vars are not there either:
env
KUBERNETES_SERVICE_PORT_HTTPS=443
ELASTICSEARCH_PORT_9200_TCP_PORT=9200
KUBERNETES_SERVICE_PORT=443
ELASTICSEARCH_PORT_9200_TCP_ADDR=10.103.1.6
ELASTICSEARCH_SERVICE_HOST=10.103.1.6
HOSTNAME=myapp-backend-api-56b44c7445-h9g5m
ELASTICSEARCH_PORT=tcp://10.103.1.6:9200
PWD=/
ELASTICSEARCH_PORT_9200_TCP=tcp://10.103.1.6:9200
PKG_RELEASE=1~buster
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
ELASTICSEARCH_SERVICE_PORT_9200=9200
NJS_VERSION=0.6.2
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
ELASTICSEARCH_SERVICE_PORT=9200
ELASTICSEARCH_PORT_9200_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.21.3
_=/usr/bin/env
This is my current state:
kustomization.yml
kind: Kustomization
configMapGenerator:
- name: backend-api-configmap
files:
- .env
bases:
- ../../base
patchesStrategicMerge:
- api-deployment.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- image: nginx
imagePullPolicy: Never # Never: the image is assumed to exist locally; no attempt is made to pull it.
envFrom:
- configMapRef:
name: backend-api-configmap
This is the output of kubectl describe pod:
❯ k describe pod xxx-backend-api-56774c796d-s2zkd
Name: xxx-backend-api-56774c796d-s2zkd
Namespace: default
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/xxx-backend-api-56774c796d
Containers:
xxx-backend-api:
Container ID: docker://5ee3112b0805271ebe4b32d7d8e5d1b267d8bf4e220f990c085638f7b975c41f
Image: xxx-backend-api:latest
Image ID: docker://sha256:55d96a68267d80f19e91aa0b4d1ffb11525e9ede054fcbb6e6ec74356c6a3c7d
Port: 5000/TCP
Ready: False
Restart Count: 3
Environment Variables from:
backend-api-configmap-99fbkbc4c9 ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w5czc (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-w5czc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 55s default-scheduler Successfully assigned default/xxx-backend-api-56774c796d-s2zkd to minikube
Normal Pulled 8s (x4 over 54s) kubelet Container image "xxx-backend-api:latest" already present on machine
Normal Created 8s (x4 over 54s) kubelet Created container xxx-backend-api
Normal Started 8s (x4 over 54s) kubelet Started container xxx-backend-api
Warning BackOff 4s (x4 over 48s) kubelet Back-off restarting failed container
And this is the output of kubectl describe configmap:
❯ k describe configmaps backend-api-configmap-99fbkbc4c9
Name: backend-api-configmap-99fbkbc4c9
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
.env:
----
RAILS_MAX_THREADS=5
APPLICATION_URL=localhost:8000/backend
FRONTEND_URL=localhost:8000
APP_PORT=3000
BinaryData
====
Events: <none>
I got it:
Instead of using files: in my configMapGenerator I had to use envs: like so:
configMapGenerator:
- name: backend-api-configmap
envs:
- .env
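For reference, files: stores the whole file under a single .env key (which is exactly what the describe configmaps output above shows), while envs: turns each KEY=VALUE line into its own ConfigMap key, which is what envFrom expects. A quick way to check the generated ConfigMap before applying, assuming the kustomization is in the current directory:
# Render the kustomization locally and inspect the generated ConfigMap
kubectl kustomize .
# With envs: the ConfigMap's data section should list APP_PORT, RAILS_MAX_THREADS,
# APPLICATION_URL and FRONTEND_URL as individual keys instead of a single ".env" key.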
Related
I'm using minikube on a Fedora-based machine to run a simple MongoDB deployment locally, but I'm constantly getting an ImagePullBackOff error. Here is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
I tried to pull the image locally using docker pull mongo, minikube image pull mongo, and minikube image pull mongo-express several times, restarting Docker and minikube several times as well.
Logging into Docker Hub (both in the browser and through the terminal) didn't work.
I also tried logging into Docker with the docker login command, then modified my /etc/resolv.conf to add nameserver 8.8.8.8 and restarted Docker using sudo systemctl restart docker, but even that failed to work.
On running the kubectl describe pod command, I get this output:
Name: mongodb-deployment-6bf8f4c466-85b2h
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 29 Aug 2022 23:04:12 +0530
Labels: app=mongodb
pod-template-hash=6bf8f4c466
Annotations: <none>
Status: Pending
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/mongodb-deployment-6bf8f4c466
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'>
Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'>
Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vlcxl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-vlcxl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 22m default-scheduler Successfully assigned default/mongodb-deployment-6bf8f4c466-85b2h to minikube
Warning Failed 18m (x2 over 20m) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 18m (x2 over 20m) kubelet Error: ErrImagePull
Normal BackOff 17m (x2 over 20m) kubelet Back-off pulling image "mongo:latest"
Warning Failed 17m (x2 over 20m) kubelet Error: ImagePullBackOff
Normal Pulling 17m (x3 over 22m) kubelet Pulling image "mongo:latest"
Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 3m59s (x4 over 11m) kubelet Pulling image "mongo:latest"
Warning Failed 2m (x4 over 9m16s) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 2m (x4 over 9m16s) kubelet Error: ErrImagePull
Normal BackOff 83s (x7 over 9m15s) kubelet Back-off pulling image "mongo:latest"
Warning Failed 83s (x7 over 9m15s) kubelet Error: ImagePullBackOff
PS: Ignore any spacing errors.
I think your internet connection is slow. The timeout to pull an image is 120 seconds, and the kubelet could not pull the image within that time.
First, pull the image via Docker
docker image pull mongo
Then load the downloaded image to minikube
minikube image load mongo
And then everything will work, because the kubelet will now use the image that is already stored locally.
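To double-check, you can verify the image is visible inside minikube and optionally keep the kubelet from trying to pull it again; the deployment name is taken from the manifest above, and the patch is just a sketch:
# Confirm the image is now present in minikube's image store
minikube image ls | grep mongo
# Optionally make sure the locally loaded image is used instead of being re-pulled
kubectl patch deployment mongodb-deployment --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'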
I got an error when creating a deployment.
This is my Dockerfile. I have run and tested it locally, and I also pushed it to Docker Hub:
FROM node:14.15.4
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install pm2 -g
COPY . .
EXPOSE 3001
CMD [ "pm2-runtime", "server.js" ]
On my Raspberry Pi 3 Model B, I have installed k3s using curl -sfL https://get.k3s.io | sh -
Here is my controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-deployment
labels:
app: controller
spec:
replicas: 1
selector:
matchLabels:
app: controller
template:
metadata:
labels:
app: controller
spec:
containers:
- name: controller
image: akirayorunoe/node-controller-server
ports:
- containerPort: 3001
After applying this file, the pod errors out.
When I check the pod logs, it says:
standard_init_linux.go:219: exec user process caused: exec format error
Here is the output from describe pod:
Name: controller-deployment-8669c9c864-sw8kh
Namespace: default
Priority: 0
Node: raspberrypi/192.168.0.30
Start Time: Fri, 21 May 2021 11:21:05 +0700
Labels: app=controller
pod-template-hash=8669c9c864
Annotations: <none>
Status: Running
IP: 10.42.0.43
IPs:
IP: 10.42.0.43
Controlled By: ReplicaSet/controller-deployment-8669c9c864
Containers:
controller:
Container ID: containerd://439edcfdbf49df998e3cabe2c82206b24819a9ae13500b013b9bac1df6e56231
Image: akirayorunoe/node-controller-server
Image ID: docker.io/akirayorunoe/node-controller-server@sha256:e1c51152f9d596856952d590b1ef9a486e136661076a9d259a9259d4df314226
Port: 3001/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 21 May 2021 11:24:29 +0700
Finished: Fri, 21 May 2021 11:24:29 +0700
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-txm85 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-txm85:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-txm85
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m33s default-scheduler Successfully assigned default/controller-deployment-8669c9c864-sw8kh to raspberrypi
Normal Pulled 5m29s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.072053213s
Normal Pulled 5m24s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.018192177s
Normal Pulled 5m6s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.015959209s
Normal Pulled 4m34s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 2.921116157s
Normal Created 4m34s (x4 over 5m29s) kubelet Created container controller
Normal Started 4m33s (x4 over 5m28s) kubelet Started container controller
Normal Pulling 3m40s (x5 over 5m32s) kubelet Pulling image "akirayorunoe/node-controller-server"
Warning BackOff 30s (x23 over 5m22s) kubelet Back-off restarting failed container
You are trying to launch a container built for x86 (or x86_64, same difference) on an ARM machine. This does not work. Containers for ARM must be built specifically for ARM and contain ARM executables. While major projects are slowly adding ARM support to their builds, most random images you find on Docker Hub or whatever will not work on ARM.
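If you control the image, one option is to publish an ARM build with Docker Buildx and point the Deployment at that tag. A rough sketch, assuming a 32-bit OS on the Pi 3 (use linux/arm64 for a 64-bit OS) and your own image name and tag:
# Cross-build the image for ARM and push it to Docker Hub
# (cross-building on an x86 machine may additionally need QEMU/binfmt support)
docker buildx create --use
docker buildx build --platform linux/arm/v7 -t akirayorunoe/node-controller-server:armv7 --push .
Then reference that tag in controller-deployment.yaml instead of the untagged image.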
I am creating a deployment in CircleCI that deploys my containerized application to a k3s server I have set up. I created a secret using the commands found here.
The secret is created using the command:
kubectl create secret docker-registry regkeyname --docker-server=https://index.docker.io/v1/ \
--docker-username=username \
--docker-password=password \
--docker-email=my@email.com \
--namespace=external
My secret is as follows when running kubectl get secret regkeyname --namespace=external --output=yaml:
apiVersion: v1
data:
.dockerconfigjson: secretbase64stuff
kind: Secret
metadata:
creationTimestamp: "2020-11-24T13:11:07Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:.dockerconfigjson: {}
f:type: {}
manager: kubectl
operation: Update
time: "2020-11-24T13:11:07Z"
name: regkeyname
namespace: external
resourceVersion: "16929381"
selfLink: /api/v1/namespaces/external/secrets/regkeyname
uid: 51b87508-9cf2-490b-b871-0b5a342ab64c
type: kubernetes.io/dockerconfigjson
I'm using helm to deploy my application and the Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.labels.app }}
labels:
app: {{ .Values.labels.app }}
spec:
selector:
matchLabels:
app: {{ .Values.labels.app }}
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: {{ .Values.labels.app }}
env: {{ .Values.labels.env }}
spec:
imagePullSecrets:
- name: regkeyname
containers:
- name: my-service
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.imagePullPolicy }}
readinessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 5
successThreshold: 1
After deploying, however, the images fail to pull, and it appears that my secret "regkeyname" is not used/mounted in the pods. The result is as follows:
Name: my-service-856454c6cd-qcp7w
Namespace: external
Priority: 0
Node: worker-2/192.168.1.13
Start Time: Tue, 24 Nov 2020 07:20:08 -0600
Labels: app=my-service
env=development
pod-template-hash=856454c6cd
Annotations: <none>
Status: Pending
IP: 10.42.2.196
Controlled By: ReplicaSet/my-service-856454c6cd
Containers:
auth-service:
Container ID:
Image: my-repo/my-service:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Readiness: http-get http://:8080/health delay=10s timeout=1s period=10s #success=1 #failure=5
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-l9b4k (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-l9b4k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-l9b4k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned external/auth-service-856454c6cd-qcp7w to worker-2
Normal Pulling 32m (x4 over 34m) kubelet, worker-2 Pulling image "my-repo/my-service:latest"
Warning Failed 32m (x4 over 34m) kubelet, worker-2 Failed to pull image "my-repo/my-service:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/my-repo/my-service:latest": failed to resolve reference "docker.io/my-repo/my-service:latest": failed to do request: Head https://registry-1.docker.io/v2/my-repo/my-service/manifests/latest: dial tcp: lookup registry-1.docker.io: Try again
Warning Failed 32m (x4 over 34m) kubelet, worker-2 Error: ErrImagePull
Warning Failed 31m (x6 over 34m) kubelet, worker-2 Error: ImagePullBackOff
Normal BackOff 3m54s (x127 over 34m) kubelet, worker-2 Back-off pulling image "my-repo/my-service:latest"
I had this working when running locally with Kubernetes, so I am assuming the issue has something to do either with k3s or with the fact that the server is now remote rather than local. Any insight would be greatly appreciated. Thanks in advance!
The controller is trying to pull the image from the official Docker registry:
failed to resolve reference "docker.io/my-repo/my-service:latest"
When creating the imagePullSecret, make sure that you use the correct URL (i.e. the URL of the registry that hosts your image) for authentication and image pulls.
$ cat ~/.docker/config.json
{
"auths": {
"https://index.docker.io/v1/": { # <------ change here
"auth": "..........="
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.5 (linux)"
}
}
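One way to avoid getting the registry URL wrong is to build the secret straight from a Docker config that can already pull the image, and keep referencing it from imagePullSecrets as you already do. A sketch, assuming ~/.docker/config.json on the machine running kubectl has working credentials:
# Replace the existing pull secret with one generated from the local Docker config
kubectl delete secret regkeyname --namespace=external
kubectl create secret generic regkeyname \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  --namespace=external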
I recently evaluated Kubernetes with a simple test project and was able to update the image of a StatefulSet with a command like this:
kubectl set image statefulset/cloud-stateful-set cloud-stateful-container=ncccloud:v716
I'm now trying to get our real system to work in Kubernetes, and the pods don't do anything when I try to update the image, even though I'm using basically the same command.
It says:
statefulset.apps "cloud-stateful-set" image updated
And kubectl describe statefulset.apps/cloud-stateful-set says:
Image: ncccloud:v716
But kubectl describe pod cloud-stateful-set-0 and kubectl describe pod cloud-stateful-set-1 say:
Image: ncccloud:latest
ncccloud:latest is an image that doesn't work:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cloud-stateful-set-0 0/1 CrashLoopBackOff 7 13m
cloud-stateful-set-1 0/1 CrashLoopBackOff 7 13m
mssql-deployment-6cd4ff766-pzz99 1/1 Running 1 55m
Another strange thing is that every time I try to apply the StatefulSet it says configured instead of unchanged.
$ kubectl apply -f k8s/cloud-stateful-set.yaml
statefulset.apps "cloud-stateful-set" configured
Here is my cloud-stateful-set.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cloud-stateful-set
labels:
app: cloud
group: service
spec:
replicas: 2
# podManagementPolicy: Parallel
serviceName: cloud-stateful-set
selector:
matchLabels:
app: cloud
template:
metadata:
labels:
app: cloud
group: service
spec:
containers:
- name: cloud-stateful-container
image: ncccloud:latest
imagePullPolicy: Never
ports:
- containerPort: 80
volumeMounts:
- name: cloud-stateful-storage
mountPath: /cloud-stateful-data
volumeClaimTemplates:
- metadata:
name: cloud-stateful-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
Here is full output of kubectl describe pod/cloud-stateful-set-1:
Name: cloud-stateful-set-1
Namespace: default
Node: docker-for-desktop/192.168.65.3
Start Time: Tue, 02 Jul 2019 11:03:01 +0300
Labels: app=cloud
controller-revision-hash=cloud-stateful-set-5c9964c897
group=service
statefulset.kubernetes.io/pod-name=cloud-stateful-set-1
Annotations: <none>
Status: Running
IP: 10.1.0.20
Controlled By: StatefulSet/cloud-stateful-set
Containers:
cloud-stateful-container:
Container ID: docker://3ec26930c1a81caa39d5c5a16c4e25adf7584f90a71e0110c0b03ecb60dd9592
Image: ncccloud:latest
Image ID: docker://sha256:394427c40e964e34ca6c9db3ce1df1f8f6ce34c4ba8f3ab10e25da6e89678830
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Tue, 02 Jul 2019 11:19:03 +0300
Finished: Tue, 02 Jul 2019 11:19:03 +0300
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gzxpx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloud-stateful-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cloud-stateful-storage-cloud-stateful-set-1
ReadOnly: false
default-token-gzxpx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gzxpx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned cloud-stateful-set-1 to docker-for-desktop
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "pvc-4c9e1796-9c9a-11e9-998f-00155d64fa03"
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-gzxpx"
Normal Pulled 17m (x5 over 19m) kubelet, docker-for-desktop Container image "ncccloud:latest" already present on machine
Normal Created 17m (x5 over 19m) kubelet, docker-for-desktop Created container
Normal Started 17m (x5 over 19m) kubelet, docker-for-desktop Started container
Warning BackOff 4m (x70 over 19m) kubelet, docker-for-desktop Back-off restarting failed container
Here is full output of kubectl describe statefulset.apps/cloud-stateful-set:
Name: cloud-stateful-set
Namespace: default
CreationTimestamp: Tue, 02 Jul 2019 11:02:59 +0300
Selector: app=cloud
Labels: app=cloud
group=service
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app":"cloud","group":"service"},"name":"cloud-stateful-set","names...
Replicas: 2 desired | 2 total
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cloud
group=service
Containers:
cloud-stateful-container:
Image: ncccloud:v716
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
Volumes: <none>
Volume Claims:
Name: cloud-stateful-storage
StorageClass:
Labels: <none>
Annotations: <none>
Capacity: 10Mi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-0 in StatefulSet cloud-stateful-set successful
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-1 in StatefulSet cloud-stateful-set successful
I'm using Docker Desktop on Windows, if it matters.
In my case, imagePullPolicy was already set to Always. This command:
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
helped in my case; see the k8s docs: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
In the StatefulSet YAML, change
imagePullPolicy: Never
to
imagePullPolicy: Always
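A minimal sketch of making that change without re-editing the manifest, and then watching the pods roll (names taken from the StatefulSet above):
# Switch the pull policy in place and watch the StatefulSet restart its pods
kubectl patch statefulset cloud-stateful-set --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
kubectl rollout status statefulset/cloud-stateful-set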
I'm trying to start RStudio in a Docker container via Kubernetes. All objects are created, but when I try to open RStudio using these commands on Ubuntu 18:
kubectl create -f rstudio-ing.yml
IP=$(minikube ip)
xdg-open http://$IP/rstudio/
there is an error: RStudio initialization error: unable to connect to service.
The usual docker command works fine:
docker run -d -p 8787:8787 -e PASSWORD=123 -v /home/aabor/r-projects:/home/rstudio aabor/rstudio
The same operation in Kubernetes fails.
The rstudio-ing.yml file creates all objects fine. RStudio is accessible if I do not mount any folder, but if I add the folder mount it produces the error. Any suggestions?
The content of the rstudio-ing.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: r-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /rstudio/
backend:
serviceName: rstudio
servicePort: 8787
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rstudio
spec:
replicas: 1
selector:
matchLabels:
service: rstudio
template:
metadata:
labels:
service: rstudio
language: R
spec:
containers:
- name: rstudio
image: aabor/rstudio
env:
- name: PASSWORD
value: "123"
volumeMounts:
- name: home-dir
mountPath: /home/rstudio/
volumes:
- name: home-dir
hostPath:
#RStudio initialization error: unable connect to service
path: /home/aabor/r-projects
---
apiVersion: v1
kind: Service
metadata:
name: rstudio
spec:
ports:
- port: 8787
selector:
service: rstudio
This is the pod description:
Name: rstudio-689c4fd6c8-fgt7w
Namespace: default
Node: minikube/10.0.2.15
Start Time: Fri, 23 Nov 2018 21:42:35 +0300
Labels: language=R
pod-template-hash=2457098274
service=rstudio
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controlled By: ReplicaSet/rstudio-689c4fd6c8
Containers:
rstudio:
Container ID: docker://a6bdcbfdf8dc5489a4c1fa6f23fb782bc3d58dd75d50823cd370c43bd3bffa3c
Image: aabor/rstudio
Image ID: docker-pullable://aabor/rstudio@sha256:2326e5daa3c4293da2909f7e8fd15fdcab88b4eb54f891b4a3cb536395e5572f
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 23 Nov 2018 21:42:39 +0300
Ready: True
Restart Count: 0
Environment:
PASSWORD: 123
Mounts:
/home/rstudio/ from home-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mrkd8 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
home-dir:
Type: HostPath (bare host directory volume)
Path: /home/aabor/r-projects
HostPathType:
default-token-mrkd8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mrkd8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned rstudio-689c4fd6c8-fgt7w to minikube
Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "home-dir"
Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-mrkd8"
Normal Pulling 9s kubelet, minikube pulling image "aabor/rstudio"
Normal Pulled 7s kubelet, minikube Successfully pulled image "aabor/rstudio"
Normal Created 7s kubelet, minikube Created container
Normal Started 6s kubelet, minikube Started container
You have created a Service of type ClusterIP, which is only accessible inside the cluster, not from outside. To make it available outside the cluster, change the Service type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
name: rstudio
spec:
ports:
- port: 8787
selector:
service: rstudio
type: LoadBalancer
In that case, the LoadBalancer-type Service doesn't need the Ingress, and you can get the URL with:
$ minikube service rstudio --url
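Usage would then look roughly like this (a sketch, re-using the manifest file name from the question):
# Re-apply the manifests and ask minikube for the service URL
kubectl apply -f rstudio-ing.yml
minikube service rstudio --url
# Open the printed URL in the browser and log in with the password set in the Deployment (123).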