Pod stuck in Pending state after kubectl apply?

My pod stays in Pending state after kubectl apply. I am currently trying to deploy three services: a Postgres database, an API server, and the UI of the application. The Postgres pod is running fine, but the remaining two services are stuck in Pending.
I tried creating yaml files like this
API server persistent volume
kind: PersistentVolume
apiVersion: v1
metadata:
  name: api-initdb-pv-volume
  labels:
    type: local
    app: api
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/api"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: api-initdb-pv-claim-one
  labels:
    app: api
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
API server
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  selector:
    matchLabels:
      app: apiserver
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiserver
        tier: backend
    spec:
      containers:
        - image: suji165475/devops-sample:wootz-backend
          name: apiserver
          ports:
            - containerPort: 8000
              name: myport
          volumeMounts:
            - name: api-persistent-storage-one
              mountPath: /usr/src/app
            - name: api-persistent-storage-two
              mountPath: /usr/src/app/node_modules
      volumes:
        - name: api-persistent-storage-one
          persistentVolumeClaim:
            claimName: api-initdb-pv-claim-one
        - name: api-persistent-storage-two
          persistentVolumeClaim:
            claimName: api-initdb-pv-claim-two
docker-compose file (just for reference)
version: "3"
services:
  pg_db:
    image: postgres
    networks:
      - wootzinternal
    ports:
      - 5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_DB=wootz
    volumes:
      - wootz-db:/var/lib/postgresql/data
  apiserver:
    image: wootz-backend
    volumes:
      - ./api:/usr/src/app
      - /usr/src/app/node_modules
    build:
      context: ./api
      dockerfile: Dockerfile
    networks:
      - wootzinternal
    depends_on:
      - pg_db
    ports:
      - '8000:8000'
  ui:
    image: wootz-frontend
    volumes:
      - ./client:/usr/src/app
      - /usr/src/app/node_modules
    build:
      context: ./client
      dockerfile: Dockerfile
    networks:
      - wootzinternal
    ports:
      - '80:3000'
volumes:
  wootz-db:
networks:
  wootzinternal:
    driver: bridge
When I ran kubectl apply on the API server YAML file, the pod for the API server stayed stuck in Pending forever. How do I solve this?

For your future questions, if you need to get more information on what is happening, use kubectl describe pod <pod_name>; this gives you (and us) more information and increases the chance of a proper answer. I used your YAML, and after describing the pod:
persistentvolumeclaim "api-initdb-pv-claim-two" not found
After adding the second one:
pod has unbound PersistentVolumeClaims (repeated 3 times)
After you add the second PV it should start working.
You have two PersistentVolumeClaims and only one PersistentVolume. You can't bind two PVCs to one PV, so in this case you need to add another PV and another PVC to the manifests.
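As an illustration, a second PV/PVC pair could look like the sketch below. The claim name api-initdb-pv-claim-two matches the claim your Deployment already references; the PV name and hostPath are placeholders to adjust to your environment.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: api-initdb-pv-volume-two   # assumed name, pick your own
  labels:
    type: local
    app: api
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/api-node-modules"   # placeholder path
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: api-initdb-pv-claim-two
  labels:
    app: api
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi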
You can read more about it here.
A PersistentVolume (PV) is an atomic abstraction. You can not
subdivide it across multiple claims.
More information about Persistent Volumes and how they work can be found in the official documentation.
Also, if you are trying to deploy PostgreSQL, here is a good guide on how to do that, and another one that will be easier as it uses a managed Kubernetes service: how to run HA PostgreSQL on GKE.

The api-initdb-pv-claim-two PVC doesn't exist.
You need to create the PVs and bind each one with its own PVC.

Related

Rancher persistent data lost after reboot

I have installed nfs-provisioner in my Rancher cluster and created a persistent volume for my MongoDB. When I restart the server or upgrade the MongoDB container, all my data is lost. How do I fix this?
My MongoDB configuration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-db
spec:
  selector:
    matchLabels:
      app: mongo-db
  serviceName: mongo-db
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo-db
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data # reference the volumeClaimTemplate below
              mountPath: /data/db
  # this is a key difference with StatefulSets:
  # a unique volume will be attached to each pod
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        # if no storageClassName is provided, the default storage class will be used
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
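To verify that the claims created from the volumeClaimTemplates are bound to NFS-backed PersistentVolumes and are reused across restarts, checks along these lines can help; the claim name below is assumed from the usual <template>-<statefulset>-<ordinal> naming convention:
# list the claims the StatefulSet created and the volumes backing them
kubectl get pvc
kubectl get pv
# inspect the first replica's claim in detail (name assumed from the convention above)
kubectl describe pvc data-mongo-db-0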

How do I copy a Kubernetes configmap to a write enabled area of a pod?

I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but I want to use ConfigMaps to allow us to change the IP address of the master in the sentinel.conf file. I started this, but Redis can't write to the config file because ConfigMap mount points are read-only.
I was hoping to run an init container and copy the Redis conf to a different dir just in the pod, but the init container couldn't find the conf file.
What are my options? Init Container? Something other than ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: IP/redis-sentinel
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: sentinel-redis-config
            items:
              - key: redis-config-sentinel
                path: sentinel.conf
Following @P Ekambaram's proposal, you can try this one:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: redis:5.0.4
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      initContainers:
        - name: copy
          image: redis:5.0.4
          command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
          volumeMounts:
            - mountPath: /redis-master
              name: config
            - mountPath: /redis-master-data
              name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: example-redis-config
            items:
              - key: redis-config
                path: redis.conf
In this example the initContainer copies the file from the ConfigMap into a writable dir.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then run the container process.
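A minimal sketch of such a startup script, assuming the mount paths from the question's manifest (the ConfigMap mounted read-only at /usr/local/etc/redis/conf and the emptyDir at /redis-master-data):
#!/bin/sh
# copy the read-only sentinel.conf from the ConfigMap mount into the writable emptyDir
cp /usr/local/etc/redis/conf/sentinel.conf /redis-master-data/sentinel.conf
# start Sentinel against the writable copy so it can rewrite its own config
exec redis-sentinel /redis-master-data/sentinel.conf
The script would be baked into the image (or itself mounted from a ConfigMap with an executable mode) and set as the container's command.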

I need information about docker-compose.yml - how to configure/export to kubernetes ingress

I'm building an application written in PHP/Symfony4. I have prepared an API service and some services written in NodeJS/Express.
I'm configuring the server structure on Google Cloud Platform. The best idea, for now, is a multi-zone, multi-cluster configuration with a load balancer.
I was using this link https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/tree/master/examples/zone-printer as a source for my configuration. But now I don't know how to build/upload the docker-compose.yml images to GCR so they can be used in Google Kubernetes.
version: '3'
services:
  php:
    image: gcr.io/XXX/php
    build: build/php
    expose:
      - '9000'
    volumes:
      - ./symfony:/var/www/html/symfony:cached
      - ./logs:/var/log
  web:
    image: gcr.io/XXX/nginx
    build: build/nginx
    restart: always
    ports:
      - "81:80"
    depends_on:
      - php
    volumes:
      - ./symfony:/var/www/html/symfony:cached
      - ./logs:/var/log/nginx
I need to have a single container gcr.io/XXX/XXX/XXX for the kubernetes-ingress configuration. Should I use docker-compose.yml or find something else? Which solution would be best?
docker-compose and Kubernetes declarations are not compatible with each other. If you want to use Kubernetes you can use a Pod with 2 containers (according to your example). If you want to take it a step further, you can use a Kubernetes Deployment that can manage your pod replicas, in case you are using multiple replicas.
Something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: php
          image: gcr.io/XXX/php
          ports:
            - containerPort: 9000
          volumeMounts:
            - mountPath: /var/www/html/symfony
              name: symphony
            - mountPath: /var/log
              name: logs
        - name: web
          image: gcr.io/XXX/nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/symfony
              name: symphony
            - mountPath: /var/log
              name: logs
      volumes:
        - name: symphony
          hostPath:
            path: /home/symphony
        - name: logs
          hostPath:
            path: /home/logs
Going even further, you can remove your web container and use an nginx ingress controller. More about Kubernetes Ingresses here.
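To route external traffic to the pod above you also need a Service in front of it; a minimal Service plus Ingress sketch might look like this (the Service name, Ingress name, host, and Ingress API version are assumptions to adapt to your cluster):
apiVersion: v1
kind: Service
metadata:
  name: myapp-service   # assumed name
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80    # the web (nginx) container port from the Deployment above
---
apiVersion: networking.k8s.io/v1   # older clusters may still use extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress   # assumed name
spec:
  rules:
    - host: myapp.example.com      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80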

Kubernetes: port-forwarding automatically for services

I have a Rails project that uses a Postgres database. I want to build a database server using Kubernetes, and the Rails server will connect to this database.
For example, here is my postgres.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: hades_dev
            - name: POSTGRES_PASSWORD
              value: "1234"
          name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
          resources: {}
          stdin: true
          tty: true
          volumeMounts:
            - mountPath: /var/lib/postgresql/data/
              name: database-hades-volume
      restartPolicy: Always
      volumes:
        - name: database-hades-volume
          persistentVolumeClaim:
            claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-hades-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
I run this with the following command: kubectl run -f postgres.yml.
But when I try to run the Rails server, I always get the following exception:
PG::Error
invalid encoding name: utf8
I tried forwarding the port, and the Rails server successfully connects to the database server:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
postgres-3681891707-8ch4l   1/1       Running   0          1m
Then I run the following command:
kubectl port-forward postgres-3681891707-8ch4l 5432:5432
I don't think this solution is good. How can I define this in my postgres.yml so that I don't need to port-forward manually as above?
Thanks
You can try exposing your service as a NodePort and then accessing the service on that port.
Check here https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
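A sketch of what the Service from the question could look like as a NodePort (the nodePort value is an arbitrary example; it must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      nodePort: 30432   # example value inside the default NodePort range
The Rails app can then reach Postgres at <any-node-IP>:30432 without running kubectl port-forward manually.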

How to Run a script at the start of Container in Cloud Containers Engine with Kubernetes

I am trying to run a shell script at the start of a Docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run the prod_start.sh script at the start of the container (I don't want to put it as part of the Dockerfile, though). The current setup fails to start the container with "Command not found: file ./prod_start.sh does not exist". Any idea how to fix this?
app/
...
Dockerfile
prod_start.sh
web-controller.yaml
Gemfile
...
Dockerfile
FROM ruby
RUN mkdir /backend
WORKDIR /backend
ADD Gemfile /backend/Gemfile
ADD Gemfile.lock /backend/Gemfile.lock
RUN bundle install
web-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
After a lot of experimentation, I believe adding the script to the Dockerfile:
ADD prod_start.sh /backend/prod_start.sh
and then calling the command like this in the YAML controller file:
command: ['/bin/sh', './prod_start.sh']
fixed it.
You can add a ConfigMap to your YAML instead of adding the script to your Dockerfile.
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
        - name: prod-start-config
          configMap:
            name: prod-start-config-script
            defaultMode: 0744
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
            - name: prod-start-config
              mountPath: /backend/
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
Then create another YAML file for your script:
script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-start-config-script
data:
  prod_start.sh: |
    #!/bin/sh
    # shebang added so the container command can execute the script directly
    apt-get update
When deployed, the script will be available inside the container at the ConfigMap mount path (/backend/ in the example above).
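Once the pod is running, a quick check like this can confirm that the script landed where the container command expects it (the pod name is a placeholder):
kubectl exec <pod-name> -c my-backend -- ls -l /backend/prod_start.sh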
