I have the following deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iam-mysql
  labels:
    app: iam
    tier: mysql
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iam
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: iam
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: iam-mysql
        envFrom:
        - configMapRef:
            name: iam-mysql-conf-dev
        - secretRef:
            name: iam-mysql-pass-dev
        ports:
        - containerPort: 3306
          name: iam-mysql
        volumeMounts:
        - name: iam-mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: mysql-initdb
          mountPath: /docker-entrypoint-initdb.d
      restartPolicy: Always
      volumes:
      - name: iam-mysql-persistent-storage
        persistentVolumeClaim:
          claimName: iam-mysql-pv-claim
      - name: mysql-initdb
        configMap:
          name: iam-mysql-initdb-dev
I cannot reload "iam-mysql-initdb-dev" with the new schema once it has been created. In fact, I deleted a table (user) inside the pod, and when I recreated the deployment the table (user) still wasn't there. That means Kubernetes is not reloading the schema once the deployment has been recreated.
That's expected behavior. Init files under the /docker-entrypoint-initdb.d/ directory are run only when the data directory is empty, i.e. only the first time.
If you look into the entrypoint script of MySQL 5.6, you can see this process.
In line 98, it checks whether the data directory is empty.
If it is empty, the script runs the init files in the /docker-entrypoint-initdb.d/ directory (see lines 190-192).
If it is not empty, that whole part of the entrypoint script (lines 98-202) is skipped.
In Kubernetes, when you use a persistent volume, the volume persists across deletion and recreation of pods. So when the pod restarts, the data directory is not empty, and MySQL skips the init part (lines 98-202).
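To make that concrete, here is a minimal sketch of the logic described above (an illustration only, not the actual entrypoint script):

#!/bin/sh
# Illustration of the mysql:5.6 entrypoint behaviour: the init files are only
# considered when the data directory has not been initialized yet.
DATADIR=/var/lib/mysql
if [ ! -d "$DATADIR/mysql" ]; then
  echo "data directory is empty: would initialize it and run /docker-entrypoint-initdb.d/*"
else
  echo "data directory already populated: initialization and init files are skipped"
fi

So if you need the init scripts to run again with this setup, the data directory has to be emptied, which means wiping the data in the persistent volume claim (and losing the existing database).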
I am deploying a new deployment after changes in a Kubernetes service, but I am facing a strange issue. When I delete the deployment, the deployment itself is deleted fine, but its ReplicaSets and Pods are not. Therefore, after applying that deployment again, new ReplicaSets and Pods are created, but the newly created Pods throw a "FailedScheduling" error with the message "0/1 nodes are available: 1 Too many pods.", and that's why the new changes are not reflected.
Following are the commands I am using:
kubectl delete -f render.yaml
kubectl apply -f render.yaml
My yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: renderdev-deployment
  labels:
    app: renderdev-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: renderdev-deployment
  template:
    metadata:
      labels:
        app: renderdev-deployment
    spec:
      containers:
      - name: renderdev-deployment
        image: renderdev.azurecr.io/renderdev:latest
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: azuresquarevfiles
          mountPath: /mnt/azuresquarevfiles
      volumes:
      - name: azuresquarevfiles
        azureFile:
          secretName: azure-secret
          shareName: videos
          readOnly: false
So when I delete the deployment, it should delete its ReplicaSets and Pods as well, but it does not. What could the issue be? Do I have to delete those ReplicaSets and Pods manually?
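To check whether the old ReplicaSets and Pods really are left behind after the delete, something like the following can be used (a sketch; the label app=renderdev-deployment comes from the manifest above, and <pod-name> is a placeholder):

kubectl delete -f render.yaml
kubectl get deployments,replicasets,pods -l app=renderdev-deployment
# if anything is still listed, inspect why it is stuck
kubectl describe pod <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp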
I have the postgres container running in a Pod on GKE and a PersistentVolume set up to store the data. However, all of the data in the database is lost if the cluster reboots or if the Pod is deleted.
If I run kubectl delete <postgres_pod> to delete the existing Pod and check the Pod newly created by Kubernetes to replace it, the database no longer has the data it had before the Pod was deleted.
Here are the yaml files I used to deploy postgres.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-storage
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-volume-claim
spec:
  storageClassName: custom-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.5
        resources: {}
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: "dbname"
        - name: POSTGRES_USER
          value: "user"
        - name: POSTGRES_PASSWORD
          value: "password"
        volumeMounts:
        - mountPath: /var/lib/postgresql/
          name: postgresdb
      volumes:
      - name: postgresdb
        persistentVolumeClaim:
          claimName: postgres-volume-claim
I double checked that the persistentVolumeReclaimPolicy has value Retain.
What am I missing?
Is the cluster creating a new volume each time you delete a pod? Check with kubectl get pv.
Is this a multi-zone cluster? Your storage class is not provisioning regional disks, so you might be getting a new disk when the pod moves from one zone to another.
Possibly related to your problem, the postgres container reference recommends mounting at /var/lib/postgresql/data/pgdata and setting the PGDATA env variable:
https://hub.docker.com/_/postgres#pgdata
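Applied to the deployment above, that suggestion would look roughly like this for the containers section (a sketch only; the rest of the manifest stays as it is):

      containers:
      - name: postgres
        image: postgres:11.5
        env:
        # PGDATA points at a subdirectory of the mounted volume, per the recommendation above
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        - name: POSTGRES_DB
          value: "dbname"
        - name: POSTGRES_USER
          value: "user"
        - name: POSTGRES_PASSWORD
          value: "password"
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgresdb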
I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but I want to use ConfigMaps so that we can change the IP address of the master in the sentinel.conf file. I started this, but Redis can't write to the config file because ConfigMap mount points are read-only.
I was hoping to run an init container and copy the redis conf to a different dir just inside the pod, but the init container couldn't find the conf file.
What are my options? Init Container? Something other than ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
      - name: redis-sentinel
        image: IP/redis-sentinel
        ports:
        - containerPort: 63790
        - containerPort: 26379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /usr/local/etc/redis/conf
          name: config
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: sentinel-redis-config
          items:
          - key: redis-config-sentinel
            path: sentinel.conf
Following @P Ekambaram's proposal, you can try this one:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
      - name: redis-sentinel
        image: redis:5.0.4
        ports:
        - containerPort: 63790
        - containerPort: 26379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /usr/local/etc/redis/conf
          name: config
      initContainers:
      - name: copy
        image: redis:5.0.4
        command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
        volumeMounts:
        - mountPath: /redis-master
          name: config
        - mountPath: /redis-master-data
          name: data
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: example-redis-config
          items:
          - key: redis-config
            path: redis.conf
In this example, the initContainer copies the file from the ConfigMap into a writable dir.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then run the container process.
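A minimal sketch of such a startup script, assuming the ConfigMap is mounted read-only at /usr/local/etc/redis/conf and an emptyDir is mounted at /redis-master-data as in the manifests above:

#!/bin/sh
# Copy the read-only sentinel config from the ConfigMap mount to a writable volume,
# then start sentinel against the writable copy so it can rewrite the file.
cp /usr/local/etc/redis/conf/sentinel.conf /redis-master-data/sentinel.conf
exec redis-sentinel /redis-master-data/sentinel.conf

You would then point the container's command (or the image's entrypoint) at this script instead of launching redis-sentinel directly.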
I have run into a Kubernetes-related issue. I just moved from a Pod configuration to a ReplicationController for a Ruby on Rails app, and I'm using persistent disks for the Rails pod. When I try to apply the ReplicationController, it gives the following error:
The ReplicationController "cartelhouse-ror" is invalid.
spec.template.spec.volumes[0].gcePersistentDisk.readOnly: Invalid
value: false: must be true for replicated pods > 1; GCE PD can only be
mounted on multiple machines if it is read-only
Does this mean there is no way to use persistent disks (R/W) when using ReplicationControllers or is there another way?
If not, how can I scale and/or apply rolling updates to the Pod configuration?
Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: appname
  labels:
    name: appname
spec:
  containers:
  - image: gcr.io/proj/appname:tag
    name: appname
    env:
    - name: POSTGRES_PASSWORD
      # Change this - must match postgres.yaml password.
      value: pazzzzwd
    - name: POSTGRES_USER
      value: rails
    ports:
    - containerPort: 80
      name: appname
    volumeMounts:
    # Name must match the volume name below.
    - name: appname-disk-per-sto
      # Mount path within the container.
      mountPath: /var/www/html
  volumes:
  - name: appname-disk-per-sto
    gcePersistentDisk:
      # This GCE persistent disk must already exist.
      pdName: appname-disk-per-sto
      fsType: ext4
ReplicationController configuration:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: appname
  name: appname
spec:
  replicas: 2
  selector:
    name: appname
  template:
    metadata:
      labels:
        name: appname
    spec:
      containers:
      - image: gcr.io/proj/app:tag
        name: appname
        env:
        - name: POSTGRES_PASSWORD
          # Change this - must match postgres.yaml password.
          value: pazzzzwd
        - name: POSTGRES_USER
          value: rails
        ports:
        - containerPort: 80
          name: appname
        volumeMounts:
        # Name must match the volume name below.
        - name: appname-disk-per-sto
          # Mount path within the container.
          mountPath: /var/www/html
      volumes:
      - name: appname-disk-per-sto
        gcePersistentDisk:
          # This GCE persistent disk must already exist.
          pdName: appname-disk-per-sto
          fsType: ext4
You can't achieve this with current Kubernetes - see Independent storage for replicated pods. This will be covered by the implementation of PetSets due in v1.3.
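PetSets later shipped as StatefulSets. A minimal sketch of per-replica storage via volumeClaimTemplates (names and sizes here are hypothetical, not from the question):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: appname
spec:
  serviceName: appname            # hypothetical headless Service name
  replicas: 2
  selector:
    matchLabels:
      name: appname
  template:
    metadata:
      labels:
        name: appname
    spec:
      containers:
      - name: appname
        image: gcr.io/proj/appname:tag
        volumeMounts:
        - name: appname-data
          mountPath: /var/www/html
  volumeClaimTemplates:
  - metadata:
      name: appname-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi           # hypothetical size

Each replica then gets its own PersistentVolumeClaim (and its own disk), rather than all replicas sharing one read-write GCE PD.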
The problem is not with Kubernetes, but with the shared block device and filesystem, which cannot be mounted read-write on more than one host at the same time.
https://unix.stackexchange.com/questions/68790/can-the-same-ext4-disk-be-mounted-from-two-hosts-one-readonly
You can try to use Claims: http://kubernetes.io/docs/user-guide/persistent-volumes/
Or another filesystem, e.g. nfs: http://kubernetes.io/docs/user-guide/volumes/
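If you go the NFS route, the volume definition in the pod template would look roughly like this (server and path are hypothetical); unlike a GCE PD, an NFS volume can be mounted read-write by several pods on different nodes:

      volumes:
      - name: appname-disk-per-sto
        nfs:
          server: nfs-server.example.com   # hypothetical NFS server
          path: /exports/appname           # hypothetical export path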
I am trying to run a shell script at the start of a docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run the prod_start.sh script at the start of the container (I don't want to put it in the Dockerfile though). The current setup fails to start the container with a "Command not found: file ./prod_start.sh does not exist" error. Any idea how to fix this?
app/
  ...
  Dockerfile
  prod_start.sh
  web-controller.yaml
  Gemfile
  ...
Dockerfile
FROM ruby
RUN mkdir /backend
WORKDIR /backend
ADD Gemfile /backend/Gemfile
ADD Gemfile.lock /backend/Gemfile.lock
RUN bundle install
web-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
      - name: secrets
        secret:
          secretName: secrets
      containers:
      - name: my-backend
        command: ['./prod_start.sh']
        image: gcr.io/myapp-id/myapp-backend:v1
        volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: http-server
After a lot of experimentation, I found that adding the script to the Dockerfile:
ADD prod_start.sh /backend/prod_start.sh
And then calling the command like this in the yaml controller file:
command: ['/bin/sh', './prod_start.sh']
Fixed it (the ADD makes sure the script is actually inside the image, and invoking it through /bin/sh avoids relying on the script's executable bit).
You can add a ConfigMap to your yaml instead of adding the script to your Dockerfile.
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
      - name: secrets
        secret:
          secretName: secrets
      - name: prod-start-config
        configMap:
          name: prod-start-config-script
          defaultMode: 0744
      containers:
      - name: my-backend
        command: ['./prod_start.sh']
        image: gcr.io/myapp-id/myapp-backend:v1
        volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
        - name: prod-start-config
          mountPath: /backend/
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: http-server
Then create another yaml file for your script:
script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-start-config-script
data:
  prod_start.sh: |
    apt-get update
When deployed, the script will be available in the mounted directory (/backend/ here).