Passing Docker container's run parameters in Kubernetes - docker

I have two containers (GitLab and PostgreSQL) running on RancherOS v1.0.3. I would like to make them part of Kubernetes cluster.
[rancher@rancher-agent-1 ~]$ cat postgresql.sh
docker run --name gitlab-postgresql -d \
--env 'POSTGRES_DB=gitlabhq_production' \
--env 'POSTGRES_USER=gitlab' --env 'POSTGRES_PASSWORD=password' \
--volume /srv/docker/gitlab/postgresql:/var/lib/postgresql \
postgres:9.6-2
[rancher@rancher-agent-1 ~]$ cat gitlab.sh
docker run --name gitlab -d \
--link gitlab-postgresql:postgresql \
--publish 443:443 --publish 80:80 \
--env 'GITLAB_PORT=80' --env 'GITLAB_SSH_PORT=10022' \
--env 'GITLAB_SECRETS_DB_KEY_BASE=64-char-key-A' \
--env 'GITLAB_SECRETS_SECRET_KEY_BASE=64-char-key-B' \
--env 'GITLAB_SECRETS_OTP_KEY_BASE=64-char-key-C' \
--volume /srv/docker/gitlab/gitlab:/home/git/data \
sameersbn/gitlab:9.4.5
Queries:
1) I have some idea of how to use YAML files to provision pods, replication controllers etc., but I am not sure how to pass the above docker run parameters to Kubernetes so that it applies them to the image(s) correctly.
2) I'm not sure whether the --link argument (used in gitlab.sh above) also needs to be expressed in Kubernetes. I am currently deploying both containers on a single host, but will later be creating a cluster of each (PostgreSQL and GitLab), so I want to confirm whether inter-host communication is automatically taken care of by Kubernetes. If not, what options can be explored?

You should first try to represent your run statements as a docker-compose.yml file, which is quite easy and would turn into something like the following:
version: '3'
services:
  postgresql:
    image: postgres:9.6-2
    environment:
      - "POSTGRES_DB=gitlabhq_production"
      - "POSTGRES_USER=gitlab"
      - "POSTGRES_PASSWORD=password"
    volumes:
      - /srv/docker/gitlab/postgresql:/var/lib/postgresql
  gitlab:
    image: sameersbn/gitlab:9.4.5
    ports:
      - "443:443"
      - "80:80"
    environment:
      - "GITLAB_PORT=80"
      - "GITLAB_SSH_PORT=10022"
      - "GITLAB_SECRETS_DB_KEY_BASE=64-char-key-A"
      - "GITLAB_SECRETS_SECRET_KEY_BASE=64-char-key-B"
      - "GITLAB_SECRETS_OTP_KEY_BASE=64-char-key-C"
    volumes:
      - /srv/docker/gitlab/gitlab:/home/git/data
Now there is an amazing tool named kompose from kompose.io which does the conversion part for you. If you convert the above, you will get the related files:
$ kompose convert -f docker-compose.yml
WARN Volume mount on the host "/srv/docker/gitlab/gitlab" isn't supported - ignoring path on the host
WARN Volume mount on the host "/srv/docker/gitlab/postgresql" isn't supported - ignoring path on the host
INFO Kubernetes file "gitlab-service.yaml" created
INFO Kubernetes file "postgresql-service.yaml" created
INFO Kubernetes file "gitlab-deployment.yaml" created
INFO Kubernetes file "gitlab-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "postgresql-deployment.yaml" created
INFO Kubernetes file "postgresql-claim0-persistentvolumeclaim.yaml" created
Now you have to fix the volume mount part as per Kubernetes (see the PersistentVolume sketch after the generated files). This completes about 80% of the work; you just need to figure out the remaining 20%.
Here is a cat of all the generated files, so you can see what kind of files are produced:
==> gitlab-claim0-persistentvolumeclaim.yaml <==
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab-claim0
  name: gitlab-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

==> gitlab-deployment.yaml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab
  name: gitlab
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: gitlab
    spec:
      containers:
      - env:
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: 64-char-key-A
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: 64-char-key-C
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: 64-char-key-B
        - name: GITLAB_SSH_PORT
          value: "10022"
        image: sameersbn/gitlab:9.4.5
        name: gitlab
        ports:
        - containerPort: 443
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /home/git/data
          name: gitlab-claim0
      restartPolicy: Always
      volumes:
      - name: gitlab-claim0
        persistentVolumeClaim:
          claimName: gitlab-claim0
status: {}

==> gitlab-service.yaml <==
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab
  name: gitlab
spec:
  ports:
  - name: "443"
    port: 443
    targetPort: 443
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: gitlab
status:
  loadBalancer: {}

==> postgresql-claim0-persistentvolumeclaim.yaml <==
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql-claim0
  name: postgresql-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

==> postgresql-deployment.yaml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql
  name: postgresql
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: postgresql
    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          value: gitlabhq_production
        - name: POSTGRES_PASSWORD
          value: password
        - name: POSTGRES_USER
          value: gitlab
        image: postgres:9.6-2
        name: postgresql
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: postgresql-claim0
      restartPolicy: Always
      volumes:
      - name: postgresql-claim0
        persistentVolumeClaim:
          claimName: postgresql-claim0
status: {}

==> postgresql-service.yaml <==
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql
  name: postgresql
spec:
  clusterIP: None
  ports:
  - name: headless
    port: 55555
    targetPort: 0
  selector:
    io.kompose.service: postgresql
status:
  loadBalancer: {}
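To satisfy the generated claims (and the volume-mount warnings kompose printed), you can create backing PersistentVolumes yourself. A minimal sketch, assuming you simply want to reuse the host directories from your docker run commands; hostPath is only suitable for a single-node or test setup, and the PV names here are illustrative, not something kompose generated:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-data-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /srv/docker/gitlab/gitlab
  claimRef:
    namespace: default
    name: gitlab-claim0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-data-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /srv/docker/gitlab/postgresql
  claimRef:
    namespace: default
    name: postgresql-claim0
You can then create everything with kubectl create -f . from the directory holding the generated files. As for your second question: there is no --link equivalent in Kubernetes and none is needed. The generated postgresql Service gives the database a stable DNS name (postgresql) that the GitLab pod can resolve from any node, so inter-host communication is handled by the cluster networking.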

Related

Connect to mongodb with mongoose both in kubernetes

I have a microservice which I developed and tested using docker-compose. Now I would like to deploy it to kubernetes.
Part of my docker-compose file looks like this:
tasksdb:
  container_name: tasks-db
  image: mongo:4.4.1
  restart: always
  ports:
    - '6004:27017'
  volumes:
    - ./tasks_service/tasks_db:/data/db
  networks:
    - backend
tasks-service:
  container_name: tasks-service
  build: ./tasks_service
  restart: always
  ports:
    - "5004:3000"
  volumes:
    - ./tasks_service/logs:/usr/src/app/logs
    - ./tasks_service/tasks_attachments/:/usr/src/app/tasks_attachments
  depends_on:
    - tasksdb
  networks:
    - backend
I used mongoose to connect to the database and it worked fine:
const connection = "mongodb://tasks-db:27017/tasks";
const connectDb = () => {
  mongoose.connect(connection, {useNewUrlParser: true, useCreateIndex: true, useFindAndModify: false});
  return mongoose.connect(connection);
};
Utilizing Kompose, I created a deployment file; however, I had to modify the persistent volume and persistent volume claim accordingly.
I have something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tasks-volume
  labels:
    type: local
spec:
  storageClassName: manual
  volumeMode: Filesystem
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.60.50
    path: /tasks_db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tasksdb-claim0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
I changed the mongo URL like this:
const connection = "mongodb://tasksdb.default.svc.cluster.local:27017/tasks";
My deployment looks like this:
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasks-service
    name: tasks-service
  spec:
    ports:
    - name: "5004"
      port: 5004
      targetPort: 3000
    selector:
      io.kompose.service: tasks-service
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasksdb
    name: tasksdb
  spec:
    ports:
    - name: "6004"
      port: 6004
      targetPort: 27017
    selector:
      io.kompose.service: tasksdb
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasks-service
    name: tasks-service
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: tasks-service
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
          kompose.version: 1.22.0 (955b78124)
        creationTimestamp: null
        labels:
          io.kompose.service: tasks-service
      spec:
        containers:
        - image: 192.168.60.50:5000/blascal_tasks-service
          name: tasks-service
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 3000
        restartPolicy: Always
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasksdb
    name: tasksdb
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: tasksdb
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
          kompose.version: 1.22.0 (955b78124)
        creationTimestamp: null
        labels:
          io.kompose.service: tasksdb
      spec:
        containers:
        - image: mongo:4.4.1
          name: tasks-db
          ports:
          - containerPort: 27017
          resources: {}
          volumeMounts:
          - mountPath: /data/db
            name: tasksdb-claim0
        restartPolicy: Always
        volumes:
        - name: tasksdb-claim0
          persistentVolumeClaim:
            claimName: tasksdb-claim0
  status: {}
Having several services, I added an ingress resource for my routing:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: tasks-service
          servicePort: 5004
The deployment seems to run fine.
However, I have three issues:
Despite the fact that I can hit my default path, which just reads "tasks service is up", I cannot access my mongoose routes like /api/task/raise which connect to the db; it says "...buffering timed out". I guess the path does not link up to the database service?
[the tasks service pod log screenshot is omitted here]
Whenever there is a power surge and my machine goes off, bringing up the db deployment fails until I delete the config files from the persistent volume. How do I prevent this corruption of files?
I have been researching an elaborate way of changing the master IP of my cluster, as I intend to transfer my cluster to a different network. Any guidance please?
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
[the kube-dns log output is omitted here]
Your tasksdb Service exposes port 6004, not 27017. Try using the following URL:
const connection = "mongodb://tasksdb.default.svc.cluster.local:6004/tasks";
Changing your network depends on what networking CNI plugin you are using. Every plugin has different steps. For Calico, please see https://docs.projectcalico.org/networking/migrate-pools
I believe this is your ClusterIP Service definition for the mongodb instance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: tasksdb
  name: tasksdb
spec:
  ports:
  - name: "6004"
    port: 6004
    targetPort: 27017
  selector:
    io.kompose.service: tasksdb
status:
  loadBalancer: {}
When you create an instance of mongodb inside Kubernetes, it runs inside a pod. To connect to a pod, we have to go through a ClusterIP Service, and any time we connect through a ClusterIP Service we use the name of that Service as the host in the connection URL. In this case your connection URL must be
mongodb://tasksdb:6004/nameOfDatabase
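Applied to the connection helper from the question, that would look roughly like this (just a sketch; the database name tasks is taken from the original URL):
const mongoose = require("mongoose");

// Use the Service name "tasksdb" and the Service port 6004
// (the Service forwards 6004 to containerPort 27017 in the pod).
const connection = "mongodb://tasksdb:6004/tasks";

const connectDb = () => {
  return mongoose.connect(connection, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false,
  });
};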

having problem with giving deployment using kubernetes [duplicate]

This question already has answers here:
no matches for kind "Deployment" in version "extensions/v1beta1"
(8 answers)
Closed 2 years ago.
I had this docker-compose file which is working absolutely fine. But then I used "kompose convert -f docker-compose.yaml -o deploy.yaml" in order to get a YAML file for Kubernetes deployment.
But when I run "kubectl apply -f deploy.yaml",
I am getting this error:
service/cms created
service/mysqldb created
persistentvolumeclaim/my-datavolume configured
unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"
I am using minikube.
Please help me out.
docker-compose file content
version: "2"
services:
cms:
image: 1511981217/cms_mysql:0.0.2
ports:
- "8080:8080"
networks:
- cms-network
depends_on:
- mysqldb
mysqldb:
image: mysql:8
ports:
- "3306:3306"
networks:
- cms-network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=cmsdb
volumes:
- my-datavolume:/var/lib/mysql
networks:
cms-network:
volumes:
my-datavolume:
deploy.yaml file content
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: cms
    name: cms
  spec:
    ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
    selector:
      io.kompose.service: cms
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: mysqldb
    name: mysqldb
  spec:
    ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
    selector:
      io.kompose.service: mysqldb
  status:
    loadBalancer: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: cms
    name: cms
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
          kompose.version: 1.19.0 (f63a961c)
        creationTimestamp: null
        labels:
          io.kompose.service: cms
      spec:
        containers:
        - image: 1511981217/cms_mysql:0.0.2
          name: cms
          ports:
          - containerPort: 8080
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: mysqldb
    name: mysqldb
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
          kompose.version: 1.19.0 (f63a961c)
        creationTimestamp: null
        labels:
          io.kompose.service: mysqldb
      spec:
        containers:
        - env:
          - name: MYSQL_DATABASE
            value: cmsdb
          - name: MYSQL_ROOT_PASSWORD
            value: root
          image: mysql:8
          name: mysqldb
          ports:
          - containerPort: 3306
          resources: {}
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: my-datavolume
        restartPolicy: Always
        volumes:
        - name: my-datavolume
          persistentVolumeClaim:
            claimName: my-datavolume
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: my-datavolume
    name: my-datavolume
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
kind: List
metadata: {}
extensions/v1beta1 is no longer served as of Kubernetes 1.16. Change extensions/v1beta1 to apps/v1 in the YAML and it should work with a Kubernetes cluster of version 1.16 or higher.
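For example, the cms Deployment item from the deploy.yaml above would start roughly like this after the change; note that apps/v1 also requires an explicit spec.selector whose matchLabels match the pod template labels (a sketch reusing the kompose-generated labels):
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      io.kompose.service: cms
    name: cms
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: cms
    strategy: {}
    template:
      metadata:
        labels:
          io.kompose.service: cms
      spec:
        containers:
        - image: 1511981217/cms_mysql:0.0.2
          name: cms
          ports:
          - containerPort: 8080
        restartPolicy: Always
The same two changes (apiVersion and selector) apply to the mysqldb Deployment.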

Convert Kafka docker run command to Kubernetes

The Kubernetes YAML below creates resources as expected and there are no errors/restarts. External requests to the Kafka broker at localhost:30005 hang indefinitely. My understanding is that the Docker notion of a "network" is achieved in Kubernetes via Services. In this case, I have a NodePort Service which brings traffic into the cluster and then a ClusterIP Service which handles communication between containers. Based on the non-Kubernetes, docker run version of this Zookeeper/Kafka configuration, I suspect my issue lies somewhere in my Services and/or advertised listeners. The internet is replete with docker-compose files and Helm charts, but I don't seem to be able to find a combination of configurations that works.
Kafka/Zookeeper YAML:
# Zookeeper
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-app
  template:
    metadata:
      labels:
        app: zookeeper-app
    spec:
      containers:
      - name: zookeeper-app
        image: confluentinc/cp-zookeeper:5.4.1
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_TICK_TIME
          value: "2000"
        - name: ZOOKEEPER_CLIENT_PORT
          value: "2181"
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cluster-svc
  labels:
    app: zk
spec:
  type: ClusterIP
  selector:
    app: zookeeper-app
  ports:
  - port: 2181
    targetPort: 2181
    protocol: TCP
---
# Kafka
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-app
  template:
    metadata:
      labels:
        app: kafka-app
    spec:
      containers:
      - name: kafka-app
        image: confluentinc/cp-kafka:5.4.1
        ports:
        - containerPort: 9092
        - containerPort: 29092
        env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zk-cluster-svc:2181"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INSIDE://kafka-cluster-svc:29092,OUTSIDE://localhost:30005"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INSIDE"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    app: kafka-service
spec:
  type: NodePort
  ports:
  - port: 9092
    nodePort: 30005
    protocol: TCP
  selector:
    app: kafka-app
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster-svc
  labels:
    app: kafka-app
spec:
  selector:
    app: kafka-app
  ports:
  - port: 29092
    targetPort: 29092
    protocol: TCP
^ This is based on a working docker run version of these containers:
docker network create kafzoonet
docker run --name app-zookeeper -d --network="kafzoonet" --env ZOOKEEPER_TICK_TIME=2000 --env ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:5.4.1
docker run --name app-kafka -p 30005:9092 -d --network="kafzoonet" --env KAFKA_BROKER_ID=1 --env KAFKA_ZOOKEEPER_CONNECT="app-zookeeper:2181" --env KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://app-kafka:29092,PLAINTEXT_HOST://localhost:30005" --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" --env KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT --env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:5.4.1
Are the services configured correctly for a single Kafka node? It appears that Kafka is able to reach Zookeeper, but for some reason, when a client connects, it isn't getting the correct listener echoed back. Unfortunately I don't have any error messages to support this, as the client just hangs and neither the Kafka nor the Zookeeper pods log anything.

Kubernetes: port-forwarding automatically for services

I have a Rails project that uses a Postgres database. I want to build a database server using Kubernetes, and the Rails server will connect to this database.
For example, here is my postgres.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          value: hades_dev
        - name: POSTGRES_PASSWORD
          value: "1234"
        name: postgres
        image: postgres:latest
        ports:
        - containerPort: 5432
        resources: {}
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /var/lib/postgresql/data/
          name: database-hades-volume
      restartPolicy: Always
      volumes:
      - name: database-hades-volume
        persistentVolumeClaim:
          claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-hades-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
I run this with the following command: kubectl run -f postgres.yml.
But when I try to run the Rails server, I always get the following exception:
PG::Error
invalid encoding name: utf8
I tried forwarding the port, and the Rails server successfully connects to the database server:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-3681891707-8ch4l 1/1 Running 0 1m
Then I run the following command:
kubectl port-forward postgres-3681891707-8ch4l 5432:5432
I don't think this solution is good. How can I define this in my postgres.yml so that I don't need to port-forward manually as above?
Thanks
You can try exposing your Service as a NodePort and then accessing the Service on that port.
Check here https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
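A minimal sketch of what that could look like for the postgres Service above (the nodePort value 30432 is just an example and must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
    nodePort: 30432
Rails can then connect to any node's IP on port 30432 without kubectl port-forward. Also note that the pod template in your Deployment is labeled app: mysql while the Service selector is app: postgres; those labels need to match for the Service to find the pod at all.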

How do pods in Google Container Engine talk/link to each other

I am fiddling around with this example: https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
My modifications:
- I'm using my own WordPress image [x] Works. The service starts (it needed more CPU, 0.8 instead of 0.5, but now it works)
- I want to use mariadb instead of mysql [ ] Fails! I can't figure out how the two pods link together!!!! ~5h+ and still failing.
Here are my .yaml files:
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
  - image: <my image on gcr.io>
    name: wpsite
    env:
    - name: WORDPRESS_DB_PASSWORD
      # Change this - must match mysql.yaml password.
      value: example
    ports:
    - containerPort: 80
      name: wpsite
    volumeMounts:
      # Name must match the volume name below.
    - name: wpsite-disk
      # Mount path within the container.
      mountPath: /var/www/html
  volumes:
  - name: wpsite-disk
    gcePersistentDisk:
      # This GCE persistent disk must already exist.
      pdName: wpsite-disk
      fsType: ext4
service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: wpsite
  name: wpsite
spec:
  type: LoadBalancer
  ports:
    # The port that this service should serve on.
  - port: 80
    targetPort: 80
    protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
mariadb:
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
  - resources:
      limits:
        # 0.5 did not work
        # Error message in: kubectl describe pod mariadb
        cpu: 0.8
    image: mariadb:10.1
    name: mariadb
    env:
    - name: MYSQL_ROOT_PASSWORD
      # Change this password!
      value: example
    ports:
    - containerPort: 3306
      name: mariadb
    volumeMounts:
      # This name must match the volumes.name below.
    - name: mariadb-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mariadb-persistent-storage
    gcePersistentDisk:
      # This disk must already exist.
      pdName: mariadb-disk
      fsType: ext4
maria-db-service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
    # The port that this service should serve on.
  - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mysql
kubectl logs wpsite shows error messages like this: Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 10
OK - found it out!
It's the name in mariadb-service.yaml: metadata.name must be mysql and not mariadb, and the selector in mariadb-service must point to mariadb (the pod).
Here are the working files:
mariadb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
  - resources:
      limits:
        # 0.5 did not work
        # Error message in: kubectl describe pod mariadb
        cpu: 0.8
    image: mariadb:10.1
    name: mariadb
    env:
    - name: MYSQL_ROOT_PASSWORD
      # Change this password!
      value: example
    ports:
    - containerPort: 3306
      name: mariadb
    volumeMounts:
      # This name must match the volumes.name below.
    - name: mariadb-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mariadb-persistent-storage
    gcePersistentDisk:
      # This disk must already exist.
      pdName: mariadb-disk
      fsType: ext4
mariadb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
    # The port that this service should serve on.
  - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mariadb
wpsite.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
  - image: <change this to your imagename on gcr.io>
    name: wpsite
    env:
    - name: WORDPRESS_DB_PASSWORD
      # Change this - must match mysql.yaml password.
      value: example
    ports:
    - containerPort: 80
      name: wpsite
    volumeMounts:
      # Name must match the volume name below.
    - name: wpsite-disk
      # Mount path within the container.
      mountPath: /var/www/html
  volumes:
  - name: wpsite-disk
    gcePersistentDisk:
      # This GCE persistent disk must already exist.
      pdName: wpsite-disk
      fsType: ext4
wpsite-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  type: LoadBalancer
  ports:
    # The port that this service should serve on.
  - port: 80
    targetPort: 80
    protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
With these settings I run the following (my yaml files are under gke/):
$ kubectl create -f gke/mariadb.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/mariadb-service.yaml
# Check
$ kubectl get service mysql   # !!!! the service for mariadb is named mysql
$ kubectl create -f gke/wpsite.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/wpsite-service.yaml
# Check
$ kubectl describe service wpsite
Hope this helps someone...
