How do pods in Google Container Engine talk/link to each other - docker

I'm fiddling around with this example: https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
My modifications:
- I'm using my own WordPress image [x] Works
- The service starts (it needed more CPU, 0.8 instead of 0.5, but now it works)
- I want to use MariaDB instead of MySQL [ ] Fails!
I can't figure out how the two pods link together! About 5 hours in and still failing.
Here are my .yaml files:
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
    - image: <my image on gcr.io>
      name: wpsite
      env:
        - name: WORDPRESS_DB_PASSWORD
          # Change this - must match mysql.yaml password.
          value: example
      ports:
        - containerPort: 80
          name: wpsite
      volumeMounts:
        # Name must match the volume name below.
        - name: wpsite-disk
          # Mount path within the container.
          mountPath: /var/www/html
  volumes:
    - name: wpsite-disk
      gcePersistentDisk:
        # This GCE persistent disk must already exist.
        pdName: wpsite-disk
        fsType: ext4
service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: wpsite
  name: wpsite
spec:
  type: LoadBalancer
  ports:
    # The port that this service should serve on.
    - port: 80
      targetPort: 80
      protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
mariadb:
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
    - resources:
        limits:
          # 0.5 didn't work
          # the error message shows up in: kubectl describe pod mariadb
          cpu: 0.8
      image: mariadb:10.1
      name: mariadb
      env:
        - name: MYSQL_ROOT_PASSWORD
          # Change this password!
          value: example
      ports:
        - containerPort: 3306
          name: mariadb
      volumeMounts:
        # This name must match the volumes.name below.
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mariadb-persistent-storage
      gcePersistentDisk:
        # This disk must already exist.
        pdName: mariadb-disk
        fsType: ext4
maria-db-service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
    # The port that this service should serve on.
    - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mysql
kubectl logs wpsite shows error messages like this: Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 10

OK - found it out!
It's the name in mariadb-service.yaml: metadata.name must be mysql, not mariadb, and the selector in mariadb-service must point to mariadb (the pod).
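As far as I can tell, the reason is that the WordPress image looks up its database host by name (mysql by default), so Kubernetes DNS has to resolve mysql to the Service's cluster IP, while the selector only has to match the pod's label. A quick way to check the lookup from inside the WordPress pod (assuming getent is available in the image):
# should print the cluster IP of the mysql Service
$ kubectl exec wpsite -- getent hosts mysql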
Here are the working files:
mariadb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
    - resources:
        limits:
          # 0.5 didn't work
          # the error message shows up in: kubectl describe pod mariadb
          cpu: 0.8
      image: mariadb:10.1
      name: mariadb
      env:
        - name: MYSQL_ROOT_PASSWORD
          # Change this password!
          value: example
      ports:
        - containerPort: 3306
          name: mariadb
      volumeMounts:
        # This name must match the volumes.name below.
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mariadb-persistent-storage
      gcePersistentDisk:
        # This disk must already exist.
        pdName: mariadb-disk
        fsType: ext4
mariadb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
    # The port that this service should serve on.
    - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mariadb
wpsite.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
    - image: <change this to your image name on gcr.io>
      name: wpsite
      env:
        - name: WORDPRESS_DB_PASSWORD
          # Change this - must match the MYSQL_ROOT_PASSWORD in mariadb.yaml.
          value: example
      ports:
        - containerPort: 80
          name: wpsite
      volumeMounts:
        # Name must match the volume name below.
        - name: wpsite-disk
          # Mount path within the container.
          mountPath: /var/www/html
  volumes:
    - name: wpsite-disk
      gcePersistentDisk:
        # This GCE persistent disk must already exist.
        pdName: wpsite-disk
        fsType: ext4
wpsite-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  type: LoadBalancer
  ports:
    # The port that this service should serve on.
    - port: 80
      targetPort: 80
      protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
With these settings I run (my yaml files are under gke/):
$ kubectl create -f gke/mariadb.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/mariadb-service.yaml
# Check - note: the service is named mysql, not mariadb!
$ kubectl get service mysql
$ kubectl create -f gke/wpsite.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/wpsite-service.yaml
# Check
$ kubectl describe service wpsite
Hope this helps someone...

Related

Kubernetes deployment database connection error

I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I'm encountering an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using the pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
        - name: pv-storage-glpi
          persistentVolumeClaim:
            claimName: pv-claim-glpi
      containers:
        - name: mariadb
          image: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "glpi"
            - name: MYSQL_DATABASE
              value: "glpi"
            - name: MYSQL_USER
              value: "glpi"
            - name: MYSQL_PASSWORD
              value: "glpi"
            - name: GLPI_SOURCE_URL
              value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - mountPath: /var/lib/mariadb/
              name: pv-storage-glpi
              subPath: mariadb
        - name: glpi
          image: driket54/glpi
          ports:
            - containerPort: 80
              name: http
            - containerPort: 8090
              name: https
          volumeMounts:
            - mountPath: /var/glpidata
              name: pv-storage-glpi
              subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: http
      name: http
    - protocol: "TCP"
      port: 8090
      targetPort: https
      name: https
    - protocol: "TCP"
      port: 3306
      targetPort: mariadb
      name: mariadb
  type: NodePort
---
The Docker image is deployed properly, but in my test phase, during the setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer, since I don't have the expected Kubernetes knowledge, but I can't add a comment yet :(
What you should alter first is your GLPi version.
Use this link - it's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you may use the CLI tools to set up the database:
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(not sure about the host in your environment, but you get the idea)
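One detail worth noting about --host: in the deployment above, mariadb and glpi are containers in the same pod, and containers in a pod share a network namespace, so the database should be reachable from the glpi container at 127.0.0.1:3306. Under that assumption the install call would be:
php scripts/cliinstall.php --host=127.0.0.1 --db=glpi --user=glpi --pass=glpi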

Passing Docker container's run parameters in Kubernetes

I have two containers (GitLab and PostgreSQL) running on RancherOS v1.0.3. I would like to make them part of a Kubernetes cluster.
[rancher@rancher-agent-1 ~]$ cat postgresql.sh
docker run --name gitlab-postgresql -d \
  --env 'POSTGRES_DB=gitlabhq_production' \
  --env 'POSTGRES_USER=gitlab' --env 'POSTGRES_PASSWORD=password' \
  --volume /srv/docker/gitlab/postgresql:/var/lib/postgresql \
  postgres:9.6-2
[rancher@rancher-agent-1 ~]$ cat gitlab.sh
docker run --name gitlab -d \
  --link gitlab-postgresql:postgresql \
  --publish 443:443 --publish 80:80 \
  --env 'GITLAB_PORT=80' --env 'GITLAB_SSH_PORT=10022' \
  --env 'GITLAB_SECRETS_DB_KEY_BASE=64-char-key-A' \
  --env 'GITLAB_SECRETS_SECRET_KEY_BASE=64-char-key-B' \
  --env 'GITLAB_SECRETS_OTP_KEY_BASE=64-char-key-C' \
  --volume /srv/docker/gitlab/gitlab:/home/git/data \
  sameersbn/gitlab:9.4.5
Queries:
1) I have some idea of how to use YAML files to provision pods, replication controllers, etc., but I am not sure how to pass the above docker run parameters to Kubernetes so that it applies them to the image(s) correctly.
2) I'm not sure whether the --link argument (used in gitlab.sh above) also needs to be passed to Kubernetes. Although I am currently deploying both containers on a single host, I will later be creating a cluster of each (PostgreSQL and GitLab), so I just wanted to confirm whether inter-host communication will automatically be taken care of by Kubernetes. If not, what options can be explored?
You should first try to represent your run statements in a docker-compose.yml file, which is quite easy; it would turn into something like the below:
version: '3'
services:
  postgresql:
    image: postgres:9.6-2
    environment:
      - "POSTGRES_DB=gitlabhq_production"
      - "POSTGRES_USER=gitlab"
      - "POSTGRES_PASSWORD=password"
    volumes:
      - /srv/docker/gitlab/postgresql:/var/lib/postgresql
  gitlab:
    image: sameersbn/gitlab:9.4.5
    ports:
      - "443:443"
      - "80:80"
    environment:
      - "GITLAB_PORT=80"
      - "GITLAB_SSH_PORT=10022"
      - "GITLAB_SECRETS_DB_KEY_BASE=64-char-key-A"
      - "GITLAB_SECRETS_SECRET_KEY_BASE=64-char-key-B"
      - "GITLAB_SECRETS_OTP_KEY_BASE=64-char-key-C"
    volumes:
      - /srv/docker/gitlab/gitlab:/home/git/data
Now there is an amazing tool named kompose, from kompose.io, which does the conversion part for you. If you convert the above, you will get the related files:
$ kompose convert -f docker-compose.yml
WARN Volume mount on the host "/srv/docker/gitlab/gitlab" isn't supported - ignoring path on the host
WARN Volume mount on the host "/srv/docker/gitlab/postgresql" isn't supported - ignoring path on the host
INFO Kubernetes file "gitlab-service.yaml" created
INFO Kubernetes file "postgresql-service.yaml" created
INFO Kubernetes file "gitlab-deployment.yaml" created
INFO Kubernetes file "gitlab-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "postgresql-deployment.yaml" created
INFO Kubernetes file "postgresql-claim0-persistentvolumeclaim.yaml" created
Now you have to fix the volume mount part as per Kubernetes (see the sketch after the generated files below). This completes 80% of the work; you just need to figure out the remaining 20%.
Here is a cat of all the generated files, so that you can see what kind of files are generated:
==> gitlab-claim0-persistentvolumeclaim.yaml <==
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab-claim0
  name: gitlab-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
==> gitlab-deployment.yaml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab
  name: gitlab
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: gitlab
    spec:
      containers:
        - env:
            - name: GITLAB_PORT
              value: "80"
            - name: GITLAB_SECRETS_DB_KEY_BASE
              value: 64-char-key-A
            - name: GITLAB_SECRETS_OTP_KEY_BASE
              value: 64-char-key-C
            - name: GITLAB_SECRETS_SECRET_KEY_BASE
              value: 64-char-key-B
            - name: GITLAB_SSH_PORT
              value: "10022"
          image: sameersbn/gitlab:9.4.5
          name: gitlab
          ports:
            - containerPort: 443
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /home/git/data
              name: gitlab-claim0
      restartPolicy: Always
      volumes:
        - name: gitlab-claim0
          persistentVolumeClaim:
            claimName: gitlab-claim0
status: {}
==> gitlab-service.yaml <==
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab
  name: gitlab
spec:
  ports:
    - name: "443"
      port: 443
      targetPort: 443
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    io.kompose.service: gitlab
status:
  loadBalancer: {}
==> postgresql-claim0-persistentvolumeclaim.yaml <==
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql-claim0
  name: postgresql-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
==> postgresql-deployment.yaml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql
  name: postgresql
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: postgresql
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: gitlabhq_production
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: gitlab
          image: postgres:9.6-2
          name: postgresql
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgresql-claim0
      restartPolicy: Always
      volumes:
        - name: postgresql-claim0
          persistentVolumeClaim:
            claimName: postgresql-claim0
status: {}
==> postgresql-service.yaml <==
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql
  name: postgresql
spec:
  clusterIP: None
  ports:
    - name: headless
      port: 55555
      targetPort: 0
  selector:
    io.kompose.service: postgresql
status:
  loadBalancer: {}
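To fix the volume mount part flagged above, one option that keeps the original host paths on a single-node setup is to back each generated claim with a hostPath PersistentVolume. A sketch under that single-node assumption (the capacity only has to cover the claim's 100Mi request):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-pv0
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  # Host path taken from the original docker run command;
  # only sensible while everything runs on one node.
  hostPath:
    path: /srv/docker/gitlab/gitlab
With a matching PersistentVolume for /srv/docker/gitlab/postgresql in place, kubectl create -f . applies everything. Regarding the --link question: no equivalent is needed in Kubernetes - the generated Services give each deployment a stable DNS name, so the gitlab container can reach the database at postgresql:5432, and that keeps working when the pods land on different hosts.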

Kubernetes: port-forwarding automatically for services

I have a Rails project that uses a Postgres database. I want to build a database server using Kubernetes, and the Rails server will connect to this database.
For example, here is my postgres.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: hades_dev
            - name: POSTGRES_PASSWORD
              value: "1234"
          name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
          resources: {}
          stdin: true
          tty: true
          volumeMounts:
            - mountPath: /var/lib/postgresql/data/
              name: database-hades-volume
      restartPolicy: Always
      volumes:
        - name: database-hades-volume
          persistentVolumeClaim:
            claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-hades-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
I run this with the following command: kubectl run -f postgres.yml.
But when I try to run the rails server, I always get the following exception:
PG::Error
invalid encoding name: utf8
When I forward the port, the rails server successfully connects to the database server:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
postgres-3681891707-8ch4l   1/1       Running   0          1m
Then I run the following command:
kubectl port-forward postgres-3681891707-8ch4l 5432:5432
I don't think this solution is good. How can I define my postgres.yml so that I don't need to port-forward manually as above?
Thanks
You can try exposing your service as a NodePort and then accessing the service on that port.
Check here: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
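A minimal sketch of that change against the Service from the question - the nodePort value 30432 is an arbitrary pick from the default 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
      # Fixed port exposed on every node of the cluster.
      nodePort: 30432
The Rails app can then reach Postgres at <any-node-ip>:30432. From inside the cluster, the service DNS name postgres:5432 should work without any port-forwarding at all.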

DNS not working with Kubernetes PetSet

Ok, following the examples and documentation on the Kubernetes website, along with extensive research on Google, I still cannot get DNS resolution working between the containers within my Pod.
I have a Service and a PetSet with 2 containers defined. When I deploy the PetSet and Service, they start and run successfully, but if I attempt to ping the host of one of my containers from the other, by hostname or by the full domain name, I get destination unreachable. I can ping by IP address, though.
Here is my Kubernetes configuration file:
apiVersion: v1
kind: Service
metadata:
  name: ml-service
  labels:
    app: marklogic
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  #restartPolicy: OnFailure
  clusterIP: None
  selector:
    app: marklogic
  ports:
    - protocol: TCP
      port: 7997
      #nodePort: 31997
      name: ml7997
    - protocol: TCP
      port: 8000
      #nodePort: 32000
      name: ml8000
    # ... More ports defined
  #type: NodePort
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: 'marklogic'
          image: "{local docker registry ip}:5000/dcgs-sof/ml8-docker-final:v1"
          imagePullPolicy: Always
          command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
          ports:
            - containerPort: 7997
              name: ml7997
            - containerPort: 8000
              name: ml8000
            - containerPort: 8001
              name: ml8001
            - containerPort: 8002
              name: ml8002
            - containerPort: 8040
              name: ml8040
            - containerPort: 8041
              name: ml8041
            - containerPort: 8042
              name: ml8042
            - containerPort: 8050
              name: ml8050
            - containerPort: 8051
              name: ml8051
            - containerPort: 8060
              name: ml8060
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          lifecycle:
            preStop:
              exec:
                command: ["/etc/init.d/MarkLogic stop"]
          volumeMounts:
            - name: ml-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
I commented out the type: NodePort definition, as I thought that might be the culprit, but still no success.
Additionally, if I run docker@minikube:/$ docker exec b4d21c4bc065 /bin/bash -c 'nslookup marklogic-1.marklogic.default.svc.cluster.local', it cannot resolve the name.
What am I missing?
You are resolving the wrong domain name.
See http://kubernetes.io/docs/user-guide/petset/#network-identity
You should try to resolve:
marklogic-0.ml-service.default.svc.cluster.local
If everything is within the default namespace, the DNS name is:
<pod_name>.<svc_name>.default.svc.cluster.local
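For example, repeating the lookup from the question with the corrected name (same container ID as above; any container in the cluster with nslookup available would do):
docker exec b4d21c4bc065 /bin/bash -c 'nslookup marklogic-0.ml-service.default.svc.cluster.local'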

Kubernetes - ReplicationController and Persistent Disks

I have run into a Kubernetes-related issue. I just moved from a Pod configuration to a ReplicationController for a Ruby on Rails app, and I'm using persistent disks for the Rails pod. When I try to apply the ReplicationController, it gives the following error:
The ReplicationController "cartelhouse-ror" is invalid.
spec.template.spec.volumes[0].gcePersistentDisk.readOnly: Invalid
value: false: must be true for replicated pods > 1; GCE PD can only be
mounted on multiple machines if it is read-only
Does this mean there is no way to use persistent disks (R/W) when using ReplicationControllers, or is there another way?
If not, how can I scale and/or apply rolling updates to the Pod configuration?
Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: appname
  labels:
    name: appname
spec:
  containers:
    - image: gcr.io/proj/appname:tag
      name: appname
      env:
        - name: POSTGRES_PASSWORD
          # Change this - must match postgres.yaml password.
          value: pazzzzwd
        - name: POSTGRES_USER
          value: rails
      ports:
        - containerPort: 80
          name: appname
      volumeMounts:
        # Name must match the volume name below.
        - name: appname-disk-per-sto
          # Mount path within the container.
          mountPath: /var/www/html
  volumes:
    - name: appname-disk-per-sto
      gcePersistentDisk:
        # This GCE persistent disk must already exist.
        pdName: appname-disk-per-sto
        fsType: ext4
ReplicationController configuration:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: appname
  name: appname
spec:
  replicas: 2
  selector:
    name: appname
  template:
    metadata:
      labels:
        name: appname
    spec:
      containers:
        - image: gcr.io/proj/app:tag
          name: appname
          env:
            - name: POSTGRES_PASSWORD
              # Change this - must match postgres.yaml password.
              value: pazzzzwd
            - name: POSTGRES_USER
              value: rails
          ports:
            - containerPort: 80
              name: appname
          volumeMounts:
            # Name must match the volume name below.
            - name: appname-disk-per-sto
              # Mount path within the container.
              mountPath: /var/www/html
      volumes:
        - name: appname-disk-per-sto
          gcePersistentDisk:
            # This GCE persistent disk must already exist.
            pdName: appname-disk-per-sto
            fsType: ext4
You can't achieve this with current Kubernetes - see Independent storage for replicated pods. This will be covered by the implementation of PetSets due in v1.3.
The problem is not with Kubernetes, but with the shared block device and filesystem, which cannot be mounted on more than one host at the same time.
https://unix.stackexchange.com/questions/68790/can-the-same-ext4-disk-be-mounted-from-two-hosts-one-readonly
You can try to use Claims: http://kubernetes.io/docs/user-guide/persistent-volumes/
Or another filesystem, e.g. NFS: http://kubernetes.io/docs/user-guide/volumes/
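If NFS fits, a sketch of what the volume swap could look like in the ReplicationController template - nfs-server.example.com and /exports/appname are placeholders for an NFS export you would have to provide:
      volumes:
        - name: appname-disk-per-sto
          # Unlike a GCE PD, an NFS volume can be mounted read-write
          # by replicas on multiple nodes at once.
          nfs:
            server: nfs-server.example.com
            path: /exports/appname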
