I have a Rails project that uses a Postgres database. I want to run the database server on Kubernetes, and the Rails server will connect to this database.
For example, here is my postgres.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          value: hades_dev
        - name: POSTGRES_PASSWORD
          value: "1234"
        name: postgres
        image: postgres:latest
        ports:
        - containerPort: 5432
        resources: {}
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /var/lib/postgresql/data/
          name: database-hades-volume
      restartPolicy: Always
      volumes:
      - name: database-hades-volume
        persistentVolumeClaim:
          claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-hades-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
I deploy this with the following command: kubectl create -f postgres.yml.
But when I try to run the Rails server, I always get the following exception:
PG::Error
invalid encoding name: utf8
If I try port-forwarding, the Rails server successfully connects to the database server:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-3681891707-8ch4l 1/1 Running 0 1m
Then I run the following command:
kubectl port-forward postgres-3681891707-8ch4l 5432:5432
I don't think this solution is good. What should I define in my postgres.yml so that I don't need to port-forward manually as above?
Thanks
You can try exposing your Service as a NodePort and then accessing it on that port.
Check here https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
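For example, a minimal sketch of the postgres Service from the question exposed as a NodePort (the nodePort value here is just an example; anything in the default 30000-32767 range works):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
    nodePort: 30432   # example port in the default NodePort range
The Rails database.yml could then point at <any-node-IP>:30432 instead of relying on a manual kubectl port-forward.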
I used the following YAML files to deploy Couchbase in Kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
  - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: "WORKER"
        - name: COUCHBASE_MASTER
          value: "couchbase-master-service"
        - name: AUTO_REBALANCE
          value: "false"
        ports:
        - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: couchbase-master-service
          servicePort: 8091
The pods started running, and nothing seemed to have an issue at first glance. But when I tried to hit the host URL, it gave me a bad gateway. When I looked into the logs of the master's pod, it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but that also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I found that the master image is using this entrypoint script.
I ran this container image, and it looks like the curl is failing because the 15s sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is set this sleep to a higher value, but a sleep is usually not the best option. (Actually, this whole image is full of bad practices.)
A better approach would be to replace the sleep with the following lines, which wait until port 8091 is open:
while ! nc -z localhost 8091; do
  sleep 1
done
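A more Kubernetes-native alternative (just a sketch; it does not change the entrypoint itself, it only keeps the Service from routing traffic to the pod before the port is open) is to add a readinessProbe to the container:
containers:
- name: couchbase-master
  image: arungupta/couchbase:k8s
  ports:
  - containerPort: 8091
  readinessProbe:
    tcpSocket:
      port: 8091          # pod is only marked Ready once 8091 accepts connections
    initialDelaySeconds: 5
    periodSeconds: 5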
I have installed nfs-provisioner in my Rancher cluster and made a persistent volume for my MongoDB. When I restart the server or upgrade the MongoDB container, all my data is lost. How can I fix this?
My MongoDB configuration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-db
spec:
  selector:
    matchLabels:
      app: mongo-db
  serviceName: mongo-db
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo-db
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data # reference the volumeClaimTemplate below
          mountPath: /data/db
  # this is a key difference with statefulsets
  # A unique volume will be attached to each pod
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      # If no storageClassName is provided the default storage class will be used
      # storageClassName: "standard"
      resources:
        requests:
          storage: 2Gi
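For reference, the serviceName: mongo-db field above refers to a headless Service that is not shown in the question; an assumed sketch of it would look roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: mongo-db
spec:
  clusterIP: None   # headless Service: gives each StatefulSet pod a stable DNS name
  selector:
    app: mongo-db
  ports:
  - port: 27017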
I'm deploying a private registry within my K8s cluster with the following YAML file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: registry
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/data/registry/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    app: registry
spec:
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 30400
    name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  labels:
    app: registry
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
      - image: registry:2
        name: registry
        volumeMounts:
        - name: docker
          mountPath: /var/run/docker.sock
        - name: registry-persistent-storage
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
      - name: registryui
        image: hyper/docker-registry-web:latest
        ports:
        - containerPort: 8080
        env:
        - name: REGISTRY_URL
          value: http://localhost:5000/v2
        - name: REGISTRY_NAME
          value: cluster-registry
      volumes:
      - name: docker
        hostPath:
          path: /var/run/docker.sock
      - name: registry-persistent-storage
        persistentVolumeClaim:
          claimName: registry-claim
I just noticed that there is no option to delete Docker images after pushing them to the local registry. I found how it is supposed to work here: https://github.com/byrnedo/docker-reg-tool. I can list Docker images inside the local repository and see all tags via the command line, but I am unable to delete them. After reading the Docker registry documentation, I found that the registry container needs to run with the following environment variable: REGISTRY_STORAGE_DELETE_ENABLED=true.
I tried to add this variable to my YAML file:
.........
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
      - image: registry:2
        name: registry
        volumeMounts:
        - name: docker
          mountPath: /var/run/docker.sock
        - name: registry-persistent-storage
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
        env:
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: true
But applying this YAML file with kubectl apply -f manifests/registry.yaml returns the following error message:
Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry:2","name":"registry","port|...
Then I found another suggestion:
The registry accepts configuration settings either via a file or via environment variables. So the environment variable REGISTRY_STORAGE_DELETE_ENABLED=true is equivalent to this in your config file:
storage:
  delete:
    enabled: true
I've tried this option in my YAML file as well, but still no luck...
Any suggestions on how to enable Docker image deletion in my YAML file are highly appreciated.
The value true in YAML is parsed into a boolean data type, and the syntax calls for a string. You'll need to explicitly quote it:
value: "true"
I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I am encountering an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using the pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
      - name: pv-storage-glpi
        persistentVolumeClaim:
          claimName: pv-claim-glpi
      containers:
      - name: mariadb
        image: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "glpi"
        - name: MYSQL_DATABASE
          value: "glpi"
        - name: MYSQL_USER
          value: "glpi"
        - name: MYSQL_PASSWORD
          value: "glpi"
        - name: GLPI_SOURCE_URL
          value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - mountPath: /var/lib/mariadb/
          name: pv-storage-glpi
          subPath: mariadb
      - name: glpi
        image: driket54/glpi
        ports:
        - containerPort: 80
          name: http
        - containerPort: 8090
          name: https
        volumeMounts:
        - mountPath: /var/glpidata
          name: pv-storage-glpi
          subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: http
    name: http
  - protocol: "TCP"
    port: 8090
    targetPort: https
    name: https
  - protocol: "TCP"
    port: 3306
    targetPort: mariadb
    name: mariadb
  type: NodePort
---
The Docker image is properly deployed, but in my test phase, during the setup of the app, I get an error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help
Not really an answer, since I don't have the expected Kubernetes knowledge, but I can't add a comment yet :(
What you should alter first is your GLPi version.
Use this link; it's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you may use the CLI tools to set up the database.
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file (I'm not sure about the host value in your environment, but you get the idea):
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
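One assumption worth noting: since both containers in the Deployment above run in the same pod (and therefore share a network namespace), the database should also be reachable on 127.0.0.1, e.g.:
php scripts/cliinstall.php --host=127.0.0.1 --db=glpi --user=glpi --pass=glpi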
I've been fiddling around with this example: https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
My modifications:
- I'm using my own WordPress image [x] Works
  (the service starts; it needed more CPU, 0.8 instead of 0.5, but now it works)
- I want to use MariaDB instead of MySQL [ ] Fails!
I can't figure out how the two pods link together! (~5h in and still failing)
Here are my .yaml files:
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
  - image: <my image on gcr.io>
    name: wpsite
    env:
    - name: WORDPRESS_DB_PASSWORD
      # Change this - must match mysql.yaml password.
      value: example
    ports:
    - containerPort: 80
      name: wpsite
    volumeMounts:
    # Name must match the volume name below.
    - name: wpsite-disk
      # Mount path within the container.
      mountPath: /var/www/html
  volumes:
  - name: wpsite-disk
    gcePersistentDisk:
      # This GCE persistent disk must already exist.
      pdName: wpsite-disk
      fsType: ext4
service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: wpsite
  name: wpsite
spec:
  type: LoadBalancer
  ports:
  # The port that this service should serve on.
  - port: 80
    targetPort: 80
    protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
mariadb:
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
  - resources:
      limits:
        # 0.5 did not work
        # error message via: kubectl describe pod mariadb
        cpu: 0.8
    image: mariadb:10.1
    name: mariadb
    env:
    - name: MYSQL_ROOT_PASSWORD
      # Change this password!
      value: example
    ports:
    - containerPort: 3306
      name: mariadb
    volumeMounts:
    # This name must match the volumes.name below.
    - name: mariadb-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mariadb-persistent-storage
    gcePersistentDisk:
      # This disk must already exist.
      pdName: mariadb-disk
      fsType: ext4
maria-db-service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
  # The port that this service should serve on.
  - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mysql
kubectl logs wpsite shows error messages like this: Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 10
OK - I found it out!
It's the naming in mariadb-service.yaml:
metadata.name must be mysql and not mariadb, and the selector in mariadb-service must point to mariadb (the pod).
Here are the working files:
mariadb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
  - resources:
      limits:
        # 0.5 did not work
        # error message via: kubectl describe pod mariadb
        cpu: 0.8
    image: mariadb:10.1
    name: mariadb
    env:
    - name: MYSQL_ROOT_PASSWORD
      # Change this password!
      value: example
    ports:
    - containerPort: 3306
      name: mariadb
    volumeMounts:
    # This name must match the volumes.name below.
    - name: mariadb-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mariadb-persistent-storage
    gcePersistentDisk:
      # This disk must already exist.
      pdName: mariadb-disk
      fsType: ext4
mariadb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
  # The port that this service should serve on.
  - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mariadb
wpsite.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
  - image: <change this to your imagename on gcr.io>
    name: wpsite
    env:
    - name: WORDPRESS_DB_PASSWORD
      # Change this - must match mysql.yaml password.
      value: example
    ports:
    - containerPort: 80
      name: wpsite
    volumeMounts:
    # Name must match the volume name below.
    - name: wpsite-disk
      # Mount path within the container.
      mountPath: /var/www/html
  volumes:
  - name: wpsite-disk
    gcePersistentDisk:
      # This GCE persistent disk must already exist.
      pdName: wpsite-disk
      fsType: ext4
wpsite-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  type: LoadBalancer
  ports:
  # The port that this service should serve on.
  - port: 80
    targetPort: 80
    protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
With these settings I run the following (my YAML files are under gke/):
$ kubectl create -f gke/mariadb.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/mariadb-service.yaml
# Check
$ kubectl get service mysql   # !!! the MariaDB service is named mysql
$ kubectl create -f gke/wpsite.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/wpsite-service.yaml
# Check
$ kubectl describe service wpsite
Hope this helps someone...