I have a simple Go application which talks to a Postgres database. The Postgres container is working as expected; however, my Go application does not start.
In my config.go the environment variables are specified as:
type Config struct {
    Port        uint   `env:"PORT" envDefault:"8000"`
    PostgresURL string `env:"POSTGRES_URL" envDefault:"postgres://user:pass#127.0.0.1/simple-service"`
}
My Go application's Service and Deployment YAML looks like this:
apiVersion: v1
kind: Service
metadata:
  name: simple-service-webapp-service
  labels:
    app: simple-service-webapp
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: simple-service-webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-service-webapp-v1
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-service-webapp
      version: v1
  template:
    metadata:
      labels:
        app: simple-service-webapp
        version: v1
    spec:
      containers:
      - name: simple-service-webapp
        image: docker.io/225517/simple-service-webapp:v1
        resources:
          requests:
            cpu: 100m
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: POSTGRES_HOST
          value: postgresdb
        - name: POSTGRES_PORT
          value: "5432"
        - name: POSTGRES_DB
          value: simple-service
        - name: POSTGRES_USER
          value: user
        - name: POSTGRES_PASSWORD
          value: pass
        readinessProbe:
          httpGet:
            path: /live
            port: 8080
---
Below is the Service and Deployment YAML file for the Postgres service:
apiVersion: v1
kind: Service
metadata:
  name: postgresdb
  labels:
    app: postgresdb
spec:
  ports:
  - port: 5432
    name: tcp
  selector:
    app: postgresdb
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresdb-v1
  labels:
    app: postgresdb
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresdb
      version: v1
  template:
    metadata:
      labels:
        app: postgresdb
        version: v1
    spec:
      containers:
      - name: postgresdb
        image: postgres
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: simple-service
        - name: POSTGRES_USER
          value: user
        - name: POSTGRES_PASSWORD
          value: pass
        readinessProbe:
          exec:
            command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
          initialDelaySeconds: 15
          timeoutSeconds: 2
        livenessProbe:
          exec:
            command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
          initialDelaySeconds: 45
          timeoutSeconds: 2
---
Do I need to change anything in config.go? Could you please help? Thanks!
This is a community wiki answer based on the info from the comments; feel free to expand on it. I'm placing it here for better visibility for the community.
The question was solved by setting the appropriate Postgres DB port (5432) and rewriting the config so it looks for the correct variables set on the Deployment.
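For reference, a minimal sketch of what the reworked config.go could look like, assuming the same struct-tag env library as in the question and the variable names set on the webapp Deployment above (the PostgresURL helper is illustrative, not part of the original code):

package config

import "fmt"

// Config reads the individual POSTGRES_* variables that the Deployment
// actually sets, instead of a single POSTGRES_URL that is never provided.
type Config struct {
    Port             uint   `env:"PORT" envDefault:"8080"` // aligned with the containerPort and probe above
    PostgresHost     string `env:"POSTGRES_HOST" envDefault:"127.0.0.1"`
    PostgresPort     uint   `env:"POSTGRES_PORT" envDefault:"5432"`
    PostgresDB       string `env:"POSTGRES_DB" envDefault:"simple-service"`
    PostgresUser     string `env:"POSTGRES_USER" envDefault:"user"`
    PostgresPassword string `env:"POSTGRES_PASSWORD" envDefault:"pass"`
}

// PostgresURL assembles the connection string from the individual parts.
func (c Config) PostgresURL() string {
    return fmt.Sprintf("postgres://%s:%s@%s:%d/%s",
        c.PostgresUser, c.PostgresPassword, c.PostgresHost, c.PostgresPort, c.PostgresDB)
}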
Related
I want to run embedded Redis with a Spring Boot application in a Kubernetes cluster, but it is not creating any container with embedded Redis.
Getting error: CrashLoopBackOff.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-registration
spec:
  selector:
    matchLabels:
      app: user-registration
  replicas: 1
  template:
    metadata:
      labels:
        app: user-registration
    spec:
      containers:
      - name: user-registration
        image: user-registration:1.0
        ports:
        - containerPort: 8080
        env:
        - name: DB_HOST
          value: mysql
        - name: DB_NAME
          value: user_registration
        - name: DB_USERNAME
          value: root
        - name: DB_PASSWORD
          value: root
      - image: mysql:5.7
        args:
        - "--ignore-db-dir=lost+found"
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_DATABASE
          value: user_registration
        ports:
        - containerPort: 3306
          name: mysql
      - name: redis
        image: "docker.io/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: user-registration
spec:
  selector:
    app: user-registration
  ports:
  - protocol: "TCP"
    port: 8080
    targetPort: 8080
    nodePort: 30163
  type: NodePort
It works fine when I comment out the Redis config, but it does not work when Redis is enabled.
I'm dealing with Kubernetes. I can't connect the server to the database. Tell me what I'm doing wrong and what else I need to do.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdb
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: testdb
      app: testapp
  template:
    metadata:
      labels:
        app: testdb
    spec:
      containers:
      - name: testdb
        image: pryby/testdbcon
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: root
        - name: POSTGRES_DB
          value: postgres
        ports:
        - containerPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: pryby/testdocker
        env:
        - name: LISTEN
          value: 0.0.0.0:8080
        - name: DB_HOST
          value: testdb
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: postgres
        - name: DB_DBNAME
          value: postgres
        - name: DB_PASSWORD
          value: root
        - name: DB_SSL
          value: disable
        ports:
        - containerPort: 8080
What do I need to specify, and where, in order to get access to the database?
If possible, give an example of working with a database and an API. I've looked everywhere I could and didn't find anything like that; maybe I didn't search well.
Firstly, in your testdb Deployment, the selector's matchLabels and the template's labels need to be the same.
Secondly, you need to create a Service of type ClusterIP.
Here is the example:
apiVersion: v1
kind: Service
metadata:
  name: testdb
  labels:
    app: testdb
spec:
  ports:
  - port: 5432
    name: web
  clusterIP: None
  selector:
    app: testdb
The final YAML should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdb
spec:
  replicas: 1
  selector:
    matchLabels:
      # tier: testdb
      app: testdb # <- *****changed here******
  template:
    metadata:
      labels:
        app: testdb
    spec:
      containers:
      - name: testdb
        image: pryby/testdbcon
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: root
        - name: POSTGRES_DB
          value: postgres
        ports:
        - containerPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: pryby/testdocker
        env:
        - name: LISTEN
          value: 0.0.0.0:8080
        - name: DB_HOST
          value: testdb
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: postgres
        - name: DB_DBNAME
          value: postgres
        - name: DB_PASSWORD
          value: root
        - name: DB_SSL
          value: disable
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: testdb
  labels:
    app: testdb
spec:
  ports:
  - port: 5432
    name: web
  clusterIP: None
  selector:
    app: testdb
Expose your testdb Deployment with a Service of type ClusterIP on port 5432.
Either craft the YAML yourself, or run kubectl expose deployment testdb --port=5432 --target-port=5432 --name=testdb
You can also use the following command to generate the YAML definition file:
kubectl expose deployment testdb --port=5432 --target-port=5432 --name=testdb --dry-run=client -o yaml > service-definition.yaml
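For reference, the generated definition should look roughly like this (a sketch; kubectl expose copies the Deployment's selector, so the exact labels may differ in your cluster):

apiVersion: v1
kind: Service
metadata:
  name: testdb
spec:
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: testdb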
My Tomcat and MySQL containers are not connecting, so how can I link them so that my WAR file can run successfully?
I built my Tomcat image using this Dockerfile:
FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        imagePullPolicy: "IfNotPresent"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_DATABASE
          value: data-core
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-initdb-pv-claim
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/stackoverflow/tmp/data" # this is the path where my SQL init script is placed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
tomcat.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: tomcat
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  selector:
    matchLabels:
      app: tomcat
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: tomcat
        tier: frontend
    spec:
      containers:
      - image: suji165475/vignesh:tomcatserver
        name: tomcat
        env:
        - name: DB_PORT_3306_TCP_ADDR
          value: mysql # service name of mysql
        - name: DB_ENV_MYSQL_DATABASE
          value: data-core
        - name: DB_ENV_MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: tomcat-persistent-storage
          mountPath: /var/data
      volumes:
      - name: tomcat-persistent-storage
        persistentVolumeClaim:
          claimName: tomcat-pv-claim
tomcatpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: tomcat-pv
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: "/app"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
Currently I'm using type: NodePort for the Tomcat Service. Do I have to use NodePort for MySQL also? If so, should I give it the same nodePort or a different one?
Note: I am running all of this on a server using a PuTTY terminal.
When Kubernetes starts a Service, it adds environment variables for its host, port, etc. to pods. Try using the environment variable MYSQL_SERVICE_HOST.
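A quick way to check which of these variables a pod actually received (a sketch; assumes the image ships printenv and grep, and the pod name is a placeholder):

kubectl exec <tomcat-pod-name> -- printenv | grep MYSQL_SERVICE

Note that these variables are only injected for Services that have a cluster IP and that existed before the pod started; a headless Service (clusterIP: None, as in the mysql.yaml above) gets no such variables, so the DNS name mysql may be the more reliable option here.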
I have three different images related to my application which work fine in docker-compose but have issues running on a Kubernetes cluster in GCP.
Below is the deployment file.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-database
    tier: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-database
        tier: database
    spec:
      hostname: mysql
      containers:
      - image: mysql/mysql-server:5.7
        name: mysql
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        ports:
        - containerPort: 5432
          name: db
---
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: dgservice
    tier: dgservice
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dgservice
        tier: dgservice
    spec:
      hostname: dgservice
      containers:
      - image: gcr.io/sample/sample-image:check_1
        name: dgservice
        ports:
        - containerPort: 8080
          name: dgservice
---
apiVersion: v1
kind: Service
metadata:
  name: dg-ui
  labels:
    name: dg-ui
spec:
  type: NodePort
  ports:
  - nodePort: 30156
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: dg-ui
    tier: dg
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dg-ui
  labels:
    app: dg-ui
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dg-ui
        tier: dg
    spec:
      hostname: dg-ui
      containers:
      - image: gcr.io/sample/sample:latest
        name: dg-ui
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        - name: "DG_SERVICE_HOST"
          value: "dgservice"
        ports:
        - containerPort: 8000
          name: dg-ui
The image is being pulled successfully from GCR as well.
The connection between mysql and the UI service also works fine, and my data gets migrated without any issues. But the connection is not established between the service and the UI.
Why is the UI not able to access the service in my application?
Since your Deployment has the following labels, the Service needs the same labels in its selector in order to create an Endpoints object.
Endpoints are the API object behind a Service: they are where a Service routes connections to when a connection is made to the Service's ClusterIP.
These are the labels of the Deployment:
labels:
  app: dgservice
  tier: dgservice
New Service definition with the correct labels:
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice
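Once the selector matches the pod labels, the Service's Endpoints object should list the pod's IP; you can verify this with:

kubectl get endpoints dgservice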
I am assuming that by "service" you are referring to your "dgservice". With the YAML presented above, I believe you also need to specify DG_SERVICE_PORT (port 8080) to correctly access "dgservice".
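For example, next to the existing DG_SERVICE_HOST entry in the dg-ui container's env block (a sketch; the variable name comes from this answer, the value from dgservice's port, and it assumes the UI actually reads it):

        - name: "DG_SERVICE_HOST"
          value: "dgservice"
        - name: "DG_SERVICE_PORT"
          value: "8080"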
As mentioned by Suresh in the comments, you should expose internal services using the ClusterIP type. NodePort is a superset of ClusterIP: it exposes the service internally to your cluster at service-name:port and externally at node-ip:nodePort, targeting your Deployment/Pod at targetPort.
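As a sketch, exposing dgservice internally with ClusterIP instead would look like this (same name, port, and selector as the corrected definition above):

apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice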
I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I'm encountering an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells the Deployment to run 1 pod matching the template
  template: # create pods using the pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata, as a
      # unique name is generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
      - name: pv-storage-glpi
        persistentVolumeClaim:
          claimName: pv-claim-glpi
      containers:
      - name: mariadb
        image: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "glpi"
        - name: MYSQL_DATABASE
          value: "glpi"
        - name: MYSQL_USER
          value: "glpi"
        - name: MYSQL_PASSWORD
          value: "glpi"
        - name: GLPI_SOURCE_URL
          value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - mountPath: /var/lib/mariadb/
          name: pv-storage-glpi
          subPath: mariadb
      - name: glpi
        image: driket54/glpi
        ports:
        - containerPort: 80
          name: http
        - containerPort: 8090
          name: https
        volumeMounts:
        - mountPath: /var/glpidata
          name: pv-storage-glpi
          subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: http
    name: http
  - protocol: "TCP"
    port: 8090
    targetPort: https
    name: https
  - protocol: "TCP"
    port: 3306
    targetPort: mariadb
    name: mariadb
  type: NodePort
---
The Docker image is properly deployed, but in my test phase, during the setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer, since I don't have the Kubernetes knowledge expected, but I can't add a comment yet :(
What you should alter first is your GLPi version.
Use this link. It's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you may use the CLI tools to set up the database.
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(Not sure about the host value in your environment, but you get the idea.)