I want to run embedded Redis with a Spring Boot application in a Kubernetes cluster, but it is not creating any container with embedded Redis.
I am getting the error: CrashLoopBackOff.
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-registration
spec:
selector:
matchLabels:
app: user-registration
replicas: 1
template:
metadata:
labels:
app: user-registration
spec:
containers:
- name: user-registration
image: user-registration:1.0
ports:
- containerPort: 8080
env:
- name: DB_HOST
value: mysql
- name: DB_NAME
value: user_registration
- name: DB_USERNAME
value: root
- name: DB_PASSWORD
value: root
- image: mysql:5.7
args:
- "--ignore-db-dir=lost+found"
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: root
- name: MYSQL_DATABASE
value: user_registration
ports:
- containerPort: 3306
name: mysql
- name: redis
image: "docker.io/redis:6.0.5"
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: user-registration
spec:
selector:
app: user-registration
ports:
- protocol: "TCP"
port: 8080
targetPort: 8080
nodePort: 30163
type: NodePort
It works fine when I comment out the Redis config, but not when it is enabled.
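If the goal is to have the application use the Redis sidecar defined above rather than an embedded Redis, a minimal sketch of the Spring Boot configuration (assuming Spring Boot 2.x spring.redis.* property names; adjust for your setup) would point at localhost, since containers in the same pod share a network namespace:

# application.yml (sketch)
spring:
  redis:
    host: localhost   # the redis sidecar container in the same pod
    port: 6379

Note that if an embedded Redis is still started alongside the sidecar, both would try to bind port 6379 in the shared pod network namespace, which is one possible cause of the CrashLoopBackOff.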
I'm working with Kubernetes and I cannot get the server to connect to the database. Can you tell me what I'm doing wrong and what else I need to do?
apiVersion: apps/v1
kind: Deployment
metadata:
name: testdb
spec:
replicas: 1
selector:
matchLabels:
tier: testdb
app: testapp
template:
metadata:
labels:
app: testdb
spec:
containers:
- name: testdb
image: pryby/testdbcon
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: root
- name: POSTGRES_DB
value: postgres
ports:
- containerPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: server
spec:
replicas: 1
selector:
matchLabels:
app: server
template:
metadata:
labels:
app: server
spec:
containers:
- name: server
image: pryby/testdocker
env:
- name: LISTEN
value: 0.0.0.0:8080
- name: DB_HOST
value: testdb
- name: DB_PORT
value: "5432"
- name: DB_USER
value: postgres
- name: DB_DBNAME
value: postgres
- name: DB_PASSWORD
value: root
- name: DB_SSL
value: disable
ports:
- containerPort: 8080
What do I need to specify, and where, in order to get access to the database?
If possible, please give a working example of connecting an API to a database; I've searched everywhere I could think of and haven't found anything like that. Maybe I didn't search well.
Firstly, in your testdb Deployment the selector's matchLabels and the template's labels need to be the same.
Secondly, you need to create a Service of type ClusterIP.
Here is the example:
apiVersion: v1
kind: Service
metadata:
name: testdb
labels:
app: testdb
spec:
ports:
- port: 5432
name: web
clusterIP: None
selector:
app: testdb
The final YAML should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: testdb
spec:
replicas: 1
selector:
matchLabels:
# tier: testdb
app: testdb # <- *****changed here******
template:
metadata:
labels:
app: testdb
spec:
containers:
- name: testdb
image: pryby/testdbcon
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: root
- name: POSTGRES_DB
value: postgres
ports:
- containerPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: server
spec:
replicas: 1
selector:
matchLabels:
app: server
template:
metadata:
labels:
app: server
spec:
containers:
- name: server
image: pryby/testdocker
env:
- name: LISTEN
value: 0.0.0.0:8080
- name: DB_HOST
value: testdb
- name: DB_PORT
value: "5432"
- name: DB_USER
value: postgres
- name: DB_DBNAME
value: postgres
- name: DB_PASSWORD
value: root
- name: DB_SSL
value: disable
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: testdb
labels:
app: testdb
spec:
ports:
- port: 5432
name: web
clusterIP: None
selector:
app: testdb
Expose your testdb Deployment with a Service of type ClusterIP on port 5432.
Either craft the YAML yourself, or run kubectl expose deployment testdb --port 5432 --target-port 5432 --name testdb
You can also use the following command to generate the YAML definition file:
kubectl expose deployment testdb --port 5432 --target-port 5432 --name testdb --dry-run=client -o yaml > service-definition.yaml
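The generated definition should look roughly like the following (a sketch; the exact fields vary with the kubectl version):

apiVersion: v1
kind: Service
metadata:
  name: testdb
spec:
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: testdb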
I have a simple Golang application which talks to a Postgres database. The Postgres container works as expected; however, my Golang application does not start.
In my config.go the environment variables are specified as:
type Config struct {
Port uint `env:"PORT" envDefault:"8000"`
PostgresURL string `env:"POSTGRES_URL" envDefault:"postgres://user:pass@127.0.0.1/simple-service"`
}
My Golang application's Service and Deployment YAML looks like:
apiVersion: v1
kind: Service
metadata:
name: simple-service-webapp-service
labels:
app: simple-service-webapp
spec:
ports:
- port: 8080
name: http
selector:
app: simple-service-webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: simple-service-webapp-v1
labels:
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: simple-service-webapp
version: v1
template:
metadata:
labels:
app: simple-service-webapp
version: v1
spec:
containers:
- name: simple-service-webapp
image: docker.io/225517/simple-service-webapp:v1
resources:
requests:
cpu: 100m
imagePullPolicy: Always
ports:
- containerPort: 8080
env:
- name: POSTGRES_HOST
value: postgresdb
- name: POSTGRES_PORT
value: "5432"
- name: POSTGRES_DB
value: simple-service
- name: POSTGRES_USER
value: user
- name: POSTGRES_PASSWORD
value: pass
readinessProbe:
httpGet:
path: /live
port: 8080
---
Below is the Service and Deployment YAML for the Postgres service:
apiVersion: v1
kind: Service
metadata:
name: postgresdb
labels:
app: postgresdb
spec:
ports:
- port: 5432
name: tcp
selector:
app: postgresdb
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresdb-v1
labels:
app: postgresdb
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: postgresdb
version: v1
template:
metadata:
labels:
app: postgresdb
version: v1
spec:
containers:
- name: postgresdb
image: postgres
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: simple-service
- name: POSTGRES_USER
value: user
- name: POSTGRES_PASSWORD
value: pass
readinessProbe:
exec:
command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
initialDelaySeconds: 15
timeoutSeconds: 2
livenessProbe:
exec:
command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
initialDelaySeconds: 45
timeoutSeconds: 2
---
Do I need to change anything in config.go? Could you please help? Thanks!
This is a community wiki answer based on the info from the comments; feel free to expand on it. I'm placing it here for better visibility for the community.
The question was solved by setting the appropriate Postgres DB port (5432) and rewriting the config so it looks for the correct variables set on the deployment.
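Another option (a sketch, assuming the Go config library keeps reading POSTGRES_URL and that the postgresdb Service from the manifests above is used as the host) is to set the variable config.go already expects directly in the webapp container's env list:

env:
  - name: POSTGRES_URL
    value: "postgres://user:pass@postgresdb:5432/simple-service"

This avoids changing config.go at all, at the cost of duplicating the credentials inside the URL.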
I work on an open source system that is composed of a Postgres database and a Tomcat server. I have Docker images for each component. We currently use docker-compose to test the application.
I am attempting to model this application with Kubernetes.
Here is my first attempt:
apiVersion: v1
kind: Pod
metadata:
name: dspace-pod
spec:
volumes:
- name: "pgdata-vol"
emptyDir: {}
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- image: dspace/dspace:dspace-6_x
name: dspace
ports:
- containerPort: 8080
name: http
protocol: TCP
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
#
- image: dspace/dspace-postgres-pgcrypto
name: dspacedb
ports:
- containerPort: 5432
name: http
protocol: TCP
volumeMounts:
- mountPath: "/pgdata"
name: "pgdata-vol"
env:
- name: PGDATA
value: /pgdata
I have a ConfigMap that sets the hostname to the name of the pod.
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: local-config-map
namespace: default
data:
local.cfg: |-
dspace.dir = /dspace
db.url = jdbc:postgresql://dspace-pod:5432/dspace
dspace.hostname = dspace-pod
dspace.baseUrl = http://dspace-pod:8080
solr.server=http://dspace-pod:8080/solr
This application has a number of tasks that are run from the command line.
I have created a third Docker image that contains the jars needed on the command line.
I am interested in modeling these command-line tasks as Jobs in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a Job should run within a Pod that is already running?
Here is my first attempt at defining a job.
apiVersion: batch/v1
kind: Job
#https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
name: dspace-create-admin
spec:
template:
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- name: dspace-cli
image: dspace/dspace-cli:dspace-6_x
command: [
"/dspace/bin/dspace",
"create-administrator",
"-e", "test#test.edu",
"-f", "test",
"-l", "admin",
"-p", "admin",
"-c", "en"
]
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
restartPolicy: Never
The following configuration has allowed me to start my services (Tomcat and Postgres) as I hoped.
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: local-config-map
namespace: default
data:
# example of a simple property defined using --from-literal
#example.property.1: hello
#example.property.2: world
# example of a complex property defined using --from-file
local.cfg: |-
dspace.dir = /dspace
db.url = jdbc:postgresql://dspacedb-service:5432/dspace
dspace.hostname = dspace-service
dspace.baseUrl = http://dspace-service:8080
solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
name: dspacedb-service
labels:
app: dspacedb-app
spec:
type: NodePort
selector:
app: dspacedb-app
ports:
- protocol: TCP
port: 5432
# targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dspacedb-deploy
labels:
app: dspacedb-app
spec:
selector:
matchLabels:
app: dspacedb-app
template:
metadata:
labels:
app: dspacedb-app
spec:
volumes:
- name: "pgdata-vol"
emptyDir: {}
containers:
- image: dspace/dspace-postgres-pgcrypto
name: dspacedb
ports:
- containerPort: 5432
name: http
protocol: TCP
volumeMounts:
- mountPath: "/pgdata"
name: "pgdata-vol"
env:
- name: PGDATA
value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
name: dspace-service
labels:
app: dspace-app
spec:
type: NodePort
selector:
app: dspace-app
ports:
- protocol: TCP
port: 8080
targetPort: 8080
name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dspace-deploy
labels:
app: dspace-app
spec:
selector:
matchLabels:
app: dspace-app
template:
metadata:
labels:
app: dspace-app
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- image: dspace/dspace:dspace-6_x-jdk8-test
name: dspace
ports:
- containerPort: 8080
name: http
protocol: TCP
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
After applying the configuration above, I have the following results.
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10
I was pleased to see that the service name can be used for port forwarding.
$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
I am also able to run the following job using the defined service names in the configMap.
apiVersion: batch/v1
kind: Job
metadata:
name: dspace-create-admin
spec:
template:
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- name: dspace-cli
image: dspace/dspace-cli:dspace-6_x
command: [
"/dspace/bin/dspace",
"create-administrator",
"-e", "test#test.edu",
"-f", "test",
"-l", "admin",
"-p", "admin",
"-c", "en"
]
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
restartPolicy: Never
Results
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I still have some work to do persisting the volumes.
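As a sketch of that next step (assuming the cluster has a default StorageClass; the claim name pgdata-pvc is made up for illustration), the emptyDir volume in the dspacedb Deployment could be replaced with a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

and the Deployment's volume would then reference it:

volumes:
  - name: "pgdata-vol"
    persistentVolumeClaim:
      claimName: pgdata-pvc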
To deploy Kafka with Zookeeper we are using the YAML below.
Kafka is running successfully and we are able to write and read data on a Kafka topic through the pod IP, but we are not able to write to the Kafka topic through the service IP.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: zookeeper-cluster1
namespace: default
labels:
app: zookeeper-cluster1
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper-cluster1
template:
metadata:
labels:
name: zookeeper-cluster1
app: zookeeper-cluster1
spec:
hostname: zookeeper-cluster1
containers:
- name: zookeeper-cluster1
image: wurstmeister/zookeeper:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper-cluster1
namespace: default
labels:
app: zookeeper-cluster1
spec:
type: NodePort
selector:
app: zookeeper-cluster1
ports:
- name: zookeeper-cluster1
protocol: TCP
port: 2181
targetPort: 2181
- name: zookeeper-follower-cluster1
protocol: TCP
port: 2888
targetPort: 2888
- name: zookeeper-leader-cluster1
protocol: TCP
port: 3888
targetPort: 3888
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kafka-cluster
namespace: default
labels:
app: kafka-cluster
spec:
replicas: 1
selector:
matchLabels:
app: kafka-cluster
template:
metadata:
labels:
name: kafka-cluster
app: kafka-cluster
spec:
hostname: kafka-cluster
containers:
- name: kafka-cluster
image: wurstmeister/kafka:latest
imagePullPolicy: IfNotPresent
env:
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://kafka-cluster:9092
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper-cluster1:2181
ports:
- containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
name: kafka-cluster
namespace: default
labels:
app: kafka-cluster
spec:
type: NodePort
selector:
app: kafka-cluster
ports:
- name: kafka-cluster
protocol: TCP
port: 9092
targetPort: 9092
Please suggest why the Service associated with Kafka on port 9092 is not working.
Thanks
I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I've run into an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pv-claim-glpi
labels:
type: openebs
spec:
storageClassName: openebs-storageclass
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: glpi
namespace: jb
labels:
app: glpi
spec:
selector:
matchLabels:
app: glpi
replicas: 1 # tells the deployment to run 1 pod matching the template
template: # create pods using pod definition in this template
metadata:
# unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
# generated from the deployment name
labels:
app: glpi
spec:
volumes:
- name: pv-storage-glpi
persistentVolumeClaim:
claimName: pv-claim-glpi
containers:
- name: mariadb
image: mariadb
env:
- name: MYSQL_ROOT_PASSWORD
value: "glpi"
- name: MYSQL_DATABASE
value: "glpi"
- name: MYSQL_USER
value: "glpi"
- name: MYSQL_PASSWORD
value: "glpi"
- name: GLPI_SOURCE_URL
value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
ports:
- containerPort: 3306
name: mariadb
volumeMounts:
- mountPath: /var/lib/mariadb/
name: pv-storage-glpi
subPath: mariadb
- name: glpi
image: driket54/glpi
ports:
- containerPort: 80
name: http
- containerPort: 8090
name: https
volumeMounts:
- mountPath: /var/glpidata
name: pv-storage-glpi
subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
name: glpi
namespace: jb
spec:
selector:
app: glpi
ports:
- protocol: "TCP"
port: 80
targetPort: http
name: http
- protocol: "TCP"
port: 8090
targetPort: https
name: https
- protocol: "TCP"
port: 3306
targetPort: mariadb
name: mariadb
type: NodePort
---
The Docker image deploys properly, but in my test phase, during the setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help
Not really an answer, since I don't have the Kubernetes knowledge expected, but I can't add a comment yet :(
What you should alter first is your GLPi version.
Use this link; it's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you may use the CLI tools to set up the database:
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(not sure about the host value in your environment, but you get the idea)