Kubernetes: Managing environment config - docker

The recommended method for managing environment configuration for containers running in a pod is through the use of a ConfigMap. See the docs here.
This is great, although we have containers that require a large number of environment variables, and this will only grow in the future. Using the prescribed ConfigMap method this becomes unwieldy and hard to manage.
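For reference, the prescribed approach references each variable from the ConfigMap individually with configMapKeyRef, repeated once per variable; a minimal sketch (the ConfigMap name and key here are illustrative):

        env:
        - name: API_HOST
          valueFrom:
            configMapKeyRef:
              name: my-app-config   # hypothetical ConfigMap holding the values
              key: api-host

Repeated for every variable, that is where the boilerplate comes from.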
For example, a simple deployment file becomes massive:
apiVersion: v1
kind: Service
metadata:
  name: my-app-api
  labels:
    name: my-app-api
    environment: staging
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    name: my-app-api
    environment: staging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-api
spec:
  replicas: 2
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        name: my-app-api
        environment: staging
    spec:
      containers:
      - name: my-app-api
        imagePullPolicy: Always
        image: myapp/my-app-api:latest
        ports:
        - containerPort: 80
        env:
        - name: API_HOST
          value: XXXXXXXXXXX
        - name: API_ENV
          value: XXXXXXXXXXX
        - name: API_DEBUG
          value: XXXXXXXXXXX
        - name: API_KEY
          value: XXXXXXXXXXX
        - name: EJ_API_ENDPOINT
          value: XXXXXXXXXXX
        - name: WEB_HOST
          value: XXXXXXXXXXX
        - name: AWS_ACCESS_KEY
          value: XXXXXXXXXXX
        - name: AWS_SECRET_KEY
          value: XXXXXXXXXXX
        - name: CDN
          value: XXXXXXXXXXX
        - name: STRIPE_KEY
          value: XXXXXXXXXXX
        - name: STRIPE_SECRET
          value: XXXXXXXXXXX
        - name: DB_HOST
          value: XXXXXXXXXXX
        - name: MYSQL_ROOT_PASSWORD
          value: XXXXXXXXXXX
        - name: MYSQL_DATABASE
          value: XXXXXXXXXXX
        - name: REDIS_HOST
          value: XXXXXXXXXXX
      imagePullSecrets:
      - name: my-registry-key
Is there an alternative, easier way to inject a central environment configuration?
UPDATE
This was proposed for 1.5, although it did not make the cut, and it looks like it will be included in 1.6. Fingers crossed...

There is a proposal currently targeted for 1.5 that aims to make this easier. As proposed, you would be able to pull all variables from a ConfigMap in one go, without having to spell out each one separately.
If implemented, it would allow you to do something like this:
Warning: This doesn't actually work yet!
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: space-config
data:
  space-ships: "1"
  ship-type: battle-cruiser
  weapon: laser-cannon
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: space-simulator
spec:
  template:
    metadata:
      labels:
        app: space-simulator
    spec:
      containers:
      - name: space-simulator
        image: foo/space-simulator
        # This is the magic piece that would allow you to avoid all that boilerplate!
        envFrom:
          configMap: space-config
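For reference, this eventually shipped as envFrom in Kubernetes 1.6, with a slightly different shape than the proposal above; a minimal sketch of the implemented syntax:

    spec:
      containers:
      - name: space-simulator
        image: foo/space-simulator
        envFrom:
        - configMapRef:
            name: space-config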

Related

Bitnami Airflow scheduler could not connect to the database while the webserver can connect, even though they have the same properties?

I want to configure Airflow on OpenShift.
I set up the database on OpenShift like below:
kind: Service
apiVersion: v1
metadata:
  name: airflow-database
  namespace: ersin-poc
spec:
  ports:
  - name: 5432-tcp
    protocol: TCP
    port: 5432
    targetPort: 5432
  selector:
    deployment: airflow-database
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
and my database deployment is like below:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: airflow-database
  namespace: ersin-poc
  labels:
    deployment: airflow-database
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: airflow-database
  template:
    metadata:
      creationTimestamp: null
      labels:
        deployment: airflow-database
    spec:
      volumes:
      - name: generic
        persistentVolumeClaim:
          claimName: generic
      - name: empty1
        emptyDir: {}
      containers:
      - resources: {}
        name: airflow-database
        env:
        - name: POSTGRESQL_USERNAME
          value: 'bn_airflow'
        - name: POSTGRESQL_PASSWORD
          value: 'bitnami1'
        - name: POSTGRESQL_DATABASE
          value: 'bitnami_airflow'
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: generic
          mountPath: /bitnami/postgresql/
        image: >-
          bitnami/postgresql:latest
      hostname: airflow-database
I can connect to this DB from my webserver like below:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: airflow-webserver
  namespace: ersin-poc
  labels:
    deployment: airflow-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: airflow-webserver
  template:
    metadata:
      creationTimestamp: null
      labels:
        deployment: airflow-webserver
    spec:
      volumes:
      - name: generic
        persistentVolumeClaim:
          claimName: generic
      - name: empty1
        emptyDir: {}
      containers:
      - resources: {}
        name: airflow-webserver
        env:
        - name: AIRFLOW_HOME
          value: /home/appuser
        - name: USER
          value: appuser
        - name: AIRFLOW_FERNET_KEY
          value: '46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho='
        - name: AIRFLOW_SECRET_KEY
          value: 'a25mQ1FHTUh3MnFRSk5KMEIyVVU2YmN0VGRyYTVXY08='
        - name: AIRFLOW_EXECUTOR
          value: 'CeleryExecutor'
        - name: AIRFLOW_DATABASE_NAME
          value: 'bitnami_airflow'
        - name: AIRFLOW_DATABASE_USERNAME
          value: 'bn_airflow'
        - name: AIRFLOW_DATABASE_PASSWORD
          value: 'bitnami1'
        - name: AIRFLOW_LOAD_EXAMPLES
          value: 'yes'
        - name: AIRFLOW_PASSWORD
          value: 'bitnami123'
        - name: AIRFLOW_USERNAME
          value: 'user'
        - name: AIRFLOW_EMAIL
          value: 'user@example.com'
        - name: AIRFLOW_DATABASE_HOST
          value: 'airflow-database'
        - name: AIRFLOW_DATABASE_PORT_NUMBER
          value: '5432'
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: generic
          mountPath: /home/appuser
        - name: generic
          mountPath: /home/appuser/logs/
        - name: generic
          mountPath: /home/appuser/dags/
        image: >-
          bitnami/airflow:latest
      hostname: airflow-webserver
but when I try it with airflow-scheduler it gives an error:
airflow-scheduler 09:29:43.31 INFO  ==> Trying to connect to the database server
airflow-scheduler 09:30:47.42 ERROR ==> Could not connect to the database
and my scheduler YAML is:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: airflow-scheduler
  namespace: ersin-poc
  labels:
    deployment: airflow-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: airflow-scheduler
  template:
    metadata:
      labels:
        deployment: airflow-scheduler
    spec:
      volumes:
      - name: generic
        persistentVolumeClaim:
          claimName: generic
      - name: empty1
        emptyDir: {}
      containers:
      - resources: {}
        name: airflow-scheduler
        env:
        - name: AIRFLOW_HOME
          value: /home/appuser
        - name: USER
          value: appuser
        - name: AIRFLOW_FERNET_KEY
          value: '46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho='
        - name: AIRFLOW_SECRET_KEY
          value: 'a25mQ1FHTUh3MnFRSk5KMEIyVVU2YmN0VGRyYTVXY08='
        - name: AIRFLOW_EXECUTOR
          value: 'CeleryExecutor'
        - name: AIRFLOW_DATABASE_NAME
          value: 'bitnami_airflow'
        - name: AIRFLOW_DATABASE_USERNAME
          value: 'bn_airflow'
        - name: AIRFLOW_DATABASE_PASSWORD
          value: 'bitnami1'
        - name: AIRFLOW_DATABASE_HOST
          value: 'airflow-database'
        - name: AIRFLOW_DATABASE_PORT_NUMBER
          value: '5432'
        - name: AIRFLOW_WEBSERVER_HOST
          value: 'airflow-webserver'
        - name: AIRFLOW_WEBSERVER_PORT_NUMBER
          value: '8080'
        - name: REDIS_HOST
          value: 'airflow-redis'
        - name: REDIS_PORT_NUMBER
          value: '6379'
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: generic
          mountPath: /home/appuser
        - name: generic
          mountPath: /home/appuser/logs/
        - name: generic
          mountPath: /home/appuser/dags/
        image: >-
          bitnami/airflow-scheduler:latest
      hostname: airflow-scheduler
So I can't understand why I get this error with the same properties.
Thanks in advance.
EDIT
And I tried these commands in the scheduler pod to see whether I can connect to the DB or not:
psql -h airflow-database -p 5432 -U bn_airflow -d bitnami_airflow -W
pass: bitnami1
select * from public.ab_user;
and yes I can.
After a lot of searching, I've decided to do this with the apache/airflow images. (PostgreSQL and Redis are still Bitnami; that part doesn't matter.)
You can see all the YAML files for Airflow on OpenShift here:
https://github.com/ersingulbahar/airflow_on_openshift
It works now as expected

Registry UI (joxit) does not return stored images

I have set up a private registry (Kubernetes) using the following configuration, based on the repo https://github.com/sleighzy/k8s-docker-registry:
Create the password file, see the Apache htpasswd documentation for more information on this command.
htpasswd -b -c -B htpasswd docker-registry registry-password!
Adding password for user docker-registry
Create namespace
kubectl create namespace registry
Add the generated password file as a Kubernetes secret.
kubectl create secret generic basic-auth --from-file=./htpasswd -n registry
secret/basic-auth created
registry-secrets.yaml
---
# https://kubernetes.io/docs/concepts/configuration/secret/
apiVersion: v1
kind: Secret
metadata:
  name: s3
  namespace: registry
data:
  REGISTRY_STORAGE_S3_ACCESSKEY: Y2hlc0FjY2Vzc2tleU1pbmlv
  REGISTRY_STORAGE_S3_SECRETKEY: Y2hlc1NlY3JldGtleQ==
registry-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: registry
spec:
  ports:
  - protocol: TCP
    name: registry
    port: 5000
  selector:
    app: registry
I am using my MinIO (already deployed and running)
registry-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - name: registry
          containerPort: 5000
        volumeMounts:
        - name: credentials
          mountPath: /auth
          readOnly: true
        env:
        - name: REGISTRY_LOG_ACCESSLOG_DISABLED
          value: "true"
        - name: REGISTRY_HTTP_HOST
          value: "https://registry.mydomain.io:5000"
        - name: REGISTRY_LOG_LEVEL
          value: info
        - name: REGISTRY_HTTP_SECRET
          value: registry-http-secret
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: homelab
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: /auth/htpasswd
        - name: REGISTRY_STORAGE
          value: s3
        - name: REGISTRY_STORAGE_S3_REGION
          value: ignored-cos-minio
        - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
          value: charity.api.com  # this is the valid MinIO API endpoint
        - name: REGISTRY_STORAGE_S3_BUCKET
          value: "charitybucket"
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "true"
        - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
          value: "false"
        - name: REGISTRY_STORAGE_S3_ACCESSKEY
          valueFrom:
            secretKeyRef:
              name: s3
              key: REGISTRY_STORAGE_S3_ACCESSKEY
        - name: REGISTRY_STORAGE_S3_SECRETKEY
          valueFrom:
            secretKeyRef:
              name: s3
              key: REGISTRY_STORAGE_S3_SECRETKEY
      volumes:
      - name: credentials
        secret:
          secretName: basic-auth
I have created an entry in /etc/hosts
192.168.xx.xx registry.mydomain.io
registry-IngressRoute.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry
  namespace: registry
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`registry.mydomain.io`)
    kind: Rule
    services:
    - name: registry
      port: 5000
  tls:
    certResolver: tlsresolver
I have access to the private registry using http://registry.mydomain.io:5000/ and it obviously returns a blank page.
I have already pushed some images and http://registry.mydomain.io:5000/v2/_catalog returns:
{"repositories":["console-image","hello-world","hello-world-2","hello-world-ha","myfirstimage","ubuntu-my"]}
The above configuration seems to work.
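For reference, the same catalog check can be scripted with basic auth, using the credentials created earlier (URL and port as used above):

curl -u 'docker-registry:registry-password!' http://registry.mydomain.io:5000/v2/_catalog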
Then I tried to add the registry UI provided by Joxit, with the following configuration:
registry-ui-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  namespace: registry
spec:
  ports:
  - protocol: TCP
    name: registry-ui
    port: 80
  selector:
    app: registry-ui
registry-ui-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry-ui
  labels:
    app: registry-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-ui
  template:
    metadata:
      labels:
        app: registry-ui
    spec:
      containers:
      - name: registry-ui
        image: joxit/docker-registry-ui:1.5-static
        ports:
        - name: registry-ui
          containerPort: 80
        env:
        - name: REGISTRY_URL
          value: https://registry.mydomain.io
        - name: SINGLE_REGISTRY
          value: "true"
        - name: REGISTRY_TITLE
          value: "CHARITY Registry UI"
        - name: DELETE_IMAGES
          value: "true"
registry-ui-ingress-route.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
    kind: Rule
    services:
    - name: registry-ui
      port: 80
    middlewares:
    - name: stripprefix
  tls:
    certResolver: tlsresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: stripprefix
  namespace: registry
spec:
  stripPrefix:
    prefixes:
    - /ui/
I have access to the browser UI at https://registry.mydomain.io/ui/, however it returns nothing.
Am I missing something here?
As the owner of that repository, I think there may be something missing here. Your IngressRoute rule has an entryPoint of websecure and a certResolver of tlsresolver. This is intended to be the https entrypoint for Traefik; see my other repository https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd and the associated Traefik documentation on which this Docker Registry repo is based.
Can you review your Traefik deployment to ensure that you have this entrypoint, and that you also have this certificate resolver along with a generated https certificate that it is using? Can you also check the Traefik logs to see if there are any errors during startup, e.g. missing certs, and any access log entries that may indicate why this is not routing there?
If you don't have these items set up, you could help narrow this down further by changing this IngressRoute config to use just the web entrypoint and removing the tls section in your registry-ui-ingress-route.yaml manifest file, then reapplying it. This will let you access the UI over http, to at least rule out any https issues, as sketched below.
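A minimal sketch of that simplified IngressRoute, assuming the Traefik plain-http entrypoint is named web (all other names as in the manifests above):
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
  - web            # plain-http entrypoint; no tls section while debugging
  routes:
  - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
    kind: Rule
    services:
    - name: registry-ui
      port: 80
    middlewares:
    - name: stripprefix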

Kubernetes Access service nextcloud from /nextcloud

I am trying to host my own Nextcloud server using Kubernetes.
I want my Nextcloud server to be accessed from http://localhost:32738/nextcloud but every time I access that URL, it gets redirected to http://localhost:32738/login and gives me 404 Not Found.
If I replace the path with:
path: /
then, it works without problems on http://localhost:32738/login but as I said, it is not the solution I am looking for. The login page should be accessed from http://localhost:32738/nextcloud/login.
Going to http://127.0.0.1:32738/nextcloud/ does work for the initial setup but after that it becomes inaccessible as it always redirects to:
http://127.0.0.1:32738/apps/dashboard/
and not to:
http://127.0.0.1:32738/nextcloud/apps/dashboard/
This is my YAML:
#Nextcloud-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-server-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-server-pod
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:22.2.0-apache
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_HOST
          value: nextcloud-database:5432
        volumeMounts:
        - name: server-storage
          mountPath: /var/www/html
          subPath: server-data
      volumes:
      - name: server-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Nextcloud-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-server-pod
  ports:
  - port: 80
    protocol: TCP
    name: nextcloud-server
---
#Database-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-database-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-database-pod
    spec:
      containers:
      - name: postgresql
        image: postgres:13.4
        env:
        - name: POSTGRES_DATABASE
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-rootpassword
        - name: PGDATA
          value: /var/lib/postgresql/data/
        volumeMounts:
        - name: database-storage
          mountPath: /var/lib/postgresql/data/
          subPath: data
      volumes:
      - name: database-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Database-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-database-pod
  ports:
  - port: 5432
    protocol: TCP
    name: nextcloud-database
---
#PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-pv
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp"
---
#PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
#Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: nextcloud-server
            port:
              number: 80
        pathType: Prefix
        path: /nextcloud(/.*)
---
#Secret
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud
  labels:
    app: nextcloud
immutable: true
stringData:
  db-name: nextcloud
  db-username: nextcloud
  db-password: changeme
  db-rootpassword: longpassword
  username: admin
  password: changeme
ingress-nginx was installed with:
helm install nginx ingress-nginx/ingress-nginx
Please tell me if you want me to supply more information.
In your case there is a difference between the exposed URL in the backend service and the specified path in the Ingress rule. That's why you get an error.
To avoid that you can use a rewrite rule, which rewrites your ingress paths to the value you provide.
For example, the annotation nginx.ingress.kubernetes.io/rewrite-target: /login would rewrite the URL /nextcloud/login to /login before sending the request to the backend service.
But:
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions.
In the documentation you can find the following example:
$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80
' | kubectl create -f -
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
So in your URL you would see the wanted /nextcloud/login, but the rewrite will change the path to /login in the Ingress rule and find your backend. I would suggest using one of the following options:
path: /nextcloud(/.*)
nginx.ingress.kubernetes.io/rewrite-target: /$1
or
path: /nextcloud/login
nginx.ingress.kubernetes.io/rewrite-target: /login
See also this article.
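Adapting the documented two-capture-group pattern to the manifest in the question might look like this (an untested sketch; the service name and port are taken from the Ingress above, and pathType is switched to ImplementationSpecific for the regex path):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /nextcloud(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: nextcloud-server
            port:
              number: 80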

AzureFunctions AppInsights Logging does not work in Azure AKS

I've been using Azure Functions (non-static, with proper DI) for a short while now. I recently added Application Insights by using the APPINSIGHTS_INSTRUMENTATIONKEY key. When debugging locally it all works fine.
If I run it by publishing the function and using the following Dockerfile to run it locally on Docker, it works fine as well.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0-alpine
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY ./publish/ /home/site/wwwroot
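For completeness, the local build and run referred to above might look something like this (the image tag is illustrative, and the port mapping assumes the Functions base image default of port 80):

docker build -t mytestfunction:1.1 .
docker run -p 8080:80 \
  -e APPINSIGHTS_INSTRUMENTATIONKEY=a9c908d8-192d-482c-906e-1269478fab6f \
  mytestfunction:1.1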
However, if I go a step further and try to deploy it to Kubernetes (in my case Azure AKS) using the following YAML files, the function starts fine, with the log files showing the loading of the Application Insights parameter. However, it does not log to Insights.
deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mytestfunction-secrets
  namespace: "testfunction"
type: Opaque
data:
  ApplicationInsights: YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          valueFrom:
            secretKeyRef:
              name: mytestfunction-secrets
              key: ApplicationInsights
      imagePullSecrets:
      - name: imagepullsecrets
However, I did alter the YAML by not storing the key as a secret, and then it did work.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          value: a9c908d8-192d-482c-906e-1269478fab6f
      imagePullSecrets:
      - name: imagepullsecrets
I'm kind of surprised that the difference between these notations causes Azure Functions not to log to Insights. My impression was that the running application does not care or know whether a value came from a secret or from a regular entry in Kubernetes. Even though it might be debatable whether the instrumentation key is a secret or not, I would prefer to store it there. Does anyone have an idea why this might cause it?
# Not working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  valueFrom:
    secretKeyRef:
      name: mytestfunction-secrets
      key: ApplicationInsights

# Working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  value: a9c908d8-192d-482c-906e-1269478fab6f
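One way to verify the two forms are truly equivalent is to decode the stored secret and compare it byte for byte with the literal value; a trailing newline (for example, from base64-encoding the key with echo instead of echo -n) is a common way the two can silently differ. A sketch, using the names above:

kubectl get secret mytestfunction-secrets -n testfunction \
  -o jsonpath='{.data.ApplicationInsights}' | base64 -d | od -c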
These are the versions I'm using:
Azure Functions Core Tools (2.4.419)
Function Runtime Version: 2.0.12332.0
Azure AKS: 1.12.x
Also, the instrumentation key is a fake one for sharing purposes, not an actual one.

Kubernetes deployment database connection error

I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster but I encounter an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1  # tells deployment to run 1 pod matching the template
  template:    # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
      - name: pv-storage-glpi
        persistentVolumeClaim:
          claimName: pv-claim-glpi
      containers:
      - name: mariadb
        image: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "glpi"
        - name: MYSQL_DATABASE
          value: "glpi"
        - name: MYSQL_USER
          value: "glpi"
        - name: MYSQL_PASSWORD
          value: "glpi"
        - name: GLPI_SOURCE_URL
          value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - mountPath: /var/lib/mariadb/
          name: pv-storage-glpi
          subPath: mariadb
      - name: glpi
        image: driket54/glpi
        ports:
        - containerPort: 80
          name: http
        - containerPort: 8090
          name: https
        volumeMounts:
        - mountPath: /var/glpidata
          name: pv-storage-glpi
          subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: http
    name: http
  - protocol: "TCP"
    port: 8090
    targetPort: https
    name: https
  - protocol: "TCP"
    port: 3306
    targetPort: mariadb
    name: mariadb
  type: NodePort
---
The Docker image is properly deployed, but in my test phase, during the setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help
Not really an answer, since I don't have the expected Kubernetes knowledge, but I can't add a comment yet :(
What you should alter first is your GLPI version.
Use this link. It's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you may use the CLI tools to set up the database:
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi (not sure about the host in your environment, but you get the idea)
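One assumption worth checking: in the Deployment above, the mariadb and glpi containers run in the same pod, so they share a network namespace and the database should be reachable from the glpi container at 127.0.0.1:3306 (or through the glpi Service on port 3306). Under that assumption, the install command would look more like:

php scripts/cliinstall.php --host=127.0.0.1 --db=glpi --user=glpi --pass=glpi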
