GKE: pods (dotnet application) often restart with error 139 - docker

I have a private GKE cluster. It contains 3 nodes (each with 2 CPUs and 7.5 GB of memory) and 3 replicas of a pod (a .NET Core application). I've noticed that my containers sometimes restart with "error 139 SIGSEGV", which indicates a problem with accessing memory:
This occurs when a program attempts to access a memory location that it’s not allowed to access, or attempts to access a memory location in a way that’s not allowed.
There are no application logs with the error before the container restarts, so it is very hard to debug.
I've added a property set to false in the application, but it didn't solve the problem.
How can I fix this problem?
Manifest:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stage-deployment
  namespace: stage
spec:
  replicas: 3
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: stage
  template:
    metadata:
      labels:
        app: stage
    spec:
      containers:
      - name: stage-container
        image: my.registry/stage/core:latest
        imagePullPolicy: "Always"
        ports:
        - containerPort: 5000
          name: http
        - containerPort: 22
          name: ssh
        readinessProbe:
          tcpSocket:
            port: 5000
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 5000
          initialDelaySeconds: 5
          periodSeconds: 20
        env:
        - name: POSTGRES_DB_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        - name: DB_NAME
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: dbname
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=my-instance:us-west1:dbserver=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: instance-credentials
        secret:
          secretName: instance-credentials
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: stage-service
  namespace: stage
spec:
  type: NodePort
  selector:
    app: stage
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
    name: https
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 300m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-buffers-number: 4 256k
    nginx.org/client-max-body-size: 1000m
  name: ingress
  namespace: stage
spec:
  rules:
  - host: my.site.com
    http:
      paths:
      - backend:
          serviceName: stage-service
          servicePort: 80
  tls:
  - hosts:
    - my.site.com
    secretName: my-certs
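One thing worth noting about the manifest: stage-container declares no resources section, so the scheduler and kubelet have no memory budget for it. A hedged sketch of what adding requests and limits to that container could look like; the numbers are placeholders, not recommendations:

        # hypothetical addition under the stage-container entry above
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "2Gi"
            cpu: "1"

With limits in place, a memory-related kill surfaces as exit code 137 (OOMKilled) in kubectl describe pod, which makes it easier to tell apart from the 139 (SIGSEGV) exits described above.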

Related

Kubernetes: Access service nextcloud from /nextcloud

I am trying to host my own Nextcloud server using Kubernetes.
I want my Nextcloud server to be accessed from http://localhost:32738/nextcloud but every time I access that URL, it gets redirected to http://localhost:32738/login and gives me 404 Not Found.
If I replace the path with:
path: /
then, it works without problems on http://localhost:32738/login but as I said, it is not the solution I am looking for. The login page should be accessed from http://localhost:32738/nextcloud/login.
Going to http://127.0.0.1:32738/nextcloud/ does work for the initial setup but after that it becomes inaccessible as it always redirects to:
http://127.0.0.1:32738/apps/dashboard/
and not to:
http://127.0.0.1:32738/nextcloud/apps/dashboard/
This is my yaml:
#Nextcloud-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-server-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-server-pod
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:22.2.0-apache
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_HOST
          value: nextcloud-database:5432
        volumeMounts:
        - name: server-storage
          mountPath: /var/www/html
          subPath: server-data
      volumes:
      - name: server-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Nextcloud-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-server-pod
  ports:
  - port: 80
    protocol: TCP
    name: nextcloud-server
---
#Database-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-database-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-database-pod
    spec:
      containers:
      - name: postgresql
        image: postgres:13.4
        env:
        - name: POSTGRES_DATABASE
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-rootpassword
        - name: PGDATA
          value: /var/lib/postgresql/data/
        volumeMounts:
        - name: database-storage
          mountPath: /var/lib/postgresql/data/
          subPath: data
      volumes:
      - name: database-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Database-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-database-pod
  ports:
  - port: 5432
    protocol: TCP
    name: nextcloud-database
---
#PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-pv
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp"
---
#PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
#Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: nextcloud-server
            port:
              number: 80
        pathType: Prefix
        path: /nextcloud(/.*)
---
#Secret
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud
  labels:
    app: nextcloud
immutable: true
stringData:
  db-name: nextcloud
  db-username: nextcloud
  db-password: changeme
  db-rootpassword: longpassword
  username: admin
  password: changeme
ingress-nginx was installed with:
helm install nginx ingress-nginx/ingress-nginx
Please tell me if you want me to supply more information.
In your case there is a mismatch between the URL exposed by the backend service and the path specified in the Ingress rule; that's why you get the error.
To avoid that you can use a rewrite rule.
With a rewrite, the request path is rewritten to the value you provide before it reaches the backend.
For example, the annotation nginx.ingress.kubernetes.io/rewrite-target: /login would rewrite the URL /nextcloud/login to /login before sending the request to the backend service.
But:
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions.
In the documentation you can find the following example:
$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80
' | kubectl create -f -
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
So in the browser you would still request the wanted /nextcloud/login, but the rewrite causes the path to be changed to /login before the request is handed to your backend. I would suggest using one of the following options:
path: /nextcloud(/.*)
nginx.ingress.kubernetes.io/rewrite-target: /$1
or
path: /nextcloud/login
nginx.ingress.kubernetes.io/rewrite-target: /login
See also this article.
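Applied to the manifest from the question, that capture-group pattern would look roughly like the sketch below. This is only an illustration using the service name from the question, not a tested configuration for this Nextcloud setup:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /nextcloud(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: nextcloud-server
            port:
              number: 80

Keep in mind that the rewrite only changes what the backend sees; redirects issued by Nextcloud itself (such as the one to /apps/dashboard/) are typically controlled by Nextcloud settings such as overwritewebroot, which would also need to know about the /nextcloud prefix.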

Exposing TCP traffic with nginx ingress controller

Currently I am testing on Windows using Docker Desktop with Kubernetes feature on.
I want to stream RTMP data over TCP through the Ingress Controller.
I followed the NGINX controller installation guide https://kubernetes.github.io/ingress-nginx/deploy/ and I tried to configure the TCP like https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
Please note the flag --tcp-services-configmap=rtmp/tcp-services.
If I push data through port 1936 the connection cannot be established. If I try with 1935 it works. I would like to have the Ingress controller route the traffic to my service and get rid of the LoadBalancer since it doesn't really make sense to have one balancer after another.
With the following configuration I was expecting that sending data to 1936 would work.
Am I missing something?
apiVersion: v1
kind: Service
metadata:
  name: restreamer1-service
  namespace: rtmp
spec:
  type: LoadBalancer
  selector:
    app: restreamer1-service
  ports:
  - protocol: TCP
    port: 1935
    targetPort: 1935
    name: rtml-com
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: http-com
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: rtmp
data:
  1936: "rtmp/restreamer1-service:1935"
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.23.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --tcp-services-configmap=rtmp/tcp-services
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          allowPrivilegeEscalation: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
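For completeness, the exposing-tcp-udp-services guide also expects the new port to be exposed on the ingress-nginx controller itself, not only declared in the ConfigMap: a containerPort 1936 on the controller Deployment and a matching entry on the controller's Service, neither of which appears in the manifests above. A hedged sketch of the Service-side entry, with an illustrative port name:

  # hypothetical extra entry under ports: of the ingress-nginx-controller Service,
  # so that traffic arriving on 1936 reaches the controller, which then proxies it
  # to rtmp/restreamer1-service:1935 via the tcp-services ConfigMap
  - name: proxied-tcp-1936
    port: 1936
    targetPort: 1936
    protocol: TCP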

Kubernetes: Golang application container does not start

I have a simple Golang application which talks to a Postgres database. The Postgres container is working as expected; however, my Golang application does not start.
In my config.go the environment variables are specified as:
type Config struct {
    Port        uint   `env:"PORT" envDefault:"8000"`
    PostgresURL string `env:"POSTGRES_URL" envDefault:"postgres://user:pass@127.0.0.1/simple-service"`
}
My Golang application's Service and Deployment YAML looks like this:
apiVersion: v1
kind: Service
metadata:
  name: simple-service-webapp-service
  labels:
    app: simple-service-webapp
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: simple-service-webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-service-webapp-v1
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-service-webapp
      version: v1
  template:
    metadata:
      labels:
        app: simple-service-webapp
        version: v1
    spec:
      containers:
      - name: simple-service-webapp
        image: docker.io/225517/simple-service-webapp:v1
        resources:
          requests:
            cpu: 100m
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: POSTGRES_HOST
          value: postgresdb
        - name: POSTGRES_PORT
          value: "5432"
        - name: POSTGRES_DB
          value: simple-service
        - name: POSTGRES_USER
          value: user
        - name: POSTGRES_PASSWORD
          value: pass
        readinessProbe:
          httpGet:
            path: /live
            port: 8080
---
Below is the Service and Deployment YAML for the Postgres service:
apiVersion: v1
kind: Service
metadata:
  name: postgresdb
  labels:
    app: postgresdb
spec:
  ports:
  - port: 5432
    name: tcp
  selector:
    app: postgresdb
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresdb-v1
  labels:
    app: postgresdb
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresdb
      version: v1
  template:
    metadata:
      labels:
        app: postgresdb
        version: v1
    spec:
      containers:
      - name: postgresdb
        image: postgres
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: simple-service
        - name: POSTGRES_USER
          value: user
        - name: POSTGRES_PASSWORD
          value: pass
        readinessProbe:
          exec:
            command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
          initialDelaySeconds: 15
          timeoutSeconds: 2
        livenessProbe:
          exec:
            command: ["psql", "-W", "pass", "-U", "user", "-d", "simple-service", "-c", "SELECT 1"]
          initialDelaySeconds: 45
          timeoutSeconds: 2
---
Do I need to change anything in config.go? Could you please help? Thanks!
This is a community wiki answer based on the info from the comments; feel free to expand on it. I'm posting it for better visibility for the community.
The question was solved by setting the appropriate Postgres port (5432) and rewriting the config so it reads the variables that are actually set on the Deployment.
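In other words, the struct in config.go only reads POSTGRES_URL, while the Deployment exports POSTGRES_HOST, POSTGRES_PORT and friends, so the application keeps falling back to the 127.0.0.1 default and never reaches the postgresdb Service. A minimal sketch of the other way to line the two up, keeping the Go code unchanged; the inline credentials are for illustration only and would normally come from a Secret:

        # hypothetical env section for the simple-service-webapp container
        env:
        - name: POSTGRES_URL
          value: "postgres://user:pass@postgresdb:5432/simple-service"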

How does docker-registry persist images in OpenShift Origin?

I'm new to OpenShift/Kubernetes/Docker and I was wondering where the docker registry of OpenShift Origin persists the images, given that:
1. In the docker registry's deployment YAML, there is only an emptyDir volume declaration:
volumes:
  - emptyDir: {}
    name: registry-storage
2. On the machine where the pod is deployed I can't see any volume using docker volume ls.
3. The images are still persisted even if I restart the pod.
The docker registry deployment's YAML:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  creationTimestamp: '2020-04-26T18:16:50Z'
  generation: 1
  labels:
    docker-registry: default
  name: docker-registry
  namespace: default
  resourceVersion: '1844231'
  selfLink: >-
    /apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry
  uid: 1983153d-87ea-11ea-a4bc-fa163ee581f7
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    docker-registry: default
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        docker-registry: default
    spec:
      containers:
      - env:
        - name: REGISTRY_HTTP_ADDR
          value: ':5000'
        - name: REGISTRY_HTTP_NET
          value: tcp
        - name: REGISTRY_HTTP_SECRET
          value:
        - name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
          value: 'false'
        - name: OPENSHIFT_DEFAULT_REGISTRY
          value: 'docker-registry.default.svc:5000'
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: /etc/secrets/registry.crt
        - name: REGISTRY_OPENSHIFT_SERVER_ADDR
          value: 'docker-registry.default.svc:5000'
        - name: REGISTRY_HTTP_TLS_KEY
          value: /etc/secrets/registry.key
        image: 'docker.io/openshift/origin-docker-registry:v3.11'
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTPS
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: registry
        ports:
        - containerPort: 5000
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTPS
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /registry
          name: registry-storage
        - mountPath: /etc/secrets
          name: registry-certificates
      dnsPolicy: ClusterFirst
      nodeSelector:
        node-role.kubernetes.io/infra: 'true'
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: registry
      serviceAccountName: registry
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: registry-storage
      - name: registry-certificates
        secret:
          defaultMode: 420
          secretName: registry-certificates
  test: false
  triggers:
  - type: ConfigChange
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: '2020-04-26T18:17:12Z'
    lastUpdateTime: '2020-04-26T18:17:12Z'
    message: replication controller "docker-registry-1" successfully rolled out
    reason: NewReplicationControllerAvailable
    status: 'True'
    type: Progressing
  - lastTransitionTime: '2020-05-05T09:39:57Z'
    lastUpdateTime: '2020-05-05T09:39:57Z'
    message: Deployment config has minimum availability.
    status: 'True'
    type: Available
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
To restart, I just delete the pod and a new one is created, since I'm using a deployment config.
I'm creating the files in /registry.
Restarting does not mean the data is deleted; it still exists in the container's top layer. I suggest you start by reading this.
Persistence means, for example in Kubernetes, that when a pod is deleted and re-created on another node, it still sees the same state of its volume.
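If persistence across pod deletion and rescheduling onto another node is actually wanted, the usual approach is to back registry-storage with a PersistentVolumeClaim instead of emptyDir. A rough sketch with a hypothetical claim name (the PVC itself would have to be created separately):

      # hypothetical change to the volumes section of the docker-registry DeploymentConfig
      volumes:
      - name: registry-storage
        persistentVolumeClaim:
          claimName: registry-storage-pvc
      - name: registry-certificates
        secret:
          defaultMode: 420
          secretName: registry-certificates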

Kubernetes deployment database connection error

I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I've run into an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
      - name: pv-storage-glpi
        persistentVolumeClaim:
          claimName: pv-claim-glpi
      containers:
      - name: mariadb
        image: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "glpi"
        - name: MYSQL_DATABASE
          value: "glpi"
        - name: MYSQL_USER
          value: "glpi"
        - name: MYSQL_PASSWORD
          value: "glpi"
        - name: GLPI_SOURCE_URL
          value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - mountPath: /var/lib/mariadb/
          name: pv-storage-glpi
          subPath: mariadb
      - name: glpi
        image: driket54/glpi
        ports:
        - containerPort: 80
          name: http
        - containerPort: 8090
          name: https
        volumeMounts:
        - mountPath: /var/glpidata
          name: pv-storage-glpi
          subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: http
    name: http
  - protocol: "TCP"
    port: 8090
    targetPort: https
    name: https
  - protocol: "TCP"
    port: 3306
    targetPort: mariadb
    name: mariadb
  type: NodePort
---
The docker image is properly deployed, but in my test phase, during the setup of the app, I get an error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer since I don't have the expected Kubernetes knowledge, but I can't add a comment yet :(
What you should alter first is your GLPI version.
Use this link; it's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you can use the CLI tools to set up the database:
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(not sure about the host value in your environment, but you get the idea)
