Pods start command in kubernetes - docker

I would like to add the docker option --user $(id -u):$(id -g) to my k8s deployment definition. What is the equivalent for that in k8s?
args or command?
How the container gets started normally:
docker run -d -p 5901:5901 -p 6901:6901 --user $(id -u):$(id -g) khwhahn/daedalus:0.1
k8s deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose --file docker-compose.yaml convert
    kompose.version: 1.10.0 (8bb0907)
  creationTimestamp: null
  labels:
    io.kompose.service: daedalus
  name: daedalus
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: daedalus
    spec:
      containers:
      - env:
        - name: DISPLAY
        image: khwhahn/daedalus:0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 5901
          name: vnc
          protocol: TCP
        - containerPort: 6901
          name: http
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: 6901
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /
            port: 6901
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        name: daedalus
        resources: {}
        volumeMounts:
        - mountPath: /tmp/.X11-unix
          name: daedalus-claim0
        - mountPath: /home/daedalus/daedalus/tls
          name: cardano-tls
      restartPolicy: Always
      volumes:
      - name: daedalus-claim0
        persistentVolumeClaim:
          claimName: daedalus-claim0
      - name: cardano-tls
        persistentVolumeClaim:
          claimName: cardano-tls
status: {}
Thanks

That was requested initially in kubernetes issue 22179.
Implemented partially in:
PR 52077: "API Changes for RunAsGroup",
PR 756: "Allow specifying the primary group id of the container"
PodSecurityContext allows Kubernetes users to specify RunAsUser, which can be overridden by RunAsUser in SecurityContext on a per-container basis.
Introduce a new API field in SecurityContext and PodSecurityContext called RunAsGroup.
See "Configure a Security Context for a Pod or Container".

Related

Apache server runs with docker run but kubernetes pod fails with CrashLoopBackOff

My application uses the apache2 web server. Due to restrictions in the Kubernetes cluster, I do not have root privileges inside the pod. So I have changed the default port of apache2 from 80 to 8080 to be able to run as a non-root user.
My problem is that once I build the docker image and run it locally it runs fine, but when I deploy it to the cluster using Kubernetes it keeps failing with:
Action '-D FOREGROUND' failed.
resulting in CrashLoopBackOff.
So, basically the apache2 server is not able to run in the pod as a non-root user, but runs fine locally with docker run.
Any help is appreciated.
I am attaching my deployment and service files for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: &DeploymentName app
spec:
  replicas: 1
  selector:
    matchLabels: &appName
      app: *DeploymentName
  template:
    metadata:
      name: main
      labels:
        <<: *appName
    spec:
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsGroup: 3000
      volumes:
      - name: var-lock
        emptyDir: {}
      containers:
      - name: *DeploymentName
        image: image:id
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /etc/apache2/conf-available
          name: var-lock
        - mountPath: /var/lock/apache2
          name: var-lock
        - mountPath: /var/log/apache2
          name: var-lock
        - mountPath: /mnt/log/apache2
          name: var-lock
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 180
          periodSeconds: 60
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 180
        imagePullPolicy: Always
        tty: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: *DeploymentName
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 1
            memory: 2Gi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: &hpaName app
spec:
  maxReplicas: 1
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: *hpaName
  targetCPUUtilizationPercentage: 60
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  name: app
spec:
  selector:
    app: app
  ports:
  - protocol: TCP
    name: http-web-port
    port: 80
    targetPort: 8080
  - protocol: TCP
    name: https-web-port
    port: 443
    targetPort: 443
CrashLoopBackOff is a common error in Kubernetes, indicating a pod constantly crashing in an endless loop.
The CrashLoopBackOff error can be caused by a variety of issues, including:
Insufficient resources: a lack of resources prevents the container from loading.
Locked file: a file was already locked by another container.
Locked database: the database is being used and locked by other pods.
Failed reference: a reference to scripts or binaries that are not present in the container.
Setup error: an issue with the init-container setup in Kubernetes.
Config loading error: a server cannot load the configuration file.
Misconfigurations: a general file system misconfiguration.
Connection issues: DNS or kube-dns is not able to connect to a third-party service.
Deploying failed services: an attempt to deploy services/applications that have already failed (e.g. due to a lack of access to other services).
To fix the Kubernetes CrashLoopBackOff error, refer to this link and also check out the Stack Overflow post for more information.
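Before working through that list, it usually helps to pull the pod's events and the logs of the crashed container (pod and container names below are placeholders):
kubectl describe pod <pod-name>              # check the Events section and Last State
kubectl logs <pod-name> --previous           # logs of the previous, crashed container
kubectl logs <pod-name> -c <container-name>  # a specific container in a multi-container pod
For the apache case above, the previous-container logs typically show which file or directory the non-root user cannot write to.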

Nginx Daemonset - How do I select which IP address to bind to?

I have a Docker Enterprise k8s bare-metal cluster running on CentOS 8, and I am following the official docs to install NGINX using manifest files from Git: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
The pod seems to be running:
kubectl -n nginx-ingress describe pod nginx-ingress-fzr2j
Name: nginx-ingress-fzr2j
Namespace: nginx-ingress
Priority: 0
Node: server.example.com/172.16.1.180
Start Time: Sun, 16 Aug 2020 16:48:49 -0400
Labels: app=nginx-ingress
controller-revision-hash=85879fb7bc
pod-template-generation=2
Annotations: kubernetes.io/psp: privileged
Status: Running
IP: 192.168.225.27
IPs:
IP: 192.168.225.27
But my issue is that the IP address it has selected is 192.168.225.27. This is a second network on this server. How do I tell nginx to use the 172.16.1.180 address that it has in the Node: part?
The DaemonSet config is:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:edge
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: readiness-port
          containerPort: 8081
        #- name: prometheus
        #  containerPort: 9113
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
I can't see any configuration option for which IP address to bind to.
The thing you are likely looking for is hostNetwork: true, which:
Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false
spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - image: nginx/nginx-ingress:edge
        name: nginx-ingress
You would only then need to specify a bind address if it bothered you having the Ingress controller bound to all addresses on the host. If that's still a requirement, you can have the Node's IP injected via the valueFrom: mechanism:
...
containers:
- env:
  - name: MY_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
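With hostNetwork: true the pod shares the node's network namespace, so the pod IP reported by Kubernetes should become the node address (172.16.1.180 here). A quick way to confirm:
kubectl -n nginx-ingress get pods -o wide   # the IP column should now show the node's address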

How does docker-registry persist images in OpenShift Origin

I'm new to OpenShift/Kubernetes/Docker and I was wondering where the docker registry of OpenShift Origin persists the images, knowing that:
1. In the deployment's yaml of the docker registry, there is only an emptyDir volume declaration:
volumes:
- emptyDir: {}
  name: registry-storage
2. On the machine where the pod is deployed I can't see any volume using
docker volume ls
3. The images are still persisted even if I restart the pod.
Docker registry deployment's yaml:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  creationTimestamp: '2020-04-26T18:16:50Z'
  generation: 1
  labels:
    docker-registry: default
  name: docker-registry
  namespace: default
  resourceVersion: '1844231'
  selfLink: >-
    /apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry
  uid: 1983153d-87ea-11ea-a4bc-fa163ee581f7
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    docker-registry: default
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        docker-registry: default
    spec:
      containers:
      - env:
        - name: REGISTRY_HTTP_ADDR
          value: ':5000'
        - name: REGISTRY_HTTP_NET
          value: tcp
        - name: REGISTRY_HTTP_SECRET
          value:
        - name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
          value: 'false'
        - name: OPENSHIFT_DEFAULT_REGISTRY
          value: 'docker-registry.default.svc:5000'
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: /etc/secrets/registry.crt
        - name: REGISTRY_OPENSHIFT_SERVER_ADDR
          value: 'docker-registry.default.svc:5000'
        - name: REGISTRY_HTTP_TLS_KEY
          value: /etc/secrets/registry.key
        image: 'docker.io/openshift/origin-docker-registry:v3.11'
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTPS
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: registry
        ports:
        - containerPort: 5000
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTPS
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /registry
          name: registry-storage
        - mountPath: /etc/secrets
          name: registry-certificates
      dnsPolicy: ClusterFirst
      nodeSelector:
        node-role.kubernetes.io/infra: 'true'
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: registry
      serviceAccountName: registry
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: registry-storage
      - name: registry-certificates
        secret:
          defaultMode: 420
          secretName: registry-certificates
  test: false
  triggers:
  - type: ConfigChange
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: '2020-04-26T18:17:12Z'
    lastUpdateTime: '2020-04-26T18:17:12Z'
    message: replication controller "docker-registry-1" successfully rolled out
    reason: NewReplicationControllerAvailable
    status: 'True'
    type: Progressing
  - lastTransitionTime: '2020-05-05T09:39:57Z'
    lastUpdateTime: '2020-05-05T09:39:57Z'
    message: Deployment config has minimum availability.
    status: 'True'
    type: Available
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
To restart: I just delete the pod and a new one is created, since I'm using a deployment.
I'm creating the file in /registry.
Restarting does not mean the data is deleted; it still exists in the container's top layer. I suggest you get started by reading this.
Persistence means, for example in Kubernetes, that when a pod is deleted and re-created on another node it still maintains the same state of a volume.
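For reference, an emptyDir volume is backed by a plain directory on the node, which is also why docker volume ls shows nothing: it is not a Docker volume. It survives container restarts inside the same pod but is removed once the pod is deleted from that node. A rough way to look at it (the path layout is the usual kubelet default; the pod UID is a placeholder):
# run on the node where the registry pod is scheduled
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/registry-storage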

Kubernetes - nginx-ingress is crashing after file upload via php

I'm running a Kubernetes cluster on Google Cloud Platform via their Kubernetes Engine. The cluster version is 1.13.11-gke.14. The PHP application pod contains 2 containers - Nginx as a reverse proxy and php-fpm (7.2).
In Google Cloud a TCP Load Balancer is used, with internal routing via Nginx Ingress behind it.
The problem is:
when I upload a bigger file (17 MB), the ingress crashes with this error:
W 2019-12-01T14:26:06.341588Z Dynamic reconfiguration failed: Post http+unix://nginx-status/configuration/backends: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E 2019-12-01T14:26:06.341658Z Unexpected failure reconfiguring NGINX:
W 2019-12-01T14:26:06.345575Z requeuing initial-sync, err Post http+unix://nginx-status/configuration/backends: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
I 2019-12-01T14:26:06.354869Z Configuration changes detected, backend reload required.
E 2019-12-01T14:26:06.393528796Z Post http+unix://nginx-status/configuration/backends: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E 2019-12-01T14:26:08.077580Z healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: connection refused
I 2019-12-01T14:26:12.314526990Z 10.132.0.25 - [10.132.0.25] - - [01/Dec/2019:14:26:12 +0000] "GET / HTTP/2.0" 200 541 "-" "GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring)" 99 1.787 [bap-staging-bap-staging-80] [] 10.102.2.4:80 553 1.788 200 5ac9d438e5ca31618386b35f67e2033b
E 2019-12-01T14:26:12.455236Z healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: connection refused
I 2019-12-01T14:26:13.156963Z Exiting with 0
Here is the YAML configuration of the Nginx ingress. The configuration is the default set up by GitLab's system, which creates the cluster on its own.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2019-11-24T17:35:04Z"
  generation: 3
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "2638973"
  selfLink: /apis/apps/v1/namespaces/gitlab-managed-apps/deployments/ingress-nginx-ingress-controller
  uid: bfb695c2-0ee0-11ea-a36a-42010a84009f
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      release: ingress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: nginx-ingress
        component: controller
        release: ingress
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=gitlab-managed-apps/ingress-nginx-ingress-default-backend
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=gitlab-managed-apps/ingress-nginx-ingress-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nginx/modsecurity/modsecurity.conf
          name: modsecurity-template-volume
          subPath: modsecurity.conf
        - mountPath: /var/log/modsec
          name: modsecurity-log-volume
      - args:
        - /bin/sh
        - -c
        - tail -f /var/log/modsec/audit.log
        image: busybox
        imagePullPolicy: Always
        name: modsecurity-log
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log/modsec
          name: modsecurity-log-volume
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ingress-nginx-ingress
      serviceAccountName: ingress-nginx-ingress
      terminationGracePeriodSeconds: 60
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: modsecurity.conf
            path: modsecurity.conf
          name: ingress-nginx-ingress-controller
        name: modsecurity-template-volume
      - emptyDir: {}
        name: modsecurity-log-volume
I have no idea what else to try. I'm running the cluster on 3 nodes (2x 1 vCPU, 1.5 GB RAM and 1x preemptible 2 vCPU, 1.8 GB RAM), all of them on SSD drives.
Any time I upload the image, disk I/O goes crazy.
[Charts: Disk IOPS and Disk I/O spiking during the upload]
Thanks for your help.
Found the solution. The nginx-ingress pod contained ModSecurity too. All requests were analyzed by ModSecurity, and bigger uploaded files caused those crashes. It wasn't a crash at all; it just took so much CPU and I/O that health-check responses to all other pods slowed down. The solution is to configure ModSecurity correctly or disable it.
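If you want ModSecurity out of the request path entirely, the usual switch for this controller is its ConfigMap (enable-modsecurity is the standard ingress-nginx option; double-check it against the 0.25.1 controller version used here):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-ingress-controller   # the name passed via --configmap above
  namespace: gitlab-managed-apps
data:
  enable-modsecurity: "false"              # disables the ModSecurity module globally
It can also be toggled per Ingress with the nginx.ingress.kubernetes.io/enable-modsecurity annotation if only some routes need it.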

How to replace field values in an Angular config.json using environment variables in Kubernetes and nginx, in a VSTS CI & CD pipeline

I am trying to dynamically replace the authenticationEndpoint URL and other configuration in the config.json of an Angular project using environment variables in Kubernetes. For that I configured the environment variable in the Helm chart used by the VSTS CI & CD pipeline. But I am not sure how the config.json field will be replaced with the environment variable in Kubernetes. Could you please help me with this?
Env in the pod (Kubernetes), from running the printenv command:
authenticationEndpoint=http://localhost:8888/security/auth
config.json
{
  "authenticationEndpoint": "http://localhost:8080/Security/auth",
  "authenticationClientId": "my-project",
  "baseApiUrl": "http://localhost:8080/",
  "homeUrl": "http://localhost:4300/"
}
Generated yaml file from helm chart
# Source: sample-web/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: cloying-rattlesnake-sample-web
  labels:
    app.kubernetes.io/name: sample-web
    helm.sh/chart: sample-web-0.1.0
    app.kubernetes.io/instance: cloying-rattlesnake
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: sample-web
    app.kubernetes.io/instance: cloying-rattlesnake
---
# Source: sample-web/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloying-rattlesnake-sample-web
  labels:
    app.kubernetes.io/name: sample-web
    helm.sh/chart: sample-web-0.1.0
    app.kubernetes.io/instance: cloying-rattlesnake
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-web
      app.kubernetes.io/instance: cloying-rattlesnake
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-web
        app.kubernetes.io/instance: cloying-rattlesnake
    spec:
      containers:
      - name: sample-web
        image: "sample-web:stable"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
        env:
        - name: authenticationEndpoint
          value: "http://localhost:8080/security/auth"
        resources:
          {}
---
# Source: sample-web/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cloying-rattlesnake-sample-web
  labels:
    app.kubernetes.io/name: sample-web
    helm.sh/chart: sample-web-0.1.0
    app.kubernetes.io/instance: cloying-rattlesnake
    app.kubernetes.io/managed-by: Tiller
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: ""
    http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: cloying-rattlesnake-sample-web
          servicePort: 80
Absolute path of config.json:
Ran shell cmd - kubectl exec -it sample-web-55b71d19c6-v82z4 /bin/sh
path: /usr/share/nginx/html/config.json
Use an init container to modify your config.json when the pod starts.
Updated Deployment.yaml:
# Source: sample-web/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloying-rattlesnake-sample-web
  labels:
    app.kubernetes.io/name: sample-web
    helm.sh/chart: sample-web-0.1.0
    app.kubernetes.io/instance: cloying-rattlesnake
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-web
      app.kubernetes.io/instance: cloying-rattlesnake
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-web
        app.kubernetes.io/instance: cloying-rattlesnake
    spec:
      initContainers:
      - name: init-myconfig
        image: busybox:1.28
        command: ['sh', '-c', 'cat /usr/share/nginx/html/config.json | sed -e "s#\$authenticationEndpoint#$authenticationEndpoint#g" > /tmp/config.json && cp /tmp/config.json /usr/share/nginx/html/config.json']
        env:
        - name: authenticationEndpoint
          value: "http://localhost:8080/security/auth"
      containers:
      - name: sample-web
        image: "sample-web:stable"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
        env:
        - name: authenticationEndpoint
          value: "http://localhost:8080/security/auth"
        volumeMounts:
        - mountPath: /usr/share/nginx/html/config.json
          name: config-volume
      volumes:
      - name: config-volume
        hostPath:
          path: /mnt/data.json # Create this file on the host where the pod starts. Content below.
          type: File
Create the /mnt/data.json file on the host where the pod starts:
{
  "authenticationEndpoint": "$authenticationEndpoint",
  "authenticationClientId": "my-project",
  "baseApiUrl": "http://localhost:8080/",
  "homeUrl": "http://localhost:4300/"
}
I have found a simple solution. Using a shell script, I apply the same sed commands to replace the content of config.json and then start Nginx to run the application. It works.
Config.json
{
  "authenticationEndpoint": "$AUTHENTICATION_ENDPOINT",
  "authenticationClientId": "$AUTHENTICATION_CLIENT_ID",
  "baseApiUrl": "http://localhost:8080/",
  "homeUrl": "http://localhost:4300/"
}
setup.sh
sed -i -e 's#$AUTHENTICATION_ENDPOINT#'"$AUTHENTICATION_ENDPOINT"'#g' /usr/share/nginx/html/config.json
sed -i -e 's#$AUTHENTICATION_CLIENT_ID#'"$AUTHENTICATION_CLIENT_ID"'#g' /usr/share/nginx/html/config.json
nginx -g 'daemon off;'
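For completeness, a sketch of how such a setup.sh is typically wired into the pod so the substitution runs before nginx starts (the names and values below mirror the earlier Helm output and are assumptions, not part of the original answer):
containers:
- name: sample-web
  image: "sample-web:stable"
  command: ["/bin/sh", "-c", "/setup.sh"]   # assumes setup.sh is baked into the image
  env:
  - name: AUTHENTICATION_ENDPOINT
    value: "http://localhost:8080/security/auth"
  - name: AUTHENTICATION_CLIENT_ID
    value: "my-project"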
