Kubernetes Jenkins deployment fails with "failed to create containerd task" - docker

While deploying the Jenkins pod in our Kubernetes cluster, Kubernetes returns the following error:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/run/docker.sock\\\" to rootfs \\\"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/jenkins/rootfs\\\" at \\\"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/jenkins/rootfs/run\\\" caused \\\"not a directory\\\"\"": unknown
Back-off restarting failed container
My Deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      imagePullSecrets:
        - name: my-secret-key
      containers:
        - name: jenkins
          image: image-repo-url
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
            - name: docker-sock
              mountPath: /var/run/
            - name: docker-storage
              mountPath: /var/lib/docker
          securityContext:
            privileged: true
      volumes:
        - name: jenkins-home
          emptyDir: {}
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
        - name: docker-storage
          emptyDir: {}
For the docker-sock volume I tried:
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
    type: File
--- and ---
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
    type: Socket
Neither works. This configuration actually used to work, but it no longer does.
For the volume mounts I tried:
volumeMounts:
  - name: jenkins-home
    mountPath: /var/jenkins_home
  - name: docker-sock
    mountPath: /var/run/docker.sock
The Deployment is created, but Docker doesn't work inside the pod.
We are using IBM Cloud Kubernetes Service.
Cluster Version:
1.15.11_1533
Kubernetes API versions:
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
batch/v2alpha1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11+IKS", GitCommit:"0562ba8a2dfdd05f7f8721ab4952c02fe1605860", GitTreeState:"clean", BuildDate:"2020-03-13T14:45:42Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

Newer IKS clusters don't have Docker installed; they use containerd as the container runtime, so there is no /var/run/docker.sock on the hosts to mount.
If you still want to run Docker from Jenkins, you can either use the Kubernetes plugin with pods that include dind containers, or rebuild your own Jenkins image based on dind, something like this: https://hub.docker.com/r/vixns/jenkins-dind/
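For the dind-sidecar route, here is a minimal sketch of a pod template the Kubernetes plugin could use. The image tags, the agent container name, and the TLS-disabling env var are illustrative assumptions, not IKS requirements:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent-dind
spec:
  containers:
    - name: jnlp
      image: jenkins/inbound-agent        # standard Jenkins agent image (assumed)
      env:
        - name: DOCKER_HOST               # point the docker CLI at the sidecar
          value: tcp://localhost:2375
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true                  # dind must run privileged
      env:
        - name: DOCKER_TLS_CERTDIR        # empty value disables TLS; acceptable for pod-local traffic
          value: ""
      volumeMounts:
        - name: docker-graph-storage
          mountPath: /var/lib/docker
  volumes:
    - name: docker-graph-storage
      emptyDir: {}
Both containers share the pod's loopback, so the agent's docker CLI reaches the daemon at localhost:2375 without touching the host.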

Related

Chaincode Build Failed in Hyperledger Fabric on Kubernetes

Deploying Hyperledger Fabric v2.0 in Kubernetes
I am trying to deploy a sample chaincode in a private Kubernetes cluster running in Azure. After creating the nodes, the install-chaincode operation fails and throws the error below. I am only using a single Kubernetes cluster.
Error:
chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build chaincode: docker build failed: docker image inspection failed: cannot connect to Docker endpoint
command terminated with exit code 1
Below is the peer configuration template for the Deployment, Service & ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ${PEER}
  name: ${PEER}
  namespace: ${ORG}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${PEER}
  strategy: {}
  template:
    metadata:
      labels:
        app: ${PEER}
    spec:
      containers:
        - name: couchdb
          image: blockchainpractice.azurecr.io/hyperledger/fabric-couchdb
          env:
            - name: COUCHDB_USER
              value: couchdb
            - name: COUCHDB_PASSWORD
              value: couchdb
          ports:
            - containerPort: 5984
        - name: fabric-peer
          image: blockchainpractice.azurecr.io/hyperledger/fabric-peer:2.0
          resources: {}
          envFrom:
            - configMapRef:
                name: ${PEER}
          volumeMounts:
            - name: dockersocket
              mountPath: "/host/var/run/docker.sock"
            - name: ${PEER}
              mountPath: "/etc/hyperledger/fabric-peer"
            - name: client-root-tlscas
              mountPath: "/etc/hyperledger/fabric-peer/client-root-tlscas"
      volumes:
        - name: dockersocket
          hostPath:
            path: "/var/run/docker.sock"
        - name: ${PEER}
          secret:
            secretName: ${PEER}
            items:
              - key: key.pem
                path: msp/keystore/key.pem
              - key: cert.pem
                path: msp/signcerts/cert.pem
              - key: tlsca-cert.pem
                path: msp/tlsca/tlsca-cert.pem
              - key: ca-cert.pem
                path: msp/cacerts/ca-cert.pem
              - key: config.yaml
                path: msp/config.yaml
              - key: tls.crt
                path: tls/tls.crt
              - key: tls.key
                path: tls/tls.key
              - key: orderer-tlsca-cert.pem
                path: orderer-tlsca-cert.pem
              - key: core.yaml
                path: core.yaml
        - name: client-root-tlscas
          secret:
            secretName: client-root-tlscas
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: ${PEER}
  namespace: ${ORG}
data:
  CORE_PEER_ADDRESSAUTODETECT: "true"
  CORE_PEER_ID: ${PEER}
  CORE_PEER_LISTENADDRESS: 0.0.0.0:7051
  CORE_PEER_PROFILE_ENABLED: "true"
  CORE_PEER_LOCALMSPID: ${ORG_MSP}
  CORE_PEER_MSPCONFIGPATH: /etc/hyperledger/fabric-peer/msp
  # Gossip
  CORE_PEER_GOSSIP_BOOTSTRAP: peer0.${ORG}:7051
  CORE_PEER_GOSSIP_EXTERNALENDPOINT: "${PEER}.${ORG}:7051"
  CORE_PEER_GOSSIP_ORGLEADER: "false"
  CORE_PEER_GOSSIP_USELEADERELECTION: "true"
  # TLS
  CORE_PEER_TLS_ENABLED: "true"
  CORE_PEER_TLS_CERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
  CORE_PEER_TLS_KEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
  CORE_PEER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/msp/tlsca/tlsca-cert.pem"
  CORE_PEER_TLS_CLIENTAUTHREQUIRED: "false"
  ORDERER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/orderer-tlsca-cert.pem"
  CORE_PEER_TLS_CLIENTROOTCAS_FILES: "/etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.${ORG}-cert.pem"
  CORE_PEER_TLS_CLIENTCERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
  CORE_PEER_TLS_CLIENTKEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
  # Docker
  CORE_PEER_NETWORKID: ${ORG}-fabnet
  CORE_VM_ENDPOINT: unix:///host/var/run/docker.sock
  CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: "bridge"
  # CouchDB
  CORE_LEDGER_STATE_STATEDATABASE: CouchDB
  CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: 0.0.0.0:5984
  CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME: couchdb
  CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD: couchdb
  # Logging
  CORE_LOGGING_PEER: "info"
  CORE_LOGGING_CAUTHDSL: "info"
  CORE_LOGGING_GOSSIP: "info"
  CORE_LOGGING_LEDGER: "info"
  CORE_LOGGING_MSP: "info"
  CORE_LOGGING_POLICIES: "debug"
  CORE_LOGGING_GRPC: "info"
  GODEBUG: "netdns=go"
---
apiVersion: v1
kind: Service
metadata:
  name: ${PEER}
  namespace: ${ORG}
spec:
  selector:
    app: ${PEER}
  ports:
    - name: request
      port: 7051
      targetPort: 7051
    - name: event
      port: 7053
      targetPort: 7053
  type: LoadBalancer
Can anyone help me out? Thanks in advance.
I'd suggest looking at the K8s test network deployment in fabric-samples (https://github.com/hyperledger/fabric-samples/tree/main/test-network-k8s).
Note that the classic way the peer creates chaincode is to build a new Docker container via the Docker daemon. This really doesn't sit well with K8s, so the chaincode-as-a-service approach is strongly recommended.
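For reference, chaincode-as-a-service relies on the peer's external builders. Here is a hedged sketch of the relevant core.yaml section, assuming the ccaas builder shipped with recent fabric-peer images at the path used by fabric-samples; verify the path for your image:
chaincode:
  externalBuilders:
    - name: ccaas_builder
      path: /opt/hyperledger/ccaas_builder   # path as laid out in fabric-samples; an assumption here
      propagateEnvironment:
        - CHAINCODE_AS_A_SERVICE_BUILDER_CONFIG
With this approach the peer connects to a long-running chaincode pod over gRPC instead of asking a Docker daemon to build and run images, so the docker.sock hostPath mount and CORE_VM_ENDPOINT above are no longer needed.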

Deploy Dind and secure Docker Registry on Kubernetes (colon issues)

I have an issue with one of my projects. Here is what I want to do:
Have a private Docker registry on my Kubernetes cluster
Have a Docker daemon running so that I can pull/push and build images directly inside the cluster
For this project I'm using certificates to secure all those interactions.
1. How to reproduce:
Note: I'm working on a Linux-based system
Here are the files that I'm using:
Deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: docker-graph-storage
              mountPath: /var/lib/docker
            - name: dind-registry-cert
              mountPath: >-
                /etc/docker/certs.d/registry:5000/ca.crt
          ports:
            - containerPort: 2376
      volumes:
        - name: docker-graph-storage
          emptyDir: {}
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
        - name: init-reg-vol
          secret:
            secretName: init-reg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          env:
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: REGISTRY_HTTP_TLS_KEY
              value: /certs/registry.pem
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: dind-registry-cert
              mountPath: /certs/
            - name: registry-data
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: registry
        - name: registry-data
          persistentVolumeClaim:
            claimName: registry-data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: docker
          command: ['sleep', '200']
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          env:
            - name: DOCKER_HOST
              value: tcp://docker:2376
            - name: DOCKER_TLS_VERIFY
              value: '1'
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: DOCKER_CERT_PATH
              value: /certs/client
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: true
            - name: dind-registry-cert
              mountPath: /usr/local/share/ca-certificate/ca.crt
              readOnly: true
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
Services.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: docker
spec:
  selector:
    app: docker
  ports:
    - name: docker
      protocol: TCP
      port: 2376
      targetPort: 2376
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
    - name: registry
      protocol: TCP
      port: 5000
      targetPort: 5000
Pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certs-client
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    limits:
      storage: 50Gi
    requests:
      storage: 2Gi
status: {}
For the cert files I have the folders certs/, certs/client/ and certs.d/registry:5000/, and I use these commands to generate the certs:
openssl req -newkey rsa:4096 -nodes -keyout ./certs/registry.pem -x509 -days 365 -out ./certs/registry.crt -subj "/C=''/ST=''/L=''/O=''/OU=''/CN=registry"
cp ./certs/registry.crt ./certs.d/registry\:5000/ca.crt
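As an optional sanity check (hedged, not part of the original steps), the generated certificate's subject and validity window can be inspected before wiring it into the cluster:
openssl x509 -in certs/registry.crt -noout -subject -dates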
Then I use secrets to pass those certs into the pods:
kubectl create secret generic registry --from-file=certs/registry.crt --from-file=certs/registry.pem
kubectl create secret generic ca.crt --from-file=certs/registry.crt
Then, to launch the project, the following line is used:
kubectl apply -f pvc.yaml,deployment.yaml,service.yaml
2. My issues
I have a problem on my docker pods with this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/727d0f2a-bef6-4217-a292-427c5d76e071/volumes/kubernetes.io~secret/dind-registry-cert:/etc/docker/certs.d/registry:5000/ca.crt:ro
So the problem seems to come from the colon in the path name. I then tried to escape the colon and got this sublime error:
error: error parsing deployment.yaml: error converting YAML to JSON: yaml: line 34: found unknown escape character
The real problem here is that if the folder is not named 'registry:5000', the certificate is not recognized as correct and I get an x509 error when trying to push an image from the client.
For the overall project I know it can work like this, since I already managed to deploy it locally with a docker-compose (here is the link to the github project if any of you are curious).
So I looked into it a bit and found out that this is a recurring problem with Docker itself (I mean with Docker Desktop, for mounting volumes into containers), but I can't find anything about the same issue on Kubernetes.
Do any of you have any lead / suggestion / workaround on this matter?
As always, thanks for your time :)
------------------------------- EDIT following #HelloWorld's answer -------------------------------
Thanks to the symlink workaround, the ca.crt is now correctly mounted inside. However, since I was mounting it in the deployment that runs the Docker daemon, the entrypoint of the docker:dind container was overwritten by my command. For future readers, here is the solution I found: fetch dockerd-entrypoint.sh and run it manually.
Here is the deployment as I write these lines:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /random/registry.crt /etc/docker/certs.d/registry:5000/ca.crt && wget https://raw.githubusercontent.com/docker-library/docker/a73d96e731e2dd5d6822c99a9af4dcbfbbedb2be/19.03/dind/dockerd-entrypoint.sh && chmod +x dockerd-entrypoint.sh && ./dockerd-entrypoint.sh']
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: false
            - name: dind-registry-cert
              mountPath: /random/
              readOnly: false
          ports:
            - containerPort: 2376
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
I hope it will be useful for someone in the future :)
The only thing I could come up with is using symlinks. I tested it and it works. I also tried searching for a better solution but didn't find anything satisfying.
Have a look at this example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: centos:7
      command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /some/random/path/ca.crt /etc/docker/certs.d/registry:5000/ca.crt && exec sleep 10000']
      volumeMounts:
        - mountPath: '/some/random/path'
          name: registry-cert
  volumes:
    - name: registry-cert
      secret:
        secretName: my-secret
And here is the template secret I used:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  ca.crt: <<< some_random_Data >>>
I mounted this secret at the /some/random/path location (no colon, so it doesn't throw errors) and created a symlink between /some/random/path/ca.crt and /etc/docker/certs.d/registry:5000/ca.crt.
Of course, you also need to create the directory structure before running ln -s ..., which is why I run mkdir -p ... first.
Let me know if you have any further questions. I'd be happy to answer them.
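As a quick usage check (a hedged example, using the pod and paths from above), one can exec into the pod and confirm the symlink resolves:
kubectl exec myapp-pod -- ls -l /etc/docker/certs.d/registry:5000/
# expected (illustrative): ca.crt -> /some/random/path/ca.crt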

Kubernetes DaemonSet Permission Denied on mounted Volume - Docker in Docker dind

I tried running a simple DaemonSet on a kube cluster. The idea was that other kube pods would connect to that container's Docker daemon (dockerd) and execute commands against it. (The other pods are Jenkins slaves and would just have the env var DOCKER_HOST pointing to tcp://localhost:2375.) In short, the config looks like this:
dind.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dind
spec:
  selector:
    matchLabels:
      name: dind
  template:
    metadata:
      labels:
        name: dind
    spec:
      # tolerations:
      #   - key: node-role.kubernetes.io/master
      #     effect: NoSchedule
      containers:
        - name: dind
          image: docker:18.05-dind
          resources:
            limits:
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 500Mi
          volumeMounts:
            - name: dind-storage
              mountPath: /var/lib/docker
      volumes:
        - name: dind-storage
          emptyDir: {}
Error message when running:
mount: mounting none on /sys/kernel/security failed: Permission denied
Could not mount /sys/kernel/security.
AppArmor detection and --privileged mode might break.
mount: mounting none on /tmp failed: Permission denied
I took the idea from a Medium post that doesn't describe it fully: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25, describing Docker-outside-of-Docker, Docker-in-Docker and Kaniko.
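These mount failures are what dind typically prints when its container is not running privileged. A minimal sketch of the piece missing from the DaemonSet above, inferred from the working configuration in the answer that follows:
containers:
  - name: dind
    image: docker:18.05-dind
    securityContext:
      privileged: true   # required so dind can mount /sys/kernel/security, /tmp, etc.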
Found the solution:
apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
    - name: jenkins-slave
      image: gcr.io/<my-project>/myimg  # it has docker installed on it
      command: ['docker', 'run', '-p', '80:80', 'httpd:latest']
      resources:
        requests:
          cpu: 10m
          memory: 256Mi
      env:
        - name: DOCKER_HOST
          value: tcp://localhost:2375
    - name: dind-daemon
      image: docker:18.05-dind
      resources:
        requests:
          cpu: 20m
          memory: 512Mi
      securityContext:
        privileged: true
      volumeMounts:
        - name: docker-graph-storage
          mountPath: /var/lib/docker
  volumes:
    - name: docker-graph-storage
      emptyDir: {}
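Once the pod is running, the wiring can be verified from the slave container (a hedged example; container names as above):
kubectl exec dind -c jenkins-slave -- docker ps
# DOCKER_HOST=tcp://localhost:2375 is set in the container's env,
# so the docker CLI talks to the dind-daemon sidecar over the pod's loopback.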

Kubernetes - Help Needed - FailedMount 3m38s

I have created a Jenkins cluster on Kubernetes (master + 2 workers) with local volumes on the master node.
I created a persistent volume of 2 GB, and the claim is 1 GB.
I created a deployment with the image jenkins/jenkins:lts and a volume mount from /var/jenkins_home to the PVC.
I have already copied the data to the local folder backing the persistent volume, but I am not able to see my jobs on the Jenkins server.
kubectl describe pod dep-jenkins-8648454f65-4v8tb
Events:
  Type     Reason       Age                      From                     Message
  ----     ------       ----                     ----                     -------
  Warning  FailedMount  3m38s (x149 over 4h50m)  kubelet, kube-worker001  MountVolume.SetUp failed for volume "default-token-424m4" : secret "default-token-424m4" not found
What is the correct way to mount a local directory in a pod so that I can transfer my Jenkins data to the newly created Jenkins server on Kubernetes?
It looks like the warning in your pod description is related to mounting a secret, not any PV. To set up your JENKINS_HOME as a persistent volume you would do something like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: my-jenkins-image
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home
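The Deployment above references a claim named jenkins-home that still has to exist. A hedged sketch of a matching PVC follows; the size and access mode are assumptions, sized to the 1 GB claim mentioned in the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home
spec:
  accessModes:
    - ReadWriteOnce        # a single Jenkins replica writes to it
  resources:
    requests:
      storage: 1Gi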

How to pass docker container flags via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a mailhog container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          ports:
            - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding, under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get:
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to #lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
        - name: secrets-volume
          secret:
            secretName: mailhog-login
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          resources:
            limits:
              cpu: 70m
              memory: 30Mi
            requests:
              cpu: 50m
              memory: 20Mi
          volumeMounts:
            - name: secrets-volume
              mountPath: /data/mailhog
              readOnly: true
          ports:
            - containerPort: 8025
            - containerPort: 1025
          args:
            - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of Docker's ENTRYPOINT and args is the equivalent of CMD. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
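To make the mapping concrete, a short hedged sketch; the image and flag values are illustrative, not from the question:
# Dockerfile:  ENTRYPOINT ["/app/server"]   CMD ["--port=8080"]
containers:
  - name: example
    image: example-image            # hypothetical image
    command: ["/app/server"]        # overrides ENTRYPOINT
    args: ["--port=9090"]           # overrides CMD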
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed something similar (my aim was passing the application profile to the app), and what I did was the following:
Set an environment variable in the Deployment section of the Kubernetes YAML file:
env:
  - name: PROFILE
    value: "dev"
Then use this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
