Image update in the external docker registry doesn't trigger deployment - docker

I am importing the following template in the OpenShift web console to create an ImageStream, DeploymentConfig & Service.
The ImageStream is created from a Docker image available on an external Docker registry.
Everything seems to be working fine apart from the fact that whenever the Docker image changes in the external registry, no redeployment takes place.
Is it possible with OpenShift & an external registry to trigger automatic deployments when the Docker image changes in the external registry?
{
"kind": "Template",
"apiVersion": "v1",
"metadata": {
"name": "test-100"
},
"objects": [
{
"kind": "ImageStream",
"apiVersion": "image.openshift.io/v1",
"metadata": {
"name": "test-100",
"creationTimestamp": null,
"labels": {
"app": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"lookupPolicy": {
"local": false
},
"tags": [
{
"name": "latest",
"annotations": {
"openshift.io/imported-from": "artifactory.company.com/docker-dev-local/test/dev/test:latest"
},
"from": {
"kind": "DockerImage",
"name": "artifactory.company.com/docker-dev-local/test/dev/test:latest"
},
"generation": null,
"importPolicy": {},
"referencePolicy": {
"type": ""
}
}
]
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "apps.openshift.io/v1",
"metadata": {
"name": "test-100",
"creationTimestamp": null,
"labels": {
"app": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"strategy": {
"resources": {}
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"test-100"
],
"from": {
"kind": "ImageStreamTag",
"name": "test-100:latest"
}
}
}
],
"replicas": 1,
"test": false,
"selector": {
"app": "test-100",
"deploymentconfig": "test-100"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "test-100",
"deploymentconfig": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"containers": [
{
"name": "test-100",
"image": "artifactory.company.com/docker-dev-local/test/dev/test:latest",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
},
{
"containerPort": 8443,
"protocol": "TCP"
},
{
"containerPort": 8778,
"protocol": "TCP"
}
],
"resources": {}
}
]
}
}
}
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "test-100",
"creationTimestamp": null,
"labels": {
"app": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"ports": [
{
"name": "8080-tcp",
"protocol": "TCP",
"port": 8080,
"targetPort": 8080
},
{
"name": "8443-tcp",
"protocol": "TCP",
"port": 8443,
"targetPort": 8443
},
{
"name": "8778-tcp",
"protocol": "TCP",
"port": 8778,
"targetPort": 8778
}
],
"selector": {
"app": "test-100",
"deploymentconfig": "test-100"
}
}
}
]
}

OpenShift cannot detect image changes in an external registry on its own, so you should configure importPolicy.scheduled: true to periodically re-import the image.
For example, you can configure the importPolicy on every image tag:
apiVersion: v1
kind: ImageStream
metadata:
  name: ruby
spec:
  tags:
  - from:
      kind: DockerImage
      name: openshift/ruby-20-centos7
    name: latest
    importPolicy:
      scheduled: true
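Applied to the ImageStream in the question, the same change boils down to one extra field in the tag definition. A minimal sketch of the relevant fragment of the template above, with only the importPolicy setting added:
"tags": [
  {
    "name": "latest",
    "from": {
      "kind": "DockerImage",
      "name": "artifactory.company.com/docker-dev-local/test/dev/test:latest"
    },
    "importPolicy": {
      "scheduled": true
    }
  }
]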
The interval is 15 minutes by default. If you want to change it, you can adjust the configuration in /etc/origin/master/master-config.yaml as follows; scheduledImageImportMinimumIntervalSeconds is the interval for image stream imports. Refer to Image Policy Configuration for details on the other parameters.
imagePolicyConfig:
  maxScheduledImageImportsPerMinute: 10
  scheduledImageImportMinimumIntervalSeconds: 1800
  disableScheduledImport: false
  maxImagesBulkImportedPerRepository: 3
Further information is here: Automatically Update Red Hat Container Images on OpenShift 3.11.
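If the ImageStream already exists in the cluster, you can also turn on scheduled import from the CLI instead of re-applying the template; a sketch using the names from the question:
oc tag artifactory.company.com/docker-dev-local/test/dev/test:latest test-100:latest --scheduled
With the ImageChange trigger shown in the DeploymentConfig above, every scheduled import that brings in a new image should then roll out a new deployment.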

Related

How can JAVA_OPTIONS be added to a DeploymentConfig in an OpenShift container

I am trying to add the JAVA_OPTIONS below to a DeploymentConfig in an OpenShift container, but it is throwing a syntax error. Could anyone help me with how to add these parameters?
JAVA_OPTIONS
-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts,
-Djavax.net.ssl.trustStorePassword=changeit,
-Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12,
-Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS},
-Djava.awt.headless=true,
The DeploymentConfig as JSON:
{
"apiVersion": "apps.openshift.io/v1",
"kind": "DeploymentConfig",
"metadata": {
"labels": {
"app": "${APP_NAME}"
},
"name": "${APP_NAME}"
},
"spec": {
"replicas": 1,
"selector": {
"app": "${APP_NAME}",
"deploymentconfig": "${APP_NAME}"
},
"strategy": null,
"template": {
"metadata": {
"labels": {
"app": "${APP_NAME}",
"deploymentconfig": "${APP_NAME}"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "SPRING_PROFILE",
"value": "migration"
},
{
"name": "JAVA_MAIN_CLASS",
"value": "com.agcs.Application"
},
{
"name": "JAVA_OPTIONS",
"value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts",
"-Djavax.net.ssl.trustStorePassword=changeit",
-Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
-Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
-Djava.awt.headless=true,
},
{
"name": "MONGO_AUTH_DB",
"valueFrom": {
"secretKeyRef": {
"key": "spring.data.mongodb.authentication-database",
"name": "mongodb-secret"
}
}
},
],
"image": "${IMAGE_NAME}",
"imagePullPolicy": "Always",
"name": "${APP_NAME}",
"ports": [
{
"containerPort": 8103,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "500m",
"memory": "500Mi"
}
},
"volumeMounts":[
{
"name": "secret-volume",
"mountPath": "/mnt/secrets",
"readOnly": true
}
]
}
],
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "keystore-new"
}
}
]
}
}
}
}
{
"name": "JAVA_OPTIONS",
"value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts",
"-Djavax.net.ssl.trustStorePassword=changeit",
-Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
-Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
-Djava.awt.headless=true,
},
This is invalid JSON: the "value" key can only hold a single string, while you have provided multiple comma-separated strings.
JAVA_OPTIONS isn't a standard environment variable, so we don't know exactly how your image processes it, but maybe this will work:
{
"name": "JAVA_OPTIONS",
"value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12 -Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS} -Djava.awt.headless=true"
},
But there's still probably an issue, because it seems like {KEYSTORE_PATH} is supposed to be a variable. That's not defined or expanded in this file. For a first attempt, probably just hardcode the values of all these variables.
For secrets (such as passwords) you can hardcode some value for initial testing, but please use OpenShift Secrets for formal testing and the actual deployment.
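If you do want to keep KEYSTORE_PATH and KEYSTORE_PASS as placeholders rather than hardcoding them, they have to be declared as parameters of the OpenShift Template that wraps this DeploymentConfig. A rough sketch of such a parameters section (the default value for KEYSTORE_PATH is only an assumption based on the /mnt/secrets mount above; the other entries mirror the ${...} references already in the file):
"parameters": [
  { "name": "APP_NAME", "required": true },
  { "name": "IMAGE_NAME", "required": true },
  { "name": "KEYSTORE_PATH", "value": "/mnt/secrets" },
  { "name": "KEYSTORE_PASS", "required": true }
]
For the password itself, a Secret referenced through secretKeyRef (as already done for MONGO_AUTH_DB) is the safer option.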

Rabbitmq in Kubernetes: Command not found

Trying to start up rabbitmq in K8s while attaching a configmap gives me the following error:
/usr/local/bin/docker-entrypoint.sh: line 367: rabbitmq-plugins: command not found
/usr/local/bin/docker-entrypoint.sh: line 405: exec: rabbitmq-server: not found
Exactly the same setup works fine with docker-compose, so I am a bit lost. I am using rabbitmq:3.8.3.
Here is a snippet from my deployment:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "rabbitmq"
}
},
"spec": {
"volumes": [
{
"name": "rabbitmq-configuration",
"configMap": {
"name": "rabbitmq-configuration",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "rabbitmq",
"image": "rabbitmq:3.8.3",
"ports": [
{
"containerPort": 5672,
"protocol": "TCP"
}
],
"env": [
{
"name": "RABBITMQ_DEFAULT_USER",
"value": "guest"
},
{
"name": "RABBITMQ_DEFAULT_PASS",
"value": "guest"
},
{
"name": "RABBITMQ_ENABLED_PLUGINS_FILE",
"value": "/opt/enabled_plugins"
}
],
"resources": {},
"volumeMounts": [
{
"name": "rabbitmq-configuration",
"mountPath": "/opt/"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
And here is the configuration:
{
"kind": "ConfigMap",
"apiVersion": "v1",
"metadata": {
"name": "rabbitmq-configuration",
"namespace": "e360",
"selfLink": "/api/v1/namespaces/default/configmaps/rabbitmq-configuration",
"uid": "28071976-98f6-11ea-86b2-0244a03303e1",
"resourceVersion": "1034540",
"creationTimestamp": "2020-05-18T10:55:58Z"
},
"data": {
"enabled_plugins": "[rabbitmq_management].\n"
}
}
That's because you're mounting a volume at /opt, which is where the RabbitMQ installation lives (the official image sets RABBITMQ_HOME to /opt/rabbitmq).
The mount shadows that directory, so the entrypoint script cannot find any of the RabbitMQ binaries.
You can see the rabbitmq Dockerfile here.
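One way to fix it, assuming the ConfigMap only needs to supply the enabled_plugins file, is to mount it at a dedicated path that does not hide /opt/rabbitmq and point RABBITMQ_ENABLED_PLUGINS_FILE there. The path /opt/rabbitmq-configuration below is only an example:
"env": [
  {
    "name": "RABBITMQ_ENABLED_PLUGINS_FILE",
    "value": "/opt/rabbitmq-configuration/enabled_plugins"
  }
],
"volumeMounts": [
  {
    "name": "rabbitmq-configuration",
    "mountPath": "/opt/rabbitmq-configuration"
  }
]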

Readiness probe failed: Get http://10.244.2.183:5000/: dial tcp 10.244.2.183:5000: connect: connection refused Back-off restarting failed container

I am trying to deploy my application using GitLab CI by pushing Docker images to Azure Container Registry and from there deploying the images to Azure Kubernetes Service. This whole process happens automatically through GitLab CI, but I'm facing a challenge in the deployment step. I can see the services and pods in running status, and Tiller is deployed on Kubernetes, but it is throwing the error shown in the title. This is the deployment YAML, which I took from Kubernetes:
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "review-37-in-cust-iosa7i",
"namespace": "XYZ",
"selfLink": "/apis/extensions/v1beta1/namespaces/XYZ/deployments/review-37-in-cust-iosa7i",
"uid": "9f5f7fff-9d65-11e9-8ceb-0e7a6fb80992",
"resourceVersion": "7143337",
"generation": 1,
"creationTimestamp": "2019-07-03T07:39:00Z",
"labels": {
"app": "review-37-in-cust-iosa7i",
"chart": "auto-deploy-app-0.2.9",
"heritage": "Tiller",
"release": "review-37-in-cust-iosa7i",
"tier": "web",
"track": "stable"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "review-37-in-cust-iosa7i",
"release": "review-37-in-cust-iosa7i",
"tier": "web",
"track": "stable"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "review-37-in-cust-iosa7i",
"release": "review-37-in-cust-iosa7i",
"tier": "web",
"track": "stable"
},
"annotations": {
"checksum/application-secrets": ""
}
},
"spec": {
"containers": [
{
"name": "auto-deploy-app",
"image": "stratuscentcrdeve.azurecr.io/XYZ/dev/37-in-customer-group-customer-form-when-admin-opens-up-the-poli:65d2e2bc554242c584d5c6480e172690659ef98b",
"ports": [
{
"name": "web",
"containerPort": 5000,
"protocol": "TCP"
}
],
"env": [
{
"name": "DATABASE_URL",
"value": "postgres://user:testing-password#review-37-in-cust-iosa7i-postgres:5432/review-37-in-cust-iosa7i"
}
],
"resources": {},
"livenessProbe": {
"httpGet": {
"path": "/",
"port": 5000,
"scheme": "HTTP"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"readinessProbe": {
"httpGet": {
"path": "/",
"port": 5000,
"scheme": "HTTP"
},
"initialDelaySeconds": 5,
"timeoutSeconds": 3,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"imagePullSecrets": [
{
"name": "gitlab-registry"
}
],
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"unavailableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-07-03T07:39:00Z",
"lastTransitionTime": "2019-07-03T07:39:00Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
Please comment if any additional info is required.

Kubernetes: creating a Service and a Deployment with an RBD volume succeeds in the default namespace but fails in a non-default namespace?

I am creating a Service and a Deployment that mounts an RBD volume. The same configuration succeeds in the default namespace but fails in a non-default namespace. Why?
The config (JSON):
{
"kind": "Deployment",
"spec": {
"replicas": "1",
"template": {
"spec": {
"volumes": [
{
"rbd": {
"secretRef": {
"name": "ceph-secret"
},
"image": "zhaosiyi.24",
"fsType": "ext4",
"readOnly": false,
"user": "admin",
"monitors": [
"xxx.xxx.xxx.6:6789",
"xxx.xxx.xxx.7:6789",
"xxx.xxx.xxx.8:6789"
],
"pool": "rrkd.rbd"
},
"name": "aa"
}
],
"imagePullSecrets": [
{
"name": "registrykey-m3-1"
}
],
"containers": [
{
"image": "ccr.ccs.tencentyun.com/rrkd/rrkd-nginx:1.0",
"volumeMounts": [
{
"readOnly": false,
"mountPath": "/mnt",
"name": "aa"
}
],
"name": "aa",
"ports": [
{
"protocol": "TCP",
"containerPort": 80
}
]
}
]
},
"metadata": {
"labels": {
"name": "aa"
}
}
},
"selector": {
"matchLabels": {
"name": "aa"
}
}
},
"apiVersion": "extensions/v1beta1",
"metadata": {
"labels": {
"name": "aa"
},
"name": "aa"
}
}
{
"kind": "Service",
"spec": {
"type": "NodePort",
"ports": [
{
"targetPort": 80,
"protocol": "TCP",
"port": 80
}
],
"selector": {
"name": "aa"
}
},
"apiVersion": "v1",
"metadata": {
"labels": {
"name": "aa"
},
"name": "aa"
}
}
The strangest thing is that describe pod shows success, without any error information, but get pod shows the pod is not actually running.
The problem has been solved. The non-default namespace failed because the new namespace did not have the Secret, so authentication failed when pulling the image. You need to manually create the image pull Secret in that namespace. If you use RBD or a PVC, you also need to manually create the Secret for mounting the storage there.
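For reference, a rough sketch of how those Secrets could be created in the non-default namespace with kubectl (the registry and secret names are taken from the config above; the credentials and namespace are placeholders):
# image pull secret for the private registry
kubectl create secret docker-registry registrykey-m3-1 \
  --docker-server=ccr.ccs.tencentyun.com \
  --docker-username=<user> \
  --docker-password=<password> \
  -n <your-namespace>
# Ceph client key for the RBD volume (key as returned by e.g. `ceph auth get-key client.admin`)
kubectl create secret generic ceph-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key='<ceph auth key>' \
  -n <your-namespace>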
Can you show more detail from the describe of the pod?
I think it will show the failure log.
It is usually because you don't have the secret or the secret is not set up correctly,
and the Ceph configuration may be wrong too.

Service host/port undefined, Kubernetes/Google Container Engine

I have a service with the name mongodb. According to the documentation, the service host and port should be available to other pods in the same cluster through $MONGODB_SERVICE_HOST and $MONGODB_SERVICE_PORT.
However, neither of these is set in my frontend pods. What are the requirements for this to work?
frontend-controller.json
{
"id": "frontend",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 1,
"replicaSelector": {"name": "spatula", "role": "frontend"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "frontend",
"containers": [{
"name": "frontend",
"image": "gcr.io/crafty_apex_841/spatula_frontend",
"cpu": 100,
"ports": [{"name": "spatula-server", "containerPort": 80}]
}]
}
},
"labels": { "name": "spatula", "role": "frontend" }
}
},
"labels": { "name": "spatula", "role": "frontend" }
}
frontend-service.json
{
"apiVersion": "v1beta1",
"kind": "Service",
"id": "frontend",
"port": 80,
"containerPort": "spatula-server",
"labels": { "name": "spatula", "role": "frontend" },
"selector": { "name": "spatula", "role": "frontend" },
"createExternalLoadBalancer": true
}
mongodb-service.json
{
"apiVersion": "v1beta1",
"kind": "Service",
"id": "mongodb",
"port": 27017,
"containerPort": "mongodb-server",
"labels": { "name": "spatula", "role": "mongodb" },
"selector": { "name": "spatula", "role": "mongodb" }
}
mongodb-controller.json
{
"id": "mongodb",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 1,
"replicaSelector": {"name": "spatula", "role": "mongodb"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "mongodb",
"containers": [{
"name": "mongodb",
"image": "dockerfile/mongodb",
"cpu": 100,
"ports": [{"name": "mongodb-server", "containerPort": 27017}]
}]
}
},
"labels": { "name": "spatula", "role": "mongodb" }
}
},
"labels": { "name": "spatula", "role": "mongodb" }
}
The service:
$ gcloud preview container services list
NAME LABELS SELECTOR IP PORT
mongodb name=spatula,role=mongodb name=spatula,role=mongodb 10.111.240.154 27017
The pod:
$ gcloud preview container pods list
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
9ffd980f-ab56-11e4-ad76-42010af069b6 10.108.0.11 mongodb dockerfile/mongodb k8s-spatula-node-1.c.crafty-apex-841.internal/104.154.44.77 name=spatula,role=mongodb Running
Because environment variables for pods are only created when the pod is started, the service has to exist before a given pod in order for that pod to see the service's environment variables. You should be able to see them from all new pods you create.
If you'd like to learn more, additional explanation of how services work can be found in the documentation.
Alternatively, all newly created clusters in Container Engine (version 0.9.2 and above) have a SkyDNS service running in the cluster that you can use to access services from pods, even those without the environment variables.
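As a quick check from inside a pod created after the mongodb service, either mechanism should show the service; a sketch, assuming a shell and the mongo client are available in the frontend image:
# environment variables are injected only into pods started after the service existed
echo "$MONGODB_SERVICE_HOST:$MONGODB_SERVICE_PORT"
# DNS-based lookup works regardless of creation order (the cluster DNS resolves the service name)
mongo --host mongodb --port 27017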
