I have a service with the name mongodb. According to the documentation, the service host and port should be available to other pods in the same cluster through $MONGODB_SERVICE_HOST and $MONGODB_SERVICE_PORT.
However, neither of these is set in my frontend pods. What are the requirements for this to work?
frontend-controller.json
{
"id": "frontend",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 1,
"replicaSelector": {"name": "spatula", "role": "frontend"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "frontend",
"containers": [{
"name": "frontend",
"image": "gcr.io/crafty_apex_841/spatula_frontend",
"cpu": 100,
"ports": [{"name": "spatula-server", "containerPort": 80}]
}]
}
},
"labels": { "name": "spatula", "role": "frontend" }
}
},
"labels": { "name": "spatula", "role": "frontend" }
}
frontend-service.json
{
"apiVersion": "v1beta1",
"kind": "Service",
"id": "frontend",
"port": 80,
"containerPort": "spatula-server",
"labels": { "name": "spatula", "role": "frontend" },
"selector": { "name": "spatula", "role": "frontend" },
"createExternalLoadBalancer": true
}
mongodb-service.json
{
"apiVersion": "v1beta1",
"kind": "Service",
"id": "mongodb",
"port": 27017,
"containerPort": "mongodb-server",
"labels": { "name": "spatula", "role": "mongodb" },
"selector": { "name": "spatula", "role": "mongodb" }
}
mongodb-controller.json
{
"id": "mongodb",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 1,
"replicaSelector": {"name": "spatula", "role": "mongodb"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "mongodb",
"containers": [{
"name": "mongodb",
"image": "dockerfile/mongodb",
"cpu": 100,
"ports": [{"name": "mongodb-server", "containerPort": 27017}]
}]
}
},
"labels": { "name": "spatula", "role": "mongodb" }
}
},
"labels": { "name": "spatula", "role": "mongodb" }
}
The service:
$ gcloud preview container services list
NAME LABELS SELECTOR IP PORT
mongodb name=spatula,role=mongodb name=spatula,role=mongodb 10.111.240.154 27017
The pod:
$ gcloud preview container pods list
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
9ffd980f-ab56-11e4-ad76-42010af069b6 10.108.0.11 mongodb dockerfile/mongodb k8s-spatula-node-1.c.crafty-apex-841.internal/104.154.44.77 name=spatula,role=mongodb Running
Because a pod's environment variables are only set when the pod starts, the service has to exist before a given pod is created in order for that pod to see the service's environment variables. Any pod you create after the service exists should see them.
If you'd like to learn more, additional explanation of how services work can be found in the documentation.
Alternatively, all newly created clusters in Container Engine (version 0.9.2 and above) have a SkyDNS service running in the cluster that you can use to access services from pods, even those without the environment variables.
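For illustration, here's what you'd expect inside a frontend pod created after the mongodb service exists (a sketch; the values echo the service listing above, and resolving the bare name relies on the pod's DNS search path):
$ env | grep MONGODB
MONGODB_SERVICE_HOST=10.111.240.154
MONGODB_SERVICE_PORT=27017
# With SkyDNS (Container Engine 0.9.2 and above), the service also resolves by name:
$ nslookup mongodb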
Related
I'm deploying my app in Azure Container Instances (a container group).
I have 3 Docker containers:
web_api
redis
neo4j
I'm able to access the database using localhost:7474 as the hostname, but can't access redis using localhost as the hostname.
This is the same problem I face when I run the containers locally with the docker run command.
NOTE: I can't use docker-compose, as my intention is to use ACI.
azuredeploy.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"containerGroupName": {
"type": "string",
"defaultValue": "devCG",
"metadata": {
"description": ""
}
}
},
"variables": {
"name_web": "web-api",
"image_web": "dev.azurecr.io/web-api:89",
"name_redis": "redis",
"image_redis": "redis:5.0.9",
"name_neo4j": "neo4j",
"image_neo4j": "neo4j:3.5.6"
},
"resources": [
{
"name": "[parameters('containerGroupName')]",
"type": "Microsoft.ContainerInstance/containerGroups",
"apiVersion": "2019-12-01",
"location": "[resourceGroup().location]",
"properties": {
"containers": [
{
"name": "[variables('name_web')]",
"properties": {
"image": "[variables('image_web')]",
"resources": {
"requests": {
"cpu": 1,
"memoryInGb": 0.5
}
},
"ports": [
{
"port": 80
},
{
"port": 8080
}
]
}
},
{
"name": "[variables('name_redis')]",
"properties": {
"image": "[variables('image_redis')]",
"resources": {
"requests": {
"cpu": 0.5,
"memoryInGb": 0.2
}
}
}
},
{
"name": "[variables('name_neo4j')]",
"properties": {
"image": "[variables('image_neo4j')]",
"resources": {
"requests": {
"cpu": 0.5,
"memoryInGb": 0.2
}
},
"ports": [
{
"port": 7474
}
]
}
}
],
"imageRegistryCredentials": [
{
"server": "dev.azurecr.io",
"username": "dev",
"password": "********************"
}
],
"restartPolicy": "Always",
"osType": "Linux",
"volumes": [
{
"name": "devfs",
"azureFile": {
"shareName": "dev",
"readOnly": "false",
"storageAccountName": "devfs",
"storageAccountKey": "*****************************"
}
}
],
"ipAddress": {
"type": "Public",
"ports": [
{
"protocol": "tcp",
"port": 80
}
],
"dnsNameLabel": "dev"
}
}
}
],
"outputs": {
"containerIPv4Address": {
"type": "string",
"value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', parameters('containerGroupName'))).ipAddress.ip]"
}
}
}
Accessing redis over localhost requires special tweaks in the redis config.
Have a look at
https://github.com/docker-library/redis/issues/45
And
https://github.com/luin/ioredis/issues/763
The recommendation is to connect using the redis hostname; you can map this localhost lookup to the redis hostname.
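If the root cause is redis's protected mode (the subject of the first issue above), one workaround is to start redis with protected mode disabled. A sketch of the redis container definition under that assumption, with a command override added:
{
    "name": "[variables('name_redis')]",
    "properties": {
        "image": "[variables('image_redis')]",
        "command": [ "redis-server", "--protected-mode", "no" ],
        "resources": {
            "requests": {
                "cpu": 0.5,
                "memoryInGb": 0.2
            }
        }
    }
}
Alternatively, try connecting to 127.0.0.1 explicitly rather than localhost, since localhost may resolve to the IPv6 address ::1, which redis may not be listening on (the subject of the second issue).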
Trying to start up rabbitmq in K8s while attaching a configmap gives me the following error:
/usr/local/bin/docker-entrypoint.sh: line 367: rabbitmq-plugins: command not found
/usr/local/bin/docker-entrypoint.sh: line 405: exec: rabbitmq-server: not found
Exactly the same setup works fine with docker-compose, so I am a bit lost. Using rabbitmq:3.8.3.
Here is a snippet from my deployment:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "rabbitmq"
}
},
"spec": {
"volumes": [
{
"name": "rabbitmq-configuration",
"configMap": {
"name": "rabbitmq-configuration",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "rabbitmq",
"image": "rabbitmq:3.8.3",
"ports": [
{
"containerPort": 5672,
"protocol": "TCP"
}
],
"env": [
{
"name": "RABBITMQ_DEFAULT_USER",
"value": "guest"
},
{
"name": "RABBITMQ_DEFAULT_PASS",
"value": "guest"
},
{
"name": "RABBITMQ_ENABLED_PLUGINS_FILE",
"value": "/opt/enabled_plugins"
}
],
"resources": {},
"volumeMounts": [
{
"name": "rabbitmq-configuration",
"mountPath": "/opt/"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
And here is the configuration:
{
"kind": "ConfigMap",
"apiVersion": "v1",
"metadata": {
"name": "rabbitmq-configuration",
"namespace": "e360",
"selfLink": "/api/v1/namespaces/default/configmaps/rabbitmq-configuration",
"uid": "28071976-98f6-11ea-86b2-0244a03303e1",
"resourceVersion": "1034540",
"creationTimestamp": "2020-05-18T10:55:58Z"
},
"data": {
"enabled_plugins": "[rabbitmq_management].\n"
}
}
That's because you're mounting a volume at /opt, which is the rabbitmq home path (the official image installs RabbitMQ under /opt/rabbitmq), so the mount shadows the installation.
As a result, the entrypoint script cannot find any of the rabbitmq binaries.
You can see the rabbitmq Dockerfile here.
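A minimal fix, as a sketch: mount the ConfigMap somewhere outside /opt so the RabbitMQ installation stays intact, and point the environment variable at the new location (/rabbitmq-config is an arbitrary path chosen for illustration):
"env": [
    {
        "name": "RABBITMQ_ENABLED_PLUGINS_FILE",
        "value": "/rabbitmq-config/enabled_plugins"
    }
],
"volumeMounts": [
    {
        "name": "rabbitmq-configuration",
        "mountPath": "/rabbitmq-config"
    }
]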
I am importing the following template in the OpenShift web client to create an ImageStream, DeploymentConfig & Service.
The ImageStream is created from a Docker image available in an external Docker registry.
Everything seems to be working fine apart from the fact that when the Docker image changes in the external registry, redeployment doesn't take place.
Is it possible, with OpenShift & an external registry, to trigger automatic deployments when the Docker image changes in the external registry?
{
"kind": "Template",
"apiVersion": "v1",
"metadata": {
"name": "test-100"
},
"objects": [
{
"kind": "ImageStream",
"apiVersion": "image.openshift.io/v1",
"metadata": {
"name": "test-100",
"creationTimestamp": null,
"labels": {
"app": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"lookupPolicy": {
"local": false
},
"tags": [
{
"name": "latest",
"annotations": {
"openshift.io/imported-from": "artifactory.company.com/docker-dev-local/test/dev/test:latest"
},
"from": {
"kind": "DockerImage",
"name": "artifactory.company.com/docker-dev-local/test/dev/test:latest"
},
"generation": null,
"importPolicy": {},
"referencePolicy": {
"type": ""
}
}
]
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "apps.openshift.io/v1",
"metadata": {
"name": "test-100",
"creationTimestamp": null,
"labels": {
"app": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"strategy": {
"resources": {}
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"test-100"
],
"from": {
"kind": "ImageStreamTag",
"name": "test-100:latest"
}
}
}
],
"replicas": 1,
"test": false,
"selector": {
"app": "test-100",
"deploymentconfig": "test-100"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "test-100",
"deploymentconfig": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"containers": [
{
"name": "test-100",
"image": "artifactory.company.com/docker-dev-local/test/dev/test:latest",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
},
{
"containerPort": 8443,
"protocol": "TCP"
},
{
"containerPort": 8778,
"protocol": "TCP"
}
],
"resources": {}
}
]
}
}
}
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "test-100",
"creationTimestamp": null,
"labels": {
"app": "test-100"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"ports": [
{
"name": "8080-tcp",
"protocol": "TCP",
"port": 8080,
"targetPort": 8080
},
{
"name": "8443-tcp",
"protocol": "TCP",
"port": 8443,
"targetPort": 8443
},
{
"name": "8778-tcp",
"protocol": "TCP",
"port": 8778,
"targetPort": 8778
}
],
"selector": {
"app": "test-100",
"deploymentconfig": "test-100"
}
}
}
]
}
OpenShift cannot detect image changes in an external registry, so you should configure importPolicy.scheduled: true to re-import the image periodically.
For example, you can configure importPolicy on each image tag:
apiVersion: v1
kind: ImageStream
metadata:
  name: ruby
spec:
  tags:
  - from:
      kind: DockerImage
      name: openshift/ruby-20-centos7
    name: latest
    importPolicy:
      scheduled: true
The interval is 15 minutes by default. If you want to change the value, you can adjust the config in /etc/origin/master/master-config.yaml as follows.
For example, scheduledImageImportMinimumIntervalSeconds is the interval for image stream imports. Refer to Image Policy Configuration for details on the other parameters.
imagePolicyConfig:
  maxScheduledImageImportsPerMinute: 10
  scheduledImageImportMinimumIntervalSeconds: 1800
  disableScheduledImport: false
  maxImagesBulkImportedPerRepository: 3
Further information is here: Automatically Update Red Hat Container Images on OpenShift 3.11.
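Applied to the template in the question, a quick way to enable scheduled import on the existing tag is oc tag with the --scheduled flag (a sketch; it assumes the oc CLI is logged in to the cluster and the image stream already exists):
$ oc tag --source=docker artifactory.company.com/docker-dev-local/test/dev/test:latest test-100:latest --scheduled=true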
I am trying to deploy my application with GitLab CI by pushing Docker images to Azure Container Registry and, from there, deploying the images to Azure Kubernetes Service. The whole process happens automatically through GitLab CI, but I'm facing a challenge in the deployment stage. I can see the services, the pods are in Running status, and Tiller is deployed on Kubernetes, but it is throwing the below error. This is the deployment manifest I took from Kubernetes:
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "review-37-in-cust-iosa7i",
"namespace": "XYZ",
"selfLink": "/apis/extensions/v1beta1/namespaces/XYZ/deployments/review-37-in-cust-iosa7i",
"uid": "9f5f7fff-9d65-11e9-8ceb-0e7a6fb80992",
"resourceVersion": "7143337",
"generation": 1,
"creationTimestamp": "2019-07-03T07:39:00Z",
"labels": {
"app": "review-37-in-cust-iosa7i",
"chart": "auto-deploy-app-0.2.9",
"heritage": "Tiller",
"release": "review-37-in-cust-iosa7i",
"tier": "web",
"track": "stable"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "review-37-in-cust-iosa7i",
"release": "review-37-in-cust-iosa7i",
"tier": "web",
"track": "stable"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "review-37-in-cust-iosa7i",
"release": "review-37-in-cust-iosa7i",
"tier": "web",
"track": "stable"
},
"annotations": {
"checksum/application-secrets": ""
}
},
"spec": {
"containers": [
{
"name": "auto-deploy-app",
"image": "stratuscentcrdeve.azurecr.io/XYZ/dev/37-in-customer-group-customer-form-when-admin-opens-up-the-poli:65d2e2bc554242c584d5c6480e172690659ef98b",
"ports": [
{
"name": "web",
"containerPort": 5000,
"protocol": "TCP"
}
],
"env": [
{
"name": "DATABASE_URL",
"value": "postgres://user:testing-password#review-37-in-cust-iosa7i-postgres:5432/review-37-in-cust-iosa7i"
}
],
"resources": {},
"livenessProbe": {
"httpGet": {
"path": "/",
"port": 5000,
"scheme": "HTTP"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"readinessProbe": {
"httpGet": {
"path": "/",
"port": 5000,
"scheme": "HTTP"
},
"initialDelaySeconds": 5,
"timeoutSeconds": 3,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"imagePullSecrets": [
{
"name": "gitlab-registry"
}
],
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"unavailableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-07-03T07:39:00Z",
"lastTransitionTime": "2019-07-03T07:39:00Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
Please comment if any additional info is required.
Kubernetes: I create a Service and a Deployment with an RBD volume. The same configuration succeeds in the default namespace but fails in a non-default namespace. Why?
The config (JSON):
{
"kind": "Deployment",
"spec": {
"replicas": "1",
"template": {
"spec": {
"volumes": [
{
"rbd": {
"secretRef": {
"name": "ceph-secret"
},
"image": "zhaosiyi.24",
"fsType": "ext4",
"readOnly": false,
"user": "admin",
"monitors": [
"xxx.xxx.xxx.6:6789",
"xxx.xxx.xxx.7:6789",
"xxx.xxx.xxx.8:6789"
],
"pool": "rrkd.rbd"
},
"name": "aa"
}
],
"imagePullSecrets": [
{
"name": "registrykey-m3-1"
}
],
"containers": [
{
"image": "ccr.ccs.tencentyun.com/rrkd/rrkd-nginx:1.0",
"volumeMounts": [
{
"readOnly": false,
"mountPath": "/mnt",
"name": "aa"
}
],
"name": "aa",
"ports": [
{
"protocol": "TCP",
"containerPort": 80
}
]
}
]
},
"metadata": {
"labels": {
"name": "aa"
}
}
},
"selector": {
"matchLabels": {
"name": "aa"
}
}
},
"apiVersion": "extensions/v1beta1",
"metadata": {
"labels": {
"name": "aa"
},
"name": "aa"
}
}
{
"kind": "Service",
"spec": {
"type": "NodePort",
"ports": [
{
"targetPort": 80,
"protocol": "TCP",
"port": 80
}
],
"selector": {
"name": "aa"
}
},
"apiVersion": "v1",
"metadata": {
"labels": {
"name": "aa"
},
"name": "aa"
}
}
The strangest thing is that describing the pod shows success, without any error information, but according to get pods the pod is not actually running.
The problem has been solved. The non-default namespace fails because the new namespace does not have the Secret, so authentication fails when pulling the image. You need to manually create the image pull Secret in that namespace. If you use RBD or PVC volumes, you also need to manually create the Secret for mounting storage.
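For example (a sketch; the namespace, credentials, and key are placeholders, while the secret names and registry come from the config above):
# Registry secret for pulling the image in the new namespace:
$ kubectl create secret docker-registry registrykey-m3-1 --namespace=<your-namespace> --docker-server=ccr.ccs.tencentyun.com --docker-username=<user> --docker-password=<password>
# Ceph secret for the rbd volume in the same namespace (the key comes from: ceph auth get-key client.admin):
$ kubectl create secret generic ceph-secret --namespace=<your-namespace> --type=kubernetes.io/rbd --from-literal=key=<ceph-admin-key>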
Can you show more detail from the pod's describe output?
I think it will show the failure log.
It's usually because you don't have the secret, or the secret isn't set up correctly, and the Ceph configuration may be wrong too.
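For reference, a sketch of the commands that surface that detail (pod name and namespace are placeholders):
$ kubectl describe pod <pod-name> --namespace=<your-namespace>
$ kubectl get events --namespace=<your-namespace>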