MQTT Component spec in Dapr - mqtt

I am trying to use an MQTT-based trigger mechanism for Dapr:
https://docs.dapr.io/reference/components-reference/supported-pubsub/setup-mqtt/
However, my component setup does not seem to work, even with a free online MQTT broker like HiveMQ:
https://www.hivemq.com/public-mqtt-broker/
Below is my component spec:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt
  version: v1
  metadata:
  - name: url
    value: "tcp://broker.hivemq.com:8000"
  - name: consumerid
    value: {uuid}
  - name: qos
    value: 1
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
  - name: backOffMaxRetries
    value: "5"
Component initialization fails and there is no way to know the error.
It would be a big help if anybody who has experimented with this could share their findings.
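For anyone hitting the same wall, two things may be worth checking (a sketch under stated assumptions, not a confirmed fix): the public HiveMQ broker listens for plain MQTT over TCP on port 1883 (8000 is its WebSocket listener), and quoting every metadata value avoids YAML type surprises. The actual initialization error usually shows up in the Dapr sidecar output, not the app logs.

# Hypothetical revised spec; assumes the HiveMQ public TCP listener on 1883
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt
  version: v1
  metadata:
  - name: url
    value: "tcp://broker.hivemq.com:1883"
  - name: consumerID
    value: "my-consumer-id"   # placeholder; use your own literal ID
  - name: qos
    value: "1"                # quoted so it is parsed as a string
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
  - name: backOffMaxRetries
    value: "5"

To see the component-init error itself, check the sidecar logs: the console output of dapr run in self-hosted mode, or kubectl logs &lt;app-pod&gt; -c daprd on Kubernetes.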

Related

Registry UI (joxit) does not return stored images

I have set up a private registry (Kubernetes) using the following configuration, based on this repo: https://github.com/sleighzy/k8s-docker-registry:
Create the password file, see the Apache htpasswd documentation for more information on this command.
htpasswd -b -c -B htpasswd docker-registry registry-password!
Adding password for user docker-registry
Create namespace
kubectl create namespace registry
Add the generated password file as a Kubernetes secret.
kubectl create secret generic basic-auth --from-file=./htpasswd -n registry
secret/basic-auth created
registry-secrets.yaml
---
# https://kubernetes.io/docs/concepts/configuration/secret/
apiVersion: v1
kind: Secret
metadata:
  name: s3
  namespace: registry
data:
  REGISTRY_STORAGE_S3_ACCESSKEY: Y2hlc0FjY2Vzc2tleU1pbmlv
  REGISTRY_STORAGE_S3_SECRETKEY: Y2hlc1NlY3JldGtleQ==
registry-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: registry
spec:
  ports:
  - protocol: TCP
    name: registry
    port: 5000
  selector:
    app: registry
I am using my MinIO (already deployed and running)
registry-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - name: registry
          containerPort: 5000
        volumeMounts:
        - name: credentials
          mountPath: /auth
          readOnly: true
        env:
        - name: REGISTRY_LOG_ACCESSLOG_DISABLED
          value: "true"
        - name: REGISTRY_HTTP_HOST
          value: "https://registry.mydomain.io:5000"
        - name: REGISTRY_LOG_LEVEL
          value: info
        - name: REGISTRY_HTTP_SECRET
          value: registry-http-secret
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: homelab
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: /auth/htpasswd
        - name: REGISTRY_STORAGE
          value: s3
        - name: REGISTRY_STORAGE_S3_REGION
          value: ignored-cos-minio
        - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
          value: charity.api.com  # this is the valid MinIO API endpoint
        - name: REGISTRY_STORAGE_S3_BUCKET
          value: "charitybucket"
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "true"
        - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
          value: "false"
        - name: REGISTRY_STORAGE_S3_ACCESSKEY
          valueFrom:
            secretKeyRef:
              name: s3
              key: REGISTRY_STORAGE_S3_ACCESSKEY
        - name: REGISTRY_STORAGE_S3_SECRETKEY
          valueFrom:
            secretKeyRef:
              name: s3
              key: REGISTRY_STORAGE_S3_SECRETKEY
      volumes:
      - name: credentials
        secret:
          secretName: basic-auth
I have created an entry in /etc/hosts
192.168.xx.xx registry.mydomain.io
registry-IngressRoute.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`registry.mydomain.io`)
    kind: Rule
    services:
    - name: registry
      port: 5000
  tls:
    certResolver: tlsresolver
I have access to the private registry using http://registry.mydomain.io:5000/ and it obviously returns a blank page.
I have already pushed some images and http://registry.mydomain.io:5000/v2/_catalog returns:
{"repositories":["console-image","hello-world","hello-world-2","hello-world-ha","myfirstimage","ubuntu-my"]}
The above configuration seems to work.
Then I tried to add the registry UI provided by Joxit with the following configuration:
registry-ui-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  namespace: registry
spec:
  ports:
  - protocol: TCP
    name: registry-ui
    port: 80
  selector:
    app: registry-ui
registry-ui-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry-ui
  labels:
    app: registry-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-ui
  template:
    metadata:
      labels:
        app: registry-ui
    spec:
      containers:
      - name: registry-ui
        image: joxit/docker-registry-ui:1.5-static
        ports:
        - name: registry-ui
          containerPort: 80
        env:
        - name: REGISTRY_URL
          value: https://registry.mydomain.io
        - name: SINGLE_REGISTRY
          value: "true"
        - name: REGISTRY_TITLE
          value: "CHARITY Registry UI"
        - name: DELETE_IMAGES
          value: "true"
registry-ui-ingress-route.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
    kind: Rule
    services:
    - name: registry-ui
      port: 80
    middlewares:
    - name: stripprefix
  tls:
    certResolver: tlsresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: stripprefix
  namespace: registry
spec:
  stripPrefix:
    prefixes:
      - /ui/
I have access to the browser UI at https://registry.mydomain.io/ui/; however, it returns nothing.
Am I missing something here?
As the owner of that repository, there may be something missing here. Your IngressRoute rule has an entryPoint of websecure and a certResolver of tlsresolver. This is intended to be the HTTPS entrypoint for Traefik; see my other repository https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd and the associated Traefik documentation on which this Docker Registry repo is based.
Can you review your Traefik deployment to ensure that you have this entrypoint, and that you also have this certificate resolver along with a generated HTTPS certificate that it is using? Can you also check the Traefik logs for any errors during startup (e.g. missing certs), as well as any access log entries that may indicate why requests are not being routed there?
If you don't have these items set up, you could help narrow this down further by changing the IngressRoute config to use just the web entrypoint, removing the tls section in your registry-ui-ingress-route.yaml manifest file, and reapplying it. This will let you access the UI over HTTP and at least rule out any HTTPS issues.
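For reference, a minimal sketch of that HTTP-only variant (assuming your Traefik deployment exposes an entrypoint named web):

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - web            # plain HTTP entrypoint instead of websecure
  routes:
  - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
    kind: Rule
    services:
    - name: registry-ui
      port: 80
    middlewares:
    - name: stripprefix
  # tls section removed so no certificate resolver is involved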

Unable to reach Jenkins Pod from outside of kubernetes cluster under Metallb

I have a Kubernetes cluster that is running a Jenkins pod with a Service set up for MetalLB. Currently, when I try to hit the loadBalancerIP for the pod from outside of my cluster, I am unable to reach it. I also have a kube-verify pod that is running on the cluster with a Service that is also using MetalLB. When I try to hit that pod from outside of my cluster, I can reach it with no problem.
When I switch the service for the Jenkins pod to be of type NodePort it works but as soon as I switch it back to be of type LoadBalancer it stops working. Both the Jenkins pod and the working kube-verify pod are running on the same node.
Cluster Details:
The master node is running and is connected to my router wirelessly. On the master node I have dnsmasq set up, along with iptables rules that forward the connection from the wireless port to the Ethernet port. The nodes are connected together over Ethernet via a switch. MetalLB is set up in layer 2 mode with an address pool that is on the same subnet as the IP address of the wireless port of the master node. kube-proxy is set to use strictARP and IPVS mode.
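For context, a layer 2 setup like the one described usually boils down to something like the following (a sketch assuming the ConfigMap-based configuration used by older MetalLB releases; the address range here is a placeholder for the pool on the wireless subnet):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.1.1-172.16.1.100   # placeholder; same subnet as the wireless port

plus the kube-proxy ConfigMap (kube-system/kube-proxy) carrying mode: "ipvs" with ipvs.strictARP: true, which MetalLB requires when kube-proxy runs in IPVS mode.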
Jenkins Manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-sa
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
---
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secret
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
type: Opaque
data:
  jenkins-admin-password: ***************
  jenkins-admin-user: ********
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
data:
  jenkins.yaml: |-
    jenkins:
      authorizationStrategy:
        loggedInUsersCanDoAnything:
          allowAnonymousRead: false
      securityRealm:
        local:
          allowsSignup: false
          enableCaptcha: false
          users:
          - id: "${jenkins-admin-username}"
            name: "Jenkins Admin"
            password: "${jenkins-admin-password}"
      disableRememberMe: false
      mode: NORMAL
      numExecutors: 0
      labelString: ""
      projectNamingStrategy: "standard"
      markupFormatter:
        plainText
      clouds:
      - kubernetes:
          containerCapStr: "10"
          defaultsProviderTemplate: "jenkins-base"
          connectTimeout: "5"
          readTimeout: 15
          jenkinsUrl: "jenkins-ui:8080"
          jenkinsTunnel: "jenkins-discover:50000"
          maxRequestsPerHostStr: "32"
          name: "kubernetes"
          serverUrl: "https://kubernetes"
          podLabels:
          - key: "jenkins/jenkins-agent"
            value: "true"
          templates:
          - name: "default"
            #id: eeb122dab57104444f5bf23ca29f3550fbc187b9d7a51036ea513e2a99fecf0f
            containers:
            - name: "jnlp"
              alwaysPullImage: false
              args: "^${computer.jnlpmac} ^${computer.name}"
              command: ""
              envVars:
              - envVar:
                  key: "JENKINS_URL"
                  value: "jenkins-ui:8080"
              image: "jenkins/inbound-agent:4.11-1"
              ttyEnabled: false
              workingDir: "/home/jenkins/agent"
            idleMinutes: 0
            instanceCap: 2147483647
            label: "jenkins-agent"
            nodeUsageMode: "NORMAL"
            podRetention: Never
            showRawYaml: true
            serviceAccount: "jenkins-sa"
            slaveConnectTimeoutStr: "100"
            yamlMergeStrategy: override
      crumbIssuer:
        standard:
          excludeClientIPFromCrumb: true
    security:
      apiToken:
        creationOfLegacyTokenEnabled: false
        tokenGenerationOnCreationEnabled: false
        usageStatisticsEnabled: true
    unclassified:
      location:
        adminAddress:
        url: jenkins-ui:8080
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 16Gi
  accessModes:
    - ReadWriteMany
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - heine-cluster1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-cr
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
# This role is used to allow Jenkins scheduling of agents via Kubernetes plugin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-role-schedule-agents
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "events"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods", "pods/exec", "persistentvolumeclaims"]
  verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# The sidecar container which is responsible for reloading configuration changes
# needs permissions to watch ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-casc-reload
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-cr
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
# We bind the role to the Jenkins service account. The role binding is created in the namespace
# where the agents are supposed to run.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-schedule-agents
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-role-schedule-agents
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-watch-configmaps
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-casc-reload
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
  externalTrafficPolicy: Local
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-agent
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  ports:
  - name: agents
    port: 50000
    targetPort: 50000
  selector:
    app: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        version: v1
        tier: backend
      annotations:
        checksum/config: c0daf24e0ec4e4cb59c8a66305181a17249770b37283ca8948e189a58e29a4a5
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
        runAsNonRoot: true
      containers:
      - name: jenkins
        image: "heineza/jenkins-master:2.323-jdk11-1"
        imagePullPolicy: Always
        args: [ "--httpPort=8080"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Chicago
        - name: JENKINS_SLAVE_AGENT_PORT
          value: "50000"
        ports:
        - containerPort: 8080
          name: ui
        - containerPort: 50000
          name: agents
        resources:
          limits:
            cpu: 2000m
            memory: 4096Mi
          requests:
            cpu: 50m
            memory: 256Mi
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
          readOnly: false
        - name: jenkins-config
          mountPath: /var/jenkins_home/jenkins.yaml
        - name: admin-secret
          mountPath: /run/secrets/jenkins-admin-username
          subPath: jenkins-admin-user
          readOnly: true
        - name: admin-secret
          mountPath: /run/secrets/jenkins-admin-password
          subPath: jenkins-admin-password
          readOnly: true
      serviceAccountName: "jenkins-sa"
      volumes:
      - name: jenkins-cache
        emptyDir: {}
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
      - name: jenkins-config
        configMap:
          name: jenkins
      - name: admin-secret
        secret:
          secretName: jenkins-secret
This Jenkins manifest is a modified version of what the Jenkins Helm chart generates. I redacted my secret, but in the actual manifest there are base64-encoded strings. Also, the Docker image I created and use in the deployment uses the Jenkins 2.323-jdk11 image as a base image, and I just installed some plugins for Configuration as Code, Kubernetes, and Git. What could be preventing the Jenkins pod from being accessible outside of my cluster when using MetalLB?
By default, MetalLB does not allow re-using/sharing the same LoadBalancer IP address across services.
According to the MetalLB documentation:
MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.
If MetalLB does not own the requested address, or if the address is already in use by another service, assignment will fail and MetalLB will log a warning event visible in kubectl describe service <service name>.[1]
In case you need to have services on a single IP, you can enable selective IP sharing. To do so, you have to add the metallb.universe.tf/allow-shared-ip annotation to the services (see the sketch after this list).
The value of the annotation is a "sharing key." Services can share an IP address under the following conditions:
They both have the same sharing key.
They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical). [2]
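A minimal sketch of what that annotation looks like (hypothetical sharing key, same key on both services, different ports):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  annotations:
    metallb.universe.tf/allow-shared-ip: "jenkins-shared"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: ui
    port: 8080
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-agent
  annotations:
    metallb.universe.tf/allow-shared-ip: "jenkins-shared"   # same key, different port
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: agents
    port: 50000
  selector:
    app: jenkins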
UPDATE
I tested your setup successfully with one minor difference -
I needed to remove externalTrafficPolicy: Local from the Jenkins Service spec.
Try this solution; if it still doesn't work, then it's a problem with your cluster environment.
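In other words, the Jenkins Service from the question with that single line dropped (labels trimmed for brevity):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: "devops-tools"
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
  # externalTrafficPolicy: Local   <- removed; it then defaults to Cluster
  selector:
    app: jenkins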

How can I communicate with a DB External to my Kubernetes Cluster

Good afternoon, I have a question. I am new to Kubernetes and I need to connect to a DB that is outside of my cluster. I could only connect to the DB by using hostNetwork = true; however, this is not recommended. Is there a method to communicate with an external DB?
Here is the YAML that I am currently using; my pod contains one container that runs a Spring Boot application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: find-complementary-account-info
  labels:
    app: find-complementary-account-info
spec:
  replicas: 2
  selector:
    matchLabels:
      app: find-complementary-account-info
  template:
    metadata:
      labels:
        app: find-complementary-account-info
    spec:
      hostNetwork: true
      dnsPolicy: Default
      containers:
      - name: find-complementary-account-info
        image: find-complementary-account-info:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "350Mi"
          requests:
            memory: "300Mi"
        ports:
        - containerPort: 8081
        env:
        - name: URL_CONNECTION_BD
          value: jdbc:oracle:thin:#11.160.9.18:1558/DEFAULTSRV.WORLD
        - name: USERNAME_CONNECTION_BD
          valueFrom:
            secretKeyRef:
              name: credentials-bd-pers
              key: user_pers
        - name: PASSWORD_CONNECTION_BD
          valueFrom:
            secretKeyRef:
              name: credentials-bd-pers
              key: password_pers
---
apiVersion: v1
kind: Service
metadata:
  name: find-complementary-account-info
spec:
  type: NodePort
  selector:
    app: find-complementary-account-info
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30020
Does anyone have an idea how to communicate with an external DB? This is not a cloud cluster; it is on-premise.
The hostNetwork parameter is used for accessing pods from outside of the cluster; you don't need that here.
Pods inside the cluster can communicate externally because they are NATted. If they can't, something external is preventing it, like a firewall or a missing route.
The quickest way to check that is to SSH to one of your Kubernetes cluster nodes and try:
telnet 11.160.9.18 1558
Anyway, that IP address seems to be a public one, so you have to check your company firewall, IMHO.
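You can run the same check from inside the cluster network as well, which rules out node-versus-pod routing differences (assuming a throwaway busybox pod is acceptable in your environment):

kubectl run nettest --rm -it --image=busybox --restart=Never -- telnet 11.160.9.18 1558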

AzureFunctions AppInsights Logging does not work in Azure AKS

I've been using Azure Functions (non-static, with proper DI) for a short while now. I recently added Application Insights by using the APPINSIGHTS_INSTRUMENTATIONKEY key. When debugging locally it all works fine.
If I publish the function and use the following Dockerfile to run it locally on Docker, it works fine as well.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0-alpine
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY ./publish/ /home/site/wwwroot
However, if I go a step further and try to deploy it to Kubernetes (in my case Azure AKS) using the following YAML files, the function starts fine, with the log files showing the loading of the Application Insights parameter, but it does not log to Insights.
deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mytestfunction-secrets
  namespace: "testfunction"
type: Opaque
data:
  ApplicationInsights: YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          valueFrom:
            secretKeyRef:
              name: mytestfunction-secrets
              key: ApplicationInsights
      imagePullSecrets:
      - name: imagepullsecrets
However, when I altered the YAML to not store the key as a secret, it did work.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          value: a9c908d8-192d-482c-906e-1269478fab6f
      imagePullSecrets:
      - name: imagepullsecrets
I'm kind of surprised that the difference in notation is causing Azure Functions to not log to Insights. My impression was that the running application does not care or know whether the value came from a secret or a plain value in Kubernetes. Even though it might be debatable whether the instrumentation key is a secret or not, I would prefer to store it there. Does anyone have an idea why this might cause it?
# Not working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  valueFrom:
    secretKeyRef:
      name: mytestfunction-secrets
      key: ApplicationInsights
# Working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  value: a9c908d8-192d-482c-906e-1269478fab6f
These are the versions I'm using:
Azure Functions Core Tools (2.4.419)
Function Runtime Version: 2.0.12332.0
Azure AKS: 1.12.x
Also, the instrumentation key is a fake one for sharing purposes, not an actual one.
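One detail that may be worth double-checking (an observation, not a confirmed root cause): the base64 value in the Secret above ends in Cg==, which decodes to a trailing newline, so the container would see the key as "a9c908d8-192d-482c-906e-1269478fab6f\n" rather than the bare GUID. Re-encoding the value without the newline would rule that out:

# decodes to the key plus a trailing \n
echo 'YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==' | base64 -d | od -c

# re-encode without the newline (note the -n)
echo -n 'a9c908d8-192d-482c-906e-1269478fab6f' | base64

# or let kubectl do the encoding for you
kubectl create secret generic mytestfunction-secrets -n testfunction \
  --from-literal=ApplicationInsights=a9c908d8-192d-482c-906e-1269478fab6f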

Kubernetes: Managing environment config

The recommended method for managing environment configuration for containers running in a pod is through the use of a ConfigMap. See the docs here.
This is great, although we have containers that require massive numbers of environment variables, and this will only expand in the future. Using the prescribed ConfigMap method, this becomes unwieldy and hard to manage.
For example, a simple deployment file becomes massive:
apiVersion: v1
kind: Service
metadata:
  name: my-app-api
  labels:
    name: my-app-api
    environment: staging
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    name: my-app-api
    environment: staging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-api
spec:
  replicas: 2
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        name: my-app-api
        environment: staging
    spec:
      containers:
      - name: my-app-api
        imagePullPolicy: Always
        image: myapp/my-app-api:latest
        ports:
        - containerPort: 80
        env:
        - name: API_HOST
          value: XXXXXXXXXXX
        - name: API_ENV
          value: XXXXXXXXXXX
        - name: API_DEBUG
          value: XXXXXXXXXXX
        - name: API_KEY
          value: XXXXXXXXXXX
        - name: EJ_API_ENDPOINT
          value: XXXXXXXXXXX
        - name: WEB_HOST
          value: XXXXXXXXXXX
        - name: AWS_ACCESS_KEY
          value: XXXXXXXXXXX
        - name: AWS_SECRET_KEY
          value: XXXXXXXXXXX
        - name: CDN
          value: XXXXXXXXXXX
        - name: STRIPE_KEY
          value: XXXXXXXXXXX
        - name: STRIPE_SECRET
          value: XXXXXXXXXXX
        - name: DB_HOST
          value: XXXXXXXXXXX
        - name: MYSQL_ROOT_PASSWORD
          value: XXXXXXXXXXX
        - name: MYSQL_DATABASE
          value: XXXXXXXXXXX
        - name: REDIS_HOST
          value: XXXXXXXXXXX
      imagePullSecrets:
      - name: my-registry-key
Are there easier alternatives for injecting a central environment configuration?
UPDATE
This was proposed for 1.5, although it did not make the cut, and it looks like it will be included in 1.6. Fingers crossed...
There is a proposal currently targeted for 1.5 that aims to make this easier. As proposed, you would be able to pull all variables from a ConfigMap in one go, without having to spell out each one separately.
If implemented, it would allow you to do something like this:
Warning: This doesn't actually work yet!
ConfigMap:
apiVersion: v1
data:
  space-ships: "1"
  ship-type: battle-cruiser
  weapon: laser-cannon
kind: ConfigMap
metadata:
  name: space-config
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: space-simulator
spec:
  template:
    metadata:
      labels:
        app: space-simulator
    spec:
      containers:
      - name: space-simulator
        image: foo/space-simulator
        # This is the magic piece that would allow you to avoid all that boilerplate!
        envFrom:
          configMap: space-config
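As eventually implemented (Kubernetes 1.6+), the released syntax differs slightly from the proposal above: the container gets an envFrom list with a configMapRef. A minimal sketch of the released form, using the same hypothetical space-config ConfigMap:

    spec:
      containers:
      - name: space-simulator
        image: foo/space-simulator
        envFrom:
        - configMapRef:
            name: space-config   # every key in the ConfigMap becomes an env var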
