Azure Functions App Insights logging does not work in Azure AKS (Docker)

I've been using Azure Functions (non-static, with proper DI) for a short while now. I recently added Application Insights by setting the APPINSIGHTS_INSTRUMENTATIONKEY key. When debugging locally it all works fine.
If I publish the function and run it locally in Docker with the following Dockerfile, it works fine as well.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0-alpine
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY ./publish/ /home/site/wwwroot
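For local testing, the key can be passed to the container the same way AKS would inject it; something along these lines (the tag and port mapping are illustrative):

docker build -t mytestfunction:1.1 .
docker run -p 8080:80 \
  -e APPINSIGHTS_INSTRUMENTATIONKEY=a9c908d8-192d-482c-906e-1269478fab6f \
  mytestfunction:1.1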
However, if I go a step further and deploy it to Kubernetes (in my case Azure AKS) using the following YAML files, the function starts fine, with the logs showing that the Application Insights parameter was loaded, but it does not log to Insights.
deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mytestfunction-secrets
  namespace: "testfunction"
type: Opaque
data:
  ApplicationInsights: YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
        - image: mytestfunction:1.1
          name: mytestfunction
          ports:
            - containerPort: 5000
          imagePullPolicy: Always
          env:
            - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
              value: 'true'
            - name: ASPNETCORE_ENVIRONMENT
              value: PRODUCTION
            - name: ASPNETCORE_URLS
              value: http://+:5000
            - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
              value: '5'
            - name: APPINSIGHTS_INSTRUMENTATIONKEY
              valueFrom:
                secretKeyRef:
                  name: mytestfunction-secrets
                  key: ApplicationInsights
      imagePullSecrets:
        - name: imagepullsecrets
However, when I altered the YAML to store the key as a plain value rather than a secret, it did work.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
        - image: mytestfunction:1.1
          name: mytestfunction
          ports:
            - containerPort: 5000
          imagePullPolicy: Always
          env:
            - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
              value: 'true'
            - name: ASPNETCORE_ENVIRONMENT
              value: PRODUCTION
            - name: ASPNETCORE_URLS
              value: http://+:5000
            - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
              value: '5'
            - name: APPINSIGHTS_INSTRUMENTATIONKEY
              value: a9c908d8-192d-482c-906e-1269478fab6f
      imagePullSecrets:
        - name: imagepullsecrets
I'm surprised that this difference in notation causes Azure Functions to stop logging to Insights. My impression was that the running application neither knows nor cares whether a value came from a secret or from a plain entry in the manifest. Even though it is debatable whether the instrumentation key is really a secret, I would prefer to store it as one. Does anyone have an idea why this happens?
# Not working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  valueFrom:
    secretKeyRef:
      name: mytestfunction-secrets
      key: ApplicationInsights

# Working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  value: a9c908d8-192d-482c-906e-1269478fab6f
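One observable difference between the two variants: the base64 value stored in the secret ends in Cg==, which decodes to a trailing newline, while the plain value obviously has none. A quick check, using the value from the manifest above (od -c makes a trailing newline visible):

# Decode the value exactly as stored in the Secret manifest
echo 'YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==' | base64 -d | od -c
# ...the output ends in \n

# Re-encode without the newline (printf, unlike echo, appends none)
printf '%s' 'a9c908d8-192d-482c-906e-1269478fab6f' | base64

Whether the Functions runtime actually rejects a key with trailing whitespace can't be confirmed from the logs above, but it is worth ruling out before digging deeper.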
These are the versions I'm using:
Azure Functions Core Tools: 2.4.419
Function Runtime Version: 2.0.12332.0
Azure AKS: 1.12.x
Also, the instrumentation key shown here is a fake one for sharing purposes, not the actual one.

Related

Registry UI (joxit) does not return stored images

I have set up a private registry on Kubernetes using the following configuration, based on this repo: https://github.com/sleighzy/k8s-docker-registry
Create the password file; see the Apache htpasswd documentation for more information on this command.
htpasswd -b -c -B htpasswd docker-registry registry-password!
Adding password for user docker-registry
Create namespace
kubectl create namespace registry
Add the generated password file as a Kubernetes secret.
kubectl create secret generic basic-auth --from-file=./htpasswd -n registry
secret/basic-auth created
registry-secrets.yaml
---
# https://kubernetes.io/docs/concepts/configuration/secret/
apiVersion: v1
kind: Secret
metadata:
  name: s3
  namespace: registry
data:
  REGISTRY_STORAGE_S3_ACCESSKEY: Y2hlc0FjY2Vzc2tleU1pbmlv
  REGISTRY_STORAGE_S3_SECRETKEY: Y2hlc1NlY3JldGtleQ==
registry-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: registry
spec:
  ports:
    - protocol: TCP
      name: registry
      port: 5000
  selector:
    app: registry
I am using my MinIO instance (already deployed and running).
registry-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - name: registry
              containerPort: 5000
          volumeMounts:
            - name: credentials
              mountPath: /auth
              readOnly: true
          env:
            - name: REGISTRY_LOG_ACCESSLOG_DISABLED
              value: "true"
            - name: REGISTRY_HTTP_HOST
              value: "https://registry.mydomain.io:5000"
            - name: REGISTRY_LOG_LEVEL
              value: info
            - name: REGISTRY_HTTP_SECRET
              value: registry-http-secret
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: homelab
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: /auth/htpasswd
            - name: REGISTRY_STORAGE
              value: s3
            - name: REGISTRY_STORAGE_S3_REGION
              value: ignored-cos-minio
            - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
              value: charity.api.com  # this is the valid MinIO API endpoint
            - name: REGISTRY_STORAGE_S3_BUCKET
              value: "charitybucket"
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: "true"
            - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
              value: "false"
            - name: REGISTRY_STORAGE_S3_ACCESSKEY
              valueFrom:
                secretKeyRef:
                  name: s3
                  key: REGISTRY_STORAGE_S3_ACCESSKEY
            - name: REGISTRY_STORAGE_S3_SECRETKEY
              valueFrom:
                secretKeyRef:
                  name: s3
                  key: REGISTRY_STORAGE_S3_SECRETKEY
      volumes:
        - name: credentials
          secret:
            secretName: basic-auth
I have created an entry in /etc/hosts:
192.168.xx.xx registry.mydomain.io
registry-IngressRoute.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`)
      kind: Rule
      services:
        - name: registry
          port: 5000
  tls:
    certResolver: tlsresolver
I have access to the private registry at http://registry.mydomain.io:5000/, and it obviously returns a blank page.
I have already pushed some images and http://registry.mydomain.io:5000/v2/_catalog returns:
{"repositories":["console-image","hello-world","hello-world-2","hello-world-ha","myfirstimage","ubuntu-my"]}
The above configuration seems to work.
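For completeness, the catalog query goes through the basic auth created earlier, e.g.:

curl -u docker-registry:'registry-password!' http://registry.mydomain.io:5000/v2/_catalog

(the single quotes keep the shell from interpreting the ! in the password).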
Then I tried to add the registry UI provided by joxit, with the following configuration:
registry-ui-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  namespace: registry
spec:
  ports:
    - protocol: TCP
      name: registry-ui
      port: 80
  selector:
    app: registry-ui
registry-ui-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry-ui
  labels:
    app: registry-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-ui
  template:
    metadata:
      labels:
        app: registry-ui
    spec:
      containers:
        - name: registry-ui
          image: joxit/docker-registry-ui:1.5-static
          ports:
            - name: registry-ui
              containerPort: 80
          env:
            - name: REGISTRY_URL
              value: https://registry.mydomain.io
            - name: SINGLE_REGISTRY
              value: "true"
            - name: REGISTRY_TITLE
              value: "CHARITY Registry UI"
            - name: DELETE_IMAGES
              value: "true"
registry-ui-ingress-route.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  tls:
    certResolver: tlsresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: stripprefix
  namespace: registry
spec:
  stripPrefix:
    prefixes:
      - /ui/
I have access to the browser UI at https://registry.mydomain.io/ui/; however, it returns nothing.
Am I missing something here?
As the owner of that repository: there may be something missing here. Your IngressRoute rule has an entryPoint of websecure and a certResolver of tlsresolver. This is intended to be the https entrypoint for Traefik; see my other repository https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd and the associated Traefik documentation on which this Docker Registry repo is based.
Can you review your Traefik deployment to ensure that you have this entrypoint, and that you also have this certificate resolver along with a generated https certificate for it to use. Can you also check the Traefik logs for any errors during startup (e.g. missing certs) and for any access-log entries that may indicate why this is not routing there.
If you don't have these items set up, you could narrow this down further by changing the IngressRoute config to use just the web entrypoint, removing the tls section from your registry-ui-ingress-route.yaml manifest, and reapplying it. That way you can access the UI over plain http and at least rule out any https issues.
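A minimal sketch of that http-only variant, assuming a web entrypoint exists in your Traefik deployment:

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - web  # plain-http entrypoint instead of websecure
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  # tls section removed while testing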

Can't enable images deletion from a private docker registry

Hi all!
I'm deploying a private registry within a K8S cluster using the following YAML file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: registry
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/registry/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    app: registry
spec:
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30400
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  labels:
    app: registry
spec:
  ports:
    - port: 8080
      targetPort: 8080
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
        - name: registryui
          image: hyper/docker-registry-web:latest
          ports:
            - containerPort: 8080
          env:
            - name: REGISTRY_URL
              value: http://localhost:5000/v2
            - name: REGISTRY_NAME
              value: cluster-registry
      volumes:
        - name: docker
          hostPath:
            path: /var/run/docker.sock
        - name: registry-persistent-storage
          persistentVolumeClaim:
            claimName: registry-claim
I noticed that there is no option to delete Docker images after pushing them to the local registry. I found how this is supposed to work here: https://github.com/byrnedo/docker-reg-tool. I can list the images in the local repository and see all tags via the command line, but I am unable to delete them. Reading the Docker registry documentation, I found that the registry container needs to be run with the following env: REGISTRY_STORAGE_DELETE_ENABLED=true.
I tried to add this variable to the YAML file:
.........
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
          env:
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: true
But applying this YAML file with kubectl apply -f manifests/registry.yaml returns the following error message:
Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry:2","name":"registry","port|...
Then I found another suggestion:
The registry accepts configuration settings either via a file or via
environment variables. So the environment variable
REGISTRY_STORAGE_DELETE_ENABLED=true is equivalent to this in your
config file:
storage:
  delete:
    enabled: true
I've tried this option in my YAML file as well, but still no luck...
Any suggestions on how to enable Docker image deletion in my YAML file are highly appreciated.
The value true in YAML is parsed as a boolean, while the env syntax calls for a string. You'll need to explicitly quote it:
value: "true"
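In context, the env entry becomes:

env:
  - name: REGISTRY_STORAGE_DELETE_ENABLED
    value: "true"  # quoted so YAML parses it as a string, not a boolean

Once that is applied, deletion goes through the registry API by digest. A sketch against the NodePort from the Service above (the image name and digest are placeholders):

# look up the manifest digest for a tag
curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  http://<node-ip>:30400/v2/<image>/manifests/latest | grep -i docker-content-digest
# delete by digest
curl -X DELETE http://<node-ip>:30400/v2/<image>/manifests/sha256:<digest>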

Kubernetes deployment database connection error

I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I've hit an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using the pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata,
      # as a unique name is generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
        - name: pv-storage-glpi
          persistentVolumeClaim:
            claimName: pv-claim-glpi
      containers:
        - name: mariadb
          image: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "glpi"
            - name: MYSQL_DATABASE
              value: "glpi"
            - name: MYSQL_USER
              value: "glpi"
            - name: MYSQL_PASSWORD
              value: "glpi"
            - name: GLPI_SOURCE_URL
              value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - mountPath: /var/lib/mariadb/
              name: pv-storage-glpi
              subPath: mariadb
        - name: glpi
          image: driket54/glpi
          ports:
            - containerPort: 80
              name: http
            - containerPort: 8090
              name: https
          volumeMounts:
            - mountPath: /var/glpidata
              name: pv-storage-glpi
              subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: http
      name: http
    - protocol: "TCP"
      port: 8090
      targetPort: https
      name: https
    - protocol: "TCP"
      port: 3306
      targetPort: mariadb
      name: mariadb
  type: NodePort
---
The Docker image is deployed properly, but in my test phase, during setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer, since I don't have the Kubernetes knowledge expected, but I can't add a comment yet :(
What you should alter first is your GLPI version.
Use this link; it's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you can use the CLI tools to set up the database:
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(not sure about the host in your environment, but you get the idea)
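One more thing worth checking: since mariadb and glpi run as two containers in the same pod, they share a network namespace, so from the glpi container the database host would be 127.0.0.1 (or localhost) rather than a service name. A quick connectivity check from inside the pod (assuming the mysql client is present in the image; the label selector comes from the deployment above):

POD=$(kubectl get pods -n jb -l app=glpi -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n jb "$POD" -c glpi -- mysql -h 127.0.0.1 -u glpi -pglpi glpi -e 'SELECT 1'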

cannot query kubernetes (unauthorized): endpoints is forbidden: User cannot list endpoints in the namespace

I am running Kubernetes 1.9.4 on my GKE cluster.
I have two pods, gate and coolapp, both written in Elixir; gate is trying to connect to coolapp.
I am using libcluster to connect my nodes.
I get the following error:
[libcluster:app_name] cannot query kubernetes (unauthorized): endpoints is forbidden: User "system:serviceaccount:staging:default" cannot list endpoints in the namespace "staging": Unknown user "system:serviceaccount:staging:default"
Here is my config in gate under config/prod:
config :libcluster,
  topologies: [
    app_name: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        kubernetes_selector: "tier=backend",
        kubernetes_node_basename: System.get_env("MY_POD_NAMESPACE") || "${MY_POD_NAMESPACE}"]]]
Here is my vm.args configuration:
## Name of the node
-name ${MY_POD_NAMESPACE}@${MY_POD_IP}
## Cookie for distributed erlang
-setcookie ${ERLANG_COOKIE}
# Enable SMP automatically based on availability
-smp auto
Creating the secrets:
kubectl create secret generic erlang-config --namespace staging --from-literal=erlang-cookie=xxxxxx
kubectl create configmap vm-config --namespace staging --from-file=vm.args
gate/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gate
  namespace: staging
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: gate
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
        - name: gate
          image: gcr.io/development/gate:0.1.7
          args:
            - foreground
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config-volume
              mountPath: /beamconfig
          env:
            - name: MY_POD_NAMESPACE
              value: staging
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: RELEASE_CONFIG_DIR
              value: /beamconfig
            - name: ERLANG_COOKIE
              valueFrom:
                secretKeyRef:
                  name: erlang-config
                  key: erlang-cookie
      volumes:
        - name: config-volume
          configMap:
            name: vm-config
coolapp/deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coolapp
  namespace: staging
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: coolapp
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      # volumes
      volumes:
        - name: config-volume
          configMap:
            name: vm-config
      containers:
        - name: coolapp
          image: gcr.io/development/coolapp:1.0.3
          volumeMounts:
            - name: secrets-volume
              mountPath: /secrets
              readOnly: true
            - name: config-volume
              mountPath: /beamconfig
          ports:
            - containerPort: 80
          args:
            - "foreground"
          env:
            - name: MY_POD_NAMESPACE
              value: staging
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: REPLACE_OS_VARS
              value: "true"
            - name: RELEASE_CONFIG_DIR
              value: /beamconfig
            - name: ERLANG_COOKIE
              valueFrom:
                secretKeyRef:
                  name: erlang-config
                  key: erlang-cookie
        # proxy_container
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=staging:us-central1:com-staging=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: cloudsql
              mountPath: /cloudsql
The default service account for the staging namespace (in which your pods using libcluster are apparently running) lacks the RBAC permission to list endpoints in that namespace.
Your application likely requires a number of other permissions (not mentioned in the above error message) to work correctly; identifying all such permissions is out of scope for SO.
One way to resolve this issue is to grant superuser permissions to that service account. This is not a secure solution, but a stopgap fix:
$ kubectl create clusterrolebinding make-staging-sa-cluster-admin \
    --serviceaccount=staging:default \
    --clusterrole=cluster-admin
clusterrolebinding "make-staging-sa-cluster-admin" created
To grant only the specific permission (listing endpoints in the staging namespace), you would need to create a Role first:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: some-permissions
  namespace: staging
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch"]
And create a RoleBinding for the default service account in the staging namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: give-default-sa-some-permissions
  namespace: staging
subjects:
  - kind: ServiceAccount
    name: default
    namespace: staging
roleRef:
  kind: Role
  name: some-permissions
  apiGroup: rbac.authorization.k8s.io
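After applying both manifests, the grant can be verified with kubectl auth can-i:

kubectl auth can-i list endpoints \
  --namespace staging \
  --as system:serviceaccount:staging:default
# should print: yes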
Not an Erlang/Elixir or libcluster user, but it seems it is trying to use the namespace's default service account to query the master for a list of endpoints available in the cluster.
The README for libcluster says as much:
If set to Cluster.Strategy.Kubernetes, it will use the Kubernetes API to query endpoints based on a basename and label selector, using the token and namespace injected into every pod; once it has a list of endpoints, it uses that list to form a cluster, and keep it up to date.
Reading the code for kubernetes.ex in libcluster together with the error you get confirms as much.
You will need to set up a Role (or ClusterRole) and RoleBinding for the service account in the staging namespace. This will allow libcluster to dynamically query the master to discover other Erlang nodes in the cluster/namespace.
Here are some handy resources for follow up reading:
https://kubernetes.io/docs/admin/service-accounts-admin/
https://kubernetes.io/docs/admin/authorization/rbac/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

Kubernetes: Managing environment config

The recommended method for managing environment configuration for containers running in a pod is through the use of a ConfigMap. See the docs here.
This is great, but we have containers that require massive numbers of environment variables, and this will only expand in the future. Using the prescribed ConfigMap method, this becomes unwieldy and hard to manage.
For example, a simple deployment file becomes massive:
apiVersion: v1
kind: Service
metadata:
  name: my-app-api
  labels:
    name: my-app-api
    environment: staging
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    name: my-app-api
    environment: staging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-api
spec:
  replicas: 2
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        name: my-app-api
        environment: staging
    spec:
      containers:
        - name: my-app-api
          imagePullPolicy: Always
          image: myapp/my-app-api:latest
          ports:
            - containerPort: 80
          env:
            - name: API_HOST
              value: XXXXXXXXXXX
            - name: API_ENV
              value: XXXXXXXXXXX
            - name: API_DEBUG
              value: XXXXXXXXXXX
            - name: API_KEY
              value: XXXXXXXXXXX
            - name: EJ_API_ENDPOINT
              value: XXXXXXXXXXX
            - name: WEB_HOST
              value: XXXXXXXXXXX
            - name: AWS_ACCESS_KEY
              value: XXXXXXXXXXX
            - name: AWS_SECRET_KEY
              value: XXXXXXXXXXX
            - name: CDN
              value: XXXXXXXXXXX
            - name: STRIPE_KEY
              value: XXXXXXXXXXX
            - name: STRIPE_SECRET
              value: XXXXXXXXXXX
            - name: DB_HOST
              value: XXXXXXXXXXX
            - name: MYSQL_ROOT_PASSWORD
              value: XXXXXXXXXXX
            - name: MYSQL_DATABASE
              value: XXXXXXXXXXX
            - name: REDIS_HOST
              value: XXXXXXXXXXX
      imagePullSecrets:
        - name: my-registry-key
Is there an alternative, easier way to inject a central environment configuration?
UPDATE
This was proposed for 1.5, although it did not make the cut, and it looks like it will be included in 1.6. Fingers crossed...
There is a proposal currently targeted for 1.5 that aims to make this easier. As proposed, you would be able to pull all variables from a ConfigMap in one go, without having to spell out each one separately.
If implemented, it would allow you to do something like this:
Warning: This doesn't actually work yet!
ConfigMap:
apiVersion: v1
data:
  space-ships: "1"  # ConfigMap values must be strings
  ship-type: battle-cruiser
  weapon: laser-cannon
kind: ConfigMap
metadata:
  name: space-config
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: space-simulator
spec:
  template:
    metadata:
      labels:
        app: space-simulator
    spec:
      containers:
        - name: space-simulator
          image: foo/space-simulator
          # This is the magic piece that would allow you to avoid all that boilerplate!
          envFrom:
            configMap: space-config
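For reference, the variant that eventually shipped (in Kubernetes 1.6) uses a configMapRef list under envFrom, so the working form today looks like:

envFrom:
  - configMapRef:
      name: space-config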
