I was looking into an entirely separate issue and then came across this question which raised some concerns:
https://stackoverflow.com/a/50510753/3123109
I'm doing something pretty similar. I'm using the CSI Driver for Azure to integrate Azure Kubernetes Service with Azure Key Vault. My manifests for the integration are something like:
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: aks-akv-identity
  namespace: prod
spec:
  type: 0
  resourceID: $identityResourceId
  clientID: $identityClientId
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: aks-akv-identity-binding
  namespace: prod
spec:
  azureIdentity: aks-akv-identity
  selector: aks-akv-identity-binding-selector
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aks-akv-secret-provider
  namespace: prod
spec:
  provider: azure
  secretObjects:
    - secretName: ${resourcePrefix}-prod-secrets
      type: Opaque
      data:
        - objectName: PROD-PGDATABASE
          key: PGDATABASE
        - objectName: PROD-PGHOST
          key: PGHOST
        - objectName: PROD-PGPORT
          key: PGPORT
        - objectName: PROD-PGUSER
          key: PGUSER
        - objectName: PROD-PGPASSWORD
          key: PGPASSWORD
  parameters:
    usePodIdentity: "true"
    keyvaultName: ${resourceGroupName}akv
    cloudName: ""
    objects: |
      array:
        - |
          objectName: PROD-PGDATABASE
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGHOST
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGPORT
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGUSER
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGPASSWORD
          objectType: secret
          objectVersion: ""
    tenantId: $tenantId
Then in the micro service manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment-prod
  namespace: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
        aadpodidbinding: aks-akv-identity-binding-selector
    spec:
      containers:
        - name: api
          image: appacr.azurecr.io/app-api
          ports:
            - containerPort: 5000
          env:
            - name: PGDATABASE
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGDATABASE
            - name: PGHOST
              value: postgres-cluster-ip-service-prod
            - name: PGPORT
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGPORT
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGUSER
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGPASSWORD
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: /mnt/secrets-store
              readOnly: true
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: api
  ports:
    - port: 5000
      targetPort: 5000
Then in my application settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['PGDATABASE'],
        'USER': os.environ['PGUSER'],
        'PASSWORD': os.environ['PGPASSWORD'],
        'HOST': os.environ['PGHOST'],
        'PORT': os.environ['PGPORT'],
    }
}
Nothing in my Dockerfile refers to any of these variables, just the Django micro service code.
According to the link, one of the comments was:
current best practices advise against doing this exactly. secrets managed through environment variables in docker are easily viewed and should not be considered secure.
So I'm second guessing this approach.
Do I need to look into revising what I have here?
The suggestion in the link is to replace the os.environ[] calls with a method that pulls the credentials from a key vault... but the credentials to even access the key vault would need to be stored in secrets... so I'm not seeing how it is any different.
Note: one thing I noticed is that using env: and mounting the secrets to a volume is redundant. The volume mount was done per the documentation on the integration, and it makes the secrets available at /mnt/secrets-store, so you can do something like cat /mnt/secrets-store/PROD-PGUSER. That means os.environ[] and the env: block aren't strictly necessary, since the app could read each secret from that file path in the Pod instead.
At least doing something like the following prints out the secret value:
kubectl exec -it $(kubectl get pods -l component=api -o custom-columns=:metadata.name -n prod) -n prod -- cat /mnt/secrets-store/PROD-PGUSER
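If I do keep the env: route, it looks like the whole synced Secret could be exposed in one go with envFrom instead of listing each key; a minimal sketch of the container spec, assuming the secretObjects sync has created the app-prod-secrets Secret in the prod namespace:
containers:
  - name: api
    image: appacr.azurecr.io/app-api
    envFrom:
      - secretRef:
          name: app-prod-secrets   # the Secret synced by the SecretProviderClass above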
The comment on the answer you linked was incorrect; I've left a note there to explain the confusion. What you have is fine, if possibly over-built :) You're not actually gaining any security vs. just using Kubernetes Secrets directly, but if you prefer the workflow around AKV then this looks fine. You might want to look at ExternalSecrets rather than this somewhat odd side feature of the CSI driver; the CSI driver is more for exposing things as files, rather than the external -> Secret -> env var flow.
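For illustration, a rough sketch of that external -> Secret -> env var flow using the External Secrets Operator (external-secrets.io); treat the apiVersion, the store name akv-store, and the exact field names as assumptions to check against whichever version you install:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-prod-external-secret
  namespace: prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: akv-store          # a SecretStore pointing at the key vault (assumed to exist)
    kind: SecretStore
  target:
    name: app-prod-secrets   # the Kubernetes Secret the operator creates and keeps in sync
  data:
    - secretKey: PGUSER
      remoteRef:
        key: PROD-PGUSER     # the object name in the key vault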
I have set up a private registry in Kubernetes using the following configuration, based on this repo: https://github.com/sleighzy/k8s-docker-registry
Create the password file; see the Apache htpasswd documentation for more information on this command.
htpasswd -b -c -B htpasswd docker-registry registry-password!
Adding password for user docker-registry
Create namespace
kubectl create namespace registry
Add the generated password file as a Kubernetes secret.
kubectl create secret generic basic-auth --from-file=./htpasswd -n registry
secret/basic-auth created
registry-secrets.yaml
---
# https://kubernetes.io/docs/concepts/configuration/secret/
apiVersion: v1
kind: Secret
metadata:
  name: s3
  namespace: registry
data:
  REGISTRY_STORAGE_S3_ACCESSKEY: Y2hlc0FjY2Vzc2tleU1pbmlv
  REGISTRY_STORAGE_S3_SECRETKEY: Y2hlc1NlY3JldGtleQ==
registry-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: registry
spec:
  ports:
    - protocol: TCP
      name: registry
      port: 5000
  selector:
    app: registry
I am using my own MinIO instance (already deployed and running).
registry-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - name: registry
              containerPort: 5000
          volumeMounts:
            - name: credentials
              mountPath: /auth
              readOnly: true
          env:
            - name: REGISTRY_LOG_ACCESSLOG_DISABLED
              value: "true"
            - name: REGISTRY_HTTP_HOST
              value: "https://registry.mydomain.io:5000"
            - name: REGISTRY_LOG_LEVEL
              value: info
            - name: REGISTRY_HTTP_SECRET
              value: registry-http-secret
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: homelab
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: /auth/htpasswd
            - name: REGISTRY_STORAGE
              value: s3
            - name: REGISTRY_STORAGE_S3_REGION
              value: ignored-cos-minio
            - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
              value: charity.api.com  # this is the valid MinIO API endpoint
            - name: REGISTRY_STORAGE_S3_BUCKET
              value: "charitybucket"
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: "true"
            - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
              value: "false"
            - name: REGISTRY_STORAGE_S3_ACCESSKEY
              valueFrom:
                secretKeyRef:
                  name: s3
                  key: REGISTRY_STORAGE_S3_ACCESSKEY
            - name: REGISTRY_STORAGE_S3_SECRETKEY
              valueFrom:
                secretKeyRef:
                  name: s3
                  key: REGISTRY_STORAGE_S3_SECRETKEY
      volumes:
        - name: credentials
          secret:
            secretName: basic-auth
I have created an entry in /etc/hosts
192.168.xx.xx registry.mydomain.io
registry-IngressRoute.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`)
      kind: Rule
      services:
        - name: registry
          port: 5000
  tls:
    certResolver: tlsresolver
I have access to the private registry using http://registry.mydomain.io:5000/ and it obviously returns a blank page.
I have already pushed some images and http://registry.mydomain.io:5000/v2/_catalog returns:
{"repositories":["console-image","hello-world","hello-world-2","hello-world-ha","myfirstimage","ubuntu-my"]}
The above configuration seems to work.
Then I tried to add a registry UI provided by Joxit with the following configuration:
registry-ui-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  namespace: registry
spec:
  ports:
    - protocol: TCP
      name: registry-ui
      port: 80
  selector:
    app: registry-ui
registry-ui-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry-ui
  labels:
    app: registry-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-ui
  template:
    metadata:
      labels:
        app: registry-ui
    spec:
      containers:
        - name: registry-ui
          image: joxit/docker-registry-ui:1.5-static
          ports:
            - name: registry-ui
              containerPort: 80
          env:
            - name: REGISTRY_URL
              value: https://registry.mydomain.io
            - name: SINGLE_REGISTRY
              value: "true"
            - name: REGISTRY_TITLE
              value: "CHARITY Registry UI"
            - name: DELETE_IMAGES
              value: "true"
registry-ui-ingress-route.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  tls:
    certResolver: tlsresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: stripprefix
  namespace: registry
spec:
  stripPrefix:
    prefixes:
      - /ui/
I have access to the browser UI at https://registry.mydomain.io/ui/; however, it returns nothing.
Am I missing something here?
As the owner of that repository, I think there may be something missing here. Your IngressRoute rule has an entryPoint of websecure and a certResolver of tlsresolver. This is intended to be the https entrypoint for Traefik; see my other repository https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd and the associated Traefik documentation, on which this Docker Registry repo is based.
Can you review your Traefik deployment to ensure that you have this entrypoint, and that you also have this certificate resolver along with a generated https certificate for it to use? Can you also check the Traefik logs for any errors during startup (e.g. missing certs) and for any access-log entries that might indicate why this is not routing there?
If you don't have these items set up, you could help narrow this down further by changing the IngressRoute config in your registry-ui-ingress-route.yaml manifest to use just the web entrypoint, removing the tls section, and reapplying it. That way you can access it over http and at least rule out any https issues.
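A minimal sketch of that http-only variant, assuming your Traefik deployment exposes a web entrypoint:
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - web   # plain-http entrypoint, assumed to be enabled in your Traefik install
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  # tls section intentionally omitted for this http-only test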
Deploying Hyperledger Fabric v2.0 in Kubernetes
I am trying to deploy a sample chaincode in a private Kubernetes cluster running in Azure. After creating the nodes, running the install chaincode operation fails and throws the error below. I am only using a single Kubernetes cluster.
Error:
chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build chaincode: docker build failed: docker image inspection failed: cannot connect to Docker endpoint
command terminated with exit code 1
Below is the peer configuration template for the Deployment, Service & ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ${PEER}
  name: ${PEER}
  namespace: ${ORG}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${PEER}
  strategy: {}
  template:
    metadata:
      labels:
        app: ${PEER}
    spec:
      containers:
        - name: couchdb
          image: blockchainpractice.azurecr.io/hyperledger/fabric-couchdb
          env:
            - name: COUCHDB_USER
              value: couchdb
            - name: COUCHDB_PASSWORD
              value: couchdb
          ports:
            - containerPort: 5984
        - name: fabric-peer
          image: blockchainpractice.azurecr.io/hyperledger/fabric-peer:2.0
          resources: {}
          envFrom:
            - configMapRef:
                name: ${PEER}
          volumeMounts:
            - name: dockersocket
              mountPath: "/host/var/run/docker.sock"
            - name: ${PEER}
              mountPath: "/etc/hyperledger/fabric-peer"
            - name: client-root-tlscas
              mountPath: "/etc/hyperledger/fabric-peer/client-root-tlscas"
      volumes:
        - name: dockersocket
          hostPath:
            path: "/var/run/docker.sock"
        - name: ${PEER}
          secret:
            secretName: ${PEER}
            items:
              - key: key.pem
                path: msp/keystore/key.pem
              - key: cert.pem
                path: msp/signcerts/cert.pem
              - key: tlsca-cert.pem
                path: msp/tlsca/tlsca-cert.pem
              - key: ca-cert.pem
                path: msp/cacerts/ca-cert.pem
              - key: config.yaml
                path: msp/config.yaml
              - key: tls.crt
                path: tls/tls.crt
              - key: tls.key
                path: tls/tls.key
              - key: orderer-tlsca-cert.pem
                path: orderer-tlsca-cert.pem
              - key: core.yaml
                path: core.yaml
        - name: client-root-tlscas
          secret:
            secretName: client-root-tlscas
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: ${PEER}
  namespace: ${ORG}
data:
  CORE_PEER_ADDRESSAUTODETECT: "true"
  CORE_PEER_ID: ${PEER}
  CORE_PEER_LISTENADDRESS: 0.0.0.0:7051
  CORE_PEER_PROFILE_ENABLED: "true"
  CORE_PEER_LOCALMSPID: ${ORG_MSP}
  CORE_PEER_MSPCONFIGPATH: /etc/hyperledger/fabric-peer/msp
  # Gossip
  CORE_PEER_GOSSIP_BOOTSTRAP: peer0.${ORG}:7051
  CORE_PEER_GOSSIP_EXTERNALENDPOINT: "${PEER}.${ORG}:7051"
  CORE_PEER_GOSSIP_ORGLEADER: "false"
  CORE_PEER_GOSSIP_USELEADERELECTION: "true"
  # TLS
  CORE_PEER_TLS_ENABLED: "true"
  CORE_PEER_TLS_CERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
  CORE_PEER_TLS_KEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
  CORE_PEER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/msp/tlsca/tlsca-cert.pem"
  CORE_PEER_TLS_CLIENTAUTHREQUIRED: "false"
  ORDERER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/orderer-tlsca-cert.pem"
  CORE_PEER_TLS_CLIENTROOTCAS_FILES: "/etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.${ORG}-cert.pem"
  CORE_PEER_TLS_CLIENTCERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
  CORE_PEER_TLS_CLIENTKEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
  # Docker
  CORE_PEER_NETWORKID: ${ORG}-fabnet
  CORE_VM_ENDPOINT: unix:///host/var/run/docker.sock
  CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: "bridge"
  # CouchDB
  CORE_LEDGER_STATE_STATEDATABASE: CouchDB
  CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: 0.0.0.0:5984
  CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME: couchdb
  CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD: couchdb
  # Logging
  CORE_LOGGING_PEER: "info"
  CORE_LOGGING_CAUTHDSL: "info"
  CORE_LOGGING_GOSSIP: "info"
  CORE_LOGGING_LEDGER: "info"
  CORE_LOGGING_MSP: "info"
  CORE_LOGGING_POLICIES: "debug"
  CORE_LOGGING_GRPC: "info"
  GODEBUG: "netdns=go"
---
apiVersion: v1
kind: Service
metadata:
  name: ${PEER}
  namespace: ${ORG}
spec:
  selector:
    app: ${PEER}
  ports:
    - name: request
      port: 7051
      targetPort: 7051
    - name: event
      port: 7053
      targetPort: 7053
  type: LoadBalancer
Can anyone help me out? Thanks in advance.
I'd suggest taking a look at the Kubernetes test network deployment in fabric-samples (https://github.com/hyperledger/fabric-samples/tree/main/test-network-k8s).
Note that the classic way the peer builds chaincode is to create a new Docker container via the Docker daemon. This really doesn't sit well with Kubernetes, so the chaincode-as-a-service approach is strongly recommended.
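As a rough illustration of what the chaincode-as-a-service route involves on the peer side, the peer's core.yaml declares an external builder instead of talking to a Docker daemon; the builder name and path below are assumptions loosely based on the fabric-samples Kubernetes test network, so check them against your Fabric version:
chaincode:
  externalBuilders:
    - name: ccaas_builder                    # illustrative name
      path: /opt/hyperledger/ccaas_builder   # assumed location of the builder scripts
      propagateEnvironment:
        - CHAINCODE_AS_A_SERVICE_BUILDER_CONFIG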
I've been using Azure Functions (non-static, with proper DI) for a short while now. I recently added Application Insights by using the APPINSIGHTS_INSTRUMENTATIONKEY key. When debugging locally it all works fine.
If I publish the function and use the following Dockerfile to run it locally on Docker, it works fine as well.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0-alpine
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY ./publish/ /home/site/wwwroot
However, if I go a step further and try to deploy it to Kubernetes (in my case Azure AKS) using the following YAML files, the function starts fine, and the logs show the Application Insights parameter being loaded. However, it does not log to Insights.
deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mytestfunction-secrets
  namespace: "testfunction"
type: Opaque
data:
  ApplicationInsights: YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
        - image: mytestfunction:1.1
          name: mytestfunction
          ports:
            - containerPort: 5000
          imagePullPolicy: Always
          env:
            - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
              value: 'true'
            - name: ASPNETCORE_ENVIRONMENT
              value: PRODUCTION
            - name: ASPNETCORE_URLS
              value: http://+:5000
            - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
              value: '5'
            - name: APPINSIGHTS_INSTRUMENTATIONKEY
              valueFrom:
                secretKeyRef:
                  name: mytestfunction-secrets
                  key: ApplicationInsights
      imagePullSecrets:
        - name: imagepullsecrets
However, when I altered the YAML to not store the key as a secret, it did work.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
        - image: mytestfunction:1.1
          name: mytestfunction
          ports:
            - containerPort: 5000
          imagePullPolicy: Always
          env:
            - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
              value: 'true'
            - name: ASPNETCORE_ENVIRONMENT
              value: PRODUCTION
            - name: ASPNETCORE_URLS
              value: http://+:5000
            - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
              value: '5'
            - name: APPINSIGHTS_INSTRUMENTATIONKEY
              value: a9c908d8-192d-482c-906e-1269478fab6f
      imagePullSecrets:
        - name: imagepullsecrets
I'm kind of surprised that the difference in notation causes Azure Functions not to log to Insights. My impression was that the running application does not care or know whether the value came from a secret or from a plain value in the manifest. Even though it might be debatable whether the instrumentation key is a secret or not, I would prefer to store it as one. Does anyone have an idea why this might cause it?
# Not working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  valueFrom:
    secretKeyRef:
      name: mytestfunction-secrets
      key: ApplicationInsights

# Working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  value: a9c908d8-192d-482c-906e-1269478fab6f
These are the versions I'm using:
Azure Functions Core Tools (2.4.419)
Function Runtime Version: 2.0.12332.0
Azure AKS: 1.12.x
Also, the instrumentation key above is a fake one for sharing purposes, not an actual one.
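One thing I may still check: the base64 value in the Secret above decodes with a trailing newline, which the plain value notation does not have. A minimal sketch of the same Secret using stringData, which lets Kubernetes do the base64 encoding (still using the fake key):
apiVersion: v1
kind: Secret
metadata:
  name: mytestfunction-secrets
  namespace: testfunction
type: Opaque
stringData:
  # stringData takes the raw value and Kubernetes base64-encodes it,
  # so no stray trailing newline can end up in the decoded key
  ApplicationInsights: a9c908d8-192d-482c-906e-1269478fab6f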
I am running Kubernetes 1.9.4 on my GKE cluster.
I have two pods, gate and coolapp, both written in Elixir; gate is trying to connect to coolapp.
I am using libcluster to connect my nodes.
I get the following error:
[libcluster:app_name] cannot query kubernetes (unauthorized): endpoints is forbidden: User "system:serviceaccount:staging:default" cannot list endpoints in the namespace "staging": Unknown user "system:serviceaccount:staging:default"
here is my config in gate under config/prod:
config :libcluster,
  topologies: [
    app_name: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        kubernetes_selector: "tier=backend",
        kubernetes_node_basename: System.get_env("MY_POD_NAMESPACE") || "${MY_POD_NAMESPACE}"]]]
here is my configuration:
vm-args
## Name of the node
-name ${MY_POD_NAMESPACE}#${MY_POD_IP}
## Cookie for distributed erlang
-setcookie ${ERLANG_COOKIE}
# Enable SMP automatically based on availability
-smp auto
creating the secrets:
kubectl create secret generic erlang-config --namespace staging --from-literal=erlang-cookie=xxxxxx
kubectl create configmap vm-config --namespace staging --from-file=vm.args
gate/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gate
  namespace: staging
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: gate
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
        - name: gate
          image: gcr.io/development/gate:0.1.7
          args:
            - foreground
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config-volume
              mountPath: /beamconfig
          env:
            - name: MY_POD_NAMESPACE
              value: staging
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: RELEASE_CONFIG_DIR
              value: /beamconfig
            - name: ERLANG_COOKIE
              valueFrom:
                secretKeyRef:
                  name: erlang-config
                  key: erlang-cookie
      volumes:
        - name: config-volume
          configMap:
            name: vm-config
coolapp/deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coolapp
  namespace: staging
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: coolapp
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      # volumes
      volumes:
        - name: config-volume
          configMap:
            name: vm-config
      containers:
        - name: coolapp
          image: gcr.io/development/coolapp:1.0.3
          volumeMounts:
            - name: secrets-volume
              mountPath: /secrets
              readOnly: true
            - name: config-volume
              mountPath: /beamconfig
          ports:
            - containerPort: 80
          args:
            - "foreground"
          env:
            - name: MY_POD_NAMESPACE
              value: staging
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: REPLACE_OS_VARS
              value: "true"
            - name: RELEASE_CONFIG_DIR
              value: /beamconfig
            - name: ERLANG_COOKIE
              valueFrom:
                secretKeyRef:
                  name: erlang-config
                  key: erlang-cookie
        # proxy_container
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=staging:us-central1:com-staging=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: cloudsql
              mountPath: /cloudsql
The default service account for the staging namespace (in which apparently your Pods using libcluster are running) lacks RBAC permissions to get endpoints in that namespace.
Likely your application requires a number of other permissions (that are not mentioned in the above error message) to work correctly; identifying all such permissions is out of scope for SO.
A way to resolve this issue is to grant superuser permissions to that service account. This is not a secure solution, but a stopgap fix.
$ kubectl create clusterrolebinding make-staging-sa-cluster-admin \
--serviceaccount=staging:default \
--clusterrole=cluster-admin
clusterrolebinding "make-staging-sa-cluster-admin" created
To grant the specific permission only (get endpoints in the staging namespace) you would need to create a Role first:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: some-permissions
  namespace: staging
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch"]
And create a RoleBinding for the default service account in the staging namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: give-default-sa-some-permissions
  namespace: staging
subjects:
  - kind: ServiceAccount
    name: default
    namespace: staging
roleRef:
  kind: Role
  name: some-permissions
  apiGroup: rbac.authorization.k8s.io
Not an Erlang/Elixir or libcluster user, but it seems it is using the namespace's default service account to query the master for a list of endpoints available in the cluster.
The readme for libcluster says as much:
If set to Cluster.Strategy.Kubernetes, it will use the Kubernetes API
to query endpoints based on a basename and label selector, using the
token and namespace injected into every pod; once it has a list of
endpoints, it uses that list to form a cluster, and keep it up to
date.
Reading the code for kubernetes.ex in libcluster and the error you get confirms as much.
You will need to set up a ClusterRole and RoleBinding for the service account in the staging namespace. This will allow libcluster to dynamically query the master to discover other Erlang nodes in the cluster/namespace.
Here are some handy resources for follow up reading:
https://kubernetes.io/docs/admin/service-accounts-admin/
https://kubernetes.io/docs/admin/authorization/rbac/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
The recommended method for managing environment configuration for containers running in a pod is through the use of a ConfigMap. See the docs here.
This is great, although we have containers that require massive numbers of environment variables, and this will only expand in the future. Using the prescribed ConfigMap method, this becomes unwieldy and hard to manage.
For example, a simple deployment file becomes massive:
apiVersion: v1
kind: Service
metadata:
  name: my-app-api
  labels:
    name: my-app-api
    environment: staging
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    name: my-app-api
    environment: staging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-api
spec:
  replicas: 2
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        name: my-app-api
        environment: staging
    spec:
      containers:
        - name: my-app-api
          imagePullPolicy: Always
          image: myapp/my-app-api:latest
          ports:
            - containerPort: 80
          env:
            - name: API_HOST
              value: XXXXXXXXXXX
            - name: API_ENV
              value: XXXXXXXXXXX
            - name: API_DEBUG
              value: XXXXXXXXXXX
            - name: API_KEY
              value: XXXXXXXXXXX
            - name: EJ_API_ENDPOINT
              value: XXXXXXXXXXX
            - name: WEB_HOST
              value: XXXXXXXXXXX
            - name: AWS_ACCESS_KEY
              value: XXXXXXXXXXX
            - name: AWS_SECRET_KEY
              value: XXXXXXXXXXX
            - name: CDN
              value: XXXXXXXXXXX
            - name: STRIPE_KEY
              value: XXXXXXXXXXX
            - name: STRIPE_SECRET
              value: XXXXXXXXXXX
            - name: DB_HOST
              value: XXXXXXXXXXX
            - name: MYSQL_ROOT_PASSWORD
              value: XXXXXXXXXXX
            - name: MYSQL_DATABASE
              value: XXXXXXXXXXX
            - name: REDIS_HOST
              value: XXXXXXXXXXX
      imagePullSecrets:
        - name: my-registry-key
Are there alternative, easier ways to inject a central environment configuration?
UPDATE
This was proposed for 1.5 but did not make the cut, and it looks like it will be included in 1.6. Fingers crossed...
There is a proposal currently targeted for 1.5 that aims to make this easier. As proposed, you would be able to pull all variables from a ConfigMap in one go, without having to spell out each one separately.
If implemented, it would allow you to do something like this:
Warning: This doesn't actually work yet!
ConfigMap:
apiVersion: v1
data:
  space-ships: "1"
  ship-type: battle-cruiser
  weapon: laser-cannon
kind: ConfigMap
metadata:
  name: space-config
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: space-simulator
spec:
  template:
    metadata:
      labels:
        app: space-simulator
    spec:
      containers:
        - name: space-simulator
          image: foo/space-simulator
          # This is the magic piece that would allow you to avoid all that boilerplate!
          envFrom:
            configMap: space-config
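For what it's worth, this did land (envFrom shipped in Kubernetes 1.6), with a slightly different final shape than the proposal above; a minimal sketch of the shipped syntax:
spec:
  template:
    spec:
      containers:
        - name: space-simulator
          image: foo/space-simulator
          envFrom:
            - configMapRef:
                name: space-config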