How to set secret data in Kubernetes Secrets via YAML? - ruby-on-rails

I am using Kubernetes to deploy a Rails app to Google Container Engine.
Following the Kubernetes secrets documentation: http://kubernetes.io/v1.1/docs/user-guide/secrets.html
I created a web controller file:
# web-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
        - name: web
          image: gcr.io/my-project-id/myapp:v1
          ports:
            - containerPort: 3000
              name: http-server
          env:
            secret:
              - secretName: mysecret
And created a secret file:
# secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  RAILS_ENV: production
When I run:
kubectl create -f web-controller.yml
It showed:
error: could not read an encoded object from web-controller.yml: unable to load "web-controller.yml": json: cannot unmarshal object into Go value of type []v1.EnvVar
error: no objects passed to create
Maybe the YAML format is wrong in the web-controller.yml file. How should it be written?

secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  RAILS_ENV: production
stringData is the easy-mode version of what you're after. One thing, though:
you'll see the cleartext original YAML used to create the secret in the last-applied-configuration annotation (with the stringData method above that means a human-readable secret in your annotation; with the base64 method below it will be the base64'd secret), unless you follow up with the erase-annotation command like so:
kubectl apply -f secret.yml
kubectl annotate secret mysecret kubectl.kubernetes.io/last-applied-configuration-
(the trailing - is what tells kubectl to erase it)
kubectl get secret mysecret -n=api -o yaml
(to confirm)
Alternatively you'd base64-encode the value yourself (note the -n, without which echo appends a trailing newline that ends up inside the secret):
Bash# echo -n production | base64
cHJvZHVjdGlvbg==
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  RAILS_ENV: cHJvZHVjdGlvbg==

You need to base64-encode the value, and your key must be a valid DNS label; that is, replace RAILS_ENV with, for example, rails-env. See also this end-to-end example I put together here for more details and concrete steps.
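Putting that advice together, a minimal sketch of the secret with a DNS-label key and the base64-encoded value:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  rails-env: cHJvZHVjdGlvbg==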

We do not currently support secrets exposed as env vars.
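Note: that limitation is from the Kubernetes 1.1 era this question targets; current versions can expose a Secret key as an environment variable via valueFrom.secretKeyRef. A minimal sketch of the corrected container spec, assuming the mysecret example above:
containers:
  - name: web
    image: gcr.io/my-project-id/myapp:v1
    ports:
      - containerPort: 3000
        name: http-server
    env:
      - name: RAILS_ENV
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: RAILS_ENV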

Let's say we are adding imagePullSecrets to a deployment. Follow these steps:
kubectl create secret docker-registry secret-name --docker-server=<registry-server-url> --docker-username=<Username> --docker-password=<password> --docker-email=<your-email>
Now reference it in the deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: test-app
          image: <Image-name-private>
      imagePullSecrets:
        - name: secret-name
OR
Let's say you have an API key for accessing the application:
kubectl create secret generic secret-name --from-literal=api-key="<your-api-key>"
Now reference it in the deployment like this:
env:
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: secret-name
        key: api-key
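To confirm the variable reached the running pods, one quick check (a sketch, assuming the deployment name above):
kubectl exec deploy/test-deployment -- printenv API_KEY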

Related

Deployment from GitHub to k8s cluster failing with (the server has asked for the client to provide credentials)

I'm kind of new to the k8s/GitHub deployment story, but I'm trying to set up a GitHub Action to deploy a Docker image to a k8s cluster. So far the image is getting built and pushed to the registry. However, the deploy job always fails with the error:
error: You must be logged in to the server (the server has asked for the client to provide credentials)
This is what the ci.yaml file looks like:
deploy-to-k8s:
  needs: push-api-image
  runs-on: ubuntu-latest
  steps:
    - name: Checkout source code
      uses: actions/checkout@v3
    - name: Set the Kubernetes context
      uses: azure/k8s-set-context@v3
      with:
        method: service-account
        k8s-url: https://------/v3 # ---> the API Endpoint
        k8s-secret: ${{ secrets.KUBERNETES_SECRET }}
    - name: Deploy to the k8s cluster
      uses: azure/k8s-deploy@v4.9
      with:
        namespace: staging
        skip-tls-verify: true
        manifests: |
          k8s/deployment.yaml
          k8s/service.yaml
          k8s/ingress.yaml
I created the service account (named github-deployment-action) in k8s and created the ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: github-deployment-action
rules:
  - apiGroups: ["", "apps", "networking.k8s.io", "extensions"] # "" indicates the core API group
    resources: ["deployments", "services", "configmaps", "secrets", "ingresses"]
    verbs: ["get", "watch", "list", "patch", "update", "delete"]
Together with the ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: github-deployment-action
subjects:
  - kind: ServiceAccount
    name: github-deployment-action
    namespace: staging
roleRef:
  kind: ClusterRole
  name: github-deployment-action
  apiGroup: rbac.authorization.k8s.io
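To verify the RBAC side independently, impersonating the service account with kubectl auth can-i is one quick test (a sketch):
kubectl auth can-i update deployments -n staging --as=system:serviceaccount:staging:github-deployment-action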
And the secret:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: github-deployment-action-token
  namespace: staging
  annotations:
    kubernetes.io/service-account.name: github-deployment-action
Then I copied the secret from k8s and added it to the secrets in GitHub. I generated the secret YAML using this command:
kubectl get secret github-deployment-action-token --namespace=staging -o yaml
I'm not sure what's wrong here; maybe the k8s-url is wrong. If I run the command
kubectl config view
it gives back a local address (http://127.0.0.1:8001), which doesn't really make sense to put in a GitHub Action.
The k8s version is 1.24 and the Rancher version is 2.7.0.
Any advice is highly appreciated.
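One way to sanity-check the token outside of the action (a sketch; <api-server> stands for whatever URL Rancher exposes for the cluster):
TOKEN=$(kubectl get secret github-deployment-action-token -n staging -o jsonpath='{.data.token}' | base64 -d)
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>/api/v1/namespaces/staging/pods
If that curl also returns a credentials error, the token copied into GitHub is the problem rather than the workflow itself.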

Pass file to docker container in a kubernetes Pod

I'm a beginner in Kubernetes. What I would like to achieve is:
Pass a user's SSH private/public key to the Pod and then to the Docker container (there's a shell script that will be using this key).
So I would like to know if it's possible to do that with kubectl apply.
My pod.yaml looks like:
apiVersion: v1
kind: Pod
metadata:
  generateName: testing
  labels:
    type: testing
  namespace: ns-test
  name: testing-config
spec:
  restartPolicy: OnFailure
  hostNetwork: true
  containers:
    - name: mycontainer
      image: ".../mycontainer:latest"
You have to store the private/public key in a Kubernetes Secret object:
apiVersion: v1
kind: Secret
metadata:
  name: mysshkey
  namespace: ns-test
data:
  id_rsa: {{ value }}
  id_rsa.pub: {{ value }}
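The {{ value }} placeholders must be the base64-encoded file contents; one way to produce them (a sketch, assuming the keys live under ~/.ssh):
base64 -w0 ~/.ssh/id_rsa       # -w0 (GNU coreutils) disables line wrapping
base64 -w0 ~/.ssh/id_rsa.pub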
and now you can mount this secret file in your container:
containers:
  - image: "my-image:latest"
    name: my-app
    ...
    volumeMounts:
      - mountPath: "/var/my-app"
        name: ssh-key
        readOnly: true
volumes:
  - name: ssh-key
    secret:
      secretName: mysshkey
The Kubernetes documentation also has a chapter on Using Secrets as files from a Pod.
It's not tested, but I hope it works.
First, you create a secret with your keys: kubectl create secret generic mysecret-keys --from-file=privatekey=</path/to/the/key/file/on/your/host> --from-file=publickey=</path/to/the/key/file/on/your/host>
Then you refer to the key files using the secret in your pod:
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  ...
  containers:
    - name: mycontainer
      image: ".../mycontainer:latest"
      volumeMounts:
        - name: mysecret-keys
          mountPath: /path/in/the/container # <-- privatekey & publickey will be mounted as files in this directory, where your shell script can access them
  volumes:
    - name: mysecret-keys
      secret:
        secretName: mysecret-keys # <-- mount the secret resource you created above
You can check the secret with kubectl get secret mysecret-keys --output yaml. You can check the pod and its mounting with kubectl describe pod testing-config.
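One gotcha specific to SSH keys: secret volumes are mounted with mode 0644 by default, and ssh refuses identity files that are readable by others, so you may need to tighten the mode (a sketch using the volume above):
volumes:
  - name: mysecret-keys
    secret:
      secretName: mysecret-keys
      defaultMode: 0400 # ssh requires the private key not be group/world-readable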

Kubernetes: How to expand env variables from configmap

I'm using config maps to inject env variables into my containers. Some of the variables are created by concatenating variables, for example:
~/.env file
HELLO=hello
WORLD=world
HELLO_WORLD=${HELLO}_${WORLD}
I then create the config map
kubectl create configmap env-variables --from-env-file ~/.env
The deployment manifests reference the config map.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-image
          image: us.gcr.io/my-image
          envFrom:
            - configMapRef:
                name: env-variables
When I exec into my running pods and execute the command
$ printenv HELLO_WORLD
I expect to see hello_world, but instead I see ${HELLO}_${WORLD}. The variables aren't expanded, and therefore my applications that refer to these variables will get the unexpanded value.
How do I ensure the variables get expanded?
If it matters, my images are using alpine.
I can't find any documentation on interpolating environment variables, but I was able to get this to work by removing the interpolated variable from the ConfigMap and listing it directly in the deployment. It also works if all variables are listed directly in the deployment. It looks like Kubernetes doesn't apply interpolation to variables loaded from ConfigMaps.
For instance, this will work:
Configmap
apiVersion: v1
data:
  HELLO: hello
  WORLD: world
kind: ConfigMap
metadata:
  name: env-variables
  namespace: default
Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-image
          image: us.gcr.io/my-image
          envFrom:
            - configMapRef:
                name: env-variables
          env:
            - name: HELLO_WORLD
              value: $(HELLO)_$(WORLD)
I'm thinking about just expanding the variables before creating the ConfigMap and uploading it to Kubernetes.
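One way to do that pre-expansion with a plain shell (a sketch, assuming the ~/.env format shown in the question):
set -a
. ~/.env    # the shell expands ${HELLO}_${WORLD} while sourcing
set +a
env | grep -E '^(HELLO|WORLD|HELLO_WORLD)=' > /tmp/expanded.env
kubectl create configmap env-variables --from-env-file=/tmp/expanded.env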
Another parallel approach would be to use kustomize:
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.
It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.
The sed part should be able to generate the right expanded value in your yaml file.

Kubernetes env variable to containers

I want to pass some values from the Kubernetes YAML file to the containers. These values will be read in my Java app using System.getenv("x_slave_host").
I have this Dockerfile:
FROM jetty:9.4
...
ARG slave_host
ENV x_slave_host $slave_host
...
$JETTY_HOME/start.jar -Djetty.port=9090
The Kubernetes YAML file contains this part, where I added an env section:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: master
spec:
  template:
    metadata:
      labels:
        app: master
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
      containers:
        - name: master
          image: xregistry.azurecr.io/Y:latest
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: shared-data
              mountPath: ~/.X/experiment
        - env:
            - name: slave_host
              value: slavevalue
        - name: jupyter
          image: xregistry.azurecr.io/X:latest
          ports:
            - containerPort: 8000
            - containerPort: 8888
          volumeMounts:
            - name: shared-data
              mountPath: /var/folder/experiment
      imagePullSecrets:
        - name: acr-auth
Locally when I did the same thing using docker compose, it worked using args. This is a snippet:
master:
  image: master
  build:
    context: ./master
    args:
      - slave_host=slavevalue
  ports:
    - "9090:9090"
So now I am trying to do the same thing but in Kubernetes. However, I am getting the following error (deploying it on Azure):
error: error validating "D:\\a\\r1\\a\\_X\\deployment\\kub-deploy.yaml": error validating data: field spec.template.spec.containers[1].name for v1.Container is required; if you choose to ignore these errors, turn validation off with --validate=false
In other words, how do I rewrite my docker-compose file for Kubernetes and pass this argument?
Thanks!
The env section should be added under the container entry, like this:
containers:
  - name: master
    env:
      - name: slave_host
        value: slavevalue
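Applied to the manifest in the question, the containers section would look roughly like this (the stray - env: list item was being parsed as a second, nameless container, which is exactly what the validation error about containers[1].name complained about):
containers:
  - name: master
    image: xregistry.azurecr.io/Y:latest
    ports:
      - containerPort: 9090
    volumeMounts:
      - name: shared-data
        mountPath: ~/.X/experiment
    env:
      - name: slave_host
        value: slavevalue
  - name: jupyter
    image: xregistry.azurecr.io/X:latest
    ...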
To elaborate on @Kun Li's answer: besides adding environment variables directly in the Deployment manifest, you can create a ConfigMap (or a Secret, depending on the data being stored) and reference it in your manifests. This is a good way of sharing the same environment variables across applications, compared to manually adding them to several different applications.
Note that a ConfigMap can consist of one or more key: value pairs and is not limited to storing environment variables; that's just one of the use cases. And as mentioned before, consider using a Secret if the data is classified as sensitive.
Example of a ConfigMap manifest, in this case used for storing an environment variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-var
data:
  slave_host: slavevalue
To create a ConfigMap holding one key=value pair using kubectl create:
kubectl create configmap my-env --from-literal=slave_host=slavevalue
To get hold of all environment variables configured in a ConfigMap use the following in your manifest:
containers:
  - envFrom:
      - configMapRef:
          name: my-env-var
Or if you want to pick one specific environment variable from your ConfigMap containing several variables:
containers:
  - env:
      - name: slave_host
        valueFrom:
          configMapKeyRef:
            name: my-env-var
            key: slave_host
See this page for more examples of using ConfigMap's in different situations.

How to define Kubernetes Job using a private docker registry?

I have a simple Kubernetes job (based on the http://kubernetes.io/docs/user-guide/jobs/work-queue-2/ example) which uses a Docker image that I have placed as a public image on my dockerhub account. It all looks like this:
job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
        - name: c
          image: jonalv/job-wq-2
      restartPolicy: OnFailure
Now I want to try to instead use a private Docker registry which requires authentication as in:
docker login https://myregistry.com
But I can't find anything about how I add username and password to my job.yaml file. How is it done?
You need to use ImagePullSecrets.
Once you create a secret object, you can refer to it in your pod spec (the spec value that is the parent of containers):
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      imagePullSecrets:
        - name: myregistrykey
      containers:
        - name: c
          image: jonalv/job-wq-2
      restartPolicy: OnFailure
Of course, you'll have to create the secret (as per the docs). This is what it will look like:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: mynamespace
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
The value of .dockerconfigjson is a base64 encoding of this file: .docker/config.json.
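Rather than assembling the base64 payload by hand, the same kind of secret can be generated with kubectl (a sketch, with placeholder credentials):
kubectl create secret docker-registry myregistrykey \
  --docker-server=https://myregistry.com \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>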
The key point: A job spec contains a pod spec. So whatever knowledge you gain about pod specs can be applied to jobs as well.
