Decoding Kubernetes secret - docker

I inherited a Kubernetes/Docker setup, and I accidentally crashed the pod by changing something relating to the DB password.
I am trying to troubleshoot this.
I don't have much Kubernetes or Docker experience, so I'm still learning how to do things.
The value is contained inside the db-user-pass credential I believe, which is an Opaque type secret.
I'm describing it:
kubectl describe secrets/db-user-pass
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 16 bytes
username: 13 bytes
but I have no clue how to get any data from this secret. The example on the Kubernetes site seems to assume I'll have a base64 encoded string, but I can't even seem to get that. How do I get the value for this?

You can use kubectl get secrets/db-user-pass -o yaml or -o json where you'll see the base64-encoded username and password. You can then copy the value and decode it with something like echo <ENCODED_VALUE> | base64 -D (Mac OS X).
A more compact one-liner for this:
kubectl get secrets/db-user-pass --template={{.data.password}} | base64 -D
and likewise for the username:
kubectl get secrets/db-user-pass --template={{.data.username}} | base64 -D
Note: on GNU/Linux, the base64 flag is -d, not -D.

I would suggest using this handy command. It utilizes the power of go-templates. It iterates over all values, decodes them, and prints them along with the key. It also handles unset values.
kubectl get secret name-of-secret -o go-template='
{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
## In your case it would output
# password: decoded_password
# username: decoded_username
If you don't like go-templates you can use different output formats, e.g. yaml or json, but those will output the secrets base64-encoded.

If you have jq (json query) this works:
kubectl get secret db-user-pass -o json | jq '.data | map_values(@base64d)'
NOTE:
db-user-pass is the name of the k8s secret
.data is the variable within that contains the secret value
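As a cluster-free sanity check, jq's @base64d decoder can be run on a literal payload. The JSON below is a made-up stand-in for what `kubectl get secret db-user-pass -o json` would print; the encoded values are "S3cr3tP@ss" and "admin".

```shell
# Hypothetical payload standing in for `kubectl get secret db-user-pass -o json`
secret_json='{"data":{"password":"UzNjcjN0UEBzcw==","username":"YWRtaW4="}}'
# map_values applies @base64d to every value in .data
printf '%s' "$secret_json" | jq '.data | map_values(@base64d)'
# {
#   "password": "S3cr3tP@ss",
#   "username": "admin"
# }
```

This requires jq 1.6+, where the @base64d format string was introduced.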

This should work on all platforms, with kubectl 1.11+
kubectl get secrets/db-user-pass --template='{{.data.password | base64decode}}'
If there is a "-" in the key name, the following will work:
kubectl get secrets/db-user-pass --template='{{ index .data "sql-password" | base64decode}}'
And if you want to get all keys and values:
kubectl get secrets/db-user-pass --template='{{ range $key, $value := .data }}{{ printf "%s: %s\n" $key ($value | base64decode) }}{{ end }}'

If your secret keys contain a dash (-) or a dot (.):
kubectl get secret db-user-pass -o=go-template='{{index .data "password"}}' | base64 -d

This jsonpath variation works for me on OSX.
kubectl get secrets/db-user-pass -o jsonpath="{.data.username}" | base64 -d
To get a key with a dot in its name:
kubectl get secrets/tls -o jsonpath="{.data['tls\.crt']}" | base64 -d

First, get the secret from etcd by querying the API server with kubectl:
kubectl get secret db-user-pass -o yaml
This will give you the base64-encoded secret in YAML format.
Once you have the YAML, decode the values using base64 --decode.
The final command will look like this (don't forget the -n flag in the echo command):
echo -n "jdddjdkkdkdmdl" | base64 --decode
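The -n flag matters when encoding because echo otherwise appends a newline that becomes part of the encoded value. A quick illustration with a made-up string:

```shell
# Without -n, the trailing newline gets encoded too
echo "secret" | base64      # c2VjcmV0Cg==  (encodes "secret\n")
echo -n "secret" | base64   # c2VjcmV0      (encodes just "secret")
```

A secret created from the first form would carry an invisible trailing newline, which is a classic cause of "wrong password" errors.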

This is the link you might be looking for.
Kubernetes Secrets need their values to be given in base64-encoded format, which can be created using the base64 binary on Linux distributions.
Example:
echo "hello" | base64
aGVsbG8K
Kubernetes decodes the base64 encoding when the secret is passed as an environment variable or mounted as a volume.

For easier decoding you can use a tool like ksd, which does the base64 decoding for you:
kubectl get secrets/db-user-pass -o yaml | ksd
or using https://github.com/elsesiy/kubectl-view-secret
kubectl view-secret secrets/db-user-pass

On Ubuntu 18+:
kubectl get secrets/db-user-pass --template={{.data.password}} | base64 -d

Kubernetes 1.11+
kubectl get secrets/db-user-pass --template='{{.data.password | base64decode }}'

This one-liner is used to get an encoded kubeconfig file from a secret and generate a file from it, to be used dynamically in a CI job, for example:
kubectl get secret YOUR_SECRET -o json | grep -oP '(?<=\"YOUR_SECRET_KEY\": \")[^\"]*' | base64 --decode > ./YOUR_KUBECONFIG_FILE_NAME
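The lookbehind part can be tried without a cluster against a literal JSON snippet. This is a made-up example; the base64 value decodes to "apiVersion: v1". Note that -P (PCRE) requires GNU grep.

```shell
# Hypothetical stand-in for the output of `kubectl get secret YOUR_SECRET -o json`
json='{"data": {"kubeconfig": "YXBpVmVyc2lvbjogdjE="}}'
# The lookbehind (?<=...) matches the value without capturing the key prefix
printf '%s' "$json" | grep -oP '(?<="kubeconfig": ")[^"]*' | base64 --decode
# apiVersion: v1
```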

Extending Břetislav Hájek's solution (thank you very much for that).
If you need to get secrets by label, you'll need to add an extra range command to iterate over the returned items.
$ LABEL_FILTER="app.kubernetes.io/name=mysql-chart"
$ kubectl get secret -l "$LABEL_FILTER" -o go-template='
{{range $i := .items}}{{range $k,$v := $i.data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}{{end}}'
mysql_password: ...
mysql_root_password: ...
mysql_user: ...

With bash, running Ubuntu 18.04 and Kubernetes 1.18.5:
kubectl -n metallb-system get secrets memberlist -o json | grep secretkey | grep -v f:s | awk -F '"' '{print$4}' |base64 --decode; echo

Minimal nodejs CLI tool (github)
npm i -g kusd
kubectl get secret your-secret -o yaml | kusd

This helps if you have a YAML file of k8s secrets.
You can use this IntelliJ plugin to decode all base64-encoded values in a YAML file:
https://plugins.jetbrains.com/plugin/19099-yaml-base64-decoder

Related

Pass docker host ip as env var into devcontainer

I am trying to pass an environment variable into my devcontainer that is the output of a command run on my dev machine. I have tried the following in my devcontainer.json with no luck:
"initializeCommand": "export DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"",
"containerEnv": {
"DOCKER_HOST_IP1": "${localEnv:DOCKER_HOST_IP}",
"DOCKER_HOST_IP2": "${containerEnv:DOCKER_HOST_IP}"
},
and
"runArgs": [
"-e DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"
],
(the point of the ifconfig/grep piped command is to provide me with the IP of my docker host which is running via Docker for Desktop (Mac))
Some more context
Within my devcontainer I am running some kubectl deployments (to a cluster running on Docker for Desktop) where I would like to configure a hostAlias for a pod (docs) such that that pod will direct requests to https://api.cancourier.local to the ip of the docker host (which would then hit an ingress I have configured for that CNAME).
I could just pass in the output of the ifconfig command to my kubectl command when running from within the devcontainer. The problem is that I get two different results from this depending on whether I am running it on my host (10.0.0.89) or from within the devcontainer (10.1.0.1). 10.0.0.89 in this case is the "correct" IP as if I curl this from within my devcontainer, or my deployed pod, I get the response I'd expect from my ingress.
I'm also aware that I could just use the name of my k8s service (in this case api) to communicate between pods, but this isn't ideal. As for why, I'm running a Next.js application in a pod. The Next.js app on this pod has two "contexts":
my browser - the app serves up static HTML/JS to my browser where communicating with https://api.cancourier.local works fine
on the pod itself - running some things (i.e. _middleware) on the pod itself, where the pod does not currently know what https://api.cancourier.local resolves to
What I was doing to temporarily get around this was to have a separate config on the pod, one for the "browser context" and the other for things running on the pod itself. This is less than ideal as when I go to deploy this Next.js app (to Vercel) it won't be an issue (as my API will be deployed on some publicly accessible CNAME). If I can accomplish what I was trying to do above, I'd be able to avoid this.
So I didn't end up finding a way to pass the output of a command run on the host machine as an env var into my devcontainer. However I did find a way to get the "correct" docker host IP and pass this along to my pod.
In my devcontainer.json I have this:
"runArgs": [
// https://stackoverflow.com/a/43541732/3902555
"--add-host=api.cancourier.local:host-gateway",
"--add-host=cancourier.local:host-gateway"
],
which augments the devcontainer's /etc/hosts with:
192.168.65.2 api.cancourier.local
192.168.65.2 cancourier.local
then in my Makefile where I store my kubectl commands I am simply running:
deploy-the-things:
	DOCKER_HOST_IP = $(shell cat /etc/hosts | grep 'api.cancourier.local' | awk '{print $$1}')
	helm upgrade $(helm_release_name) $(charts_location) \
		--install \
		--namespace=$(local_namespace) \
		--create-namespace \
		-f $(charts_location)/values.yaml \
		-f $(charts_location)/local.yaml \
		--set cwd=$(HOST_PROJECT_PATH) \
		--set dockerHostIp=$(DOCKER_HOST_IP) \
		--debug \
		--wait
then within my helm chart I can use the following for the pod running my Next.js app:
hostAliases:
  - ip: {{ .Values.dockerHostIp }}
    hostnames:
      - "api.cancourier.local"
Highly recommend following this tutorial: Container environment variables
In this tutorial, 2 methods are mentioned:
Adding individual variables
Using env file
Choose whichever is more comfortable for you. Good luck!

Creating parameterized GitLab personal access token from CLI

Based on this example:
sudo gitlab-rails runner "token = User.find_by_username('automation-bot').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save!"
I've created an equivalent working command for docker that creates the personalised access token from the CLI:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username('root').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation token'); token.set_token('token-string-here123'); token.save! \"")"
However, when trying to parameterize that command, I am experiencing slight difficulties with the single quote. For example, when I try:
output="$(sudo docker exec -i 5303124d7b87 bash -c "gitlab-rails runner \"token = User.find_by_username($gitlab_username).personal_access_tokens.create(scopes: [:read_user, :read_repository], name: 'Automation-token'); token.set_token('token-string-here123'); token.save! \"")"
It returns:
undefined local variable or method `root' for main:Object
Hence, I would like to ask, how can I substitute 'root' with a variable $gitlab_username that has value root?
I believe the error was not in the command itself, as I had incorrectly assumed, but mostly in the variables that I passed into it. The username contained a trailing carriage-return character, which broke up the command. Hence, I included a trim step that removes those characters from the incoming variables. The following function successfully creates a personal access token in GitLab:
create_gitlab_personal_access_token() {
  docker_container_id=$(get_docker_container_id_of_gitlab_server)

  # Trim carriage returns from the incoming variables
  personal_access_token=$(echo $GITLAB_PERSONAL_ACCESS_TOKEN | tr -d '\r')
  gitlab_username=$(echo $gitlab_server_account | tr -d '\r')
  token_name=$(echo $GITLAB_PERSONAL_ACCESS_TOKEN_NAME | tr -d '\r')

  # Create a personal access token
  output="$(sudo docker exec -i $docker_container_id bash -c "gitlab-rails runner \"token = User.find_by_username('$gitlab_username').personal_access_tokens.create(scopes: [:read_user, :read_repository], name: '$token_name'); token.set_token('$personal_access_token'); token.save! \"")"
}
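The trim step can be seen in isolation. A value read from a Windows-edited .env file often carries a carriage return that silently breaks the quoting; the value below is made up:

```shell
# Simulate a CRLF-tainted variable ($(...) strips trailing newlines but not \r)
raw=$(printf 'root\r')
# tr -d '\r' removes the carriage return so the value interpolates cleanly
clean=$(printf '%s' "$raw" | tr -d '\r')
printf '%s\n' "$clean"   # root
```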

How to edit all the deployment of kubernetes at a time

We have hundreds of deployments, and in the config we have imagePullPolicy set to "IfNotPresent" for most of them and to "Always" for a few. Now I want to modify all deployments that have "IfNotPresent" to "Always".
How can we achieve this at a stroke?
Ex:
kubectl get deployment -n test -o json | jq '.spec.template.spec.containers[0].imagePullPolicy="IfNotPresent"' | kubectl -n test replace -f -
The above command helps to reset it for one particular deployment.
Kubernetes doesn't natively offer mass update capabilities. For that you'd have to use other CLI tools. That being said, for modifying existing resources, you can also use the kubectl patch function.
The script below isn't pretty, but will update all deployments in the namespace.
kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
Note: I used sed to strip the resource type from the name as kubectl doesn't recognize operations performed on resources of type deployment.extensions (and probably others).
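The sed step can be checked on its own against sample `kubectl get deployments -o name` output (the deployment names below are made up):

```shell
# `kubectl get deployments -o name` prints prefixed names like these;
# sed strips everything up to the last "/" so `kubectl patch deployment <name>` works
printf '%s\n' 'deployment.apps/my-app' 'deployment.apps/billing-api' \
  | sed -e 's/.*\///g'
# my-app
# billing-api
```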

Kubernetes: Is it possible to hit multiple pods with a single request in a Kubernetes cluster

I want to clear cache in all the pods in my Kubernetes namespace. I want to send one request to the end-point which will then send a HTTP call to all the pods in the namespace to clear cache. Currently, I can hit only one pod using Kubernetes and I do not have control over which pod would get hit.
Even though the load-balancer is set to RR, continuously hitting the pods (n times, where n is the total number of pods) doesn't help, as other requests can creep in.
The same issue was discussed here, but I couldn't find a solution for the implementation:
https://github.com/kubernetes/kubernetes/issues/18755
I'm trying to implement the clearing cache part using Hazelcast, wherein I will store all the cache and Hazelcast automatically takes care of the cache update.
If there is an alternative approach for this problem, or a way to configure kubernetes to hit all end-points for some specific requests, sharing here would be a great help.
Provided you have kubectl in your pod and access to the api-server, you can get all endpoint addresses and pass them to curl:
kubectl get endpoints <servicename> \
-o jsonpath="{.subsets[*].addresses[*].ip}" | xargs curl
Alternative without kubectl in the pod:
The recommended way to access the API server from a pod is by using kubectl proxy: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod. This would of course add at least the same overhead. Alternatively, you could call the REST API directly; you'd have to provide the token manually.
APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(kubectl describe secret $(kubectl get secrets \
| grep ^default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d " ")
If you provide the APISERVER and TOKEN variables, you don't need kubectl in your pod; this way you only need curl to access the api server and jq to parse the JSON output:
curl $APISERVER/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
UPDATE (final version)
APISERVER can usually be set to kubernetes.default.svc, and the token should be available at /var/run/secrets/kubernetes.io/serviceaccount/token in the pod, so there is no need to provide anything manually:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); \
curl https://kubernetes.default.svc/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
jq is available here: https://stedolan.github.io/jq/download/ (< 4 MiB, but worth it for easily parsing JSON)
UPDATE: I published this article for this approach.
I have had a similar situation. Here is how I resolved it (I'm using a namespace other than "default").
Setup access to cluster Using RBAC Authorization
Access to API is done by creating a ServiceAccount, assign it to the Pod and bind a Role to it.
1. Create a ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-serviceaccount
  namespace: my-namespace
2. Create a Role: in this section you need to provide the list of resources and the list of actions you'd like to have access to. Here is an example where you'd like to list the endpoints and also get the details of a specific endpoint.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list"]
3. Bind the Role to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-role-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: my-serviceaccount
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
4. Assign the ServiceAccount to the pods in your Deployment (it should go under template.spec):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      serviceAccountName: my-serviceaccount
      containers:
        - name: my-pod
          ...
Access Clusters Using the Kubernetes API
Having all the security aspects set, you will have enough privileges to access the API from within your Pod. All the information required to communicate with the API server is mounted under /var/run/secrets/kubernetes.io/serviceaccount in your Pod.
You can use the following shell script (probably add it to your COMMAND or ENTRYPOINT of the Docker image).
#!/bin/bash
# Point to the internal API server hostname
API_SERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICE_ACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICE_ACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT}/token)
# Reference the internal certificate authority (CA)
CA_CERT=${SERVICE_ACCOUNT}/ca.crt
From this point forward, it is just a simple REST API call. You can read these environment variables in any language of your choice and access the API.
Here is an example of listing the endpoints for your use case:
# List all the endpoints in the namespace that Pod is running
curl --cacert ${CA_CERT} --header "Authorization: Bearer ${TOKEN}" -X GET \
"${API_SERVER}/api/v1/namespaces/${NAMESPACE}/endpoints"
# List all the endpoints in the namespace that Pod is running for a deployment
curl --cacert ${CA_CERT} --header "Authorization: Bearer ${TOKEN}" -X GET \
"${API_SERVER}/api/v1/namespaces/${NAMESPACE}/endpoints/my-deployment"
For more information on available API endpoints and how to call them, refer to API Reference.
For those of you trying to find an alternative, I have used hazelcast as distributed event listener. Added a similar POC on github: https://github.com/vinrar/HazelcastAsEventListener
I fixed this problem by using this script. You just have to write the equivalent command to make the API call. I used curl to do that.
Following is the usage of the script:
function usage {
  echo "usage: $PROGNAME [-n NAMESPACE] [-m MAX-PODS] -s SERVICE -- COMMAND"
  echo "  -s SERVICE    K8s service, i.e. a pod selector (required)"
  echo "  COMMAND       Command to execute on the pods"
  echo "  -n NAMESPACE  K8s namespace (optional)"
  echo "  -m MAX-PODS   Max number of pods to run on (optional; default=all)"
  echo "  -q            Quiet mode"
  echo "  -d            Dry run (don't actually exec)"
}
For example to run command curl http://google.com on all pods of a service with name s1 and namespace n1, you need to execute ./kcdo -s s1 -n n1 -- curl http://google.com.
I needed access to all the pods so I could change the log level on a class, so from inside one of the pods I did:
# Change level to DEBUG
host <service-name> | awk '{print $4}' | while read line; do
  curl --location --request POST "http://$line:9111/actuator/loggers/com.foo.MyClassName" \
    --header 'Content-Type: application/json' \
    --data-raw '{"configuredLevel": "DEBUG"}'
done

# Query level on all pods
host <service-name> | awk '{print $4}' | while read line; do
  curl --location --request GET "http://$line:9111/actuator/loggers/com.foo.MyClassName"
  echo
done
You need host and curl to execute it.
Not sure if this is good practice.

How can I use xargs to recursively parse email addresses out of text/html files?

I tried recursively parsing email addresses from a directory of text/html files with xargs and grep, but this command keeps including the file path (I just want the email addresses in my resulting emails.csv file):
find . -type f | xargs grep -E -o "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}\b" >> ~/emails.csv
Can you explain what's wrong with my grep command? I don't need the output sorted or unique; I want to match all occurrences of email addresses in the files. I need to use xargs because I'm parsing 20 GB worth of text files.
Thanks.
When you tell grep to search in more than one file, it prepends the corresponding filename to the search result. Try the following to see the effect...
First, search in a single file:
grep local /etc/hosts
# localhost is used to configure the loopback interface
127.0.0.1 localhost
Now search in two files:
grep local /etc/hosts /dev/null
/etc/hosts:# localhost is used to configure the loopback interface
/etc/hosts:127.0.0.1 localhost
To suppress the filename in which the match was found, add the -h switch to grep like this
grep -h <something> <somewhere>
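A self-contained sketch of the -h fix, using temporary files with made-up addresses:

```shell
tmpdir=$(mktemp -d)
printf 'contact alice@example.com today\n' > "$tmpdir/a.html"
printf 'or bob@example.org instead\n'      > "$tmpdir/b.html"
# -h suppresses the "filename:" prefix grep adds when given multiple files
grep -h -E -o "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}" "$tmpdir"/*.html
# alice@example.com
# bob@example.org
rm -r "$tmpdir"
```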
