Trying to access db-user-pass secret - docker

I inherited a Kubernetes/Docker setup. I am trying to recreate a dev environment exactly as it is (with a new name) on a separate cluster. Sorry if my question is a bit ignorant; while I've mostly picked up Kubernetes/Docker, I'm still pretty new at it.
I've copied all of the information over to the new cluster and launched it via kubectl and the old YAML. I am also using the old image, which to my knowledge should contain the relevant secrets.
However, I am getting an error about a missing secret, db-user-pass.
I have checked the included secrets directory in my state store for kops (on S3). These are the pod events:
Warning FailedScheduling 22m (x3 over 22m) default-scheduler No nodes are available that match all of the predicates: Insufficient memory (2), PodToleratesNodeTaints (1).
Normal Scheduled 22m default-scheduler Successfully assigned name-keycloak-7c4c57cbdf-9g2n2 to ip-ip.address.goes.here.us-east-2.compute.internal
Normal SuccessfulMountVolume 22m kubelet, ip-ip.address.goes.here.us-east-2.compute.internal MountVolume.SetUp succeeded for volume "default-token-2vb5x"
Normal Pulled 21m (x6 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal Successfully pulled image "image.location.amazonaws.com/dev-name-keycloak"
Warning Failed 21m (x6 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal Error: secrets "db-user-pass" not found
Warning FailedSync 21m (x6 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal Error syncing pod
Normal Pulling 2m (x90 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal pulling image "image.location.amazonaws.com/dev-name-keycloak"
What exactly am I misunderstanding here? Is it maybe that Kubernetes is trying to assign a variable based on a value in my YAML which is also set on the Docker image, but isn't available to Kubernetes? Should I just copy all of the secrets manually from one cluster to the other (or export them to YAML and include them in my application)?
I'm strongly guessing that export + put into my existing setup is probably the best way forward to give all of the credentials to the pod.
Any guidance or ideas would be welcome here.

Could you please check if you have a secret named "db-user-pass" in your old cluster?
You can check that via:
ubuntu@sal-k-m:~$ kubectl get secrets
OR (if it's in a different namespace)
ubuntu@sal-k-m:~$ kubectl get secrets -n web
If the secret is there then you need to --export it as well and create it in the new cluster.
kubectl get secret db-user-pass -n web -o yaml --export > db-user-pass.yaml
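Once you have the YAML, you can load it into the new cluster. A minimal sketch, assuming your kubectl context already points at the new cluster and the target namespace exists (context name is a placeholder):
kubectl config use-context <new-cluster-context>
kubectl apply -f db-user-pass.yaml -n web
Alternatively, if you know the credentials, you can recreate the secret directly instead of exporting it (key names here are illustrative):
kubectl create secret generic db-user-pass --from-literal=username=<db-user> --from-literal=password=<db-password> -n web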
You can find more details about the secret in this doc.
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/

Related

kubectl deploy from within kubernetes container

How do you deploy from within a Kubernetes container using CI/CD?
Scenario:
I am building within Kubernetes using Kaniko.
Now, how do I run kubectl within Kubernetes? I do have the right serviceAccount for it. The first problem is to have a container ready for executing kubectl.
Note: - /bin/cat
I found this, but it gives errors:
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-deploy
spec:
  containers:
  - name: kubectl
    image: bitnami/kubectl:latest
    imagePullPolicy: Always
    command:
    - /bin/cat
    tty: true
Errors:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 78s default-scheduler Successfully assigned default/kubectl-deploy to master
Normal Pulled 76s kubelet Successfully pulled image "bitnami/kubectl:latest" in 874.059036ms
Normal Pulled 74s kubelet Successfully pulled image "bitnami/kubectl:latest" in 860.59161ms
Normal Pulled 60s kubelet Successfully pulled image "bitnami/kubectl:latest" in 859.31958ms
Normal Pulling 33s (x4 over 77s) kubelet Pulling image "bitnami/kubectl:latest"
Normal Created 32s (x4 over 76s) kubelet Created container kubectl
Normal Started 32s (x4 over 76s) kubelet Started container kubectl
Normal Pulled 32s kubelet Successfully pulled image "bitnami/kubectl:latest" in 849.398179ms
Warning BackOff 7s (x7 over 73s) kubelet Back-off restarting failed container
I found this, but it gives errors
When you run a Pod in Kubernetes, by default it is expected to be a long-running service. But in your case, you run a one-off command that terminates immediately. To run one-off commands in Kubernetes, it is easiest to run them as Kubernetes Jobs.
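A minimal sketch of such a Job, assuming a service account named deploy-sa and a deployment/image that are placeholders (substitute your own names):
apiVersion: batch/v1
kind: Job
metadata:
  name: kubectl-deploy
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: deploy-sa   # hypothetical; use the service account you already have
      restartPolicy: Never            # a Job should not restart forever like a long-running Pod
      containers:
      - name: kubectl
        image: bitnami/kubectl:latest
        command:
        - kubectl
        - set
        - image
        - deployment/my-app                              # placeholder deployment
        - my-app=registry.example.com/my-app:latest      # placeholder image
Apply it with kubectl apply -f job.yaml and inspect the result with kubectl logs job/kubectl-deploy once it completes.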
First problem is to have a container ready for executing kubectl.
Since you are using Tekton, have a look at the "deploy task" from Tekton Hub; it is configured with an image that includes kubectl.

GCP Kubernetes not using service account for pulling docker images

I'm using the latest version of google-kubernetes (1.22.8-gke.202) in a Kubernetes managed cluster. I also have a custom service account that has access to the "Artifact Registry Reader" scope that should grant it permission to pull private images from the repository - calling this custom-service-account.
I've validated that the nodes themselves have the custom-service-account service account linked to them within Compute Engine. Kubernetes is set up with a service account that is linked to the IAM service account of the same name through Workload Identity. However, when I try to spawn a pod that pulls from my private repo, it fails indefinitely.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 21m (x3 over 24m) default-scheduler 0/2 nodes are available: 2 node(s) were unschedulable.
Warning FailedScheduling 19m default-scheduler no nodes available to schedule pods
Normal NotTriggerScaleUp 18m (x25 over 24m) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had taint {reserved-pool: true}, that the pod didn't tolerate
Normal Scheduled 18m default-scheduler Successfully assigned default/test-service-a-deployment-5757fc5797-b54gx to gke-personal-XXXX--personal-XXXX--ac9a05b6-16sb
Normal Pulling 17m (x4 over 18m) kubelet Pulling image "us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest"
Warning Failed 17m (x4 over 18m) kubelet Failed to pull image "us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest": rpc error: code = Unknown desc = failed to pull and unpack image "us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest": failed to resolve reference "us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
Warning Failed 17m (x4 over 18m) kubelet Error: ErrImagePull
Warning Failed 16m (x6 over 18m) kubelet Error: ImagePullBackOff
Normal BackOff 3m27s (x65 over 18m) kubelet Back-off pulling image "us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest"
I've also ssh'ed into the nodes themselves, and at least by default a regular docker pull or crictl pull sees this same error.
So, the specific questions I have:
How is GCP injecting the service account credentials into the Kubernetes/Docker worker that tries to launch the images? Is it expected that the regular docker command doesn't seem to have these credentials?
Do I need to manually bootstrap some additional authentication for Kubernetes aside from just inheriting the service account on the pods?
EDIT: Results of the suggested checks:
> gcloud container clusters describe personal-XXXX-gke --zone us-central1-a --format="value(workloadIdentityConfig.workloadPool)"
personal-XXXX.svc.id.goog
> gcloud container node-pools describe personal-XXXX-gke-node-pool --cluster personal-XXXX-gke --format="value(config.workloadMetadataConfig.mode)" --zone us-central1-a
GKE_METADATA
> kubectl describe serviceaccount --namespace default be-service-account
Name: be-service-account
Namespace: default
Labels: <none>
Annotations: iam.gke.io/gcp-service-account: custom-service-account@personal-XXXX.iam.gserviceaccount.com
Image pull secrets: <none>
Mountable secrets: be-service-account-token-jmss9
Tokens: be-service-account-token-jmss9
Events: <none>
> gcloud iam service-accounts get-iam-policy custom-service-account@personal-XXXX.iam.gserviceaccount.com
bindings:
- members:
  - serviceAccount:personal-XXXX.svc.id.goog[default/be-service-account]
  role: roles/iam.workloadIdentityUser
etag: BwXjqJ9DC6A=
version: 1
When checking for access to Artifact Registry, please check permissions and scopes as per this documentation.
Depending on how your cluster is created, various scopes are added: https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes#create_with_sa
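To see which access scopes your nodes actually have, a quick check along these lines (cluster and node pool names taken from the question):
gcloud container node-pools describe personal-XXXX-gke-node-pool \
  --cluster personal-XXXX-gke --zone us-central1-a \
  --format="value(config.oauthScopes)"
If the cloud-platform scope (or an equivalent registry read scope) is missing, image pulls can fail with 403 even when the IAM role binding is correct.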
In my case, I created an Autopilot cluster from the console (UI) and did everything you did w.r.t. linking service accounts; it turns out the default service account that gets applied does not get the cloud-platform scope.
I ended up re-creating the cluster with the right (non-default) service account for my Autopilot nodes: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes. I will most likely use the CLI for future creations.
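For reference, a hedged sketch of creating a Standard cluster with a non-default service account and the cloud-platform scope (cluster name is a placeholder; for Autopilot the equivalent is gcloud container clusters create-auto with --service-account):
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --service-account custom-service-account@personal-XXXX.iam.gserviceaccount.com \
  --scopes https://www.googleapis.com/auth/cloud-platform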

Getting an error when trying to find a local image with helm/docker

I have a local Kubernetes cluster (minikube) that is trying to load images from my local Docker repo.
When I do a "docker images", I get:
cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
Given I know the above images are there, I run some helm commands which use these images, but I get the below error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
Anyone know how I can fix this? Seems the biggest problem is Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Since minikube is being used, you can refer to their documentation.
It is recommended that if an imagePullPolicy is being used, it be set to IfNotPresent or Never. If set to Always, Kubernetes will try to reach out and pull from the network.
From docs: https://minikube.sigs.k8s.io/docs/handbook/pushing/
"Tip 1: Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won’t use your locally build image and it will pull from the network."
Add cluster.local to the /etc/hosts file on all your Kubernetes nodes.
192.168.12.34 cluster.local
Check whether you can log in to the registry using docker login cluster.local.
If your registry has self-signed certificates, copy the cluster.local.crt certificate to /etc/docker/certs.d/cluster.local/ca.crt on all Kubernetes worker nodes.
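A hedged sketch of that certificate step on one worker node, assuming the certificate file is named cluster.local.crt and Docker is the container runtime:
sudo mkdir -p /etc/docker/certs.d/cluster.local
sudo cp cluster.local.crt /etc/docker/certs.d/cluster.local/ca.crt
sudo systemctl restart docker   # pick up the new CA certificate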

EKS Docker Image Pull CrashLoopBackOff

I'm trying to deploy a Docker image from ECR to my EKS. When attempting to deploy my docker image to a pod, I get the following events from a CrashLoopBackOff:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 62s default-scheduler Successfully assigned default/mlflow-tracking-server to <EC2 IP>.internal
Normal SuccessfulAttachVolume 60s attachdetach-controller AttachVolume.Attach succeeded for volume "<PVC>"
Normal Pulling 56s kubelet, <IP>.ec2.internal Pulling image "<ECR Image UI>"
Normal Pulled 56s kubelet, <IP>.ec2.internal Successfully pulled image "<ECR Image UI>"
Normal Created 7s (x4 over 56s) kubelet, <IP>.ec2.internal Created container mlflow-tracking-server
Normal Pulled 7s (x3 over 54s) kubelet, <IP>.ec2.internal Container image "<ECR Image UI>" already present on machine
Normal Started 6s (x4 over 56s) kubelet, <IP>.ec2.internal Started container mlflow-tracking-server
Warning BackOff 4s (x5 over 52s) kubelet, <IP>.ec2.internal Back-off restarting failed container
I don't understand why it keeps looping like this and failing. Would anyone know why this is happening?
CrashLoopBackOff can be related to these possible reasons:
- the application inside your pod is not starting due to an error;
- the image your pod is based on is not present in the registry, or the node where your pod has been scheduled cannot pull it from the registry;
- some parameters of the pod have not been configured correctly.
In your case it seems to be an application error inside the container.
Try to view the logs with:
kubectl logs <your_pod> -n <namespace>
For more info on how to troubleshoot this kind of error refer to:
https://pillsfromtheweb.blogspot.com/2020/05/troubleshooting-kubernetes.html
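If the container restarts too quickly to capture output, the logs of the previous (crashed) instance and the pod description are usually the next step; a minimal sketch:
kubectl logs <your_pod> -n <namespace> --previous   # logs from the last crashed container
kubectl describe pod <your_pod> -n <namespace>      # events and last state, including the exit code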
The process inside the container is crashing. It could be due to the entrypoint of the Docker base image.
You can try something like this to check the logs of the container:
kubectl logs -f <pod_name>

Kubernetes can't pull images from docker hub repository

Hello guys, hope you're well!
I need my master machine to order the slave to pull the image from my Docker Hub repo, and I get the error below. It doesn't let the slave pull from the repo, but when I go to the slave and pull manually, it works.
This is from the Kubernetes master:
The first lines are a describe of pod my-app-6c99bd7b9c-dqd6l, which is running now because I manually pulled the image from Docker Hub, but I want Kubernetes to do it.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/my-app2-74969ddd4f-l6d6l to kubeslave.machine.pt
Normal SandboxChanged <invalid> kubelet, kubeslave.machine.pt Pod sandbox changed, it will be killed and re-created.
Warning Failed <invalid> (x3 over <invalid>) kubelet, kubeslave.machine.pt Failed to pull image "bedjase/repository/my-java-app:my-java-app": rpc error: code = Unknown desc = Error response from daemon: pull access denied for bedjase/repository/my-java-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed <invalid> (x3 over <invalid>) kubelet, kubeslave.machine.pt Error: ErrImagePull
Normal BackOff <invalid> (x7 over <invalid>) kubelet, kubeslave.machine.pt Back-off pulling image "bedjase/repository/my-java-app:my-java-app"
Warning Failed <invalid> (x7 over <invalid>) kubelet, kubeslave.machine.pt Error: ImagePullBackOff
Normal Pulling <invalid> (x4 over <invalid>) kubelet, kubeslave.machine.pt Pulling image "bedjase/repository/my-java-app:my-java-app"
[root@kubernetes ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-6c99bd7b9c-dqd6l 1/1 Running 0 14m
my-app2-74969ddd4f-l6d6l 0/1 ImagePullBackOff 0 2m20s
nginx-86c57db685-bxkpl 1/1 Running 0 8h
This from slave:
[root@kubeslave docker]# docker pull bedjase/repository:my-java-app
my-java-app: Pulling from bedjase/repository
50e431f79093: Already exists
dd8c6d374ea5: Already exists
c85513200d84: Already exists
55769680e827: Already exists
e27ce2095ec2: Already exists
5943eea6cb7c: Already exists
3ed8ceae72a6: Already exists
7ba151cdc926: Already exists
Digest: sha256:c765d09bdda42a4ab682b00f572fdfc4bbcec0b297e9f7716b3e3dbd756ba4f8
Status: Downloaded newer image for bedjase/repository:my-java-app
docker.io/bedjase/repository:my-java-app
I already made the login in both master and slave to docker hub repo and succeed.
Both have /etc/hosts ok, also nodes are connected and ready:
[root@kubernetes ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes.machine.pt Ready master 26h v1.17.4
kubeslave.machine.pt Ready <none> 26h v1.17.4
Am I missing some point here?
For private images you must create a secret with your Docker Hub username and password so that Kubernetes is able to pull the image.
The command below creates a secret named regcred with your Docker Hub credentials; replace the fields <your-name>, <your-password> and <your-email>:
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-name> --docker-password=<your-password> --docker-email=<your-email>
After that you need to tell your pod/deployment spec to use these credentials to pull your private image by adding imagePullSecrets referencing the secret created above; see this example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
References:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
Just to add to the other answers,
1) Create a secret with the following command:
Create a secret for pulling docker images
2) Create your pod that uses this secret as described here:
use the secret in pod
A detailed script to create the secret and another script to patch all the service accounts can be found in my answer here:
How to pull image from dockerhub in kubernetes?
Patching all the service accounts will allow all your k8s namespaces to pull any image from dockerhub without changing the k8s deploy manifests.
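The patching approach described there boils down to attaching the pull secret to each namespace's default service account; a hedged one-liner sketch for a single namespace, assuming the regcred secret from above already exists there:
kubectl patch serviceaccount default -n default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
Pods in that namespace then pull with regcred without listing imagePullSecrets in every manifest.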
