Spring Cloud Data Flow with Docker images from private repo - imagePullSecrets not being used. Can't pull image - docker

So I am unable to launch a custom task application stored in a private Docker repo. All my Docker images in Kubernetes are pulled from this private repo, so the imagePullSecrets itself works fine, but it seems it is not being used by Spring Cloud Data Flow when deploying the task to Kubernetes. If I inspect the pod there is no imagePullSecrets set.
The error I get is:
xxxxx- no basic auth credentials
The server has been deployed with the ENV variable which the guide states will fix this:
- name: SPRING_CLOUD_DEPLOYER_KUBERNETES_IMAGE_PULL_SECRET
  value: regcred
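(For reference, that variable sits in the env section of the SCDF server Deployment's container spec, roughly as sketched below; the container name and image reference are illustrative, not taken from my actual manifest.)
containers:
  - name: scdf-server
    image: springcloud/spring-cloud-dataflow-server:<version>   # illustrative image/tag
    env:
      - name: SPRING_CLOUD_DEPLOYER_KUBERNETES_IMAGE_PULL_SECRET
        value: regcred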
I have even tried to add custom properties on a per-application basis.
I have read through the guide HERE
I am running the following versions:
Kubernetes 1.15 &
I have been stuck on this issue for weeks and simply can't find a solution. I'm hoping somebody has seen this issue and managed to solve it before?
Is there something else I'm missing?

So I found that if I do the following it pulls the image (it seems I put this in the wrong place, as the documentation doesn't clearly specify where and how).
But using the global environment variable as stated above still does not seem to work.

Using the environment variable SPRING_CLOUD_DEPLOYER_KUBERNETES_IMAGE_PULL_SECRET also didn't work for me.
An alternative that made it work in my case is adding the following to the application.yaml of the SCDF Server in Kubernetes:
application.yaml
spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              default:
                imagePullSecret: <your_secret>
Or, when you are using a custom SCDF image like I do, you can of course specify it as an argument:
deployment.yaml
[...]
command: ["java", "-jar", "spring-cloud-dataflow-server.jar"]
args:
  - --spring.cloud.dataflow.task.platform.kubernetes.accounts.default.imagePullSecret=<your_secret>
[...]
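(For completeness, the pull secret itself, regcred in the example above, is a standard docker-registry secret that can be created like this; the registry values are placeholders.)
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>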
More details on https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/

Related

Gitlab Kubernetes Agent - error: error loading config file "/root/.kube/config": open /root/.kube/config: permission denied

I am trying to set up a Gitlab Kubernetes Agent in a small self-hosted k3s cluster.
I am however getting an error:
$ kubectl config get-contexts
error: error loading config file "/root/.kube/config": open /root/.kube/config: permission denied
I have been following the steps in the documentation found here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html
I got the agent installed and registered so far.
I also found a pipeline kubectl example here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#update-your-gitlab-ciyml-file-to-run-kubectl-commands
Using the one below gives the error:
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods
I do not know what is missing. The script itself seems a bit confusing as there is nothing telling the container how to access the cluster.
Looking further down there is also one for doing both certificate-based and agent-based connections. However I have no knowledge of either so I cannot tell if there is something extra in this that I should actually be adding.
Also, if it makes a difference, the runner is also self-hosted and configured to use the Docker executor.
The agent is set up without a configuration file. I wanted to keep it as simple as possible and take it from there.
Anyone know what should be changed/added to fix the issue?
EDIT:
I took a step back and disregarded the agent approach. I put the kubeconfig in a GitLab variable and used that in the Kubernetes image. This is good enough for now, and it is a relief to finally have something working and be able to push to my cluster from a pipeline. After well over 15 hours spent on the agent I have had enough. Only after several hours did I figure out that the agent is not just about security and so on, but is also intended for syncing a repo and a cluster without pipelines. This was very poorly presented and, as someone who has done neither, it completely escaped me. The steps in the docs I followed seem to be a mixture of both, which does not exactly help.
I will wait some months and see if some proper guides are released somewhere by then.
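(For reference, a minimal sketch of the kubeconfig-in-a-variable approach described in the edit, assuming a file-type CI/CD variable named KUBECONFIG that contains the cluster's kubeconfig; GitLab exposes a file-type variable as a file path, which kubectl picks up from the environment.)
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # KUBECONFIG is assumed to be a file-type CI/CD variable holding the kubeconfig contents
    - kubectl config get-contexts
    - kubectl get pods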

k8s management/handling of secrets inside container

I'm currently migrating my Docker deployment to k8s manifests and I was wondering about the handling of secrets. Currently my Docker container reads /run/secrets/app_secret_key to get the sensitive information as an env var inside the container. Does that have any benefit compared to the Kubernetes secrets handling? On the other side, I can also do something like this in my manifest.yaml:
env:
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-password
        key: password
which then directly exposes the secret as an env variable inside the container ...
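(For context, the Secret referenced by that secretKeyRef could be created along these lines; the literal value is a placeholder.)
kubectl create secret generic mysql-password --from-literal=password='<your-password>'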
The only difference I was able to notice is that if I fetch /run/secrets/app_secret_key inside the container like so (docker-entrypoint.sh):
export APP_SECRET_KEY="$(cat /run/secrets/app_secret_key)"
the env var is not visible when I access the container after deployment; it seems that the env var is only available in the "session" where docker-entrypoint.sh is initially triggered (at container/pod startup).
So my question now is: what makes more sense here, simply going with the env: statement shown above, or staying with manually fetching /run/secrets/app_secret_key inside the container ...
Thanks in advance
To be frank, both are different implementations of the same thing and you can choose either one, but I would prefer the Kubernetes approach of mounting the secret over having the container read it at run time, simply because of visibility.
It won't matter if you are looking at one container, but when we have 30-40+ microservices running across 4-5+ environments with 100 or even 200 secrets, and one deployment goes wrong, we can look at the deployment manifests and figure out the entire application. We don't have to search through Dockerfiles to understand what is happening.
Exposing the secret as an env var or as a file is just a flavor of using the secret the k8s way.
Some secrets, like a password, are just a one-line string, so it's convenient to use them as env vars. Other secrets, like an SSH private key or a TLS certificate, can span multiple lines, which is why you can mount the secret as a volume instead.
Still, it's recommended to declare your secrets as k8s Secret resources. That way you can fetch the value via kubectl without having to go inside the container. You can also make a template, like a Helm chart, that generates the Secret manifests at deployment. With RBAC, you can also control who can read the Secret manifests.
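(For illustration, a minimal sketch of mounting a secret as a volume inside a Pod spec; the secret name, mount path, and image are assumptions.)
containers:
  - name: app
    image: <your-image>
    volumeMounts:
      - name: tls-cert
        mountPath: /etc/tls
        readOnly: true
volumes:
  - name: tls-cert
    secret:
      secretName: my-tls-secret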
As per your comments, yes any user that can go inside the container will have access to the resource available to the shell user.

Ssh config for Jenkins when using helm and k8s

So I have a k8s cluster and I am trying to deploy Jenkins using the following repo https://github.com/jenkinsci/helm-charts.
The main issue is that I am working behind a proxy, and when Git tries to pull (using the SSH protocol) it fails.
I am able to get around this by building my own Docker image from the provided one, installing socat, and using the following .ssh/config in the container:
Host my.git.repo
  # LogLevel DEBUG
  StrictHostKeyChecking no
  ProxyCommand /usr/bin/socat - PROXY:$HOST_PROXY:%h:%p,proxyport=3128
Is there a better way to do this? I was hoping to use the provided image and perhaps have a plugin that allowed something similar, but everywhere I look I can't seem to find anything.
Thanks for the help.
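(One direction that might avoid the custom image, sketched here as plain Kubernetes objects rather than chart-specific values: ship the SSH config via a ConfigMap and mount it into the Jenkins container. The names and mount path below are assumptions, and socat would still need to be available in the image.)
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-ssh-config
data:
  config: |
    Host my.git.repo
      StrictHostKeyChecking no
      ProxyCommand /usr/bin/socat - PROXY:$HOST_PROXY:%h:%p,proxyport=3128
and, in the Jenkins container/pod spec:
volumeMounts:
  - name: ssh-config
    mountPath: /var/jenkins_home/.ssh
volumes:
  - name: ssh-config
    configMap:
      name: jenkins-ssh-config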

Getting "unauthorized: authentication required" when pulling ACR images from Azure Kubernetes Service

I followed the guide here (Grant AKS access to ACR), but am still getting "unauthorized: authentication required" when a Pod is attempting to pull an image from ACR.
The bash script executed without any errors. I have tried deleting my Deployment and creating it from scratch with kubectl apply -f ..., with no luck.
I would like to avoid using the 2nd approach of using a secret.
The link you posted in the question gives the correct steps for Authenticate with Azure Container Registry from Azure Kubernetes Service. I tried it before and it works well.
So I suggest you check whether the service-principal-ID and service-principal-password are correct in the command:
kubectl create secret docker-registry acr-auth \
  --docker-server <acr-login-server> \
  --docker-username <service-principal-ID> \
  --docker-password <service-principal-password> \
  --docker-email <email-address>
You should also check that the secret you set in the YAML file is the same as the secret you created.
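(For reference, a secret created that way is then referenced from the Pod spec via imagePullSecrets, roughly as sketched below; the container name, image, and tag are placeholders.)
spec:
  containers:
    - name: my-app
      image: <acr-login-server>/my-app:<tag>
  imagePullSecrets:
    - name: acr-auth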
Jeff & Charles - I also experienced this issue, but found that the actual cause of the issue was that AKS was trying to pull an image tag from the container registry that didn't exist (e.g. latest). When I updated this to a tag that was available (e.g. 9) the deployment script on azure kubernetes service (AKS) worked successfully.
I've commented on the product feedback for the guide to request the error message context be improved to reflect this root cause.
Hope this helps! :)
In my case, I was having this problem because my clock was out of sync. I run on Windows Subsystem for Linux, so running sudo hwclock -s fixed my issue.
See this GitHub thread for longer discussion.
In my case, the Admin User was not enabled in the Azure Container Registry.
I had to enable it:
Go to "Container registries" page > Open your Registry > In the side pannel under Settings open Access keys and switch Admin user on. This generates a Username, a Password, and a Password2.

Ansible and restarting a docker service

I'm currently managing my Docker services using Ansible. The images are managed by a Docker Compose file which is in /opt/ipaccess/docker/.
My update process is as follows:
1. Stop services:
   docker_service: project_src=/opt/ipaccess/docker/ state=absent
2. Upload new Docker images and load them
3. Start services:
   docker_service: project_src=/opt/ipaccess/docker/ state=present
What I'm looking for is a way to only stop/start what I've uploaded (this I know as the impacted services are known in the playbook context).
I've read the documentation online but it's not clear how to tell the docker_service module to only stop/start a particular service.
Disclaimer: This is untested and seems to be undocumented.
Looking at the docs it seems like there is no way to do this. However, if we look at the code for the docker_service module (specifically the restart command cmd_restart), it looks like it takes a list of services or a falsy value (a falsy value causes it to restart all services). This command is only called by the cmd_up function, so we need state=present too.
So based on that, I'd expect:
docker_service:
  project_src: /opt/ipaccess/docker/
  services:
    - db
    - web
    - myotherservice
  restarted: true
  state: present
To restart the listed services only.
Let me know if that works for you, if so we should get those docs updated with a PR. If not post your error and we can work from there.
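(As a usage sketch building on the above: since the impacted services are known in the playbook context, the list could be driven by a variable; updated_services here is a hypothetical variable holding that list.)
- name: Restart only the services updated in this run
  docker_service:
    project_src: /opt/ipaccess/docker/
    services: "{{ updated_services }}"
    restarted: true
    state: present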
