Update container config values at runtime in an Azure Kubernetes Service (AKS) cluster - docker

I have a Docker image from ACR running successfully in my AKS cluster.
The Docker image has a configuration file with all credentials saved in it.
I want to change the values of the .config file at the time the Kubernetes deployment is created.
I am using a Helm chart for deployment.
Do I need to mention these values in the values.yaml file?
How do I specify which file inside the application needs to be updated with values from Azure Key Vault?
How can I achieve this?

You could use the Secrets Store CSI Driver for Kubernetes to consume secrets from Azure Key Vault.
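As a rough sketch of that approach, assuming the Secrets Store CSI Driver and its Azure provider are already installed in the cluster (the Key Vault name, tenant ID, and secret name below are placeholders):

```shell
# Sketch only: requires the Secrets Store CSI Driver + Azure Key Vault
# provider already installed. All names below are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-keyvault-secrets
spec:
  provider: azure
  parameters:
    keyvaultName: "my-keyvault"
    tenantId: "00000000-0000-0000-0000-000000000000"
    objects: |
      array:
        - |
          objectName: db-password    # secret name in Key Vault
          objectType: secret
EOF
```

Pods then mount this class as a CSI volume, and the application reads the secret from the mounted file instead of shipping credentials inside the image.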
When you have an AKS cluster and an ACR, why do you need to specify credentials at all? You can assign the role "AcrPull" to the AKS identity, and then AKS is allowed to pull images from your ACR.
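For reference, attaching the registry to the cluster (which assigns AcrPull for you) can be done with the Azure CLI; the resource group, cluster, and registry names below are placeholders:

```shell
# Grant the AKS kubelet identity pull access to the registry.
# "myResourceGroup", "myAKSCluster", and "myRegistry" are placeholders.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --attach-acr myRegistry
```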

Related

How do I deploy a GKE Workload with my Docker image from the Artifact Registry using Terraform?

I have a Kubernetes cluster that I have stood up with Terraform in GCP. Now I want to deploy/run my Docker image on it. From the GCP console I would do this by going to the Workloads section of the Kubernetes Engine portion of the console and then selecting "Deploy a containerized application". I, however, want to do this with Terraform, and am having difficulty determining how to do this and finding good reference examples. Any examples on how to do this would be appreciated.
Thank you!
You need to do 2 things:
For managing workloads on Kubernetes, you can use the Kubectl Terraform provider.
For custom images that are present in a 3rd-party registry, you'll need to create a Kubernetes secret of type docker-registry and then reference it in your manifests via the imagePullSecrets attribute. Check out this example.
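A sketch of the second point, done by hand with kubectl (the registry host, credentials, and image name are placeholders):

```shell
# Create a docker-registry secret holding the private-registry credentials.
# Server, username, and password are placeholders.
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

# Reference it from the pod spec via imagePullSecrets.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0
EOF
```

The same secret and imagePullSecrets reference can equally be expressed in Terraform resources instead of raw kubectl.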

eclipse che docker desktop installation is unable to pull images from private docker registry

The aim is to have a default workspace created for each new user.
The user will visit the link https://che-eclipse-che.192.168.0.1.nip.io/#https://github.com/test/eclipse-che
It has the devfile to create the workspace.
First, user registration will happen via Keycloak, and then the workspace will be created. This means a new Kubernetes namespace will also be created for the user.
The problem is that I need to use an image from a private Docker registry, but I'm unable to specify the authentication credentials in the devfile. Is there any way to achieve this?
I cannot use a Kubernetes secret because secrets are confined to a namespace.
Within Che, you can't configure your credentials to be used for every user.
Each user is supposed to configure their own credentials if they need access to private Docker repos. Check https://www.eclipse.org/che/docs/che-7/end-user-guide/using-private-container-registries/
What I can propose looking into:
configure nodes to authenticate to the private registry: https://kubernetes.io/docs/concepts/containers/images/#configuring-nodes-to-authenticate-to-a-private-registry;
push your images to a cluster-internal Docker registry;
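As a sketch of the second option (the internal registry address below is a made-up placeholder), retag and push the image so workspace pods can pull it without external credentials:

```shell
# Retag the image for a cluster-internal registry and push it there.
# "registry.container-registry.svc:5000" is a placeholder address;
# use whatever in-cluster registry service you actually run.
docker tag myimage:1.0 registry.container-registry.svc:5000/myimage:1.0
docker push registry.container-registry.svc:5000/myimage:1.0
```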

Pulling images from private repository in kubernetes without using imagePullSecrets

I am new to Kubernetes deployments, so I wanted to know: is it possible to pull images from a private repo without using imagePullSecrets in the deployment YAML files, or is it mandatory to create a docker-registry secret and pass it in imagePullSecrets?
I also looked at adding imagePullSecrets to a service account, but that is not the requirement. I would love to know whether, if I set up the credentials in variables, Kubernetes can use them to pull those images.
I also wanted to know how this can be achieved; a reference to a document would help.
Thanks in advance.
As long as you're using Docker on your Kubernetes nodes (please note that Docker support itself has recently been deprecated in Kubernetes), you can authenticate the Docker engine on the nodes themselves against your private registry.
Essentially, this boils down to running docker login on your machine and then copying the resulting credentials JSON file directly onto your nodes. This, of course, only works if you have direct control over your node configuration.
See the documentation for more information:
If you run Docker on your nodes, you can configure the Docker container runtime to authenticate to a private container registry.
This approach is suitable if you can control node configuration.
Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put the same file in the search paths list below, kubelet uses it as the credential provider when pulling images.
{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.json
${HOME}/.docker/config.json
/.docker/config.json
{--root-dir:-/var/lib/kubelet}/.dockercfg
{cwd of kubelet}/.dockercfg
${HOME}/.dockercfg
/.dockercfg
Note: You may have to set HOME=/root explicitly in the environment of the kubelet process.
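For illustration, the credential file is plain JSON keyed by registry host, with an auth field holding base64("user:password"). A minimal sketch of building one by hand, using made-up demo credentials and a placeholder registry host:

```shell
# Build a minimal Docker credential file by hand (demo values only).
user="demo"; pass="s3cret"; registry="registry.example.com"

# The auth field is base64 of "user:password" with no trailing newline.
auth=$(printf '%s:%s' "$user" "$pass" | base64)

cat > config.json <<EOF
{
  "auths": {
    "$registry": { "auth": "$auth" }
  }
}
EOF
cat config.json
```

This is the same format docker login writes, so in practice you would copy the file docker login produced rather than constructing it manually.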
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json on your PC.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes; for example:
if you want the names: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}' )
if you want the IP addresses: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )
Copy your local .docker/config.json to one of the search paths listed above.
for example, to test this out: for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done
Note: For production clusters, use a configuration management tool so that you can apply this setting to all the nodes where you need it.
If the Kubernetes cluster is private, you can deploy your own, private (and free) JFrog Container Registry using its Helm Chart in the same cluster.
Once it's running, you should allow anonymous access to the registry to avoid the need for a login in order to pull images.
If you prevent external access, you can still access the internal k8s service created and use it as your "private registry".
Read through the documentation and see the various options.
Another benefit is that JCR (JFrog Container Registry) is also a Helm repository and a generic file repository, so it can be used for more than just Docker images.
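Installing it could look roughly like the following; the release and namespace names are placeholders, and the chart name should be verified against JFrog's current Helm repository docs:

```shell
# Add JFrog's chart repository and install the Container Registry chart.
# Release/namespace names are placeholders; verify the chart name in the
# repository before relying on it.
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm upgrade --install jcr jfrog/artifactory-jcr \
  --namespace artifactory-jcr --create-namespace
```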

How to use a customised Ubuntu image for nodes when creating a GKE cluster?

I have GKE, and I need to use a customised Ubuntu image for the GKE nodes. I am planning to enable autoscaling, so I need to install TLS certificates on each node so that it trusts my private Docker registry. This is possible manually for existing nodes, but when I enable autoscaling, the cluster will spin up new nodes, and image pulls on them will fail because Docker cannot trust the private Docker registry hosted on my premises.
I have created a customised Ubuntu image and uploaded it as an image in GCP. I was trying to create a GKE cluster with the nodes' OS image set to the image I created.
Do you know how to create a GKE cluster with a customised Ubuntu image? Has anyone dealt with a situation like this?
Node pools in GKE are based on GCE instance templates and can't be modified. That means you aren't allowed to set metadata such as startup scripts or base the nodes on custom images.
However, an alternative approach might be deploying a privileged DaemonSet that manipulates the underlying OS settings and resources.
It is important to mention that granting privileges to resources in Kubernetes must be done carefully.
You can add a custom node pool where the image is Ubuntu, making sure to set the special GCE instance metadata key startup-script, and put your customisation in it.
My advice, though, is to point at the URL of a shell script stored in a bucket of the same project; GCE will download it every time a new node is created and execute it on startup as root.
https://cloud.google.com/compute/docs/startupscript#cloud-storage
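A sketch of that approach with the gcloud CLI; the cluster, pool, bucket, and script names are placeholders:

```shell
# Create an Ubuntu node pool whose nodes fetch and run a startup script
# from Cloud Storage on boot. All names and the gs:// URL are placeholders.
gcloud container node-pools create ubuntu-pool \
  --cluster my-cluster \
  --image-type UBUNTU_CONTAINERD \
  --metadata startup-script-url=gs://my-bucket/install-registry-cert.sh
```

Every node the autoscaler adds to this pool then runs the script (e.g. installing the registry's TLS certificate) before workloads land on it.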

environment variables in Docker images in Kubernetes Cluster

I'm working on some GCP apps which are dockerized in a Kubernetes cluster in GCP (I'm new to Docker and Kubernetes). In order to access some of the GCP services, the environment variable GOOGLE_APPLICATION_CREDENTIALS needs to point to a credentials file.
Should the environment variable be set and that file included in:
- each of the Docker images?
- the Kubernetes cluster?
GCP-specific stuff:
This is the actual error: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.
Should the environment variable be set and that file included in:
- each of the Compute Engine instances?
- the main GCP console?
And, most importantly, HOW?
:)
You'll need to create a service account (IAM & Admin > Service Accounts), generate a key for it in JSON format, and then give it the needed permissions (IAM & Admin > IAM). If your containers need access to this, it's best practice to add it as a secret in Kubernetes and mount it in your containers. Then set the environment variable to the path where the secret is mounted:
export GOOGLE_APPLICATION_CREDENTIALS="[PATH_TO_SECRET]"
This page should get you going: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_4_import_credentials_as_a_secret
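Put together, the flow could look roughly like this; the service-account email, secret name, and mount path are placeholders:

```shell
# 1. Create a JSON key for the service account (email is a placeholder).
gcloud iam service-accounts keys create key.json \
  --iam-account my-app@my-project.iam.gserviceaccount.com

# 2. Store the key as a Kubernetes secret.
kubectl create secret generic google-app-creds --from-file=key.json

# 3. In the pod spec, mount the secret (e.g. at /var/secrets/google)
#    and point the variable at the mounted file:
#    GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/key.json
```

Setting the variable per pod means the credentials file does not need to be baked into each Docker image.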
