I have created an EKS cluster and you have full permission, but still you are not able to access it. I have shared the kubeconfig file with you as well - devops

Let's suppose I have created an EKS cluster and you have full permission, but you are still not able to access it. I have also shared the kubeconfig file with you. What is the problem?

Related

Giving all containers in a Kubernetes service access to the same shared file system

New to Docker/K8s. I need to mount a shared file system into all the containers (across all pods) in my K8s cluster, so that they can all read from and write to files on it. The file system needs to be something residing inside of -- or at the very least accessible to -- all containers in the K8s cluster.
As far as I can tell, I have two options:
- I'm guessing K8s offers some type of persistent, durable block/volume storage facility? Maybe a PV or PVC?
- Maybe launch a Dockerized Samba container and give my other containers access to it somehow?
Does K8s offer this type of shared file system capability or do I need to do something like a Dockerized Samba?
NFS is a common solution for providing file-sharing facilities. Here's a good explanation with an example to begin with. Samba can be used if your file server is Windows-based.
You are right: you can use a file system backend with the access mode ReadWriteMany.
ReadWriteMany allows multiple pods to mount a single PVC and write to it.
You can also use NFS as suggested by gohm'c; alternatively, you can run a shared-storage backend such as GlusterFS or MinIO in containers.
Read more about the ReadWriteMany access mode: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
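For illustration, here is a minimal sketch of an NFS-backed PersistentVolume plus a ReadWriteMany PersistentVolumeClaim; the names, server address, export path, and sizes are placeholders, not values from the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv          # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10          # placeholder NFS server address
    path: /exports/shared      # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the manually created PV above
  resources:
    requests:
      storage: 10Gi

Every pod that mounts shared-nfs-pvc as a volume sees the same files, so all containers can read from and write to the shared data.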

GitHub Codespace secrets when using VSCode devcontainers locally

I am using a GCP service account file as a GitHub Codespaces secret, and I am able to access it from the Codespace container, as explained here.
Now, I want to also support developing locally without GitHub Codespaces but still use VSCode devcontainers.
I also hold the service account file on my local filesystem, but outside of the git repo (for obvious reasons). How should I reference it?
You can use the mounts property in devcontainer.json. Codespaces ignores bind mounts (more info can be found in the documentation), so you should be able to mount the file from your local filesystem without affecting how your Codespaces are built/run.
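As a minimal sketch, assuming the key file lives at ~/secrets/gcp-sa.json on the host (the paths and the environment variable wiring are placeholders, not anything the question prescribes):

{
  // Bind-mount the local key file; Codespaces ignores bind mounts, so this only applies locally
  "mounts": [
    "source=${localEnv:HOME}/secrets/gcp-sa.json,target=/secrets/gcp-sa.json,type=bind"
  ],
  // Point the SDK at the mounted file inside the container
  "containerEnv": {
    "GOOGLE_APPLICATION_CREDENTIALS": "/secrets/gcp-sa.json"
  }
}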
Update
I have released an extension on the marketplace to solve this use case: https://marketplace.visualstudio.com/items?itemName=pomdtr.secrets
It stores the secrets in the user keychain. Since it is a web extension, it runs on the client and also works with devcontainers.
Previous Answer
You can use the terminal.integrated.env.linux setting to pass the secret in your settings.json file.
You can disable settings sync using the settingsSync.ignoredSettings array:
{
  "terminal.integrated.env.linux": {
    "GITHUB_TOKEN": "<your-token>"
  },
  "settingsSync.ignoredSettings": [
    "terminal.integrated.env.linux"
  ]
}

environment variables in Docker images in Kubernetes Cluster

I'm working on some GCP apps which are dockerized in a Kubernetes cluster in GCP (I'm new to Docker and Kubernetes). In order to access some of the GCP services, the environment variable GOOGLE_APPLICATION_CREDENTIALS needs to point to a credentials file.
Should the environment variable be set and that file included in:
- each of the Docker images?
- the Kubernetes cluster?
GCP specific stuff
This is the actual error: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.
Should the environment variable be set and that file included in:
- each of the Compute Engine instances?
- the main GCP console?
And, most importantly, HOW?
:)
You'll need to create a service account (IAM & Admin > Service Accounts), generate a key for it in JSON format, and then give it the needed permissions (IAM & Admin > IAM). If your containers need access to this, it's best practice to add the key as a Secret in Kubernetes and mount it in your containers. Then set the environment variable to point to the secret which you've mounted:
export GOOGLE_APPLICATION_CREDENTIALS="[PATH_TO_SECRET]"
This page should get you going: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_4_import_credentials_as_a_secret
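As a rough sketch of that approach (the secret, image, and path names are placeholders, not taken from the linked tutorial), first create the secret from the downloaded key file:

kubectl create secret generic gcp-sa-key --from-file=key.json=key.json

Then mount it and point GOOGLE_APPLICATION_CREDENTIALS at the mounted path:

apiVersion: v1
kind: Pod
metadata:
  name: my-app                                # placeholder name
spec:
  containers:
    - name: app
      image: gcr.io/my-project/my-app:latest  # placeholder image
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json # path where the secret is mounted below
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
          readOnly: true
  volumes:
    - name: google-cloud-key
      secret:
        secretName: gcp-sa-key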

Create environment variables for Kubernetes main container in Kubernetes Init container

I use a Kubernetes init container to provision the application's database. After this is done, I want to provide the DB's credentials to the main container via environment variables.
How can this be achieved?
I don't want to create a Kubernetes Secret inside the Init container, since I don't want to save the credentials there!
I see several ways to achieve what you want:
From my perspective, the best way is to use a Kubernetes Secret. #Nebril has already provided that idea in the comments. You could generate it in the init container and remove it with a PreStop hook, for example. But you don't want to go that way.
You can use a shared volume which is used by both the init container and your main pod (see the sketch below). The init container generates an environment variables file db_cred.env in the volume, which you can mount, for example, at the /env path. After that, you can load it by modifying the command of your container in the Pod spec and adding the command source /env/db_cred.env before the main script which starts your application. #user2612030 already gave you that idea.
Another alternative is Vault by HashiCorp; you can use it as storage for all your credentials.
You can also use a custom solution to write and read directly to etcd from Kubernetes apps. Here is a library example - k8s-kv.
But anyway, the best and the most proper way to store credentials in Kubernetes is Secrets. It is more secure and easier than almost any other way.
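A minimal sketch of the shared-volume approach mentioned above (the images, paths, and the provisioning command are placeholders, not a complete implementation):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-db-credentials    # hypothetical name
spec:
  volumes:
    - name: env-volume
      emptyDir: {}                 # shared scratch space, lives only as long as the pod
  initContainers:
    - name: provision-db
      image: busybox:1.36          # placeholder; a real init container would run the DB provisioning
      command: ["sh", "-c"]
      args:
        - echo "export DB_USER=appuser DB_PASSWORD=example-password" > /env/db_cred.env   # placeholder credentials
      volumeMounts:
        - name: env-volume
          mountPath: /env
  containers:
    - name: app
      image: busybox:1.36          # placeholder for the application image
      command: ["sh", "-c"]
      args:
        - . /env/db_cred.env && env && sleep 3600   # load the credentials, then start the real application instead of env/sleep
      volumeMounts:
        - name: env-volume
          mountPath: /env

The credentials never leave the pod in this sketch: they exist only in the emptyDir volume, which is removed when the pod is deleted.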

Kubernetes - File missing inside pods after reboot

In Kubernetes I create a deployment with 3 replicas and it creates 3 pods.
After pod creation I create a property file which has all the key/value pairs required by my application (on all 3 pods).
If I reboot the machine, the property file inside the pods is missing, so I am creating it manually every time the machine reboots.
Is there any way to save the property file inside the pod?
What you do depends on what the file is for and where it needs to be located.
If a config file, you may want to use a config map and mount the config map into the container.
If it needs to be long lived storage for data, create a persistent volume claim and then mount the volume into the container.
Something like this is necessary because the container file system is otherwise ephemeral, and anything written to it will be lost when the container is shut down.
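As a sketch of the config-map route (the file name, keys, and mount path are made up for illustration, and a single Pod stands in for the 3-replica deployment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties             # hypothetical name
data:
  application.properties: |
    db.host=db.example.internal
    feature.enabled=true
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36          # placeholder for the application image
      command: ["sh", "-c", "cat /config/application.properties && sleep 3600"]
      volumeMounts:
        - name: config
          mountPath: /config       # the property file appears at /config/application.properties
  volumes:
    - name: config
      configMap:
        name: app-properties

If the file holds data rather than configuration, the same volumeMounts pattern applies, with a persistentVolumeClaim volume in place of the configMap.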
