Uploading Kubernetes YAML files from VS Code using Pulumi - docker

I am working on a deployment and have drafted a YAML manifest from a reference docker-compose file. Now that it's ready in VS Code, I need to get it onto my cluster on Google Cloud Platform. How do I do that? What is the process for deploying those files to Google Cloud Kubernetes using Pulumi?
Many thanks.

If I'm understanding your question correctly, you have some Kubernetes resources that you want to deploy into your Google Kubernetes Engine Cluster?
If so, you can use Pulumi's Kubernetes provider to deploy Kubernetes YAML files to your cluster.
Here's an example in Python:
import pulumi
import pulumi_kubernetes as k8s
# Create resources from standard Kubernetes guestbook YAML example.
guestbook = k8s.yaml.ConfigFile('guestbook', 'guestbook-all-in-one.yaml')
# Export the private cluster IP address of the frontend.
frontend = guestbook.get_resource('v1/Service', 'frontend')
pulumi.export('private_ip', frontend.spec['cluster_ip'])
The example above assumes you have a KUBECONFIG environment variable set in your terminal with appropriate credentials and access to your GKE cluster.
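To run the program, you'd use the usual Pulumi CLI workflow from the directory containing it, for example:
# Create a stack to hold the deployment's state, then preview and deploy.
pulumi stack init dev
pulumi up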
You can see more details and examples (in other languages too) at https://www.pulumi.com/docs/guides/adopting/from_kubernetes/.

Related

Kubernetes Access Windows Environment Variable

How can I access or read a Windows environment variable in Kubernetes? I achieved this in a docker-compose file.
How can I do the same in Kubernetes, since I am unable to read the Windows environment variables?
Nothing in the standard Kubernetes ecosystem can be configured using host environment variables.
If you're using the core kubectl tool, the YAML files you'd feed into kubectl apply are self-contained manifests; they cannot depend on host files or environment variables. This can be wrapped in a second tool, Kustomize, which can apply some modifications, but that explicitly does not support host environment variables. Helm lets you build Kubernetes manifests using a templating language, but that also specifically does not use host environment variables.
You'd need to somehow inject the environment variable value into one of these deployment systems. With all three of these tools, you could include those in a file (a Kubernetes YAML manifest, a Kustomize overlay, a Helm values file) that could be checked into source control; you may also be able to retrieve these values from some sort of external storage. But just relaying host environment variables into a container isn't an option in Kubernetes.
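For example, one common workaround is to pre-process a manifest template with envsubst (a separate tool, not part of kubectl) so the host environment variable is baked into the manifest before it reaches the cluster; the file name and variable here are hypothetical:
# deployment-template.yaml contains placeholders such as ${APP_VERSION};
# substitute them from the host environment, then apply the result.
envsubst < deployment-template.yaml | kubectl apply -f -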

How do I deploy a GKE Workload with my Docker image from the Artifact Registry using Terraform?

I have a Kubernetes cluster that I have stood up with Terraform in GCP. Now I want to deploy/run my Docker image on it. From the GCP console I would do this by going to the Workloads section of the Kubernetes Engine portion of the console and selecting "Deploy a containerized application"; however, I want to do this with Terraform, and am having difficulty determining how to do it and finding good reference examples. Any examples of how to do this would be appreciated.
Thank you!
You need to do 2 things:
For managing workloads on Kubernetes, you can use the Kubectl Terraform provider.
For custom images that live in a third-party registry, you'll need to create a Kubernetes secret of the Docker registry type and then reference it in your manifests via the imagePullSecrets attribute. Check out this example.
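As a rough illustration, deploying an image from the Artifact Registry could look like the kubectl_manifest sketch below; it assumes the kubectl provider is already configured against your GKE cluster, and the project, repository, and image names are placeholders:
# Declare a Deployment inline; Terraform will apply it to the cluster.
resource "kubectl_manifest" "app" {
  yaml_body = <<-YAML
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest
          # For a third-party registry, also reference the registry secret:
          # imagePullSecrets:
          #   - name: my-registry-secret
  YAML
}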

How do you migrate Docker Desktop Kubernetes clusters to Google Kubernetes Engine

I'm trying to migrate and host a Kubernetes cluster that I made locally on my machine using Docker Desktop to Google Kubernetes Engine but I'm not sure where to start or how to do it properly.
Any help is appreciated, thanks!
There's no migration in the sense of virtual machines. If you have your deployments / services / etc. defined in a VCS of some sort (GitHub, GitLab, etc.), you could just change the target of kubectl and apply them in bulk using the -f switch to kubectl.
I would recommend creating namespaces first, and then using kubens to swap between namespaces as you do the separate deployments.
If you DON'T have them already stored, you'll want to iterate through your namespaces and issue:
k get <object> --export -o yaml
(Note that --export has been deprecated and removed in recent kubectl releases; a plain kubectl get <object> -o yaml works too, though you may want to strip cluster-specific fields such as status before re-applying.)
This would include (but is not limited to) the object types below; a sketch of the full loop follows the list:
deployments
secrets
configmaps
daemonsets
statefulsets
services
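A minimal sketch of that export loop, assuming kubectl is still pointed at the local Docker Desktop cluster (the namespace names are placeholders):
# Export the common namespaced object types from each namespace
# into per-namespace, per-type YAML files.
for ns in default my-app; do
  for obj in deployments secrets configmaps daemonsets statefulsets services; do
    kubectl get "$obj" -n "$ns" -o yaml > "${ns}-${obj}.yaml"
  done
done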
Once you have everything, run through re-applying them on the remote cluster, and if you missed something, just export it and reapply it remotely.
This does NOT include your data layer. If you're running databases and the like in Kubernetes, you'll need to use tools native to your data platform to export that data, and then re-import it on the other side.

environment variables in Docker images in Kubernetes Cluster

I'm working on some GCP apps which are dockerized in a Kubernetes cluster in GCP (I'm new to Docker and Kubernetes). In order to access some of the GCP services, the environment variable GOOGLE_APPLICATION_CREDENTIALS needs to point to a credentials file.
Should the environment variable be set and that file included in:
- each of the Docker images?
- the Kubernetes cluster?
GCP specific stuff
This is the actual error: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.
Should the environment variable be set and that file included in:
- each of the Compute Engine instances?
- the main GCP console?
And, most importantly, HOW?
:)
You'll need to create a service account (IAM & Admin > Service Accounts), generate a key for it in JSON format, and then grant it the needed permissions (IAM & Admin > IAM). If your containers need access to GCP services, it's best practice to add the key file as a secret in Kubernetes and mount it into your containers. Then set the environment variable to point to the mounted secret:
export GOOGLE_APPLICATION_CREDENTIALS="[PATH_TO_SECRET]"
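A minimal sketch of that wiring, with illustrative secret, file, and path names. First create the secret from the downloaded key:
kubectl create secret generic google-credentials \
  --from-file=key.json=/path/to/service-account-key.json
Then mount it and point the environment variable at the mounted file:
# Fragment of a Deployment/Pod spec
containers:
  - name: app
    image: gcr.io/my-project/my-app:latest
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /var/secrets/google/key.json
    volumeMounts:
      - name: google-credentials
        mountPath: /var/secrets/google
        readOnly: true
volumes:
  - name: google-credentials
    secret:
      secretName: google-credentials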
This page should get you going: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_4_import_credentials_as_a_secret

Docker stack to Kubernetes

I am quite familiar with Docker, but I have zero experience on Kubernetes.
I have a Docker stack (multi-container) software that I can deploy in a Docker Swarm cluster. I was wondering if Kubernetes has something similar? I don't need replicas, auto-scaling, and so on... I just need a group of containers working together, with its dependencies and networks defined in a single text file.
I have searched and found a tool called kompose that translates the Docker stack file to Kubernetes syntax... However, it looks like the output is a list of *.yaml files, instead of a single file.
So I came to the conclusion that Kubernetes does not have this exact functionality. Am I missing something?
You can copy the content of the generated files into one file and separate them with ---.
For instance, if you've got 3 Kubernetes files: service.yml, deployment.yml and configmap.yml, your file should look something like:
# content of service.yml
....
---
# content of deployment.yml
....
---
# content of configmap.yml
....
You would use the same kubectl commands to CRUD using this spec file.
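For instance, a small shell loop can stitch the kompose output together before applying it (file names match the example above):
# Join the generated manifests with YAML document separators,
# then apply the combined file in one step.
for f in service.yml deployment.yml configmap.yml; do
  cat "$f"
  echo '---'
done > stack.yaml
kubectl apply -f stack.yaml
Alternatively, kubectl apply -f also accepts a directory, so you can point it at the kompose output folder without combining anything.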
A single Docker stack YAML definition is equivalent to a collection of Kubernetes Deployments and Services. Each service in a Docker stack definition is also available to the others via a default overlay network that Docker creates automatically at deploy time. To simulate this in Kubernetes, you would need to define multiple Deployments/Services within the same file so that they could be created and deleted as a single 'stack'.
