I'm trying to use FlexVolumes for mounting a file server and a key vault, respectively:
Git Repo
and Git Repo
However, mounting either of them causes the pods that need them to get stuck in ContainerCreating, with warning messages about being unable to mount the volumes due to a timeout. There is a step in the configuration for non-AKS clusters that requires adjusting kubelet configs, which seems to be impossible when using a Docker-provided Kubernetes server.
Is it possible to install a FlexVolume driver on the Docker-provided Kubernetes server, as outlined here: config kubelet service to enable FlexVolume driver, and if so, how do I access the config files? And if not, is it possible at all to mount FlexVolume volumes when working locally with docker-desktop Kubernetes?
I've deployed the same configuration to the AKS cluster and it's working correctly.
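For context, the adjustment the non-AKS instructions describe boils down to pointing the kubelet at a FlexVolume plugin directory; a rough sketch of what that looks like on a plain cluster is below (the path shown is the upstream default, and whether or where it exists inside docker-desktop's VM is exactly what I can't tell):
# Sketch of the kubelet setting the non-AKS instructions refer to; the directory
# is the upstream default, not a verified docker-desktop path.
kubelet --volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ ...
# The FlexVolume driver binary then goes under <plugin-dir>/<vendor~driver>/<driver>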
I'm new to Kubernetes and trying to get started. On my dev machine I'm running minikube using the docker driver. On startup I get the following spew:
😄 minikube v1.27.1 on Ubuntu 20.04 (amd64)
🎉 minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
🔎 Verifying Kubernetes components...
▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
▪ Using image k8s.gcr.io/metrics-server/metrics-server:v0.6.1
▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.2.1
▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ Registry addon with docker driver uses port 32795 please use that instead of default port 5000 │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
📘 For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
▪ Using image docker.io/registry:2.8.1
🔎 Verifying ingress addon...
🔎 Verifying registry addon...
🌟 Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard, ingress, registry
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Note that the registry addon attempts to use port 5000 but, since that port is in use on my system, it assigns an available high-range port, in this example 32795. My (oversimplified) understanding is that minikube starts the various services in what are conceptually something like separate miniature containers within minikube, each service is addressable by a separate internal-range IP (10.x.x.x), and the minikube deployment then maps exposed ports for those services to ports that can be addressed from the host machine (so svc/registry 80 is mapped such that minikube 5000 will hit it). But since minikube is itself running in Docker, there's an additional mapping, so the chain goes something like 127.0.0.1:32795 -> minikube:5000 -> svc/registry:80.
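If that mental model is right, the outer hop can be inspected directly, since minikube itself is just a Docker container (these are the standard docker/kubectl commands; the exact output varies):
# Host-side mapping of the minikube container (the 127.0.0.1:32795 hop)
docker port minikube
# Cluster-side view of the registry service behind it
kubectl -n kube-system get svc registry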
The assigned port changes whenever minikube starts. I can build and push docker images to the registry using this port:
$ docker build -t 127.0.0.1:32795/jenkins:2.303.2-lts -f kubernetes-ci-cd/applications/jenkins/Dockerfile kubernetes-ci-cd/applications/jenkins
$ docker push 127.0.0.1:32795/jenkins:2.303.2-lts
I would like to have this bound to a stable port. Changing the configuration (in the container under /etc/kubernetes/addons/registry-svc.yaml) doesn't work since that folder is not persisted and any changes to it just get blown away on startup. I've tried saving a local copy of the file and applying it after startup, but that doesn't seem to work.
$ kubectl apply -f ~/registry-svc.yaml
service/registry configured
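For reference, the kind of override I'm applying is roughly the stock addon Service with the port pinned; something like this sketch (the selector labels are my best guess at what the addon uses, not verified):
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: kube-system
spec:
  type: NodePort
  selector:
    kubernetes.io/minikube-addons: registry   # assumed label from the stock addon
  ports:
    - port: 80
      targetPort: 5000
      nodePort: 30400                         # the stable port I'd like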
Rebinding the port-forward:
kubectl port-forward --namespace kube-system svc/registry 30400:80
Forwarding from 127.0.0.1:30400 -> 5000
Forwarding from [::1]:30400 -> 5000
This changes the port binding for minikube, it seems (at any rate it breaks pushing images to the registry, presumably because the old port is no longer the correct one). But minikube is running in a Docker container, port 30400 wasn't exposed when that container started, and there's no way to expose a port on an already-running container, so attempting to push to the new port gets connection refused.
I can probably come up with some sort of workaround, like persisting the /etc/kubernetes/addons folder, but it doesn't feel like that can really be the right solution: changing configuration away from the defaults must be a common thing, and if it were meant to be done this way, the configuration folder would presumably be persisted by default. What is the "correct" way to control which port services (such as the registry, though this will become an issue with other, non-addon services as soon as I solve this one) are bound to and exposed on when running minikube under Docker?
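The closest thing I've found so far is the docker driver's --ports flag, which appears to publish extra host ports when the minikube container is (re)created; I haven't verified that this is the intended mechanism:
# Untested idea: ask the docker driver to publish an extra host port at creation time,
# then pin the registry Service to that NodePort (30400 here is just the port I want).
minikube start --ports=30400:30400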
I am trying to take a backup of Google Filestore to a GCS bucket. I also want to rsync the contents of the Filestore instance in the primary region to another Filestore instance in a secondary region.
For this, I have created a bash script which works fine in a Compute Engine VM. I have converted it into a Docker container which I'm running as a Kubernetes CronJob inside a GKE cluster.
But when I run the script inside the GKE pod, it gives me the following error:
root@filestore-backup-1594023480-k9wmn:/# mount 10.52.219.10:/vol1 /mnt/filestore-primary
mount.nfs: access denied by server while mounting 10.52.219.10:/vol1
I am able to connect to the filestore from the container:
root@filestore-backup-1594023480-k9wmn:/# telnet 10.52.219.10 111
Trying 10.52.219.10...
Connected to 10.52.219.10.
Escape character is '^]'.
The pod IP ranges are also added to the VPC IP range, and Filestore has been configured to allow full access from the VPC. The same script works fine in a Compute Engine VM.
Why does mounting a Google Filestore share inside a GKE pod not work?
Bash script used for taking the backup of Google Filestore:
#!/bin/bash
# Create the GCloud authentication file if set
touch /root/gcloud.json
echo "$GCP_GCLOUD_AUTH" > /root/gcloud.json
gcloud auth activate-service-account --key-file=/root/gcloud.json
# Back up the primary filestore to GCS
DATE=$(date +"%m-%d-%Y-%T")
mkdir -p "/mnt/$FILESHARE_MOUNT_PRIMARY"
mount "$FILESTORE_IP_PRIMARY:/$FILESHARE_NAME_PRIMARY" "/mnt/$FILESHARE_MOUNT_PRIMARY"
gsutil rsync -r "/mnt/$FILESHARE_MOUNT_PRIMARY/" "gs://$GCP_BUCKET_NAME/$DATE/"
# Rsync the primary filestore to the secondary region
mkdir -p "/mnt/$FILESHARE_MOUNT_SECONDARY"
mount "$FILESTORE_IP_SECONDARY:/$FILESHARE_NAME_SECONDARY" "/mnt/$FILESHARE_MOUNT_SECONDARY"
rsync -avz "/mnt/$FILESHARE_MOUNT_PRIMARY/" "/mnt/$FILESHARE_MOUNT_SECONDARY/"
All the variables are passed as environment variables in the YAML.
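For completeness, this is roughly how they're wired in; a trimmed sketch of the CronJob spec (variable names match the script above, the image, schedule and values are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: filestore-backup
spec:
  schedule: "0 2 * * *"                                          # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: filestore-backup
              image: gcr.io/my-project/filestore-backup:latest   # placeholder image
              env:
                - name: FILESTORE_IP_PRIMARY
                  value: "10.52.219.10"
                - name: FILESHARE_NAME_PRIMARY
                  value: "vol1"
                - name: FILESHARE_MOUNT_PRIMARY
                  value: "filestore-primary"
                - name: GCP_BUCKET_NAME
                  value: "my-backup-bucket"                      # placeholder
                # ...the secondary-region variables are wired the same way
                - name: GCP_GCLOUD_AUTH
                  valueFrom:
                    secretKeyRef:
                      name: gcloud-auth                          # placeholder secret
                      key: gcloud.json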
The reason you can't access it is that GKE consumes Filestore differently from other GCP instances: in order to mount it, you have to create a PersistentVolume and a PersistentVolumeClaim.
If you only need a single, static mount of the Filestore share, you can follow this guide to manually set up the PV and PVC and attach them to your application:
Filestore - Accessing Fileshares from GKE Clusters
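In short, the guide has you declare the Filestore share as an NFS-backed PersistentVolume and bind a claim to it; a minimal sketch (server IP and share name taken from your question, names and sizes are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-primary
spec:
  capacity:
    storage: 1Ti                  # placeholder, match your Filestore capacity
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.52.219.10          # Filestore IP from the question
    path: /vol1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-primary-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # bind to the pre-created PV, not a dynamic class
  volumeName: filestore-primary
  resources:
    requests:
      storage: 1Ti
The CronJob's pod then mounts the claim as an ordinary volume at /mnt/filestore-primary instead of calling mount itself.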
If you want to make it more dynamic and ready for broader use, consider using an NFS Client Provisioner. It creates a StorageClass that can be referred to in your YAMLs; in a nutshell, the StorageClass lets the PVs be provisioned dynamically for each claim. You can follow this guide:
How to deploy GCP Filestore with GKE + Helm
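Once the provisioner is installed, your workloads only declare a claim against the StorageClass it creates; roughly like this (nfs-client is the chart's usual default class name, so treat it as an assumption):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-dynamic-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client    # assumed default class name from the provisioner chart
  resources:
    requests:
      storage: 10Gi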
Additionally, you can also use the Filestore CSI driver to enable GKE workloads to dynamically create and mount Filestore volumes without using Helm. However, the CSI driver is not a supported Google Cloud product, so consider whether it fits your production environment:
Kubernetes - GCP Filestore CSI Driver
Choose your path; if you have any questions, let me know in the comments.
I'm using Kubernetes on Google Cloud Platform. I installed the Kafka image in a pod, but when I try to get the producer and consumer communicating with kafkacat, nothing works.
I want to find the Kafka directory in the pod.
The containers running inside a pod are actually run by the Docker daemon of the host machine (assuming Docker is the container runtime chosen for this Kubernetes deployment).
So in the case of GCP, the host machine is the worker VM where the pod was scheduled by Kubernetes.
You can find out which worker VM by looking at the node shown when you run:
kubectl get pod <pod-name> -o wide
Hence the image will be stored in the file system of the host machine. The exact path depends on the OS distribution of the host machine.
This is discussed here: Where are Docker images stored on the host machine?
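That said, if the goal is simply to locate the Kafka directory as the running container sees it, you don't need to go to the host at all; exec into the pod and search (the pod name is a placeholder):
kubectl exec -it <kafka-pod-name> -- sh -c "find / -maxdepth 4 -type d -name 'kafka*' 2>/dev/null"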
With the Kubernetes orchestrator now available in the stable version of Docker Desktop for Win/Mac, I've been playing around with running an existing compose stack on Kubernetes locally.
This works fine, e.g., docker stack deploy -c .\docker-compose.yml myapp.
Now I want to go to the next step of running this same application in a production environment using the likes of Amazon EKS or Azure AKS. These services expect proper Kubernetes YAML files.
My question is: what's the best way to get these files? More specifically:
Presumably, docker stack performs some conversion from Compose YAML to Kubernetes YAML 'under the hood'. Are there documentation/source-code links explaining what is going on here, and can that converted YAML be exported?
Or should I just be using Kompose?
It seems that running the above docker stack deploy command against a remote context (e.g., AKS/EKS) is not possible, and that one must deploy with kubectl instead. Can anyone confirm?
docker stack deploy with a Compose file against Kubernetes only works on Docker's own Kubernetes distributions - Docker Desktop and Docker Enterprise.
With the recent federation announcement, you'll be able to manage AKS and EKS with Docker Enterprise, but using them directly means you'll have to use Kubernetes manifest files and kubectl.
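As for the Kompose route raised in the question: the conversion itself is a one-liner that emits plain Kubernetes manifests, which you can then apply to AKS/EKS with kubectl (the output directory name is arbitrary):
kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/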
I started a Kubernetes master and minion on my local machine using Vagrant. I can create a JSON file for my Kubernetes pod in which I can start several public containers.
However, one Docker container is a local one, built on top of java:8-jdk and configured with a Dockerfile.
How can I reference this local Docker container in the Kubernetes pod JSON so Kubernetes can run it?
In other words, does Kubernetes support docker build ;)
After you build the Docker image, you can "side-load" it into the node's locally available images by running docker load -i /path/to/image.tar. Once you've done this, Kubernetes will be able to use the image without reaching out to an external hub.
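A rough sketch of the full round trip, assuming you can copy files onto the minion (image name, tag and paths are placeholders; /vagrant is just the default Vagrant shared folder):
# On the machine where you ran docker build:
docker save -o myapp.tar myapp:latest
# Copy myapp.tar to the minion (e.g. via the Vagrant shared folder), then on the minion:
docker load -i /vagrant/myapp.tar
In the pod definition, reference the image by that same tag and set imagePullPolicy to IfNotPresent (or Never) so the kubelet uses the side-loaded copy rather than trying to pull it from a registry.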