How to use an Azure Disk in an AKS environment

I am trying to set up AKS and have used an Azure Disk to mount the source code of the application. When I run the kubectl describe pods command the disk shows as mounted, but I don't know how I can copy the code onto it.
I got recommendations to use the kubectl cp command, but my pod name changes on every deployment, so please let me know what I should do.

You'd need to copy the files to the disk directly (not to the pod); you can use your pod or a worker node to do that. You can use kubectl cp to copy files to the pod and then move them onto the mounted disk as you normally would, or you can SSH to the worker node, copy the files over SSH, and put them on the mounted disk.
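Since the pod name changes on each deployment, you can look it up by label before copying. A minimal sketch, assuming the pods carry the label app=myapp, live in the default namespace, and the disk is mounted at /mnt/azure (all three are assumptions, adjust to your deployment):

# find the current pod name by label selector (label and namespace are assumptions)
POD=$(kubectl get pods -n default -l app=myapp -o jsonpath='{.items[0].metadata.name}')

# copy the local source tree into the pod, onto the mounted disk (mountPath is an assumption)
kubectl cp ./src default/$POD:/mnt/azure/src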

Related

How to update old certificates in a Docker container filesystem

I need to update certificates that are currently in Docker containers running as Kubernetes pods. The three pods containing these certificates are named 'app', 'celery' and 'celery beat'.
When I run
kubectl exec -it app -- sh
and then ls
I can see that the old certificates are there. I have new certificates on my VM filesystem and need to get these into the running pods so the program starts to work again. I tried rebuilding the docker images used to create the running containers (using the existing docker compose file), but that didn't seem to work. I think the filesystem in the containers was initially mounted using docker volumes. That presumably was done locally whereas now the project is on a remote Linux VM. What would be the natural way to get the new certs into the running pods leaving everything else the same?
I can kubectl cp the new certs in; the issue with that is that when the pods get recreated, they revert to the old certificates.
Any help would be much appreciated.
Check the volumes section of your deployment file for a mention of a ConfigMap, Secret, PV, or PVC with a name like "certs" (names like this are commonly used). If it exists and the mention is a Secret or ConfigMap, you just need to update that resource directly. If the mention is a PV or PVC, you'll need to update its contents via the CLI, for example, and I suggest you switch to a Secret.
Command to check your deployment resource: kubectl get deploy <DEPLOY NAME> -o yaml (if you don't use a Deployment, change it to the right resource kind).
Also, you can open a shell in your pod and run df -hT; this will probably show your drives and mount points.
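If the certs turn out to live in a Secret, a minimal sketch of updating it in place with a recent kubectl (the secret name "certs", the deployment name "app", and the file paths are assumptions):

kubectl create secret generic certs \
  --from-file=tls.crt=./new-certs/tls.crt \
  --from-file=tls.key=./new-certs/tls.key \
  --dry-run=client -o yaml | kubectl apply -f -

# restart so the pods pick up the new certs immediately (deployment name is an assumption)
kubectl rollout restart deployment app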
In the worst-case scenario, when the certs were added during the container build, you can solve it as follows (this is not best practice; the best practice is to build a new image):
Edit the container image, remove the certs, push with a new tag (don't overwrite the old one).
Create a secret with the new certs
Mount this secret at the same path and with the same file names (see the sketch after this answer).
Change the image version in the deployment.
You can use kubectl edit deploy <DEPLOY NAME> to edit your resource.
To edit your container image, use docker commit: https://docs.docker.com/engine/reference/commandline/commit/
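A minimal sketch of how the mount could look in the deployment spec, assuming the Secret is named certs, the retagged image is myregistry/app:no-certs, and the application expects the files under /etc/ssl/app (all three names are assumptions):

spec:
  template:
    spec:
      containers:
        - name: app
          image: myregistry/app:no-certs   # retagged image without baked-in certs (assumed name)
          volumeMounts:
            - name: certs
              mountPath: /etc/ssl/app      # same path the application already expects (assumed)
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: certs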

Copying files from a Kubernetes pod container

Is there a recommended way of copying files from a pod periodically? I have a pod with emptyDir storage and no persistent volume, so I wanted to periodically copy some log files from the pod's containers to an NFS share. I can run a cronjob and invoke kubectl cp, but I am wondering if there is a better way of doing this.
I think the better way for your case is to mount the NFS volume in your Pod and write the logs directly to it: https://kubernetes.io/docs/concepts/storage/volumes/#nfs
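A minimal sketch of that, assuming the NFS server is reachable at nfs.example.com, exports /exports/logs, and the container writes its logs to /var/log/app (all assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest          # assumed image name
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # assumed log directory inside the container
  volumes:
    - name: logs
      nfs:
        server: nfs.example.com     # assumed NFS server
        path: /exports/logs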
Run a CronJob as a scheduled job to copy the files to the target location (see the sketch below).
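If you stay with the CronJob approach, a rough sketch could look like the following. It assumes a kubectl-equipped image, a ServiceAccount named log-copier that is allowed to run kubectl cp (which goes through the exec API), a fixed pod name, and the NFS share mounted into the CronJob pod; batch/v1 CronJob needs Kubernetes 1.21+ (older clusters use batch/v1beta1). All names and paths are assumptions:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: copy-logs
spec:
  schedule: "*/30 * * * *"               # every 30 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: log-copier  # assumed SA with pods/exec permission
          restartPolicy: OnFailure
          containers:
            - name: copy
              image: bitnami/kubectl:latest
              command: ["/bin/sh", "-c"]
              args:
                - kubectl cp default/my-app:/var/log/app /nfs/logs/my-app   # assumed pod and paths
              volumeMounts:
                - name: nfs-logs
                  mountPath: /nfs/logs
          volumes:
            - name: nfs-logs
              nfs:
                server: nfs.example.com   # assumed NFS server
                path: /exports/logs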

Transfer a file from a Kubernetes cluster to another EC2 machine

I have a pod in a Kubernetes (k8s) cluster which has a Java application running in a Docker container. This application produces logs. I want to move these log files to another Amazon EC2 machine. Both machines are Linux based. How can this be done? Is it possible to do so using a simple scp command?
For moving logs from pods to your log store, you can use the following options to do it continuously for you, instead of a one-time copy:
filebeat
fluentd
fluentbit
https://github.com/fluent/fluent-bit-kubernetes-logging
https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd
To copy a file out of a pod to your machine, you can use the following command:
kubectl cp <namespace>/<pod>:/path/inside/container /path/on/your/host
You can copy file(s) from a Kubernetes Container by using the kubectl cp command.
Example:
kubectl cp <mypodname>:/var/log/syslog .
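Plain scp cannot reach inside a container, so a common pattern is to first pull the file to a machine that has kubectl access and then scp it to the EC2 host. A short sketch; the namespace, paths, key file, and hostname are assumptions:

kubectl cp <namespace>/<pod>:/var/log/app.log ./app.log
scp -i ~/.ssh/my-key.pem ./app.log ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com:/var/log/imported/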

How do I get a file from a one-off process in Kubernetes?

I have a test process which produces a file as an output.
I want to start this test process during a build, run it to completion, then collect the file that it produces and copy it back into the build context.
What is the correct way to do this in Kubernetes/Helm?
The build process has access to kubectl and helm CLI tools.
I have a requirement not to use kubectl exec, because the cluster settings do not allow it.
Some details:
I was able to configure a one-off process using a Pod.
I set up the process to store the output file in a volume mount, which is mounted to an emptyDir volume.
I cannot figure out how to get the output file.
I tried kubectl cp, but I can't get it to work (no such file or directory).
I can't figure out how to inspect the contents of a stopped container.
I can't figure out how to see what's in the volume.
kubectl logs shows that the test process ran successfully. The file is generated within the container and stored at the expected location.
Quick update:
In my local minikube environment, I was able to set up a persistent volume and copy the output file back to the host file system. I will try it next in Jenkins environment.
Here is the output from kubectl cp on my local (boot2docker) environment:
$ kubectl cp my-pod:/home/node/output . -c mycontainer
error: home/node/output no such file or directory
/home/node/output is the volumeMount path within the container.
I have a requirement not to use kubectl exec, because the cluster settings do not allow it.
Without the kubectl exec command, I can suggest doing it this way:
Run your test as a Job inside the cluster (a sketch follows after this answer).
Use shared volume like NFS or SMB to store your file.
Get files from the shared volume, which you can mount to your build system.
Also, many build systems have artifact storage, and that can be the best option for storing test results.
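A rough sketch of the Job plus shared-volume setup, assuming an NFS-backed PVC named test-output already exists and the test writes its file to /output (both assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: run-tests
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tests
          image: my-test-image:latest   # assumed test image
          volumeMounts:
            - name: output
              mountPath: /output        # the test writes its result file here (assumed)
      volumes:
        - name: output
          persistentVolumeClaim:
            claimName: test-output      # assumed pre-existing NFS-backed PVC

The build machine can then mount the same NFS export and pick up the file once kubectl wait --for=condition=complete job/run-tests returns, with no kubectl exec or kubectl cp involved.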

How to provide a persistent Ubuntu env with k8s

I can provide an Ubuntu container with SSH via Docker, and the user can set up their env.
For example, they apt-get install something and modify their bashrc, vimrc, and so on.
Once I restart this computer, the user still has the same env after the restart finishes.
How can I provide the same service with k8s?
Once I restart the node, it will create another pod on another machine.
But the env is based on the initial image, not the latest env from the user.
The naive way: mount every directory on shared storage (PV + PVC), such as /bin /lib /opt /usr /etc /lib64 /root /var /home and so on (each possible directory may be affected by an installation). What is the best practice, or is there another way to do this?
@Saket is correct.
If a Docker container needs to persist its state (in this case the user changing something inside the container), then that state must be saved somewhere... How would you do this with a VM? Answer: save to disk.
In k8s, storage is represented as a PersistentVolume. Something called a PVC (PersistentVolumeClaim) is used to maintain the relationship between the Pod (your code) and the actual storage volume (whose implementation details are abstracted away from you). Recent versions of k8s support dynamic creation of persistent volumes, so all you have to do is create a unique PVC specific to each user when deploying their container (I assume here you have a "Deployment" and "Service" for each user as well).
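A per-user PVC could be as small as this sketch; the claim name, storage class, and size are assumptions and depend on your cluster's dynamic provisioner:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-alice              # one claim per user (assumed name)
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard    # assumed dynamic-provisioning storage class
  resources:
    requests:
      storage: 10Gi

That claim would then be mounted at the user's home directory (for example /home/alice) in their Deployment, so their changes survive pod rescheduling.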
In conclusion... it is unusual to run SSH within a container. Have you considered giving each user their own k8s environment instead? For example, OpenShift is multi-tenanted. Indeed, Red Hat is integrating OpenShift as a backend for Eclipse Che, thereby running the entire IDE on k8s. See:
https://openshift.io/
I would advise you to use ConfigMaps (https://github.com/kubernetes/kubernetes/blob/master/docs/design/configmap.md). This guide should help with what you are trying to do: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-pod-environment-variables
ConfigMaps also allow you to store scripts, so you could have a .bashrc (or a section of it) stored in a ConfigMap.
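For example, a .bashrc fragment kept in a ConfigMap might look like this sketch (the name and contents are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-bashrc
data:
  .bashrc: |
    alias ll='ls -la'
    export EDITOR=vim

It can then be mounted into the pod via a configMap volume (for example with a volumeMount subPath of .bashrc pointing into the user's home directory), so the shell customisations survive pod recreation.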
