Transfer file from Kubernetes cluster to another EC2 machine - docker

I have a pod in a Kubernetes (k8s) cluster which has a Java application running in a Docker container. This application produces logs. I want to move these log files to another Amazon EC2 machine. Both machines are Linux based. How can this be done? Is it possible to do so using a simple scp command?

For moving logs from pods to your log store, you can use one of the following options to ship logs continuously, instead of doing a one-time copy:
Filebeat
Fluentd
Fluent Bit
https://github.com/fluent/fluent-bit-kubernetes-logging
https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd
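If you go the Fluent Bit route, a minimal DaemonSet sketch that tails the node's log directory might look roughly like this (the logging namespace, the untagged image and the fluent-bit-config ConfigMap name are assumptions; the repository linked above has complete manifests including RBAC and the ConfigMap):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit          # pin a version in practice
        volumeMounts:
        - name: varlog                    # node-level container log files
          mountPath: /var/log
        - name: config                    # Fluent Bit INPUT/OUTPUT configuration
          mountPath: /fluent-bit/etc/
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: config
        configMap:
          name: fluent-bit-config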

For a one-time copy of a file from a pod to another machine (rather than scp), you can use the following command:
kubectl cp <namespace>/<pod>:/path/inside/container /path/on/your/host

You can copy file(s) from a Kubernetes container by using the kubectl cp command.
Example:
kubectl cp <mypodname>:/var/log/syslog .
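To answer the original question (getting the log onto another EC2 machine), one possible two-step flow is to kubectl cp the file to the machine where kubectl runs and then scp it across; the pod name, log path, EC2 user and host below are placeholders:
kubectl cp <namespace>/<mypodname>:/var/log/app.log ./app.log
scp ./app.log ec2-user@<ec2-host-or-ip>:~/app.log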

Related

Automate the deployment of a .war application on a Payara-Full Pod in a Kubernetes cluster

As the title says, I need to automate the deployment of an application running on a Payara-Full Pod.
For now I've manually deployed the .war file by copying it into the Pod (through the kubectl cp command), and then logging into the pod's console through kubectl exec --stdin --tty <pod-name> -- /bin/bash.
Once I'm logged in, I access the Payara console by running the command asadmin and logging in, and then I manually deploy the .war through deploy <filename>.war.
How can I automate this process?
I thought of using a custom Payara image or an InitContainer, but I don't know what the best practice is for this type of deployment.
You can simply copy your .war to the Payara autodeploy directory inside the container (${PAYARA_HOME}/glassfish/domains/[domain you use]/autodeploy) and restart the domain. Your web app will be deployed automatically on domain restart.
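As a sketch (the .war name, pod name, <payara-home> path and domain1 domain are placeholders to adjust for your image, and asadmin may ask for admin credentials depending on the image):
kubectl cp myapp.war <pod-name>:<payara-home>/glassfish/domains/domain1/autodeploy/
kubectl exec <pod-name> -- asadmin restart-domain domain1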

Run executable inside Azure Kubernetes Service Pod

I want to use JMeter with the OS Sampler for load testing. JMeter is deployed on Azure Kubernetes Service (AKS). Can we run an executable inside an AKS pod (the JMeter slave container will execute that exe inside the pod)?
You can run a second container in your pod using the sidecar container approach.
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/#creating-a-pod-that-runs-two-containers
If your OS Sampler needs access to the PID of your main application running in the other container, you will need to turn on shareProcessNamespace:
https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/
This will allow your JMeter executable to see the PIDs of the other processes in the same pod.
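A minimal sketch of such a pod (both image names are placeholders; the JMeter container would run the OS Sampler against the app container's processes):
apiVersion: v1
kind: Pod
metadata:
  name: jmeter-sidecar-demo
spec:
  shareProcessNamespace: true      # containers in this pod can see each other's PIDs
  containers:
  - name: app                      # application under test (placeholder image)
    image: my-app:latest
  - name: jmeter                   # JMeter slave running the OS Sampler (placeholder image)
    image: my-jmeter-slave:latest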
Here's a repo with some master/slave manifest examples for JMeter (note that it's not using the sidecar container pattern):
https://github.com/kubernauts/jmeter-kubernetes
While this is a viable and possibly working solution, if you are only after CPU/memory metrics you could also leverage the Prometheus stack with node-exporter:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
This could remove the need for your JMeter setup if you are not after JMeter-specific metrics.
I found another way: copy the executable and all its binaries into the JMeter slave using the following command.
kubectl cp <source directory> <jmeter-slave-podname>:/<target directory>
Then provide all the needed permissions on the target directory in the JMeter slave pod.
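For example (the directory and pod names are placeholders):
kubectl cp ./mytool <jmeter-slave-podname>:/opt/mytool
kubectl exec <jmeter-slave-podname> -- chmod -R +x /opt/mytool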

How to use azure disk in AKS environment

I am trying to set up AKS, and I have used an Azure disk to mount the application's source code. When I use the kubectl describe pods command it shows the disk as mounted, but I don't know how I can copy the code onto it.
I got some recommendations to use the kubectl cp command, but my pod name changes every time I deploy, so please let me know what I should do.
You'd need to copy the files to the disk directly (not to the pod); you can use your pod or a worker node to do that. You can use kubectl cp to copy files into the pod and then move them to the mounted disk like you normally would, or you can SSH to the worker node, copy the files over SSH to the node, and put them on the mounted disk.
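A rough sketch of the kubectl cp route, assuming the pods carry a label such as app=myapp and the disk is mounted at /mnt/azure (both are assumptions to adjust):
POD=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')   # resolve the current pod name by label, since it changes on every deploy
kubectl cp ./src "$POD":/tmp/src
kubectl exec "$POD" -- cp -r /tmp/src/. /mnt/azure/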

How do I get a file from a one-off process in Kubernetes?

I have a test process which produces a file as an output.
I want to start this test process during a build, run it to completion, then collect the file that it produces and copy it back into the build context.
What is the correct way to do this in Kubernetes/Helm?
The build process has access to kubectl and helm CLI tools.
I have a requirement not to use kubectl exec, because the cluster settings do not allow it.
Some details:
I was able to configure a one-off process using a Pod.
I set up the process to store the output file in a volume mount, which is mounted to an emptyDir volume.
I cannot figure out how to get the output file.
I tried kubectl cp, but I can't get it to work (no such file or directory).
I can't figure out how to inspect the contents of a stopped container.
I can't figure out how to see what's in the volume.
kubectl logs shows that the test process ran successfully. The file is generated within the container and stored at the expected location.
Quick update:
In my local minikube environment, I was able to set up a persistent volume and copy the output file back to the host file system. I will try it next in Jenkins environment.
Here is the output from kubectl cp on my local (boot2docker) environment:
$ kubectl cp my-pod:/home/node/output . -c mycontainer
error: home/node/output no such file or directory
/home/node/output is the volumeMount path within the container.
I have a requirement not to use kubectl exec, because the cluster settings do not allow it.
Without the kubectl exec command, I can suggest doing it this way:
Run your test as a Job inside the cluster.
Use a shared volume, like NFS or SMB, to store your file.
Get the files from the shared volume, which you can mount to your build system (see the sketch below).
Also, many build systems have artifact storage, and that can be the best option for storing test results.
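A rough sketch of the Job part, assuming a hypothetical test image and a pre-created PVC named test-results that is backed by the shared NFS/SMB volume:
apiVersion: batch/v1
kind: Job
metadata:
  name: test-run
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: my-test-image:latest        # placeholder for the image that runs the test process
        volumeMounts:
        - name: results
          mountPath: /home/node/output     # the path the test writes its output file to
      volumes:
      - name: results
        persistentVolumeClaim:
          claimName: test-results          # PVC backed by the shared NFS/SMB volume, created separately
The build system then mounts the same share and picks the file up from there.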

Create a new image from a container’s changes in Google Cloud Container

I have an image to which I need to add a dependency. Therefore I have tried to change the image while it is running in the container and create a new image.
I have followed this article with the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the shell of the container, I installed the dependency using "pip install NAME_OF_Dependency".
Then I exited from the shell of the container and, as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command for Google Kubernetes Engine with kubectl.
How should I do the commit in Google Container Engine?
As of K8s version 1.8, there is no way to make hot-fix changes directly to images, for example committing a new image from a running container. If you still change or add something by using exec, it will only persist while the container is running. It's not best practice in the K8s ecosystem.
The recommended way is to use a Dockerfile and customise the image according to your needs and requirements. After that, you can push that image to a registry (public/private) and deploy it with a K8s manifest file.
Solution to your issue
Create a Dockerfile for your images.
Build the image by using Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the K8s cluster.
Now, if you want to change/modify something, you just need to change/modify the Dockerfile and follow the remaining steps.
As you know, containers are short-lived and do not persist changed behaviour (modified configuration, changes to the file system). Therefore, it's better to introduce new behaviour or modifications in the Dockerfile.
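A minimal sketch of that workflow, reusing the names from the question (the Dockerfile contents, the v2 tag and the deployment/container name my-app are assumptions):
# Dockerfile: bake the dependency into a new image instead of pip-installing in a running container
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_Dependency
Then build, push and roll it out:
docker build -t gcr.io/my-project-id/my-app-image:v2 .
docker push gcr.io/my-project-id/my-app-image:v2    # pushing to gcr.io needs gcloud auth configure-docker (or gcloud docker -- push on older SDKs)
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2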
Kubernetes Mantra
Kubernetes is a cloud-native product, which means it does not matter whether you are using Google Cloud, AWS or Azure; it needs to have consistent behaviour on each cloud provider.
