Run executable inside Azure Kubernetes Service Pod - azure-aks

I want to use JMeter with the OS Sampler for load testing. JMeter is deployed on Azure Kubernetes Service (AKS). Can we run an executable inside an AKS pod (the JMeter slave container would execute that exe inside the pod)?

You can run a second container in your pod using the sidecar container approach:
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/#creating-a-pod-that-runs-two-containers
If your OS Sampler needs access to the PID of your main application running in the other container, you will need to turn on shareProcessNamespace:
https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/
This will allow your JMeter executable to see the PIDs of the other processes in the same pod.
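For illustration, here is a minimal sketch of such a pod applied straight from stdin; the pod name and image names are assumptions, not from the question:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: jmeter-slave
spec:
  shareProcessNamespace: true   # lets the JMeter container see the other container's PIDs
  containers:
  - name: jmeter
    image: justb4/jmeter        # illustrative JMeter image
  - name: app-under-test
    image: my-app:latest        # hypothetical application container
EOF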
Here's a repo with some master/slave manifest examples for JMeter (note that it does not use the sidecar container pattern):
https://github.com/kubernauts/jmeter-kubernetes
While this is viable and possibly a working solution, if you are mainly after CPU/memory metrics you could also leverage the Prometheus stack with the node-exporter:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
This could remove the need for your JMeter setup if you are not looking for JMeter-specific metrics.
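If you go that route, installing the chart is a one-liner; this sketch assumes Helm 2 with the stable repo configured, as in the link above:
helm install stable/prometheus-operator --name prometheus --namespace monitoring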

I found another way: copy the executable and all its binaries into the JMeter slave using the following command.
kubectl cp <source directory> <jmeter-slave-podname>:/<target directory>
Then grant the necessary permissions on the target directory in the JMeter slave pod.
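Putting both steps together; the directory names are placeholders:
kubectl cp ./mytool <jmeter-slave-podname>:/opt/mytool
kubectl exec <jmeter-slave-podname> -- chmod -R +x /opt/mytool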

Related

Jenkins: docker agent with docker container in it

I am about to create a new structure for CI/CD for our Jenkins. My goal is to create an environment for building and compiling apps. The environment has to be the same on the server and on developers' local machines.
I need to come up with a solution that allows developers to build apps on their local machines the same way they are compiled on the Jenkins worker nodes.
I think that using a docker container as one fixed environment is a good way to do this. So I have created a docker container [1] that contains all the necessary tools to build the application. Now developers can build their apps on their local machines the same way Jenkins does. When someone needs to build the app, they just pull the container, mount the source code directory into the container and execute the build command in the container.
Building looks like this: docker run --rm -v "$(pwd)":/app env_cont build
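For context, a minimal sketch of what such an env_cont image could look like; the toolchain and the build wrapper are hypothetical, not from the question:
FROM ubuntu:18.04
# hypothetical toolchain; install whatever the apps actually need
RUN apt-get update && apt-get install -y build-essential git
# small wrapper so that "docker run ... env_cont build" has something to run
COPY build /usr/local/bin/build
WORKDIR /app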
On the server I use a plugin for docker pipelines.
This solution works fine. Building apps is platform independent and can be done on any machine.
Now I have started toying with the idea of using docker for my Jenkins worker nodes as well, i.e. having one (physical) node with an exposed docker API and using it as a docker cloud for spawning worker nodes [2]. I like this approach, but here comes the problem: how to use the docker nodes [2] for running docker containers [1] in them? I guess I can install the docker tooling inside the docker container [2] that is used as a worker node and run the container in it. So the process would look like this:
Job is added into Jenkins queue.
Jenkins connects to worker node's docker API and spawns docker container [2] as a new worker node.
Worker node (which is running as a container) runs another "env_cont" container [1] (with the environment for building) and builds the app inside the "env_cont" container, as sketched below.
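One common way to make step 3 work without full Docker-in-Docker is to mount the host's docker socket into the worker container, so that "env_cont" is actually started as a sibling container on the host. A sketch, with a made-up agent image name:
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-agent:latest
# inside that agent, "docker run ... env_cont build" now talks to the host daemon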
My question is: is this a good practice? I am a little worried that I am over-thinking the problem. What do you think is a good approach?

Transfer file from kubernetes cluster to other ec2 machine

I have a pod in a Kubernetes (k8s) cluster which has a Java application running in a docker container. This application produces logs. I want to move these log files to another Amazon EC2 machine. Both machines are Linux based. How can this be done? Is it possible with a simple scp command?
For moving logs from pods to your log store, instead of a one-time copy you can use one of the following options, which do it for you continuously:
filebeat
fluentd
fluentbit
https://github.com/fluent/fluent-bit-kubernetes-logging
https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd
To copy a file from a pod to your local machine you can use the following command:
kubectl cp <namespace>/<pod>:/path/inside/container /path/on/your/host
You can copy file(s) from a Kubernetes Container by using the kubectl cp command.
Example:
kubectl cp <mypodname>:/var/log/syslog .
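To land the logs on the EC2 machine you can chain the two commands; the pod name, paths and host below are placeholders:
kubectl cp <namespace>/<podname>:/var/log/app.log ./app.log
scp ./app.log ec2-user@<ec2-host>:/home/ec2-user/logs/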

Best practice using docker inside Jenkins?

Hi, I'm learning how to use Jenkins integrated with Docker, and I don't understand what I should do to make them communicate.
I'm running Jenkins inside a Docker container and I want to build an image in a pipeline. So I need to execute some docker commands inside the Jenkins container.
So the question here is where docker comes from. I understand that we need to bind mount the docker host daemon (socket) into the Jenkins container, but this container still needs the binaries to execute Docker.
I have seen some approaches to achieve this and I'm confused about what I should do. I have seen:
bind mounting the docker binary (/usr/local/bin/docker:/usr/bin/docker)
installing docker in the image
using the Blue Ocean image, which, if I'm not wrong, comes with Docker pre-installed (I have not found any documentation on this)
Also I don't understand what Docker plugins for Jenkins can do for me.
Thanks!
Docker has a client-server architecture. The server is the docker daemon and the client is basically the command line interface that allows you to execute docker ... from the command line.
Thus, when running Jenkins inside Docker, you will need access to connect to the daemon. This is achieved by binding /var/run/docker.sock into the container.
At this point you need something to communicate with the daemon, which is the server. You can do that by providing access to the docker binaries; this can be achieved by either mounting the docker binaries or installing the client binaries inside the Jenkins container.
Alternatively, you can communicate with the daemon using the Docker REST API without having the docker client binaries inside the Jenkins container. You can, for instance, build an image using the API.
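For example, starting Jenkins with the socket bound in, plus a raw REST call that needs no docker binaries inside the container; the image tag and API version are illustrative:
docker run -d -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
# talk to the daemon over the socket without any docker CLI:
curl --unix-socket /var/run/docker.sock http://localhost/v1.24/images/json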
Also I don't understand what Docker plugins for Jenkins can do for me
The Docker plugin for Jenkins isn't useful for the use case that you described. This plugin allows you to provision Jenkins slaves using Docker. You can, for instance, run a compilation inside a Docker container that gets automatically provisioned by Jenkins.
Using Docker with Jenkins is neither a best practice nor a bad practice. The relationship between Jenkins and Docker is not such that having Docker is inherently good or bad.
Jenkins is a Continuous Integration Server, which is a fancy way of saying "a service that builds stuff at various times, according to predefined rules"
If your end result is a docker image to be distributed, you have Jenkins call your docker build command, collect the output, and report on the success / failure of the docker build command.
If your end result is not a docker image, you have Jenkins call your non-docker build command, collect the output, and report on the success / failure of the non-docker build.
How you launch the build depends on how you would build the product. Makefiles are launched with make, Apache Ant with ant, Apache Maven with mvn package, docker with docker build, and so on. From Jenkins' perspective, it doesn't matter, provided you supply a complete set of rules to launch the build, collect the output, and report the success or failure.
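In the docker case that rule can be as plain as a shell step like the following; the tag is illustrative:
docker build -t myapp:${BUILD_NUMBER} .
# a non-zero exit code from docker build is what makes Jenkins mark the job as failed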
Now, for the 'Docker plugin for Jenkins': as @yamenk stated, Jenkins uses build slaves to perform the build. That plugin will launch the build slave within a Docker container. The thing built within that container may or may not be a docker image.
Finally, running Jenkins inside a docker container just means you need to bind your Docker-ized Jenkins to the external world, as @yamenk indicates, or you'll have trouble launching builds.
Bind mounting the docker binary into the jenkins image only works if the jenkins image is "close enough": it has to contain the required shared libraries!
So when using a standard jenkins/jenkins:2.150.1 with a binary from an Ubuntu 18.04 host, this unfortunately does not work. (It looked so nice and slim ;)
So the requirement is to build or find a jenkins image which contains a docker client compatible with the host's docker service.
Many people seem to install docker in their jenkins image...
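A sketch of that approach, pulling in a static client release; the pinned version is an assumption and should match your host daemon:
FROM jenkins/jenkins:2.150.1
USER root
# install a static docker CLI binary compatible with the host daemon
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-18.09.1.tgz \
    | tar xz --strip-components=1 -C /usr/local/bin docker/docker
USER jenkins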

Docker container Jenkins - access home path

I have just started setting up Jenkins on docker. I started the docker container and was planning to run the ANT script I have written; this is where the problems started.
Jenkins kept on reporting
ERROR: Unable to find build script at /var/jenkins_home/workspace/SampleSCM/.SampleProject/build.xml
I am not sure how to access /var/jenkins_home on my local host. Can someone please help?
Thanks.
You won't find this location on your laptop, because it is not there. It is inside the docker container.
Normally you would checkout out the sources as part of your build. You do not put them there yourself.
If you want to see the files you can use the Jenkins GUI, or ssh into/attach to your container (docker attach) and look in there. The idea behind docker is that it runs isolated, unless you tell it to map volumes.
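For example, if you start the container with the home directory mapped to a host path of your choice, the workspace becomes visible on your machine:
docker run -d -p 8080:8080 \
  -v ~/jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
# /var/jenkins_home/workspace/... now appears under ~/jenkins_home on the host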

Jenkins and Docker

Is there a way to use Jenkins to automate deploying and running containers? I heard we can use the Docker plugins for it. But there aren't any tutorials or info that explain how we can use Jenkins and Docker together. Anyone who uses them both care to share?
First off, in my implementation of things, Jenkins is actually a container in Docker.
Here's where it may seem things get bizarre: I actually install docker-ce inside of that container, not because I want to run Docker-in-Docker; I disable the Docker daemon from running (systemctl), but I want the command line.
I install docker-compose and docker-machine on the Jenkins host and add the "jenkins" userid to the docker group.
There are a bunch of other steps that I do, but basically they are the same steps that a user is going to go through (except it's all in my Dockerfile), and I add the results of "docker-machine env" to the global variables in the Jenkins configuration.
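Roughly, the host-side part of that setup is just the following (a sketch; the machine name "default" is illustrative):
# let the jenkins user talk to the docker daemon
sudo usermod -aG docker jenkins
# print the variables to paste into Jenkins' global configuration
docker-machine env default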
head spinning yet?
Applications I have Jenkins deploying all have a "jenkins" subdirectory with a Jenkinsfile in it to perform the dirty work as a pipeline (build/test/deploy).
Deployments for Java apps, for instance, involve copying the WAR file for the application to the correct directory, and when the container (or containers) start, the application engine (Tomcat, JBoss, whatever) picks it up and the application runs.
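For illustration, such a deployment step can be as small as a docker cp into the application container; the container name and WAR path are placeholders:
docker cp target/myapp.war app-container:/usr/local/tomcat/webapps/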
Have a look at
https://registry.hub.docker.com/search?q=jenkins&searchfield=
and at some Dockerfiles such as
https://registry.hub.docker.com/u/niaquinto/jenkins/dockerfile/
or
https://registry.hub.docker.com/u/aespinosa/jenkins/dockerfile/
