I have a Kubernetes cluster in which I have Jenkins and Spinnaker pods up and running. I need to implement a logging mechanism which collects logs and sends them to a Splunk server. I chose to do this using Fluentd: I have deployed a Fluentd DaemonSet which runs on each node, collects that node's logs, and sends them to the Splunk server.
It is working fine for logs that we see using "kubectl logs", i.e. logs that go to stdout. However, I also need to pick up the logs of a Jenkins job (the console output of a Jenkins build). These logs do not go to stdout on the node; they are stored at /var/jenkins_home/jobs/XXX/builds/<buildno> inside the container's storage, which is not directly accessible to Fluentd for log collection.
I am open to any kind of solution to this problem. Please suggest.
Fluentd does not have a feature for this on Kubernetes, because Kubernetes does not allow any third-party plugin to read data directly from inside a container. Even stdout logs are first processed by Kubernetes and written to files at the node level; only then can a collector see them.
As a workaround, you can follow the link below:
Kubernetes - How to read logs that are written to files in pods instead of stdout/stderr?
Long way:
Add a hostPath directory volume to the Jenkins deployment (a minimal sketch of this follows below the list):
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
Run a separate FluentBit/Fluentd to collect logs from that directory.
Both of them support putting the file path into a dedicated field in the log record or into the tag:
https://docs.fluentbit.io/manual/pipeline/inputs/tail
Use the path/tag to parse out the job name and organize the destination storage accordingly, or just leave it as is and filter in the view.
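For illustration, a minimal sketch of the first step, assuming a plain Jenkins Deployment; the names and the host path are placeholders, and keep in mind that a hostPath volume pins the data (and effectively the pod) to a single node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home   # build logs land under .../jobs/<job>/builds/<n>/log
      volumes:
      - name: jenkins-home
        hostPath:
          path: /var/jenkins-home        # now visible on the node for the tailing collector
          type: DirectoryOrCreate

The separate FluentBit/Fluentd from the second step can then tail /var/jenkins-home/jobs/*/builds/*/log on the node and, per the third step, put the path into a field or the tag.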
Short way:
Use the Jenkins Kubernetes plugin to run each job in a separate pod:
https://plugins.jenkins.io/kubernetes/
Then the job's output will get collected and tagged by the DaemonSet separately; a minimal pipeline sketch follows.
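A hedged sketch of such a pipeline, assuming the plugin is already connected to your cluster; the image and the build command are placeholders:

// Each build of this pipeline gets its own agent pod.
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: maven:3-jdk-11      # placeholder build image
    command: ["sleep"]
    args: ["infinity"]
'''
            defaultContainer 'build'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'   // placeholder build command
            }
        }
    }
}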
Regarding Cloud Native Jenkins log management
Pipeline Log Storage API and reference implementations are available for preview; only Jenkins Pipeline job types are supported. Related JEP: JEP-210. Available plugins: AWS CloudWatch and Elasticsearch.
Jenkins core APIs and reference implementations have not been merged/released yet, but prototypes are available for evaluation. Related JEPs: JEP-207, JEP-212. Available plugins: External Logging API and Elasticsearch.
Reference: https://www.jenkins.io/sigs/cloud-native/pluggable-storage/
My requirement is to trigger CI & CD on an on-prem Kubernetes infrastructure whenever a PR is raised. Jenkins X is an ideal candidate, but unfortunately, due to a few proxy issues, it didn't come to fruition.
Coming to kubernetes-operator, I am looking for a few clarifications.
I have a 4-node cluster, with one node being the leader.
Do I have to set up a new instance of Jenkins beforehand on my K8s cluster, or does kubernetes-operator do that for me?
Looking to access the Jenkins instance under the domain: jenkins.mybusinessunit.myorg.com/jenkins
Do I have to do any additional configuration to enable a master-slave setup?
Does kubernetes-operator provide a feature to support a CI/CD model like Jenkins X?
Thanks in advance.
As per your comments, you are actually interested in more of a cloud-native solution for operating Jenkins, so here goes.
Since you already have a Kubernetes cluster and would like to use the Jenkins Kubernetes operator, I would recommend you use the Jenkins Kubernetes plugin for managing your workloads.
The Jenkins Kubernetes plugin enables you to run each of your pipelines in a separate pod in your Kubernetes cluster, and once the required Service resources are set up, the communication between master and slave pods is completely regulated by the plugin. I would recommend that you look into its documentation, which is quite good (in comparison to other plugins); a minimal example follows.
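For a taste of what the plugin gives you, here is a hedged sketch in the scripted-pipeline form (the image and the command are placeholders; POD_LABEL is provided by the plugin):

// Each run allocates a fresh agent pod from this template.
podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-11', command: 'sleep', args: 'infinity')
]) {
    node(POD_LABEL) {
        container('maven') {
            sh 'mvn -B verify'
        }
    }
}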
Now, since you are also using the Jenkins Kubernetes operator, you should know that the plugin is installed as one of the default plugins and is available as soon as your Jenkins instance is spun up. I would recommend you read through the Jenkins Kubernetes operator documentation to get a better grasp of what happens while it is running.
So now I will move onto your questions.
Do I have to set up a new instance of Jenkins beforehand on my K8s cluster, or does kubernetes-operator do that for me?
If you install the Jenkins Kubernetes operator via its Helm chart, then no: a Jenkins master instance is included. Otherwise, if you install the controller into your cluster manually, you will need to create a Jenkins custom resource, from which the operator will create a Jenkins instance for you.
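If I remember the operator's documentation correctly (please verify against it, as the chart repo URL and values here come from memory), the Helm route looks roughly like this; the release name is arbitrary:

# Add the chart repository given in the kubernetes-operator docs.
helm repo add jenkins https://raw.githubusercontent.com/jenkinsci/kubernetes-operator/master/chart
# Install the operator; with jenkins.enabled=true it deploys a Jenkins master as well.
helm install jenkins-operator jenkins/jenkins-operator --set jenkins.enabled=true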
Looking to access the Jenkins instance under the domain: jenkins.mybusinessunit.myorg.com/jenkins
Use Ingress + load balancer + DNS, or expose the pod via NodePort. Note that exposing your master pod via NodePort may require you to make your Jenkins master instance publicly available (and that may not be wise). A sketch of the Ingress option follows.
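A hedged sketch of the Ingress option, assuming an NGINX ingress controller and the HTTP Service that the operator generates for the master (its name is derived from your Jenkins custom resource, so check kubectl get svc; the one below is a placeholder). Note that Jenkins itself must also be configured to serve under the /jenkins prefix (the --prefix option) for the path to work:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
spec:
  ingressClassName: nginx                        # assumes an NGINX ingress controller
  rules:
  - host: jenkins.mybusinessunit.myorg.com
    http:
      paths:
      - path: /jenkins
        pathType: Prefix
        backend:
          service:
            name: jenkins-operator-http-example  # placeholder; use your actual Service name
            port:
              number: 8080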
Do I have to do any additional configuration to enable a master-slave setup?
Please refer to the documentation of the Jenkins Kubernetes plugin and the Jenkins Kubernetes operator. All details are provided there, but the required configuration is rather minimal.
Does kubernetes-operator provide a feature to support a CI/CD model like Jenkins X?
No. The Jenkins Kubernetes operator is there only to manage your Jenkins instance and its backups in an immutable fashion. Jenkins X can be used in combination with Jenkins, but neither replaces the other completely.
I have inherited a Jenkins installation which is used by multiple remote teams and runs on an Amazon EKS cluster. For reasons that are not really relevant here, I need to move this Jenkins workload to a new EKS cluster.
Deploying Jenkins itself is not a major issue; I am doing so using Helm. The persistence of the existing Jenkins deployment is bound to an Amazon EBS volume, and the new deployment's will be as well. The mount point will be /var/jenkins_home.
I'm trying to find a simple way of migrating everything from the current jenkins installation and configuration to the new one. This includes mainly:
Authorization Strategy (RBAC)
Jobs
Plugins
Cloud and Agent Config
I know that everything required is most likely in Jenkins home. Could I in theory just dump out the current Jenkins home folder and import it into the new running Jenkins container using kubectl cp or something like that (see the sketch below)? Is there an easier way? Is this unsafe in some way?
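Something like this is what I have in mind; the pod names, contexts and file name are placeholders, and I would stop all builds first:

# Stream a tarball of the old Jenkins home out of the old cluster...
kubectl --context old-cluster exec jenkins-0 -- tar czf - -C /var jenkins_home > jenkins-home.tgz
# ...then unpack it into the new deployment and restart the pod.
kubectl --context new-cluster exec -i jenkins-0 -- tar xzf - -C /var < jenkins-home.tgz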
I am trying to start a VM that already exists in Google Cloud from my Jenkins, to use it as a slave. The reason is that if I start from the template of this VM, I need to do a few things before I can use it with my Jenkins code.
Does anyone know how to start VMs that already exist in my VM pool in Google Cloud via Jenkins?
There are two approaches to this, depending on the operations you need to run on your machine beforehand that prevent you from just recreating it.
The first, and possibly the most straightforward given the restriction that the machine already exists, is to talk directly to the GCE API in order to list and start the machine from Jenkins (using a build step).
Basically, you can make requests to the GCE API to perform operations on your instances. I suggest doing this with gcloud from within the Jenkins master node, as it will save you having to write your own client. It is straightforward, as you only have to "install" it on your master, and you can make it work safely using a service account.
Below is the outline of this approach:
Download the Cloud SDK to your master node following these release instructions.
You can do this once outside of Jenkins or directly in the build step; it doesn't matter, as long as Jenkins and its user are able to reach the binary.
Create the service account, generate authentication keys and give it permissions to interact with GCE.
Using a service account is the way to go as you can restrict its permissions to the operations that are relevant for you.
Once you get the service account that will be bound to your gcloud client, you'll need to set it up in Jenkins. You might want to do this in a build step (I'm using Groovy here but it should be easy to translate it to the UI):
stage('Start_machine') {
    steps {
        // Assumes gcloud is already installed on this node; you could also
        // download it from here if that is more convenient.
        // GOOGLE_PROJECT_ID can be a scoped env var in Jenkins or just hard-coded.
        sh "gcloud config set project ${GOOGLE_PROJECT_ID}"
        // This needs a JSON key file at a location accessible by Jenkins,
        // e.g. --key-file /var/lib/jenkins/..key.json
        sh "gcloud auth activate-service-account --key-file ${GOOGLE_SERVICE_ACCOUNT_KEY}"
        // Check the reference on this command: https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
        sh 'gcloud compute instances start my_existing_instance'
        echo 'Instance started'
    }
    post {
        always {
            echo "Result: ${currentBuild.result}"
        }
    }
}
Wrapping up: you basically create a service account that has the permissions to start your instances, download a client that can interact with the GCE API (gcloud), authenticate it, and start the instance, all from within your pipeline.
The second approach would be easier if there were no constraints regarding the preexisting machine.
Jenkins has a plugin for Compute Engine that will automatically spin up new workers whenever needed.
I know that you need to perform some prior operations before Jenkins sends work to these slave machines. However, I want to bring to your attention that this plugin also supports startup scripts.
So there is always the option to preload your operations there before the machine takes off, and by the time it is ready, you might have everything done.
Hope this helps.
I am currently using Filebeat to ship my Jenkins build logs from /var/log/jenkins.
I grok the build logs with Logstash so I can display success/fail etc. in Kibana and build some dashboards; this is working well.
One thing I cannot seem to get is the total build time for the job as a whole.
I am using pipeline and multi-pipeline build job types.
I can see the per-stage build times in the console logs, but no matter the logging level I set globally for Jenkins, these totals do not appear in the logs.
Has anyone managed to get this right?
Thanks
We have been using the Jenkins Logstash plugin (https://wiki.jenkins.io/display/JENKINS/Logstash+Plugin) successfully to stash the data from Jenkins jobs to Elasticsearch.
The indexers supported by this plugin are listed at this link:
https://wiki.jenkins.io/display/JENKINS/Logstash+Plugin#LogstashPlugin-IndexersCurrentlySupported
We are using the Elasticsearch indexer, which stashes the data directly to Elasticsearch, but if you want your data to go via Logstash you can use the Logstash indexer.
The payload format for the data is described here:
https://wiki.jenkins.io/display/JENKINS/Logstash+Plugin#LogstashPlugin-JSONPayloadFormat
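If you also want the total build time to appear in the console log for a Filebeat/Logstash setup like the one in the question, one hedged option (not something this plugin requires) is to print it yourself from a post block in a grok-friendly format:

// Prints one machine-parseable line at the end of every build;
// currentBuild.duration is the elapsed time in milliseconds.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'   // placeholder build step
            }
        }
    }
    post {
        always {
            echo "BUILD_METRICS job=${env.JOB_NAME} build=${env.BUILD_NUMBER} result=${currentBuild.currentResult} duration_ms=${currentBuild.duration}"
        }
    }
}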
I have a Jenkins master defined as a stack on cloud.docker.com. I have also set up a couple of other stacks that contain the services I need to test against during my build process (some components use Mongo, some use RabbitMQ, etc.).
Docker Cloud (man, I wish they had picked a more unique name!) has a REST API to start stacks, and I've even written a script that will redeploy a stack based on its UUID, but I can't figure out how to get the Jenkins master to start the stack or how to execute my script.
The Jenkins slave setup plugin doesn't document how to attach the "setup" to a node, and none of the other plugins I looked at appeared to support either Docker Cloud or calling arbitrary REST APIs on slave startup.
I've also tried just using the Docker daemon to launch containers directly, but Docker Cloud appears to remove images not associated with stacks or services on its managed nodes, and then the Jenkins Docker plugin complains that it can't find the slave image.
Everything is latest-and-greatest version-wise. The node itself is running on AWS and otherwise appears to function well.