How to make custom reports in Jenkins?

In Jenkins, I want to get information such as: how many times builds failed in a given period; which tests failed multiple times in successive builds; whether each of these failing tests failed for the same reason or a different reason each time; whether a test is failing in multiple environments or only in some; and so on.
How do I get such information from Jenkins?

Your question is a bit vague, so I will describe the solution I used for this problem: Jenkins's InfluxDB plugin, with InfluxDB as the database and Grafana as the dashboard tool.
Setup InfluxDB
I use the docker image influxdb:1.7-alpine
with the volumes /docker-entrypoint-initdb.d and /var/lib/influxdb mounted.
In the folder /docker-entrypoint-initdb.d I added a file db.iql to create my database:
CREATE DATABASE "jenkins" WITH DURATION 24w REPLICATION 1 SHARD DURATION 1d NAME "jenkins_retention_6month"
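For reference, a minimal sketch of starting such a container (the host paths and the container name are illustrative):
docker run -d --name influxdb \
  -p 8086:8086 \
  -v "$PWD/initdb:/docker-entrypoint-initdb.d" \
  -v "$PWD/influxdb-data:/var/lib/influxdb" \
  influxdb:1.7-alpine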
Setup the InfluxDB plugin
See the configuration section of the plugin's page:
https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin
Use the plugin
The InfluxDbPublisher step can be used to collect data produced by plugins like the Metrics Plugin; however, I use it with customDataMap:
influxDbPublisher(
    selectedTarget: 'myTarget',
    customDataMap: [
        myMeasure: [
            field: value  // 'value' is computed earlier in the pipeline
        ]
    ],
    customDataMapTags: [
        myMeasure: [
            tag: 'someTag'
        ]
    ]
)
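With that configuration, the step writes a point into the jenkins database roughly like the following (illustrative line protocol, assuming value evaluated to 42):
myMeasure,tag=someTag field=42 1573040000000000000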
Everything is documented on
https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin
Setup Grafana
I use the docker image grafana/grafana:6.4.3
with the volume /var/lib/grafana mounted.
When the Grafana instance is running, add your InfluxDB database as a datasource.
I configured grafana with the following environment variables:
GF_SERVER_DOMAIN=grafana.mydomain.com
GF_SECURITY_ADMIN_PASSWORD=MyPassword
GF_SMTP_ENABLED=true
GF_SMTP_HOST=smtp:25
GF_SMTP_FROM_ADDRESS=grafana@grafana.mydomain.com
I used the docker image namshi/smtp to get an SMTP server.
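Putting it together, a sketch of starting that Grafana container with the values above (the host volume path is illustrative):
docker run -d --name grafana \
  -p 3000:3000 \
  -v "$PWD/grafana-data:/var/lib/grafana" \
  -e GF_SERVER_DOMAIN=grafana.mydomain.com \
  -e GF_SECURITY_ADMIN_PASSWORD=MyPassword \
  -e GF_SMTP_ENABLED=true \
  -e GF_SMTP_HOST=smtp:25 \
  -e GF_SMTP_FROM_ADDRESS=grafana@grafana.mydomain.com \
  grafana/grafana:6.4.3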
Create Grafana Dashboards
It is very easy to create a new dashboard thanks to Grafana's auto-completion feature. You will certainly need to tweak the data you send with the influxDbPublisher step a few times.
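As an illustration, if you publish a failed field into a measurement named build_results (both names hypothetical), a panel answering "how many times did builds fail in a given period" could use an InfluxQL query like:
SELECT count("failed") FROM "build_results" WHERE "failed" = 1 AND $timeFilter GROUP BY time(1d)
($timeFilter is the placeholder Grafana substitutes with the dashboard's current time range.)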
Now that you have your dashboards, you can set up alerts in order to get notified by email in advance when something odd is happening with your CI.

Related

Grafana clone alerts

Grafana 9.2.2 is running with alerts already created. I need to run exactly the same Grafana, with the same notifications, in a docker container on another host, without manually recreating the alerts.
I can't find a JSON file with the alert variables.
The alert configuration is stored inside the Grafana DB, not in any config file, so it is not an easy migration.
You can take a DB dump of Grafana from the container where the alerts are configured and restore the DB in the new Grafana container.
Reference: https://grafana.com/blog/2020/01/13/how-to-migrate-your-configuration-database/
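If Grafana is using its default SQLite database, a pragmatic sketch of that dump/restore (the container names are illustrative):
docker cp old-grafana:/var/lib/grafana/grafana.db ./grafana.db
docker stop new-grafana
docker cp ./grafana.db new-grafana:/var/lib/grafana/grafana.db
docker start new-grafana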
One more option is to create the alerts using file provisioning instead of the UI, so that the YAML can be applied to any cluster.
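With that route, Grafana (9.1+) reads alert rule files from /etc/grafana/provisioning/alerting/ inside the container; a heavily abbreviated skeleton follows, with illustrative names and the actual query/condition definitions elided:
apiVersion: 1
groups:
  - orgId: 1
    name: ci-alerts
    folder: CI
    interval: 1m
    rules:
      - uid: ci-failure-alert
        title: CI failure alert
        condition: A
        for: 5m
        # data: ...  (the queries and conditions referenced by 'condition' go here)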

Unable to pull logs from Airflow Worker

I've got a simple docker development setup for Airflow that includes separate containers for the Airflow UI and Worker. I'm encountering a 403 Forbidden error whenever I attempt to view the log for a task in the Airflow UI.
So far I've ensured they all have the same secret key (in fact, using Docker Volumes they're all reading the exact same configuration file) but this doesn't seem to help. I haven't done anything about time sync, but I'd expect that docker containers would effectively be sharing the system clock anyway so I don't see how they'd get out of sync in the first place.
I can find the log file on the Airflow worker, and the task has run successfully, but something is obviously missing that should allow the Airflow UI to display it (and it would be much more convenient for my workflow to see the logs in the UI rather than having to rummage around on the worker).
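For reference, the key that has to match is the webserver secret key, which signs the log-fetch requests the UI makes to the worker; a minimal sketch of pinning it explicitly in a docker-compose file rather than relying on the shared config file (the value is a placeholder, and note that clock skew between containers can cause the same 403 because the signed tokens are time-limited):
services:
  webserver:
    environment:
      AIRFLOW__WEBSERVER__SECRET_KEY: "one-shared-value"  # must match the worker
  worker:
    environment:
      AIRFLOW__WEBSERVER__SECRET_KEY: "one-shared-value"  # must match the webserver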

Why can't I connect to the influx shell?

After installing InfluxDB, the UI works well.
Following the example (my environment is Ubuntu 18.04), entering './influx' should connect you to the influx shell, but the following appears instead:
Incorrect Usage. flag provided but not defined: -precision
NAME:
   influx - Influx Client
USAGE:
   influx [command]
COMMANDS:
   version              Print the influx CLI version
   write                Write points to InfluxDB
   bucket               Bucket management commands
   completion           Generates completion scripts
   query                Execute a Flux query
   config               Config management commands
   org, organization    Organization management commands
   delete               Delete points from InfluxDB
   user                 User management commands
   task                 Task management commands
   telegrafs            List Telegraf configuration(s). Subcommands manage Telegraf configurations.
   dashboards           List Dashboard(s).
   export               Export existing resources as a template
   secret               Secret management commands
   v1                   InfluxDB v1 management commands
   auth, authorization  Authorization management commands
   apply                Apply a template to manage resources
   stacks               List stack(s) and associated templates. Subcommands manage stacks.
   template             Summarize the provided template
   bucket-schema        Bucket schema management commands
   ping                 Check the InfluxDB /health endpoint
   setup                Setup instance with initial user, org, bucket
   backup               Backup database
   restore              Restores a backup directory to InfluxDB
   remote               Remote connection management commands
   replication          Replication stream management commands
   server-config        Display server config
   help, h              Shows a list of commands or help for one command
GLOBAL OPTIONS:
   --help, -h  show help
Error: flag provided but not defined: -precision
Could it be because the version is different (1.x vs 2.x)? Or is there another way?
InfluxDB 1.x and 2.x are very different: 1.x uses databases while 2.x uses buckets, so the influx CLI is used differently.
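For reference, with a 2.x server the CLI talks to the HTTP API instead of opening a shell directly, so it first needs a connection config; a minimal sketch (the org, token, and bucket names are placeholders):
influx config create --config-name local --host-url http://localhost:8086 --org my-org --token my-token --active
influx query 'from(bucket: "my-bucket") |> range(start: -1h)'
Newer versions of the 2.x CLI also ship an influx v1 shell subcommand that provides an interactive shell similar to the 1.x one.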

CI testing with docker-compose on Jenkins with Kubernetes

I have tests that I run locally using a docker-compose environment.
I would like to implement these tests as part of our CI using Jenkins with Kubernetes on Google Cloud (following this setup).
I have been unsuccessful because docker-in-docker does not work.
It seems that right now there is no solution for this use-case. I have found other questions related to this issue, here and here.
I am looking for solutions that will let me run docker-compose. I have found solutions for running docker, but not for running docker-compose.
I am hoping someone else has had this use-case and found a solution.
Edit: Let me clarify my use-case:
1. When I detect a valid trigger (i.e. a push to the repo) I need to start a new job.
2. I need to set up an environment with multiple dockers/instances (docker-compose).
3. The instances in this environment need access to code from git (mount volumes/create new images with the data).
4. I need to run tests in this environment.
5. I then need to retrieve results from these instances (JUnit test results for Jenkins to parse).
The problems I am having are with 2 and 3.
For 2, there is a problem running this in parallel (more than one job) since the docker context is shared (docker-in-docker issues). If this runs on more than one node I get clashes because of shared resources (ports, for example). My workaround is to limit it to one running instance and queue the rest (not ideal for CI).
For 3, there is a problem mounting volumes, again because the docker context is shared (docker-in-docker issues). I cannot mount the code that I check out in the job because it is not present on the host that actually runs the docker instances I trigger. My workaround is to build a new image from my template and copy the code into the new image, then use that for the test (this works, but means I need docker cp tricks to get data back out, which is also not ideal).
I think the better way is to use pure Kubernetes resources and run the tests directly on Kubernetes, not via docker-compose.
You can convert your docker-compose files into Kubernetes resources using kompose utility.
Probably you will need to adapt the conversion result somewhat, or maybe you should convert your docker-compose objects into Kubernetes objects manually. Possibly you can just use Jobs with multiple containers instead of a combination of deployments + services, as sketched below.
Anyway, I definitely recommend using Kubernetes abstractions instead of running tools like docker-compose inside Kubernetes.
Moreover, you will still be able to run tests locally, using Minikube to spawn a small all-in-one cluster right on your PC.
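For example, a rough sketch of such a Job (after kompose convert -f docker-compose.yml, or written by hand) that runs the service under test and the test runner as two containers sharing a volume for the JUnit results; the image names and paths are placeholders:
apiVersion: batch/v1
kind: Job
metadata:
  name: ci-tests
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: app                                # service under test
          image: registry.example.com/app:latest
        - name: tests                              # test runner, writes JUnit XML into /results
          image: registry.example.com/tests:latest
          volumeMounts:
            - name: results
              mountPath: /results
      volumes:
        - name: results
          emptyDir: {}
Note that for the Job to complete, a long-running service container has to exit or be stopped once the tests finish.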

How does Spread know to update image in Kubernetes?

I want to set up GitLab CD to Kubernetes, and I read this article.
However, I am wondering, how is it that my K8 cluster would be updated with my latest Docker images?
For example, in my .gitlab-ci.yaml file I will have build, test, and release stages that ultimately update my cloud Docker images. Setting up the deploy stage as instructed in the article:
deploy:
  stage: deploy
  image: redspreadapps/gitlabci
  script:
    - null-script
would Spread then know to "magically" update my K8 cluster (perhaps by re-pulling all images and performing rolling updates) as long as I set up my directory structure of K8 resources as specified by Spread?
I don't have a direct answer, but from looking at the Spread project it seems pretty dead: the last commit was in August last year, there are a bunch of open issues, and it does not support any of the newer Kubernetes constructs (e.g. Deployments).
The typical way to update images in Kubernetes nowadays is to run a command like kubectl set image deployment/<deployment-name> <container-name>=<image>. This will in turn perform a rolling update on the deployment, shutting down one pod at a time and replacing it with the new image. See this doc.
Since Spread predates that, I assume it must use a rolling update of a replication controller, with a command like kubectl rolling-update NAME -f FILE, picking up the new image from the configuration file in its project folder (assuming it changed). See this doc.
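For concreteness, a sketch of that modern flow (the deployment, container, and image names are placeholders):
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2
kubectl rollout status deployment/my-app   # blocks until the rolling update finishes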
