Get LOC count for classes and methods using Jenkins

I have installed Jenkins for my projects. The automatic build and deployment is happening successfully.
I would like to get the following data:
No. of classes with lines in the range 0-50
No. of classes with lines in the range 301-500
No. of classes with lines in the range 501-1000
No. of classes with lines > 1000 etc.
I'd like the same breakdown for methods, e.g. the no. of methods with lines in the range 0-50.
How can I get this data?

I suggest you use cloc (http://cloc.sourceforge.net/).
You can then export the data as SQL, import it into an in-memory H2 database, and
group it according to your needs.
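If it helps, once you have per-file counts (cloc's --by-file option gives you that, and for typical one-class-per-file Java code, file LOC approximates class LOC), the binning itself is trivial. A minimal plain-Java sketch, with made-up LOC values standing in for real cloc output:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LocBins {
    // Return the range label for a given line count, using the ranges from the question
    // (plus 51-300, which the question implies with "etc.").
    static String binFor(int loc) {
        if (loc <= 50) return "0-50";
        if (loc <= 300) return "51-300";
        if (loc <= 500) return "301-500";
        if (loc <= 1000) return "501-1000";
        return ">1000";
    }

    // Count how many classes (files) fall into each range.
    static Map<String, Integer> histogram(int[] locCounts) {
        Map<String, Integer> bins = new LinkedHashMap<>();
        for (String label : new String[]{"0-50", "51-300", "301-500", "501-1000", ">1000"}) {
            bins.put(label, 0);
        }
        for (int loc : locCounts) {
            bins.merge(binFor(loc), 1, Integer::sum);
        }
        return bins;
    }

    public static void main(String[] args) {
        // Hypothetical per-class LOC values, e.g. parsed from cloc's per-file CSV output.
        int[] locs = {12, 48, 220, 340, 760, 1500};
        histogram(locs).forEach((range, count) ->
                System.out.println(range + ": " + count));
    }
}
```

The same histogram works for methods once you have per-method counts from a tool like javancss.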

Perhaps more than you need, but have you looked at Sonar? (http://www.sonarsource.org/) It integrates with your build, and can provide the metrics you're looking for and a lot more besides.

There are several other useful and easy to use tools:
javancss: A Source Measurement Suite for Java http://www.kclee.de/clemens/java/javancss/
ckjm: Chidamber and Kemerer Java Metrics http://www.spinellis.gr/sw/ckjm/
Some relevant tools:
classycle - http://classycle.sourceforge.net/
jdepend - http://clarkware.com/software/JDepend.html
You can also use XRadar to aggregate all these reports and get something called "Project health". XRadar also supports the previously mentioned cloc.

I do not know if the issue is still relevant to you, but the other responses do not address Jenkins itself. There are several plugins for Jenkins, such as the SLOCCount plugin (based on http://www.dwheeler.com/sloccount/). After you install the plugin, you can retrieve code metrics via the Jenkins REST API.

Is possible to use typeORM with Druid?

Currently I cannot find any information on a Node-compatible ORM for working with Druid.
Druid is not officially supported by typeORM.
Druid takes sql("Druid SQL") so hypothetically, I should be able to output the raw sql queries to druid, correct?
I've not seen typeORM used with Druid directly - rather, it's super common for apps to query the Apache Calcite-powered SQL API directly:
https://druid.apache.org/docs/latest/querying/sql.html
Some people build an additional layer with application logic on top first - e.g. what Target have done.
https://imply.io/virtual-druid-summit/enterprise-scale-analytics-platform-powered-by-druid-at-target
Note the bit on NULL handling in case that's important to ya :) https://druid.apache.org/docs/latest/querying/sql.html#null-values
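For completeness, posting raw SQL to that endpoint is just an HTTP POST with a {"query": ...} JSON body, from Node or anywhere else. A minimal Java sketch of building such a request (the router address localhost:8888 and the wikipedia datasource are assumptions; adjust for your cluster):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class DruidSqlRequest {
    // Build the JSON payload Druid's SQL endpoint expects: {"query": "..."}
    static String payload(String sql) {
        return "{\"query\": \"" + sql.replace("\"", "\\\"") + "\"}";
    }

    public static void main(String[] args) {
        String sql = "SELECT COUNT(*) FROM wikipedia"; // hypothetical datasource
        // Broker/router address is an assumption; adjust host/port for your cluster.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8888/druid/v2/sql"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload(sql)))
                .build();
        System.out.println(request.uri() + " <- " + payload(sql));
        // To actually execute:
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```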

Delete Bigtable row in Apache Beam 2.2.0

In Dataflow 1.x versions, we could use CloudBigtableIO.writeToTable(TABLE_ID) to create, update, and delete Bigtable rows. As long as a DoFn was configured to output a Mutation object, it could output either a Put or a Delete, and CloudBigtableIO.writeToTable() successfully created, updated, or deleted a row for the given RowID.
It seems that the new Beam 2.2.0 API uses BigtableIO.write() function, which works with KV<RowID, Iterable<Mutation>>, where the Iterable contains a set of row-level operations. I have found how to use that to work on Cell-level data, so it's OK to create new rows and create/delete columns, but how do we delete rows now, given an existing RowID?
Any help appreciated!
** Some further clarification:
From this document: https://cloud.google.com/bigtable/docs/dataflow-hbase I understand that changing the dependency ArtifactID from bigtable-hbase-dataflow to bigtable-hbase-beam should be compatible with Beam version 2.2.0, and the article suggests doing Bigtable writes (and hence deletes) in the old way by using CloudBigtableIO.writeToTable(). However, that requires imports from the com.google.cloud.bigtable.dataflow family of dependencies, which the Release Notes suggest is deprecated and shouldn't be used (and indeed it seems incompatible with the new Configuration classes, etc.)
** Further Update:
It looks like my pom.xml didn't refresh properly after the change from bigtable-hbase-dataflow to bigtable-hbase-beam ArtifactID. Once the project got updated, I am able to import from the
com.google.cloud.bigtable.beam.* branch, which seems to be working at least for the minimal test.
HOWEVER: It looks like now there are two different Mutation classes:
com.google.bigtable.v2.Mutation and
org.apache.hadoop.hbase.client.Mutation ?
And in order to get everything to work together, it has to be specified properly which Mutation is used for which operation?
Is there a better way to do this?
Unfortunately, Apache Beam 2.2.0 doesn't provide a native interface for deleting an entire row (including the row key) in Bigtable. The only full solution would be to continue using the CloudBigtableIO class as you already mentioned.
A different solution would be to just delete all the cells from the row. This way, you can fully move forward with using the BigtableIO class. However, this solution does NOT delete the row key itself, so the cost of storing the row key remains. If your application requires deleting many rows, this solution may not be ideal.
import com.google.bigtable.v2.Mutation;
import com.google.bigtable.v2.Mutation.DeleteFromRow;
// Mutation that deletes all cells from a row (the row key itself remains)
Mutation deleteAllCells = Mutation.newBuilder()
    .setDeleteFromRow(DeleteFromRow.getDefaultInstance())
    .build();
I would suggest you continue using CloudBigtableIO and bigtable-hbase-beam. It shouldn't be too different from CloudBigtableIO in bigtable-hbase-dataflow.
CloudBigtableIO uses the HBase org.apache.hadoop.hbase.client.Mutation classes and translates them into the Bigtable equivalents under the covers.

How to get a dataflow job's step details using Java Beam SDK?

I'm using Java Beam SDK for my dataflow job, and com.google.api.services.dataflow.model.Job class gives details about a particular job. However, it doesn't provide any method/property to get dataflow step information such as Elements Added, Estimated Size etc.
Below is the code I'm using to get the job's information,
PipelineResult result = p.run();
String jobId = ((DataflowPipelineJob) result).getJobId();
DataflowClient client = DataflowClient.create(options);
Job job = client.getJob(jobId);
I'm looking for something like,
job.getSteps("step name").getElementsAdded();
job.getSteps("step name").getEstimatedSize();
Thanks in advance.
The SinkMetrics class provides a bytesWritten() method and an elementsWritten() method. In addition, the SourceMetrics class provides an elementsRead() and a bytesRead() method.
If you use the classes in the org.apache.beam.sdk.metrics package to query for these metrics and filter by step, you should be able to get the underlying metrics for the step (i.e., elements read).
I would add that if you're willing to look outside of the Beam Java SDK, since you're running on Google Cloud Dataflow you can use the Google Dataflow API. In particular, you can use projects.jobs.getMetrics to fetch the detailed metrics for the job, including the number of elements written/read. You will need to do some parsing of the metrics, as there are hundreds of metrics for even a simple job, but the underlying data you're looking for is present via this API call (I just tested).
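Once you have the getMetrics response, the parsing boils down to filtering a flat list of name/value pairs by step and metric name. A plain-Java sketch of that filtering step (the metric names below are made up for illustration, not the exact names Dataflow returns):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MetricFilter {
    // Keep only metrics whose name contains the wanted substring,
    // e.g. an element-count metric scoped to a particular step.
    static Map<String, Long> filterByName(Map<String, Long> metrics, String substring) {
        Map<String, Long> out = new LinkedHashMap<>();
        metrics.forEach((name, value) -> {
            if (name.contains(substring)) out.put(name, value);
        });
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical flattened metric names, standing in for a getMetrics response.
        Map<String, Long> metrics = new LinkedHashMap<>();
        metrics.put("MyStep/ElementCount", 1234L);
        metrics.put("MyStep/MeanByteCount", 56L);
        metrics.put("OtherStep/ElementCount", 99L);
        filterByName(metrics, "MyStep/")
                .forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```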

Dataflow GroupBy -> multiple outputs based on keys

Is there any simple way that I can redirect the output of GroupBy into multiple output files based on Group keys?
Bin.apply(GroupByKey.<String, KV<Long, Iterable<TableRow>>>create())
   .apply(ParDo.named("Print Bins").of( ... ))
   .apply(TextIO.Write.to(*Output file based on key*))
If Sink is the solution, would you please share a sample code w/ me?
Thanks!
Beam 2.2 will include an API to do just that - TextIO.write().to(DynamicDestinations), see source. For now, if you'd like to use this API, you can use the 2.2.0-SNAPSHOT version. Note that this API is experimental and might change in Beam 2.3 or onwards.
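To make the idea concrete, the per-key fan-out that DynamicDestinations performs can be sketched outside Beam as plain grouping logic (the output-<key>.txt naming below is hypothetical, not the API's default):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class KeyedOutput {
    // Group records under the output file their key maps to,
    // mimicking what TextIO.write().to(DynamicDestinations) does per key.
    static Map<String, StringBuilder> shard(List<Map.Entry<String, String>> records) {
        Map<String, StringBuilder> files = new LinkedHashMap<>();
        for (Map.Entry<String, String> kv : records) {
            String fileName = "output-" + kv.getKey() + ".txt"; // hypothetical pattern
            files.computeIfAbsent(fileName, f -> new StringBuilder())
                 .append(kv.getValue()).append('\n');
        }
        return files;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> records = List.of(
                Map.entry("a", "row1"), Map.entry("b", "row2"), Map.entry("a", "row3"));
        shard(records).forEach((file, content) ->
                System.out.println(file + ":\n" + content));
    }
}
```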

Can Jenkins show me the total number/percent of broken builds per month?

I have a Jenkins server that builds/tests about 50 projects. Unfortunately, some of these builds fail, but I don't have a good way to measure whether build failures are increasing or decreasing in frequency over time.
What I'd like is something along these lines:
A report that shows me, over the course of a month, how many jobs were unstable/failed
A report that says "X Days without a broken build" (kind of like at construction sites)
A "Red/Green calendar", that would show on a per-day basis whether any builds were broken
I didn't see any plugins that visualized data in any of these ways, but I'm willing to scrape the Jenkins logs to get the information. Is there a better way to see data similar to this?
I think this works pretty decently using the API. You can get all jobs from your view, then go into the job details and get the build numbers and build dates. With those build numbers you can get the corresponding statuses. You would have to do some coding to collect and display the data, but this would be a possible way.
Another possibility would be using a Groovy script over the console in Manage Jenkins. I do not have much experience working with that feature though, but as you have access to the internal representation it should be pretty easy to get some data out of there.
Finally, the optimal solution would be to write a plugin that does the work, but this is of course also the solution that requires the most effort and know-how.
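As a concrete illustration of the "collect and display" step from the API approach, here is a plain-Java sketch of the red/green calendar built from (date, result) pairs like those you can pull per build from the REST API (the sample builds are made up):

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class BuildCalendar {
    // One build record, with the result string as Jenkins reports it.
    record Build(LocalDate date, String result) {}

    // A day is RED if any build that day was not SUCCESS, GREEN otherwise.
    static Map<LocalDate, String> calendar(List<Build> builds) {
        Map<LocalDate, String> days = new TreeMap<>();
        for (Build b : builds) {
            String color = "SUCCESS".equals(b.result()) ? "GREEN" : "RED";
            days.merge(b.date(), color,
                    (old, cur) -> "RED".equals(old) || "RED".equals(cur) ? "RED" : "GREEN");
        }
        return days;
    }

    public static void main(String[] args) {
        List<Build> builds = List.of(
                new Build(LocalDate.of(2017, 1, 2), "SUCCESS"),
                new Build(LocalDate.of(2017, 1, 2), "FAILURE"),
                new Build(LocalDate.of(2017, 1, 3), "SUCCESS"));
        calendar(builds).forEach((day, color) -> System.out.println(day + " " + color));
    }
}
```

The same fold gives you "X days without a broken build" by counting consecutive GREEN days from the end.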
The Global Build Stats plugin might provide the reporting you're looking for.
(And if you already considered this plugin, I'm curious what problems you ran into.)
As pushy mentions, the Groovy script console is a good tool to use for these types of statistics gathering. You can use a Groovy script via the remote API as well. Here is a starting point for gathering information from all jobs matching a pattern:
def jobPattern = 'pattern'
Hudson.instance.getItems(Project).each { project ->
    def results = [:]
    if (project.name.contains(jobPattern)) {
        results."$project.name" = [SUCCESS: 0, UNSTABLE: 0, FAILURE: 0, ABORTED: 0]
        def build = project.getLastBuild()
        while (build) {
            // println "$project.name;$build.id;$build.result"
            results."$project.name"."$build.result" = results."$project.name"."$build.result" + 1
            build = build.getPreviousBuild()
        }
    }
    results.each { name, map ->
        map.each { result, count ->
            println "$name : $result = $count"
        }
    }
}
"Done"
Use this as a start and modify according to your specific requirements.
Try the Build Metrics plugin along with the Global Build Stats plugin.
