I have several Jenkins matrix projects where I output benchmark results (i.e. execution times) to a CSV file. I'd like to plot these execution times as a function of the build number, so I can see whether my projects are regressing over time.
I can confirm that the Plot Plugin is a correct and quite useful approach. By the way, it supports CSV as well: see the plot configuration example.
I've been using it for several years without any problems. Benchmark results were generated as a properties file: the benchmark id (series id) was used as the key and the result as the value, and one build produces one result for each benchmark. With that data it is quite easy to create the plot configuration and track performance.
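For illustration, a per-build data file in the scheme described above could be as simple as the following (the benchmark names and values are made up; check the plugin page for the exact layout it expects):

sort_10k_items=123.4
parse_large_config=87.2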
This may help you:
https://wiki.jenkins-ci.org/display/JENKINS/Plot+Plugin
It adds plotting capabilities to Jenkins.
Context
Hi all, I have been using Apache Beam pipelines to generate columnar DB files to store in GCS. I have a data stream coming in from Kafka and use a window of 1 minute.
I want to transform all the data of that 1-minute window into a columnar DB file (ORC in my case; it could be Parquet or anything else), and I have written a pipeline for this transformation.
Problem
I am experiencing general slowness. I suspect it could be due to the GroupByKey transformation, as I have only one key. Is there really a need for it? If not, what should be done instead? I read that Combine isn't very useful here, as my pipeline isn't really aggregating the data but creating a merged file. What I need is exactly an iterable list of objects per window, which will be transformed into ORC files.
Pipeline Representation
input -> window -> group by key (only 1 key) -> pardo (to create DB) -> IO (to write to GCS)
What I have tried
I have tried using the profiler and scaling horizontally/vertically. Using the profiler, I saw more than 50% of the time going into the GroupByKey operation. I do believe the problem is hot keys, but I am unable to find a solution for what should be done instead. When I removed the GroupByKey operation, my pipeline keeps up with the Kafka lag (i.e., it doesn't seem to be an issue at the Kafka end).
Code Snippet
p.apply("ReadLines", KafkaIO.<Long, byte[]>read().withBootstrapServers("myserver.com:9092")
.withTopic(options.getInputTopic())
.withTimestampPolicyFactory(MyTimePolicy.myTimestampPolicyFactory())
.withConsumerConfigUpdates(Map.of("group.id", "mygroup-id")).commitOffsetsInFinalize()
.withKeyDeserializer(LongDeserializer.class)
.withValueDeserializer(ByteArrayDeserializer.class).withoutMetadata())
.apply("UncompressSnappy", ParDo.of(new UncompressSnappy()))
.apply("DecodeProto", ParDo.of(new DecodePromProto()))
.apply("MapTSSample", ParDo.of(new MapTSSample()))
.apply(Window.<TSSample>into(FixedWindows.of(Duration.standardMinutes(1)))
.withTimestampCombiner(TimestampCombiner.END_OF_WINDOW))
.apply(WithKeys.<Integer, TSSample>of(1))
.apply(GroupByKey.<Integer, TSSample>create())
.apply("CreateTSORC", ParDo.of(new CreateTSORC()))
.apply(new WriteOneFilePerWindow(options.getOutput(), 1));
Wall Time Profile
https://gist.github.com/anandsinghkunwar/4cc26f7e3da7473af66ce9a142a74c35
The problem indeed seems to be a hot-key issue. I had to change my pipeline to create a custom IO for ORC files and bump the number of shards up to 50 for my case, and I removed the GroupByKey entirely. Since Beam doesn't yet auto-determine the number of shards for FileIO.write(), you'll have to manually choose a number that suits your workload.
Also, enabling the Streaming Engine in Google Dataflow sped up ingestion even more.
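A minimal sketch of what the tail of the pipeline could look like after this change, assuming a hypothetical OrcSink that implements FileIO.Sink<TSSample> (it stands in for the custom ORC IO; the Kafka read and windowing from the original pipeline stay unchanged):

// 'windowed' is the PCollection<TSSample> produced by the Window step above.
// OrcSink is a hypothetical FileIO.Sink<TSSample>: it implements
// open(WritableByteChannel), write(TSSample) and flush().
windowed
    .apply("WriteORC", FileIO.<TSSample>write()
        .via(new OrcSink())          // custom ORC sink replacing CreateTSORC + WriteOneFilePerWindow
        .to(options.getOutput())     // output location from the question
        .withSuffix(".orc")
        .withNumShards(50));         // FileIO.write() has no shard auto-determination yet, so set it explicitly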
I'm fairly new to Google Dataflow and I am finding that the service spends several hours estimating the input file size before actually processing data, and will often do several recounts for large input collections before failing. I'm using Apache Beam 2.9 and the io.ReadFromText method.
The logs start with a comment about beginning estimation of input file size and continue to log an update every 10k files counted.
Is there a way to skip this step or to significantly increase the pace in which it does the count?
The Python ReadFromText source is based on FileBasedSource. If you look at the code for it, you can find that its estimate_size method is inefficient for a very large set of files.
As we discussed in the comments, you may be able to improve this bottleneck by manually partitioning the range of files. For example, if your files are gs://my_bucket/file001, gs://my_bucket/file002, ... gs://my_bucket/file999, you should be able to add 10 sources, something like so:
import apache_beam as beam

p = beam.Pipeline()
file_lines = (
    [p | 'Read%d' % i >> beam.io.ReadFromText('gs://my_bucket/file%s*' % i)  # unique label per source
     for i in range(10)]
    | beam.Flatten())
This should help your pipeline scale for a case like this.
As for a permanent solution... I imagine that one could try to propose improvements to the source itself, so that a future version may have better performance.
There are also transforms based on Java's FileIO that we are planning to implement. That may be helpful as well, but it's not so near on the horizon.
I'm reaching out hoping to find answers about a Pentaho Data Integrator limitation.
I'm currently working on a 1-to-1 data source integration and would like to make it n-to-1-n. This requires dynamic job creation, and I would like to know whether any of you have come across such an issue. My 1-to-1 is working perfectly: it integrates from different data source types (CSV, databases such as MySQL, Oracle, ...) to the same data destination, and I need to make it n-to-1-n.
There is a Metadata Injection Step just for that.
A use case similar to yours is described by Diethard here.
Because it seems that you have a lot of different source formats, it may be a good investment to read the use case of Jens, the author of the step, here, which (apart from the automation) is precisely your case.
AFAIK, in Pentaho DI it is not possible to create dynamic transformations for arbitrary data sources. PDI expects the input columns to be available in the input stream before it loads the data into the target database. For example, if you are using one data source (in MySQL) and loading it to a CSV output, the CSV output step expects the input columns to be present in the data source step (Table input). If you are trying to load n arbitrary data sources, you need to define the input columns/fields for each of them individually.
Alternatively, there are a few things you can explore:
1. Fast Dump in Text File Output step:
There is an option to fast-dump the data set in the Text file output step. Here you don't need to define any output columns; the input fields are dumped automatically, as-is and without formatting. You can use this to map all of the input sources to a CSV format and then load them to their targets.
2. Extending Java and Kettle together to build a solution:
PDI allows you to write custom Java code on top of Kettle; you can check this blog for more. You can use this idea to write custom code that passes the fields of the n data sources to Kettle as parameters and executes them. (Note: I haven't tried this myself, just thinking out loud here.)
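As a rough, untested illustration of that idea (the transformation file and parameter names below are made up), the Kettle API lets you load a .ktr from Java and run it once per source with different parameter values:

import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunPerSource {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();

        // Hypothetical list of source definitions (type, host, table).
        String[][] sources = {
            {"mysql", "host1", "sales"},
            {"oracle", "host2", "inventory"}
        };

        for (String[] src : sources) {
            // generic_load.ktr is a made-up transformation that reads its
            // source from named parameters.
            TransMeta meta = new TransMeta("generic_load.ktr");
            Trans trans = new Trans(meta);
            trans.setParameterValue("SOURCE_TYPE", src[0]);
            trans.setParameterValue("SOURCE_HOST", src[1]);
            trans.setParameterValue("SOURCE_TABLE", src[2]);
            trans.execute(null);
            trans.waitUntilFinished();
            if (trans.getErrors() > 0) {
                throw new RuntimeException("Load failed for " + src[2]);
            }
        }
    }
}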
Hope this helps :)
I am working on a problem of finding similar content in a log file. Let's say I have a log file which looks like this:
show version
Operating System (OS) Software
Software
BIOS: version 1.0.10
loader: version N/A
kickstart: version 4.2(7b)
system: version 4.2(7b)
BIOS compile time: 01/08/09
kickstart image file is: bootflash:/m9500-sf2ek9-kickstart-mz.4.2.7b.bin
kickstart compile time: 8/16/2010 13:00:00 [09/29/2010 23:10:48]
system image file is: bootflash:/m9500-sf2ek9-mz.4.2.7b.bin
system compile time: 8/16/2010 13:00:00 [09/30/2010 00:46:36]
Hardware
xxxx MDS 9509 (9 Slot) Chassis ("xxxxxxx/xxxxx-2")
xxxxxxx, xxxx with 1033100 kB of memory.
Processor Board ID xxxx
Device name: xxx-xxx-1
bootflash: 1000440 kB
slot0: 0 kB (expansion flash)
To a human eye it is easy to see that "Software" and the data below it form one section, and "Hardware" and the data below it form another. Is there a way I can build a model, using machine learning or some other technique, to cluster similar sections based on a pattern? Also, I have shown two similar kinds of patterns here, but the patterns between sections might vary, and such sections should be identified as different. I have tried to measure similarity using cosine similarity, but it doesn't help much because the words aren't similar, only the pattern is.
I actually see two separate machine learning problems:
1) If I understood you correctly, the first problem you want to solve is splitting each log into distinct sections, one for Hardware, one for Software, etc.
To achieve this, one approach could be to try to extract the headings which mark the beginning of a new section. To do so, you could manually label a set of different logs, marking each row as heading=true or heading=false.
Now you could train a classifier which takes your labeled data as input, and the result would be a model.
2) Now that you have these different sections, you can split each log into them and treat each section as a separate document.
I would then first try straightforward document clustering using a standard NLP pipeline:
Tokenize each document to get the tokens
Normalize them (maybe stemming is not the best idea for logs)
Create a tf-idf vector for each document
Start with a simple clustering algorithm like k-means to try to cluster the different sections
After the clustering, sections that are similar to each other should end up in the same cluster.
I hope this helps. I think especially the first task is quite hard, and maybe hand-tailored patterns will perform better.
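As an illustration of such a hand-tailored pattern, here is a minimal sketch that splits a log like the one above into sections. It assumes headings are unindented lines without ':' or digits (an assumption about this particular log format, not a general rule):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LogSectionSplitter {
    // Heuristic: an unindented line with no ':' and no digits
    // (e.g. "Software", "Hardware") marks the start of a new section.
    static boolean isHeading(String line) {
        return !line.isEmpty()
                && !Character.isWhitespace(line.charAt(0))
                && !line.contains(":")
                && !line.matches(".*\\d.*");
    }

    // Groups the lines of a log under the heading they follow.
    static Map<String, List<String>> split(List<String> lines) {
        Map<String, List<String>> sections = new LinkedHashMap<>();
        String current = "PREAMBLE";
        sections.put(current, new ArrayList<>());
        for (String line : lines) {
            if (isHeading(line)) {
                current = line.trim();
                sections.putIfAbsent(current, new ArrayList<>());
            } else {
                sections.get(current).add(line);
            }
        }
        return sections;
    }
}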
I'm trying to do the following in Hadoop MapReduce (written in Java, on a Linux OS):
Text files 'rules-1' and 'rules-2' (3 GB in total) contain rules, one per line, so the files can be read using the readLine() function.
The files 'rules-1' and 'rules-2' need to be imported as a whole from HDFS in every map function in my cluster, i.e. these files are not splittable across different map functions.
The input to the mapper's map function is a text file called 'record' (each line terminated by a newline character), so from the 'record' file we get the (key, value) pairs. This file is splittable and can be given as input to the different map functions used in the whole MapReduce job.
What needs to be done is to compare each value (i.e. each line from the record file) with the rules inside 'rules-1' and 'rules-2'.
The problem is: if I pull every line of the rules-1 and rules-2 files into a static ArrayList only once, so that each mapper can share the same ArrayList and compare each input value from the record file against its elements, I get a memory overflow error, since 3 GB cannot be held in the ArrayList at once.
Alternatively, if I import only a few lines from the rules-1 and rules-2 files at a time and compare them to each value, MapReduce takes a very long time to finish its job.
Could you provide any other ideas for how this can be done without the memory overflow error? Will it help if I put those files inside an HDFS-backed database or something? I'm running out of ideas, and would really appreciate your suggestions.
If your input files are small, you can load them into static variables and use the rules as the input.
If that is not the case, I can suggest the following approaches:
a) Give rules-1 and rules-2 a high replication factor, close to the number of nodes you have. Then you can read rules-1 and rules-2 from HDFS for each record in the input relatively efficiently, because it will be a sequential read from the local datanode.
b) If you can come up with a hash function which, when applied to a rule and to an input string, predicts without false negatives that they can match, then you can emit this hash as the key for both rules and input records and resolve all possible matches in the reducer. It is very similar to how a join is done with MapReduce (see the sketch after this list).
c) I would consider other optimization techniques like building search trees or sorting, since otherwise the problem looks computationally expensive and will take forever...
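A rough sketch of option (b), assuming a hypothetical extractJoinKey() that stands in for the "no false negatives" hash/key described above; the driver (wiring the two mappers via MultipleInputs) is omitted:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class RuleJoin {

    // Hypothetical join key, e.g. the first token of the line.
    static String extractJoinKey(String line) {
        return line.trim().split("\\s+", 2)[0];
    }

    // Tags each rule line with "R" so the reducer can tell rules from records.
    public static class RuleMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text(extractJoinKey(value.toString())), new Text("R\t" + value));
        }
    }

    // Tags each record line with "D" (data).
    public static class RecordMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text(extractJoinKey(value.toString())), new Text("D\t" + value));
        }
    }

    // Only lines that share a join key meet here, so the expensive
    // rule-vs-record comparison runs on a much smaller candidate set.
    public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            List<String> rules = new ArrayList<>();
            List<String> records = new ArrayList<>();
            for (Text v : values) {
                String s = v.toString();
                if (s.startsWith("R\t")) rules.add(s.substring(2));
                else records.add(s.substring(2));
            }
            for (String record : records) {
                for (String rule : rules) {
                    if (record.contains(rule)) {   // stand-in for the real match test
                        ctx.write(new Text(record), new Text(rule));
                    }
                }
            }
        }
    }
}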
On this page, find "Real-World Cluster Configurations"; it covers file size configuration.
You could use the parameter "mapred.child.java.opts" in conf/mapred-site.xml to increase the memory available to your mappers. You might not be able to run as many map slots per server, but with more servers in your cluster you could still parallelize your job.
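For example (the 2 GB heap value below is just a placeholder; choose what fits your nodes):

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>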
Read the content text file through the normal MapReduce input, and read the keyword text file inside the mapper function (using the HDFS API). Split the MapReduce input using a StringTokenizer on value.toString(); in your mapper, write the HDFS code to read the keyword file, which will read it line by line. Use two nested while loops to compare the two, and whenever you have data you want to keep, send it to the reducer.
Split the 3 GB text file into several text files and run your previous MapReduce program over all of them as usual.
For splitting the text file I wrote a Java program; you decide how many lines you want to write to each text file.
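A minimal sketch of such a splitter (not the original author's program; the output file naming and the lines-per-chunk argument are placeholders):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class SplitByLines {
    public static void main(String[] args) throws IOException {
        String inputFile = args[0];                       // e.g. rules-1
        int linesPerChunk = Integer.parseInt(args[1]);    // lines per output file

        try (BufferedReader in = new BufferedReader(new FileReader(inputFile))) {
            String line;
            int lineCount = 0, chunk = 0;
            PrintWriter out = new PrintWriter(new FileWriter(inputFile + ".part" + chunk));
            while ((line = in.readLine()) != null) {
                // Start a new output file every linesPerChunk lines.
                if (lineCount > 0 && lineCount % linesPerChunk == 0) {
                    out.close();
                    chunk++;
                    out = new PrintWriter(new FileWriter(inputFile + ".part" + chunk));
                }
                out.println(line);
                lineCount++;
            }
            out.close();
        }
    }
}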