How can I analyse the data from a UPPAAL simulator EPS file? - analysis

I ran the automaton and exported the EPS file, but how do I further analyse it and extract information from the EPS file?
Is it possible to write variable values to an external log file?
Thanks in advance

There is no point in analysing the EPS file (it is meant to be included in reports).
For data analysis, try simulate queries (Uppaal 4.1) in the verifier, like this:
simulate 1 [<=300] {
(T(1).Ready+2*T(1).Computing+3*T(1).Release+4*T(1).Error)+8,
(T(2).Ready+2*T(2).Computing+3*T(2).Release+4*T(2).Error)+4,
(T(3).Ready+2*T(3).Computing+3*T(3).Release+4*T(3).Error)+0
}
where T(i) is a process and Ready, Computing, Release and Error are its locations. Then model-check the query, right-click on it to see the plot, and then either:
a) right-click on the plot and choose Export -> comma separated values,
or:
b) parse the plot values from the standard output of verifyta (the command-line tool) when verifying the query above.
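With either route you end up with plain time/value series that are easy to post-process. As a minimal sketch in Python (assuming the CSV export gives one time,value pair per line; the exact layout may differ between Uppaal versions, so adjust the parsing accordingly):

import csv

times, values = [], []
with open('exported_plot.csv') as f:      # file name is just an example
    for row in csv.reader(f):
        if len(row) < 2:
            continue                      # skip blank lines
        try:
            t, v = float(row[0]), float(row[1])
        except ValueError:
            continue                      # skip headers / non-numeric lines
        times.append(t)
        values.append(v)

print('samples:', len(values))
print('value range:', min(values), '-', max(values))

From there you can decode the location of each task from the weighted sums used in the query above, or feed the series into any plotting tool.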

Related

Store Terminal Output in a File ros melodic cpp

I have a PCD file with x y z coordinates from a point cloud.
I also have a C++ file from which I print x y z coordinates to the terminal. (This is just the coordinates, not a point cloud.)
I want to store this output in another file in order to compare it with the PCD file.
How do I do it?
Why do you need to store it directly from stdout? There are a couple of different ways to go about this that are probably easier.
You can simply publish the (x,y,z) data and record it with rosbag record and then export via rosbag_to_csv.
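For example (a minimal sketch only; the node name, topic name and the get_the_data() helper are placeholders you would replace with your own code), you could publish the coordinates as a geometry_msgs/Point and record that topic:

import rospy
from geometry_msgs.msg import Point

rospy.init_node('xyz_publisher')
pub = rospy.Publisher('xyz_points', Point, queue_size=10)
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    x, y, z = get_the_data()      # placeholder for however you obtain the coordinates
    pub.publish(Point(x=x, y=y, z=z))
    rate.sleep()

Then rosbag record /xyz_points captures the data, and rosbag_to_csv turns the bag into a CSV afterwards.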
You could also just write the values to a file directly in the code instead of printing them out. Since you did not specify Python or C++, here is a quick example in Python:
import rospy

f = open('your_output_file.csv', 'a')
while not rospy.is_shutdown():
    # whatever operations you need to get the data
    x, y, z = get_the_data()
    output_str = str(x) + ',' + str(y) + ',' + str(z) + '\n'
    f.write(output_str)
f.close()
ROS will also automatically log output from the rospy.log*() functions. You can control where this is stored by exporting the environment variable ROS_LOG_DIR. Note that this may not work 100% correctly for print() statements.
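For instance (a tiny sketch with made-up values), logging the coordinates instead of printing them would look like:

import rospy

rospy.init_node('xyz_logger')
x, y, z = 1.0, 2.0, 3.0           # placeholder values
rospy.loginfo('point: %f, %f, %f', x, y, z)

With export ROS_LOG_DIR=/path/of/your/choice set before starting the node, the log files should end up under that directory.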
Finally, if you really need to use stdout for some reason, you can always redirect the output from however you're launching the node, e.g. roslaunch your_package your_launch.launch >> some_file.txt

How can I export the data from the results table in Image J Pendant Drop plugin?

I am using the Pendant Drop plugin (http://www.msc.univ-paris-diderot.fr/~daerr/misc/pendent_drop.html) to get the surface tension of droplets. It produces a table of results in a window called Results; however, it does not have the usual File, Save As etc. options. Also, when I try the getResults and nResults commands in a macro, I get no results back.
Do I need to edit the plugin to be able to output the results? My aim is to output the results as a CSV file.
Pendent Drop is an ImageJ2-style plugin that generates a SciJava Table. In an up-to-date Fiji installation, you can save such tables using File > Export > Table....
The macro functions getResults and nResults do not work on those tables, because they require an ImageJ1 ResultsTable window.
See also this topic on the image.sc forum. In general, questions like this one are much better asked on https://forum.image.sc (see also the description of the imagej tag).

PENTAHO data integration data source/destination mapping

I'm reaching out hoping to find answers about a Pentaho Data Integration limitation.
I'm currently working on a 1-to-1 data source integration and would like to make it n-to-1-n. This requires dynamic job creation, and I would like to know if any of you have come across such an issue. My 1-to-1 integration works perfectly; it integrates from different data source types (CSV, databases such as MySQL, Oracle, ...) to the same destination, and I need to make it n-to-1-n.
There is a Metadata Injection Step just for that.
A use case similar to yours is described by Diethard here.
Because it seems that you have a lot of different source formats, it may be a good investment to read the use case from Jens, the author of the step, here, which (apart from the automation) is precisely your case.
AFAIK it is not possible in Pentaho DI to create dynamic transformations for arbitrary data sources. PDI expects the input columns to be available in the input stream before it loads the data into the target database. For example, if you are using one data source (in MySQL) and loading it to a CSV output, the CSV output step expects the input columns to be present in the data source step (Table input). If you are trying to load n arbitrary data sources, you need to define the input columns/fields for each of them individually.
Alternatively, there are a few things you can explore:
1. Fast dump in the Text file output step:
There is an option to fast-dump the data set in the Text file output step. Here you don't need to define any output columns; the input fields are dumped as they are, without formatting. You can use this to map all of the input sources to a CSV format and then load them to their targets.
2. Extending Java and Kettle together to build a solution:
PDI allows you to write custom Java code on top of Kettle. You can check this blog for more. You can use this idea to write custom code that passes the fields of the n data sources to Kettle as parameters and executes them. (Note: I haven't tried this myself, just thinking out loud here.)
Hope this helps :)

How do I plot benchmark data in a Jenkins matrix project

I have several Jenkins matrix projects in which I output benchmark results (i.e. execution times) to a CSV file. I'd like to plot these execution times as a function of the build number, so I can see if my projects are regressing over time.
I can confirm the Plot Plugin is a correct and quite useful approach. BTW, it supports CSV as well: plot configuration example
I've been using it for several years without any problem. Benchmark results were generated as a property file, with the benchmark id (series id) used as a key and the result as a value. One build produces one result for each benchmark. Having that data, it is quite easy to create a plot configuration and track performance.
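As a small illustration of that scheme (the file name, benchmark ids and values below are made up, and the exact layout the Plot Plugin expects depends on how you configure the plot), a build step could write one key=value file per build:

# write benchmark results so the Jenkins Plot Plugin can pick them up
results = {'sort_benchmark': 12.3, 'parse_benchmark': 4.7}   # execution times in seconds

with open('benchmarks.properties', 'w') as f:
    for benchmark_id, seconds in results.items():
        f.write('%s=%s\n' % (benchmark_id, seconds))

Each benchmark id then becomes a series in the plot configuration, plotted against the build number.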
This may help you:
https://wiki.jenkins-ci.org/display/JENKINS/Plot+Plugin
It adds plotting capabilities to Jenkins.

Comparing two text files of almost 3 GB using MapReduce (Cloudera Hadoop 0.20.2)

I'm trying to do the following in Hadoop MapReduce (written in Java, on a Linux OS).
Text files 'rules-1' and 'rules-2' (3 GB in total) contain rules, one per line (each rule is terminated by an end-of-line character), so the files can be read using the readLine() function.
The files 'rules-1' and 'rules-2' need to be imported as a whole from HDFS into every map function in my cluster, i.e. these files are not splittable across different map functions.
The input to the mapper's map function is a text file called 'record' (each line is terminated by an end-of-line character), so from the 'record' file we get the (key, value) pairs. This file is splittable and can be given as input to the different map functions used in the whole MapReduce job.
What needs to be done is to compare each value (i.e. each line from the record file) with the rules inside 'rules-1' and 'rules-2'.
The problem is: if I pull each line of the rules-1 and rules-2 files into a static ArrayList only once, so that each mapper can share the same ArrayList and compare its elements with each input value from the record file, I get a memory overflow error, since 3 GB cannot be stored in the ArrayList at once.
Alternatively, if I import only a few lines from the rules-1 and rules-2 files at a time and compare them to each value, the MapReduce job takes a very long time to finish.
Could you provide me with any other ideas on how this can be done without the memory overflow error? Would it help if I put those files inside an HDFS-backed database or something? I'm running out of ideas and would really appreciate your suggestions.
If your input files are small, you can load them into static variables and use the rules as input.
If that is not the case, I can suggest the following approaches:
a) Give rules-1 and rules-2 a high replication factor, close to the number of nodes you have. Then reading rules-1 and rules-2 from HDFS for each record in the input is relatively efficient, because it will be a sequential read from the local datanode.
b) If you can find a hash function which, when applied to a rule and to an input string, predicts without false negatives that they can match, then you can emit this hash for the rules and for the input records and resolve all possible matches in the reducer (see the sketch after this list). It is very similar to the way a join is done using MapReduce.
c) I would also consider other optimization techniques like building search trees or sorting, since otherwise the problem looks computationally expensive and will take forever...
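As a very rough sketch of approach (b), here written as a Hadoop Streaming mapper in Python rather than the Java API (the bucket() hash, the tags and the matching logic are placeholders you would have to replace with something that never produces false negatives for your rules):

#!/usr/bin/env python
# streaming mapper: emit (hash bucket, tagged line) for both the rules files and the
# record file, so candidate matches meet in the same reducer (a repartition join)
import os
import sys

def bucket(line):
    # placeholder hash over the first field; replace with your own scheme
    return str(hash(line.split(',')[0]) % 1000)

# Hadoop Streaming exposes the current input file path in this environment variable
source = os.environ.get('map_input_file', '')
tag = 'R' if 'rules' in source else 'D'

for line in sys.stdin:
    line = line.rstrip('\n')
    if line:
        print('%s\t%s:%s' % (bucket(line), tag, line))

The reducer then only has to compare the R-tagged rules against the D-tagged records that landed in the same bucket, instead of every rule against every record.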
On this page, find Real-World Cluster Configurations; it covers file size configuration.
You could use the parameter "mapred.child.java.opts" in conf/mapred-site.xml to increase the memory available to your mappers. You might not be able to run as many map slots per server, but with more servers in your cluster you could still parallelize your job.
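For example (the heap size below is only an illustration; tune it to what your nodes can afford), in conf/mapred-site.xml:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>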
Read the content text file through the normal MapReduce input, and read the keyword text file inside the mapper function (using the HDFS API). Split the value with a StringTokenizer on value.toString() for the lines coming from MapReduce, and in your mapper write code that reads the HDFS text file line by line; with two nested loops you can then compare the two, and whenever you find the data you want, send it to the reducer.
Split the 3 GB text file into several smaller text files and run all of them through your previous MapReduce program as usual.
For splitting the text file I wrote a Java program, and you decide how many lines you want to write into each text file.
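A rough equivalent of that splitting step in Python (the input file name, output naming and chunk size are placeholders; the original answer used a Java program for this):

lines_per_file = 100000                 # decide how many lines go into each piece
part, out = 0, None

with open('rules-1') as src:            # the big input file
    for i, line in enumerate(src):
        if i % lines_per_file == 0:
            if out:
                out.close()
            out = open('rules-1.part%04d' % part, 'w')
            part += 1
        out.write(line)
if out:
    out.close()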

Resources