Calling a Repast model with a list of model parameters

I have an agent-based model developed in Repast. To calibrate the model, I need to run it with a list of parameter sets and use an optimization algorithm to find the best set (minimizing some loss value). I wonder how to do this in Repast Simphony. Apparently the standard Repast GUI does not support this. I tried batch runs, but that does not seem to be what I am looking for either. I could package the Java code as a JAR file and run it from the command line, but how do I make the program take command-line arguments in my Repast/Java implementation?

Please take a look at the EMEWS framework (emews.org). The tutorials walk through how to use EMEWS to sweep and optimize Repast (Simphony and HPC) simulations.
The main interface for running individual models is the InstanceRunner class. Take a look at Section 8 in the Repast Batch Runs Getting Started Guide: https://repast.github.io/docs/RepastBatchRunsGettingStarted.pdf
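If the remaining question is just how to make the packaged JAR accept parameters, a minimal sketch of a command-line entry point is shown below. Note that runModel() is a hypothetical placeholder for however you invoke your model (for example via InstanceRunner), not a Repast API:

    // Minimal sketch of a command-line entry point for calibration runs.
    public class CalibrationMain {

        public static void main(String[] args) {
            // e.g. java -jar model.jar 0.1 0.5 2.0  (one candidate parameter set)
            double[] params = new double[args.length];
            for (int i = 0; i < args.length; i++) {
                params[i] = Double.parseDouble(args[i]);
            }
            double loss = runModel(params);
            // Print the loss to stdout so an external optimizer can read it.
            System.out.println(loss);
        }

        // Hypothetical placeholder: replace with your actual model invocation
        // and loss computation.
        private static double runModel(double[] params) {
            return 0.0;
        }
    }

An external optimizer can then treat the JAR as a black-box function: spawn it with a candidate parameter set, read the loss from stdout, and propose the next set.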

Related

Should I use a framework or a self-made script for machine learning workflow automation?

For a personal project I am trying to automate the workflow of my machine learning model, but I have some questions from the perspective of a professional approach.
At the moment I am doing the following tasks manually:
From the raw data, I extract the data that interests me into a directory with the help of a third-party software tool (to which I pass the extraction parameters as arguments).
Then I run another piece of software, or in some cases one (or more) of my Python scripts, to pre-process the data, which is stored in a new directory.
Finally, I feed the processed data to one of my models, which returns the labeled data, and I store that in a final directory.
[Process diagram of the steps described above.]
Each step (extract, pre-process, model) is always executed in the same order, but I change the scripts, software parameters, and model according to my needs or the comparison I need to make.
All my scripts are stored in an ordered scripts directory, and the third-party software is called from the command line by a Python script.
My goal is to have a script or program that runs the whole loop by itself. As input it would take the raw data (or the directory where it is stored) and the parameters needed to run the loop with the desired modules (and their correct parameters).
The number of module and parameter combinations is so large that I can't write a script for each one, which is why I want to build something very modular.
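To make this concrete, here is a rough sketch of the kind of interchangeable-module pipeline I have in mind (written in Java just for illustration; all names are made up):

    import java.nio.file.Path;
    import java.util.List;
    import java.util.Map;

    // Rough sketch: every step implements one interface, so modules and their
    // parameters can be swapped without rewriting the loop. All names made up.
    public class PipelineSketch {

        interface Stage {
            Path run(Path inputDir, Map<String, String> params) throws Exception;
        }

        // Feeds each stage's output directory into the next stage.
        static Path runPipeline(List<Stage> stages, Path rawDataDir,
                                List<Map<String, String>> paramsPerStage) throws Exception {
            Path current = rawDataDir;
            for (int i = 0; i < stages.size(); i++) {
                current = stages.get(i).run(current, paramsPerStage.get(i));
            }
            return current;   // directory with the labeled data
        }

        public static void main(String[] args) throws Exception {
            // Placeholder stages; real ones would call the third-party tool,
            // the pre-processing scripts, and the model.
            Stage extract = (in, p) -> in;
            Stage preprocess = (in, p) -> in;
            Stage model = (in, p) -> in;
            Path out = runPipeline(List.of(extract, preprocess, model),
                    Path.of("raw_data"),
                    List.of(Map.of(), Map.of(), Map.of()));
            System.out.println(out);
        }
    }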
I could code my own script, but I would like a more professional approach, as if I had to implement it for a company.
My questions: in my case (customizable/interchangeable modules), would it be more appropriate to use a framework (e.g. Kedro or any other) or to build it myself (because my needs are too specific)? If frameworks are appropriate, which ones should I choose, and why?
I have been researching existing frameworks, but besides the fact that I'm not sure they fit my needs, there are so many that I'd like to spend my time on one that could also help me in future projects or professional experience.
Thank you.

Generating an FMU from a ROS node

fmi_adapter looks like an awesome way to use an FMU as a ROS node. However, I don't see anything about the opposite/inverse problem - generating an FMU from a ROS node. Is there a reason that this is not possible in general? Or is it just an unusual pattern that no one has ever written a library for because it would be seldom used?
I'm the developer of the fmi_adapter package but, admittedly, I never thought about the opposite direction. The big difference is that with an FMU you have an explicit specification of the variables (i.e., inputs and outputs), whereas with ROS these would have to be derived from the code (or may even be determinable only at runtime). This would make a corresponding generator significantly more complex.

Dart metaprogramming features

Will there be an equivalent of the C# Reflection.Emit namespace in Dart?
Reflection.Emit has a number of classes that are used to build types at run time: adding properties, configuring their getters and setters, and building methods and event handlers, all at run time, which is really powerful for metaprogramming.
My idea is to generate my data models at run time and cache them in a map so that I can create instances at run time and add new methods and properties to them when I need to, without having to use mirrors often after generating the class. This could be really useful when writing ORMs and more dynamic applications, where you use reflection once rather than every time you need to modify an instance.
My questions are:
Will there be such a thing in future versions of Dart? They mention something about a Mirror Builder, but I am not sure whether it does the same thing. Can someone please confirm whether that is what a Mirror Builder is about?
Another question: if I am able to generate my data types on the server as strings, is there a way to compile them before sending them to the client, map them in a Map, and use that Map to create instances?
I have seen discussions suggesting that this should be supported at some point, but as far as I know, work on it will not start in the near future.
Similar requirements are usually solved by code generation at build time (Polymer, Angular, and others), using transformers which analyze the code and generate code for reflective property access, or code snippets in HTML.
Smoke is a package that aims to simplify this.
Code generation has the advantage that the amount of code the client needs to download is much smaller.
When you do code generation at runtime, you need a compiler, and that is a lot of code to download into the browser.
try.dartlang.org takes such an approach. The source is available at https://code.google.com/p/dart/source/browse/branches/bleeding_edge/dart/site/try/ .
It includes dart2js (compiled to JavaScript) and runs a background isolate that compiles the Dart code to JS.
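As a rough illustration of the runtime type-building the question asks about (in Java, not Dart, since Dart currently has no direct equivalent), java.lang.reflect.Proxy builds an implementation of an interface at run time; the generated instance can be cached in a map and reused without further reflection on each access:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch (Java, not Dart): build an object at run time whose "properties"
    // live in a map, then cache the generated instance for reuse.
    public class RuntimeModelDemo {

        public interface Model {
            Object get(String property);
            void set(String property, Object value);
        }

        private static final Map<String, Model> CACHE = new HashMap<>();

        static Model model(String name) {
            return CACHE.computeIfAbsent(name, n -> {
                Map<String, Object> fields = new HashMap<>();
                InvocationHandler handler = (proxy, method, args) -> {
                    if (method.getName().equals("get")) {
                        return fields.get(args[0]);
                    }
                    fields.put((String) args[0], args[1]);   // "set"
                    return null;
                };
                return (Model) Proxy.newProxyInstance(
                        Model.class.getClassLoader(),
                        new Class<?>[] { Model.class },
                        handler);
            });
        }

        public static void main(String[] args) {
            model("User").set("name", "Ada");
            System.out.println(model("User").get("name"));   // prints: Ada
        }
    }

Build-time code generation, as described above, achieves a similar effect but moves the cost from the browser to the build.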

Creating a design document from existing Java code

I have existing Java code and need to create a design document based on it.
For starters, even just getting all functions with their input/output parameters would help the overall process.
Note: there is no commented documentation on any procedures, functions, or classes.
Last but not least, let me know of any good tool that will reduce the time required for this phase; currently we write up every flow and related details by hand.
What you want is just too much. Quoting Linus Torvalds: "Good code is its own best documentation." Anyway, I digress.
You might want to look into UML tools which generate class/sequence diagrams from the code. There are many of them, but only a handful support reverse engineering (into and from the class diagram), and an even smaller subset supports the same to/from the sequence diagram. I only know that MagicDraw can do this, but I am biased, as I used to work for the manufacturer of this tool, so do your own shopping around first.
Use Javadoc: http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html
or introspection: http://docs.oracle.com/javase/tutorial/reflect/class/classMembers.html
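As a starting point for the "all functions with input/output parameters" requirement, a short reflection sketch could look like this (the default class name is only an example):

    import java.lang.reflect.Method;
    import java.util.Arrays;

    // Sketch: print every declared method of a class with its return and
    // parameter types -- a crude first pass at a method inventory.
    public class MethodDumper {
        public static void main(String[] args) throws Exception {
            // Pass your own class name on the command line; ArrayList is just a default.
            Class<?> c = Class.forName(args.length > 0 ? args[0] : "java.util.ArrayList");
            for (Method m : c.getDeclaredMethods()) {
                System.out.printf("%s %s%s%n",
                        m.getReturnType().getSimpleName(),
                        m.getName(),
                        Arrays.toString(m.getParameterTypes()));
            }
        }
    }

Run it once per class (or walk a whole JAR) and you have a raw method inventory to paste into the design document.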

Is there a way to determine code coverage without running the code?

I am not asking about the static code analysis provided by StyleCop or FxCop; both have a different purpose, which they serve well. I am asking whether there is a way to find the code coverage of your user control or sub-module. For example, you have an application which uses helper classes in a separate assembly. To measure unit-test code coverage, we need to run the application and check it with NCover or a similar tool.
My requirement is: without running the application, is there any way to find the code coverage of the helper classes or similar assemblies?
See Static Estimation for Test Coverage for a technique that estimates coverage without executing the source code.
The basic idea is to compute a program slice for each test case, and then "count" what the slice enumerates. A (forward) slice is effectively that part of a program that you can reach from a specific starting point in the code, in this case, the test code.
While the technical paper above is hard to get if you're not an ACM member [or you didn't attend the conference where it was presented :], there's a slide presentation here.
Of course, running this static estimator only tells you (roughly) what code will be exercised. It doesn't substitute for actually running the tests, and verifying that they pass!
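To make the idea concrete, here is a toy sketch of the kind of reachability counting involved; it is not the paper's algorithm, and the call graph is written by hand, whereas a real tool would extract it (and much finer-grained slices) from the source:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Toy static "coverage" estimate: breadth-first search over a call graph,
    // with no code executed. The graph is hand-written here.
    public class ReachabilityDemo {
        public static void main(String[] args) {
            Map<String, List<String>> calls = new HashMap<>();
            calls.put("testFoo", List.of("foo"));
            calls.put("foo", List.of("helperA", "helperB"));
            calls.put("helperA", List.of());
            calls.put("helperB", List.of());
            calls.put("helperC", List.of());   // never reached from testFoo

            Set<String> reached = new LinkedHashSet<>();
            Deque<String> todo = new ArrayDeque<>(List.of("testFoo"));
            while (!todo.isEmpty()) {
                String m = todo.pop();
                if (reached.add(m)) {
                    todo.addAll(calls.getOrDefault(m, List.of()));
                }
            }
            System.out.println("Estimated covered: " + reached);
        }
    }

The search reaches 4 of the 5 methods, so the static estimate would report roughly 80% "coverage" for testFoo, without executing any of the code under test.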
In general, the answer is no. This is equivalent to the halting problem, which is not computable.
There are (research) tools based on abstract interpretation or model checking that can show coverage properties without execution, for subsets of a language. See, e.g.,
"Analyzing Functional Coverage in Bounded Model Checking", Grosse, D., Kuhne, U., Drechsler, R., 2008.
In general, yes, there are approaches, but they're specialized and may require some formal-methods experience. This kind of thing is still cutting-edge research.
I would say no, with the exception of "dead code", which a compiler can determine.
My definition of code coverage is a result which indicates how many times each line of code is run in your program, which, of course, means running the program. The determining factor here is usually the data values passing through the program, which determine the execution paths taken at conditionals. A static analysis, like a compiler's, could deduce lines of code that cannot run under any conditions.
An example here is if your program uses a third-party library, but there is a bug in the library. If your program never uses those parts of the library, or the data you send to the library causes it to avoid the bug, then you won't be affected.
You could write a program that, by reflection, assumes that all conditionals will be taken, and follows all function calls, through all derived classes, but I'm not sure what this will tell you. It certainly can't tell you whether or not there are any bugs in the lines of code covered.
Coverity Static Analysis is a tool that can identify many security flaws in a program. It can also identify dead code and can be used to help satisfy testing regulations such as DO-178B, which requires that developers demonstrate that all code can be executed.
If you are using Visual Studio, you can first run 'Analyze Code Coverage', then export the code coverage results using the export button in the Code Coverage Results window.
Later you can import the coverage result file back into Visual Studio.
