Retrieve objects executed via ExecutorService.submit

I have an ExecutorService that runs several solvers in parallel. Each solver modifies several internal variables whose values must be returned.
Because of compatibility issues it is not possible to encapsulate all the variables in a class and return them via a Callable, so making the solvers either Callable or Runnable makes no difference in my case: either way I cannot retrieve all the variables I need.
I considered the following two options:
Each solver accesses a synchronized class and writes its values there.
Access the objects (solvers) that were submitted to the executor and read their variables via getter methods.
I prefer the second option, but I can't find a way to get at the objects once they have been submitted.
Any suggestion (for any of the options)?
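For concreteness, a minimal sketch of what the second option could look like; Solver and getX() are hypothetical placeholders, the only assumption being that each solver is a Runnable whose getters are read only after its task has completed (exception handling for get() omitted).

    // You create the solver objects yourself before submitting them,
    // so simply keep your own references to them.
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Solver> solvers = new ArrayList<>();
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < problemCount; i++) {
        Solver solver = new Solver(/* problem data */);
        solvers.add(solver);
        futures.add(pool.submit(solver));
    }
    for (Future<?> f : futures) {
        f.get();   // wait for completion; get() also guarantees visibility of the solver's writes
    }
    pool.shutdown();
    for (Solver solver : solvers) {
        double x = solver.getX();   // hypothetical getter for one of the internal variables
        // ... read the remaining variables the same way
    }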

You didn't elaborate on the "compatibility issues", so I can only suggest a general solution for what you described.
Since you use ExecutorService, I believe that you use ThreadPoolExecutor (or a subclass of it) as the implementation of that interface. If that's the case, I suggest overriding the ThreadPoolExecutor.afterExecute(Runnable r, Throwable t) method. It is called after any submitted Runnable has completed its execution, and its default implementation is empty.
Your implementation should follow these steps:
Check if t != null. If so, process Throwable t which caused a solver to abort.
Check the type of r and if you recognize it, retrieve its results. Of course, it will be simpler if all your solvers have a common API.
Store results somewhere.
But look out - ThreadPoolExecutor.afterExecute() is called from the thread that ran the Runnable r, so the 3rd step will most likely need to be synchronized.
Putting it all together, your code can look like this:
@Override
protected void afterExecute(Runnable r, Throwable t) {
    super.afterExecute(r, t);
    if (t != null) {
        // handle t
    } else {
        Solver solver = (Solver) r;
        Results results = solver.getResults();
        synchronized (allSolutions) {
            allSolutions.addResults(results);
        }
    }
}
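A minimal wiring sketch follows, assuming the override above sits in a hypothetical ThreadPoolExecutor subclass called SolverPool, that Solver implements Runnable, and that allSolutions is the shared collector used inside the override. Note that afterExecute receives the task object that was run, so the cast to Solver only holds if the solvers are handed to execute(); submit() wraps each task in a FutureTask first.

    // Hypothetical wiring; SolverPool extends ThreadPoolExecutor and contains
    // the afterExecute override shown above.
    SolverPool pool = new SolverPool(4);        // e.g. four worker threads
    for (Solver solver : solvers) {
        pool.execute(solver);                   // execute(), not submit(), so r is the Solver itself
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);   // wait for every solver to finish
    // allSolutions now holds the results collected by afterExecute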

Related

Create and Run Multiple Solver Instances in Parallel

I'd like to run multiple solvers in multiple threads and eventually processes. I'm currently running a for-loop and creating threads like the following:
for (...) {
    pthread_t pid;
    Args args;
    args.solver = solver???
    pthread_create(&pid, NULL, &func, (void*)&args);
}
When defining solver, I've tried several options, though none have worked.
First, I tried calling const auto solver = drake::solvers::MakeSolver(solver_id);, then passing solver.get() into each thread's args. This successfully compiles and runs, but I get some obscure failure terminate called recursively in drake::solvers::SnoptSolver::DoSolve. I saw that MakeSolver seems to return a unique ptr around a single solver instance defined in kKnownSolvers, so possibly the threads are calling DoSolve on the same solver instance, causing this issue.
I then tried creating multiple instances of the solver. Calling StaticSolverInterface::Make<SnoptSolver>() didn't work since that is defined in an unnamed namespace and thus is only accessible in that file. Calling const auto solver = drake::solvers::MakeSolver(solver_id); and copying the SolverInterface pointed to by solver isn't possible because SolverInterface is not movable or copyable.
Is what I'm doing possible? If so, how can I achieve this?
Calling drake::solvers::MakeSolver(solver_id) multiple times and giving each result to a different thread should work fine. It returns a distinct object each time; nothing is shared.
Similarly, repeated calls to make_unique<SnoptSolver> or the like should also work, if you can hard-code which solver you'd like instead of going by the id.
The terminate error message is probably an unhandled exception. Generally, when you make a thread you'll want to put a try { } catch (...) { } at the immediate entry point; you don't want exceptions to escape the thread.
Also I strongly suggest std::thread if you're in C++; the pthread API is old and stinky.

How to deal with checking for valid state in every method call

I have encountered some code that looks like this.
member this.Send (data:array<byte>) =
    if tcpClient.Connected then
        // Send something.
member this.Open () =
    if not tcpClient.Connected then
        // Connect.
It's a potential bug hive: we have to constantly check whether the TcpClient is connected before performing an operation on it.
A similar problem would be to check whether or not something is null before performing an operation on that something.
What is the general approach to dealing with this?
I was thinking along the lines of a monad that abstracts this boring checking away.
EDIT:
Potentially I will end up writing many methods, each of which has to check whether we are connected.
member this.SendName name =
    if tcpClient.Connected then
        // Send name
member this.ThrottleConnection percent =
    if tcpClient.Connected then
        // Throttle
member this.SendAsText text =
    if tcpClient.Connected then
        // Send as text.
So, it depends on whether you want to do the check inside the wrapper class or outside of it. Doing the check inside the class, I don't see how a computation expression is really relevant; you're not binding operations.
A workflow expression would only be useful if you're doing the check outside the wrapper class (i.e. from the calling function). If you put a connected builder together, the resulting code would look like
connected {
    do! wrapper.Send(..)
    do! wrapper.Throttle(..)
    do! wrapper.SendAsText(..)
}
However, that is really no simpler than
if wrapper.connected then
    wrapper.Send(..)
    wrapper.Throttle(..)
    wrapper.SendAsText(..)
So, kind of, what's the point, right?
It'd make more sense if you had multiple tcpClient wrapper objects and needed them all to be connected within your workflow. That's more what the "monadic" approach is for.
connected {
    do! wrapper1.Send(..)
    do! wrapper2.Throttle(..)
    do! wrapper3.SendAsText(..)
}
However, specific to your example of doing the checks inside the wrapper class, like I said earlier, monads would not be applicable. One neat approach to that specific problem would be to mimic design-by-contract preconditions, as described at http://laurent.le-brun.eu/site/index.php/2008/03/26/32-design-by-contract-with-fsharp. I don't know if it's much more intuitive than the if statements, but if you're looking for an F#-y way of doing things, that's the best I can come up with.
Ultimately your existing code is about as compact as it gets. Presumably not all of your functions would start with the same if statement, so there's nothing unnecessarily repetitive there.

Cannot traverse the nodes of an AST while assigning each node an ID

This is more a simple personal attempt to understand what goes on inside Rascal. There must be a better (if not already supported) solution.
Here's the code:
fileLoad = |home:///PHPAnalysis/systems/ApilTestScripts/simple1.php|;
fileAST = loadPHPFile(fileLoad, true, false);
//assign a simple id to each node
public map[value,int] assignID12(node N)
{
    myID = ();
    visit(N)
    {
        case node M:
        {
            name = getName(M);
            myID[name] = 999;
        }
    }
    return myID;
}
ids = assignID12(fileAST);
gives me
|stdin:///|(92,4,<1,92>,<1,96>): Expected str, but got value
loadPHPFile returns a node of type: list[Stmt], where each Stmt is one of the many types of statements that could occur in a program (PHP, in my case). Without going into why I'd do this, why doesn't the above code work? Especially frustrating because a very simple example is worked out in the online documentation. See: http://tutor.rascal-mpl.org/Recipes/Basic/Basic.html#/Recipes/Common/CountConstructors/CountConstructors.html
I started a new console, and it seems to work. Of course, I changed the return type from map[value,int] to map[str,int] as it was originally in the example.
The problem I was having was that I may have erroneously defined the function previously. While I quickly fixed the apparent problem, the console kept giving me errors. I realized that in Rascal, once you've started a console and imported certain definitions, it seems to be impossible to overwrite those definitions: the interpreter keeps referring to the very first definition that you provided. This could just be the interpreter performing a type check and preventing unintentional and/or incompatible assignments further down the road. That makes sense for variables (in the typical program sense), but it doesn't seem like the best idea to enforce it on functions (or methods). I feel it becomes cumbersome, because a user typically has to go through several iterations before he or she is satisfied with a function definition. Just my opinion, though...
Most likely you already had the name ids in scope as having type map[str,int], which would be the direct source of the error. You can look in script https://github.com/cwi-swat/php-analysis/blob/master/src/lang/php/analysis/cfg/LabelState.rsc at the function labelScript to see how this is done in PHP AiR (so you don't need to write this code yourself). What this will give you is a script where all the expressions and statements have an assigned ID, as well as the label state, which just keeps track of some info used in this labeling operation (mainly the counter to generate a unique ID).
As for the earlier response, the best thing to do is to give your definitions in modules which you can import. If you do that, any changes to types, etc will be picked up (automatically if the module is already imported, since Rascal will reimport the module for you if it has changed, or when you next import the module). However, if you define something directly in the console, this won't happen. Think of the console as one large module that you keep adding to. Since we can have overloads of functions, if you define the function again you are really defining a new alternative to the function, but this may not work like you expect.

Semantics of OMG IDL attributes

I'm working on the verification of an interface formalised in the OMG's IDL, and am having problems finding a definitive answer on the semantics of getting an attribute value. In an interface, I have an entry...
interface MyInterface {
    readonly attribute SomeType someName;
};
I need to know if it is acceptable for someObj.someName != someObj.someName to be true (where someObj is an instance of an object implementing MyInterface).
All I can find in OMG documentation in regards to attributes is...
(5.14) An attribute definition is logically equivalent to declaring a
pair of accessor functions; one to retrieve the value of the attribute
and one to set the value of the attribute.
...
The optional readonly keyword indicates that there is only a single
accessor function—the retrieve value function.
Ergo, I'm forced to conclude that IDL attributes need not be backed by a data member, and are free to return basically any value the interface deems appropriate. Can anyone with more experience in IDL confirm that this is indeed the case?
As we know, an IDL interface is always represented by a remote object. An attribute is no more than syntactic sugar for getAttributeName() and setAttributeName(). Personally, I don't like to use attributes because they are harder to understand than a simple get/set method.
CORBA also has valuetypes, an object-by-value construct (better explained here). They are very useful because, unlike structs, they allow us to inherit from other valuetypes, abstract interfaces, or abstract valuetypes. Usually, when I'm modeling objects with a lot of get/set methods, I prefer to use valuetypes instead of interfaces.
Going back to your question, the best way to understand 'attribute' is to look at C#: IIOP.NET maps 'attribute' to properties. A property looks like a public member, but it is really a pair of get/set methods.
Answering your question: I can't know whether someObj.someName != someObj.someName will be true or false without seeing the someObj implementation. I will add two examples to give an idea of what we might see.
Example 1) This implementation will always make the expression above false:
    private static int i;

    public string getSomeName() {
        return "myName" + i;
    }
Example 2) The implementation below can make the expression true or false, depending on concurrency (a race condition between clients).
    public string getSomeName() {
        return this.someName;
    }

    public void setSomeName(string name) {
        this.someName = name;
    }
The first client can evaluate someObj.someName() != someObj.someName(), and a second client could call setSomeName() between the first client's two calls.
It is perfectly acceptable for someObj.someName != someObj.someName to be true, odd as it may seem.
The reason (as others alluded to) is because attributes map to real RPC functions. In the case of readonly attributes they map to just a getter, and for non-readonly attributes both a getter and a setter are implicitly created for you when the IDL gets compiled. But the important thing to know is that an IDL attribute has a dynamic, server-dictated, RPC-driven value.
IDL specifies a contract for distributed interactions which can be made at runtime between independent, decoupled entities. Almost every interaction with an IDL-based type will lead to an RPC call and any return value will be dependent on what the server decides to return.
If the attribute is, say, currentTime then you'll perhaps get the server's current clock time with each retrieval of the value. In this case, someObj.currentTime != someObj.currentTime will very likely always be true (assuming the time granularity used is smaller than the combined roundtrip time for two RPC calls).
If the attribute is instead currentBankBalance then you can still have someObj.currentBankBalance != someObj.currentBankBalance be true, because there may be other clients running elsewhere who are constantly modifying the attribute via the setter function, so you're dealing with a race condition too.
All that being said, if you take a very formal look at the IDL spec, it contains no language that actually requires that the setting/accessing of an attribute should result in an RPC call to the server. It could be served by the client-side ORB. In fact, that's something which some ORB vendors took advantage of back in the CORBA heyday. I used to work on the Orbix ORB, and we had a feature called Smart Proxies - something which would allow an application developer to overload the ORB-provided default client proxies (which would always forward all attribute calls to the server hosting the target object) with custom functionality (say, to cache the attribute values and return a local copy without incurring network or server overhead).
In summary, you need to be very clear and precise about what you are trying to verify formally. Given the dynamic and non-deterministic nature of the values they can return (and the fact that client ORBs might behave differently from each other and still remain compliant to the CORBA spec) you can only reliably expect IDL attributes to map to getters and setters that can be used to retrieve or set a value. There is simply no predictability surrounding the actual values returned.
Generally, an attribute does not need to be backed by any data member on the server, although some language mappings might impose such a convention.
So in the general case it can happen that someObj.someName != someObj.someName; for instance, the attribute might be the last access time.
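To make that concrete, here is a hypothetical Java-style implementation whose accessor computes the value on every call instead of reading a stored field; the method name follows the usual IDL-to-Java convention of mapping an attribute to an accessor of the same name, but treat it as a sketch rather than generated code.

    // Sketch of a servant for:  readonly attribute long long currentTime;
    class ClockImpl /* would extend the skeleton generated for the interface */ {
        // Single accessor, no backing data member at all.
        public long currentTime() {
            return System.currentTimeMillis();   // recomputed on every invocation
        }
    }
    // Two successive remote reads go through this method twice, so
    // someObj.currentTime != someObj.currentTime can legitimately be true.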

XNA/C#: Entity Factories and typeof(T) performance

In our game (targeted at mobile) we have a few different entity types and I'm writing a factory/repository to handle instantiation of new entities. Each concrete entity type has its own factory implementation and these factories are managed by an EntityRepository.
I'd like to implement the repository as such:
class Repository
{
    private Dictionary<System.Type, IEntityFactory<IEntity>> factoryDict;

    public T CreateEntity<T>(params) where T : IEntity
    {
        return factoryDict[typeof(T)].CreateEntity() as T;
    }
}
usage example
var enemy = repo.CreateEntity<Enemy>();
but I am concerned about performance, specifically related to the typeof(T) operation in the above. It is my understanding that the compiler will not be able to determine T's type, and that it will have to be determined at runtime via reflection; is this correct? One alternative is:
class Repository
{
    private Dictionary<System.Type, IEntityFactory> factoryDict;

    public IEntity CreateEntity(System.Type type, params)
    {
        return factoryDict[type].CreateEntity();
    }
}
which will be used as
var enemy = (Enemy)repo.CreateEntity(typeof(Enemy), params);
in this case, whenever typeof() is called the type is on hand and can be determined by the compiler (right?), so performance should be better. Will there be a noticeable difference? Any other considerations? I know I could also just have a method such as CreateEnemy in the repository (we only have a few entity types), which would be faster, but I would prefer to keep the repository as entity-unaware as possible.
EDIT:
I know that this may most likely not be a bottleneck, my concern is just that it is such a waste to use up time on reflecting when there is a slightly less sugared alternative available. And I think it's an interesting question :)
I did some benchmarking which proved quite interesting (and which seem to confirm my initial suspicions).
Using the performance measurement tool I found at
http://blogs.msdn.com/b/vancem/archive/2006/09/21/765648.aspx
(which runs a test method several times and displays metrics such as average time etc) I conducted a basic test, testing:
private static T GenFunc<T>() where T : class
{
    return dict[typeof(T)] as T;
}
against
private static Object ParamFunc(System.Type type)
{
    var d = dict[type];
    return d;
}
called as
str = GenFunc<string>();
vs
str = (String)ParamFunc(typeof(String));
respectively. ParamFunc shows a remarkable improvement in performance (it executes on average in 60-70% of the time GenFunc takes), but the test is quite rudimentary and I might be missing a few things - specifically, how the casting is performed in the generic function.
An interesting aside is that there is little (negligible) performance gained by 'caching' the type in a variable and passing it to ParamFunc vs using typeof() every time.
Generics in C# don't use or need reflection.
Internally types are passed around as RuntimeTypeHandle values. And the typeof operator maps to Type.GetTypeFromHandle (MSDN). Without looking at Rotor or Mono to check, I would expect GetTypeFromHandle to be O(1) and very fast (eg: an array lookup).
So in the generic (<T>) case you're essentially passing a RuntimeTypeHandle into your method and calling GetTypeFromHandle in your method. In your non-generic case you're calling GetTypeFromHandle first and then passing the resultant Type into your method. Performance should be near identical - and massively outweighed by other factors, like any places you're allocating memory (eg: if you're using the params keyword).
But it's a factory method anyway. Surely it won't be called more than a couple of times per second? Is it even worth optimising?
You always hear how slow reflection is, but in C# there is actually fast reflection and slow reflection. typeof is fast reflection - it has basically the overhead of a method call, which is nearly infinitesimal.
I would bet a steak and lobster dinner that this isn't going to be a performance bottleneck in your application, so it's not even worth your (or our) time in trying to optimize it. It's been said a million times before, but it's worth saying again: "Premature optimization is the root of all evil."
So, finish writing the application, then profile to determine where your bottlenecks are. If this turns out to be one of them, then and only then spend time optimizing it. And let me know where you'd like to have dinner.
Also, my comment above is worth repeating, so you don't spend any more time reinventing the wheel: Any decent IoC container (such as AutoFac) can [create factory methods] automatically. If you use one of those, there is no need to write your own repository, or to write your own CreateEntity() methods, or even to call the CreateEntity() method yourself - the library does all of this for you.
