I have a question: do I need to delete a background thread once its execution is done, or is the thread automatically cleaned up and deleted?
In Java and Groovy, object instances are automatically garbage-collected when there's no more reference to them.
The Background Thread Plugin doesn't hold references to Runnables or closures you pass to it (let's call them "threads") once those have been executed.
However, you shouldn't hold large arrays of references to these "threads" in instance variables at the class level. (I guess you wouldn't, anyway.) Remember that Services in Grails are singleton-scoped and have the same life span as the Grails application.
In contrast, object instances at the method level are garbage-collected after the method call has completed.
After all, there's simply no need to delete "threads"; just don't store them in instance variables.
I have recently started getting acquainted with explicit multithreading in Swift. I am trying to understand the method below, which dispatches a new thread to execute a selector. While I am able to use it successfully, what I don't understand is the significance of target in the method's signature. Is that argument used to hold a monitor lock for thread safety, like in Java? I tried referring to the documentation without much help. I would really appreciate it if someone could help me understand what's happening under the hood here.
+ (void)detachNewThreadSelector:(SEL)selector
                       toTarget:(id)target
                     withObject:(id)argument;
The documentation for aTarget says:
The object that will receive the message aSelector on the new thread.
This means that the selector will be called on the object that you pass as the target. It's no different than making any other method call. You call a method on a specific instance of a class. The target is the specific instance. The selector is the method that is called on that instance.
Think of detachNewThreadSelector:toTarget:withObject: as calling the given method of the given object, with the given argument (or ignore the argument if the method has zero parameters), but call the method on a newly created thread.
For example:
[NSThread detachNewThreadSelector:@selector(expensiveComputationWithObjects:)
                         toTarget:someCalculatorObject
                       withObject:someVeryLargeArray];
The method thus provides a very convenient way of dispatching method calls on background threads (though it doesn't allow reusing an existing thread).
Another minor disadvantage is that the methods in question must take at most one parameter, though this limitation can be circumvented by having the target method receive a structure (a dictionary or another class) that holds the actual arguments.
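For illustration, here is a rough sketch of that workaround; the Calculator class, method name, and dictionary keys are invented for the example, not part of the original question. The extra arguments are packed into a dictionary and unpacked inside the method that runs on the new thread.

@interface Calculator : NSObject
- (void)computeWithParameters:(NSDictionary *)parameters;
@end

@implementation Calculator
- (void)computeWithParameters:(NSDictionary *)parameters {
    @autoreleasepool { // a detached thread should set up its own autorelease pool
        NSArray *values = parameters[@"values"];
        double precision = [parameters[@"precision"] doubleValue];
        // ... run the expensive computation using both arguments ...
        NSLog(@"Computing %lu values to precision %f", (unsigned long)values.count, precision);
    }
}
@end

// Call site: everything the method needs travels in the single withObject: argument.
Calculator *calculator = [[Calculator alloc] init];
[NSThread detachNewThreadSelector:@selector(computeWithParameters:)
                         toTarget:calculator
                       withObject:@{ @"values" : someVeryLargeArray,
                                     @"precision" : @0.001 }];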
For an object shared between threads (via persisting and querying), will changes to an ignored property made in one thread be visible in another thread?
To share objects between threads or re-use them between app launches you must persist them to a Realm ... all changes you make to it will be persisted (and must be made within a write transaction). Any changes are made available to other threads that use the same Realm when the write transaction is committed.
http://realm.io/docs/cocoa/0.91.1/#writes
It looks like this doesn't apply to ignored properties. Each thread's instance of the object has its own copy of the ignored property, and changes in one thread don't affect any other threads. Is that right?
Correct. When you access an RLMObject from another thread by re-querying for it, it will be a new instance of the object, so the ignored properties will not be carried along with that one.
That being said, as long as you don't try to access any of the Realm-backed properties (otherwise an RLMException will be thrown), you CAN pass an RLMObject instance from one thread to another and still continue to access its ignored properties on the new thread.
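As a rough sketch using Realm's Objective-C API (the Session model and property names here are hypothetical), an ignored property is simply excluded from persistence via +ignoredProperties, so its value only exists on whichever in-memory instance set it:

#import <Realm/Realm.h>

@interface Session : RLMObject
@property NSString *name;          // persisted: visible on other threads after a committed write
@property NSString *inMemoryToken; // ignored: stays on this particular instance only
@end

@implementation Session
+ (NSArray *)ignoredProperties {
    return @[@"inMemoryToken"];
}
@end

// Re-querying on another thread yields a fresh instance, so its inMemoryToken starts out nil there.
// Only by handing the same instance across threads (and touching ignored properties only) does the value survive.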
I have a method that runs in a background thread, using a separate NSManagedObjectContext that is specially created when the background thread starts, as per Apple's recommendations.
In this method, it makes a call to a shared instance of a class; this shared instance is used for managing property values.
The shared instance that manages properties uses an NSManagedObjectContext on the main thread. Now, even though the background thread method should not use the main thread's NSManagedObjectContext, it shouldn't really matter whether the shared property manager class uses such a context, as it only returns scalar values back to the background thread (at least that's my understanding).
So, why does the shared property class hang when retrieving values via the main thread's context when called from the background thread? It doesn't need to pass an NSManagedObject or even update one, so I cannot see what difference it would make.
I can appreciate that my approach is probably wrong, but I want to understand at a base level why this is. At the moment I don't understand this whole system well enough to think beyond Apple's recommended methods of implementation, and that's just a black-magic approach, which I don't like.
Any help is greatly appreciated.
Does using:
[theContext performBlock:^{
// do stuff on the context's queue, launch asynchronously
}];
-- or --
[theContext performBlockAndWait:^{
// do stuff on the context's queue, run synchronously
}];
-- just work for you? If so, you're done.
If not, take a long, hard look at how your contexts are set up, passed around, and used. If they all share a root context, you should be able to "move" data between them easily, as long as you always look up any objectIDs on your current context.
Contexts are bound to threads/queues, basically, so always use a given context as a reference for where to do work. performBlock: is one way to do this.
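One concrete pattern along those lines (the backgroundContext, mainThreadObject, and "propertyValue" names below are placeholders, not from the question): pass the NSManagedObjectID across, then re-fetch the object on the context that owns the queue you are working on.

// objectIDs are safe to hand between threads; managed objects themselves are not.
NSManagedObjectID *objectID = mainThreadObject.objectID;

[backgroundContext performBlock:^{
    NSError *error = nil;
    NSManagedObject *localObject = [backgroundContext existingObjectWithID:objectID error:&error];
    if (localObject != nil) {
        // Read values through the background context's own copy of the object.
        NSNumber *value = [localObject valueForKey:@"propertyValue"];
        NSLog(@"propertyValue = %@", value);
    } else {
        NSLog(@"Could not load object: %@", error);
    }
}];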
Say I want to create a singleton which has some data inside. The data is dynamically allocated only once, as is expected of a singleton.
But I would like to know when and how this data can be released. Should I establish a special method which will destroy the singleton? To be more specific: when will the 'dealloc' method for this singleton be executed? Who is responsible for that?
You can declare a method/function you call explicitly.
The simplest way is to have a static C++ object hold it, then release it in its destructor. If you have multiple singletons, this approach does not extend very well, because the destruction order is implementation-defined.
Another alternative (and better design) would be to avoid the singleton approach, and just use it as a regular instance in another class which lives for the duration of your app (an app delegate is a commonly known example).
As to 'when', it depends on its dependencies and how it's used. It's also good to try to minimize external influence in destruction.
In general, a singleton is no different from a normal object. It is freed when there is no (strong) reference to it anymore. Usually, you ensure that there is only one object by means of a static variable. This variable is created at compile time and therefore cannot be freed, but all the 'real' object stuff can be.
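To make that concrete, here is a minimal sketch (the DataStore class is hypothetical, ARC assumed): the static variable holds the only strong reference, and an explicit teardown method clears it so that -dealloc can run once nothing else retains the instance.

@interface DataStore : NSObject
+ (instancetype)sharedStore;
+ (void)destroySharedStore; // the explicit method you call when the data should be released
@end

@implementation DataStore

static DataStore *sharedStore = nil; // the static variable that keeps the singleton alive

+ (instancetype)sharedStore {
    @synchronized (self) {
        if (sharedStore == nil) {
            sharedStore = [[self alloc] init]; // dynamically allocated only once
        }
        return sharedStore;
    }
}

+ (void)destroySharedStore {
    @synchronized (self) {
        sharedStore = nil; // drop the last strong reference; -dealloc runs here under ARC
    }
}

- (void)dealloc {
    NSLog(@"DataStore deallocated");
}

@end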
In my program I can load a Catalog: ICatalog.
A Catalog here contains a lot of reference-counted structures (ICollections of IItems, IElements, IRules, etc.).
When I want to change to another catalog,
I load a new Catalog,
but the automatic release of the previous ICatalog instance takes time, freezing my application for 2 seconds or more.
My question is:
I want to defer the release of the old (and no longer used) ICatalog instance to another thread.
I've not tested it yet, but I intend to create a new thread with:
ErazerThread.OldCatalog := Catalog;  // old catalog refcount jumps to 2
Catalog := LoadNewCatalog(...);      // old catalog refcount drops back to 1
ErazerThread.Execute;                // just sets OldCatalog to nil
This way, I expect the release to occur in the thread, and my application to no longer freeze.
Is it safe (and good practice)?
Do you have examples of existing code already performing releases with a similar method?
I would let such a thread block on some threadsafe queue (*), and push the interfaces to release into that queue as IUnknowns.
Note, however, that if the releasing touches a lock that your memory manager uses (like a global heap-manager lock), then this is futile, since your main thread will block on the first heap-manager access.
With a heap manager that has per-thread pools, allocating many items in one thread and releasing them in a different thread might also frustrate the coalescing and reuse algorithms for (small) blocks.
I still think the way you describe is generally sound when implemented properly, but the above is a theoretical perspective, to show that there might be a link from the second thread back to the main thread via the heap manager.
(*) The simplest way is to add it to a TThreadList and use a TEvent to signal that an element was added.
That looks OK, but don't call the thread's Execute method directly; that will run the thread object's code in the current thread instead of the one that the thread object creates. Call Start or Resume instead.