Does pthread_detach free up the stack allocated to the child thread, after the child thread has exited - pthreads

I have used pthread_detach in order to free up the stack allocated to the child thread, but this is not working; I guess it does not free up the memory.
I don't want to use pthread_join. I know join assures me of freeing up the stack for the child, but I don't want the parent to hang until the child thread terminates; I want my parent to do some other work in the meantime. So I have used detach, as it will not block the parent thread.
Please help me. I have been stuck.

this is not working
Yes it is. You are likely misinterpreting your observations.
I want my parent to do some other work in the meantime
That's usually the reason to create threads in the first place, and you can do that:
pthread_create(...);
do_some_work(); // both current and new threads work in parallel
pthread_join(...); // wait for the new thread to finish (our own work is done by now)
report_results();
I don't want to use pthread_join. I know join assures me of freeing up the stack for the child
The above statement is false: it assures no such thing. A common implementation will cache the now-available child stack for reuse (in case you create another thread shortly).

Yes, according to http://cursuri.cs.pub.ro/~apc/2003/resources/pthreads/uguide/users-16.htm it frees the memory either when the thread ends or, if the thread has already ended, immediately.
As you don't provide any clue as to how you determine that the memory is not freed up, I can only assume that the method you use to determine it is not sufficient...
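For reference, here is a minimal C sketch of the detached-thread pattern; it is a hedged illustration, and child_work is a placeholder name, not anything from the question:

#include <pthread.h>

/* Placeholder for whatever the child thread actually does. */
static void *child_work(void *arg)
{
    /* ... the child's work ... */
    return NULL; /* on exit, a detached thread's resources are reclaimed automatically */
}

int main(void)
{
    pthread_t child;
    if (pthread_create(&child, NULL, child_work, NULL) != 0)
        return 1;
    pthread_detach(child);  /* no pthread_join needed (or allowed) afterwards */
    /* the parent continues with its own work here, unblocked */
    pthread_exit(NULL);     /* lets any still-running detached threads finish */
}

Whether the reclaimed stack is returned to the OS or cached for reuse is up to the implementation, as noted above; either way, the program is not leaking.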

Pthread join priority

Let me show you the function first:
for (i = 0; i < 3; i = i + 2) {
    pthread_create(&thread1, NULL, &randtrack, (void *)&rnum_array[i]);
    pthread_create(&thread2, NULL, &randtrack, (void *)&rnum_array[i+1]);
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
}
/* print final result here */
My understanding is that after the two threads are created, the parent thread blocks at join(thread1). What if thread2 actually comes back earlier than thread1? How can I make the longer-running thread always stay behind?
Thanks
If thread2 finishes and thread1 hasn't, you'll continue waiting until thread1 finishes. Then you'll wait until thread2 finishes, which will complete more or less instantaneously. The order in which you wait for the threads won't matter (unless the threads try to interact with each other directly, such as by calling pthread_kill or pthread_join on each other).
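To see this concretely, here is a small hedged C demonstration (the sleep durations are arbitrary stand-ins for work of different lengths):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *slow_worker(void *arg) { sleep(2); return NULL; }
static void *fast_worker(void *arg) { sleep(1); return NULL; }

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, slow_worker, NULL);
    pthread_create(&t2, NULL, fast_worker, NULL);
    pthread_join(t1, NULL); /* blocks about 2 seconds; t2 finishes in the meantime */
    pthread_join(t2, NULL); /* t2 is already done, so this returns immediately */
    puts("total wait is ~2s either way; join order does not matter");
    return 0;
}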
Update: Your design is completely wrong for what you're actually trying to do. You want to do this:
Create a structure to track the work that needs to be done. It should be protected by a mutex and track how many threads are currently working and which work unit needs to be assigned next.
When you create the threads, have them run a function that acquires the mutex, grabs the next unit of work, increments the number of threads running, and then does the work.
When a thread completes a work unit, it should acquire the mutex, decrement the number of threads running, and see if there's more work to do. When there's no work to do, the thread should terminate.
You can now wait for all threads to terminate, which will only happen when all the work is done. This eliminates the loop over the work units.
And please learn a very important general rule: threads are just the things that get work done. What you want your code to focus on is doing the work, not how it will be done. Try to wait for work to be done, not for threads to be done.
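As a hedged sketch of this design in C with pthreads: randtrack and rnum_array are carried over from the question, NUM_UNITS and NUM_THREADS are assumed, and the running-thread counter is omitted because pthread_join already tells us when all workers are done:

#include <pthread.h>

#define NUM_UNITS   4
#define NUM_THREADS 2

extern void *randtrack(void *arg);      /* the worker from the question */
static int rnum_array[NUM_UNITS];       /* the work units from the question */

static int next_unit = 0;               /* next work unit to hand out */
static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (;;) {
        int unit;
        pthread_mutex_lock(&work_lock);
        unit = (next_unit < NUM_UNITS) ? next_unit++ : -1; /* grab the next unit */
        pthread_mutex_unlock(&work_lock);
        if (unit < 0)
            break;                       /* no work left: the thread terminates */
        randtrack(&rnum_array[unit]);    /* do the work outside the lock */
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    int i;
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);  /* returns only once all the work is done */
    /* print final result here */
    return 0;
}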

Is there a way that the synchronized keyword doesn't block the main thread

Imagine you want to do many things in the background of an iOS application, and you code it properly so that you create threads (for example using GCD) to execute this background activity.
Now what if at some point you need to update a variable, but this update can occur on any of the threads you created?
You obviously want to protect that variable, and you can use the @synchronized keyword to create the locks for you, but here is the catch (extract from the Apple documentation):
The @synchronized() directive locks a section of code for use by a
single thread. Other threads are blocked until the thread exits the
protected code—that is, when execution continues past the last
statement in the @synchronized() block.
So that means if you synchronize an object and two threads are writing to it at the same time, even the main thread will block until both threads are done writing their data.
An example of code that will showcase all this:
// Create the background queue
dispatch_queue_t queue = dispatch_queue_create("synchronized_example", NULL);
// Start working in a new thread
dispatch_async(queue, ^
{
    // Synchronize on that shared resource
    @synchronized(sharedResource_)
    {
        // Write things on that resource
        // If more than one thread accesses this piece of code:
        // all threads (even the main thread) will block until the task is completed.
        [self writeComplexDataOnLocalFile];
    }
});
// won't actually go away until the queue is empty
dispatch_release(queue);
So the question is fairly simple: how to overcome this? How can we securely add locks on all the threads EXCEPT the main thread, which, we know, doesn't need to be blocked in this case?
EDIT FOR CLARIFICATION
As some of you commented, it does seem logical (and this was clearly what I thought at first when using synchronized) that only the two threads that are trying to acquire the lock should block until they are both done.
However, tested in a real situation, this doesn't seem to be the case, and the main thread also seems to suffer from the lock.
I use this mechanism to log things in separate threads so that the UI is not blocked. But when I do intense logging, the UI (main thread) is clearly highly impacted (scrolling is not as smooth).
So, two options here: either the background tasks are so heavy that even the main thread gets impacted (which I doubt), or @synchronized also blocks the main thread while performing the lock operations (which I'm starting to reconsider).
I'll dig a little further using the Time Profiler.
I believe you are misunderstanding the following sentence that you quote from the Apple documentation:
Other threads are blocked until the thread exits the protected code...
This does not mean that all threads are blocked; it just means all threads that are trying to synchronise on the same object (the sharedResource_ in your example) are blocked.
The following quote is taken from Apple's Thread Programming Guide, which makes it clear that only threads that synchronise on the same object are blocked.
The object passed to the @synchronized directive is a unique identifier used to distinguish the protected block. If you execute the preceding method in two different threads, passing a different object for the anObj parameter on each thread, each would take its lock and continue processing without being blocked by the other. If you pass the same object in both cases, however, one of the threads would acquire the lock first and the other would block until the first thread completed the critical section.
Update: If your background threads are impacting the performance of your interface then you might want to consider putting some sleeps into the background threads. This should allow the main thread some time to update the UI.
I realise you are using GCD but, for example, NSThread has a couple of methods that will suspend the thread, e.g. +sleepForTimeInterval:. In GCD you can probably just call sleep().
Alternatively, you might also want to look at changing the thread priority to a lower priority. Again, NSThread has the setThreadPriority: for this purpose. In GCD, I believe you would just use a low priority queue for the dispatched blocks.
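To illustrate the same-object rule outside Objective-C, here is a rough C analogy with POSIX mutexes (an assumption-laden sketch, not your code): two threads taking the same mutex serialize with each other, while a thread taking a different mutex, like your main thread, is never blocked by them.

#include <pthread.h>

static pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER; /* plays the role of @synchronized(sharedResource_) */
static pthread_mutex_t other_lock  = PTHREAD_MUTEX_INITIALIZER; /* a different "object": no contention with the above */

static void *writer(void *arg)
{
    pthread_mutex_lock(&shared_lock);   /* serializes only with other writers */
    /* ... write to the shared resource ... */
    pthread_mutex_unlock(&shared_lock);
    return NULL;
}

static void *independent(void *arg)
{
    pthread_mutex_lock(&other_lock);    /* never waits for the writers above */
    /* ... unrelated work, as the main thread would do ... */
    pthread_mutex_unlock(&other_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b, c;
    pthread_create(&a, NULL, writer, NULL);
    pthread_create(&b, NULL, writer, NULL);      /* may block behind the first writer */
    pthread_create(&c, NULL, independent, NULL); /* proceeds regardless */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    pthread_join(c, NULL);
    return 0;
}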
I'm not sure if I understood you correctly, but @synchronized doesn't block all threads, only the ones that want to execute the code inside the block. So the solution probably is: don't execute the code on the main thread.
If you simply want to avoid having the main thread acquire the lock, you can do this (and wreak havoc):
dispatch_async(queue, ^
{
    if (![NSThread isMainThread])
    {
        // Background threads synchronize on the shared resource
        @synchronized(sharedResource_)
        {
            // Write things on that resource; other background threads
            // that reach this block will wait here, but the main thread won't.
            [self writeComplexDataOnLocalFile];
        }
    }
    else
        // The main thread writes without taking the lock (hence the havoc)
        [self writeComplexDataOnLocalFile];
});

Slow memory release (refcounted structure) - Is my workaround a good way?

In my program I can load a Catalog: ICatalog.
A Catalog here contains a lot of refcounted structures (ICollections of IItems, IElements, IRules, etc.).
When I want to change to another catalog,
I load a new Catalog,
but the automatic release of the previous ICatalog instance takes time, freezing my application for 2 seconds or more.
My question is:
I want to defer the release of the old (and no longer used) ICatalog instance to another thread.
I've not tested it yet, but I intend to create a new thread with:
ErazerThread.OldCatalog := Catalog; // old catalog refcount jumps to 2
Catalog := LoadNewCatalog(...); // old catalog refcount = 1
ErazerThread.Execute; // just set OldCatalog to nil.
This way, I expect the release to occur in the thread, and my application to no longer freeze.
Is it safe (and good practice)?
Do you have examples of existing code already performing release with a similar method?
I would let such a thread block on some threadsafe queue(*), and push the interfaces to release into that queue as IUnknowns.
Note however that if the releasing touches a lock that your memory manager uses (like a global heap-manager lock), then this is futile, since your main thread will block on the first heap-manager access.
With a heap manager that has per-thread pools, allocating many items in one thread and releasing them in a different thread might frustrate the coalescing and small-block-reuse algorithms.
I still think the way you describe is generally sound when implemented properly. But
this is from a theoretical perspective, to show that there might be a link from the 2nd thread to the main thread via the heap manager.
(*) The simplest way is to add it to a TThreadList and use a TEvent to signal that an element was added.
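As a hedged C analogy of that suggestion (free() stands in for the interface release, and the single-slot hand-off is a simplification of a real queue):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
static void *pending = NULL;    /* single-slot "queue" holding one object to release */

/* The background thread: blocks until an object is handed over, then releases it. */
static void *releaser(void *arg)
{
    for (;;) {
        void *victim;
        pthread_mutex_lock(&q_lock);
        while (pending == NULL)
            pthread_cond_wait(&q_cond, &q_lock);
        victim = pending;
        pending = NULL;
        pthread_mutex_unlock(&q_lock);
        free(victim);           /* the expensive release happens off the main thread */
    }
    return NULL;
}

/* Called from the main thread: hand the old object over and return immediately. */
static void defer_release(void *obj)
{
    pthread_mutex_lock(&q_lock);
    pending = obj;              /* note: a real queue would not overwrite a pending item */
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

The releaser thread is started once at program init with pthread_create and then parks on the condition variable. As the answer warns, if free() contends on a global heap lock, the main thread can still stall on its very next allocation, so measure before committing to this design.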
That looks OK, but don't call the thread's Execute method directly; that will run the thread object's code in the current thread instead of the one that the thread object creates. Call Start or Resume instead.

How to kill a thread in Delphi?

In Delphi, TThread has a Terminate method. It seems a subthread cannot kill another thread by calling Terminate or Free.
For example:
A (main form), B (a thread unit), C (another form).
B is sending data to the main form and C (by calling Synchronize). We tried to terminate B within C while B is executing, by calling B.Terminate, but this method does not work and B keeps working until its Execute method ends.
Please help. Thank you in advance.
You have to check for Terminated in the thread for this to work. For instance:
procedure TMyThread.Execute;
begin
  while not Terminated do begin
    //Here you do a chunk of your work.
    //It's important to have chunks small enough so that "while not Terminated"
    //gets checked often enough.
  end;
  //Here you finalize everything before the thread terminates
end;
With this, you can call
MyThread.Terminate;
And it'll terminate as soon as it finishes processing another chunk of work. This is called "graceful thread termination" because the thread itself is given a chance to finish any work and prepare for termination.
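For comparison, the same graceful-termination idea in C with an atomic stop flag (a sketch; the chunk of work is a placeholder, and the flag plays the role of TThread.Terminated):

#include <pthread.h>
#include <stdatomic.h>

static atomic_bool terminated = false; /* plays the role of the Terminated property */

static void *worker(void *arg)
{
    while (!atomic_load(&terminated)) {
        /* do one small chunk of work here, so the flag is checked often enough */
    }
    /* finalize everything before the thread exits */
    return NULL;
}

static void request_stop(void)  /* plays the role of MyThread.Terminate */
{
    atomic_store(&terminated, true);
}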
There is another method, called 'forced termination'. You can call:
TerminateThread(MyThread.Handle, 0);
When you do this, Windows forcefully stops any activity in the thread. This does not require checking for Terminated in the thread, but it is potentially extremely dangerous, because you're killing the thread in the middle of an operation. Your application might crash after that.
That's why you should never use TerminateThread until you're absolutely sure you have all the possible consequences figured out. Currently you don't, so use the first method.
Actually, the currently most-voted answer to this question is incorrect (as were its 34 upvoters...) with regard to how to forcefully kill a thread.
You do not use the ThreadId as a parameter to the TerminateThread procedure. Using the ThreadId will most likely cause an "Invalid handle" error or, in the worst-case scenario, will kill a different thread.
You should pass a thread handle as a parameter:
TerminateThread(MyThread.Handle, 0);
More about the differences between a thread's handle and ID can be found here.
Edit
Seems @himself corrected his mistake after seeing my answer, so this is no longer relevant.
Terminate does not kill a thread; it sets the Terminated property to inform the thread that it needs to terminate. It's the thread's responsibility to watch for Terminated and shut itself down gracefully.
All the Terminate method does is set the Terminated property to True. So you have to manually keep checking that property and exit the thread method when it is set to True.
If you might want to terminate a thread, then you could be better off spawning another app and killing that if you think it has failed; Windows will then tidy up after you.

Finalization Reachable Table

If I implement a destructor in a class, Foo, instances of Foo are tracked on the finalization queue. When an instance of Foo is garbage collected, I understand that the CLR sees the entry in the finalization queue and gives that object special treatment by moving its entry from the finalization queue to the finalization reachable table. Then... nothing else happens for that garbage collection cycle?
Will finalize() always be called during the next garbage collection cycle?
Why isn't finalize called immediately after copying my object to the freachable table? (this seems like extra unnecessary complexity)
The finalizer queue is there to simplify things; it would be more complex without it. While the GC runs, no managed code may be executed; otherwise all the analysis the GC has made might be voided by user code running in the middle.
So when the GC runs, finalization must be deferred instead of being executed right away. Running it on a separate thread minimizes the time during which the VM requires exclusive access to all threads, and increases the potential for concurrent activity.
