Reading examples on the internet, I don't see any protection around queues in FreeRTOS. Are they protected internally, or should I protect them with mutexes?
All the RTOS objects are completely thread safe (as would be expected). You can read the documentation and follow the examples, of which there are a lot: http://www.freertos.org/Embedded-RTOS-Queues.html
It is not needed. Protection is included in the implementation of the queues. After all, semaphores themselves are implemented as queues.
Related
Are queues in FreeRTOS mutually exclusive out of the box? By that I mean: do I need to create some kind of mutual exclusion for writing to or reading from a queue, or is it already implemented by the functions xQueueReceive and xQueueSend?
If you look at the source in "queue.c" you will notice that the xQueueGenericSend() and xQueueGenericReceive() functions use the taskENTER_CRITICAL()/taskEXIT_CRITICAL() macro pair to ensure the operation is atomic, which, in a sense, is the kind of mutual exclusion you are asking about.
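As a minimal sketch of what that buys you in practice (assuming a FreeRTOS v8+ project; the queue handle, task names and payload type below are invented for illustration), two tasks can share one queue with no extra locking:

/* Two tasks sharing one queue; the queue itself provides the protection. */
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

QueueHandle_t xSampleQueue;   /* hypothetical queue handle */

static void vProducerTask(void *pvParameters)
{
    uint32_t ulValue = 0;
    for (;;)
    {
        /* Safe to call from any task - no mutex around the queue is required. */
        xQueueSend(xSampleQueue, &ulValue, portMAX_DELAY);
        ulValue++;
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

static void vConsumerTask(void *pvParameters)
{
    uint32_t ulReceived;
    for (;;)
    {
        /* Blocks until an item arrives; again, no extra locking is needed. */
        if (xQueueReceive(xSampleQueue, &ulReceived, portMAX_DELAY) == pdTRUE)
        {
            /* ... use ulReceived ... */
        }
    }
}

void vStartQueueDemo(void)
{
    xSampleQueue = xQueueCreate(10, sizeof(uint32_t));
    xTaskCreate(vProducerTask, "Prod", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(vConsumerTask, "Cons", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
}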
FreeRTOS queues are thread-safe, you don't need to implement your own locking. See the FreeRTOS documentation about queues:
Queues are the primary form of intertask communications. They can be used to send messages between tasks, and between interrupts and tasks. In most cases they are used as thread safe FIFO (First In First Out) buffers
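Since the quoted documentation also mentions sending between interrupts and tasks: inside an ISR you must use the ...FromISR variants rather than the plain calls. A hedged sketch follows, reusing the hypothetical queue from the earlier example (the exact yield macro name varies slightly between ports):

extern QueueHandle_t xSampleQueue;   /* the hypothetical queue created earlier */

void vExampleISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    uint32_t ulEvent = 42;   /* arbitrary payload for illustration */

    /* Never call xQueueSend() from an ISR; the FromISR variant is ISR-safe. */
    xQueueSendFromISR(xSampleQueue, &ulEvent, &xHigherPriorityTaskWoken);

    /* Request a context switch if the send unblocked a higher-priority task. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}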
As you may have experienced, accessing non-thread-safe variables is a big headache. On iOS one simple solution is the @synchronized directive, which takes a lock to ensure the data is accessed by only one thread at a time. The disadvantages are:
Locking too much reduces app performance greatly, especially when invoked from the main thread.
Deadlock can occur when the logic becomes complex.
Based on the above considerations, we prefer to use a serial queue: every thread-safe critical operation is appended to the end of the queue. It is a great solution, but the problem is that all access interfaces would then have to be designed in an asynchronous style, instead of a synchronous one like the following:
-(id)objectForKey:(NSString *)key;
The people who call this class are reluctant to design it that way. Anyone who has experience in this field, please share and discuss.
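To illustrate the trade-off, here is a minimal sketch of a cache protected by a private serial queue (the class name, the queue label, and the block-based accessor are all invented; ARC and a reasonably recent SDK are assumed). The synchronous objectForKey: keeps the familiar interface but blocks the caller until the queue drains; the asynchronous variant never blocks but forces the block-based style callers dislike:

// Hypothetical cache guarded by a private serial queue.
@interface TSCache : NSObject
- (void)objectForKey:(NSString *)key completion:(void (^)(id object))completion; // async style
- (id)objectForKey:(NSString *)key;                                              // familiar sync style
- (void)setObject:(id)object forKey:(NSString *)key;                             // assumes object != nil
@end

@implementation TSCache {
    dispatch_queue_t _isolationQueue;
    NSMutableDictionary *_storage;
}

- (instancetype)init {
    if ((self = [super init])) {
        _isolationQueue = dispatch_queue_create("com.example.cache.isolation", DISPATCH_QUEUE_SERIAL);
        _storage = [NSMutableDictionary dictionary];
    }
    return self;
}

// Asynchronous read: never blocks the caller; the result is delivered on the main queue.
- (void)objectForKey:(NSString *)key completion:(void (^)(id object))completion {
    dispatch_async(_isolationQueue, ^{
        id object = [_storage objectForKey:key];
        dispatch_async(dispatch_get_main_queue(), ^{ completion(object); });
    });
}

// Synchronous read: keeps the plain interface at the cost of blocking briefly.
- (id)objectForKey:(NSString *)key {
    __block id object = nil;
    dispatch_sync(_isolationQueue, ^{ object = [_storage objectForKey:key]; });
    return object;
}

// Writes are fire-and-forget; any read queued after a write will see it.
- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_async(_isolationQueue, ^{ [_storage setObject:object forKey:key]; });
}

@end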
The final solution is to use NSUserDefaults to store small data; large cache data is put into files that we maintain ourselves.
Per the Apple documentation, the advantage of NSUserDefaults is that it is thread safe and synchronizes to disk periodically.
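For completeness, the NSUserDefaults path is a one-liner per value; the key below is just a placeholder:

// Small values go straight into NSUserDefaults, which Apple documents as thread safe.
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults setObject:@"cached value" forKey:@"TSLastValueKey"];   // hypothetical key
NSLog(@"%@", [defaults stringForKey:@"TSLastValueKey"]);

// -synchronize forces an immediate write; normally the periodic sync is enough.
[defaults synchronize];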
I'm working on some code whose original developer I can't contact.
He passes one class a reference to another class's serial dispatch queue, and I don't see any reason why he didn't just create another dispatch queue (each class is a singleton).
It's not creating issues but I'd like to understand it further, so any insight into the positive implications is appreciated, thank you.
Edit: I suggest reading all the answers here, they explain a lot.
It's actually not a good idea to share queues in this fashion (and no, not because they are expensive - they're not, quite the converse). The rationale is that it's not clear to anyone but a queue's creator just what the semantics of the queue are. Is it serial? Concurrent? High priority? Low priority? All are possible, and once you start passing around internal queues that were created for the benefit of a specific class, an external caller can schedule work on them that causes a mutual deadlock or otherwise behaves unexpectedly with the other items on that queue, because caller A expected concurrent behavior while caller B assumed it was a serial queue, free of the "gotchas" that concurrent execution implies.
Queues should therefore, wherever possible, be hidden implementation details of a class. The class can export methods for scheduling work against its internal queue as necessary, but the methods should be the only accessors since they are the only ones who know for sure how to best access and schedule work on that specific type of queue!
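A sketch of that pattern, with invented names: the queue is a private ivar, and the only way to get work onto it is through a method the class itself exposes.

// The dispatch queue is an implementation detail; callers never see it.
@interface TSImageStore : NSObject
+ (instancetype)sharedStore;
- (void)performWithStore:(void (^)(TSImageStore *store))work;   // the only way in
@end

@implementation TSImageStore {
    dispatch_queue_t _workQueue;   // private: nobody outside can schedule on it directly
}

+ (instancetype)sharedStore {
    static TSImageStore *shared;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[self alloc] init]; });
    return shared;
}

- (instancetype)init {
    if ((self = [super init])) {
        _workQueue = dispatch_queue_create("com.example.imagestore", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

// The class, not the caller, decides that work runs serially and asynchronously.
- (void)performWithStore:(void (^)(TSImageStore *store))work {
    dispatch_async(_workQueue, ^{ work(self); });
}

@end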
If it's a serial queue, then they may be intending to serialize access to some resource shared between all objects that share it.
Dispatch queues are somewhat expensive to create, and tie up system resources. If you can share one, the system runs more efficiently. For that matter, if your app does work in the background, using a shared queue allows you to manage a single pool of tasks to be completed. So yes, there are good reasons for using a shared dispatch queue.
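If serializing access to a shared resource really is the intent, the sharing presumably looks something like this sketch (the queue label and the file scenario are invented):

// Both singletons are handed the same serial queue, so their work on the shared
// file can never interleave, regardless of which thread submitted it.
dispatch_queue_t sharedFileQueue =
    dispatch_queue_create("com.example.shared.file", DISPATCH_QUEUE_SERIAL);

// Somewhere in class A (e.g. a logger):
dispatch_async(sharedFileQueue, ^{
    /* append a line to the shared file */
});

// Somewhere in class B (e.g. an uploader):
dispatch_async(sharedFileQueue, ^{
    /* read the same file; guaranteed to run after A's write above */
});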
I have a streaming iOS app that captures video to Wowza servers.
It's a beast, and it's really finicky.
I'm grabbing configuration settings from a PHP script that shoots out JSON.
Now that I've implemented that, I've run into some strange threading issues. My app connects to the host, says it's streaming, but never actually sends packets.
Getting rid of the remote-configuration NSURLConnection delegate (the request itself is properly formatted, I've checked) fixes the problem. So I'm thinking some data is getting corrupted across threads, or something like that.
What will help me is knowing:
Are NSURLConnection delegate methods called on the main thread?
Will nonatomic data be vulnerable in a delegate method?
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
Have you looked at AFNetworking?
http://www.raywenderlich.com/30445/afnetworking-crash-course
https://github.com/AFNetworking/AFNetworking
It's quite robust and helps immensely with the threading, and there are several good tutorials.
Are NSURLConnection delegate methods called on the main thread?
Yes: if you started the request on the main thread, it calls you back on the main thread when the request completes.
Will nonatomic data be vulnerable in a delegate method?
Generally, collection values (like arrays) are vulnerable when accessed from multiple threads; other values shouldn't create anything worse than a race condition.
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
I feel it's better to use GCD for handling your threads, and asynchronous retrieval using NSURLConnection should be helpful. There are a few networking libraries available that do the boilerplate code for you, such as AFNetworking and ASIHTTPRequest (although the latter is a bit old).
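A hedged sketch of the asynchronous NSURLConnection route on iOS 5+ (the configuration URL is a placeholder): the request runs off the main thread, and you hop back to the main queue before touching UI or capture state.

NSURL *url = [NSURL URLWithString:@"https://example.com/config.json"];   // placeholder URL
NSURLRequest *request = [NSURLRequest requestWithURL:url];

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[[NSOperationQueue alloc] init]   // completion runs off the main thread
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (error != nil || data == nil) {
        NSLog(@"config fetch failed: %@", error);
        return;
    }
    NSDictionary *config = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Apply the configuration on the main thread only.
        NSLog(@"config: %@", config);
    });
}];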
Are NSURLConnection delegate methods called on the main thread?
Delegate methods can be executed on an NSOperationQueue or on a thread. If you do not explicitly schedule the connection, it will use the thread where it receives the start message. This can be the main thread, but it can also be any other secondary thread, which must then have a run loop.
You can set the thread (indirectly) with the method
- (void)scheduleInRunLoop:(NSRunLoop *)aRunLoop forMode:(NSString *)mode
which takes the run loop you retrieved from the current thread. A run loop is associated with a thread in a 1:1 relation. That is, in order to have the delegate methods executed on a certain thread, you need to be executing on that thread, retrieve the run loop from the current thread, and send scheduleInRunLoop:forMode: to the connection.
Setting up a dedicated secondary thread requires that this thread has a run loop. Ensuring this is not always straightforward and requires a small "hack" to keep the run loop alive.
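Put together, scheduling a connection on a dedicated secondary thread might look like the sketch below (the thread-entry method, URL, and delegate wiring are hypothetical; the NSMachPort trick is the "hack" that keeps the run loop from exiting immediately):

// Entry point of a hypothetical dedicated network thread; assumes the surrounding
// object implements the NSURLConnection delegate methods.
- (void)networkThreadMain {
    @autoreleasepool {
        NSURLRequest *request = [NSURLRequest requestWithURL:
                                 [NSURL URLWithString:@"https://example.com/data"]];   // placeholder

        NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request
                                                                      delegate:self
                                                              startImmediately:NO];

        // Deliver delegate callbacks to this thread's run loop.
        [connection scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        [connection start];

        // The "hack": give the run loop a source so it does not return immediately.
        [[NSRunLoop currentRunLoop] addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        [[NSRunLoop currentRunLoop] run];
    }
}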
Alternatively, you can use method
- (void)setDelegateQueue:(NSOperationQueue *)queue
in order to set the queue on which the delegate methods will be executed. Which thread is actually used to execute the delegate methods is then undetermined.
You must not use both approaches: schedule on a thread OR set a delegate queue, not both. Please consult the documentation for more information.
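For comparison, the queue-based variant is shorter; the sketch below assumes it runs inside the delegate object, and the request and operation queue are created purely for illustration:

// Deliver delegate callbacks on an NSOperationQueue instead of a scheduled run loop.
NSURLRequest *request = [NSURLRequest requestWithURL:
                         [NSURL URLWithString:@"https://example.com/data"]];   // placeholder

NSOperationQueue *delegateQueue = [[NSOperationQueue alloc] init];
delegateQueue.maxConcurrentOperationCount = 1;   // serial delivery keeps callbacks ordered

NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request
                                                              delegate:self
                                                      startImmediately:NO];
[connection setDelegateQueue:delegateQueue];   // do NOT also call scheduleInRunLoop:forMode:
[connection start];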
Will nonatomic data be vulnerable in a delegate method?
You should always synchronize access to shared resources, even for integers. On certain multiprocessor systems it is not even guaranteed that accesses to a shared integer are safe; you would have to use memory barriers on both threads to guarantee that.
You might utilize serial queues (either NSOperationQueue or dispatch queue) to guarantee safe access to shared resources.
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
Utilize queues, as mentioned, then you don't have to deal with threads. "Grabbing data" is not only a threading problem ;)
If you prefer a more specific answer you would need to describe your problem in more detail.
To answer your first question: The delegate methods are called on the thread that started the asynchronous load operation for the associated NSURLConnection object.
There are various questions around this topic, and lots of advice saying NOT to use sendSynchronousRequest within dispatch_async, because it blocks the thread, and GCD will spawn lots of new worker threads to service all the synchronous URL requests.
Nobody seems to have a definitive answer as to what iOS 5's [NSURLConnection sendAsynchronousRequest:queue:completionHandler:] does behind the scenes.
One post I read states that it 'might' optimise, and it 'might' use the run loop - but certainly won't create a new thread for each request.
When I pause my debugger while using sendAsynchronousRequest:queue:completionHandler:, the stack trace looks like this:
...now it appears that sendAsynchronousRequest:queue:completionHandler: is actually calling sendSynchronousRequest, and I still have tons of threads created when I use the async method instead of the sync one.
Yes, there are other benefits to using the async call, which I don't want to discuss in this post.
All I'm interested in is performance / thread / system usage, and whether I'm worse off using the sync call inside dispatch_async instead of using the async call.
I don't need advice on using the iOS 4 async calls either; this is purely for educational purposes.
Does anyone have any insightful answers to this?
Thanks
This is actually open source. http://libdispatch.macosforge.org/
It is very unlikely that you will be able to manage the worker threads more efficiently than Apple's implementation. In this context "asynchronous" doesn't mean select/poll, it just means that the call will return immediately. So it is not surprising that the implementation is spawning threads.
As the previous answer stated, both GCD (libdispatch) and the blocks runtime are open source. Internally, iOS/OS X manage a global pool of pthreads, plus any app-specific pools you create in your user code. Since there's a 1:N mapping between OS threads and dispatch tasks, you don't have to (and shouldn't) worry about thread creation and disposal under the hood. To that end, a GCD task doesn't use any time in an app's run loop after invocation, as it's punted to a background thread.
Most kinds of NSURL operations are I/O-bound; there's no amount of concurrency voodoo that can disguise this, but if Apple's own async implementation uses its synchronous counterpart in the background, that probably suggests it is already quite highly optimized. Contrary to what the previous answer said, libdispatch does use scalable I/O internally (kqueue on OS X/iOS/BSD), and if you hand-rolled something yourself you'd have to deal with file-descriptor readiness yourself to yield similar performance.
While possible, the return on time invested is probably marginal. Stick with Apple's implementation and stop worrying about threads!
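For reference, the two patterns the question compares look like this as a sketch (the request is a placeholder); either way GCD owns the worker threads, which is the point made above. The practical difference is that the synchronous variant parks one worker thread per in-flight request for the whole download, while the asynchronous variant leaves thread management entirely to the framework.

NSURLRequest *request = [NSURLRequest requestWithURL:
                         [NSURL URLWithString:@"https://example.com/resource"]];   // placeholder

// Pattern 1 (discouraged): the blocked synchronous call pins one GCD worker thread
// per in-flight request for the full duration of the download.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData *data = [NSURLConnection sendSynchronousRequest:request
                                         returningResponse:&response
                                                     error:&error];
    /* ... handle data ... */
});

// Pattern 2 (iOS 5+): hand the request to the framework and let it decide how to
// service it; the completion block is simply enqueued when the response arrives.
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    /* ... handle data ... */
}];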