Please help me make up my mind.
Allocate at the start of the application, free at the exit.
Allocate when streaming starts, free as soon as streaming stops.
Which one? Why?
In case it matters: it's a voice communication application like Ekiga. I'm allocating the buffers with DSSCL_NORMAL (the supposed 8-bit/22 kHz limitation appears to be false, as far as I've tested).
Since sound is CRITICAL to your application, you should initialise it as soon as you are capable of displaying errors to the user. Otherwise you're making people wait through a startup sequence for an application they can't use.
On the other hand, if the ability to allocate is dependent on user settings then you should obviously allow them to reach the settings prior to doing something that could crash the application.
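If it helps to see it concretely, here is a rough sketch of "initialise as soon as you can report errors" with DirectSound. This is only illustrative; InitSound and the MessageBox reporting are placeholders for however your application actually surfaces errors (link against dsound.lib):

    #define DIRECTSOUND_VERSION 0x0800
    #include <windows.h>
    #include <dsound.h>

    /* Sketch: create the DirectSound device right after the main window exists,
     * so a missing/broken sound device is reported before the user waits through
     * the rest of startup. Secondary buffers can still be created lazily when
     * streaming starts. */
    static LPDIRECTSOUND8 g_dsound = NULL;

    BOOL InitSound(HWND hwnd)
    {
        if (FAILED(DirectSoundCreate8(NULL, &g_dsound, NULL))) {
            MessageBox(hwnd, TEXT("No usable sound device was found."),
                       TEXT("Audio error"), MB_ICONERROR);
            return FALSE;
        }
        if (FAILED(IDirectSound8_SetCooperativeLevel(g_dsound, hwnd, DSSCL_NORMAL))) {
            MessageBox(hwnd, TEXT("Could not set the DirectSound cooperative level."),
                       TEXT("Audio error"), MB_ICONERROR);
            return FALSE;
        }
        return TRUE;
    }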
I have a Beaglebone running Ubuntu. We want to continuously sample from 3 on-board ATD converters at 100KS/s, and every window of samples we will run a cross correlation DSP algorithm. Once we find a correlation value above a threshold, we will send the value to a PC.
My concern is the process scheduling in Ubuntu. If our process gets swapped out and an ATD sample becomes available during this time, the process will miss the sample. We need to ensure that our process will capture every sample and save it in memory.
With this being said, is there a way to trigger interrupts on the Beaglebone so that if an ATD sample is ready, the sample will be saved in the memory of our program even if the program does not have the processor at the time?
Thanks!
You might be able to trigger the EDMA or use the PRUSS. Probably best to ask on beagleboard#googlegroups.com. There isn't a DSP per se on the BeagleBone.
This is not exactly an answer to your question, but hopefully it explains how the process works. Since you didn't mention what hardware you are running for AD conversion, maybe this is the best that can be done:
With audio hardware, which faces the same problem, the solution comes from the hardware and the drivers working together: whenever the hardware has filled up enough of the buffer it signals the driver (via an interrupt or some similar mechanism). In some cases, it's also possible that the driver polls the hardware or something like that, but that's a less efficient solution, and I'm not sure anyone does it that way anymore (maybe on cheaper hardware?). From there, the driver process may call right into the end-user process, or it may simply mark the relevant end-user process as "runnable". Either way, control needs to be transferred to the end user process.
For that to happen, the end-user process must be running at a higher priority than anything else occupying the CPUs at that moment. To guarantee that your process is always first in the queue, you can run it at a high priority; with the appropriate permissions, you can even run at very high (real-time) priorities.
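For example, on Linux you can request a real-time scheduling class for the sampling process. This is only a sketch; the priority value 80 is arbitrary, and it requires root or the CAP_SYS_NICE capability:

    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch: put the calling process in the SCHED_FIFO class so it preempts
     * ordinary time-shared processes as soon as it becomes runnable. */
    int make_realtime(void)
    {
        struct sched_param sp;
        memset(&sp, 0, sizeof sp);
        sp.sched_priority = 80;   /* arbitrary example value */

        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }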
The time it takes for the top priority process to go from runnable to running is sometimes called the "latency" of the OS, though I am sure there's a more specific technical term. The latency of Linux is on the order of 1 ms, but since it's not a "hard" real-time OS, this is not a guarantee. If this is too long to handle your chunks of data, you may have to buffer some of it in your driver.
It occurred to me that Core Audio callbacks require very low latency. In my case I'm getting requests for 512 samples at a time, which at 44100 Hz means that the callback can take at most 11.6 milliseconds to run.
Now, as I understand garbage collection, each collection cycle requires the VM to stop all threads. It is therefore possible for a garbage collection cycle to interrupt a Core Audio callback and cause glitches.
If so, then it is not really safe to use Core Audio from MonoTouch.
Am I correct in my assumptions? or is this all incorrect?
The Core Audio render callback is going to be called on a real-time thread which is very strict about its deadlines. From the sound of it, you're occasionally exceeding the render callback's time allowance and being cut off (which means glitches). While I don't know much about MonoTouch, your guess that GC pauses are the culprit sounds like a very likely conclusion.
To give you a sense of just how strict Core Audio render callbacks are, here are some things that are unacceptable in that context:
Allocating memory
Waiting on a mutex
Reading data from disk
Objective-C messaging
Due to the architecture of Core Audio, render callbacks are going to be triggered very shortly before the audio you produce will be heard. Therefore, even a brief GC hangup could trigger audible glitches.
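To illustrate, a well-behaved render callback typically just copies pre-rendered samples out of a lock-free buffer that was filled elsewhere. This is only a sketch; RingBuffer and ring_read are hypothetical helpers assumed to be lock-free and allocation-free:

    #include <AudioUnit/AudioUnit.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical lock-free ring buffer filled on a non-real-time thread. */
    typedef struct RingBuffer RingBuffer;
    extern size_t ring_read(RingBuffer *rb, float *dst, size_t frames);

    /* Sketch of a render callback that obeys the constraints listed above:
     * no allocation, no locks, no disk I/O, no Objective-C messaging. */
    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        RingBuffer *rb = (RingBuffer *)inRefCon;
        float *out = (float *)ioData->mBuffers[0].mData;

        size_t got = ring_read(rb, out, inNumberFrames);
        if (got < inNumberFrames) {
            /* Underrun: emit silence for the missing frames rather than block. */
            memset(out + got, 0, (inNumberFrames - got) * sizeof(float));
        }
        return noErr;
    }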
No. The MonoTouch VM does not appear to guarantee that code executes in deterministic time. Real-time audio callbacks require code (usually compiled native C) whose performance can be strictly bounded in time, including all OS calls and any interpreter overhead.
I'm trying to get a feel for the amount of memory an iOS app can reliably allocate to help me drive some design decisions. The app is going to involve real time synchronised processed audio and animation.
Other than writing code that loads up the frameworks I'll need then trying to progressively allocate memory until I get warnings, is there any way to determine this kind of thing?
The simulator doesn't let you select a specific hardware model, so I assume I can't even simulate this stuff.
As far as I know, you cannot determine in advance how much memory an app can allocate. Always try to keep your app's memory footprint as low as possible.
The memory available to your app depends on many factors: the number of background processes running, the amount of free memory, the memory used by other apps, the device you are running on, etc.
So it's not good practice to pick a fixed memory budget for your app and design around it.
Also, hold only the memory you actually need and handle low-memory situations in callbacks like didReceiveMemoryWarning. Always assume the OS has allocated you the least amount of memory possible.
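If you ever need to watch for low-memory conditions from plain C code rather than a view controller, one alternative (just a sketch, not a recommendation over didReceiveMemoryWarning) is a libdispatch memory-pressure source:

    #include <dispatch/dispatch.h>

    /* Sketch: react to system memory-pressure events from C. What you free in
     * response (caches, reloadable buffers, ...) is up to the application. */
    static void on_memory_pressure(void *ctx)
    {
        dispatch_source_t src = (dispatch_source_t)ctx;
        unsigned long flags = dispatch_source_get_data(src);
        if (flags & DISPATCH_MEMORYPRESSURE_CRITICAL) {
            /* Drop anything that can be rebuilt later. */
        }
    }

    static dispatch_source_t start_memory_monitor(void)
    {
        dispatch_source_t src = dispatch_source_create(
            DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0,
            DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL,
            dispatch_get_main_queue());
        dispatch_set_context(src, src);
        dispatch_source_set_event_handler_f(src, on_memory_pressure);
        dispatch_resume(src);
        return src;
    }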
I'm developing an iOS application which (like any other) requires a certain amount of free memory to run correctly. In my case it's at least 4 MB; I cannot work with any less than that. It's a fairly small amount, but a few times (on my device at least) I had only 2 MB free and the program crashed. What do you think is the best way to tell users how much memory the app needs? I know the code to get the currently available memory, but even if I tell the user (say, in a UIAlertView when the program starts) that memory is running low, what can I suggest they do to free more (other than turning the device off and on)? Any ideas?
On older devices you can't really rely on getting more than 8MB. 4MB is a great target, and if through your profiling you've determined that's all you need, you should be fine.
However, I think the idea is that when you receive memory warnings you shouldn't bother the user with that kind of thing. I would find it pretty annoying myself. It would be better to limit your app's activity or throttle back whatever you are doing that is so memory intensive.
On which kinds of iPhone devices is your app being tested? I suppose iOS has to do its job well and free enough memory for you, or kill background apps so it can reclaim more memory.
Is there a way to access (read or free) memory chunks that are outside the memory allocated to the program without getting access violation exceptions?
Apart from this, what I would actually like to understand is how a memory cleaner (a system garbage collector) works. I've always wanted to write such a program. (The language isn't an issue.)
Thanks in advance :)
No.
Any modern operating system will prevent one process from accessing memory that belongs to another process.
In fact, if you understood virtual memory, you'd understand that this is impossible. Each process has its own virtual address space.
The simple answer (unless I'm mistaken): no. Generally it's not a good idea, for two reasons. First, it creates a trust problem between your program and other programs (not to mention that we humans won't trust your application either). Second, if you were able to access another application's memory and change it without that application knowing, you would likely cause it to crash (this is also what viruses do).
A garbage collector is called from a runtime. The runtime "owns" the memory space and allows other applications to "live" within it; that is why the garbage collector can exist. You would have to create a runtime that the OS allocates memory to, have the runtime execute the application under its authority, and run the GC under its authority as well. You would need some instrumentation or API that lets the application developer "request" memory from your runtime (not from the OS), and your runtime has to not only respond to such requests but also keep track of the memory it has handed out to each application. You would probably also need a framework (a set of DLLs) that exposes these calls so the developer can make the requests from inside their application.
You have to be sure that your garbage collector does not remove any memory other than the memory used by the application being executed, as you may have more than one application running within your runtime at the same time.
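To make that concrete, here is a toy mark-and-sweep sketch (all names invented for the example): the runtime keeps a list of every object it has handed out, marks what is reachable from a root set, and frees the rest.

    #include <stdlib.h>
    #include <stddef.h>

    /* Toy object: a single reference field keeps the example small. */
    typedef struct Object {
        struct Object *next;    /* runtime-owned list of every allocation */
        struct Object *child;   /* one reference, for simplicity */
        int marked;
    } Object;

    static Object *all_objects = NULL;   /* everything the runtime allocated */

    Object *gc_alloc(void)
    {
        Object *o = calloc(1, sizeof(Object));
        if (o) {
            o->next = all_objects;       /* register with the runtime */
            all_objects = o;
        }
        return o;
    }

    static void mark(Object *o)
    {
        while (o && !o->marked) {        /* follow references, stop at cycles */
            o->marked = 1;
            o = o->child;
        }
    }

    void gc_collect(Object **roots, size_t nroots)
    {
        for (size_t i = 0; i < nroots; i++)
            mark(roots[i]);              /* phase 1: mark everything reachable */

        Object **link = &all_objects;    /* phase 2: sweep the unreachable rest */
        while (*link) {
            Object *o = *link;
            if (!o->marked) {
                *link = o->next;
                free(o);
            } else {
                o->marked = 0;           /* reset for the next cycle */
                link = &o->next;
            }
        }
    }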
Hope this helps.
Actually, the right answer is YES. There are programs that do it (and if they exist, it means it is possible).
You may need to write a kernel driver to accomplish this, but it is possible.
And here is another example: a debugger's attach command. That is one program interacting with another program's memory, even though the two started as separate processes.
Of course, messing with another program's memory when you don't know what you're doing will probably make it crash.
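For completeness, here is a minimal sketch of that debugger-attach route on Linux using ptrace. It reads a single word from another process and assumes you have the usual ptrace permissions; the pid and address are supplied on the command line:

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <sys/wait.h>
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <hex-address>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);
        void *addr = (void *)strtoul(argv[2], NULL, 16);

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
            perror("PTRACE_ATTACH");            /* needs permission (same user, ptrace_scope) */
            return 1;
        }
        waitpid(pid, NULL, 0);                  /* wait until the target is stopped */

        errno = 0;
        long word = ptrace(PTRACE_PEEKDATA, pid, addr, NULL);
        if (errno != 0)
            perror("PTRACE_PEEKDATA");
        else
            printf("word at %p in pid %d: 0x%lx\n", addr, (int)pid, word);

        ptrace(PTRACE_DETACH, pid, NULL, NULL); /* let the target continue */
        return 0;
    }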