I'm working in iOS and have a simple OpenAL project running.
The difference from most OpenAL projects I've seen is that I'm not loading a sound file. Instead I load an array of raw data into alBufferData. Using a couple of equations I can generate data for white noise, sine, and pulse waves, and it all works well.
My problem is that I need a way to modify this data whilst the sound is playing in real-time.
Is there a way to modify this data without having to create a new buffer? (I tried the approach of creating a new buffer with new data and then swapping it in, but it's nowhere near quick enough.)
Any help or suggestions of other ways to accomplish this would be much appreciated.
Thanks
I haven't done it on iOS, but with OpenAL on the PC what you would do is chain a few buffers together. Each buffer would hold a small time period's worth of data. Periodically, check whether the playing buffer is done, and if so, add it to a free list for reuse. When you want to change the sound, write the new waveform into a free buffer and add it to the chain. You select the buffer size to balance latency against the required update rate: smaller buffers allow faster response to changes, but need to be generated more often.
This page suggests that a half second update rate is doable. Whether you can go faster depends on the complexity of your calculations as well as on the overhead of the OS.
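For illustration, here's a rough C sketch of that chaining approach using the standard OpenAL streaming calls; generateSamples() and the chunk size are assumptions for this example, not anything from the question:
#include <OpenAL/al.h>

#define CHUNK_SAMPLES 2048   /* ~46 ms at 44.1 kHz: tune for latency vs. update rate */

/* Placeholder for whatever noise/sine/pulse math fills a chunk. */
extern void generateSamples(ALshort *out, int count);

void topUpSource(ALuint source, ALshort *scratch, ALsizei sampleRate)
{
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);

    while (processed-- > 0) {
        ALuint buffer;
        alSourceUnqueueBuffers(source, 1, &buffer);   /* reclaim a finished buffer  */
        generateSamples(scratch, CHUNK_SAMPLES);      /* write the current waveform */
        alBufferData(buffer, AL_FORMAT_MONO16, scratch,
                     CHUNK_SAMPLES * sizeof(ALshort), sampleRate);
        alSourceQueueBuffers(source, 1, &buffer);     /* chain it back onto the source */
    }
}
Calling something like topUpSource() periodically (for instance from a timer) keeps the source fed while letting you change the waveform between chunks.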
Changing the data during playback is not supported in OpenAL.
However, you can still try it and see if you get acceptable results (though you'll be racing against the OpenAL playback mechanism, and any lag-outs in your app could throw it off, so do this at your own risk).
There's an Apple extension version of alBufferData that tells OpenAL to use the data you give it directly, rather than making its own local copy. You set it up like so:
typedef ALvoid AL_APIENTRY (*alBufferDataStaticProcPtr) (const ALint bid,
                                                         ALenum format,
                                                         const ALvoid* data,
                                                         ALsizei size,
                                                         ALsizei freq);

static alBufferDataStaticProcPtr alBufferDataStatic = NULL;

alBufferDataStatic = (alBufferDataStaticProcPtr) alcGetProcAddress(NULL, (const ALCchar*) "alBufferDataStatic");
Call alBufferDataStatic() like you would call alBufferData():
alBufferDataStatic(bufferId, format, data, size, frequency);
Since it's now using your sound data buffer rather than its own, you could conceivably modify that data and it won't be the wiser (provided you're not modifying things too close to where it's currently playing from in the buffer).
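For illustration only, a minimal sketch of what that in-place rewrite might look like, assuming the memory you handed to alBufferDataStatic() is a 16-bit mono array you allocated yourself (the names and size here are made up):
#include <OpenAL/al.h>
#include <math.h>

#define SAMPLE_COUNT 4096
static ALshort sampleBuffer[SAMPLE_COUNT];   /* the exact memory passed to alBufferDataStatic() */

/* Rewrite the waveform while a looping source plays from sampleBuffer.
   OpenAL keeps reading the same memory, so the audible sound changes. */
void regenerateSine(float frequency, float sampleRate)
{
    for (int i = 0; i < SAMPLE_COUNT; i++) {
        float phase = 2.0f * (float)M_PI * frequency * (float)i / sampleRate;
        sampleBuffer[i] = (ALshort)(sinf(phase) * 32767.0f);
    }
}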
However, this approach is risky, since it depends on timing you're not fully in control of. To be 100% safe you'll need to use Audio Units.
The documentation for setVertexBytes says:
Use this method for single-use data smaller than 4 KB. Create a MTLBuffer object if your data exceeds 4 KB in length or persists for multiple uses.
What exactly does single-use mean?
For example, if I have a uniforms struct which is less than 4 KB (and is updated every frame), is it better to use a triple-buffering technique or simply use setVertexBytes?
From what I understand, using setVertexBytes would copy the data every time into an MTLBuffer that Metal manages. This sounds slower than triple buffering.
But then if I have different objects, each with its own uniforms, I would have to triple buffer everything, since it's dynamically updated.
And if I have a material that updates rarely but is passed to the shader every frame, would it be better to keep it in a buffer or pass it as a pointer using setVertexBytes?
It's not necessarily the case that Metal manages a distinct resource into which this data is written. As user Columbo notes in their comment, some hardware allows constant data to be recorded directly into command buffer memory, from which it can be subsequently read by a shader.
As always, you should profile in order to find the difference between the two approaches on your target hardware, but if the amount of data you're pushing per draw call is small, you might very well find that using setVertexBytes:... is faster than writing into a buffer and calling setVertexBuffer:....
For data that doesn't vary every frame (your slow-varying material use case), it may indeed be more efficient to keep that data in a buffer (double- or triple-buffered) rather than using setVertexBytes:....
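For illustration, a rough Objective-C sketch of both options; the Uniforms struct, the buffer index, and the three-buffer ring are assumptions, and a real renderer would also gate CPU writes with a dispatch_semaphore so it never writes a buffer the GPU is still reading:
#import <Metal/Metal.h>
#import <simd/simd.h>
#include <string.h>

typedef struct {
    matrix_float4x4 modelViewProjectionMatrix;   /* hypothetical per-draw data */
} Uniforms;

static void encodeUniforms(id<MTLRenderCommandEncoder> encoder,
                           NSArray<id<MTLBuffer>> *uniformBuffers,   /* 3 buffers, created once */
                           NSUInteger frameNumber,
                           const Uniforms *uniforms)
{
    /* Option A: small, single-use, per-draw data; Metal copies it into the
       command stream for you. */
    [encoder setVertexBytes:uniforms length:sizeof(Uniforms) atIndex:1];

    /* Option B: a triple-buffered ring of MTLBuffers you manage yourself
       (each created once with -newBufferWithLength:options:). In practice
       you would pick one option or the other, not both. */
    id<MTLBuffer> buffer = uniformBuffers[frameNumber % uniformBuffers.count];
    memcpy(buffer.contents, uniforms, sizeof(Uniforms));
    [encoder setVertexBuffer:buffer offset:0 atIndex:1];
}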
In the Metal Best Practices Guide, it states that for best performance one should "implement a triple buffering model to update dynamic buffer data," and that "dynamic buffer data refers to frequently updated data stored in a buffer."
Does an MTLTexture qualify as "frequently updated data stored in a buffer" if it needs to be updated every frame? All the examples in the guide above focus on MTLBuffers.
I notice Apple's implementation in MetalKit has a concept of a nextDrawable, so perhaps that's what's happening here?
If a command could be in flight and it could access (read/sample/write) the texture while you're modifying that same texture on the CPU (e.g. using one of the -replaceRegion:... methods or by writing to a backing IOSurface), then you will need a multi-buffering technique, yes.
If you're only modifying the texture on the GPU (by rendering to it, writing to it from a shader function, or using blit command encoder methods to copy to it), then you don't need multi-buffering. You may need to use a texture fence within the shader function or you may need to call -textureBarrier on the render command encoder between draw calls, depending on exactly what you're doing.
Yes, nextDrawable provides a form of multi-buffering. In this case, it's not due to CPU access, though. You're going to render to one texture while the previously-rendered texture may still be on its way to the screen. You don't want to use the same texture for both because the new rendering could overdraw the texture just before it's put on screen, thus showing corrupt results.
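A rough sketch of that texture-pool idea (the pool size, names, and 4-byte pixel format are assumptions, and a real app would pair this with a semaphore limiting frames in flight to the pool size):
#import <Metal/Metal.h>

/* Cycle through a small pool of textures so the CPU never writes into a
   texture a queued command buffer may still be reading. */
static id<MTLTexture> textureForFrame(NSArray<id<MTLTexture>> *texturePool,
                                      NSUInteger frameNumber,
                                      const void *pixelData,
                                      NSUInteger width,
                                      NSUInteger height)
{
    id<MTLTexture> texture = texturePool[frameNumber % texturePool.count];
    [texture replaceRegion:MTLRegionMake2D(0, 0, width, height)
               mipmapLevel:0
                 withBytes:pixelData
               bytesPerRow:width * 4];   /* assuming a 4-byte-per-pixel format */
    return texture;
}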
I'm a total noob when it comes to Core Audio, so bear with me. Basically, what I want to do is record audio data from a machine's default mic, record until the user decides to stop, and then do some analysis on the entire recording. I've been learning from the book "Learning Core Audio" by Chris Adamson and Kevin Avila (which is an awesome book, found it here: http://www.amazon.com/Learning-Core-Audio-Hands-On-Programming/dp/0321636848/ref=sr_1_1?ie=UTF8&qid=1388956621&sr=8-1&keywords=learning+core+audio ). I see how the Audio Queue works, but I'm not sure how to get data as it's coming from the buffers and store it in a global array.
The biggest problem is that I can't allocate an array a priori, because we have no idea how long the user wants to record for. I'm guessing that a global array would have to be passed to the Audio Queue's callback, where it would then append data from the latest buffer; however, I'm not exactly sure how to do that, or if that's the correct place to be doing so.
If using AudioUnits I'm guessing that I would need to create two audio units, one a remote IO unit to get the microphone data and one generic output audio unit that would do the data appending in the unit's (I'm guessing here, really not sure) AudioUnitRender() function.
If you know where I need to be doing these things or know any resources that could help explain how this works, that would be awesome.
I eventually want to learn how to do this in iOS and Mac OS platforms. For the time being I'm just working in the Mac OS.
Since you know the sample rate, your app can pre-allocate a sufficient number of new buffers (for example, in a linked list) in the UI run-loop (for example, periodically, based on an NSTimer or CADisplayLink), into which you can then just copy data during the Audio Queue callbacks.
There are also a few async file write functions that are safe to call inside an audio callback. After recording you can copy the data back out of the file into a now-known-sized memory array (or just mmap the file).
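To make that concrete, a minimal sketch with made-up names: the UI run-loop keeps the free list topped up, and the Audio Queue callback only pops a chunk and memcpy()s into it, so it never allocates on the audio thread. (A real version would swap the list heads atomically, since the callback runs on its own thread, and would keep a tail pointer so chunks stay in recording order.)
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

#define CHUNK_CAPACITY 4096                 /* samples per pre-allocated chunk */

typedef struct AudioChunk {
    Float32            samples[CHUNK_CAPACITY];
    UInt32             sampleCount;
    struct AudioChunk *next;
} AudioChunk;

static AudioChunk *freeList     = NULL;     /* topped up by the UI run-loop     */
static AudioChunk *recordedList = NULL;     /* filled by the audio callback     */

/* Called from the Audio Queue callback: copy this buffer's samples into a
   pre-allocated chunk; no malloc on the audio thread. */
static void storeSamples(const Float32 *src, UInt32 count)
{
    AudioChunk *chunk = freeList;
    if (chunk == NULL || count > CHUNK_CAPACITY) return;   /* pool ran dry: drop or grow it */
    freeList = chunk->next;

    memcpy(chunk->samples, src, count * sizeof(Float32));
    chunk->sampleCount = count;

    chunk->next  = recordedList;
    recordedList = chunk;
}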
OK everyone, so I figured out the answer to my own question, and I only had to add about 10 lines of code to get it to work. Basically, in the user data struct (or, as Apple calls it, the client data struct) I added a variable to keep track of the total number of recorded samples, the sample rate (just so I would have access to the value inside the callback), and a pointer to the audio data. In the callback function, I reallocate memory for the pointer and then copy the contents of the buffer into the newly allocated memory.
I'll post the code for my client recorder struct and the lines of code inside the callback function. I would like to post code for the entire program, but much of it was borrowed from the book "Learning Core Audio" by Chris Adamson and Kevin Avila and I don't want to infringe on any copyrights held by the book (can someone tell me whether it's legal to post that here? If it is, I'd be more than happy to post the whole program).
The client recorder struct code:
//user info struct for recording audio queue callbacks
typedef struct MyRecorder {
    AudioFileID recordFile;
    SInt64      recordPacket;
    Boolean     running;
    UInt32      totalRecordedSamples;
    Float32    *allData;
    Float64     sampleRate;
} MyRecorder;
This struct needs to be initialized in the main loop of the program. It would look something like this:
MyRecorder recoder = {0};
I know I spelled "recorder" incorrectly.
Next, what I did inside the callback function:
//now let's also write the data to a buffer that can be accessed in the rest of the program
//first calculate the number of samples in the buffer
//(a more robust count would be inBuffer->mAudioDataByteSize / sizeof(Float32),
//since the final buffer may be only partially filled)
UInt32 nSamples = (UInt32)(RECORD_SECONDS * recorder->sampleRate);
//so first we need to reallocate memory that the recorder.allData pointer is pointing to
//pretty simple, just add the amount of samples we just recorded to the number that we already had
//to get the current total number of samples, and the size of each sample, which we get from
//sizeof(Float32).
recorder->allData = realloc(recorder->allData, sizeof(Float32) * (nSamples + recorder->totalRecordedSamples));
//now we need to copy what's in the current buffer into the memory that we just allocated
//however, remember that we don't want to overwrite what we just recorded
//so using pointer arith, we need to offset the recorder->allData pointer in memcpy by
//the current value of totalRecordedSamples
memcpy((recorder->allData) + (recorder->totalRecordedSamples), inBuffer->mAudioData, sizeof(Float32) * nSamples);
//update the number of total recorded samples
recorder->totalRecordedSamples += nSamples;
And of course at the end of my program I freed the memory in the recoder.allData pointer.
Also, my experience with C is very limited, so if I'm making some mistakes especially with memory management, please let me know. Some of the malloc, realloc, memcpy etc. type functions in C sort of freak me out.
EDIT: Also I'm now working on how to do the same thing using AudioUnits, I'll post the solution to that when done.
I'm trying to develop an iPhone app that will use the camera to record only the last few minutes/seconds.
For example, you record some movie for 5 minutes, click "Save", and only the last 30s is saved. I don't want to actually record five minutes and then trim it down to the last 30s (that won't work for me). This idea is called "loop recording".
This results in an endless video recording, but you remember only last part.
The Precorder app does what I want to do. (I want to use this feature in another context.)
I think this should be easily simulated with a Circular buffer.
I started a project with AVFoundation. It would be awesome if I could somehow redirect video data to a circular buffer (which I will implement). I found information only on how to write it to a file.
I know I can chop video into intervals and save them, but saving it and restarting camera to record another part will take time and it is possible to lose some important moments in the movie.
Any clues how to redirect data from camera would be appreciated.
Important! As of iOS 8 you can use VTCompressionSession and have direct access to the NAL units instead of having to dig through the container.
Well, luckily you can do this, and I'll tell you how, but you're going to have to get your hands dirty with either the MP4 or MOV container. A helpful resource for this (though more MOV-specific) is Apple's QuickTime File Format introduction manual:
http://developer.apple.com/library/mac/#documentation/QuickTime/QTFF/QTFFPreface/qtffPreface.html#//apple_ref/doc/uid/TP40000939-CH202-TPXREF101
First things first: you're not going to be able to start your saved movie from an arbitrary point 30 seconds before the end of the recording; you'll have to use some I-frame at approximately 30 seconds. Depending on what your keyframe interval is, it may be several seconds before or after that 30-second mark. You could use all I-frames and start from an arbitrary point, but then you'll probably want to re-encode the video afterward because it will be quite large.
So, knowing that, let's move on.
First step is when you set up your AVAssetWriter, you will want to set its AVAssetWriterInput's expectsMediaDataInRealTime property to YES.
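That setup looks roughly like this; the variable names and the settings dictionary are placeholders:
#import <AVFoundation/AVFoundation.h>

static AVAssetWriterInput *MakeRealtimeVideoInput(AVAssetWriter *assetWriter,
                                                  NSDictionary *videoSettings)
{
    AVAssetWriterInput *videoInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:videoSettings];
    videoInput.expectsMediaDataInRealTime = YES;   /* needed when feeding live capture data */
    [assetWriter addInput:videoInput];
    return videoInput;
}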
In the captureOutput callback you'll be able to do an fread from the file you are writing to. The first fread will get you a little bit of MP4/MOV (whatever format you're using) header (i.e. 'ftyp' atom, 'wide' atom, and the beginning of the 'mdat' atom). You want what's inside the 'mdat' section. So the offset you'll start saving data from will be 36 or so.
Each read will get you 0 or more AVC NAL Units. You can find a listing of NAL unit types from ISO/IEC 14496-10 Table 7-1. They will be in a slightly different format than specified in Annex B, but it's fine. Additionally, there will only be IDR slices and non-IDR slices in the MP4/MOV file. IDR will be the I-Frame you're looking to hang onto.
The NAL unit format in the MP4/MOV container is as follows:
4 bytes - Size
[Size] bytes - NALU Data
data[0] & 0x1F - NALU Type
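A sketch of walking those length-prefixed units in a chunk read from the 'mdat' section; buf and len stand in for whatever your fread returned, and nothing here is a real API:
#include <stdint.h>
#include <stddef.h>

static void walkNALUnits(const uint8_t *buf, size_t len)
{
    size_t pos = 0;
    while (pos + 4 <= len) {
        /* 4-byte big-endian size prefix */
        uint32_t naluSize = ((uint32_t)buf[pos]     << 24) |
                            ((uint32_t)buf[pos + 1] << 16) |
                            ((uint32_t)buf[pos + 2] <<  8) |
                             (uint32_t)buf[pos + 3];
        if (pos + 4 + naluSize > len)
            break;                                  /* partial unit: wait for more data */

        const uint8_t *nalu = buf + pos + 4;
        uint8_t naluType = nalu[0] & 0x1F;          /* 5 = IDR slice, the I-frame to anchor on */
        /* ... hand (naluType, nalu, naluSize) to your circular buffer here ... */
        (void)naluType;

        pos += 4 + naluSize;
    }
}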
So now you have the data you're looking for. When you go to save this file, you'll have to update the MP4/MOV container with the correct length and sample count; you'll have to update the 'stsz' atom with the correct sizes for each sample, and do things like updating the media headers and track headers with the correct duration of the movie, and so on. What I would probably recommend is creating a sample container on first run that you can more or less just overwrite/augment with the appropriate data for that particular movie. You'll want to do this because the encoders on the various iDevices don't all have the same settings, and the 'avcC' atom contains encoder information.
You don't really need to know much about the AVC stream in this case, so you'll probably want to concentrate your experimenting around updating the container format you choose correctly. Good luck.
I see the word "BUFFER" everywhere, but I am unable to grasp what it exactly is.
Would anybody please explain what is buffer in layman's language?
When is it used?
How is it used?
Imagine that you're eating candy out of a bowl. You take one piece regularly. To prevent the bowl from running out, someone might refill the bowl before it gets empty, so that when you want to take another piece, there's candy in the bowl.
The bowl acts as a buffer between you and the candy bag.
If you're watching a movie online, the web service continually downloads the next 5 minutes or so into a buffer, so the player always has data on hand and a momentary slowdown in your connection doesn't make the video hang.
The term "buffer" is a very generic term, and is not specific to IT or CS. It's a place to store something temporarily, in order to mitigate differences between input speed and output speed. While the producer is being faster than the consumer, the producer can continue to store output in the buffer. When the consumer gets around to it, it can read from the buffer. The buffer is there in the middle to bridge the gap.
If you average out the definitions at http://en.wiktionary.org/wiki/buffer, I think you'll get the idea.
For proof that we really did "have to walk 10 miles through the snow every day to go to school", see TOPS-10 Monitor Calls Manual Volume 1, section 11.9, "Using Buffered I/O", at bookmark 11-24. Don't read if you're subject to nightmares.
A buffer is simply a chunk of memory used to hold data. In the most general sense, it's usually a single blob of memory that's loaded in one operation and then emptied in one or more operations, as in Perchik's "candy bowl" example. In a C program, for example, you might have:
#include <unistd.h>

#define BUFSIZE 1024

char buffer[BUFSIZE];
ssize_t len = 0;

// ... later

while ((len = read(STDIN_FILENO, buffer, BUFSIZE)) > 0)
    write(STDOUT_FILENO, buffer, len);
... which is a minimal version of cp(1). Here, the buffer array is used to store the data read by read(2) until it's written; then the buffer is re-used.
There are more complicated buffer schemes used, for example a circular buffer, where some finite number of buffers are used, one after the next; once the buffers are all full, the index "wraps around" so that the first one is re-used.
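A bare-bones sketch of that circular scheme (the slot count and size are arbitrary):
#include <stddef.h>

#define SLOT_COUNT 4
#define SLOT_SIZE  1024

static char   slots[SLOT_COUNT][SLOT_SIZE];
static size_t writeIndex = 0;                /* next slot to fill */

/* Hand back the next slot to fill; after SLOT_COUNT uses the index
   wraps around and the oldest slot is reused. */
static char *nextWriteSlot(void)
{
    char *slot = slots[writeIndex];
    writeIndex = (writeIndex + 1) % SLOT_COUNT;
    return slot;
}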
Buffer means 'temporary storage'. Buffers are important in computing because interconnected devices and systems are seldom 'in sync' with one another, so when information is sent from one system to another, it has somewhere to wait until the recipient system is ready.
Really, it depends on the context in each case, as there is no one definition, but speaking very generally a buffer is a place to temporarily hold something. The best real-world analogy I can think of is a waiting area. One simple example in computing is when "buffer" refers to a part of RAM used for temporary storage of data.
That a buffer is "a place to store something temporarily, in order to mitigate differences between input speed and output speed" is accurate, consider this as an even more "layman's" way of understanding it.
"To Buffer", the verb, has made its way into every day vocabulary. For example, when an Internet connection is slow and a Netflix video is interrupted, we even hear our parents say stuff like, "Give it time to buffer."
What they are saying is, "Hit pause; allow time for more of the video to download into memory; and then we can watch it without it stopping or skipping."
Given the producer / consumer analogy, Netflix is producing the video. The viewer is consuming it (watching it). A space on your computer where extra downloaded video data is temporarily stored is the buffer.
A video progress bar is probably the best visual example of this:
Say the video is 5:05 long. Its total play time is represented by the white portion of the bar (which would be solid white if you had not started watching it yet).
As represented by the purple, I've actually consumed (watched) 10 seconds of the video.
The grey portion of the bar is the buffer. This is the video data that is currently downloaded into memory, the buffer, and is available to you locally. In other words, even if your Internet connection were to be interrupted, you could still watch the portion you have buffered.
The term "buffer" is a very generic term, and is not specific to IT or CS. It's a place to store something temporarily, in order to mitigate differences between input speed and output speed. While the producer is being faster than the consumer, the producer can continue to store output in the buffer. When the consumer speeds up, it can read from the buffer. The buffer is there in the middle to bridge the gap.
A buffer is a data area shared by hardware devices or program processes that operate at different speeds or with different sets of priorities. The buffer allows each device or process to operate without being held up by the other. For a buffer to be effective, its size and the algorithms for moving data into and out of it need to be chosen carefully.
buffer is a "midpoint holding place" but exists not so much to accelerate the speed of an activity as to support the coordination of separate activities.
This term is used both in programming and in hardware. In programming, buffering sometimes implies the need to screen data from its final intended place so that it can be edited or otherwise processed before being moved to a regular file or database.
A buffer is a temporary placeholder (a variable, in many programming languages) in memory (RAM or disk) into which data can be dumped and then processed.
There are many advantages to buffering: it allows things to happen in parallel, improves I/O performance, and so on.
It also has downsides if used incorrectly, such as buffer overflows and buffer underflows.
A C example of character buffers:
#include <stdlib.h>
/* two zero-initialized character buffers on the heap */
char *buffer1 = calloc(5, sizeof(char));
char *buffer2 = calloc(15, sizeof(char));