I am using the MIKMIDI framework, which uses the AudioToolbox type MusicTimeStamp.
How can I convert this timestamp to milliseconds?
A MusicTimeStamp is a raw beat count; to convert it into milliseconds you need to know the tempo of the music you're working with, and in fact the whole tempo map, since tempo isn't an invariant.
Outside of a MusicSequence, a MusicTimeStamp can't be mapped to a wall-clock time.
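At a single constant tempo the arithmetic is just milliseconds = beats * 60,000 / BPM; for example, beat 8 at 120 BPM falls at 8 * 60000 / 120 = 4000 ms. Once the tempo changes mid-sequence, though, you have to walk the tempo map segment by segment, which is what the sequence-based APIs do for you.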
Edit: A CoreMedia CMTime can be converted to wall times if that helps.
There's new API for this in MIKMIDI. It's in a branch (1.8) as I write this, but should be merged soon and ship in the 1.8 release. It makes the conversion you're asking about much easier.
In the context of a sequence, do:
let seconds = sequence.timeInSeconds(forMusicTimeStamp: musicTimeStamp)
There's also a method to convert in the opposite direction. MIKMIDISequencer has very similar, but more sophisticated (to account for looping, tempo override, etc.) methods to do the same kinds of conversions.
Without this new API in MIKMIDI, you can still use MusicSequenceGetSecondsForBeats(). You can get the underlying MusicSequence for an MIKMIDISequence using its musicSequence property:
var timeInSeconds = Float64(0)
// sequence here is the underlying MusicSequence (e.g. from MIKMIDISequence's musicSequence property)
MusicSequenceGetSecondsForBeats(sequence, musicTimeStamp, &timeInSeconds)
let milliseconds = timeInSeconds * 1000
As far as I know this doesn't take into account looping even if you're doing it with the MusicPlayer API, and certainly not an overridden tempo if one is set on MIKMIDISequencer, so you should prefer MIKMIDI's API above if possible.
I'm a total noob when it comes to Core Audio, so bear with me. Basically, what I want to do is record audio data from a machine's default mic, record until the user decides to stop, and then do some analysis on the entire recording. I've been learning from the book "Learning Core Audio" by Chris Adamson and Kevin Avila (which is an awesome book, found it here: http://www.amazon.com/Learning-Core-Audio-Hands-On-Programming/dp/0321636848/ref=sr_1_1?ie=UTF8&qid=1388956621&sr=8-1&keywords=learning+core+audio ). I see how the AudioQueue works, but I'm not sure how to get data as it's coming from the buffers and store it in a global array.
The biggest problem is that I can't allocate an array a priori, because we have no idea how long the user wants to record for. I'm guessing that a global array would have to be passed to the AudioQueue's callback, where it would then append data from the latest buffer; however, I'm not exactly sure how to do that, or whether that's the correct place to be doing so.
If using AudioUnits, I'm guessing that I would need to create two audio units: a RemoteIO unit to get the microphone data, and a generic output audio unit that would do the data appending in the unit's (I'm guessing here, really not sure) AudioUnitRender() function.
If you know where I need to be doing these things or know any resources that could help explain how this works, that would be awesome.
I eventually want to learn how to do this on both the iOS and Mac OS platforms. For the time being I'm just working on Mac OS.
Since you know the sample rate, your app can pre-allocate a sufficient number of new buffers (for example, in a linked list) in the UI run loop (for example, periodically, based on an NSTimer or CADisplayLink), into which you can then just copy data during the Audio Queue callbacks.
There are also a few async file write functions that are safe to call inside an audio callback. After recording you can copy the data back out of the file into a now-known-sized memory array (or just mmap the file).
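For illustration, here's a minimal sketch of that idea in C; BufferNode, PreallocateNode, and AppendSamples are made-up names, and the chunk size is whatever balances latency against memory for you:

#include <stdlib.h>
#include <string.h>
#include <MacTypes.h>   // Float32, UInt32 (assuming a Mac OS build)

//one pre-allocated chunk in a linked list of recording buffers
typedef struct BufferNode {
    Float32           *samples;
    UInt32             capacity;   //in samples
    UInt32             used;       //in samples
    struct BufferNode *next;
} BufferNode;

//run this on the UI run loop (e.g. from an NSTimer) so the audio thread never allocates
BufferNode *PreallocateNode(UInt32 capacity)
{
    BufferNode *node = calloc(1, sizeof(*node));
    if (node) {
        node->samples  = malloc(capacity * sizeof(Float32));
        node->capacity = capacity;
    }
    return node;
}

//run this inside the Audio Queue callback: copy only, never allocate here
void AppendSamples(BufferNode *node, const Float32 *data, UInt32 count)
{
    UInt32 room = node->capacity - node->used;
    UInt32 n = (count < room) ? count : room;
    memcpy(node->samples + node->used, data, n * sizeof(Float32));
    node->used += n;
    //a real implementation would move on to node->next once this node fills up
}

The point is that all the malloc/free traffic happens on the UI thread; the audio thread only ever does a memcpy.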
Ok everyone, so I figured out the answer to my own question, and I only had to add about 10 lines of code to get it to work. Basically, in the user data struct (or, as Apple calls it, the client data struct) I added a variable to keep track of the total number of recorded samples, the sample rate (just so I would have access to the value inside the callback), and a pointer to the audio data. In the callback function, I reallocated memory for the pointer and then copied the contents of the buffer into the newly allocated memory.
I'll post the code for my client recorder struct and the lines of code inside the callback function. I would like to post code for the entire program, but much of it was borrowed from the book "Learning Core Audio" by Chris Adamson and Kevin Avila, and I don't want to infringe on any copyrights held by the book (can someone tell me whether it's legal to post that here? If it is, I'd be more than happy to post the whole program).
the client recorder struct code:
//user info struct for recording audio queue callbacks
typedef struct MyRecorder {
    AudioFileID  recordFile;
    SInt64       recordPacket;
    Boolean      running;
    UInt32       totalRecordedSamples;
    Float32     *allData;
    Float64      sampleRate;
} MyRecorder;
This struct needs to be initialized in the main loop of the program. It would look something like this:
MyRecorder recoder = {0};
I know I spelled "recorder" incorrectly.
Next, what I did inside the callback function:
//now let's also write the data to a buffer that can be accessed in the rest of the program
//first calculate the number of samples in the buffer
UInt32 nSamples = (UInt32)(RECORD_SECONDS * recorder->sampleRate);
//grow the block that recorder->allData points to: pretty simple, just add the number
//of samples we just recorded to the total we already had, and multiply by the size
//of each sample, which we get from sizeof(Float32)
recorder->allData = realloc(recorder->allData, sizeof(Float32) * (nSamples + recorder->totalRecordedSamples));
//now copy what's in the current buffer into the memory we just allocated; remember
//that we don't want to overwrite what we already recorded, so using pointer
//arithmetic, offset the recorder->allData pointer in memcpy by the current value
//of totalRecordedSamples
memcpy(recorder->allData + recorder->totalRecordedSamples, inBuffer->mAudioData, sizeof(Float32) * nSamples);
//update the number of total recorded samples
recorder->totalRecordedSamples += nSamples;
And of course at the end of my program I freed the memory in the recoder.allData pointer.
Also, my experience with C is very limited, so if I'm making some mistakes especially with memory management, please let me know. Some of the malloc, realloc, memcpy etc. type functions in C sort of freak me out.
EDIT: I'm now working on how to do the same thing using AudioUnits; I'll post the solution to that when it's done.
I'm using CMTime with AVAssets for a video clip. To trim the video without saving a new video file, I just want to keep track of the start time and the duration.
The CMTimeGetSeconds() function returns a Float64; what would be the best way to store this in Core Data?
I can't use an NSNumber, as the float type rounds the Float64 way too much: 1.200000 comes out as 1.0000 when I create my NSNumber.
Thanks in advance
Based on your comments, it is highly likely that the videoTrack object adjusts the duration to a nice round number that makes sense for video playback. Try creating an NSNumber and printing it without setting it to the duration property, and you will probably get the exact same result; an NSNumber created from a double keeps the full double precision, so if you see 1.0, the value was already 1.0 before it was boxed. Also make sure the data type is set to Double in the Core Data model.
The IMediaSample SetTime() function expects two REFERENCE_TIME parameters. REFERENCE_TIME is defined as type "LongLong" in Delphi 6, the programming language I am using for my DirectShow application. However, the first parameter of the callback method that the DirectShow sample grabber filter uses to pass the sample time of a new media sample is typed as a double. How do I convert between these two values, so I can compare the sample times between media samples I receive from the sample grabber filter and the REFERENCE_TIME values that I generate in my push source filter's FillBuffer() method?
Also, would the sample time that is provided by the Sample Grabber filter in the callback method be considered the Start time of a media sample, or the End time?
Simple part: the double is in seconds, and REFERENCE_TIME is in 100 ns units. Hence the conversion is simple: multiply or divide by 1E+7.
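For example, a pair of helpers might look like this in C (the function names are made up; REFERENCE_TIME is LONGLONG, a 64-bit integer, in the DirectShow headers):

typedef long long REFERENCE_TIME;   //LONGLONG in the DirectShow headers

//1 second = 1e7 units of 100 ns
REFERENCE_TIME SecondsToReferenceTime(double seconds)
{
    return (REFERENCE_TIME)(seconds * 1.0e7 + 0.5);   //round to the nearest unit (non-negative times)
}

double ReferenceTimeToSeconds(REFERENCE_TIME rt)
{
    return (double)rt / 1.0e7;
}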
Not so simple part: you capture a time in the grabber in one filter graph, and you time stamp data in your filter in another graph. Both graphs use time stamps to indicate streaming/presentation time, which is relative to each graph's "run time". That is, when a media sample is passed between graphs, there might also be a time stamp offset involved.
As for the end time: with video media samples, the sample stop time may be omitted or set equal to the start time; with audio, the stop time is normally something you can compute by adding the duration of the payload data the buffer holds to the start time (for PCM, that's start + numSamples / sampleRate seconds, scaled by 1E+7).
Bonus reading on MSDN: Time and Clocks in DirectShow
To me it has also been a bit difficult to think in 100-nanosecond units, so I often convert between milliseconds and the 100 ns units. It is pretty trivial to write your own functions, but if you use the DirectShow base classes, there is also a macro exported in the file RefTime.h.
This would also do the conversion:
double time = 1000;
REFERENCE_TIME direct_show_time = MILLISECONDS_TO_100NS_UNITS(time);
I'm working in iOS and have a simple OpenAL project running.
The difference from most OpenAL projects I've seen is that I'm not loading in a sound file. Instead, I load an array of raw data into alBufferData. Using a couple of equations I can load in data to produce white noise, sine and pulse waves, and all is working well.
My problem is that I need a way to modify this data whilst the sound is playing in real-time.
Is there a way to modify this data without having to create a new buffer? (I tried the approach of creating a new buffer with new data and then using it instead, but it's nowhere near quick enough.)
Any help or suggestions of other ways to accomplish this would be much appreciated.
Thanks
I haven't done it on iOS, but with OpenAL on the PC what you would do is chain a few buffers together. Each buffer would hold a small time period's worth of data. Periodically, check to see whether the playing buffer is done, and if so, add it to a free list for reuse. When you want to change the sound, write the new waveform into a free buffer and add it to the chain. You select the buffer size to balance latency and required update rate: smaller buffers allow faster response to changes, but need to be generated more often.
This page suggests that a half second update rate is doable. Whether you can go faster depends on the complexity of your calculations as well as on the overhead of the OS.
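To make the chaining concrete, here's a minimal C sketch of the refill loop, assuming a source that's already playing and a hypothetical GenerateSamples() that writes your waveform:

#include <OpenAL/al.h>   //<AL/al.h> on non-Apple platforms

//call periodically (e.g. from a timer); GenerateSamples fills `out` with
//`count` 16-bit mono samples of whatever waveform you're producing
void UpdateStream(ALuint source, void (*GenerateSamples)(short *out, int count))
{
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buffer;
        alSourceUnqueueBuffers(source, 1, &buffer);   //reclaim a finished buffer
        short pcm[4410];                              //0.1 s of mono 16-bit at 44.1 kHz
        GenerateSamples(pcm, 4410);
        alBufferData(buffer, AL_FORMAT_MONO16, pcm, sizeof(pcm), 44100);
        alSourceQueueBuffers(source, 1, &buffer);     //put it back at the end of the chain
    }
}

With 0.1 s buffers you'd call this at least every 100 ms; smaller buffers cut the latency but raise the update rate, which is exactly the trade-off described above.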
Changing the data during playback is not supported in OpenAL.
However, you can still try it and see if you get acceptable results (though you'll be racing against the OpenAL playback mechanism, and any lag-outs in your app could throw it off, so do this at your own risk).
There's an Apple extension version of alBufferData that tells OpenAL to use the data you give it directly, rather than making its own local copy. You set it up like so:
typedef ALvoid AL_APIENTRY (*alBufferDataStaticProcPtr)(const ALint bid,
                                                        ALenum format,
                                                        const ALvoid *data,
                                                        ALsizei size,
                                                        ALsizei freq);

static alBufferDataStaticProcPtr alBufferDataStatic = NULL;
alBufferDataStatic = (alBufferDataStaticProcPtr) alcGetProcAddress(NULL, (const ALCchar *) "alBufferDataStatic");
Call alBufferDataStatic() like you would call alBufferData():
alBufferDataStatic(bufferId, format, data, size, frequency);
Since OpenAL is now using your sound data buffer rather than its own copy, you can conceivably modify that data and it will be none the wiser (provided you're not modifying things too close to where it's currently playing in the buffer).
However, this approach is risky, since it depends on timing you're not fully in control of. To be 100% safe you'll need to use Audio Units.
I'm using jFugue to parse a MIDI file, and it always parses the tempo incorrectly (I know that the tempo is 140, and it is saying that the tempo is 720). At first I thought that it might, somehow, be multiplying the actual tempo by some number, but that's not it. The number it's giving me is somehow related to the tempo, but I don't know how. This whole thing is very confusing; any help would be greatly appreciated.
Here it says that if you're using a version of JFugue before 4.0, tempo is stored as microseconds per beat, which is 60000 / BPM
http://www.jfugue.org/javadoc/org/jfugue/Tempo.html
Correction:
The conversion information on that page is incorrect.
The tempo value is microseconds per beat (i.e., per quarter note) = 60,000,000 / BPM. (PPQ, pulses per quarter, is a different quantity: the file's timing resolution in ticks.)
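As a sanity check on the arithmetic: at 140 BPM that works out to 60,000,000 / 140 ≈ 428,571 microseconds per beat, and dividing 60,000,000 by a microseconds-per-beat value recovers the BPM.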