Saving a very simple MusicSequence as MIDI doesn't produce sound - iOS

I'm trying to save a very basic one-note MusicSequence (MusicSequence Reference) into a MIDI file. The file is being written, and the duration is respected (if I set the duration to 4 the MIDI file lasts 2 seconds; if I change it to 2 it lasts 1 second, as it should), but no sound is produced, and if I open the MIDI file in Logic there is no note information either. It seems the note duration gets written but the note's pitch isn't.
What could be happening?
+ (MusicSequence)getSequence
{
    MusicSequence mySequence;
    MusicTrack myTrack;
    NewMusicSequence(&mySequence);
    MusicSequenceNewTrack(mySequence, &myTrack);

    MIDINoteMessage noteMessage;
    MusicTimeStamp timestamp = 0;
    noteMessage.channel = 0;
    noteMessage.note = 4;
    noteMessage.velocity = 90;
    noteMessage.releaseVelocity = 0;
    noteMessage.duration = 4;

    if (MusicTrackNewMIDINoteEvent(myTrack, timestamp, &noteMessage) != noErr) NSLog(@"ERROR creating the note");
    else NSLog(@"Note added");

    return mySequence;
}

Try writing a note that is > 20 and < 109 (roughly the range of a standard piano). While 4 may be technically valid MIDI, it is outside the range of normal notes.
Also, a useful function when working with Core Audio / MusicPlayer etc. is CAShow() - so try CAShow(sequence) to view the sequence data.
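For reference, here is a minimal Swift sketch of the same idea with an audible note (middle C) and an explicit file-writing step via MusicSequenceFileCreate; the output URL and the 480-tick resolution are my own choices, not from the question:

import AudioToolbox

func writeOneNoteSequence(to url: URL) -> OSStatus {
    var sequence: MusicSequence?
    var track: MusicTrack?
    NewMusicSequence(&sequence)
    guard let sequence = sequence else { return -1 }
    MusicSequenceNewTrack(sequence, &track)
    guard let track = track else { return -1 }

    // Middle C (60) instead of 4, so the note sits in the normal range.
    var note = MIDINoteMessage(channel: 0, note: 60, velocity: 90,
                               releaseVelocity: 0, duration: 2.0)
    MusicTrackNewMIDINoteEvent(track, 0, &note)

    // CAShow(UnsafeMutableRawPointer(sequence))  // optional: dump the sequence while debugging

    // Write a standard MIDI file; 480 ticks per quarter note is a common resolution.
    return MusicSequenceFileCreate(sequence, url as CFURL, .midiType, .eraseFile, 480)
}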

Related

iOS Bluetooth Performing Write Long

I'm working on a project with an iPhone connecting to an ESP32 using BLE. I'm trying to write a 528-byte blob to a characteristic. Since the blob is longer than the maximum of 512 bytes allowed per write, I'm trying to do a Write Long.
I've tried a couple of things:
1 - If I just try to write the blob, I see the first chunk go through with Prepare Write set, but there are no subsequent writes. Why is it stopping after the first chunk?
2 - If I try to chunk it manually based on the size returned from maximumWriteValueLengthForType, I see all the data is sent correctly, but Prepare Write is not set so the chunks aren't handled correctly. How do I specify Prepare Write / Execute Write requests?
Here's a code snippet covering implementation #2:
NSData *blob = [request value];
NSUInteger localLength = 0;
NSUInteger totalLength = [blob length];
NSUInteger chunkSize = [peripheral maximumWriteValueLengthForType:type];
uint8_t localBytes[chunkSize];
NSData *localData;
do
{
    if (totalLength > chunkSize) {
        NSLog(@"BIGGER THAN CHUNK!!!!!!!!!!!!!!!!!!!!!");
        NSLog(@"%tu", chunkSize);
        for (int i = 0; i < chunkSize; i++) {
            localBytes[i] = ((uint8_t *)blob.bytes)[localLength + i];
        }
        localData = [NSMutableData dataWithBytes:localBytes length:chunkSize];
        totalLength -= chunkSize;
        localLength += chunkSize;
    }
    else {
        NSLog(@"Smaller than chunk");
        uint8_t lastBytes[totalLength];
        for (int i = 0; i < totalLength; i++) {
            lastBytes[i] = ((uint8_t *)blob.bytes)[localLength + i];
        }
        localData = [NSMutableData dataWithBytes:lastBytes length:totalLength];
        totalLength = 0;
    }
    // Write to characteristic
    [peripheral writeValue:localData forCharacteristic:characteristic type:type];
} while (totalLength > 0);
Long writes are subject to the same limit of 512 bytes maximum for the characteristic value. Long writes are only useful when the MTU is too short to write the full value in one packet, so you may simply be trying to write outside this allowed range.
Newer iOS versions communicating with BLE 5 devices use a large enough MTU to fit a 512-byte characteristic value in one packet (if the remote device also supports such a big MTU).
If you want to write values bigger than 512 bytes, you will need to split them up into multiple writes at the application level, so that the second write "overwrites" the first value sent rather than appending to it. You can also use an L2CAP CoC instead, which eliminates this arbitrary 512-byte limit.
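If you go the L2CAP route, the central opens a channel to a PSM published by the peripheral and then streams bytes over it. A rough Swift sketch of the central side (the PSM and the reassembly on the ESP32 side are assumptions, not something CoreBluetooth handles for you):

import CoreBluetooth

final class L2CAPSender: NSObject, CBPeripheralDelegate, StreamDelegate {
    private var channel: CBL2CAPChannel?
    private var pending = Data()

    func send(_ blob: Data, over peripheral: CBPeripheral, psm: CBL2CAPPSM) {
        pending = blob
        peripheral.delegate = self
        peripheral.openL2CAPChannel(psm)        // the peripheral must publish this PSM
    }

    func peripheral(_ peripheral: CBPeripheral, didOpen channel: CBL2CAPChannel?, error: Error?) {
        guard error == nil, let channel = channel else { return }
        self.channel = channel
        channel.outputStream.delegate = self
        channel.outputStream.schedule(in: .main, forMode: .default)
        channel.outputStream.open()
    }

    // The output stream tells us when it can accept more bytes.
    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        guard eventCode == .hasSpaceAvailable,
              let output = channel?.outputStream, !pending.isEmpty else { return }
        let written = pending.withUnsafeBytes { raw -> Int in
            guard let base = raw.bindMemory(to: UInt8.self).baseAddress else { return 0 }
            return output.write(base, maxLength: pending.count)
        }
        if written > 0 { pending.removeFirst(written) }
    }
}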
You have the right general approach but you can't just send the chunks sequentially. There is a limited buffer for sending Bluetooth data and your loop will write data into that buffer more quickly than the Bluetooth hardware can send it.
The exact approach you need to take depends on whether your characteristic supports write with response or write without response.
If your characteristic uses write with response, you should send a chunk and then wait until you get a call to the didWriteValueFor delegate method. You can then write the next chunk. The advantage of this approach is essentially guaranteed delivery of the data. The disadvantage is it is relatively slow.
If your characteristic uses write without response, you call write repeatedly until you get a call to didWriteValueFor with an error. At that point you have to wait for a call to peripheralIsReady(toSendWriteWithoutResponse:), after which you can start writing again, beginning with the last failed write.
With this approach there is the potential for lost data, but it is faster.
If you have to move large amounts of data, an L2Cap stream might be better, but you need to handle data framing.
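A minimal sketch of the write-with-response approach described above (this sends plain sequential writes that the firmware has to reassemble; it does not use the ATT Prepare/Execute Write mechanism):

import CoreBluetooth

final class ChunkedWriter: NSObject, CBPeripheralDelegate {
    private var pending = Data()
    private weak var peripheral: CBPeripheral?
    private var characteristic: CBCharacteristic?

    func send(_ blob: Data, to peripheral: CBPeripheral, characteristic: CBCharacteristic) {
        pending = blob
        self.peripheral = peripheral
        self.characteristic = characteristic
        peripheral.delegate = self
        writeNextChunk()
    }

    private func writeNextChunk() {
        guard let peripheral = peripheral, let characteristic = characteristic,
              !pending.isEmpty else { return }
        let chunkSize = peripheral.maximumWriteValueLength(for: .withResponse)
        let chunk = pending.prefix(chunkSize)
        pending.removeFirst(chunk.count)
        peripheral.writeValue(Data(chunk), for: characteristic, type: .withResponse)
    }

    // Only send the next chunk once the previous one has been acknowledged.
    func peripheral(_ peripheral: CBPeripheral, didWriteValueFor characteristic: CBCharacteristic, error: Error?) {
        guard error == nil else { return }      // handle / retry in real code
        writeNextChunk()
    }
}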

MIDI packet list does not change timestamp

I have this code from here (Using MIDIPacketList in swift), but I can't PM the user or comment on that question, so I'll ask my question here.
@ephemer, if you see this: I love your code for building a MIDIPacketList and it works very well, but when I change the timestamp nothing happens. It should add some delay, yet it behaves the same as a timestamp of 0.
Does anyone know how to fix this?
Also, how can I get the timestamp out of this extension so that I can set it per MIDI event, i.e. to have it here:
var packets = MIDIPacketList(midiEvents: [[0x90, 60, 100]])

public typealias MidiEvent = [UInt8]

extension MIDIPacketList {
    init(midiEvents: [MidiEvent]) {
        let timestamp = MIDITimeStamp(0) // do it now
        let totalBytesInAllEvents = midiEvents.reduce(0) { total, event in
            return total + event.count
        }
        // Without this, we'd run out of space for the last few MidiEvents
        let listSize = MemoryLayout<MIDIPacketList>.size + totalBytesInAllEvents
        // CoreMIDI supports up to 65536 bytes, but in practical tests it seems
        // certain devices accept much less than that at a time. Unless you're
        // turning on / off ALL notes at once, 256 bytes should be plenty.
        assert(totalBytesInAllEvents < 256,
               "The packet list was too long! Split your data into multiple lists.")
        // Allocate space for a certain number of bytes
        let byteBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: listSize)
        // Use that space for our MIDIPacketList
        self = byteBuffer.withMemoryRebound(to: MIDIPacketList.self, capacity: 1) { packetList -> MIDIPacketList in
            var packet = MIDIPacketListInit(packetList)
            midiEvents.forEach { event in
                packet = MIDIPacketListAdd(packetList, listSize, packet, timestamp, event.count, event)
            }
            return packetList.pointee
        }
        byteBuffer.deallocate() // release the manually managed memory
    }
}

// to send use this
var packets = MIDIPacketList(midiEvents: [[0x90, 60, 100]])
MIDISend(clientOutputPort, destination, &packets)
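One way to get the timestamp out of the extension, as asked above, is to make it an initializer parameter. A sketch adapted from the code above (my variation, untested; it reuses the MidiEvent typealias):

import CoreMIDI

extension MIDIPacketList {
    init(midiEvents: [MidiEvent], timestamp: MIDITimeStamp) {
        let totalBytes = midiEvents.reduce(0) { $0 + $1.count }
        let listSize = MemoryLayout<MIDIPacketList>.size + totalBytes
        let byteBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: listSize)
        self = byteBuffer.withMemoryRebound(to: MIDIPacketList.self, capacity: 1) { packetList -> MIDIPacketList in
            var packet = MIDIPacketListInit(packetList)
            midiEvents.forEach { event in
                // Each event in this list is stamped with the caller's timestamp.
                packet = MIDIPacketListAdd(packetList, listSize, packet, timestamp, event.count, event)
            }
            return packetList.pointee
        }
        byteBuffer.deallocate()
    }
}

// Usage: schedule a note-on at a specific host time.
// var packets = MIDIPacketList(midiEvents: [[0x90, 60, 100]], timestamp: someHostTime)
// MIDISend(clientOutputPort, destination, &packets)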
I figured out roughly how it should work, but not quite.
According to Apple's documentation, it should use mach_absolute_time().
So I used that, but I don't know how to work with mach_absolute_time() properly. If anyone knows, please tell me.
If I use var timestamp = mach_absolute_time() + 1000000000 it delays by about a minute; any number lower than that produces no delay at all.
Does anyone know how to work with mach_absolute_time()?
I've seen some code using mach_absolute_time(), but only as a timer (it actually measures time since boot). How do I produce a mach_absolute_time() value that works as a MIDI timestamp?
I found the answer.
set the timestamp to :
var delay = 0.2
var timestamp: MIDITimeStamp = mach_absolute_time() + MIDITimeStamp(delay * 1_000_000_000)
delay is the time, in seconds, by which you want the MIDI message to be delayed.
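Note that mach_absolute_time() counts host ticks, which only happen to equal nanoseconds on some hardware. Converting seconds through mach_timebase_info makes the delay device-independent; a small sketch of that conversion (my addition, not part of the answer above):

import CoreMIDI
import Darwin

func midiTimeStamp(secondsFromNow seconds: Double) -> MIDITimeStamp {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    // nanoseconds = ticks * numer / denom, so ticks = nanoseconds * denom / numer
    let ticks = seconds * 1_000_000_000 * Double(timebase.denom) / Double(timebase.numer)
    return mach_absolute_time() + MIDITimeStamp(ticks)
}

// Example: stamp a packet 0.2 seconds in the future.
// let timestamp = midiTimeStamp(secondsFromNow: 0.2)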

Set timestamp in CMSampleBuffer using AVAssetWriter not working

Hello, I'm working on an app that records video + audio. The video source is the camera, and the audio comes from streaming. My problem happens when the connection to the stream is closed for some reason; in that case I switch the audio source to the built-in mic. The problem is that the audio is then not synchronised at all. I would like to leave a gap in the audio and set the timestamps in real time according to the current video timestamp, but AVAssetWriter seems to append the frames from the built-in mic consecutively, as if it ignores the timestamp.
Do you know why AVAssetWriter is ignoring the timestamp?
EDIT:
This is the code that gets the latest video timestamp:
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef renderedPixelBuffer = NULL;
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    self.lastVideoTimestamp = timestamp;
    // ...
}
and this is the code that I use to synchronise the audio coming from the built-in mic when the stream is disconnected:
// Retime the buffer first, then release the original (releasing it before use would be a use-after-free).
CMSampleBufferRef adjustedBuffer = [self adjustTime:sampleBuffer by:self.lastVideoTimestamp];
CFRelease(sampleBuffer);
sampleBuffer = adjustedBuffer;
// Adjust a CMSampleBuffer's timing by subtracting an offset
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset
{
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
    for (CMItemCount i = 0; i < count; i++)
    {
        pInfo[i].decodeTimeStamp = kCMTimeInvalid; // CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
        pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef sout;
    CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
    free(pInfo);
    return sout;
}
That is what I want to do:
Video
--------------------------------------------------------------------
Stream audio                      (disconnect)           Built-in mic
-----------------------------------                      -----------------
As you can see there is a gap with no audio, because the audio coming from the stream was disconnected and some of it may never have arrived.
What it currently does:
Video
--------------------------------------------------------------------
Stream audio                      (disconnect)           Built-in mic
--------------------------------------------------------------------

AVAudioEngine: seeking to a time in the song

I am playing a song using AVAudioPlayerNode and I am trying to control its playback position with a UISlider, but I can't figure out how to seek using AVAudioEngine.
After MUCH trial and error I think I have finally figured this out.
First you need to know the sample rate the player is running at. To do this, get the last render time of your player node:
let nodeTime = playerNode.lastRenderTime!                       // optional: nil if the node isn't rendering yet
let playerTime = playerNode.playerTime(forNodeTime: nodeTime)!
let sampleRate = playerTime.sampleRate
Then, multiply the sample rate by the new time in seconds. This gives you the exact frame of the song at which you want to start the player:
let newSampleTime = AVAudioFramePosition(sampleRate * Double(Slider.value))
Next, calculate the number of frames left to play in the audio file:
let length = Float(songDuration!) - Slider.value
let framesToPlay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
Finally, stop your node, schedule the new segment of audio, and start your node again!
playerNode.stop()
if framesToPlay > 1000 {
    playerNode.scheduleSegment(audioFile, startingFrame: newSampleTime, frameCount: framesToPlay, at: nil, completionHandler: nil)
}
playerNode.play()
If you need further explanation I wrote a short tutorial here: http://swiftexplained.com/?p=9
For future readers, it's probably better to get the sample rate as:
playerNode.outputFormat(forBus: 0).sampleRate
Also take care when converting to AVAudioFramePosition, as it is an integer while the sample rate is a double; without rounding you may end up with undesirable results.
P.S. The above answer assumes that the file you are playing has the same sample rate as the output format of the player, which may or may not be true.
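Putting the answer and the notes above together, a consolidated sketch might look like this (playerNode, audioFile and songDuration are assumed to already exist in your player class, and the file is assumed to match the player's output sample rate):

import AVFoundation

// Inside your player class:
func seek(to seconds: Double) {
    let sampleRate = playerNode.outputFormat(forBus: 0).sampleRate
    let startFrame = AVAudioFramePosition((seconds * sampleRate).rounded())
    let remainingSeconds = max(0, songDuration - seconds)
    let framesToPlay = AVAudioFrameCount((remainingSeconds * sampleRate).rounded())

    playerNode.stop()
    if framesToPlay > 1000 {
        playerNode.scheduleSegment(audioFile,
                                   startingFrame: startFrame,
                                   frameCount: framesToPlay,
                                   at: nil,
                                   completionHandler: nil)
    }
    playerNode.play()
}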

How to get the current captured timestamp of Camera data from CMSampleBufferRef in iOS

I developed an iOS application which saves captured camera data into a file, and I used
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
to capture the CMSampleBufferRef; the frames are encoded to H264 and saved to a file using AVAssetWriter.
I followed the sample source code to create this app.
Now I want to get the timestamps of the saved video frames to create a new movie file. For this, I have done the following:
Locate the file and create an AVAssetReader to read it:
CMSampleBufferRef sample = [asset_reader_output copyNextSampleBuffer];
CMSampleBufferRef buffer;
while ([assetReader status] == AVAssetReaderStatusReading) {
    buffer = [asset_reader_output copyNextSampleBuffer];
    CMTime presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(buffer);
    UInt32 timeStamp = (1000 * presentationTimeStamp.value) / presentationTimeStamp.timescale;
    NSLog(@"timestamp %u", (unsigned int)timeStamp);
    NSLog(@"reading");
    // CFRelease(buffer);
}
The printed value gives me a wrong timestamp; I need the frame's capture time.
Is there any way to get the frame's captured timestamp?
I've read an answer about getting the timestamp, but it doesn't really address my question above.
Update:
I read the sample timestamp before it is written to the file and it gives me a value like 33333.23232. After reading the file back, it gives me a different value. Is there any specific reason for this?
The file timestamps are different from the capture timestamps because they are relative to the beginning of the file. This means they are the capture timestamps you want, minus the timestamp of the very first frame captured:
presentationTimeStamp = fileFramePresentationTime + firstFrameCaptureTime
So when reading from the file, this should calculate the capture timestamp you want:
CMTime firstCaptureFrameTimeStamp = // the first capture timestamp you see
CMTime presentationTimeStamp = CMTimeAdd(CMSampleBufferGetPresentationTimeStamp(buffer), firstCaptureFrameTimeStamp);
If you do this calculation between launches of your app, you'll need to serialise and deserialise the first frame capture time, which you can do with CMTimeCopyAsDictionary and CMTimeMakeFromDictionary.
You could store this in the output file, via AVAssetWriter's metadata property.
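A short sketch of that serialisation round trip (Swift shown here; the same C functions are available from Objective-C, and firstCaptureFrameTimeStamp is the value you saved at capture time):

import CoreMedia

// Serialise the first capture timestamp so it can be stored (e.g. in the writer's metadata).
func archive(_ time: CMTime) -> CFDictionary? {
    return CMTimeCopyAsDictionary(time, allocator: kCFAllocatorDefault)
}

// Rebuild it later and add it back onto the file-relative timestamps.
func unarchive(_ dictionary: CFDictionary?) -> CMTime {
    return CMTimeMakeFromDictionary(dictionary)
}

// let captureTime = CMTimeAdd(CMSampleBufferGetPresentationTimeStamp(buffer),
//                             unarchive(storedDictionary))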
