Set timestamp in CMSampleBuffer using AVAssetWriter not working - iOS

Hello, I'm working on an app that records video + audio. The video source is the camera, and the audio comes from a stream. My problem happens when the connection to the stream is closed for some reason; in that case I switch the audio source to the built-in mic. The problem is that the audio is not synchronised at all. I would like to leave a gap in my audio and then set the timestamps in real time according to the current video timestamp. It seems AVAssetWriter appends the frames from the built-in mic consecutively and ignores the timestamps.
Do you know why AVAssetWriter is ignoring the timestamps?
EDIT:
This is the code that gets the latest video timestamp:
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef renderedPixelBuffer = NULL;
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    self.lastVideoTimestamp = timestamp;
and this is the code that I use to synchronise the audio coming from the built-in mic when the stream is disconnected:
CMSampleBufferRef adjustedBuffer = [self adjustTime:sampleBuffer by:self.lastVideoTimestamp];
CFRelease(sampleBuffer);
sampleBuffer = adjustedBuffer;
//Adjust CMSampleBufferFunction
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset
{
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
    for (CMItemCount i = 0; i < count; i++)
    {
        pInfo[i].decodeTimeStamp = kCMTimeInvalid; //CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
        pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef sout;
    CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
    free(pInfo);
    return sout;
}
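For reference, a small sketch of how I check what the retimed buffer actually carries (illustrative only; the variable names here are mine):

// Sketch: log the presentation timestamps before and after retiming,
// to compare against the last written video timestamp.
CMTime before = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMSampleBufferRef adjusted = [self adjustTime:sampleBuffer by:self.lastVideoTimestamp];
CMTime after = CMSampleBufferGetPresentationTimeStamp(adjusted);
NSLog(@"mic PTS before: %f, after: %f, last video PTS: %f",
      CMTimeGetSeconds(before), CMTimeGetSeconds(after),
      CMTimeGetSeconds(self.lastVideoTimestamp));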
This is what I want to do:

Video
--------------------------------------------------------------------
Stream                         disconnect stream         Built in mic
-----------------------------------                       ------------

I would like to get this: as you can see, there is a gap with no audio, because the audio coming from the stream was disconnected and part of it may never have been received.
What it is currently doing:

Video
--------------------------------------------------------------------
Stream                         disconnect stream         Built in mic
--------------------------------------------------------------------

Related

Unity video player audio skipping [duplicate]

MovieTexture is finally deprecated after the Unity 5.6.0b1 release, and a new API that plays video on both desktop and mobile devices is now available.
VideoPlayer and VideoClip can be used to play video and retrieve the texture for each frame if needed.
I've managed to get the video working but couldn't get the audio to play as well from the Editor on Windows 10. Does anyone know why the audio is not playing?
//Raw Image to Show Video Images [Assign from the Editor]
public RawImage image;
//Video To Play [Assign from the Editor]
public VideoClip videoToPlay;
private VideoPlayer videoPlayer;
private VideoSource videoSource;
//Audio
private AudioSource audioSource;

// Use this for initialization
void Start()
{
    Application.runInBackground = true;
    StartCoroutine(playVideo());
}

IEnumerator playVideo()
{
    //Add VideoPlayer to the GameObject
    videoPlayer = gameObject.AddComponent<VideoPlayer>();

    //Add AudioSource
    audioSource = gameObject.AddComponent<AudioSource>();

    //Disable Play on Awake for both Video and Audio
    videoPlayer.playOnAwake = false;
    audioSource.playOnAwake = false;

    //We want to play from video clip not from url
    videoPlayer.source = VideoSource.VideoClip;

    //Set video To Play then prepare Audio to prevent Buffering
    videoPlayer.clip = videoToPlay;
    videoPlayer.Prepare();

    //Wait until video is prepared
    while (!videoPlayer.isPrepared)
    {
        Debug.Log("Preparing Video");
        yield return null;
    }
    Debug.Log("Done Preparing Video");

    //Set Audio Output to AudioSource
    videoPlayer.audioOutputMode = VideoAudioOutputMode.AudioSource;

    //Assign the Audio from Video to AudioSource to be played
    videoPlayer.EnableAudioTrack(0, true);
    videoPlayer.SetTargetAudioSource(0, audioSource);

    //Assign the Texture from Video to RawImage to be displayed
    image.texture = videoPlayer.texture;

    //Play Video
    videoPlayer.Play();

    //Play Sound
    audioSource.Play();

    Debug.Log("Playing Video");
    while (videoPlayer.isPlaying)
    {
        Debug.LogWarning("Video Time: " + Mathf.FloorToInt((float)videoPlayer.time));
        yield return null;
    }
    Debug.Log("Done Playing Video");
}
Found the problem. Below is the FIXED code that plays Video and Audio:
//Raw Image to Show Video Images [Assign from the Editor]
public RawImage image;
//Video To Play [Assign from the Editor]
public VideoClip videoToPlay;
private VideoPlayer videoPlayer;
private VideoSource videoSource;
//Audio
private AudioSource audioSource;

// Use this for initialization
void Start()
{
    Application.runInBackground = true;
    StartCoroutine(playVideo());
}

IEnumerator playVideo()
{
    //Add VideoPlayer to the GameObject
    videoPlayer = gameObject.AddComponent<VideoPlayer>();

    //Add AudioSource
    audioSource = gameObject.AddComponent<AudioSource>();

    //Disable Play on Awake for both Video and Audio
    videoPlayer.playOnAwake = false;
    audioSource.playOnAwake = false;

    //We want to play from video clip not from url
    videoPlayer.source = VideoSource.VideoClip;

    //Set Audio Output to AudioSource
    videoPlayer.audioOutputMode = VideoAudioOutputMode.AudioSource;

    //Assign the Audio from Video to AudioSource to be played
    videoPlayer.EnableAudioTrack(0, true);
    videoPlayer.SetTargetAudioSource(0, audioSource);

    //Set video To Play then prepare Audio to prevent Buffering
    videoPlayer.clip = videoToPlay;
    videoPlayer.Prepare();

    //Wait until video is prepared
    while (!videoPlayer.isPrepared)
    {
        Debug.Log("Preparing Video");
        yield return null;
    }
    Debug.Log("Done Preparing Video");

    //Assign the Texture from Video to RawImage to be displayed
    image.texture = videoPlayer.texture;

    //Play Video
    videoPlayer.Play();

    //Play Sound
    audioSource.Play();

    Debug.Log("Playing Video");
    while (videoPlayer.isPlaying)
    {
        Debug.LogWarning("Video Time: " + Mathf.FloorToInt((float)videoPlayer.time));
        yield return null;
    }
    Debug.Log("Done Playing Video");
}
Why Audio was not playing:
//Set Audio Output to AudioSource
videoPlayer.audioOutputMode = VideoAudioOutputMode.AudioSource;
//Assign the Audio from Video to AudioSource to be played
videoPlayer.EnableAudioTrack(0, true);
videoPlayer.SetTargetAudioSource(0, audioSource);
must be called before videoPlayer.Prepare(), not after it. It took hours of experimenting to find out that this was the problem I was having.
Stuck at "Preparing Video"?
Wait 5 seconds after videoPlayer.Prepare(); is called then exit the while loop.
Replace:
while (!videoPlayer.isPrepared)
{
    Debug.Log("Preparing Video");
    yield return null;
}
with:
//Wait until video is prepared
WaitForSeconds waitTime = new WaitForSeconds(5);
while (!videoPlayer.isPrepared)
{
    Debug.Log("Preparing Video");
    //Prepare/Wait for 5 seconds only
    yield return waitTime;
    //Break out of the while loop after 5 seconds wait
    break;
}
This should work, but you may experience buffering when the video starts playing. While using this temporary fix, my suggestion is to file a bug titled "videoPlayer.isPrepared always true", because this is a bug.
Some people also fixed it by changing:
videoPlayer.playOnAwake = false;
audioSource.playOnAwake = false;
to
videoPlayer.playOnAwake = true;
audioSource.playOnAwake = true;
Play Video From URL:
Replace:
//We want to play from video clip not from url
videoPlayer.source = VideoSource.VideoClip;
with:
//We want to play from url
videoPlayer.source = VideoSource.Url;
videoPlayer.url = "http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4";
then Remove:
public VideoClip videoToPlay; and videoPlayer.clip = videoToPlay; as these are not needed anymore.
Play Video From StreamingAssets folder:
string url = "file://" + Application.streamingAssetsPath + "/" + "VideoName.mp4";
#if !UNITY_EDITOR && UNITY_ANDROID
url = Application.streamingAssetsPath + "/" + "VideoName.mp4";
#endif
//We want to play from url
videoPlayer.source = VideoSource.Url;
videoPlayer.url = url;
All supported video formats:
ogv
vp8
webm
mov
dv
mp4
m4v
mpg
mpeg
Extra supported video formats on Windows:
avi
asf
wmf
Some of these formats don't work on some platforms. See this post for more information on supported video formats.
Similar to what the other answers have been saying: you could use callbacks for the prepare-completed and end-of-video states, rather than using coroutines and yield return.
videoPlayer.loopPointReached += EndReached;
videoPlayer.prepareCompleted += PrepareCompleted;
void PrepareCompleted(VideoPlayer vp) {
    vp.Play();
}

void EndReached(VideoPlayer vp) {
    // do something
}
I used @Programmer's answer to play videos from a URL, but I couldn't get any sound to play. Eventually I found the answer in the comments of a YouTube tutorial.
To get the audio to play for a movie loaded via URL, you need to add the following line before the call to EnableAudioTrack:
videoPlayer.controlledAudioTrackCount = 1;
By now the VideoPlayer should be updated enough that you don't need to write code to get it to work correctly. Here are the settings I found to have the most desirable effect:
These settings are:
Video Player:
Play On Awake: True
Wait For First Frame: False
Audio Output Mode: None
Audio Source:
Play On Awake: True
Don't forget to have a VideoClip for the VideoPlayer and an AudioClip for the AudioSource. The file formats I found to work the best are .ogv for video and .wav for audio.

How to get the current captured timestamp of Camera data from CMSampleBufferRef in iOS

I developed an iOS application that saves captured camera data to a file, and I used
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
to capture the CMSampleBufferRef; the frames are encoded into H264 format and saved to a file using AVAssetWriter.
I followed the sample source code to create this app:
Now I want to get the timestamps of the saved video frames to create a new movie file. For this, I have done the following things:
Locate the file and create an AVAssetReader to read the file:
CMSampleBufferRef sample = [asset_reader_output copyNextSampleBuffer];
CMSampleBufferRef buffer;
while ([assestReader status] == AVAssetReaderStatusReading) {
    buffer = [asset_reader_output copyNextSampleBuffer];
    // CMSampleBufferGetPresentationTimeStamp(buffer);
    CMTime presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(buffer);
    UInt32 timeStamp = (1000 * presentationTimeStamp.value) / presentationTimeStamp.timescale;
    NSLog(@"timestamp %u", (unsigned int)timeStamp);
    NSLog(@"reading");
    // CFRelease(buffer);
}
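For completeness, a sketch of the reader setup the loop above assumes (illustrative only; filePath and the nil outputSettings are placeholders):

// Sketch: open the recorded file and create the reader/output used in the loop above.
// 'filePath' is a placeholder for wherever AVAssetWriter wrote the movie.
NSURL *fileURL = [NSURL fileURLWithPath:filePath];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
NSError *error = nil;
AVAssetReader *assestReader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
AVAssetReaderTrackOutput *asset_reader_output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil];
[assestReader addOutput:asset_reader_output];
[assestReader startReading];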
The printed value gives me the wrong timestamp; I need the frame's capture time.
Is there any way to get the frame's capture timestamp?
I've read an answer about getting the timestamp, but it doesn't properly address my question above.
Update:
I read the sample timestamp before it is written to the file, and it gave me a value like 33333.23232. When I then read the file back, it gave me a different value. Is there any specific reason for this?
The file timestamps are different from the capture timestamps because they are relative to the beginning of the file. This means they are the capture timestamps you want minus the timestamp of the very first frame captured:
presentationTimeStamp = fileFramePresentationTime + firstFrameCaptureTime
So when reading from the file, this should calculate the capture timestamp you want:
CMTime firstCaptureFrameTimeStamp = // the first capture timestamp you see
CMTime presentationTimeStamp = CMTimeAdd(CMSampleBufferGetPresentationTimeStamp(buffer), firstCaptureFrameTimeStamp);
If you do this calculation between launches of your app, you'll need to serialise and deserialise the first frame capture time, which you can do with CMTimeCopyAsDictionary and CMTimeMakeFromDictionary.
You could store this in the output file, via AVAssetWriter's metadata property.
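A minimal sketch of that serialisation round trip (error handling omitted):

// Serialise the first capture timestamp (e.g. to store alongside, or inside, the file).
CFDictionaryRef timeDict = CMTimeCopyAsDictionary(firstCaptureFrameTimeStamp, kCFAllocatorDefault);

// ...later, rebuild the CMTime from the dictionary you stored.
CMTime restored = CMTimeMakeFromDictionary(timeDict);
CFRelease(timeDict);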

Getting or setting the audio format that AUGraphAddRenderNotify receives

Is it possible to set the audio format for an AUGraphAddRenderNotify callback? If not, is it possible just to see what the format is at init time?
I have a very simple AUGraph which plays audio from a kAudioUnitSubType_AudioFilePlayer to a kAudioUnitSubType_RemoteIO. I'm doing some live processing on the audio, so I've added an AUGraphAddRenderNotify callback to the graph to do it there. This all works fine, but when I initialise the graph I need to set up a couple of buffers and some other data for my processing, and I need to know what format will be supplied in the callback. (On some devices it's interleaved, on others it's not; that's fine, I just need to know which.)
Here's the setup:
NewAUGraph(&audioUnitGraph);
AUNode playerNode;
AUNode outputNode;
AudioComponentDescription playerDescription = {
    .componentType = kAudioUnitType_Generator,
    .componentSubType = kAudioUnitSubType_AudioFilePlayer,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponentDescription outputDescription = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(audioUnitGraph, &playerDescription, &playerNode);
AUGraphAddNode(audioUnitGraph, &outputDescription, &outputNode);
AUGraphOpen(audioUnitGraph);
AUGraphNodeInfo(audioUnitGraph, playerNode, NULL, &playerAudioUnit);
AUGraphNodeInfo(audioUnitGraph, outputNode, NULL, &outputAudioUnit);
// Tried adding all manner of AudioUnitSetProperty() calls here to set the AU formats
AUGraphConnectNodeInput(audioUnitGraph, playerNode, 0, outputNode, 0);
AUGraphAddRenderNotify(audioUnitGraph, render, (__bridge void *)self);
AUGraphInitialize(audioUnitGraph);
// Some time later...
// - Set up audio file in the file player
// - Start the graph with AUGraphStart()
I can understand that altering the formats used by the two audio units may not have any effect on the format 'seen' at the point the AUGraph renders into its callback (as this is downstream of them), but surely there is a way to know at init time what that format will be?
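The closest workaround I can think of is to query the output unit's client-side format after AUGraphInitialize; a sketch of that idea follows (it is an assumption on my part that this matches the format the render notify sees):

// Sketch (assumption): read the stream format on the input scope of the RemoteIO
// unit's output element, after AUGraphInitialize().
AudioStreamBasicDescription callbackFormat = {0};
UInt32 size = sizeof(callbackFormat);
OSStatus status = AudioUnitGetProperty(outputAudioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       0,
                                       &callbackFormat,
                                       &size);
if (status == noErr) {
    BOOL nonInterleaved = (callbackFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved) != 0;
    NSLog(@"render format: %.0f Hz, %u ch, non-interleaved: %d",
          callbackFormat.mSampleRate,
          (unsigned int)callbackFormat.mChannelsPerFrame,
          nonInterleaved);
}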

How to play MIDI with bassmidi? (ios)

I'm trying to play a MIDI file but it's not working. I can play an MP3, but when I change the code to MIDI and build, there is no sound (using the "bassmidi" plugin).
my code:
NSString *fileName = @"1";
NSString *fileType = @"mid"; // mid
BASS_Init(-1, 44100, 0, 0, 0); // initialize output device
NSString *respath=[[NSBundle mainBundle]pathForResource:fileName ofType:fileType]; // get path of audio file
HSTREAM stream=BASS_MIDI_StreamCreateFile(0, [respath UTF8String], 0, 0, BASS_SAMPLE_LOOP, 1);
BASS_ChannelPlay(stream, FALSE); // play the stream
For those who would be interested in using the BASSMIDI player on iOS, we implemented an AUv3 Audio Unit wrapped around the bassmidi library.
The main advantage is that this audio unit can be inserted into a graph of audio nodes handled by the iOS Audio Engine (just like you would do with the AVAudioUnitSampler).
The code is available on a public repository:
https://github.com/newzik/BassMidiAudioUnit
Feel free to use it!
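As a rough illustration of what inserting it into the engine looks like (the component description values below are placeholders, not the actual codes registered by BassMidiAudioUnit):

// Sketch: instantiate an AUv3 instrument and wire it into AVAudioEngine.
AudioComponentDescription desc = {0};
desc.componentType = kAudioUnitType_MusicDevice;
desc.componentSubType = 0;        // placeholder: the AU's registered subtype
desc.componentManufacturer = 0;   // placeholder: the AU's registered manufacturer

AVAudioEngine *engine = [[AVAudioEngine alloc] init];

[AVAudioUnit instantiateWithComponentDescription:desc
                                          options:kAudioComponentInstantiation_LoadOutOfProcess
                                completionHandler:^(AVAudioUnit *unit, NSError *error) {
    if (unit == nil) { return; }
    [engine attachNode:unit];
    [engine connect:unit to:engine.mainMixerNode format:nil];
    NSError *startError = nil;
    [engine startAndReturnError:&startError];
}];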

iOS AudioUnit settings to save mic input to raw PCM file

I'm currently working on a VOIP project for iOS.
I use AudioUnits to get data from the mic and play sounds.
My main app is written in C# (Xamarin) and uses a C++ library for faster audio and codec processing.
To test the input/output result, I'm currently testing recording & playback on the same device:
- store the mic audio data in a buffer in the recordingCallback
- play the data from the buffer in the playbackCallback
That works as expected; the voice quality is good.
I need to save the incoming audio data from the mic to a raw PCM file.
I have done that, but the resulting file only contains some short "beep" signals.
So my question is:
What audio settings do I need so that I can hear my voice (real audio signals) in the resulting raw PCM file instead of short beep sounds?
Does anyone have an idea what could be wrong, or what I have to do to be able to replay the resulting PCM file correctly?
My current format settings are (C# code):
int framesPerPacket = 1;
int channelsPerFrame = 1;
int bitsPerChannel = 16;
int bytesPerFrame = bitsPerChannel / 8 * channelsPerFrame;
int bytesPerPacket = bytesPerFrame * framesPerPacket;
AudioStreamBasicDescription audioFormat = new AudioStreamBasicDescription()
{
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked | AudioFormatFlags.LinearPCMIsAlignedHigh,
    BitsPerChannel = bitsPerChannel,
    ChannelsPerFrame = channelsPerFrame,
    BytesPerFrame = bytesPerFrame,
    FramesPerPacket = framesPerPacket,
    BytesPerPacket = bytesPerPacket,
    Reserved = 0
};
Additional C# settings (here in short without error checking):
AVAudioSession session = AVAudioSession.SharedInstance();
NSError error = null;
session.SetCategory(AVAudioSession.CategoryPlayAndRecord, out error);
session.SetPreferredIOBufferDuration(Config.packetLength, out error);
session.SetPreferredSampleRate(Format.samplingRate,out error);
session.SetActive(true,out error);
My current recording callback in short (only for PCM file saving) (C++ code):
OSStatus
NotSoAmazingAudioEngine::recordingCallback(void *inRefCon,
                                           AudioUnitRenderActionFlags *ioActionFlags,
                                           const AudioTimeStamp *inTimeStamp,
                                           UInt32 inBusNumber,
                                           UInt32 inNumberFrames,
                                           AudioBufferList *ioData)
{
    std::pair<BufferData*, int> bufferInfo = _sendBuffer.getNextEmptyBufferList();

    AudioBufferList *bufferList = new AudioBufferList();
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mData = NULL;

    OSStatus status = AudioUnitRender(_instance->_audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, bufferList);
    if (_instance->checkStatus(status))
    {
        if (fout != NULL) // fout is a "FILE*"
        {
            fwrite(bufferList->mBuffers[0].mData, sizeof(short), bufferList->mBuffers[0].mDataByteSize / sizeof(short), fout);
        }
    }
    delete bufferList;
    return noErr;
}
Background info why I need a raw PCM file:
To compress the audio data I'd like to use the Opus codec.
With the codec I have the problem that there is a tiny "tick" at the end of each frame:
With a frame size of 60 ms I can hardly hear them; at 20 ms it's annoying; at 10 ms frame sizes my own voice can't be heard because of the ticking (for the VOIP application I'm trying to use 10 ms frames).
I don't encode & decode in the callback functions (I encode/decode the data in the functions that transfer audio data from the "micbuffer" to the "playbuffer").
And every time the playbackCallback wants to play some data, there is a frame in my buffer.
I have also ruled out my Opus encoding/decoding functions as the error source, because if I read PCM data from a raw PCM file, encode & decode it afterwards, and save it to a new raw PCM file, the ticking does not appear (if I play the result file with "Softe Audio Tools", the output file's audio is OK).
To find out what causes the ticking, I'd like to save the raw PCM data from the mic to a file to make further investigations on that issue.
I found the solution myself:
My PCM player expected 44100 Hz stereo, but my file only contained 8000 Hz mono, and therefore my saved file was played back about 10x too fast.
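As an aside, a sketch of one way to avoid this class of mistake: prepend a minimal 16-bit PCM WAV header so any player picks up the sample rate and channel count from the file itself (this assumes little-endian samples, as on iOS, and is not part of the original fix):

#include <stdio.h>
#include <stdint.h>

// Write a minimal 44-byte WAV header for 'dataBytes' bytes of 16-bit PCM audio.
static void writeWavHeader(FILE *f, uint32_t sampleRate, uint16_t channels, uint32_t dataBytes)
{
    uint16_t bitsPerSample = 16;
    uint32_t byteRate = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t chunkSize = 36 + dataBytes;
    uint32_t fmtSize = 16;
    uint16_t audioFormat = 1; // linear PCM

    fwrite("RIFF", 1, 4, f);
    fwrite(&chunkSize, 4, 1, f);
    fwrite("WAVE", 1, 4, f);

    fwrite("fmt ", 1, 4, f);
    fwrite(&fmtSize, 4, 1, f);
    fwrite(&audioFormat, 2, 1, f);
    fwrite(&channels, 2, 1, f);
    fwrite(&sampleRate, 4, 1, f);
    fwrite(&byteRate, 4, 1, f);
    fwrite(&blockAlign, 2, 1, f);
    fwrite(&bitsPerSample, 2, 1, f);

    fwrite("data", 1, 4, f);
    fwrite(&dataBytes, 4, 1, f);
    // ...followed by the raw little-endian 16-bit samples.
}

Calling this with sampleRate = 8000 and channels = 1 before writing the samples would have made the player interpret the recording correctly.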
