I create a buffer with CVPixelBufferCreateWithPlanarBytes and pass a callback function as a parameter:
CVPixelBufferCreateWithPlanarBytes(NULL, imf.iWidth, imf.iHeight, pixelFormat,
                                   NULL, 0, 3,
                                   (void**)ppPlaneData,
                                   nPlaneWidth, nPlaneHeight, nPlaneBytesPerRow,
                                   LAVSinkPixelBufferReleasePlanarBytes,
                                   ppPlaneData, NULL, &buffer)
For some reason my callback LAVSinkPixelBufferReleasePlanarBytes is never called after I release the buffer with CVPixelBufferRelease(buffer).
However, the corresponding callback is called properly if I use the CVPixelBufferCreateWithBytes function instead, but I want to use the planar-data version.
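For comparison, this is roughly what the working non-planar path looks like; MyReleaseBytes and MakePackedBuffer are placeholder names, not my actual code:
#include <CoreVideo/CoreVideo.h>
#include <stdlib.h>

// Sketch of the non-planar variant, whose release callback does get invoked
// once the buffer's retain count drops to zero.
static void MyReleaseBytes(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);   // reclaim the copy owned by the pixel buffer
}

static CVPixelBufferRef MakePackedBuffer(void *pData, size_t width, size_t height, size_t bytesPerRow)
{
    CVPixelBufferRef buffer = NULL;
    CVReturn ret = CVPixelBufferCreateWithBytes(NULL, width, height,
                                                kCVPixelFormatType_32BGRA,
                                                pData, bytesPerRow,
                                                MyReleaseBytes, pData,
                                                NULL, &buffer);
    return (ret == kCVReturnSuccess) ? buffer : NULL;
}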
Here is a larger snippet of the planar code:
void LAVSinkPixelBufferReleasePlanarBytes(void* releaseRefCon, const void* dataPtr, size_t dataSize, size_t numberOfPlanes, const void* planeAddresses[])
{
    // This is never called
}
...
...
size_t nPlaneSize[3];
uint8_t** ppPlaneData = new uint8_t*[3];
for (int i = 0; i < 3; i++) {
    nPlaneSize[i] = nPlaneBytesPerRow[i] * nPlaneHeight[i];
    ppPlaneData[i] = new uint8_t[nPlaneSize[i]];
    memcpy(ppPlaneData[i], ppPlaneAddress[i], nPlaneSize[i]);
}
if (CVPixelBufferCreateWithPlanarBytes(NULL, imf.iWidth, imf.iHeight, pixelFormat,
                                       NULL, 0, 3,
                                       (void**)ppPlaneData,
                                       nPlaneWidth, nPlaneHeight, nPlaneBytesPerRow,
                                       LAVSinkPixelBufferReleasePlanarBytes,
                                       ppPlaneData, NULL, &buffer) != kCVReturnSuccess) break;
const BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:timestamp];
CVPixelBufferRelease(buffer);
I found someone else asking the same question with much cleaner sample code:
http://prod.lists.apple.com/archives/cocoa-dev/2013/Nov/msg00177.html
But unfortunately there is no answer.
I have been trying to read a file and load it into one OpenCL buffer while the kernel is processing another buffer. However, it does not seem to work: for some reason, the results are wrong.
First, I tried setting the args of the same kernel every time before enqueueing a task. Then I tried enqueueing tasks for two kernels of the same function, as below, without changing the arguments:
krnl_1.setArg(0, buffer_a);
krnl_1.setArg(1, output_buffer);
krnl_2.setArg(0, buffer_b);
krnl_2.setArg(1, output_buffer);
void* ptr[2];
ptr[0] = q.enqueueMapBuffer(buffer_a, CL_TRUE, CL_MAP_READ | CL_MAP_WRITE, 0, buffer_size_in_bytes, NULL, NULL, &err);
ptr[1] = q.enqueueMapBuffer(buffer_b, CL_TRUE, CL_MAP_READ | CL_MAP_WRITE, 0, buffer_size_in_bytes, NULL, NULL, &err);
int sel = 0;
long long bytes_sent = 0;
// Fill buffer_a
bytes_sent += pread(myFd, (void*)ptr[sel], buffer_size_in_bytes, bytes_sent);
while (bytes_sent < total_size_in_bytes){
    if (sel == 0){ // If buffer_a was just filled
        q.enqueueTask(krnl_1);
        sel = 1; // Fill buffer_b
    } else { // If buffer_b was just filled
        q.enqueueTask(krnl_2);
        sel = 0; // Fill buffer_a
    }
    if (bytes_sent >= total_size_in_bytes) // If this is the last task
        q.enqueueMigrateMemObjects({output_buffer}, CL_MIGRATE_MEM_OBJECT_HOST);
    else // Fill the buffer that is not being processed
        bytes_sent += pread(myFd, (void*)ptr[sel], buffer_size_in_bytes, bytes_sent);
    q.finish();
}
If I do it serially, it is working fine:
void* ptr[2];
ptr[0] = q.enqueueMapBuffer(buffer_a, CL_TRUE, CL_MAP_READ | CL_MAP_WRITE, 0, buffer_size_in_bytes, NULL, NULL, &err);
ptr[1] = q.enqueueMapBuffer(buffer_b, CL_TRUE, CL_MAP_READ | CL_MAP_WRITE, 0, buffer_size_in_bytes, NULL, NULL, &err);
int sel = 0;
long long bytes_sent = 0;
while (bytes_sent < total_size_in_bytes){
    bytes_sent += pread(myFd, (void*)ptr[sel], buffer_size_in_bytes, bytes_sent);
    if (sel == 0){
        q.enqueueTask(krnl_1);
        sel = 1;
    } else {
        q.enqueueTask(krnl_2);
        sel = 0;
    }
    if (bytes_sent >= total_size_in_bytes) // if this is the last task
        q.enqueueMigrateMemObjects({output_buffer}, CL_MIGRATE_MEM_OBJECT_HOST);
    q.finish();
}
I feel like I must have misunderstood the way OpenCL treats kernel arguments and enqueued tasks, but I cannot find any similar examples.
First, reads and writes by a kernel executing on a device to a memory region that is mapped for writing are undefined (for details, see the "Accessing mapped regions of a memory object" section of the clEnqueueMapBuffer documentation). Therefore, the buffer has to be unmapped before the task is run.
Second, in the first example bytes_sent is incremented before it is checked in the while() condition. If the first call to pread() reads all of the data, the loop body is never executed.
Therefore, I expect that the code should look something like this:
krnl_1.setArg(0, buffer_a);
krnl_1.setArg(1, output_buffer);
krnl_2.setArg(0, buffer_b);
krnl_2.setArg(1, output_buffer);
int sel = 0;
long long bytes_processed = 0;
void* buf_ptr = q.enqueueMapBuffer(buffer_a, CL_TRUE, CL_MAP_WRITE, 0, buffer_size_in_bytes, NULL, NULL, &err);
long long bytes_to_process = pread(myFd, buf_ptr, buffer_size_in_bytes, bytes_processed);
err = q.enqueueUnmapMemObject(buffer_a, buf_ptr);
while (bytes_processed < total_size_in_bytes)
{
    if (sel == 0){ // If buffer_a was just filled
        q.enqueueTask(krnl_1);
        sel = 1; // Fill buffer_b
    } else { // If buffer_b was just filled
        q.enqueueTask(krnl_2);
        sel = 0; // Fill buffer_a
    }
    bytes_processed += bytes_to_process;
    if (bytes_processed < total_size_in_bytes) { // Fill the buffer that is not being processed
        auto& buffer = sel ? buffer_b : buffer_a;
        buf_ptr = q.enqueueMapBuffer(buffer, CL_TRUE, CL_MAP_WRITE, 0, buffer_size_in_bytes, NULL, NULL, &err);
        bytes_to_process = pread(myFd, buf_ptr, buffer_size_in_bytes, bytes_processed);
        err = q.enqueueUnmapMemObject(buffer, buf_ptr);
    }
    else { // If this is the last task
        buf_ptr = q.enqueueMapBuffer(output_buffer, CL_TRUE, CL_MAP_READ, 0, output_buffer_size_in_bytes, NULL, NULL, &err);
    }
}
I want to send ARFrame pixel buffer data over the network; I have listed my setup below. With this setup, if I try to send the frames, the app crashes in GStreamer's C code after a few frames, but if I send the camera's AVCaptureVideoDataOutput pixel buffer instead, the stream works fine. I have set the AVCaptureSession's pixel format type to
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
so it matches the type ARFrame provides. Please help, I am unable to find a solution. I am sorry if my English is bad or if I have missed something out; please ask me for anything that is missing.
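For reference, a check along these lines could be used to compare the two buffers before encoding; LogPixelBufferLayout is only an illustrative helper, not part of my project:
#include <CoreVideo/CoreVideo.h>
#include <stdio.h>

// Sketch: log the layout of a CVPixelBufferRef so the ARFrame buffer and the
// AVCaptureVideoDataOutput buffer can be compared before they are encoded.
static void LogPixelBufferLayout(CVPixelBufferRef pixelBuffer)
{
    OSType fmt = CVPixelBufferGetPixelFormatType(pixelBuffer);
    printf("format = '%c%c%c%c', planes = %zu, %zux%zu\n",
           (char)(fmt >> 24), (char)(fmt >> 16), (char)(fmt >> 8), (char)fmt,
           CVPixelBufferGetPlaneCount(pixelBuffer),
           CVPixelBufferGetWidth(pixelBuffer),
           CVPixelBufferGetHeight(pixelBuffer));
    // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange ('420f') reports 2 planes.
}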
My Setup
Get the pixelBuffer of the ARFrame from ARKit's didUpdateFrame delegate.
Encode it to H264 using VTCompressionSession:
- (void)SendFrames:(CVPixelBufferRef)pixelBuffer :(NSTimeInterval)timeStamp
{
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    if (session == NULL)
    {
        [self initEncoder:width height:height];
    }
    CMTime presentationTimeStamp = CMTimeMake(0, 1);
    OSStatus statusCode = VTCompressionSessionEncodeFrame(session, pixelBuffer, presentationTimeStamp, kCMTimeInvalid, NULL, NULL, NULL);
    if (statusCode != noErr) {
        // End the session
        VTCompressionSessionInvalidate(session);
        CFRelease(session);
        session = NULL;
        return;
    }
    VTCompressionSessionEndPass(session, NULL, NULL);
}
- (void)initEncoder:(size_t)width height:(size_t)height
{
    OSStatus status = VTCompressionSessionCreate(NULL, (int)width, (int)height, kCMVideoCodecType_H264, NULL, NULL, NULL, OutputCallback, NULL, &session);
    NSLog(@":VTCompressionSessionCreate %d", (int)status);
    if (status != noErr)
    {
        NSLog(@"Unable to create a H264 session");
        return;
    }
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_Baseline_AutoLevel);
    VTCompressionSessionPrepareToEncodeFrames(session);
}
Get the sampleBuffer from the callback and convert it to an elementary stream:
void OutputCallback(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer)
{
    if (status != noErr) {
        NSLog(@"Error encoding video, err=%lld", (int64_t)status);
        return;
    }
    if (!CMSampleBufferDataIsReady(sampleBuffer))
    {
        NSLog(@"didCompressH264 data is not ready");
        return;
    }

    // In this example we will use a NSMutableData object to store the
    // elementary stream.
    NSMutableData *elementaryStream = [NSMutableData data];

    // This is the start code that we will write to
    // the elementary stream before every NAL unit
    static const size_t startCodeLength = 4;
    static const uint8_t startCode[] = {0x00, 0x00, 0x00, 0x01};

    // Write the SPS and PPS NAL units to the elementary stream
    CMFormatDescriptionRef description = CMSampleBufferGetFormatDescription(sampleBuffer);

    // Find out how many parameter sets there are
    size_t numberOfParameterSets;
    int AVCCHeaderLength;
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
                                                       0, NULL, NULL,
                                                       &numberOfParameterSets,
                                                       &AVCCHeaderLength);

    // Write each parameter set to the elementary stream
    for (int i = 0; i < numberOfParameterSets; i++) {
        const uint8_t *parameterSetPointer;
        int NALUnitHeaderLengthOut = 0;
        size_t parameterSetLength;
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
                                                           i,
                                                           &parameterSetPointer,
                                                           &parameterSetLength,
                                                           NULL, &NALUnitHeaderLengthOut);

        // Write the parameter set to the elementary stream
        [elementaryStream appendBytes:startCode length:startCodeLength];
        [elementaryStream appendBytes:parameterSetPointer length:parameterSetLength];
    }

    // Get a pointer to the raw AVCC NAL unit data in the sample buffer
    size_t blockBufferLength;
    uint8_t *bufferDataPointer = NULL;
    size_t lengthAtOffset = 0;
    size_t bufferOffset = 0;
    CMBlockBufferGetDataPointer(CMSampleBufferGetDataBuffer(sampleBuffer),
                                bufferOffset,
                                &lengthAtOffset,
                                &blockBufferLength,
                                (char **)&bufferDataPointer);

    // Loop through all the NAL units in the block buffer
    // and write them to the elementary stream with
    // start codes instead of AVCC length headers
    while (bufferOffset < blockBufferLength - AVCCHeaderLength) {
        // Read the NAL unit length
        uint32_t NALUnitLength = 0;
        memcpy(&NALUnitLength, bufferDataPointer + bufferOffset, AVCCHeaderLength);
        // Convert the length value from Big-endian to Little-endian
        NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
        // Write start code to the elementary stream
        [elementaryStream appendBytes:startCode length:startCodeLength];
        // Write the NAL unit without the AVCC length header to the elementary stream
        [elementaryStream appendBytes:bufferDataPointer + bufferOffset + AVCCHeaderLength
                               length:NALUnitLength];
        // Move to the next NAL unit in the block buffer
        bufferOffset += AVCCHeaderLength + NALUnitLength;
    }

    char *bytePtr = (char *)[elementaryStream mutableBytes];
    long maxSize = (long)elementaryStream.length;
    CMTime presentationtime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    vidplayer_stream(bytePtr, maxSize, (long)presentationtime.value);
}
I am using AudioQueueStart in order to start recording on an iOS device, and I want all of the recording data streamed to me in buffers so that I can process it and send it to a server.
Basic functionality works great; however, in my BufferFilled function I usually get < 10 bytes of data on every call. This feels very inefficient, especially since I have tried to set the buffer size to 16384 bytes (see the beginning of the startRecording method).
How can I make it fill up the buffer more before calling BufferFilled? Or do I need to add a second phase of buffering before sending to the server to achieve what I want?
OSStatus BufferFilled(void *aqData, SInt64 inPosition, UInt32 requestCount, const void *inBuffer, UInt32 *actualCount) {
    AQRecorderState *pAqData = (AQRecorderState*)aqData;
    NSData *audioData = [NSData dataWithBytes:inBuffer length:requestCount];
    *actualCount = inBuffer + requestCount;
    // audioData is usually < 10 bytes, sometimes 100 bytes, but never close to 16384 bytes
    return 0;
}
void HandleInputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc) {
    AQRecorderState *pAqData = (AQRecorderState*)aqData;
    if (inNumPackets == 0 && pAqData->mDataFormat.mBytesPerPacket != 0)
        inNumPackets = inBuffer->mAudioDataByteSize / pAqData->mDataFormat.mBytesPerPacket;
    if (AudioFileWritePackets(pAqData->mAudioFile, false, inBuffer->mAudioDataByteSize, inPacketDesc, pAqData->mCurrentPacket, &inNumPackets, inBuffer->mAudioData) == noErr) {
        pAqData->mCurrentPacket += inNumPackets;
    }
    if (pAqData->mIsRunning == 0)
        return;
    OSStatus error = AudioQueueEnqueueBuffer(pAqData->mQueue, inBuffer, 0, NULL);
}
void DeriveBufferSize(AudioQueueRef audioQueue, AudioStreamBasicDescription *ASBDescription, Float64 seconds, UInt32 *outBufferSize) {
    static const int maxBufferSize = 0x50000;
    int maxPacketSize = ASBDescription->mBytesPerPacket;
    if (maxPacketSize == 0) {
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty(audioQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize, &maxVBRPacketSize);
    }
    Float64 numBytesForTime = ASBDescription->mSampleRate * maxPacketSize * seconds;
    *outBufferSize = (UInt32)(numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
}
OSStatus SetMagicCookieForFile(AudioQueueRef inQueue, AudioFileID inFile) {
    OSStatus result = noErr;
    UInt32 cookieSize;
    if (AudioQueueGetPropertySize(inQueue, kAudioQueueProperty_MagicCookie, &cookieSize) == noErr) {
        char *magicCookie = (char *)malloc(cookieSize);
        if (AudioQueueGetProperty(inQueue, kAudioQueueProperty_MagicCookie, magicCookie, &cookieSize) == noErr)
            result = AudioFileSetProperty(inFile, kAudioFilePropertyMagicCookieData, cookieSize, magicCookie);
        free(magicCookie);
    }
    return result;
}
- (void)startRecording {
    aqData.mDataFormat.mFormatID = kAudioFormatMPEG4AAC;
    aqData.mDataFormat.mSampleRate = 22050.0;
    aqData.mDataFormat.mChannelsPerFrame = 1;
    aqData.mDataFormat.mBitsPerChannel = 0;
    aqData.mDataFormat.mBytesPerPacket = 0;
    aqData.mDataFormat.mBytesPerFrame = 0;
    aqData.mDataFormat.mFramesPerPacket = 1024;
    aqData.mDataFormat.mFormatFlags = kMPEG4Object_AAC_Main;
    AudioFileTypeID fileType = kAudioFileAAC_ADTSType;
    aqData.bufferByteSize = 16384;

    UInt32 defaultToSpeaker = TRUE;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(defaultToSpeaker), &defaultToSpeaker);

    OSStatus status = AudioQueueNewInput(&aqData.mDataFormat, HandleInputBuffer, &aqData, NULL, kCFRunLoopCommonModes, 0, &aqData.mQueue);
    UInt32 dataFormatSize = sizeof(aqData.mDataFormat);
    status = AudioQueueGetProperty(aqData.mQueue, kAudioQueueProperty_StreamDescription, &aqData.mDataFormat, &dataFormatSize);
    status = AudioFileInitializeWithCallbacks(&aqData, nil, BufferFilled, nil, nil, fileType, &aqData.mDataFormat, 0, &aqData.mAudioFile);

    for (int i = 0; i < kNumberBuffers; ++i) {
        status = AudioQueueAllocateBuffer(aqData.mQueue, aqData.bufferByteSize, &aqData.mBuffers[i]);
        status = AudioQueueEnqueueBuffer(aqData.mQueue, aqData.mBuffers[i], 0, NULL);
    }
    aqData.mCurrentPacket = 0;
    aqData.mIsRunning = true;
    status = AudioQueueStart(aqData.mQueue, NULL);
}
UPDATE: I have logged the data that I receive, and it is quite interesting: it almost seems as if half of the "packets" are some kind of header and half are sound data. Could I assume this is just how the AAC encoding on iOS works? It writes a header into one buffer, then data into the next one, and so on. And it never wants more than around 170-180 bytes for each data chunk, which is why it ignores my large buffer?
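If those small chunks really are headers, they should begin with the ADTS syncword; a quick check like this (just a sketch, not code from my project) would confirm it:
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

// Sketch: an ADTS header is 7 bytes (9 with CRC) and starts with the 12-bit
// syncword 0xFFF, so the small chunks handed to BufferFilled can be identified.
static bool LooksLikeADTSHeader(const uint8_t *bytes, size_t length)
{
    return length >= 7 && bytes[0] == 0xFF && (bytes[1] & 0xF0) == 0xF0;
}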
I solved this eventually. It turns out that, yes, the AAC encoding on iOS produces alternating small and large chunks of data. I added a second-phase buffer myself using NSMutableData and it worked perfectly.
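A rough sketch of that second-phase idea, written against the C-level CFMutableData (toll-free bridged with NSMutableData); gAccumulator, kSendThreshold and SendToServer are placeholder names, not my actual code:
#include <CoreFoundation/CoreFoundation.h>

void SendToServer(const UInt8 *bytes, CFIndex length);  // placeholder for the real network call

// Hypothetical second-phase buffer: accumulate the small chunks handed to
// BufferFilled and flush them to the network only once enough data has piled up.
static CFMutableDataRef gAccumulator = NULL;
static const CFIndex kSendThreshold = 16384;

static void AccumulateAndSend(const void *bytes, UInt32 length)
{
    if (gAccumulator == NULL)
        gAccumulator = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CFDataAppendBytes(gAccumulator, (const UInt8 *)bytes, (CFIndex)length);
    if (CFDataGetLength(gAccumulator) >= kSendThreshold) {
        SendToServer(CFDataGetBytePtr(gAccumulator), CFDataGetLength(gAccumulator));
        CFDataSetLength(gAccumulator, 0);   // reset for the next batch
    }
}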
I just want to use the GNOME GLib functions to write and read a file. I think my syntax for calling the functions is wrong. I tried to open a file with g_fopen("filenam.txt", "w"); but it didn't create any file. I also used g_file_set_contents, and I am trying to save my GString s into a file file.txt with the code below:
static void events_handler(const uint8_t *pdu, uint16_t len, gpointer user_data)
{
    uint8_t *opdu;
    uint16_t handle, i, olen;
    size_t plen;
    //GString *s;
    const gchar *s;
    gssize length;
    length = 100;
    handle = get_le16(&pdu[1]);
    switch (pdu[0]) {
    case ATT_OP_HANDLE_NOTIFY:
        s = g_string_new(NULL);
        //g_string_printf(s, "Movement data = 0x%04x value: ", handle);
        g_file_set_contents("file.txt", s, 100, NULL);
        break;
    case ATT_OP_HANDLE_IND:
        s = g_string_new(NULL);
        g_string_printf(s, "Indication handle = 0x%04x value: ", handle);
        break;
    default:
        error("Invalid opcode\n");
        return;
    }
    for (i = 3; i < len; i++)
        g_string_append_printf(s, "%02x ", pdu[i]);
    rl_printf("%s\n", s->str);
    g_string_free(s, TRUE);
    if (pdu[0] == ATT_OP_HANDLE_NOTIFY)
        return;
    opdu = g_attrib_get_buffer(attrib, &plen);
    olen = enc_confirmation(opdu, plen);
    if (olen > 0)
        g_attrib_send(attrib, 0, opdu, olen, NULL, NULL, NULL);
}
You're conflating GString* and gchar*. The g_string_*() functions expect a GString*, and g_file_set_contents() expects a gchar*. If you want the raw character data from a GString, use its str field.
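For the notify case above, that boils down to something like this (just a sketch of the relevant calls):
GString *s = g_string_new(NULL);
g_string_printf(s, "Movement data = 0x%04x value: ", handle);
/* g_file_set_contents() takes the raw character data and its length,
 * which GString exposes as the str and len fields. */
g_file_set_contents("file.txt", s->str, s->len, NULL);
g_string_free(s, TRUE);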
Also, I suggest turning on more warnings in your compiler, since it really should be complaining during development when you try to do this. Passing -Wall should do the trick.
I have been trying to decode H264 using VTDecompressionSessionDecodeFrame but I am getting errors. The parameter sets have been created previously and look fine, and nothing errors up to this point, so it may have something to do with my understanding of the timing information in the CMSampleBufferRef. Any input would be much appreciated.
void didDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef imageBuffer, CMTime presentationTimeStamp, CMTime presentationDuration) {
    NSLog(@"In decompression callback routine");
}
void decodeH264() {
    VTDecodeInfoFlags infoFlags;

    [NALPacket appendBytes: NalPacketSize length:4];
    [NALPacket appendBytes: &NALCODE length:1];
    [NALPacket appendBytes: startPointer length:buflen];

    void *samples = (void *)[NALTestPacket bytes];
    blockBuffer = NULL;
    // Add the NAL raw data to the CMBlockBuffer
    status = CMBlockBufferCreateWithMemoryBlock(
        kCFAllocatorDefault,
        samples,
        [NALPacket length],
        kCFAllocatorDefault,
        NULL,
        0,
        [NALPacket length],
        0,
        &blockBuffer);

    const size_t *samplesizeArrayPointer;
    size_t sampleSizeArray = buflen;
    samplesizeArrayPointer = &sampleSizeArray;

    int32_t timeSpan = 1000000;
    CMTime PTime = CMTimeMake(presentationTime, timeSpan);
    CMSampleTimingInfo timingInfo;
    timingInfo.presentationTimeStamp = PTime;
    timingInfo.duration = kCMTimeZero;
    timingInfo.decodeTimeStamp = kCMTimeInvalid;

    status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, YES, NULL, NULL, formatDescription, 1, 1, &timingInfo, 0, samplesizeArrayPointer, &sampleBuffer);

    CFArrayRef attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true);
    for (CFIndex i = 0; i < CFArrayGetCount(attachmentsArray); ++i) {
        CFMutableDictionaryRef attachments = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachmentsArray, i);
        CFDictionarySetValue(attachments, kCMSampleAttachmentKey_DoNotDisplay, kCFBooleanFalse);
        CFDictionarySetValue(attachments, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
    }

    // I Frame
    status = VTDecompressionSessionDecodeFrame(decoder, sampleBuffer, kVTDecodeFrame_1xRealTimePlayback, (void*)CFBridgingRetain(currentTime), &infoFlags);
    if (status != noErr) {
        NSLog(@"Decode error");
    }
}
Discovered why this wasn't working: I had forgotten to set the CMSampleBufferRef to NULL each time a new sample was captured.
samples = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, YES, NULL, NULL, formatDescription, 1, 1, &timingInfo, 0, samplesizeArrayPointer, &sampleBuffer);