How to deep copy a CMSampleBufferRef from the camera capture callback? - ios

Obviously I must copy the CMSampleBufferRef, but CMSampleBufferCreateCopy() will only create a shallow copy.
The method below works, but its CPU consumption is too high!
- (CVPixelBufferRef)copyPixelbuffer:(CVPixelBufferRef)pixel {
    NSAssert(CFGetTypeID(pixel) == CVPixelBufferGetTypeID(), @"typeid !=");
    CVPixelBufferRef _copy = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(pixel),
                        CVPixelBufferGetHeight(pixel),
                        CVPixelBufferGetPixelFormatType(pixel),
                        CVBufferGetAttachments(pixel, kCVAttachmentMode_ShouldPropagate),
                        &_copy);
    if (_copy != NULL) {
        CVPixelBufferLockBaseAddress(pixel, kCVPixelBufferLock_ReadOnly);
        CVPixelBufferLockBaseAddress(_copy, 0);
        size_t count = CVPixelBufferGetPlaneCount(pixel);
        size_t img_widstp = CVPixelBufferGetBytesPerRowOfPlane(pixel, 0);
        size_t img_heistp = CVPixelBufferGetBytesPerRowOfPlane(pixel, 1);
        NSLog(@"img_widstp = %zu, img_heistp = %zu", img_widstp, img_heistp);
        for (size_t plane = 0; plane < count; plane++) {
            void *dest = CVPixelBufferGetBaseAddressOfPlane(_copy, plane);
            void *source = CVPixelBufferGetBaseAddressOfPlane(pixel, plane);
            size_t height = CVPixelBufferGetHeightOfPlane(pixel, plane);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixel, plane);
            // NOTE: assumes the copy was allocated with the same bytes-per-row
            // as the source; Core Video may pad rows differently.
            memcpy(dest, source, height * bytesPerRow);
        }
        CVPixelBufferUnlockBaseAddress(_copy, 0);
        CVPixelBufferUnlockBaseAddress(pixel, kCVPixelBufferLock_ReadOnly);
    }
    return _copy;
}
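For reference, since the question is about CMSampleBufferRef: deep-copying the pixel data alone does not give you a standalone sample buffer back. Below is a minimal sketch of re-wrapping the copied pixel buffer, assuming the copyPixelbuffer: method above and that only the image buffer and its timing need to survive (not any other attachments):

- (CMSampleBufferRef)deepCopySampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Deep-copy the pixels with the method above.
    CVPixelBufferRef pixelCopy = [self copyPixelbuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
    if (pixelCopy == NULL) return NULL;
    // Carry over the original presentation/decode timing.
    CMSampleTimingInfo timing;
    CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing);
    // A new format description must be created for the new image buffer.
    CMVideoFormatDescriptionRef format = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelCopy, &format);
    CMSampleBufferRef copy = NULL;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelCopy, true,
                                       NULL, NULL, format, &timing, &copy);
    if (format) CFRelease(format);
    CVPixelBufferRelease(pixelCopy); // the sample buffer retains it
    return copy; // caller is responsible for CFRelease
}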

Related

Memory leak from Objective-C code in iOS application

My code is eating memory. I added this function, and it seems to be the cause of all the problems: when I don't call it, I don't run out of memory.
It's an Objective-C function to crop an image. How do I release the memory used in the function so that everything is cleaned up before exiting?
// requires #import <Accelerate/Accelerate.h> for the vImage calls
- (void)crop:(CVImageBufferRef)sampleBuffer
{
    int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;
    cropHeight = 720;
    cropWidth = 1280;
    cropX0 = 0;
    cropY0 = 0;
    outWidth = 1280;
    outHeight = 720;
    CVPixelBufferLockBaseAddress(sampleBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(sampleBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(sampleBuffer);
    vImage_Buffer inBuff;
    inBuff.height = cropHeight;
    inBuff.width = cropWidth;
    inBuff.rowBytes = bytesPerRow;
    int startpos = cropY0 * bytesPerRow + 4 * cropX0;
    inBuff.data = (char *)baseAddress + startpos; // cast for byte-wise pointer arithmetic
    unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
    vImage_Buffer outBuff = { outImg, outHeight, outWidth, 4 * outWidth };
    vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
    if (err != kvImageNoError)
    {
        NSLog(@"error %ld", err);
    }
    else
    {
        NSLog(@"Success");
    }
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   inBuff.width,
                                                   inBuff.height,
                                                   kCVPixelFormatType_32BGRA,
                                                   outImg,
                                                   bytesPerRow, // note: this is the source stride; the scaled output's stride is 4 * outWidth
                                                   NULL,
                                                   NULL,
                                                   NULL,
                                                   &pixelBuffer);
    CVPixelBufferUnlockBaseAddress(sampleBuffer, 0);
}
You are missing

    free(outImg);

at the end, since you are not freeing the memory you allocated. Since you have constant-size pixel dimensions, it is also good practice here (as in embedded programming) to use a fixed-size buffer that you declare at the top of the function and initialize to zero, instead of calling malloc on every frame.
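One caveat worth adding to this answer: CVPixelBufferCreateWithBytes wraps the supplied bytes rather than copying them, so calling free(outImg) before the pixel buffer is released would leave the buffer pointing at freed memory if it outlives the function. A sketch of one way to handle that, using a release callback (the 4 * outWidth row stride for the scaled output is an assumption based on the code above):

static void releaseCroppedBytes(void *releaseRefCon, const void *baseAddress) {
    // Called by Core Video when the pixel buffer is released.
    free((void *)baseAddress);
}

// ...
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               outWidth,
                                               outHeight,
                                               kCVPixelFormatType_32BGRA,
                                               outImg,
                                               4 * outWidth,        // stride of the scaled output, not the source
                                               releaseCroppedBytes, // frees outImg when the buffer is released
                                               NULL,
                                               NULL,
                                               &pixelBuffer);
// ... use pixelBuffer, then CVPixelBufferRelease(pixelBuffer) when done;
// the callback frees outImg at that point, so no explicit free() is needed here.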

What's the most efficient way to check if a CVPixelBufferRef has white pixels?

What's the most efficient way to check if a CVPixelBufferRef has white pixels?
Below is the code I'm currently using, but it's not detecting any white pixels, even when a photo clearly has them.
const int kBytesPerPixel = 4;
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
int bufferWidth = (int)CVPixelBufferGetWidth( pixelBuffer );
int bufferHeight = (int)CVPixelBufferGetHeight( pixelBuffer );
size_t bytesPerRow = CVPixelBufferGetBytesPerRow( pixelBuffer );
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress( pixelBuffer );
int count = 0;
BOOL hasWhitePixels = NO;
for ( int row = 0; row < bufferHeight; row++ ) {
    uint8_t *pixel = baseAddress + row * bytesPerRow;
    for ( int column = 0; column < bufferWidth; column++ ) {
        // assumes a 32-bit BGRA/RGBA buffer; pixel[0..2] are the color channels
        if (pixel[0] == 255 && pixel[1] == 255 && pixel[2] == 255) {
            count++;
            hasWhitePixels = YES;
            NSLog(@"HAS WHITE PIXELS");
            break; // note: this only breaks out of the inner (column) loop
        }
        pixel += kBytesPerPixel;
    }
}
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
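One possible explanation, offered as an assumption since the question does not show the capture setup: the camera often delivers biplanar YCbCr buffers rather than BGRA, in which case an RGB byte comparison never matches. A sketch that guards on the pixel format and stops scanning after the first hit, assuming the capture output is configured for kCVPixelFormatType_32BGRA:

static BOOL pixelBufferHasWhitePixel(CVPixelBufferRef pixelBuffer) {
    // The byte comparison below only makes sense for a 32-bit BGRA buffer.
    if (CVPixelBufferGetPixelFormatType(pixelBuffer) != kCVPixelFormatType_32BGRA) {
        return NO;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    const uint8_t *base = (const uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    BOOL found = NO;
    for (size_t row = 0; row < height && !found; row++) {
        const uint8_t *pixel = base + row * bytesPerRow; // rows may be padded, so step by bytesPerRow
        for (size_t col = 0; col < width; col++, pixel += 4) {
            if (pixel[0] == 255 && pixel[1] == 255 && pixel[2] == 255) { // B, G, R all saturated
                found = YES; // the !found test in the outer loop exits both loops
                break;
            }
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return found;
}

To request BGRA frames in the first place, set the capture output's videoSettings to @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }.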

AudioConverterFillComplexBuffer returns 1852797029 (kAudioCodecIllegalOperationError)

I'm trying to decode AAC data with AudioToolbox in an iOS environment. I consulted this thread.
The AudioConverterNew call succeeds, but AudioConverterFillComplexBuffer returns error code 1852797029 ('nope', kAudioCodecIllegalOperationError).
I'm trying to find my mistake. Thank you for reading.
- (void)initAudioToolBox {
    HCAudioAsset* asset = [self.provider getAudioAsset];
    AudioStreamBasicDescription outFormat;
    memset(&outFormat, 0, sizeof(outFormat));
    outFormat.mSampleRate = 44100;
    outFormat.mFormatID = kAudioFormatLinearPCM;
    outFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger;
    outFormat.mBytesPerPacket = 2;
    outFormat.mFramesPerPacket = 1;
    outFormat.mBytesPerFrame = 2;
    outFormat.mChannelsPerFrame = 1;
    outFormat.mBitsPerChannel = 16;
    outFormat.mReserved = 0;

    AudioStreamBasicDescription inFormat;
    memset(&inFormat, 0, sizeof(inFormat));
    inFormat.mSampleRate = [asset sampleRate];
    inFormat.mFormatID = kAudioFormatMPEG4AAC;
    inFormat.mFormatFlags = kMPEG4Object_AAC_LC;
    inFormat.mBytesPerPacket = 0;
    inFormat.mFramesPerPacket = (UInt32)[asset framePerPacket];
    inFormat.mBytesPerFrame = 0;
    inFormat.mChannelsPerFrame = (UInt32)[asset channelCount];
    inFormat.mBitsPerChannel = 0;
    inFormat.mReserved = 0;

    OSStatus status = AudioConverterNew(&inFormat, &outFormat, &audioConverter);
    if (status != noErr) {
        NSLog(@"setup converter error, status: %i\n", (int)status);
    } else {
        NSLog(@"Audio Converter is initialized successfully.");
    }
}
typedef struct _PassthroughUserData PassthroughUserData;
struct _PassthroughUserData {
    UInt32 mChannels;
    UInt32 mDataSize;
    const void* mData;
    AudioStreamPacketDescription mPacket;
};

int inInputDataProc(AudioConverterRef aAudioConverter,
                    UInt32* aNumDataPackets,
                    AudioBufferList* aData,
                    AudioStreamPacketDescription** aPacketDesc,
                    void* aUserData)
{
    PassthroughUserData* userData = (PassthroughUserData*)aUserData;
    if (!userData->mDataSize) {
        *aNumDataPackets = 0;
        NSLog(@"inInputDataProc returns -1");
        return -1;
    }
    if (aPacketDesc) {
        userData->mPacket.mStartOffset = 0;
        userData->mPacket.mVariableFramesInPacket = 0;
        userData->mPacket.mDataByteSize = userData->mDataSize;
        NSLog(@"mDataSize:%u", userData->mDataSize);
        *aPacketDesc = &userData->mPacket;
    }
    *aNumDataPackets = 1; // report that exactly one packet (the whole AAC frame) is provided
    aData->mBuffers[0].mNumberChannels = userData->mChannels;
    aData->mBuffers[0].mDataByteSize = userData->mDataSize;
    aData->mBuffers[0].mData = (void*)(userData->mData);
    NSLog(@"buffer[0] - channel:%u, byte size:%u, data:%p",
          aData->mBuffers[0].mNumberChannels,
          (unsigned int)aData->mBuffers[0].mDataByteSize,
          aData->mBuffers[0].mData);
    // No more data to provide following this run.
    userData->mDataSize = 0;
    NSLog(@"inInputDataProc returns 0");
    return 0;
}
- (void)decodeAudioFrame:(NSData *)frame withPts:(NSInteger)pts {
    if (!audioConverter) {
        [self initAudioToolBox];
    }
    HCAudioAsset* asset = [self.provider getAudioAsset];
    PassthroughUserData userData = { (UInt32)[asset channelCount], (UInt32)frame.length, [frame bytes] };
    NSMutableData *decodedData = [NSMutableData new];
    const uint32_t MAX_AUDIO_FRAMES = 128;
    const uint32_t maxDecodedSamples = MAX_AUDIO_FRAMES * 1;
    do {
        uint8_t *buffer = (uint8_t *)malloc(maxDecodedSamples * sizeof(short int));
        AudioBufferList decBuffer;
        memset(&decBuffer, 0, sizeof(AudioBufferList));
        decBuffer.mNumberBuffers = 1;
        decBuffer.mBuffers[0].mNumberChannels = 2;
        decBuffer.mBuffers[0].mDataByteSize = maxDecodedSamples * sizeof(short int);
        decBuffer.mBuffers[0].mData = buffer;
        UInt32 numFrames = MAX_AUDIO_FRAMES;
        AudioStreamPacketDescription outPacketDescription;
        memset(&outPacketDescription, 0, sizeof(AudioStreamPacketDescription));
        outPacketDescription.mDataByteSize = MAX_AUDIO_FRAMES;
        outPacketDescription.mStartOffset = 0;
        outPacketDescription.mVariableFramesInPacket = 0;
        NSLog(@"frame - size:%lu, buffer:%p", (unsigned long)[frame length], [frame bytes]);
        OSStatus rv = AudioConverterFillComplexBuffer(audioConverter,
                                                      inInputDataProc,
                                                      &userData,
                                                      &numFrames,
                                                      &decBuffer,
                                                      &outPacketDescription);
        NSLog(@"num frames:%u, dec buffer [0] channels:%u, dec buffer [0] data byte size:%u, rv:%d",
              numFrames, decBuffer.mBuffers[0].mNumberChannels,
              decBuffer.mBuffers[0].mDataByteSize, (int)rv);
        if (rv != noErr) {
            NSLog(@"Error decoding audio stream: %d\n", (int)rv);
            free(buffer); // don't leak the scratch buffer on the error path
            break;
        }
        if (numFrames) {
            [decodedData appendBytes:decBuffer.mBuffers[0].mData
                              length:decBuffer.mBuffers[0].mDataByteSize];
        }
        free(buffer); // the bytes were copied into decodedData above
    } while (true);
    //void *pData = (void *)[decodedData bytes];
    //audioRenderer->Render(&pData, decodedData.length, pts);
}
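One thing worth checking, offered as an assumption rather than a confirmed fix: AAC decoders usually need the codec's magic cookie (the AudioSpecificConfig from the container) before the first conversion, and a missing cookie is a common reason for AudioConverterFillComplexBuffer to fail on AAC input. A sketch, where [asset magicCookie] is a hypothetical accessor on HCAudioAsset named here only for illustration:

// After AudioConverterNew succeeds, hand the codec config to the converter.
NSData *cookie = [asset magicCookie]; // hypothetical accessor
if (cookie.length > 0) {
    OSStatus cookieStatus = AudioConverterSetProperty(audioConverter,
                                                      kAudioConverterDecompressionMagicCookie,
                                                      (UInt32)cookie.length,
                                                      cookie.bytes);
    if (cookieStatus != noErr) {
        NSLog(@"failed to set magic cookie: %d", (int)cookieStatus);
    }
}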

SDL code should greyscale but just loads the image normally

The code below should load an image and then greyscale it in a window. Instead, it just loads the image. I've put printf("hello") in the loop starting with "for (int y = 0; y < image->h; y++)", but the console doesn't show "hello" unless I remove SDL_Delay(20000); then it prints, but the image flashes for a second and I can't tell whether it's greyscale or the same image.
#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>
#include <stdio.h>
#include "SDL2/SDL_ttf.h"

SDL_Window *window = NULL;
SDL_Surface *windowSurface = NULL;
SDL_Surface *image = NULL;
SDL_Event *event = NULL;
SDL_Texture *texture = NULL;

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
    {
        perror("Cannot initialise SDL");
        SDL_Quit();
        return 1;
    }
    else
    {
        window = SDL_CreateWindow("Loading_image", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE);
        if (window == NULL)
            perror("Cannot load image");
        else
        {
            windowSurface = SDL_GetWindowSurface(window);
            image = IMG_Load("image.bmp");
            if (image == NULL)
                perror("Cannot load image");
            else
            {
                SDL_BlitSurface(image, NULL, windowSurface, NULL);
            }
            SDL_UpdateWindowSurface(window);
            SDL_Delay(20000);
        }
    }
    SDL_UpdateTexture(texture, NULL, image->pixels, image->w * sizeof(Uint32));
    image = SDL_ConvertSurfaceFormat(image, SDL_PIXELFORMAT_ARGB8888, 0);
    Uint32 *pixels = (Uint32 *)image->pixels;
    int x = 0;
    int y = 0;
    for (int y = 0; y < image->h; y++)
    {
        for (int x = 0; x < image->w; x++)
        {
            Uint32 pixel = pixels[y * image->w + x];
            Uint8 r = 0, g = 0, b = 0;
            SDL_GetRGB(pixel, image->format, &r, &g, &b);
            Uint8 v = 0.212671f * r + 0.715160f * g + 0.072169f * b;
            SDL_MapRGB(image->format, v, v, v);
        }
    }
    int quit = 0;
    while (!quit) // This loop will run until the conditions are met, e.g. you quit the renderer
    {
        SDL_WaitEvent(event); // waits for the event (quitting the renderer)
        switch (event->type)
        {
        case SDL_QUIT:
            quit = 1;
            break;
        }
    }
    SDL_FreeSurface(image);
    image = NULL;
    window = NULL;
    windowSurface = NULL;
    SDL_DestroyWindow(window);
    IMG_Quit();
    SDL_Quit();
    return 0;
}
There are several issues with your code, mostly SDL specifics, but also some with the grayscale conversion.
I removed any unnecessary stuff I could spot and annotated some changes with comments.
#include <SDL.h>
#include <SDL_image.h>
#include <stdio.h>

SDL_Window *window = NULL;
SDL_Surface *windowSurface = NULL;
SDL_Surface *image = NULL;
SDL_Event event; // You may want to use an object instead of a pointer
SDL_Texture *texture = NULL;

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
    {
        perror("Cannot initialise SDL");
        SDL_Quit();
        return 1;
    }
    else
    {
        window = SDL_CreateWindow("Loading_image", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE);
        if (window == NULL)
            perror("Cannot load image"); // You may want to change this error message
        else
        {
            windowSurface = SDL_GetWindowSurface(window);
            image = IMG_Load("image.bmp");
            if (image == NULL)
                perror("Cannot load image");
            // I removed the blitting code here, you basically don't need it here
            // Rather do it in the render loop below
        }
    }
    image = SDL_ConvertSurfaceFormat(image, SDL_PIXELFORMAT_ARGB8888, 0);
    Uint32 *pixels = (Uint32 *)image->pixels;
    for (int y = 0; y < image->h; y++)
    {
        for (int x = 0; x < image->w; x++)
        {
            Uint32 pixel = pixels[y * image->w + x];
            Uint8 r = 0, g = 0, b = 0;
            SDL_GetRGB(pixel, image->format, &r, &g, &b);
            Uint8 v = 0.212671f * r + 0.715160f * g + 0.072169f * b;
            pixel = SDL_MapRGB(image->format, v, v, v); // Get the return value which is the pixel value
            pixels[y * image->w + x] = pixel;           // ...and assign it back to the pixels
        }
    }
    int quit = 0;
    while (!quit)
    {
        while (SDL_PollEvent(&event)) // Continuous checking for events
        {
            switch (event.type)
            {
            case SDL_QUIT:
                quit = 1;
                break;
            }
        }
        // "Render loop"
        SDL_BlitSurface(image, NULL, windowSurface, NULL);
        SDL_UpdateWindowSurface(window);
    }
    SDL_FreeSurface(image);
    image = NULL;
    SDL_DestroyWindow(window); // destroy the window before clearing the pointer
    window = NULL;
    windowSurface = NULL;
    IMG_Quit();
    SDL_Quit();
    return 0;
}

SlimDX (DirectX10) - How to change a texel in Texture?

I am trying to change the texels of a texture that is already loaded.
My assumption was to use the Texture2D::Map and Unmap functions, but there is no change when I change the data of the given DataRectangle.
I need a simple example, like creating a 128x128 texture with a gradient from black to white from each side.
Thx
ps: A Direct3D 10 C++ example may also help; SlimDX is only a wrapper and has nearly the same functions.
This is my D3D10 2D texture loader
bool D3D10Texture::Init( GFXHandler* pHandler, unsigned int usage, unsigned int width, unsigned int height, unsigned int textureType, bool bMipmapped, void* pTextureData )
{
    mMipmapped = bMipmapped;

    D3D10Handler* pD3DHandler = (D3D10Handler*)pHandler;
    ID3D10Device* pDevice = pD3DHandler->GetDevice();

    DXGI_SAMPLE_DESC dxgiSampleDesc;
    dxgiSampleDesc.Count = 1;
    dxgiSampleDesc.Quality = 0;

    D3D10_USAGE d3d10Usage;
    if ( usage & RU_All_Dynamic ) d3d10Usage = D3D10_USAGE_DYNAMIC;
    else d3d10Usage = D3D10_USAGE_DEFAULT;

    unsigned int cpuAccess = 0;
    if ( !pTextureData )
    {
        cpuAccess = D3D10_CPU_ACCESS_WRITE;
    }

    unsigned int bindFlags = D3D10_BIND_SHADER_RESOURCE;
    if ( usage & RU_Texture_RenderTarget ) bindFlags |= D3D10_BIND_RENDER_TARGET;

    unsigned int miscFlags = 0;
    if ( usage & RU_Texture_AutoGenMipmap ) miscFlags |= D3D10_RESOURCE_MISC_GENERATE_MIPS;

    D3D10_TEXTURE2D_DESC d3d10Texture2DDesc;
    d3d10Texture2DDesc.Width = width;
    d3d10Texture2DDesc.Height = height;
    d3d10Texture2DDesc.MipLevels = GetNumMipMaps( width, height, bMipmapped );
    d3d10Texture2DDesc.ArraySize = 1;
    d3d10Texture2DDesc.Format = GetD3DFormat( (TextureTypes)textureType );
    d3d10Texture2DDesc.SampleDesc = dxgiSampleDesc;
    d3d10Texture2DDesc.Usage = d3d10Usage;
    d3d10Texture2DDesc.BindFlags = bindFlags; // use the flags computed above
    d3d10Texture2DDesc.CPUAccessFlags = cpuAccess;
    d3d10Texture2DDesc.MiscFlags = miscFlags;

    D3D10_SUBRESOURCE_DATA* pSubResourceData = NULL;
    if ( pTextureData )
    {
        pSubResourceData = new D3D10_SUBRESOURCE_DATA[d3d10Texture2DDesc.MipLevels];
        char* pTexPos = (char*)pTextureData;
        unsigned int pitch = GetPitch( width, (TextureTypes)textureType );
        unsigned int count = 0;
        unsigned int max = d3d10Texture2DDesc.MipLevels;
        while( count < max )
        {
            pSubResourceData[count].pSysMem = pTexPos;
            pSubResourceData[count].SysMemPitch = pitch;
            pSubResourceData[count].SysMemSlicePitch = 0;
            pTexPos += pitch * height;
            pitch >>= 1;
            count++;
        }
    }

    if ( FAILED( pDevice->CreateTexture2D( &d3d10Texture2DDesc, pSubResourceData, &mpTexture ) ) )
    {
        delete[] pSubResourceData; // don't leak the descriptions on failure
        return false;
    }
    if ( pSubResourceData )
    {
        delete[] pSubResourceData;
        pSubResourceData = NULL;
    }

    mWidth = width;
    mHeight = height;
    mFormat = (TextureTypes)textureType;

    D3D10_SHADER_RESOURCE_VIEW_DESC d3d10ShaderResourceViewDesc;
    d3d10ShaderResourceViewDesc.Format = d3d10Texture2DDesc.Format;
    d3d10ShaderResourceViewDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
    d3d10ShaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
    d3d10ShaderResourceViewDesc.Texture2D.MipLevels = GetNumMipMaps( width, height, bMipmapped );
    if ( FAILED( pDevice->CreateShaderResourceView( mpTexture, &d3d10ShaderResourceViewDesc, &mpView ) ) )
    {
        return false;
    }

    ResourceRecorder::Instance()->AddResource( this );
    return true;
}
With that function, all you need to do is pass in the white-to-black texture data. For example, to write a 256x256 texture where each horizontal line is one shade brighter than the previous line, the following code will work:
int* pTexture = new int[256 * 256];
int count = 0;
while( count < 256 )
{
    int count2 = 0;
    while( count2 < 256 )
    {
        pTexture[(count * 256) + count2] = 0xff000000 | (count << 16) | (count << 8) | count;
        count2++;
    }
    count++;
}
Make sure you follow the rules in the "Resource Usage Restrictions" section:
MSDN: D3D10_USAGE
public void NewData(byte[] newData)
{
    DataRectangle mappedTex = null;
    // assign and lock the resource
    mappedTex = pTexture.Map(0, D3D10.MapMode.WriteDiscard, D3D10.MapFlags.None);
    // if unable to write to the texture
    if (!mappedTex.Data.CanWrite)
    {
        throw new ApplicationException("Cannot Write to the Texture");
    }
    // write new data to the texture
    mappedTex.Data.WriteRange<byte>(newData);
    // unlock the resource
    pTexture.Unmap(0);
    if (samplerflag)
        temptex = newData;
}
This overwrites the whole buffer on every new frame; you may want to use D3D10.MapMode.ReadWrite or something similar if you are only trying to write one texel.
You will also need to write to the DataRectangle at a specific offset, using one of the other write functions.
