The conversion between IplImage and AVFrame - opencv

I use sws_scale to convert an AVFrame to an IplImage, then use OpenCV to do some operations on the image, then convert the IplImage back to an AVFrame, but the result looks more purple than the original image. This is my code:
int av2ipl(AVFrame *src, IplImage *dst, int height, int width)
{
    struct SwsContext *swscontext = sws_getContext(width, height, PIX_FMT_YUV420P,
                                                   dst->width, dst->height, PIX_FMT_BGR24,
                                                   SWS_BILINEAR, 0, 0, 0);
    if (swscontext == 0)
        return 0;
    int linesize[4] = { dst->widthStep, 0, 0, 0 };
    sws_scale(swscontext, src->data, src->linesize, 0, height,
              (uint8_t **)&(dst->imageData), linesize);
    return 1;
}

int ipl2av(IplImage *src, AVFrame *dst, int height, int width)
{
    struct SwsContext *swscontext = sws_getContext(width, height, PIX_FMT_BGR24,
                                                   width, height, PIX_FMT_YUV420P,
                                                   SWS_BILINEAR, 0, 0, 0);
    if (swscontext == 0)
        return 0;
    int linesize[4] = { src->widthStep, 0, 0, 0 };
    sws_scale(swscontext, (uint8_t **)&(src->imageData), linesize, 0, height,
              dst->data, dst->linesize);
    return 1;
}
Is there anything wrong with the code?
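For reference, a minimal sketch of how these two helpers might be called around an OpenCV operation. The process_frame wrapper and the cvSmooth call here are illustrative assumptions, not part of the original code; the decoded frame is assumed to be YUV420P with the given dimensions.

/* Assumed usage context, not from the original post. */
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <opencv/cv.h>

static void process_frame(AVFrame *frame, int width, int height)
{
    IplImage *img = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3); /* BGR24 */
    if (av2ipl(frame, img, height, width)) {
        cvSmooth(img, img, CV_GAUSSIAN, 3, 3, 0, 0); /* any OpenCV operation */
        ipl2av(img, frame, height, width);           /* write the result back */
    }
    cvReleaseImage(&img);
}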

Related

CVPixelBufferCreate does not care about planar format

I am trying to rotate a CoreVideo '420f' image without converting it to RGBA.
The incoming CMSampleBuffer's Y-plane bytesPerRow is width + 32.
That means the Y-plane row size is 8 bits * width + sizeof(CVPlanarComponentInfo).
But if I call CVPixelBufferCreate(..., '420f', ...), bytesPerRow == width.
CVPixelBufferCreate() does not seem to care about the planar format and does not add the extra 32 bytes.
I tried
    vImage_Buffer myYBuffer = { buf, height, width, bytesPerRow };
but there is no parameter for bitsPerPixel, so I cannot use it for the UV buffer. I also tried
    vImageBuffer_Init(buf, height, width, bitsPerPixel, flags);
but there is no parameter for bytesPerRow.
I would like to know how to create a vImage_Buffer or CVPixelBuffer with the '420f' planar format.
This is my work-in-progress code for the rotation:
NS_INLINE void dumpData(NSString* tag, unsigned char* p, size_t w) {
    NSMutableString* str = [tag mutableCopy];
    for (int i = 0; i < w + 100; ++i) {
        [str appendString:[NSString stringWithFormat:@"%02x ", *(p + i)]];
    }
    NSLog(@"%@", str);
}
- (CVPixelBufferRef) RotateBuffer:(CMSampleBufferRef)sampleBuffer withConstant:(uint8_t)rotationConstant
{
    vImage_Error err = kvImageNoError;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t outHeight = width;
    size_t outWidth = height;
    assert(CVPixelBufferGetPixelFormatType(imageBuffer) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange);
    assert(CVPixelBufferGetPlaneCount(imageBuffer) == 2);
    NSLog(@"YBuffer %ld %ld %ld", CVPixelBufferGetWidthOfPlane(imageBuffer, 0), CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
          CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)); // BytesPerRow = width + 32
    dumpData(@"Base=", CVPixelBufferGetBaseAddress(imageBuffer), width);
    dumpData(@"Plane0=", CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0), width);

    CVPixelBufferRef rotatedBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, outWidth, outHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, NULL, &rotatedBuffer);
    NSLog(@"CVPixelBufferCreate err=%d", ret);
    CVPixelBufferLockBaseAddress(rotatedBuffer, 0);
    NSLog(@"CVPixelBufferCreate init %ld %ld %ld p=%p", CVPixelBufferGetWidthOfPlane(rotatedBuffer, 0), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 0),
          CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 0), CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 0));
    // BytesPerRow = width ??? should be width + 32

    // rotate Y plane
    vImage_Buffer originalYBuffer = { CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0), CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
                                      CVPixelBufferGetWidthOfPlane(imageBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0) };
    vImage_Buffer rotatedYBuffer = { CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 0), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 0),
                                     CVPixelBufferGetWidthOfPlane(rotatedBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 0) };
    err = vImageRotate90_Planar8(&originalYBuffer, &rotatedYBuffer, 1, 0.0, kvImageNoFlags);
    NSLog(@"rotatedYBuffer rotated %ld %ld %ld p=%p", rotatedYBuffer.width, rotatedYBuffer.height, rotatedYBuffer.rowBytes, rotatedYBuffer.data);
    NSLog(@"RotateY err=%ld", err);
    dumpData(@"Rotated Plane0=", rotatedYBuffer.data, outWidth);

    // rotate UV plane
    vImage_Buffer originalUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1), CVPixelBufferGetHeightOfPlane(imageBuffer, 1),
                                       CVPixelBufferGetWidthOfPlane(imageBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1) };
    vImage_Buffer rotatedUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 1), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 1),
                                      CVPixelBufferGetWidthOfPlane(rotatedBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 1) };
    err = vImageRotate90_Planar16U(&originalUVBuffer, &rotatedUVBuffer, 1, 0.0, kvImageNoFlags);
    NSLog(@"RotateUV err=%ld", err);
    dumpData(@"Rotated Plane1=", rotatedUVBuffer.data, outWidth);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CVPixelBufferUnlockBaseAddress(rotatedBuffer, 0);
    return rotatedBuffer;
}
I found that the extra 32 bytes per row in a vImage_Buffer are optional. Some Apple APIs add 32 bytes to each row and some do not.
The code in the question actually works fine: CVPixelBufferCreate() creates a buffer without the extra 32 bytes per row, and vImageRotate90_Planar8() handles both layouts, with and without the extra 32 bytes.
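Incidentally, if a specific bytes-per-row alignment is ever required from CVPixelBufferCreate(), it can be requested through the pixel buffer attributes dictionary. A small sketch (the 64-byte alignment value is only an example, not something the original question asked for):

#include <CoreVideo/CoreVideo.h>

// Sketch: ask CoreVideo for a specific bytes-per-row alignment instead of
// relying on the default layout.
static CVPixelBufferRef createAlignedBuffer(size_t width, size_t height)
{
    int alignment = 64; // example alignment
    CFNumberRef alignValue = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &alignment);
    const void *keys[]   = { kCVPixelBufferBytesPerRowAlignmentKey };
    const void *values[] = { alignValue };
    CFDictionaryRef attrs = CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, attrs, &buffer);
    CFRelease(attrs);
    CFRelease(alignValue);
    return buffer; // caller releases with CVPixelBufferRelease()
}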

ImageMagick: saving MagickExportImagePixels's output blob to a gray image file?

The zbar engine sample source (zbarimg.c) shows the following:
https://github.com/ZBar/ZBar/blob/master/zbarimg/zbarimg.c
size_t bloblen = width * height;
unsigned char *blobdata = malloc(bloblen);
MagickExportImagePixels(images, 0, 0, width, height, "I", CharPixel, blobdata);
I'd like to inspect the blobdata.
How can I save the blobdata to a file?
I wrote a save_imgdata function to save the blobdata:
int save_imgdata(char* imgf, int width, int height, char *raw)
{
    PixelWand *p_wand = NULL;
    PixelIterator *iterator = NULL;
    PixelWand **pixels = NULL;
    unsigned long x, y;
    char hex[128];
    //MagickWandGenesis();
    p_wand = NewPixelWand();
    PixelSetColor(p_wand, "gray");
    //PixelSetColor(p_wand, "white");
    MagickWand *m_wand = NewMagickWand(); //CORE_RL_wand_.lib;
    MagickSetImageDepth(m_wand, 8);
    MagickNewImage(m_wand, width, height, p_wand);
    // Get a new pixel iterator
    iterator = NewPixelIterator(m_wand);
    for (y = 0; y < height; y++) {
        // Get the next row of the image as an array of PixelWands
        pixels = PixelGetNextIteratorRow(iterator, &x);
        // Set the row of wands to a simple gray scale gradient
        for (x = 0; x < width; x++) {
            sprintf(hex, "#%02x", *raw++);
            //sprintf(hex, "#%02x%02x%02x", *raw, *raw, *raw); raw++;
            PixelSetColor(pixels[x], hex);
        }
        // Sync writes the pixels back to the m_wand
        PixelSyncIterator(iterator);
    }
    MagickWriteImage(m_wand, imgf);
    DestroyMagickWand(m_wand);
    return 0;
}
Calling save_imgdata("imgw.bmp", width, height, blobdata) saves a 24bpp image.
What's wrong with save_imgdata? I want it to save an 8bpp gray image file.
Don't bother iterating and building dynamic color/pixel values -- it's slow and resource intensive. If the data came from an export method, then use the matching import method to restore it.
int save_imgdata(char* imgf, int width, int height, void *raw)
{
    MagickWand *wand;
    PixelWand *bgcolor;

    bgcolor = NewPixelWand();
    PixelSetColor(bgcolor, "WHITE");
    wand = NewMagickWand();
    MagickNewImage(wand, width, height, bgcolor);
    bgcolor = DestroyPixelWand(bgcolor);
    MagickSetImageDepth(wand, 8);
    MagickSetImageColorspace(wand, GRAYColorspace);
    MagickImportImagePixels(wand, 0, 0, width, height, "I", CharPixel, raw);
    MagickQuantizeImage(wand,
                        256,            // Reduce to 8bpp
                        GRAYColorspace, // Match colorspace
                        0,              // Calculate optimal tree depth
                        MagickTrue,     // Use dither? This changes in IM-7
                        MagickFalse);   // Measure error
    MagickWriteImage(wand, imgf);
    wand = DestroyMagickWand(wand);
    return 0;
}
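For completeness, a minimal sketch of how this could be driven from the zbarimg-style code above. It assumes blobdata, width, and height are the same values passed to MagickExportImagePixels, and that the wand environment is initialised once per process:

#include <wand/MagickWand.h>

MagickWandGenesis();   /* once per process */
/* ... MagickExportImagePixels(images, 0, 0, width, height, "I", CharPixel, blobdata); ... */
save_imgdata("imgw.png", width, height, blobdata);   /* writes an 8bpp gray file */
MagickWandTerminus();

The key point is that MagickImportImagePixels with the same "I" (intensity) map and CharPixel storage exactly mirrors the export call, so no per-pixel color strings are needed.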

How to compile vImage emboss effect sample code?

Here is the code found in the documentation:
int myEmboss(void *inData,
unsigned int inRowBytes,
void *outData,
unsigned int outRowBytes,
unsigned int height,
unsigned int width,
void *kernel,
unsigned int kernel_height,
unsigned int kernel_width,
int divisor ,
vImage_Flags flags ) {
uint_8 kernel = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
vImage_Buffer src = { inData, height, width, inRowBytes }; // 2
vImage_Buffer dest = { outData, height, width, outRowBytes }; // 3
unsigned char bgColor[4] = { 0, 0, 0, 0 }; // 4
vImage_Error err; // 5
err = vImageConvolve_ARGB8888( &src, //const vImage_Buffer *src
&dest, //const vImage_Buffer *dest,
NULL,
0, //unsigned int srcOffsetToROI_X,
0, //unsigned int srcOffsetToROI_Y,
kernel, //const signed int *kernel,
kernel_height, //unsigned int
kernel_width, //unsigned int
divisor, //int
bgColor,
flags | kvImageBackgroundColorFill
//vImage_Flags flags
);
return err;
}
Here is the problem: the kernel variable seems to refer to three different types:
void * kernel in the formal parameter list
an undefined unsigned int uint_8 kernel, as a new variable which presumably would shadow the formal parameter
a const signed int *kernel when calling vImageConvolve_ARGB8888.
Is this actual code? How can I compile this function?
You are correct that that function is pretty messed up. I recommend using the Provide Feedback widget to let Apple know.
I think you should remove the kernel, kernel_width, and kernel_height parameters from the function signature. Those seem to be holdovers from a function that applies a caller-supplied kernel, but this example is about applying an internally-defined kernel.
Fix the declaration of the kernel local variable to make it an array of uint8_t, like so:
uint8_t kernel[] = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
Then, at the call to vImageConvolve_ARGB8888(), replace kernel_width and kernel_height by 3. Since the kernel is hard-coded, the dimensions can be as well.
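Putting those changes together, a compile-ready sketch of the function could look like the following. This is only a sketch of the repaired example; the kernel is declared here as int16_t, the element type vImageConvolve_ARGB8888() takes (and the type the next answer also uses), and the divisor is fixed at 1:

#include <Accelerate/Accelerate.h>

// The kernel is hard-coded, so the kernel/width/height/divisor parameters
// are dropped and the 3x3 dimensions are passed as literals.
int myEmboss(void *inData, unsigned int inRowBytes,
             void *outData, unsigned int outRowBytes,
             unsigned int height, unsigned int width,
             vImage_Flags flags)
{
    const int16_t kernel[9] = { -2, -2, 0, -2, 6, 0, 0, 0, 0 }; // emboss kernel
    vImage_Buffer src  = { inData,  height, width, inRowBytes };
    vImage_Buffer dest = { outData, height, width, outRowBytes };
    Pixel_8888 bgColor = { 0, 0, 0, 0 };
    vImage_Error err = vImageConvolve_ARGB8888(&src, &dest, NULL, 0, 0,
                                               kernel, 3, 3, 1, bgColor,
                                               flags | kvImageBackgroundColorFill);
    return (int)err;
}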
The kernel is just the kernel used in the convolution. In mathematical terms, it is the matrix that is convolved with your image to achieve blur, sharpen, emboss, or other effects. The function you provided is just a thin wrapper around the vImage convolution function. To actually perform the convolution you can follow the code below. The code is all hand-typed, so it is not necessarily 100% correct, but it should point you in the right direction.
To use this function, you first need to have pixel access to your image. Assuming you have a UIImage, you do this:
//image is a UIImage
CGImageRef img = image.CGImage;
CGDataProviderRef dataProvider = CGImageGetDataProvider(img);
CFDataRef cfData = CGDataProviderCopyData(dataProvider);
void * dataPtr = (void*)CFDataGetBytePtr(cfData);
Next, you construct the vImage_Buffer that you will pass to the function
vImage_Buffer inBuffer, outBuffer;
inBuffer.data = dataPtr;
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
Allocate the outBuffer as well:
outBuffer.data = malloc(inBuffer.height * inBuffer.rowBytes);
// Set width, height, and rowBytes equal to inBuffer's here
Now we create the kernel, the same one in your example, which is a 3x3 matrix. Multiply the values by a divisor if they are floats (they need to be integers).
int divisor = 1000;
CGSize kernelSize = CGSizeMake(3, 3);
int16_t *kernel = (int16_t*)malloc(sizeof(int16_t) * 3 * 3);
// Assign the emboss kernel values here, e.g.
// {-2, -2, 0, -2, 6, 0, 0, 0, 0}, each multiplied by the divisor (1000)
Now perform the convolution on the image!
// Use a background of transparent black as temp
Pixel_8888 temp = { 0, 0, 0, 0 };
vImageConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, kernel, kernelSize.width, kernelSize.height, divisor, temp, kvImageBackgroundColorFill);
Now construct a new UIImage out of outBuffer and you're done!
Remember to free the kernel and the outBuffer data.
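For that last step, one option is to wrap outBuffer in a CoreGraphics bitmap context. A sketch, assuming an 8-bit premultiplied ARGB layout (the exact CGBitmapInfo flags depend on how the source CGImage stored its pixels):

#include <CoreGraphics/CoreGraphics.h>

// Wrap outBuffer's pixels in a CGImage; on the Objective-C side it can then
// be turned into a UIImage with [UIImage imageWithCGImage:].
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                         outBuffer.width, outBuffer.height,
                                         8, outBuffer.rowBytes, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGImageRef processed = CGBitmapContextCreateImage(ctx);
// ... use 'processed' (e.g. wrap it in a UIImage) ...
CGImageRelease(processed);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(outBuffer.data);   // as noted above, free the outBuffer data
free(kernel);           // and the malloc'ed kernel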
This is the way I am using it to process frames read from a video with AVAssetReader. This is a blur, but you can change the kernel to suit your needs. 'imageData' can of course be obtained by other means, e.g. from a UIImage.
CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *imageData = CVPixelBufferGetBaseAddress(imageBuffer);
int16_t kernel[9];
for (int i = 0; i < 9; i++) {
    kernel[i] = 1;
}
kernel[4] = 2;
// currSize is defined elsewhere in the original code, presumably width * height
unsigned char *newData = (unsigned char*)malloc(4 * currSize);
vImage_Buffer inBuff = { imageData, height, width, 4 * width };
vImage_Buffer outBuff = { newData, height, width, 4 * width };
vImage_Error err = vImageConvolve_ARGB8888(&inBuff, &outBuff, NULL, 0, 0, kernel, 3, 3, 10, nil, kvImageEdgeExtend);
if (err != kvImageNoError) NSLog(@"convolve error %ld", err);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
//newData holds the processed image
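One caveat when adapting this: a CVPixelBuffer's rows can be padded, so the real bytes-per-row may be larger than 4 * width. A safer variant takes the geometry from the buffer itself (a sketch; imageBuffer is the locked pixel buffer from above):

size_t width    = CVPixelBufferGetWidth(imageBuffer);
size_t height   = CVPixelBufferGetHeight(imageBuffer);
size_t rowBytes = CVPixelBufferGetBytesPerRow(imageBuffer);   // may exceed 4 * width
unsigned char *newData = (unsigned char*)malloc(rowBytes * height);
vImage_Buffer inBuff  = { imageData, height, width, rowBytes };
vImage_Buffer outBuff = { newData,   height, width, rowBytes };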

How to make a background image fit the screen in a BlackBerry application

I have written the following code. The background image is displayed, but it does not cover the full background.
private Bitmap background;
int mWidth = Display.getWidth();
int mHeight = Display.getHeight();

public MyScreen()
{
    // Set the displayed title of the screen
    //backgroundBitmap = Bitmap.getBitmapResource("slidimage.png");
    final Bitmap background = Bitmap.getBitmapResource("slidimage.png");
    HorizontalFieldManager vfm = new HorizontalFieldManager(USE_ALL_HEIGHT | USE_ALL_WIDTH) {
        public void paint(Graphics g) {
            g.drawBitmap(0, 0, mWidth, mHeight, background, 0, 0);
            super.paint(g);
        }
    };
    add(vfm);
}
public static Bitmap resizeBitmap(Bitmap image, int width, int height)
{
    int rgb[] = new int[image.getWidth() * image.getHeight()];
    image.getARGB(rgb, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
    int rgb2[] = rescaleArray(rgb, image.getWidth(), image.getHeight(), width, height);
    Bitmap temp2 = new Bitmap(width, height);
    temp2.setARGB(rgb2, 0, width, 0, 0, width, height);
    return temp2;
}
You can use the above method to resize the image: just pass in the image to be resized along with the target width and height, and the function will return the resized image. rescaleArray is the method below:
private static int[] rescaleArray(int[] ini, int x, int y, int x2, int y2)
{
    int out[] = new int[x2 * y2];
    for (int yy = 0; yy < y2; yy++)
    {
        int dy = yy * y / y2;
        for (int xx = 0; xx < x2; xx++)
        {
            int dx = xx * x / x2;
            out[(x2 * yy) + xx] = ini[(x * dy) + dx];
        }
    }
    return out;
}

How to take a screenshot in Vala

I have the following program, which does not work.
NOTE: compiled on Windows 7.
Gdk.Screen screen = Gdk.Screen.get_default ();
Gdk.Window rootWin2 = screen.get_active_window ();
int width, height;
rootWin2.get_size (out width, out height);
Gdk.Colormap? colormap = rootWin2.get_colormap ();
Gdk.Pixbuf? dest = new Gdk.Pixbuf (Gdk.Colorspace.RGB, false, 8, width, height);
Gdk.pixbuf_get_from_drawable (dest, rootWin2, colormap, 0, 0, 0, 0, width, height);
try {
    dest.save ("screenShoot2.jpg", "jpeg");
} catch (Error e) {
    stdout.printf ("\n error " + e.message + "\n");
}
using Gtk;

int main (string[] args) {
    Gtk.init (ref args);
    int width, height;
    Gdk.Window win = Gdk.get_default_root_window ();
    width = win.get_width ();
    height = win.get_height ();
    Gdk.Pixbuf screenshot = Gdk.pixbuf_get_from_window (win, 0, 0, width, height);
    try {
        screenshot.save ("screenshot.png", "png");
    } catch (Error e) {
        stdout.printf ("error: %s\n", e.message);
    }
    return 0;
}
// valac --pkg gtk+-3.0 --pkg gdk-3.0 screenshot.vala
