Artifacts when scaling YCbCr (420f) with Accelerate - iOS

I cannot find any documentation or example on how to resize YCbCr bi-planar, supposedly the main format you should use on iOS according to Apple. I tried to resize the two planes like this:
// resize luma
vImage_Buffer originalYBuffer = { CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0), CVPixelBufferGetHeightOfPlane(pixelBuffer, 0), CVPixelBufferGetWidthOfPlane(pixelBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0) };
vImage_Buffer resizedYBuffer;
vImageBuffer_Init(&resizedYBuffer, IMAGE_HEIGHT, IMAGE_WIDTH, 8 * sizeof(Pixel_8), kvImageNoFlags);
error = vImageScale_Planar8(&originalYBuffer, &resizedYBuffer, NULL, kvImageNoFlags);
assert(!error);
cv::Mat grey(IMAGE_HEIGHT, IMAGE_WIDTH, CV_8UC1, resizedYBuffer.data);
// resize chroma
vImage_Buffer originalUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1), CVPixelBufferGetHeightOfPlane(pixelBuffer, 1), CVPixelBufferGetWidthOfPlane(pixelBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) };
vImage_Buffer resizedUVBuffer;
vImageBuffer_Init(&resizedUVBuffer, IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2, 8 * sizeof(Pixel_16U), kvImageNoFlags);
error = vImageScale_Planar8(&originalUVBuffer, &resizedUVBuffer, NULL, kvImageNoFlags);
assert(!error);
But the colors are totally borked. The luma channel works by itself, so the problem is with the chroma. This format is supposed to use 2 bytes per chroma sample (interleaved Cb and Cr), although I'm not totally sure. If I use vImageScale_Planar8 I get half of the screen green, and if I use vImageScale_Planar16U I get blue/yellow noise all over the image.

You can use vImageScale_CbCr8 for the UV buffer, but it's only available on iOS 10+:
// resize luma
vImage_Buffer originalYBuffer = { CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0), CVPixelBufferGetHeightOfPlane(pixelBuffer, 0), CVPixelBufferGetWidthOfPlane(pixelBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0) };
vImage_Buffer resizedYBuffer;
vImageBuffer_Init(&resizedYBuffer, IMAGE_HEIGHT, IMAGE_WIDTH, 8 * sizeof(Pixel_8), kvImageNoFlags);
error = vImageScale_Planar8(&originalYBuffer, &resizedYBuffer, NULL, kvImageNoFlags);
assert(!error);
cv::Mat grey(IMAGE_HEIGHT, IMAGE_WIDTH, CV_8UC1, resizedYBuffer.data);
// resize chroma
vImage_Buffer originalUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1), CVPixelBufferGetHeightOfPlane(pixelBuffer, 1), CVPixelBufferGetWidthOfPlane(pixelBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) };
vImage_Buffer resizedUVBuffer;
vImageBuffer_Init(&resizedUVBuffer, IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2, 8 * sizeof(Pixel_16U), kvImageNoFlags);
error = vImageScale_CbCr8(&originalUVBuffer, &resizedUVBuffer, NULL, kvImageNoFlags);
assert(!error);

Got the answer from the Apple engineers: vImageScale_Planar8 cannot operate on the UV plane, because it is interleaved. The only solution is to split it into two independent planes, scale each of them, and re-interleave the result.
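For reference, here is a minimal sketch of that split-scale-reinterleave approach, reusing originalUVBuffer, resizedUVBuffer, IMAGE_WIDTH and IMAGE_HEIGHT from the code above; the buffer names srcCb/srcCr/dstCb/dstCr are illustrative, and the de/re-interleaving is done with plain C loops. On iOS 10+ vImageScale_CbCr8 makes all of this unnecessary.
// Split the interleaved CbCr plane into two planar buffers, scale each with
// vImageScale_Planar8, then re-interleave into the destination CbCr buffer.
vImage_Buffer srcCb, srcCr, dstCb, dstCr;
vImageBuffer_Init(&srcCb, originalUVBuffer.height, originalUVBuffer.width, 8, kvImageNoFlags);
vImageBuffer_Init(&srcCr, originalUVBuffer.height, originalUVBuffer.width, 8, kvImageNoFlags);
vImageBuffer_Init(&dstCb, IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2, 8, kvImageNoFlags);
vImageBuffer_Init(&dstCr, IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2, 8, kvImageNoFlags);
// De-interleave CbCrCbCr... into separate Cb and Cr planes
for (vImagePixelCount y = 0; y < originalUVBuffer.height; y++) {
    const uint8_t *uv = (const uint8_t *)originalUVBuffer.data + y * originalUVBuffer.rowBytes;
    uint8_t *cb = (uint8_t *)srcCb.data + y * srcCb.rowBytes;
    uint8_t *cr = (uint8_t *)srcCr.data + y * srcCr.rowBytes;
    for (vImagePixelCount x = 0; x < originalUVBuffer.width; x++) {
        cb[x] = uv[2 * x];
        cr[x] = uv[2 * x + 1];
    }
}
// Scale each chroma plane independently
error = vImageScale_Planar8(&srcCb, &dstCb, NULL, kvImageNoFlags);
assert(!error);
error = vImageScale_Planar8(&srcCr, &dstCr, NULL, kvImageNoFlags);
assert(!error);
// Re-interleave the scaled planes into the resized CbCr buffer
for (vImagePixelCount y = 0; y < dstCb.height; y++) {
    uint8_t *uv = (uint8_t *)resizedUVBuffer.data + y * resizedUVBuffer.rowBytes;
    const uint8_t *cb = (const uint8_t *)dstCb.data + y * dstCb.rowBytes;
    const uint8_t *cr = (const uint8_t *)dstCr.data + y * dstCr.rowBytes;
    for (vImagePixelCount x = 0; x < dstCb.width; x++) {
        uv[2 * x] = cb[x];
        uv[2 * x + 1] = cr[x];
    }
}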

Related

Strange errors during drawing hollow circles in the 3D space

I am trying to draw two hollow circles that surround a cube located at the (0, 0, 0) position.
So far I've implemented the cube and the two circles; here is what I get.
There are two strange things happening here.
One is that I want to draw only the circles, but I can see lines radiating from the origin.
And two is that the colors are interpolated, even though I set just one color for the fragment shader.
Here you can see clearly those lines with the interpolated colors...
Here are my vertex shader code and fragment shader code:
"use strict";
const loc_aPosition = 1;
const loc_aColor = 2;
const loc_UVCoord = 3;
const VSHADER_SOURCE =
`#version 300 es
layout(location=${loc_aPosition}) in vec4 aPosition;
layout(location=${loc_aColor}) in vec4 aColor;
layout(location=${loc_UVCoord}) in vec2 UVCoord;
out vec4 vColor;
out vec2 vUVCoord;
uniform mat4 uMVP;
void main()
{
gl_Position = uMVP * aPosition;
vColor = aColor;
vUVCoord = UVCoord;
}`;
const FSHADER_SOURCE =
`#version 300 es
precision mediump float;
in vec4 vColor;
out vec4 fColor;
void main()
{
fColor = vColor;
}`;
And here are the initialization functions for the two circles; the only difference between them is the target plane.
function init_equator(gl)
{
let vertices = []; // for the vertices
let color = [1, 0, 0]; // red color
for(var i = 0; i <= 360; i+=10)
{
let j = i * Math.PI/180;
let vert = [R * Math.cos(j), 0, R * Math.sin(j)]; // drawing a circle at the XZ plane since it has to be an equator for the cube...
vertices.push( vert[0], vert[1], vert[2] ); // push the vertices
vertices.push( color[0], color[1], color[2]); // set the color
}
const SZ = vertices.BYTES_PER_ELEMENT;
let vao = gl.createVertexArray();
gl.bindVertexArray(vao);
let vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
gl.vertexAttribPointer(loc_aPosition, 3, gl.FLOAT, false, SZ * 6, 0); // stride is 6, 3 for positions and 3 for the color
gl.enableVertexAttribArray(loc_aPosition);
gl.vertexAttribPointer(loc_aColor, 3, gl.FLOAT, false, SZ * 6, SZ * 3); // stride is 6 elements; the offset is SZ * 3 because the 3 color elements are located after the 3 position elements
gl.enableVertexAttribArray(loc_aColor);
gl.bindVertexArray(null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
return { vao, n : vertices.length / 3 }; // divide by 3 since each vertex has three coordinates
}
function init_latitude(gl)
{
let vertices = []; // for the vertices
let color = [1, 0, 0]; // supposed to be the red
for(var i = 0; i <= 360; i+=10)
{
let j = i * Math.PI/180;
let vert = [0, R * Math.cos(j), R * Math.sin(j)]; // drawing a circle on the YZ plane
vertices.push( vert[0], vert[1], vert[2] );
vertices.push( color[0], color[1], color[2]);
}
const SZ = vertices.BYTES_PER_ELEMENT;
let vao = gl.createVertexArray();
gl.bindVertexArray(vao);
let vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
gl.vertexAttribPointer(loc_aPosition, 3, gl.FLOAT, false, SZ * 6, 0); // stride is 6, 3 for positions and 3 for the color
gl.enableVertexAttribArray(loc_aPosition);
gl.vertexAttribPointer(loc_aColor, 3, gl.FLOAT, false, SZ * 6, SZ * 3); // stride is 6 elements; the offset is SZ * 3 because the 3 color elements are located after the 3 position elements
gl.enableVertexAttribArray(loc_aColor);
gl.bindVertexArray(null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
return { vao, n : vertices.length / 3 }; // divide by 3 since each vertex has three coordinates
}
I referenced these drawing functions from here: drawing circle.
In the main function I call the draw functions like this:
........
MVP.setOrtho(LEFT, RIGHT, BOTTOM, TOP, NEAR, FAR); // setting MVP matrix to orthographic mode
MVP.lookAt(FIXED_X, FIXED_Y, FIXED_Z, 0,0,0, 0,1,0); // Eye position x, y, z Look at position 0, 0, 0 Up vector 0, 1, 0
gl.uniformMatrix4fv(loc_MVP, false, MVP.elements);
gl.bindVertexArray(cube.vao);
gl.drawElements(gl.TRIANGLES, cube.n, gl.UNSIGNED_BYTE, 0)
gl.bindVertexArray(null);
gl.bindVertexArray(equator.vao);
gl.drawArrays(gl.LINE_LOOP, 0, equator.n);
gl.bindVertexArray(null);
gl.bindVertexArray(latitudeCircle.vao);
gl.drawArrays(gl.LINE_LOOP, 0, latitudeCircle.n);
gl.bindVertexArray(null);
I have no idea why the lines are radiating from the origin, or why the colors are mixed...
Could somebody help me?
The problem is this line, which appears twice in the code you posted:
const SZ = vertices.BYTES_PER_ELEMENT;
SZ will be undefined, because vertices is a native JavaScript array, not a typed array like Float32Array, so it has no BYTES_PER_ELEMENT property (use Float32Array.BYTES_PER_ELEMENT instead). After that, every calculation with SZ will be 0 or NaN.
In other words these lines
gl.vertexAttribPointer(loc_aPosition, 3, gl.FLOAT, false, SZ * 6, 0);
gl.vertexAttribPointer(loc_aColor, 3, gl.FLOAT, false, SZ * 6, SZ * 3);
will effectively be
gl.vertexAttribPointer(loc_aPosition, 3, gl.FLOAT, false, 0, 0);
gl.vertexAttribPointer(loc_aColor, 3, gl.FLOAT, false, 0, 0);
Which means every other position is a color, and every other color is a position, which explains why the lines go to the center and why the colors are interpolated.
Note that if you had stepped through the code in a debugger you'd probably have seen this issue, so it would be good to learn how to use the debugger.
Also FYI unrelated to your issue you don't need to call gl.bindVertexArray twice in a row, once with null and once with the next thing you want to draw with.
this
gl.bindVertexArray(cube.vao);
gl.drawElements(gl.TRIANGLES, cube.n, gl.UNSIGNED_BYTE, 0)
gl.bindVertexArray(null);
gl.bindVertexArray(equator.vao);
gl.drawArrays(gl.LINE_LOOP, 0, equator.n);
gl.bindVertexArray(null);
gl.bindVertexArray(latitudeCircle.vao);
gl.drawArrays(gl.LINE_LOOP, 0, latitudeCircle.n);
gl.bindVertexArray(null);
can just be this
gl.bindVertexArray(cube.vao);
gl.drawElements(gl.TRIANGLES, cube.n, gl.UNSIGNED_BYTE, 0)
gl.bindVertexArray(equator.vao);
gl.drawArrays(gl.LINE_LOOP, 0, equator.n);
gl.bindVertexArray(latitudeCircle.vao);
gl.drawArrays(gl.LINE_LOOP, 0, latitudeCircle.n);
gl.bindVertexArray(null); // this is also not technically needed
Also, you can use the spread operator.
This
vertices.push( vert[0], vert[1], vert[2] ); // push the vertices
vertices.push( color[0], color[1], color[2]); // set the color
can be this
vertices.push( ...vert ); // push the vertices
vertices.push( ...color ); // set the color
Also you might find these tutorials useful.

CVPixelBufferCreate does not care about planar format

I am trying to rotate a CoreVideo '420f' image without converting it to RGBA.
The incoming CMSampleBuffer's Y-plane bytesPerRow is width + 32.
That would mean the Y-plane row size is 8 bits * width + sizeof(CVPlanarComponentInfo).
But if I call CVPixelBufferCreate(,,,'420f',,), bytesPerRow == width.
CVPixelBufferCreate() does not seem to care about the planar format and does not add the 32 bytes.
I tried
vImage_Buffer myYBuffer = {buf, height, width, bytePerRow};
But there is no parameter for bitsPerPixel, so I cannot use it for the UV buffer.
I tried
vImageBuffer_Init(buf, height, width, bitPerPixel, flag);
But there is no parameter for bytesPerRow.
I would like to know how to create a vImageBuffer or CVPixelBuffer with the '420f' planar format.
This is my work-in-progress code for the rotation:
NS_INLINE void dumpData(NSString* tag, unsigned char* p, size_t w) {
NSMutableString* str = [tag mutableCopy];
for(int i=0;i<w+100;++i) {
[str appendString:[NSString stringWithFormat:@"%02x ", *(p + i)]];
}
NSLog(#"%#", str);
}
- (CVPixelBufferRef) RotateBuffer:(CMSampleBufferRef)sampleBuffer withConstant:(uint8_t)rotationConstant
{
vImage_Error err = kvImageNoError;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t outHeight = width;
size_t outWidth = height;
assert(CVPixelBufferGetPixelFormatType(imageBuffer) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange);
assert(CVPixelBufferGetPlaneCount(imageBuffer) == 2);
NSLog(#"YBuffer %ld %ld %ld", CVPixelBufferGetWidthOfPlane(imageBuffer, 0), CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)); // BytesPerRow = width + 32
dumpData(#"Base=", CVPixelBufferGetBaseAddress(imageBuffer), width);
dumpData(#"Plane0=", CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0), width);
CVPixelBufferRef rotatedBuffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, outWidth, outHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, NULL, &rotatedBuffer);
NSLog(#"CVPixelBufferCreate err=%d", ret);
CVPixelBufferLockBaseAddress(rotatedBuffer, 0);
NSLog(#"CVPixelBufferCreate init %ld %ld %ld p=%p", CVPixelBufferGetWidthOfPlane(rotatedBuffer, 0), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 0),
CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 0), CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 0));
// BytesPerRow = width ??? should be width + 32
// rotate Y plane
vImage_Buffer originalYBuffer = { CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0), CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
CVPixelBufferGetWidthOfPlane(imageBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0) };
vImage_Buffer rotatedYBuffer = { CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 0), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 0),
CVPixelBufferGetWidthOfPlane(rotatedBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 0) };
err = vImageRotate90_Planar8(&originalYBuffer, &rotatedYBuffer, 1, 0.0, kvImageNoFlags);
NSLog(#"rotatedYBuffer rotated %ld %ld %ld p=%p", rotatedYBuffer.width, rotatedYBuffer.height, rotatedYBuffer.rowBytes, rotatedYBuffer.data);
NSLog(#"RotateY err=%ld", err);
dumpData(#"Rotated Plane0=", rotatedYBuffer.data, outWidth);
// rotate UV plane
vImage_Buffer originalUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1), CVPixelBufferGetHeightOfPlane(imageBuffer, 1),
CVPixelBufferGetWidthOfPlane(imageBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1) };
vImage_Buffer rotatedUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 1), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 1),
CVPixelBufferGetWidthOfPlane(rotatedBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 1) };
err = vImageRotate90_Planar16U(&originalUVBuffer, &rotatedUVBuffer, 1, 0.0, kvImageNoFlags);
NSLog(#"RotateUV err=%ld", err);
dumpData(#"Rotated Plane1=", rotatedUVBuffer.data, outWidth);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferUnlockBaseAddress(rotatedBuffer, 0);
return rotatedBuffer;
}
I found that the extra 32 bytes per row in a vImage buffer is optional alignment padding. Some Apple APIs add 32 bytes of padding to each row; some do not.
The code in the question actually works fine: CVPixelBufferCreate() creates a buffer without the extra 32 bytes per row, and vImageRotate90_Planar8() supports both layouts, with and without the padding.
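Incidentally, if you do want CVPixelBufferCreate() to produce padded rows, you can request a bytes-per-row alignment through the pixel buffer attributes dictionary. A minimal sketch, reusing outWidth and outHeight from the code above; the 64-byte alignment value is just an example, and Core Video may round rowBytes up further:
int alignment = 64; // example value
CFNumberRef alignmentValue = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &alignment);
const void *keys[] = { kCVPixelBufferBytesPerRowAlignmentKey };
const void *values[] = { alignmentValue };
CFDictionaryRef attrs = CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                                           &kCFTypeDictionaryKeyCallBacks,
                                           &kCFTypeDictionaryValueCallBacks);
CVPixelBufferRef buffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, outWidth, outHeight,
                                   kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                   attrs, &buffer);
CFRelease(attrs);
CFRelease(alignmentValue);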

Plot histogram of Sobel operator magnitude and angle in OpenCV

I want to plot a histogram in OpenCV C++, where the x-axis is the angle and the y-axis is the magnitude. I calculate the magnitude and angle using the Sobel operator. Now how can I plot a histogram from the magnitude and angle?
Thanks in advance. A simplified version of the code is:
// Read image
Mat img = imread("abs.jpg");
img.convertTo(img, CV_32F, 1 / 255.0);
/*GaussianBlur(img, img, Size(3, 3), 0, 0, BORDER_CONSTANT);*/
// Calculate gradients gx, gy
Mat gx, gy;
Sobel(img, gx, CV_32F, 1, 0, 1);
Sobel(img, gy, CV_32F, 0, 1, 1);
// C++ Calculate gradient magnitude and direction (in degrees)
Mat mag, angle;
cartToPolar(gx, gy, mag, angle, 1);
imshow("magnitude of image is", mag);
imshow("angle of image is", angle);
OK, so the first part is to calculate the histogram of each of them. Since the two are already separated (each in its own Mat), we do not have to split them, and we can use them directly in OpenCV's calcHist function.
From the documentation we have:
void calcHist(const Mat* images, int nimages, const int* channels, InputArray mask, OutputArray hist, int dims, const int* histSize, const float** ranges, bool uniform=true, bool accumulate=false )
So you would have to do:
cv::Mat histMag, histAng;
// number of bins of the histogram, adjust to your liking
int histSize = 10;
// degrees goes from 0-360 if radians then change acordingly
float rangeAng[] = { 0, 360} ;
const float* histRangeAng = { rangeAng };
double minval, maxval;
// get the range for the magnitude
cv::minMaxLoc(mag, &minval, &maxval);
float rangeMag[] = { static_cast<float>(minval), static_cast<float>(maxval)} ;
const float* histRangeMag = { rangeMag };
cv::calcHist(&mag, 1, 0, cv::noArray(), histMag, 1, &histSize, &histRangeMag, true, false);
cv::calcHist(&angle, 1, 0, cv::noArray(), histAng, 1, &histSize, &histRangeAng, true, false);
Now you have to plot the two histograms found in histMag and histAng.
In the tutorial I posted in the comments the plot uses lines; for the angle it would be something like this:
// Draw the histograms for B, G and R
int hist_w = 512; int hist_h = 400;
int bin_w = cvRound( (double) hist_w/histSize );
cv::Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
cv::normalize(histAng, histAng, 0, histImage.rows, cv::NORM_MINMAX, -1, Mat() );
// Draw the lines
for( int i = 1; i < histSize; i++ )
{
cv::line( histImage, cv::Point( bin_w*(i-1), hist_h - cvRound(histAng.at<float>(i-1)) ) ,
cv::Point( bin_w*(i), hist_h - cvRound(histAng.at<float>(i)) ),
cv::Scalar( 255, 0, 0), 2, 8, 0 );
}
With this you can do the same for the magnitude, or maybe turn it into a function which draws histograms if they are supplied.
The documentation shows another option: drawing rectangles as the bins. Adapting it to our case, we get something like:
// Draw the histograms for B, G and R
int hist_w = 512; int hist_h = 400;
int bin_w = std::round( static_cast<double>(hist_w)/static_cast<double>(histSize) );
cv::Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
cv::normalize(histAng, histAng, 0, histImage.rows, cv::NORM_MINMAX, -1, Mat() );
for( int i = 1; i < histSize; i++ )
{
cv::rectangle(histImage, cv::Point(bin_w*(i-1), hist_h - static_cast<int>(std::round(histAng.at<float>(i-1)))), cv::Point(bin_w*(i), hist_h), cv::Scalar(255, 0, 0), cv::FILLED);
}
Again, this can be done for the magnitude in the same way. These are super simple plots; if you need more complex or prettier plots, you may need to call an external library and pass it the data from the calculated histograms. Also, this code has not been tested, so it may have a typo or error, but if something fails, just write a comment and we can find a solution.
I hope this helps, and sorry for the late answer.

Duplicate / Copy CVPixelBufferRef with CVPixelBufferCreate

I need to create a copy of a CVPixelBufferRef in order to be able to manipulate the original pixel buffer in a bit-wise fashion using the values from the copy. I cannot seem to achieve this with CVPixelBufferCreate, or with CVPixelBufferCreateWithBytes.
According to this question, it could possibly also be done with memcpy(). However, there is no explanation on how this would be achieved, and which Core Video library calls would be needed regardless.
This seems to work:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Get pixel buffer info
const int kBytesPerPixel = 4;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
uint8_t *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
// Copy the pixel buffer
CVPixelBufferRef pixelBufferCopy = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, kCVPixelFormatType_32BGRA, NULL, &pixelBufferCopy);
CVPixelBufferLockBaseAddress(pixelBufferCopy, 0);
uint8_t *copyBaseAddress = CVPixelBufferGetBaseAddress(pixelBufferCopy);
memcpy(copyBaseAddress, baseAddress, bufferHeight * bytesPerRow);
// Do what needs to be done with the 2 pixel buffers,
// then unlock both buffers again:
CVPixelBufferUnlockBaseAddress(pixelBufferCopy, 0);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
ooOlly's code was not working for me with YUV pixel buffers in all cases (a green line at the bottom and a SIGTRAP in memcpy), so the following works in Swift for YUV pixel buffers from the camera:
var copyOut: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer), CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &copyOut)
let copy = copyOut!
CVPixelBufferLockBaseAddress(copy, [])
CVPixelBufferLockBaseAddress(pixelBuffer, [])
let ydestPlane = CVPixelBufferGetBaseAddressOfPlane(copy, 0)
let ysrcPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
memcpy(ydestPlane, ysrcPlane, CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 0))
let uvdestPlane = CVPixelBufferGetBaseAddressOfPlane(copy, 1)
let uvsrcPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
memcpy(uvdestPlane, uvsrcPlane, CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 1))
CVPixelBufferUnlockBaseAddress(copy, [])
CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
Better error handling than the force unwrap is strongly suggested, of course.
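Note that the source and destination planes can have different bytesPerRow, which is exactly the kind of mismatch behind the green line and the memcpy crash mentioned above. A row-by-row copy is safer than one big memcpy. Here is a minimal sketch in C; the helper name copyPlane is illustrative, and both buffers must already have their base addresses locked:
static void copyPlane(CVPixelBufferRef src, CVPixelBufferRef dst, size_t planeIndex) {
    const uint8_t *srcBase = CVPixelBufferGetBaseAddressOfPlane(src, planeIndex);
    uint8_t *dstBase = CVPixelBufferGetBaseAddressOfPlane(dst, planeIndex);
    size_t srcRowBytes = CVPixelBufferGetBytesPerRowOfPlane(src, planeIndex);
    size_t dstRowBytes = CVPixelBufferGetBytesPerRowOfPlane(dst, planeIndex);
    size_t height = CVPixelBufferGetHeightOfPlane(src, planeIndex);
    // Copy only the smaller of the two row lengths, one row at a time,
    // so differing row padding cannot cause an out-of-bounds read or write.
    size_t copyBytes = srcRowBytes < dstRowBytes ? srcRowBytes : dstRowBytes;
    for (size_t y = 0; y < height; y++) {
        memcpy(dstBase + y * dstRowBytes, srcBase + y * srcRowBytes, copyBytes);
    }
}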
Maxi Mus's code only deals with RGB/BGR buffers,
so for YUV buffers the code below should work.
// Copy the pixel buffer
CVPixelBufferRef pixelBufferCopy = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, pixelFormat, NULL, &pixelBufferCopy);
CVPixelBufferLockBaseAddress(pixelBufferCopy, 0);
//BGR
// uint8_t *copyBaseAddress = CVPixelBufferGetBaseAddress(pixelBufferCopy);
// memcpy(copyBaseAddress, baseAddress, bufferHeight * bytesPerRow);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBufferCopy, 0);
//YUV
uint8_t *yPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 0));
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBufferCopy, 1);
uint8_t *uvPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 1));
CVPixelBufferUnlockBaseAddress(pixelBufferCopy, 0);

How to compile vImage emboss effect sample code?

Here is the code found in the documentation:
int myEmboss(void *inData,
unsigned int inRowBytes,
void *outData,
unsigned int outRowBytes,
unsigned int height,
unsigned int width,
void *kernel,
unsigned int kernel_height,
unsigned int kernel_width,
int divisor ,
vImage_Flags flags ) {
uint_8 kernel = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
vImage_Buffer src = { inData, height, width, inRowBytes }; // 2
vImage_Buffer dest = { outData, height, width, outRowBytes }; // 3
unsigned char bgColor[4] = { 0, 0, 0, 0 }; // 4
vImage_Error err; // 5
err = vImageConvolve_ARGB8888( &src, //const vImage_Buffer *src
&dest, //const vImage_Buffer *dest,
NULL,
0, //unsigned int srcOffsetToROI_X,
0, //unsigned int srcOffsetToROI_Y,
kernel, //const signed int *kernel,
kernel_height, //unsigned int
kernel_width, //unsigned int
divisor, //int
bgColor,
flags | kvImageBackgroundColorFill
//vImage_Flags flags
);
return err;
}
Here is the problem: the kernel variable seems to refer to three different types:
void * kernel in the formal parameter list
an undeclared type uint_8 used for a new kernel variable, which would presumably shadow the formal parameter
a const signed int *kernel when calling vImageConvolve_ARGB8888.
Is this actual code? How may I compile this function?
You are correct that that function is pretty messed up. I recommend using the Provide Feedback widget to let Apple know.
I think you should remove the kernel, kernel_width, and kernel_height parameters from the function signature. Those seem to be holdovers from a function that applies a caller-supplied kernel, but this example is about applying an internally-defined kernel.
Fix the declaration of the kernel local variable to make it an array of int16_t (the type vImageConvolve_ARGB8888 actually expects for its kernel, and one that can hold the negative entries), like so:
int16_t kernel[] = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
Then, at the call to vImageConvolve_ARGB8888(), replace kernel_width and kernel_height by 3. Since the kernel is hard-coded, the dimensions can be as well.
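Putting those fixes together, a compilable version of the sample might look like this; this is a sketch based on the changes described above, not Apple's official correction:
int myEmboss(void *inData, unsigned int inRowBytes,
             void *outData, unsigned int outRowBytes,
             unsigned int height, unsigned int width,
             int divisor, vImage_Flags flags) {
    // Hard-coded 3x3 emboss kernel; int16_t is what vImageConvolve_ARGB8888 expects
    int16_t kernel[9] = { -2, -2, 0, -2, 6, 0, 0, 0, 0 };
    vImage_Buffer src  = { inData,  height, width, inRowBytes };
    vImage_Buffer dest = { outData, height, width, outRowBytes };
    Pixel_8888 bgColor = { 0, 0, 0, 0 };
    return (int)vImageConvolve_ARGB8888(&src, &dest, NULL, 0, 0,
                                        kernel, 3, 3, divisor, bgColor,
                                        flags | kvImageBackgroundColorFill);
}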
The kernel is just the kernel used in the convolution. In mathematical terms, it is the matrix that is convolved with your image to achieve blur/sharpen/emboss or other effects. The function you provided is just a thin wrapper around the vImage convolution function. To actually perform the convolution you can follow the code below. The code is all hand-typed, so not necessarily 100% correct, but it should point you in the right direction.
To use this function, you first need to have pixel access to your image. Assuming you have a UIImage, you do this:
//image is a UIImage
CGImageRef img = image.CGImage;
CGDataProviderRef dataProvider = CGImageGetDataProvider(img);
CFDataRef cfData = CGDataProviderCopyData(dataProvider);
void * dataPtr = (void*)CFDataGetBytePtr(cfData);
Next, you construct the vImage_Buffer that you will pass to the function
vImage_Buffer inBuffer, outBuffer;
inBuffer.data = dataPtr;
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
Allocate the outBuffer as well
outBuffer.data = malloc(inBuffer.height * inBuffer.rowBytes);
// Set up width, height, rowBytes equal to inBuffer's
outBuffer.width = inBuffer.width;
outBuffer.height = inBuffer.height;
outBuffer.rowBytes = inBuffer.rowBytes;
Now we create the kernel, the same one as in your example, which is a 3x3 matrix.
If the kernel values are float, multiply them by a divisor first, since they need to be int:
int divisor = 1000;
CGSize kernelSize = CGSizeMake(3,3);
int16_t *kernel = (int16_t*)malloc(sizeof(int16_t) * 3 * 3);
// Assign kernel values to the emboss kernel
// uint_8 kernel = {-2, -2, 0, -2, 6, 0, 0, 0, 0} // * 1000 ;
Now perform the convolution on the image!
//Use a background of transparent black as temp
Pixel_8888 temp = {0, 0, 0, 0};
vImageConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, kernel, kernelSize.width, kernelSize.height, divisor, temp, kvImageBackgroundColorFill);
Now construct a new UIImage out of outBuffer and you're done!
Remember to free the kernel and the outBuffer data.
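For that last step, one way to get an image back out of outBuffer is vImageCreateCGImageFromBuffer. A minimal sketch, assuming the buffer holds 8-bit non-premultiplied ARGB data as in the rest of this answer; passing NULL for the color space makes vImage default to sRGB:
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .colorSpace = NULL, // NULL is interpreted as sRGB
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaFirst,
    .version = 0,
    .decode = NULL,
    .renderingIntent = kCGRenderingIntentDefault,
};
vImage_Error convErr = kvImageNoError;
CGImageRef outImage = vImageCreateCGImageFromBuffer(&outBuffer, &format, NULL, NULL,
                                                    kvImageNoFlags, &convErr);
UIImage *result = [UIImage imageWithCGImage:outImage];
CGImageRelease(outImage);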
This is the way I am using it to process frames read from a video with AVAssetReader. This is a blur, but you can change the kernel to suit your needs. 'imageData' can of course be obtained by other means, e.g. from a UIImage.
CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
void *imageData = CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t currSize = width * height; // pixel count; the buffer is 4 bytes per pixel (BGRA)
int16_t kernel[9];
for(int i = 0; i < 9; i++) {
kernel[i] = 1;
}
kernel[4] = 2;
unsigned char *newData= (unsigned char*)malloc(4*currSize);
vImage_Buffer inBuff = { imageData, height, width, 4*width };
vImage_Buffer outBuff = { newData, height, width, 4*width };
vImage_Error err=vImageConvolve_ARGB8888 (&inBuff,&outBuff,NULL, 0,0,kernel,3,3,10,nil,kvImageEdgeExtend);
if (err != kvImageNoError) NSLog(@"convolve error %ld", err);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
//newData holds the processed image
