CImg in embedded hardware

I load a JPG into embedded system memory on an STM32 board with assembly code via .incbin, and copy the data to an alternate buffer via std::copy. The image is decompressed with picoimage and displayed on an attached LCD screen, and all is well. I wish to apply image effects beforehand, and I chose CImg because it seems small and portable: compared to other libraries, I simply have to place the header in the working directory. I have the grayscale code below; however, the screen appears black, the same issue I had when I attempted to alter the image data by hand, and I can't seem to find a proper fix. Are there any suggestions? I suspect CImg is not aware the buffer holds a JPG file and instead loads and operates on the whole compressed data. Is there a workaround?
CImg<uint8_t> image(_buffer, _panel->getWidth(), _panel->getHeight(), 1, 1, true);
int width  = image.width();
int height = image.height();
//int depth = image.depth();

//New grayscale images.
//CImg<unsigned char> gray1(width,height,depth,1);
//CImg<unsigned char> gray2(width,height,depth,1);

unsigned char r, g, b;
unsigned char gr1 = 0;
unsigned char gr2 = 0;

/* Convert RGB image to grayscale image */
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        //Return a pointer to a located pixel value.
        r = image(i, j, 0, 0); // First channel: RED
        g = image(i, j, 0, 1); // Second channel: GREEN
        b = image(i, j, 0, 2); // Third channel: BLUE

        //PAL and NTSC:
        //Y = 0.299*R + 0.587*G + 0.114*B
        gr1 = round(0.299 * (double)r + 0.587 * (double)g + 0.114 * (double)b);

        //HDTV:
        //Y = 0.2126*R + 0.7152*G + 0.0722*B
        gr2 = round(0.2126 * (double)r + 0.7152 * (double)g + 0.0722 * (double)b);

        image(i, j, 0, 0) = gr1;
        //image(i,j,0,0) = gr2;
    }
}

CImg does not decompress JPEG images itself; it uses your system JPEG library. If you are using a Debian derivative, for example, you'll need apt-get install libjpeg-turbo8-dev before compiling. Have a look through CImg.h and make sure it's picking up the headers and linking correctly.
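Note also that your constructor declares a single channel (the fifth argument is 1), yet the loop reads image(i,j,0,1) and image(i,j,0,2), so even with decoded data those accesses would be out of range. As a minimal sketch of enabling libjpeg support (the macro must come before the include, the program must link with -ljpeg, and "photo.jpg" is just a placeholder filename):

#define cimg_use_jpeg   // tell CImg to decode JPEGs through libjpeg
#include "CImg.h"
using namespace cimg_library;

int main() {
    // Loads and decompresses the file via libjpeg, so per-channel
    // access like img(x, y, 0, 1) sees real pixel data.
    CImg<unsigned char> img("photo.jpg");
    return 0;
}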

Related

iOS lossless image editing

I'm working on a photo app for iPhone/iPod.
I'd like to get the raw data from a large image in an iPhone app, perform some pixel manipulation on it, and write it back to the disk/gallery.
So far I've been converting the UIImage obtained from image picker to unsigned char pointers using the following technique:
CGImageRef imageBuff = [imageBuffer CGImage]; // imageBuffer is a UIImage *
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imageBuff));
unsigned char *input_image = (unsigned char *)CFDataGetBytePtr(pixelData);

// height & width represent the dimensions of the input image
unsigned char *resultant = (unsigned char *)malloc(height * 4 * width);
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < 4 * width; j += 4)
    {
        resultant[i * 4 * width + j + 0] = input_image[i * 4 * width + j + 0];
        resultant[i * 4 * width + j + 1] = input_image[i * 4 * width + j + 1];
        resultant[i * 4 * width + j + 2] = input_image[i * 4 * width + j + 2];
        resultant[i * 4 * width + j + 3] = 255;
    }
}
CFRelease(pixelData);
I'm doing all operations on resultant and writing it back to disk in the original resolution using:
NSData* data = UIImagePNGRepresentation(image);
[data writeToFile:path atomically:YES];
I'd like to know:
Is the transformation actually lossless?
If there's a 20-22 MP image at hand, is it wise to do this operation in a background thread? (Chances of crashing, etc.; I'd like to know the best practice for doing this.)
Is there a better method for implementing this? (Getting the pixel data is a necessity here.)
Yes, the method is lossless, but I am not sure about 20-22 MP images. I don't think the iPhone is at all a suitable choice if you want to edit images that big!
I have been successful in capturing and editing images up to 22 MP using this technique.
I tested this on an iPhone 4S and it worked fine. However, some of the effects I'm using require Core Image filters, and it seems CIFilters do not support images over 16 MP: the filters return a blank image if used on anything larger.
I'd still like people to comment on lossless large-image editing strategies in iOS.

How to convert an FFMPEG AVFrame in YUVJ420P to an AVFoundation CVPixelBufferRef?

I have an FFMPEG AVFrame in YUVJ420P and I want to convert it to a CVPixelBufferRef with CVPixelBufferCreateWithBytes. The reason I want to do this is to use AVFoundation to show/encode the frames.
I selected kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and tried converting it, since the AVFrame has the data in three planes:
Y480 Cb240 Cr240. According to what I've researched, this matches the selected kCVPixelFormatType. Being bi-planar, I need to convert it into a buffer that contains Y480 and CbCr480 interleaved.
I tried to create a buffer with 2 planes:
frame->data[0] on the first plane,
frame->data[1] and frame->data[2] interleaved on the second plane.
However, I'm getting return error -6661 (invalid argument) from CVPixelBufferCreateWithBytes:
"Invalid function parameter. For example, out of range or the wrong type."
I don't have any expertise in image processing, so any pointers to documentation that can get me started on the right approach to this problem are appreciated. My C skills aren't top of the line either, so maybe I'm making a basic mistake here.
uint8_t **buffer = malloc(2 * sizeof(int *));
buffer[0] = frame->data[0];
buffer[1] = malloc(frame->linesize[0] * sizeof(int));
for (int i = 0; i < frame->linesize[0]; i++) {
    if (i % 2) {
        buffer[1][i] = frame->data[1][i / 2];
    } else {
        buffer[1][i] = frame->data[2][i / 2];
    }
}
int ret = CVPixelBufferCreateWithBytes(NULL, frame->width, frame->height,
        kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, buffer,
        frame->linesize[0], NULL, 0, NULL, cvPixelBufferSample);
The frame is the AVFrame with the raw data from FFMPEG decoding.
"My C skills aren't top of the line either, so maybe I'm making a basic mistake here."
You're making several:
You should be using CVPixelBufferCreateWithPlanarBytes(). I do not know if CVPixelBufferCreateWithBytes() can be used to create a planar video frame; if so, it will require a pointer to a "plane descriptor block" (I can't seem to find the struct in the docs).
frame->linesize[0] is the bytes per row, not the size of the whole image. The docs are unclear, but the usage is fairly unambiguous.
frame->linesize[0] refers to the Y plane; you care about the UV planes.
Where is sizeof(int) from?
You're passing in cvPixelBufferSample; you might mean &cvPixelBufferSample.
You're not passing in a release callback. The documentation does not say that you can pass NULL.
Try something like this:
size_t srcPlaneSize = frame->linesize[1] * frame->height / 2; // 4:2:0 chroma planes have height/2 rows
size_t dstPlaneSize = srcPlaneSize * 2;
uint8_t *dstPlane = malloc(dstPlaneSize);
void *planeBaseAddress[2] = { frame->data[0], dstPlane };

// This loop is very naive and assumes that the line sizes are the same.
// It also copies padding bytes.
assert(frame->linesize[1] == frame->linesize[2]);
for (size_t i = 0; i < srcPlaneSize; i++) {
    // NV12-style biplanar formats store chroma as Cb (data[1]) then Cr (data[2]).
    dstPlane[2 * i    ] = frame->data[1][i];
    dstPlane[2 * i + 1] = frame->data[2][i];
}

// This assumes the width and height are even (it's 4:2:0 after all).
assert(!(frame->width % 2) && !(frame->height % 2));
size_t planeWidth[2]  = { frame->width, frame->width / 2 };
size_t planeHeight[2] = { frame->height, frame->height / 2 };

// I'm not sure where you'd get this.
size_t planeBytesPerRow[2] = { frame->linesize[0], frame->linesize[1] * 2 };

int ret = CVPixelBufferCreateWithPlanarBytes(
    NULL,
    frame->width,
    frame->height,
    kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
    NULL,
    0,
    2,
    planeBaseAddress,
    planeWidth,
    planeHeight,
    planeBytesPerRow,
    YOUR_RELEASE_CALLBACK,
    YOUR_RELEASE_CALLBACK_CONTEXT,
    NULL,
    &cvPixelBufferSample);
Memory management is left as an exercise to the reader, but for test code you might get away with passing in NULL instead of a release callback.
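If you do want a real callback, here is a minimal sketch (assuming dstPlane from above is passed as the release context; releasePlanes is a placeholder name, but the signature follows CVPixelBufferReleasePlanarBytesCallback):

// Frees the interleaved chroma plane allocated above. The Y plane still
// belongs to the AVFrame, so only the malloc'd buffer is released here.
static void releasePlanes(void *releaseRefCon, const void *dataPtr,
                          size_t dataSize, size_t numberOfPlanes,
                          const void *planeAddresses[])
{
    free(releaseRefCon); // this is dstPlane
}

Then pass releasePlanes as the callback and dstPlane as its context in the call above.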

Update DirectX texture

How can I solve the following task: an app needs to use dozens of DX9 textures (rendering them with Direct3D) and update some of them (in whole or in part).
That is, sometimes (once per frame/second/minute) I need to write bytes (a void *) in different formats (ARGB, BGRA, RGB, 888, 565) to some sub-rect of an existing texture.
In OpenGL the solution is very simple: glTexImage2D. But here unfamiliar platform features have completely confused me.
I'm interested in a solution for both DX9 and DX11.
To update a texture, make sure the texture is created in the D3DPOOL_MANAGED memory pool:
D3DXCreateTexture(device, size.x, size.y, numMipMaps, usage, textureFormat,
                  D3DPOOL_MANAGED, &texture);
Then call LockRect to update the data:
RECT rect = {x, y, z, w};        // the region you want to lock: left, top, right, bottom
D3DLOCKED_RECT lockedRect = {0}; // "out" parameter from the LockRect call below
texture->LockRect(0, &lockedRect, &rect, 0);

// copy the memory into lockedRect.pBits,
// making sure to advance each row by "Pitch"
unsigned char *bits = (unsigned char *)lockedRect.pBits;
for (int row = 0; row < numRows; row++)
{
    // copy one row of data into "bits", e.g. memcpy( bits, srcData, size )
    ...
    // move to the next row
    bits += lockedRect.Pitch;
}

// unlock when done
texture->UnlockRect(0);
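The question also asks about DX11, which is not covered above. A hedged sketch for a texture created with D3D11_USAGE_DEFAULT would use UpdateSubresource (the variable names here are placeholders):

// Write srcData into a sub-rect of a default-usage D3D11 texture.
// The D3D11_BOX is in texels; right, bottom and back are exclusive.
D3D11_BOX box = { x, y, 0, x + w, y + h, 1 };
context->UpdateSubresource(texture, 0, &box, srcData, srcRowPitch, 0);

For textures created with D3D11_USAGE_DYNAMIC, Map()/Unmap() with D3D11_MAP_WRITE_DISCARD is the usual alternative.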

Converting Basler image to OpenCV

I'm trying to convert frames captured from a Basler camera to OpenCV's Mat format. There isn't a lot of information in the Basler API documentation, but these are the two lines in the Basler example that should be useful in determining what the format of the output is:
// Get the pointer to the image buffer
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();
cout << "Gray value of first pixel: " << (uint32_t) pImageBuffer[0] << endl << endl;
I know what the image format is (currently set to mono 8-bit), and have tried doing:
img = cv::Mat(964, 1294, CV_8UC1, &pImageBuffer);
img = cv::Mat(964, 1294, CV_8UC1, Result.Buffer());
Neither of them works. Any suggestions/advice would be much appreciated, thanks!
EDIT: I can access the pixels in the Basler image with:
for (int i = 0; i < 1294 * 964; i++)
    (uint8_t) pImageBuffer[i];
if that helps with converting it to OpenCV's Mat format.
You are creating the cv images to use the camera's memory, rather than having the images own their own memory. The problem may be that the camera is locking that pointer, or perhaps it expects to reallocate and move it on each new image.
Try creating the images without the last parameter, and then copy the pixel data from the camera to the image using memcpy().
// Danger! Result.Buffer() may be changed by the Basler driver without your knowing
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();

// This is using memory that you have no control over - inside the Result object
img = cv::Mat(964, 1294, CV_8UC1, (void *)pImageBuffer);

// Instead do this
img = cv::Mat(964, 1294, CV_8UC1); // manages its own memory

// copies from Result.Buffer into img
memcpy(img.ptr(), Result.Buffer(), 1294 * 964);

// edit: if your cv image stores its rows aligned (e.g. on a 4-byte boundary)
// and the source data isn't padded the same way, copy row by row instead:
for (int irow = 0; irow < 964; irow++) {
    memcpy(img.ptr(irow), (const uint8_t *)Result.Buffer() + irow * 1294, 1294);
}
C++ code to get a Mat frame from a Pylon cam:
Pylon::DeviceInfoList_t devices;
// ... get pylon devices if you have more than a camera connected ...
pylonCam = new CInstantCamera(tlFactory->CreateDevice(devices[selectedCamId]));

Pylon::CGrabResultPtr ptrGrabResult;
Pylon::CImageFormatConverter formatConverter;
Pylon::CPylonImage pylonImage; // output target for the converter
formatConverter.OutputPixelFormat = Pylon::PixelType_BGR8packed;

pylonCam->MaxNumBuffer = 30;
pylonCam->StartGrabbing(GrabStrategy_LatestImageOnly);
std::cout << " trying to get width and height from pylon device " << std::endl;
pylonCam->RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
formatConverter.Convert(pylonImage, ptrGrabResult);
Mat temp = Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3,
               (uint8_t *)pylonImage.GetBuffer());
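One hedged caveat on that last line, echoing the first answer's point about ownership: temp wraps memory owned by pylonImage, which may be reused on the next Convert() call, so clone the Mat if it must outlive that buffer.

// Give the frame its own storage before pylonImage is reused.
Mat frame = temp.clone();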

Flicker removal using OpenCV?

I am a newbie to OpenCV. I have installed the OpenCV library on an Ubuntu system, compiled it, and am now looking into some image/video-processing apps in OpenCV to understand more.
I am interested to know if the OpenCV library has any algorithm/class for removing flicker from captured videos. If yes, what documentation or code should I look into?
If OpenCV does not have it, are there any standard implementations in some other video-processing library/SDK/Matlab, etc., which provide algorithms for flicker removal from video sequences?
Any pointers would be useful.
Thank you.
-AD.
I don't know of any standard way to deflicker a video.
But VirtualDub is video-processing software which has a filter for deflickering video. You can find its filter source and documents (probably with an algorithm description) here.
I wrote my own deflicker C++ function; here it is. You can cut and paste this code as is; no headers are needed other than the usual OpenCV ones.
Mat deflicker(Mat, int);
Mat prevdeflicker;

// deflicker - compares each pixel of the frame to a previously stored frame,
// and throttles small changes in pixels (flicker)
Mat deflicker(Mat Mat1, int strengthcutoff = 20) {
    // Check if we stored a previous frame; if not, there's nothing
    // we can do - clone and exit.
    if (prevdeflicker.rows) {
        int i, j;
        uchar *p;
        uchar *prevp;
        for (i = 0; i < Mat1.rows; ++i)
        {
            p = Mat1.ptr<uchar>(i);
            prevp = prevdeflicker.ptr<uchar>(i);
            for (j = 0; j < Mat1.cols; ++j) {
                Scalar previntensity = prevp[j];
                Scalar intensity = p[j];
                int strength = abs(intensity.val[0] - previntensity.val[0]);
                // The strength of the stimulus must be greater than a certain
                // point, else we do not want to allow the change.
                // A value of 25 works well for medium+ light; anything higher
                // creates too much blur around moving objects. In low light,
                // however, this makes things worse, since low light seems to
                // increase contrast in flicker - some flickers go from 0 to 255
                // and back. :( I need a way to track large group movements vs
                // small pixels, and only filter out the small pixel stuff.
                // Maybe blur first?
                if (strength < strengthcutoff) {
                    if (intensity.val[0] > previntensity.val[0]) {
                        // Use the previous frame's value, changed by +1 -
                        // slow enough not to be noticeable flicker.
                        p[j] = previntensity.val[0] + 1;
                    } else {
                        p[j] = previntensity.val[0] - 1;
                    }
                }
            }
        } // end for
    }
    prevdeflicker = Mat1.clone(); // clone the current frame as the old one
    return Mat1;
}
Call it as: Mat = deflicker(Mat). It needs a loop and a greyscale image, like so:
for (;;) {
    cap >> frame; // get a new frame from camera
    cvtColor(frame, src_grey, CV_RGB2GRAY); // convert to greyscale - simplifies everything
    src_grey = deflicker(src_grey); // this is the function call
    imshow("grey video", src_grey);
    if (waitKey(30) >= 0) break;
}
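For what it's worth, the same throttling idea can be sketched with OpenCV's vectorized array operations. This is a simplified variant, not a drop-in replacement: it holds flickering pixels at their previous value instead of nudging them by ±1:

// curr and prev are single-channel 8-bit frames of equal size.
cv::Mat diff, mask;
cv::absdiff(curr, prev, diff);  // per-pixel |curr - prev|
mask = diff < strengthcutoff;   // 255 where the change is flicker-sized
prev.copyTo(curr, mask);        // keep the previous value at those pixels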
