I have a camera that gives me 4 separate JPEG images for the 4 different Bayer channels (B, G1, G2, R).
I want to transform this into a colour image.
What I'm doing at the moment is decompressing the JPEGs, restoring the "original" mosaic manually and converting it to a colour image using cvtColor. But this is too slow. How could I do it better?
cv::Mat imgMat[4]; // four 616 x 808 (height, width) CV_8U images, one per Bayer channel
for (int k = 0; k < 4; k++) {
    ........
    imgMat[k] = cv::imdecode(buffer, CV_LOAD_IMAGE_GRAYSCALE);
}
//Reconstruct the original image from the four channels! RGGB
cv::Mat Reconstructed = cv::Mat::zeros(1232, 1616, CV_8U);
int x, y;
for (x = 0; x < 1616; x++) {
    for (y = 0; y < 1232; y++) {
        if (y % 2 == 0) {
            if (x % 2 == 0) {
                //R
                Reconstructed.at<uint8_t>(y,x) = imgMat[0].at<uint8_t>(y/2, x/2);
            }
            else {
                //G1
                Reconstructed.at<uint8_t>(y,x) = imgMat[1].at<uint8_t>(y/2, floor(x/2));
            }
        }
        else {
            if (x % 2 == 0) {
                //G2
                Reconstructed.at<uint8_t>(y,x) = imgMat[2].at<uint8_t>(floor(y/2), x/2);
            }
            else {
                //B
                Reconstructed.at<uint8_t>(y,x) = imgMat[3].at<uint8_t>(floor(y/2), floor(x/2));
            }
        }
    }
}
//Debayer
cv::Mat ReconstructedColor;
cv::cvtColor(Reconstructed, ReconstructedColor, CV_BayerBG2BGR);
It seems clear that what takes the most time is decoding the JPEG images. Does anybody have some advice or trick I could use to speed up this code?
Firstly, you should profile to see where the time is actually going. Maybe it is all in imdecode(), as "seems clear", but you might be wrong.
If not, .at<>() is a bit slow (and you are calling it nearly 4 million times). You can get some speedup by scanning the image more efficiently with row pointers. Also, you do not need floor() - dropping it avoids converting an int to double and back again (2 million times). Something like this will be faster:
int x, y;
for (y = 0; y < 1232; y++) {
    uint8_t* row = Reconstructed.ptr<uint8_t>(y);
    if (y % 2 == 0) {
        uint8_t* i0 = imgMat[0].ptr<uint8_t>(y / 2);
        uint8_t* i1 = imgMat[1].ptr<uint8_t>(y / 2);
        for (x = 0; x < 1616; ) {
            //R
            row[x] = i0[x / 2];
            x++;
            //G1
            row[x] = i1[x / 2];
            x++;
        }
    }
    else {
        uint8_t* i2 = imgMat[2].ptr<uint8_t>(y / 2);
        uint8_t* i3 = imgMat[3].ptr<uint8_t>(y / 2);
        for (x = 0; x < 1616; ) {
            //G2
            row[x] = i2[x / 2];
            x++;
            //B
            row[x] = i3[x / 2];
            x++;
        }
    }
}
Related
I wrote some OpenCV code. I am trying to embed a 64x64 pixel watermark image into a 512x512 image.
My code has 5 parts:
reading the two pictures (the watermark and the original image that I want to embed the watermark into)
resizing the two read images to the specified sizes (64x64 for the watermark image and 512x512 for the original image)
dividing the original resized image into 8x8 blocks and transforming them with DCT
embedding each pixel of the watermark into a block of the original image
applying the inverse DCT on each block
My problem is that all three imshow calls show the same result.
Thank you for your help :)
Here is my code:
int _tmain(int argc, _TCHAR* argv[])
{
    int index = 0;
    int iindex = 0;
    vector<Mat> blocks(4096);
    /////////////Part 1: reading images
    Mat originalImage;
    originalImage = imread("C:\\MGC.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat watermarkImage;
    watermarkImage = imread("C:\\ivp_lg.bmp", CV_LOAD_IMAGE_GRAYSCALE);
    /// show original image
    namedWindow("Original");
    int x = 0; int y = 0;
    moveWindow("Original", x, y);
    imshow("Original", originalImage);
    x += 100; y += 100;
    //////Part 2: leave the originals alone, work on copies. Resize the read images
    Mat dctImage = originalImage.clone();
    Mat wmrk = watermarkImage.clone();
    Mat tmp1(512, 512, CV_8UC1);
    Mat tmp2(64, 64, CV_8UC1);
    resize(dctImage, dctImage, tmp1.size());
    resize(wmrk, wmrk, tmp2.size());
    /////Part 3: break dctImage into 8x8 blocks and apply DCT to each block
    for (int i = 0; i < 512; i += 8)
    {
        for (int j = 0; j < 512; j += 8)
        {
            Mat block = dctImage(Rect(i, j, 8, 8));
            block.convertTo(block, CV_32FC1);
            dct(block, blocks[index]);
            blocks[index].convertTo(blocks[index], CV_8UC1);
            index++;
        }
    }
    /// show transformed image
    namedWindow("TransformedImage");
    moveWindow("TransformedImage", x, y);
    imshow("TransformedImage", dctImage);
    x += 100; y += 100;
    //////Part 4: embedding the watermark. If the corresponding watermark pixel is 255, element (5,5) of the block is increased by 200, otherwise nothing is done
    for (int idx = 0; idx < 4096; idx++)
    {
        int i = idx / 64;
        int j = idx % 64;
        float elem = (float) wmrk.at<uchar>(i, j);
        if (elem >= 128)
        {
            float tmp = (float) blocks[idx].at<uchar>(5, 5);
            float temp = tmp + 200;
            uchar ch = (uchar) temp;
            blocks[idx].at<uchar>(5, 5) = ch;
        }
    }
    //////Part 5: applying the inverse DCT to each block
    for (int i = 0; i < 512; i += 8)
    {
        for (int j = 0; j < 512; j += 8)
        {
            Mat block = dctImage(Rect(i, j, 8, 8));
            block.convertTo(block, CV_32FC1);
            idct(block, blocks[iindex]);
            blocks[iindex].convertTo(blocks[iindex], CV_8UC1);
            iindex++;
        }
    }
    /// show watermarked image
    namedWindow("WatermarkedImage");
    moveWindow("WatermarkedImage", x, y);
    imshow("WatermarkedImage", dctImage);
    cvWaitKey(80000);
    destroyAllWindows();
    return 0;
}
@N_Kh As far as I can see from a quick look at your code, you are calling imshow on the matrix dctImage, while the DCT, the watermark embedding and the inverse DCT all operate on the separate Mat block and the vector blocks. dctImage itself is never modified, so all three windows show the same image.
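If it helps, here is a minimal sketch (not the original poster's code) of one way to make the later windows reflect the processing: do the DCT, the embedding and the inverse DCT per block and copy each result back into a copy of the image. The name watermarked and the merged loop are illustrative only:
Mat watermarked = dctImage.clone();
int idx = 0;
for (int i = 0; i < 512; i += 8)
{
    for (int j = 0; j < 512; j += 8)
    {
        Mat roi = watermarked(Rect(i, j, 8, 8));
        Mat block;
        roi.convertTo(block, CV_32FC1);
        dct(block, block);                      // forward DCT of the 8x8 block
        if (wmrk.at<uchar>(idx / 64, idx % 64) >= 128)
            block.at<float>(5, 5) += 200.0f;    // embed one watermark bit
        idct(block, block);                     // back to the spatial domain
        block.convertTo(roi, CV_8UC1);          // write the result into the ROI
        idx++;
    }
}
imshow("WatermarkedImage", watermarked);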
I have searched the internet and Stack Overflow thoroughly, but I haven't found the answer to my question:
How can I get and set the RGB value of a certain pixel (given by its x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use - as far as I know it comes from the C API.
Yes, I'm aware that there was already this Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend - thanks, Benny! It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the colour vector of type cv::Vec3b (for images loaded with imread it is ordered as BGR, not RGB). To set the value:
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the matrix data is continuous, i.e. the stride equals the image width times the number of channels; otherwise you need to use the row step to compute the offset.
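If the data may not be continuous (for example when frame is a ROI of a larger image), a stride-aware variant is safer. This is a small sketch assuming an 8-bit, 3-channel BGR frame:
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];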
A piece of code is easier for people who have this problem, so I am sharing mine; you can use it directly. Please note that OpenCV stores pixels in BGR order.
cv::Mat vImage_(vHeight_, vWidth_, CV_32FC3); // allocate the output first (float BGR in [0,1])
if (src_)
{
    cv::Vec3f vec_;
    for (int i = 0; i < vHeight_; i++)
    {
        for (int j = 0; j < vWidth_; j++)
        {
            vec_ = cv::Vec3f((*src_)[0] / 255.0, (*src_)[1] / 255.0, (*src_)[2] / 255.0); // please note that OpenCV stores pixels as BGR
            vImage_.at<cv::Vec3f>(vHeight_ - 1 - i, j) = vec_; // flip vertically while copying
            ++src_;
        }
    }
}
if(! vImage_.data ) // Check for invalid input
printf("failed to read image by OpenCV.");
else
{
cv::namedWindow( windowName_, CV_WINDOW_AUTOSIZE);
cv::imshow( windowName_, vImage_); // Show the image.
}
Current versions of OpenCV allow the cv::Mat::at function to handle 3 dimensions, so for a Mat object m, m.at<uchar>(0,0,0) should work.
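For example, a small sketch (the sizes here are only an assumption for illustration):
int sizes[] = { 10, 10, 3 };
cv::Mat m(3, sizes, CV_8U, cv::Scalar(0)); // a 3-dimensional 10 x 10 x 3 matrix
m.at<uchar>(0, 0, 0) = 255;                // set
uchar v = m.at<uchar>(0, 0, 0);            // get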
uchar* value = img2.data; // pointer to the first byte of the pixel data (interleaved BGR)
int r = 2;
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    // cycle through the three channel bytes of each pixel,
    // writing 255 into one channel and 0 into the other two
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
const double pi = boost::math::constants::pi<double>();

// Sets the green channel to 255 for every pixel that lies inside the given rotated ellipse.
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
{
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width / 2;
    float minor_axis = ellipse.size.height / 2;
    for (int x = 0; x < image.cols; x++)
    {
        for (int y = 0; y < image.rows; y++)
        {
            // rotate the pixel into the ellipse's coordinate frame
            auto u =  cos(angle * pi / 180) * (x - ellipse_center.x) + sin(angle * pi / 180) * (y - ellipse_center.y);
            auto v = -sin(angle * pi / 180) * (x - ellipse_center.x) + cos(angle * pi / 180) * (y - ellipse_center.y);
            distance = (u / major_axis) * (u / major_axis) + (v / minor_axis) * (v / minor_axis);
            if (distance <= 1)
            {
                image.at<cv::Vec3b>(y, x)[1] = 255; // green channel
            }
        }
    }
    return image;
}
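A hypothetical usage sketch (the file name and the ellipse parameters are made up for illustration):
cv::Mat img = cv::imread("input.png"); // assumed 8-bit BGR image
cv::RotatedRect e(cv::Point2f(200, 150), cv::Size2f(120, 80), 30.0f);
cv::Mat marked = distance2ellipse(img, e);
cv::imshow("ellipse", marked);
cv::waitKey(0);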
I have a huge image (about 63000 x 63000 pixels = 3969 megapixels).
What I have done so far is split it into "tiles" of 1024 x 1024 pixels and do my calculations on these tiles, resulting in a 62 x 62 tile grid.
(This works out very well and has the advantage of making the image viewable with zoom in and zoom out; only the visible tiles are downsized, for example.)
But what I need now are the contours of the huge image!
I use the OpenCV function findContours to detect contours on each one of the tiles.
I have added some overlap to the tiles so I get overlapping contours (1 pixel overlap).
I used the offset parameter of findContours to shift the contours to the right position in the "virtual total image" (see the sketch after this list).
Here are some screenshots I made from a demo application.
What I want is this:
Now my questions:
Is it possible to stitch the contours? My worst case is a contour which covers the whole image... Is there some library that can do this?
Is there a library which works on a compressed version of the total image (RLE, for example)?
Is there a way to make OpenCV findContours work on 1-bit binary images?
Here's the code that calls findContours:
// Surf2DTiledData ...a gobject based class used for 2d tile management and viewing..
Surf2DTiledData* td = (Surf2DTiledData*)in_td;
int nr_hor_tiles = surf2_d_tiled_data_get_nr_hor_tiles(td);
int nr_ver_tiles = surf2_d_tiled_data_get_nr_ver_tiles(td);
int tile_size_x = surf2_d_tiled_data_get_tile_width(td);
int tile_size_y = surf2_d_tiled_data_get_tile_height(td);
contouring_data_obj = surf2_d_tiled_data_get_ContouringData(td);
p_contours = contouring_data_obj->p_contours;
p_border_contours = contouring_data_obj->p_border_contours;
g_return_if_fail(p_border_contours != NULL);
g_return_if_fail(p_contours != NULL);
int x, y;
for (y = 0; y < nr_ver_tiles; y++) {
    for (x = 0; x < nr_hor_tiles; x++) {
        int idx = x + y * nr_hor_tiles;
        CvMemStorage *mem = contouring_data_obj->contour_storage[idx];
        CvMat _src;
        CvSeq *contours = NULL;
        uchar* dataBuffer = (uchar*)p_data[x][y];
        // the idea is to have some extra space available for the overlap
        // detection of contours!
        // the extra space is needed for the algorithm to check for
        // overlaps of contours later on!
#define VIRT_BORDER_EXTEND 2
        int virtual_x = x * tile_size_x - VIRT_BORDER_EXTEND;
        int virtual_y = y * tile_size_y - VIRT_BORDER_EXTEND;
        int virtual_width = tile_size_x + VIRT_BORDER_EXTEND * 2;
        int virtual_height = tile_size_y + VIRT_BORDER_EXTEND * 2;
        int x_off = -VIRT_BORDER_EXTEND;
        int y_off = -VIRT_BORDER_EXTEND;
        if (virtual_x < 0) {
            virtual_width += virtual_x;
            virtual_x = 0;
            x_off = 0;
        }
        if (virtual_y < 0) {
            virtual_height += virtual_y;
            virtual_y = 0;
            y_off = 0;
        }
        if ((virtual_x + virtual_width) > (nr_hor_tiles * tile_size_x)) {
            virtual_width = nr_hor_tiles * tile_size_x - virtual_x;
        }
        if ((virtual_y + virtual_height) > (nr_ver_tiles * tile_size_y)) {
            virtual_height = nr_ver_tiles * tile_size_y - virtual_y;
        }
        CvMat* _roi_mat = get_roi_mat(td,
                                      virtual_x, virtual_y,
                                      virtual_width, virtual_height);
        // Use either this:
        //mem = cvCreateMemStorage(0);
        if (_roi_mat) {
            // CV_LINK_RUNS => different algorithm!!!!
            int tile_off_x = tile_size_x * x;
            int tile_off_y = tile_size_y * y;
            CvPoint contour_shift = cvPoint(x_off + tile_off_x, y_off + tile_off_y);
            int n = cvFindContours(_roi_mat, mem, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, contour_shift);
            cvReleaseMat(&_roi_mat);
            p_contours[x][y] = contours;
        }
        //cvReleaseMemStorage(&mem);
    }
}
Later I used OpenGL to make textures out of the tiles, and for every tile there is a quad.
The OpenCV contours are not drawn yet, as this could be too slow for now, but I draw their bounding boxes, which are drawn in OpenGL too.
I tried to perform EM-based background/foreground segmentation using the code below, which I also found on Stack Overflow. But there seems to be an error somewhere, as I never see the second printf statement execute; basically it never reaches the classification/clustering part of the code. The code is given below. Could someone help me with this?
#include <opencv2/opencv.hpp>
#include <opencv2/legacy/legacy.hpp>
char str1[60];
int main()
{
    cv::Mat source = cv::imread("C:\\Image Input\\part1.bmp");
    if (!source.data)
        printf(" No data \n");
    //output images
    cv::Mat meanImg(source.rows, source.cols, CV_32FC3);
    cv::Mat fgImg(source.rows, source.cols, CV_8UC3);
    cv::Mat bgImg(source.rows, source.cols, CV_8UC3);
    //convert the input image to float
    cv::Mat floatSource;
    source.convertTo(floatSource, CV_32F);
    //now convert the float image to column vector
    cv::Mat samples(source.rows * source.cols, 3, CV_32FC1);
    int idx = 0;
    for (int y = 0; y < source.rows; y++) {
        cv::Vec3f* row = floatSource.ptr<cv::Vec3f>(y);
        for (int x = 0; x < source.cols; x++) {
            samples.at<cv::Vec3f>(idx++, 0) = row[x];
        }
    }
    printf(" After Loop \n");
    //we need just 2 clusters
    cv::EMParams params(2);
    cv::ExpectationMaximization em(samples, cv::Mat(), params);
    //the two dominating colors
    cv::Mat means = em.getMeans();
    //the weights of the two dominant colors
    cv::Mat weights = em.getWeights();
    //we define the foreground as the dominant color with the largest weight
    const int fgId = weights.at<float>(0) > weights.at<float>(1) ? 0 : 1;
    printf(" After Training \n");
    //now classify each of the source pixels
    idx = 0;
    for (int y = 0; y < source.rows; y++)
    {
        printf(" Now Classify\n");
        for (int x = 0; x < source.cols; x++)
        {
            //classify
            const int result = cvRound(em.predict(samples.row(idx++), NULL));
            //get the according mean (dominant color)
            const double* ps = means.ptr<double>(result, 0);
            //set the according mean value to the mean image
            float* pd = meanImg.ptr<float>(y, x);
            //float images need to be in [0..1] range
            pd[0] = ps[0] / 255.0;
            pd[1] = ps[1] / 255.0;
            pd[2] = ps[2] / 255.0;
            //set either foreground or background
            if (result == fgId) {
                fgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            } else {
                bgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            }
        }
    }
    printf(" Show Images \n");
    cv::imshow("Means", meanImg);
    cv::imshow("Foreground", fgImg);
    cv::imshow("Background", bgImg);
    cv::waitKey(0);
    return 0;
}
The code works fine. I think you are using images that are too large, so the learning takes a very long time. Try processing smaller images first.
Just one correction: initialize the output images with zeros:
//output images
cv::Mat meanImg=Mat::zeros(source.rows, source.cols, CV_32FC3);
cv::Mat fgImg=Mat::zeros(source.rows, source.cols, CV_8UC3);
cv::Mat bgImg=Mat::zeros(source.rows, source.cols, CV_8UC3);
I want to equalize two half-face colour images of the same subject and then merge them. Each of them has different hue, saturation and brightness values... Using OpenCV, how can I normalize/equalize each half image?
I tried performing cvEqualizeHist(v, v); on the V channel of the converted HSV image, but the two halves still differ significantly, and after the merge there is still a visible line between the colours of the two halves... thanks.
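For reference, here is a minimal sketch of the pipeline described above using the C++ API (half is an assumed name for one 8-bit BGR half image):
cv::Mat hsv;
cv::cvtColor(half, hsv, CV_BGR2HSV);
std::vector<cv::Mat> ch;
cv::split(hsv, ch);
cv::equalizeHist(ch[2], ch[2]); // equalize only the V channel
cv::merge(ch, hsv);
cv::cvtColor(hsv, half, CV_HSV2BGR);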
Have you tried reading this link? http://answers.opencv.org/question/75510/how-to-make-auto-adjustmentsbrightness-and-contrast-for-image-android-opencv-image-correction/
void Utils::BrightnessAndContrastAuto(const cv::Mat &src, cv::Mat &dst, float clipHistPercent)
{
    CV_Assert(clipHistPercent >= 0);
    CV_Assert((src.type() == CV_8UC1) || (src.type() == CV_8UC3) || (src.type() == CV_8UC4));

    int histSize = 256;
    float alpha, beta;
    double minGray = 0, maxGray = 0;

    //to calculate grayscale histogram
    cv::Mat gray;
    if (src.type() == CV_8UC1) gray = src;
    else if (src.type() == CV_8UC3) cvtColor(src, gray, CV_BGR2GRAY);
    else if (src.type() == CV_8UC4) cvtColor(src, gray, CV_BGRA2GRAY);

    if (clipHistPercent == 0)
    {
        // keep full available range
        cv::minMaxLoc(gray, &minGray, &maxGray);
    }
    else
    {
        cv::Mat hist; //the grayscale histogram
        float range[] = { 0, 256 };
        const float* histRange = { range };
        bool uniform = true;
        bool accumulate = false;
        calcHist(&gray, 1, 0, cv::Mat(), hist, 1, &histSize, &histRange, uniform, accumulate);

        // calculate cumulative distribution from the histogram
        std::vector<float> accumulator(histSize);
        accumulator[0] = hist.at<float>(0);
        for (int i = 1; i < histSize; i++)
        {
            accumulator[i] = accumulator[i - 1] + hist.at<float>(i);
        }

        // locate points that cut at the required value
        float max = accumulator.back();
        clipHistPercent *= (max / 100.0); //make percent as absolute
        clipHistPercent /= 2.0; // left and right wings
        // locate left cut
        minGray = 0;
        while (accumulator[minGray] < clipHistPercent)
            minGray++;
        // locate right cut
        maxGray = histSize - 1;
        while (accumulator[maxGray] >= (max - clipHistPercent))
            maxGray--;
    }

    // current range
    float inputRange = maxGray - minGray;
    alpha = (histSize - 1) / inputRange; // alpha expands current range to histsize range
    beta = -minGray * alpha;             // beta shifts current range so that minGray will go to 0

    // Apply brightness and contrast normalization
    // convertTo operates with saturate_cast
    src.convertTo(dst, -1, alpha, beta);

    // restore alpha channel from source
    if (dst.type() == CV_8UC4)
    {
        int from_to[] = { 3, 3 };
        cv::mixChannels(&src, 1, &dst, 1, from_to, 1);
    }
    return;
}
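A hypothetical usage sketch for the two half images (the file names are made up, and it assumes BrightnessAndContrastAuto is reachable as a static member of Utils):
cv::Mat left = cv::imread("left_half.png");
cv::Mat right = cv::imread("right_half.png");
cv::Mat leftAdj, rightAdj;
Utils::BrightnessAndContrastAuto(left, leftAdj, 1.0f);  // clip 1% of the histogram tails
Utils::BrightnessAndContrastAuto(right, rightAdj, 1.0f);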
I'm not sure as I'm now facing the same problem,
but maybe try to equalize the H & S values instead of the V?
Also try manually adjusting it using Photoshop to see what works best and then try to replicate it using code.