How to reduce yellow component in image? - opencv

I was trying to implement tone mapping in OpenCV using logarithmic mapping, but the result I got was a yellowish image. So I want to reduce the yellow component in the image and boost the other colors. Any suggestion or advice would be appreciated. Thank you.
The tone mapped image:

You can try converting the image to the HSV colorspace and offsetting the colors (just shift the Hue component).
Or rebalance the channels: yellow is a mix of red and green, so reduce the R and G components by the same ratio relative to B.
Here is my implementation:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char **argv)
{
    cv::namedWindow("result");

    Mat img = imread("yellowish.jpg");
    img.convertTo(img, CV_32FC3, 1.0 / 255.0);

    // Gray-world style balance: subtract the mean color,
    // then shift everything toward a near-neutral gray.
    Scalar m = cv::mean(img);
    img -= m;
    img += Scalar(0.3, 0.31, 0.3); // Change this to adjust the color balance.

    normalize(img, img, 0, 1, cv::NORM_MINMAX);
    imshow("result", img);
    cv::waitKey(0);
    cv::destroyAllWindows();
}
The result is shown below:

Convert your image to HSV as follows (note that imread loads images as BGR, so the BGR constant is the right one):
cvtColor(image, hsv_image, CV_BGR2HSV);
then split your hsv_image into its individual channels:
Mat channel[3];
split(hsv_image, channel);
Now channel[0] is an image holding the Hue component. Yellow usually has a hue in the range 22-38, so access each pixel of your hue image (i.e. channel[0]) and apply whatever if() condition your requirement calls for.
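For instance, here is a minimal sketch of that idea; the hue band 22-38 and the choice to halve the saturation are assumptions you would tune:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("yellowish.jpg"); // BGR, as imread loads it
    Mat hsv;
    cvtColor(img, hsv, CV_BGR2HSV);

    Mat channel[3];
    split(hsv, channel);

    // Mask of pixels whose hue falls in the yellow band.
    Mat yellowMask;
    inRange(channel[0], Scalar(22), Scalar(38), yellowMask);

    // Halve the saturation of the masked pixels to tone the yellow down.
    Mat desaturated = channel[1] * 0.5;
    desaturated.copyTo(channel[1], yellowMask);

    merge(channel, 3, hsv);
    cvtColor(hsv, img, CV_HSV2BGR);
    imshow("less yellow", img);
    waitKey(0);
    return 0;
}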

Related

OpenCV reverse perspective transformation?

I start with the following image:
Using OpenCV, I rotate it 45° about the Y axis to get the following:
If I tried a little harder I could get it not to be cropped in the foreground.
Now my question: does OpenCV have the tools to do the reverse transformation? Could I take the second image and produce the first? (I'm not concerned about blurred pixels.) Please suggest a method.
Yes.
You already made a homography matrix to produce this picture, right?
Just invert it (H.inv()) or pass the WARP_INVERSE_MAP flag.
No need for all that other stuff.
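For reference, a minimal sketch of both options; H below is just an identity stand-in for whatever homography you actually used:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat rotated = imread("rotated.jpg"); // placeholder filename

    // Stand-in for the 3x3 homography used in the forward warp.
    Mat H = Mat::eye(3, 3, CV_64F);

    Mat restored;
    // Option 1: invert the homography yourself.
    warpPerspective(rotated, restored, H.inv(), rotated.size());
    // Option 2: let warpPerspective invert it via the flag.
    warpPerspective(rotated, restored, H, rotated.size(),
                    INTER_LINEAR | WARP_INVERSE_MAP);

    imshow("restored", restored);
    waitKey(0);
    return 0;
}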
Yes, it's possible. After the 45° rotation, some regions above and below are missing (not visible); those parts cannot be recovered.
By using warpPerspective() and getPerspectiveTransform() together, you can easily get back to the first image. The only thing you need to do is find the corner points of the rotated image: left_up, right_up, left_down, right_down. Since you didn't specify a language, I used C++ to implement the functions. Here are the output and code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat begin = cv::imread("/ur/img/dir/input.jpg");
    cv::Mat output;

    // Corner points of the rotated image (read off in Paint).
    cv::Point2f Poly2[4] = {
        cv::Point2f(31, 9),    // left_up
        cv::Point2f(342, 51),  // right_up
        cv::Point2f(28, 571),  // left_down
        cv::Point2f(345, 525), // right_down
    };

    // Corner points of the picture I want to transform to.
    cv::Point2f Points[4] = {
        cv::Point2f(0, 0),
        cv::Point2f(432, 0),
        cv::Point2f(0, 576),
        cv::Point2f(432, 576),
    };

    cv::Mat Matrix = cv::getPerspectiveTransform(Poly2, Points);
    cv::warpPerspective(begin, output, Matrix, cv::Size(432, 576));

    cv::imshow("Input", begin);
    cv::imshow("Output", output);
    cv::imwrite("/home/yns/Downloads/tt2.jpg", output);
    cv::waitKey(0);
    return 0;
}

Video Analysis to detect Object [Realtime]

I want to do video analysis to detect movement over a certain duration. For example, I have a video of the lane outside my house, and I want to check whether it remains clean. So I want to detect garbage lying around (and whether it gets cleaned up). Several sites I tried said that I would have to split the video into frames and XOR consecutive frames to find the object movement.
I could not find example code for this, so if anybody has expertise in this field using OpenCV/Xuggler/JavaCV or any other software, could you please post some code to get me going?
My main objective is to develop software for real-time tracking of the garbage outside my house, to check who is dumping it and whether it is being cleaned. Is it possible? Any ideas/suggestions/advice are appreciated. Thanks a lot!
I've tried OpenCV, but have no clue how to split a video into frames and apply object detection to them.
Thanks!
This code uses a background-subtraction technique: if there is any change against the background (your lane, in this case), you will be able to detect it.
OpenCV's samples also include a "Background subtraction" example you can start from.
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/background_segm.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdio.h>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
VideoCapture cap;
bool update_bg_model = true;
cap.open(0);
cv::BackgroundSubtractorMOG2 bg;//(100, 3, 0.3, 5);
bg.set ("nmixtures", 3);
std::vector < std::vector < cv::Point > >contours;
cv::namedWindow ("Frame");
cv::namedWindow ("Background");
Mat frame, fgmask, fgimg, backgroundImage;
for(;;)
{
cap >> frame;
bg.operator()(frame, fgimg);
bg.getBackgroundImage (backgroundImage);
cv::erode (fgimg, fgimg, cv::Mat ());
cv::dilate (fgimg, fgimg, cv::Mat ());
cv::findContours (fgimg, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
cv::drawContours (frame, contours, -1, cv::Scalar (0, 0, 255), 2);
cv::imshow ("Frame", frame);
cv::imshow ("Background", backgroundImage);
char k = (char)waitKey(30);
if( k == 27 ) break;
}
return 0;
}
As for your query about splitting a video into frames: a video is nothing but a collection of frames. So once you open it with VideoCapture capture(videoFilename); in OpenCV, each read (e.g. capture >> frame) grabs a single frame/image from the video, as sketched below.
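A minimal sketch of that loop; the filename is a placeholder:

#include <iostream>
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture capture("lane.avi"); // placeholder filename
    if (!capture.isOpened())
        return -1;

    Mat frame;
    int count = 0;
    while (capture.read(frame)) // each call grabs exactly one frame
        ++count;

    std::cout << "frames read: " << count << std::endl;
    return 0;
}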
It is very simple with the aid of the OpenCV and JavaCV libraries. The logic is simply to grab frames from the webcam continuously on a thread and subtract each pair of consecutive frames. Any change between the two frames produces a white pixel; a black pixel indicates no change between the frames (no motion detected). The complete implementation can be found here.
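Here is a minimal sketch of that differencing logic in OpenCV C++; the threshold value of 25 is an arbitrary assumption to tune:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0); // webcam
    if (!cap.isOpened())
        return -1;

    Mat curr, gray, prev, diff, motionMask;
    cap >> curr;
    if (curr.empty())
        return -1;
    cvtColor(curr, prev, CV_BGR2GRAY);

    for (;;)
    {
        cap >> curr;
        if (curr.empty())
            break;
        cvtColor(curr, gray, CV_BGR2GRAY);

        absdiff(gray, prev, diff);                           // per-pixel difference
        threshold(diff, motionMask, 25, 255, THRESH_BINARY); // white = motion
        gray.copyTo(prev);

        imshow("motion", motionMask);
        if ((char)waitKey(30) == 27) // Esc quits
            break;
    }
    return 0;
}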

goodFeaturesToTrack gives zero size vector

I am trying to use the goodFeaturesToTrack() function with OpenCV 2.4.3 on a grayscale image of Lena, but the vector that should store the features as cv::Point2f always comes back empty. I have also tried using a zero mask, but in that case the application hangs. I tried varying the quality level from 0.01 down to 0.001, yet the size of the vector is still zero. Any idea? Here is my code:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/video/tracking.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat frameROI = imread("C:\\lena.jpg");
    std::vector<cv::Point2f> corners;
    cvtColor(frameROI, frameROI, CV_BGR2GRAY); // imread loads BGR, not RGB

    //Mat mask(frameROI.size(), CV_8UC1);
    //mask.setTo(Scalar::all(0));
    //goodFeaturesToTrack(frameROI, corners, 10, 0.001, 10, mask, 3, false, 0.04);
    goodFeaturesToTrack(frameROI, corners, 10, 0.001, 10); // AFTER EDIT

    cout << "SIZE OF FEATURE VECTOR = " << corners.size() << endl;
    imshow("VIDEO ROI", frameROI);
    waitKey();
    return 0;
}
OUTPUT:
SIZE OF FEATURE VECTOR = 0
EDIT: After Bob's suggestion I removed the mask lines and modified the call as marked above, but now the application hangs after goodFeaturesToTrack() is invoked. Any idea?
By setting the mask to all zeros, you are basically excluding the whole image from the search. You should either drop the mask argument entirely or replace the setTo with mask.setTo(Scalar::all(1)); (that is, search for features in the whole image); in general, set the mask to 1 in the region of interest and 0 everywhere else.
The following image is what your code returns for me with the mask argument dropped and the detected points drawn:
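For reference, a minimal sketch of a valid mask, reusing frameROI and corners from your code; the central region chosen here is arbitrary:

// Non-zero mask pixels mark the region that will be searched;
// this restricts the search to the central quarter of the image.
Mat mask = Mat::zeros(frameROI.size(), CV_8UC1);
Rect center(frameROI.cols / 4, frameROI.rows / 4,
            frameROI.cols / 2, frameROI.rows / 2);
mask(center).setTo(Scalar::all(255));
goodFeaturesToTrack(frameROI, corners, 10, 0.001, 10, mask);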
Solved the problem just now: instead of using the pre-built libs and DLLs, I built OpenCV myself with MSVC 2008, and now it works fine; the same points Bob indicated are detected.

Change dpi of an image in OpenCV

When I open an image in OpenCV (which may be 300 dpi, 72 dpi, etc.), the DPI of the image is automatically changed to 96 dpi. I want to control this DPI value. Please help. Thanks in advance...
#include "stdafx.h"
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
#include <iostream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
IplImage *img1=cvLoadImage("input.jpg");
cvSaveImage("output.jpg",img1);
return(0);
}
OpenCV does not support metadata manipulation like this. You need to use another tool to re-set the DPI, or consider incorporating libjpeg directly.
Another option is to take the OpenCV jpeg writer code and change it according to your needs.
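If you do go through libjpeg, the DPI lives in the JFIF density fields. A minimal sketch (error handling omitted, filenames are placeholders):

#include <cstdio>
#include <jpeglib.h>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg"); // BGR
    cv::cvtColor(img, img, CV_BGR2RGB);    // libjpeg expects RGB order

    FILE* f = fopen("output.jpg", "wb");
    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, f);

    cinfo.image_width = img.cols;
    cinfo.image_height = img.rows;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    // The JFIF density fields carry the DPI stored in the file.
    cinfo.density_unit = 1; // 1 = dots per inch
    cinfo.X_density = 300;
    cinfo.Y_density = 300;

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = img.ptr<uchar>(cinfo.next_scanline);
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    fclose(f);
    return 0;
}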
You need to edit the image metadata with libexif (C) or exiv2 (C++).

Detect RGB color interval with OpenCV and C++

I would like to detect a red-colored object in a video or image with OpenCV and C++. What algorithms are available to do this?
I would like to compare the ratios between color levels: when the brightness varies, these ratios stay roughly constant. So I want to determine the interval of acceptable values for the colors in the zone of interest.
For red, I look at R(x, y) and at the ratios G(x, y)/R(x, y) and B(x, y)/R(x, y).
To get a first idea of the acceptable ranges, I extract the maximum and minimum of each ratio from a red palette image.
I would like to find something like this:
if minR <= R(x,y) <= maxR and minG <= G(x,y) <= maxG and minB <= B(x,y) <= maxB then
color(x,y) = WHITE else color(x,y) = BLACK
Preprocess the image using cv::inRange() with the necessary color bounds to isolate red. You may want to transform to a color space like HSV or YCbCr for more stable color bounds, because chrominance and luminance are better separated there. You can use cvtColor() for this. Check out my answer here for a good example of using inRange() with createTrackbar().
So, the basic template would be:
Mat redColorOnly;
inRange(src, Scalar(lowBlue, lowGreen, lowRed), Scalar(highBlue, highGreen, highRed), redColorOnly);
detectSquares(redColorOnly);
EDIT: Just use the trackbars to determine the color range you want to isolate, then hard-code the intervals you found; you don't have to keep the trackbars around. A sketch of that tuning loop follows.
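Something like this minimal tuning sketch; the window name and slider layout are arbitrary:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("colored_squares.png"), filtered;
    namedWindow("tuner");

    // Six sliders for the lower/upper BGR bounds.
    int lowB = 0, lowG = 0, lowR = 0, highB = 255, highG = 255, highR = 255;
    createTrackbar("lowB", "tuner", &lowB, 255);
    createTrackbar("lowG", "tuner", &lowG, 255);
    createTrackbar("lowR", "tuner", &lowR, 255);
    createTrackbar("highB", "tuner", &highB, 255);
    createTrackbar("highG", "tuner", &highG, 255);
    createTrackbar("highR", "tuner", &highR, 255);

    for (;;)
    {
        inRange(src, Scalar(lowB, lowG, lowR),
                Scalar(highB, highG, highR), filtered);
        imshow("tuner", filtered);
        if ((char)waitKey(30) == 27) // Esc quits; note the bounds you settled on
            break;
    }
    return 0;
}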
EXAMPLE:
So, for a complete example of the template, here you go.
I created a simple (and ideal) image in GIMP, shown below:
Then I created this program to filter all but the red squares:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

Mat redFilter(const Mat& src)
{
    assert(src.type() == CV_8UC3);

    Mat redOnly;
    inRange(src, Scalar(0, 0, 0), Scalar(0, 0, 255), redOnly);
    return redOnly;
}

int main(int argc, char** argv)
{
    Mat input = imread("colored_squares.png");
    imshow("input", input);
    waitKey();

    Mat redOnly = redFilter(input);
    imshow("redOnly", redOnly);
    waitKey();

    // detect squares after filtering...
    return 0;
}
NOTE: You will not be able to use these exact filter intervals on your real imagery; I just suggest you tune the intervals with trackbars to see what is acceptable.
The output looks like this:
Voila! Only the red square remains :)
Enjoy :)
In that case, try to find some unique feature of your required square that distinguishes it from the other squares.
For example:
1) Color of the square: if its color differs from all the other squares, you can check inside each square and select the one with the required color, as explained by mevatron.
2) Size of the square: if you know the size, compare the size of each square and select the best match.
You can convert your image from RGB to HSV using the built-in cvtColor() function. Every color falls within some range of HSV values, so you can find that range, use it as a threshold, and separate those pixels from the others.
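For example, a minimal sketch for red: red wraps around hue 0 on OpenCV's 0-179 hue scale, so it takes the union of two bands (the exact bounds are assumptions to tune):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("input.png"), hsv;
    cvtColor(src, hsv, CV_BGR2HSV);

    // Red sits at both ends of the hue axis, so take the union of two bands.
    Mat lowRed, highRed, redMask;
    inRange(hsv, Scalar(0, 70, 50), Scalar(10, 255, 255), lowRed);
    inRange(hsv, Scalar(170, 70, 50), Scalar(179, 255, 255), highRed);
    bitwise_or(lowRed, highRed, redMask);

    imshow("red mask", redMask);
    waitKey(0);
    return 0;
}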
