I have a uint16 satellite image whose values range from 0 to 3458, and its histogram looks like this:
original histogram
I want to convert the image to float (range 0-1), but of course I can't simply divide everything by 3458, because then I would get a very dark image (most pixels are below 500, as you can see from the histogram).
I would like to get a histogram like this:
new histogram
but I don't really know how to do it.
First of all, you should convert your image to floating point. If you are using MATLAB, im2double() does this; for uint16 input it also rescales the values to [0, 1].
Secondly, are you using the imhist() function to display your histogram? If so, it lets you choose the number of bins, so you can control how the range of values is displayed.
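The shape of the desired histogram suggests a contrast stretch: scale by a high percentile of the data instead of the global maximum, and clip the few brighter pixels. If you are not tied to MATLAB, here is a minimal sketch of that idea in OpenCV/C++; the file name and the choice of the 99th percentile are assumptions, and a single-band CV_16U image is assumed:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

int main()
{
    // Load the 16-bit image as-is, without the default 8-bit conversion.
    cv::Mat img = cv::imread("satellite.tif", cv::IMREAD_UNCHANGED); // CV_16UC1 assumed

    // Find a high percentile (here the 99th) to use as the new maximum.
    std::vector<ushort> vals(img.begin<ushort>(), img.end<ushort>());
    size_t k = vals.size() * 99 / 100;
    std::nth_element(vals.begin(), vals.begin() + k, vals.end());
    double hi = vals[k];

    // Scale so that percentile maps to 1.0, then clip the brighter pixels.
    cv::Mat f;
    img.convertTo(f, CV_32F, 1.0 / hi);
    cv::threshold(f, f, 1.0, 0, cv::THRESH_TRUNC); // cap values above 1.0
    return 0;
}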
Related
I have a bunch of values that seem to be 12-bit numbers. If I put them in a matrix and scale each one to a value 0-255 and then show them as an image, I get something that looks like a photo, but it's quite bland.
I think they might be direct readings off a camera sensor. They have a sort of stippled pattern, kind of like plaid, that makes me think they might come through a Bayer filter. https://en.wikipedia.org/wiki/Bayer_filter
I want to convert these numbers into RGB values. What do I need to do? For each 2x2 block in the Bayer pattern, do I convert the red to R, blue to B, and then average the green values? Do I need a gamma correction?
I noticed that the max value is much lower than the full 0xfff. Do I need to scale the values?
The procedure is well-described here: https://www.strollswithmydog.com/raw-file-conversion-steps/
Looks like I was getting it mostly right, but the problem was grey balance. There is a transformation that needs to be applied to the sensor values to map them to the 0-255 RGB components, and that transform depends on the colour channel. The best way is to take a photo of a perfect grey and calibrate.
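To make those steps concrete, here is a minimal sketch of the naive 2x2 demosaic described in the question, assuming an RGGB layout, 12-bit values, and even image dimensions; the grey-balance gains are hypothetical placeholders that should really come from the grey-card calibration:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Naive demosaic: one RGB output pixel per 2x2 RGGB block.
// Assumes w and h are even.
void demosaic(const std::vector<uint16_t>& raw, int w, int h,
              std::vector<uint8_t>& rgb)
{
    // Hypothetical grey-balance gains; calibrate these with a grey card.
    const double gainR = 1.9, gainG = 1.0, gainB = 1.4;
    const double gamma = 1.0 / 2.2;

    rgb.resize((w / 2) * (h / 2) * 3);
    auto encode = [&](double v, double gain) {
        double x = std::min(1.0, (v / 4095.0) * gain); // 12-bit -> [0, 1], clipped
        return static_cast<uint8_t>(255.0 * std::pow(x, gamma) + 0.5);
    };
    for (int y = 0; y < h; y += 2)
        for (int x = 0; x < w; x += 2) {
            double R = raw[y * w + x];                                    // top-left
            double G = (raw[y * w + x + 1] + raw[(y + 1) * w + x]) / 2.0; // average the two greens
            double B = raw[(y + 1) * w + x + 1];                          // bottom-right
            uint8_t* out = &rgb[((y / 2) * (w / 2) + x / 2) * 3];
            out[0] = encode(R, gainR);
            out[1] = encode(G, gainG);
            out[2] = encode(B, gainB);
        }
}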
I have computed an image with values between 0 and 255. When I use imageview(), the image is correctly displayed in grey levels, but when I want to save this image or display it with imshow, I get a white image, or sometimes some black pixels here and there:
Whereas with imageview():
Can someone help me?
I think you should cast the image to uint8 before displaying it: imshow(uint8(image));
MATLAB expects images of type double to be in the 0..1 range and images of type uint8 to be in the 0..255 range. You can convert the range yourself (changing the values in the process), do an explicit cast (and potentially lose precision), or instruct MATLAB to use the minimum and maximum values found in the image matrix as the black and white points when visualising.
See the following example with a uint8 image that ships with MATLAB:
im = imread('moon.tif');        % uint8 image, values in 0..255
figure; imshow(im);             % correct: uint8 is displayed over 0..255
figure; imshow(double(im));     % mostly white: doubles are expected in 0..1
figure; imshow(double(im), []); % correct: [] scales between the data's min and max
figure; imshow(im2double(im));  % correct: im2double rescales to 0..1
I want to calculate the perceived brightness of an image and classify it as dark, neutral, or bright, and I have run into a problem.
I quote Lakshmi Narayanan's comment below. I'm confused by this method. What does "the average of the hist values from the 0th channel" mean here? Does the 0th channel refer to the grayscale image or to the value channel of the HSV image? Moreover, what is the theory behind that method?
Well, for such a case, I think HSV would be better. Or try this method, @2vision2: compute the Laplacian of the grayscale image and obtain the max value using minMaxLoc; call it maxval. Estimate your sharpness/brightness index as (maxval * average V channel value) / (average of the hist values from the 0th channel), as said above. This gives you certain values: low-brightness images are usually below 30, 30-50 can be taken as OK images, and above 50 as bright images.
If you have an RGB color image, you can get the brightness by converting it to another color space that separates color from intensity information, like HSV or LAB.
Gray images already show local "brightness" so no conversion is necessary.
Whether an image is perceived as bright depends on many things: mainly your display device, reference images, contrast, and the human observer.
Still, using a few intensity statistics should give you an OK classification for one particular display device.
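As a minimal sketch of that statistics-based classification in OpenCV/C++ (the thresholds are arbitrary placeholders to be tuned per device and image set, not values from the comment above):

#include <opencv2/opencv.hpp>
#include <string>

// Classify an image as dark / neutral / bright from the mean of its HSV value channel.
std::string classifyBrightness(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    double meanV = cv::mean(hsv)[2]; // mean V (intensity) channel, 0..255 for 8-bit input

    // Placeholder thresholds; tune them for your display device and images.
    if (meanV < 85)  return "dark";
    if (meanV < 170) return "neutral";
    return "bright";
}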
I have a vector of Point2f which has the color space CV_8UC4, and I need to convert them to CV_64F. Is the following code correct?
points1.convertTo(points1, CV_64F);
More details:
I am trying to use this function to calculate the essential matrix (rotation/translation) through the 5-point algorithm, instead of using findFundamentalMat, included in OpenCV, which is based on the 8-point algorithm:
https://github.com/prclibo/relative-pose-estimation/blob/master/five-point-nister/five-point.cpp#L69
As you can see, it first converts the image to CV_64F. My input image is a CV_8UC4, BGRA image. When I tested the function, both BGRA and greyscale images produce valid matrices from the mathematical point of view, but if I pass a greyscale image instead of a color one, it takes much longer to compute. That makes me think I'm not doing something correctly in one of the two cases.
I have read that when the change in color space is not linear (which I suppose is the case when you go from 4 channels to 1, as in this case), you should normalize the intensity value. Is that correct? Which input should I give to this function?
Another note, the function is called like this in my code:
vector<Point2f> imgpts1, imgpts2;
for (vector<DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it)
{
    imgpts1.push_back(firstViewFeatures.second[it->queryIdx].pt);
    imgpts2.push_back(secondViewFeatures.second[it->trainIdx].pt);
}
Mat mask;
Mat E = findEssentialMat(imgpts1, imgpts2, [camera focal], [camera principal_point], CV_RANSAC, 0.999, 1, mask);
The fact I'm not passing a Mat, but a vector of Point2f instead, seems to create no problems, as it compiles and executes properly.
Should I store the matches in a Mat instead?
I am not sure what you mean by a vector of Point2f in some color space, but if you want to convert a vector of points into a vector of points of another type, you can use standard C++/STL functions like copy(), assign(), or insert(). For example:
doublePoints.resize(floatPoints.size()); // copy() needs the destination pre-sized
copy(floatPoints.begin(), floatPoints.end(), doublePoints.begin());
or
doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end()); // appends, no pre-sizing needed
No, it is not. A std::vector<cv::Point2f> cannot make use of the OpenCV convertTo function.
I think you really mean that you have a cv::Mat points1 of type CV_8UC4. Note that those are RxCx4 values (where R and C are the number of rows and columns), whereas a CV_64F matrix holds only RxC values. So you need to be clearer about how you want to transform those values.
You can do points1.convertTo(points1, CV_64FC4) to get an RxCx4 matrix.
Update:
Some remarks after you updated the question:
Note that a vector<cv::Point2f> is a vector of 2D points that is not associated with any particular color space; they are just coordinates in the image axes. So they represent the same 2D points whether the image is grey, RGB, or HSV. Hence the execution time of findEssentialMat doesn't depend on the image's color space, although extracting the points may.
That said, I think your input for findEssentialMat is fine (the function takes care of the vectors and converts them into its internal representation). In cases like this, it is very useful to draw the points on your image to debug the code, as in the sketch below.
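For instance, a quick debugging sketch along those lines (the variable names follow the question's code; circle size and colors are arbitrary):

#include <opencv2/opencv.hpp>
#include <vector>

// Draw the matched points on copies of both images to eyeball the matches.
void debugDrawPoints(const cv::Mat& img1, const cv::Mat& img2,
                     const std::vector<cv::Point2f>& imgpts1,
                     const std::vector<cv::Point2f>& imgpts2)
{
    cv::Mat vis1 = img1.clone(), vis2 = img2.clone();
    for (size_t i = 0; i < imgpts1.size(); ++i) {
        cv::circle(vis1, imgpts1[i], 3, cv::Scalar(0, 255, 0), -1);
        cv::circle(vis2, imgpts2[i], 3, cv::Scalar(0, 0, 255), -1);
    }
    cv::imshow("points 1", vis1);
    cv::imshow("points 2", vis2);
    cv::waitKey(0);
}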
I'm trying to create a histogram using OpenCV. I have a 32-bit image that came out of a blurring operation, so I only know that the values are in the range [-0.5, 0.5], but I don't know anything else about the starting data.
The problem is that I don't understand how to set the parameters to compute such a histogram.
The code I wrote is:
int numbins = 1000;
float range[] = {-0.5, 0.5};  // lower and upper bound of the uniform bins
float *ranges[] = {range};
CvHistogram *hist = cvCreateHist(1, &numbins, CV_HIST_ARRAY, ranges, 1);
cvCalcHist(&img, hist);       // img is an IplImage* holding the 32-bit float data
where img is the image whose histogram I want. If I try to display the histogram I just get a black picture, while with the same function I get a correct histogram if I use a grayscale 8-bit image.
Have you looked at the calcHist example? Also, the camshiftdemo makes heavy use of histograms.
Are you normalizing the histogram output with normalize before displaying it (camshiftdemo shows how to do this)? Values near 0 will appear black when displayed, but when normalized to, say, the 0-255 range they will show up nicely.
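For reference, here is a minimal sketch with the C++ API that includes that normalization step; the random test data is a stand-in for your blurred image, and the plot size is arbitrary, while the bin count and value range mirror the question's code:

#include <opencv2/opencv.hpp>

int main()
{
    // Stand-in for the blurred CV_32F image with values around [-0.5, 0.5].
    cv::Mat img(256, 256, CV_32F);
    cv::randn(img, 0.0, 0.15);

    int numbins = 1000;
    float range[] = {-0.5f, 0.5f};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&img, 1, 0, cv::Mat(), hist, 1, &numbins, ranges);

    // Normalize the bin counts to the drawing height; without this the plot is black.
    int histH = 400;
    cv::normalize(hist, hist, 0, histH, cv::NORM_MINMAX);

    // Draw one vertical line per bin.
    cv::Mat plot(histH, numbins, CV_8UC1, cv::Scalar(0));
    for (int i = 0; i < numbins; ++i)
        cv::line(plot, cv::Point(i, histH),
                 cv::Point(i, histH - cvRound(hist.at<float>(i))),
                 cv::Scalar(255));
    cv::imshow("histogram", plot);
    cv::waitKey(0);
    return 0;
}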