Selecting ROI for high resolution images with Qt and OpenCV

I am working on a project that involves selecting an ROI from a high-resolution image (around 5187x3268). Right now I am using findContours in OpenCV to detect a round object (Hough circles is rather slow at this resolution). The problem is that, because of the heavy texture on the object, findContours sometimes detects it incorrectly.
What I do now is show the user what findContours has detected in a Qt window and let them decide whether the detection is correct. If it is, the user presses the Ok button; if not, they press the No, let me select button.
Whenever the user presses No, let me select, the application starts capturing mouse events and displays a rectangle using QRubberBand. I am using a QLabel to display the image; since my screen size is 1920x1080, I have to resize the image to a smaller resolution (say 1537x1280, so that there is some space left for the buttons).
I am using OpenCV's resize to scale the image:
width = myImageDisplayer.width()
height = myImageDisplayer.height()
resizedImage = cv2.resize(myImage, (width, height), interpolation=cv2.INTER_LINEAR)  # dsize is (width, height)
To map coordinates back, I calculate the size-reduction ratios like this:
xReduction = originalImage.shape[1] / resizedImage.shape[1]  # width ratio
yReduction = originalImage.shape[0] / resizedImage.shape[0]  # height ratio
and multiply the event.pos() coordinates by these ratios to get the correct coordinates in the original image:
xRealCoordinates = event.pos().x() * xReduction
yRealCoordinates = event.pos().y() * yReduction
Since these coordinates come out as floats, I round them off, and that rounding is where I lose precision.
Precision is important because I need to recalculate the principal point coordinates (obtained by calibrating the stereo setup) after selecting the ROI in the images.
How does OpenCV map coordinates back to the original image correctly after resizing?
I noticed this when I opened the same image with imshow: if I move the mouse over it, I can see the original image coordinates, even though the image has been resized to fit the screen.
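For concreteness, the whole mapping described above boils down to something like the sketch below (the helper function and the example numbers are only illustrative, reusing the display and image sizes mentioned earlier; rounding happens only once, at the very end):
def map_rect_to_original(rect, display_size, original_size):
    # rect: (x, y, w, h) of the rubber band in display (QLabel) coordinates
    # display_size / original_size: (width, height) tuples
    sx = original_size[0] / display_size[0]  # width scale factor
    sy = original_size[1] / display_size[1]  # height scale factor
    x, y, w, h = rect
    # keep everything as floats and round only once, at the end
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# e.g. a 100x80 selection at (400, 250) in a 1537x1280 view of the 5187x3268 image
print(map_rect_to_original((400, 250, 100, 80), (1537, 1280), (5187, 3268)))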
If anybody can help me with this issue, I will be thankful.


ArUco Marker Detection behavior in different cases with noisy object in background

Python Version: 3.7
OpenCV Version: 4.1.1 / 3.4.X
Mobile Phone: Asus Zenfone Max Pro M1
Initial Setup
Screenshot of plotted image
Dictionary used: cv2.aruco.DICT_ARUCO_ORIGINAL.
Aruco Parameters [Edit 9th Dec, 19]:
parameters = cv2.aruco_DetectorParameters.create()
parameters.cornerRefinementMaxIterations = 80
parameters.cornerRefinementMethod = 1
parameters.polygonalApproxAccuracyRate = 0.05
parameters.cornerRefinementWinSize = 20
parameters.cornerRefinementMinAccuracy = 0.05
parameters.perspectiveRemovePixelPerCell = 8
parameters.maxErroneousBitsInBorderRate = 0.04
parameters.errorCorrectionRate = 0.2
parameters.adaptiveThreshWinSizeStep = 3
parameters.adaptiveThreshWinSizeMax = 23
The red marks show rejected points, while the green marks show the corner points.
Screenshot of aruco detection on image. No corners were detected in this case but lots of rejected points.
Case 1: Effects of Cropping
Cropped Image: 200 to 1574 on Y-axis and 883 to 2633 on X-axis. I cropped it using OpenCV so that there is no loss.
There were some instances where it detected the corner points and some instances where it captured more noise than before.
What I don't understand is, why do the rejected points change?
Screenshot of aruco detection on cropped image. In this case, there are more rejected points than before.
Case 2: Effects of Smoothing
I used a median blur with an 11x11 kernel on this image. The false detections were low and the marker was detected perfectly.
Initially I assumed this was due to the noise removal after applying the median blur, but the results did not improve consistently when I gradually increased/decreased the kernel size. For example, on one image the corners were detected with a 9x9 kernel but not with 5x5, 7x7, 11x11 or 15x15; on another image it might only work with 11x11.
Why does it behave that way?
Screenshot of aruco detection after noise removal, zoomed for convenience.
I can't post the original image here since it is more than 2MB.
What I don't understand is, why do the rejected points change?
If you check the detectMarkers function in the OpenCV library, it applies adaptive thresholding using small windows of several sizes over the image. The number of window sizes (scales) is defined as follows:
// number of window sizes (scales) to apply adaptive thresholding
int nScales = (params->adaptiveThreshWinSizeMax - params->adaptiveThreshWinSizeMin) /
params->adaptiveThreshWinSizeStep + 1;
Even if you crop the image, unless you change these parameters, the processing still breaks the image up using the same set of window sizes, just over different content. This can change (increase or decrease) the total number of candidates (accepted + rejected) that are detected.
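For the parameters listed in the question (and assuming adaptiveThreshWinSizeMin is left at its OpenCV default of 3, since it is not set above), that calculation works out like this:
win_min, win_max, win_step = 3, 23, 3           # adaptiveThreshWinSizeMin/Max/Step
n_scales = (win_max - win_min) // win_step + 1  # integer division, as in the C++ code
print(n_scales)                                 # 7 scales of adaptive thresholding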
Why does it behave that way?
Again, this is the same reason: when you smooth the image, the way the adaptive thresholding gets applied changes.
I think it would be better to change the marker to one that has more concave black areas. This is just a suggestion, since the center part of your marker is mostly a white square, while the other white pieces of paper in the scene are also mostly white squares.

Convert a Picture to RGB Dots Image (Half Toning Like Effect)

I'm trying to show students how the RGB color model works to create a particular color (or, moreover, to convince them that it really does). So I want to take a picture and convert each pixel to an RGB representation, so that when you zoom in, instead of a single colored pixel you see its RGB components.
I've done this, but for some fairly obvious reasons the converted picture is either washed out or darker than the original. That is a minor inconvenience, but I think the demonstration would be more powerful if the result looked closer to the original.
Here are two pictures "zoomed out":
Here is a "medium zoom", starting to show the RGB artifacts in the converted picture:
And here is a picture zoomed in to the point that you can clearly see individual pixels and the RGB squares:
You'll notice the constant color surrounding the pixels; that is the average RGB of the picture. I put that there so that you could see individual pixels (otherwise you just see rows/columns of shades of red/green/blue). If I take that space out completely, the image is even darker and if I replace it with white, then the image looks faded (when zoomed out).
I know why displaying this way causes it to be darker: a "pure red" will come with a completely black blue and green. In a sense if I were to take a completely red picture, it would essentially be 1/3 the brightness of the original.
So my question is:
1: Are there any tools available that already do this (or something similar)?
2: Any ideas on how to get the converted image closer to the original?
For the 2nd question, I could of course just increase the brightness of each "RGB pixel" (the three horizontal stripes in each square), but by how much? I certainly can't just multiply the RGB ints by 3 (in apparent compensation for what I said above). I wonder if there is some way to adjust the background color to compensate? Or is it just something that needs to be fiddled with for each picture?
You were correct to assume you could retain the brightness by multiplying everything by 3. There's just one small problem: the RGB values in an image use gamma correction, so the intensity is not linear. You need to de-gamma the values, multiply, then gamma correct them again.
You also need to lose the borders around each pixel. Those borders take up 7/16 of the final image, which is just too much to compensate for. I tried rotating every other pixel by 90 degrees; while it gives the result a definite zig-zag pattern, it does make clear where the pixel boundaries are.
When you zoom out in an image viewer you might see the gamma problem too. Many viewers don't bother to do gamma correction when they resize. For an in-depth explanation see Gamma error in picture scaling, and use the test image supplied at the end. It might be better to forgo scaling altogether and simply step back from the monitor.
Here's some Python code and a crop from the resulting image.
from PIL import Image

im = Image.open(filename)
im2 = Image.new('RGB', (im.size[0]*3, im.size[1]*3))  # 3x3 block per source pixel
ld1 = im.load()
ld2 = im2.load()
for y in range(im.size[1]):
    for x in range(im.size[0]):
        rgb = ld1[x, y]
        rgb = [(c/255)**2.2 for c in rgb]                # remove gamma (to linear light)
        rgb = [min(1.0, c*3) for c in rgb]               # triple the brightness, clip at white
        rgb = tuple(int(255*(c**(1/2.2))) for c in rgb)  # reapply gamma
        x2 = x*3
        y2 = y*3
        if (x+y) & 1:
            # odd checkerboard cells: horizontal R/G/B stripes
            for x3 in range(x2, x2+3):
                ld2[x3, y2] = (rgb[0], 0, 0)
                ld2[x3, y2+1] = (0, rgb[1], 0)
                ld2[x3, y2+2] = (0, 0, rgb[2])
        else:
            # even cells: the same stripes rotated 90 degrees (vertical)
            for y3 in range(y2, y2+3):
                ld2[x2, y3] = (rgb[0], 0, 0)
                ld2[x2+1, y3] = (0, rgb[1], 0)
                ld2[x2+2, y3] = (0, 0, rgb[2])
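If you want to write the result to disk as well, adding something like im2.save('rgb_dots.png') after the loops will do it (the file name here is just a placeholder).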
Don't waste too much time on this. You cannot make two images look the same when one of them contains less information. On top of that, the computer will subsample your image in unpredictable ways while zooming out.
Just pass a magnifying glass around the class so they can see it for themselves on their phones or other screens, or show pictures of a screen at different magnification levels.
If you want to stick to software, triple the resolution of your image, don't use empty rows and columns (or at least make them black to increase contrast), and scale the RGB components to the full range.
Why don't you keep the magnified image as the background? That would make the two images look identical when zoomed out, while the RGB stripes would remain clearly visible when zoomed in.
If not, use the average color over the whole image to keep a similar intensity, but the washed-out effect will remain.
An intermediate option is to apply a strong lowpass filter to the image to smooth out all the details and use that as the background, but I don't see a real advantage over the first approach.
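A minimal PIL sketch of that first idea, in which each source pixel becomes a 4x4 block whose background is the original pixel colour and the three channel stripes are drawn in the top-left 3x3 corner (the block size, file names and the lack of gamma handling are all simplifying assumptions):
from PIL import Image

im = Image.open('input.png').convert('RGB')   # hypothetical input file
cell = 4                                      # each source pixel becomes a 4x4 block
out = Image.new('RGB', (im.size[0]*cell, im.size[1]*cell))
src, dst = im.load(), out.load()

for y in range(im.size[1]):
    for x in range(im.size[0]):
        r, g, b = src[x, y]
        # background: the original pixel colour, so the zoomed-out view stays close
        for yy in range(y*cell, (y+1)*cell):
            for xx in range(x*cell, (x+1)*cell):
                dst[xx, yy] = (r, g, b)
        # overlay horizontal stripes of the separated channels in the top-left 3x3
        for xx in range(x*cell, x*cell + 3):
            dst[xx, y*cell] = (r, 0, 0)
            dst[xx, y*cell + 1] = (0, g, 0)
            dst[xx, y*cell + 2] = (0, 0, b)

out.save('rgb_dots_background.png')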

How to measure ratio between lines in a photo

I'm working with OpenCV on a task of measuring the solar angle from a photo (without any camera parameters). In the photo there is a straight stick, 3 meters tall, standing in the middle of a field. The shadow it casts, however, lies obliquely on the ground, so it is not in the same projection plane as the stick. I can obtain the pixel lengths of the stick and the shadow, but I don't know whether the ratio can be calculated directly from those two numbers, since only lines within the same projection plane share the same scale.
This is more of a geometry issue than an algorithmic one. Can anyone shed some light on how to determine the height-to-shadow ratio?

Determining the average distance of pixels (to the centre of an image) in OpenCV

I'm trying to figure out how to do the following calculation in OpenCV.
Assuming a binary image (black/white):
Average distance of white pixels from the centre of the image. An image with most of its white pixels near the edges will have a high score, whereas an image with most white pixels near the centre will have a low score.
I know how to do this manually with loops, but since I'm working in Java I'd rather offload it to a set of high-performance OpenCV calls, which are native.
Thanks
distanceTransform() is almost what you want. Unfortunately, it only calculates distance to the nearest black pixel, which means the data must be massaged a little bit. The image needs to contain only a single black pixel at the center for distanceTransform() to work properly.
My method is as follows:
Set all black pixels to an intermediate value
Set the center pixel to black
Call distanceTransform() on the modified image
Calculate the mean distance via mean(), using the white pixels in the binary image as a mask
Example code is below. It's in C++, but you should be able to get the idea:
cv::Mat img; // binary image
img.setTo(128, img == 0);
img.at<uchar>(img.rows/2, img.cols/2) = 0; // Set center point to zero
cv::Mat dist;
cv::distanceTransform(img, dist, CV_DIST_L2, 3); // Can be tweaked for desired accuracy
cv::Scalar val = cv::mean(dist, img == 255);
double mean = val[0];
With that said, I recommend you test whether this method is actually any faster than iterating in a loop. This method does a fair bit more processing than necessary to accommodate the API call.
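For comparison, here is a minimal NumPy sketch of the direct calculation (this is not the native OpenCV/Java route the question asks about, and the function name is only illustrative):
import numpy as np

def mean_white_distance(binary):
    # binary: single-channel uint8 image where white pixels are 255
    ys, xs = np.nonzero(binary == 255)           # coordinates of the white pixels
    cy, cx = binary.shape[0] // 2, binary.shape[1] // 2
    return np.hypot(ys - cy, xs - cx).mean()     # average Euclidean distance to the centre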

highlight overexposed areas in a UIImage

I'm making a simple camera app for iOS and Mac. After the user snaps a picture it generates a UIImage on iOS (an NSImage on Mac). I want to be able to highlight the areas of the image that are overexposed; basically, the overexposed areas would blink when the image is displayed.
Does anybody know an algorithm for telling which parts of the image are overexposed? Do I just add up the R, G, B values at each pixel, and if the total at a pixel is greater than a certain amount, start blinking that pixel, and do that for all pixels?
Or do I have to do some complicated math from outer space to figure it out?
Thanks
A rough approach:
You will have to traverse the image; depending on the accuracy and precision you need, you can combine skipping and averaging pixels to come up with a smooth region.
The details depend on your color space, but imagine YUV space (convenient because you only need to look at one value, Y, the luminance):
if 240/255 is considered white, then a greater value, say 250/255, would be overexposed; you could mark those pixels and then display them in an overlay.
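A quick Python/OpenCV sketch of that idea (not iOS code; the input path is hypothetical and the 250 cut-off is the value used above):
import cv2

img = cv2.imread('photo.jpg')                             # hypothetical input, BGR
luma = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)[:, :, 0]    # Y (luminance) channel

mask = luma >= 250                                        # clipped / overexposed pixels
overlay = img.copy()
overlay[mask] = (0, 0, 255)                               # mark them in red (BGR)
cv2.imwrite('overexposed_highlight.png', overlay)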
