Remove outlier lines after findContours in image using Python - OpenCV

I want to detect all rectangles in an image using findContours in OpenCV, and I want to delete the unnecessary shapes that findContours has identified.
My image https://i.stack.imgur.com/eLb1s.png
My result: https://i.stack.imgur.com/xQqeF.png
My code:
import cv2
import numpy as np

img = cv2.imread('CD/A.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
img1 = np.ones(img.shape, dtype=np.uint8) * 255
ret, thresh = cv2.threshold(gray, 127, 255, 1)
(_, contours, h) = cv2.findContours(thresh, 1, 2)
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        cv2.drawContours(img1, [cnt], 0, (0, 255, 0), 2)
cv2.imshow('Detected line', img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
I want to remove these extra lines that appear inside the rectangles:
https://i.stack.imgur.com/n9byP.png
Need your help, guys.

One thing you could do is find the connected components and remove the ones that are small:
from skimage.morphology import label
import numpy as np

comps = label(thresh)  # get the label map of the connected components

# The comps array will have a unique integer for each connected component
# and 0 for the background. np.unique gets the unique label values.
#
# Therefore, this loop allows us to pluck out each component from the image
for i in range(1, len(np.unique(comps))):
    # comps == i will convert the array into True (1) if that pixel is in the
    # i-th component and False (0) if it is not.
    #
    # Therefore, np.sum(comps == i) returns the "area" of the component
    if np.sum(comps == i) < small_number:
        # If the area is less than some number of pixels,
        # set the pixels of this component to 0 in the thresholded image
        thresh[comps == i] = 0
You can do the labeling with OpenCV as well, with connectedComponentsWithStats or something like that, but I'm more familiar with skimage.
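For completeness, here is a rough sketch of the same idea with OpenCV's connectedComponentsWithStats; min_area is an assumed threshold you would tune for your image:
import cv2
import numpy as np

min_area = 50  # assumed area threshold in pixels; tune for your image

# connectedComponentsWithStats returns a label map plus per-component statistics
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh)
for i in range(1, num_labels):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < min_area:
        # erase small components from the thresholded image
        thresh[labels == i] = 0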

If you can convert your image into a binary image (with a simple threshold), you can perform a morphological open operation, which can help you filter out the small lines inside the rectangles, and then find contours again on the new image (a rough sketch follows the link below).
https://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html
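A minimal sketch of that idea, assuming a small rectangular kernel whose size you would tune to the thickness of the stray lines:
import cv2
import numpy as np

img = cv2.imread('CD/A.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# binarize, then open (erode followed by dilate) to remove thin stray lines
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
kernel = np.ones((3, 3), np.uint8)  # assumed kernel size; adjust to the line thickness
opened = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)

# find contours again on the cleaned-up image
# ([-2] keeps this working with both the OpenCV 3 and OpenCV 4 return signatures)
contours = cv2.findContours(opened, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]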

Related

Extract a specific feature from an image

I'm working on a project where I have to separate some line segments from others. I used the Hough transform to detect these lines; however, I'm stuck on how to extract only the lines I want. As you can see in the image, I would like to extract the lines marked in red.
If someone has an idea of where I can find documentation that can help or provide some code help, I’ll be grateful.
I’ve provided my Hough transform code in case it can help with something.
import cv2
import numpy as np

img = cv2.imread('input.png')
lines_list = list()
if len(img.shape) == 3:
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    image_all = img.copy()
else:
    gray = img.copy()
    image_all = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
lines = cv2.HoughLinesP(
    edges,             # Input edge image
    5,                 # Distance resolution in pixels
    np.pi / 180,       # Angle resolution in radians
    150,               # Min number of votes for a valid line
    np.array([]),
    minLineLength=90,  # Min allowed length of a line
    maxLineGap=40      # Max allowed gap between segments for joining them
)
for points in lines:
    # Extract the endpoints nested in the list
    x1, y1, x2, y2 = points[0]
    # Draw the line joining the points on the original image
    cv2.line(image_all, (x1, y1), (x2, y2), (0, 100, 255), 4)
The final result should have only the lines marked in red.

How to connect image regions

I have a matrix of values, which I've colored for visual analysis. The green region shows value 5, brown shows value 6, and black shows value 0. I want to connect the broken regions with value 5. I've tried different structuring elements, e.g. [1 1 0; 1 1 0; 0 0 0], and used two dilations followed by a median filter to get this result.
se_mask = centered(Bool[1 1 0; 1 1 0; 0 0 0])      # structuring element
result = dilate(dilate(gt_mat, se_mask), se_mask)   # two dilations
d_gt_mat = mapwindow(median, result, (5, 5))        # 5x5 median filter
I'm not sure what a better way is to connect the broken regions and fill them in.
I'm working with Julia.
The functions in the ImageMorphology package might be what you are looking for.
For example, with the first image in the OP, something like this can be done:
using Images
using ImageMorphology
using ImageInTerminal
using IterTools
img = load("/path/to/dir/so-image.png")
gray_img = Gray.(img)
size(gray_img) # (117, 238)
# applying dilate 7 times and then erode 7 times
closed_img = nth(iterated(erode, nth(iterated(dilate, gray_img), 7)), 7)
save("/tmp/gray_img.png", gray_img)
save("/tmp/closed_img.png", closed_img)
and the results are:
Original in gray version
After processing

What is the purpose of decimation when calibrating a camera with Charuco?

I have been working on performing camera calibration using ChAruCo boards.
Following the code here (my commented version is shown below), it appears that only every other image is used when performing the camera calibration - due to the decimator.
What could be the reason for this, other than to save processing power, which seems unnecessary since this step is only performed once?
def read_chessboards(chessboard_images):
    # Charuco base pose estimation.
    print("POSE ESTIMATION STARTS:")
    # Declare lists to store corner locations and IDs
    allCorners = []
    allIds = []
    decimator = 0
    # SUB PIXEL CORNER DETECTION CRITERION
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.00001)
    # for each of the chessboard images
    for im in chessboard_images:
        print("=> Processing image {0}".format(im))
        frame = cv2.imread(im)  # read current image into frame variable
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # convert to grayscale
        corners, ids, rejectedImgPoints = cv2.aruco.detectMarkers(gray, ARUCO_DICT)  # detect markers present in image
        # if there are any markers detected
        if len(corners) > 0:
            # SUB PIXEL DETECTION
            for corner in corners:
                # refine corner locations
                # TODO: check if this works
                cv2.cornerSubPix(gray, corner,
                                 winSize=(3, 3),
                                 zeroZone=(-1, -1),
                                 criteria=criteria)
            # interpolate position of ChArUco board corners.
            res2 = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
            print(f'Charuco corners at: {res2}')
            # if 3+ corners are detected, add to allCorners list for every other image
            if res2[1] is not None and res2[2] is not None and len(res2[1]) > 3 and decimator % 1 == 0:
                allCorners.append(res2[1])
                allIds.append(res2[2])
        # why only every other chessboard image?
        decimator += 1
    imsize = gray.shape
    return allCorners, allIds, imsize
Just realized that x % 1 always evaluates to 0, so the decimator doesn't actually do anything here. I guess it was included as an optional feature: if you change 1 to some other number, only a subset of the images gets used.
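For example, a hypothetical tweak (the modulus 3 below is just an illustration) that would keep corners from only every third image:
# keep corners only from every third image (3 is a hypothetical value)
if res2[1] is not None and res2[2] is not None and len(res2[1]) > 3 and decimator % 3 == 0:
    allCorners.append(res2[1])
    allIds.append(res2[2])
decimator += 1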

Simple blob detector does not detect blobs

I'm trying to use SimpleBlobDetector as described here; however, the simplest possible code I've hacked together does not seem to yield any results:
img = cv2.imread("detect.png")
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(img)
This code yields an empty keypoints array:
[]
The image I'm trying to detect the blobs in is:
I would expect at least 2 blobs to be detected -- according to the documentation, SimpleBlobDetector detects dark blobs, and the image does contain 2 of those.
I know it is probably something embarrassingly simple I'm missing here, but I can't seem to figure out what it is. My wild guess is that it has something to do with the circularity of the blobs, but after trying all kinds of filter parameters I can't seem to find the right circularity settings.
Update:
As per the comment below, which suggested that I should invert my image (despite what the documentation suggests, unless I'm misreading it), I've tried inverting it and running the sample again:
img = cv2.imread("detect.png")
img = cv2.bitwise_not(img)
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(img)
However, as I suspected, this gives the same result - no detections:
[]
The problem is the parameters :) and the fact that the bottom blob is too close to the border...
You can take a look at the default parameters in this GitHub link, and at an interesting graph at the end of this link where you can check how the different parameters influence the result.
Basically, you can see that by default it filters by inertia, area and convexity. Now, if you remove the convexity and inertia filters, it will mark the top one. If you also remove the area filter, it will still show only the top blob... The main issue with the bottom one is that it is too close to the border... so it doesn't look like a "blob" to the detector... but if you add a small border to the image, it will appear. Here is the code I used for it:
import cv2
import numpy as np

img = cv2.imread('blob.png')
# create a small border around the image, just at the bottom
img = cv2.copyMakeBorder(img, top=0, bottom=1, left=0, right=0, borderType=cv2.BORDER_CONSTANT, value=[255, 255, 255])
# create the params and deactivate the 3 filters
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = False
params.filterByInertia = False
params.filterByConvexity = False
# detect the blobs
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
# display them
img_with_keypoints = cv2.drawKeypoints(img, keypoints, outImage=np.array([]), color=(0, 0, 255), flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("Frame", img_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()
and the resulting image:
And yes, you can achieve similar results without deactivating the filters, by changing their parameters instead. For example, these parameters gave exactly the same result:
params = cv2.SimpleBlobDetector_Params()
params.maxArea = 100000
params.minInertiaRatio = 0.05
params.minConvexity = .60
detector = cv2.SimpleBlobDetector_create(params)
It will heavily depend on the task at hand and what you are looking to detect. From there, play with the min/max values of each filter.

Get Depth image in grayscale in ROS with imgmsg_to_cv2 [python]

I am using Kinect v1 and I want to get the depth image in grayscale from the topic "/camera/depth_registered/image" in ROS. As I found here, I can do it by using the function imgmsg_to_cv2. The default desired_encoding for my depth messages is "32FC1", which I keep. The problem is that when I use the cv2.imshow() function to show it, I get the image in binary... When I do the same for the RGB image, everything is shown just fine...
Any help appreciated!
In the end, I found a solution, which you can see here:
def Depthcallback(self, msg_depth):  # TODO still too noisy!
    try:
        # The depth image is a single-channel float32 image;
        # the values are the distance in mm along the z axis
        cv_image = self.bridge.imgmsg_to_cv2(msg_depth, "32FC1")
        # Convert the depth image to a Numpy array since most cv2 functions
        # require Numpy arrays.
        cv_image_array = np.array(cv_image, dtype=np.dtype('f8'))
        # Normalize the depth image to fall between 0 (black) and 1 (white)
        # http://docs.ros.org/electric/api/rosbag_video/html/bag__to__video_8cpp_source.html lines 95-125
        cv_image_norm = cv2.normalize(cv_image_array, cv_image_array, 0, 1, cv2.NORM_MINMAX)
        # Resize to the desired size
        cv_image_resized = cv2.resize(cv_image_norm, self.desired_shape, interpolation=cv2.INTER_CUBIC)
        self.depthimg = cv_image_resized
        cv2.imshow("Image from my node", self.depthimg)
        cv2.waitKey(1)
    except CvBridgeError as e:
        print(e)
However, the result is not as good as the one I get from the image_view node of ROS, but it is still pretty acceptable!
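If you would rather display an 8-bit grayscale image than floats in [0, 1], one possible variation on the same normalization step (just a sketch, not part of the original callback) is to let cv2.normalize scale straight to 0-255:
# scale the float depth values to 0-255 and convert to 8-bit for display
depth_u8 = cv2.normalize(cv_image_array, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
cv2.imshow("Depth as 8-bit grayscale", depth_u8)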
