OpenCV: cv2.drawMatches() not drawing matches (OpenCV 3.0)

The matches are not drawn. OpenCV 3.0, fully updated Ubuntu. The code runs but it doesn't show any matches. The test region was cut and copied directly from the image it should match against.
import numpy as np
import cv2
cv2.ocl.setUseOpenCL(False)
img1 = cv2.imread('images/ingrassroi.png',0)
img2 = cv2.imread('images/ingrass.png',0)
img3 = img1.copy()
# Initiate ORB detector
orb = cv2.ORB_create()
# compute the descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10],None, flags=2)
cv2.imshow("Matches",img3)
cv2.waitKey(-1)

Turns out that the image to match with was too small: ORB was not finding any keypoints in the train image. I enlarged the area cropped from the test image, and it then found matches and correctly identified the cigarette butt in the image. Just an FYI: if you decide to try the FLANN-based matcher from the same page in the OpenCV Python tutorials, be sure to define FLANN_INDEX_LSH = 6.
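For reference, a minimal sketch of that FLANN setup for ORB (binary) descriptors; the index parameter values below are the ones suggested in the OpenCV Python tutorial and may need tuning:
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,       # 6 or 12
                    key_size=12,          # 12 or 20
                    multi_probe_level=1)  # 1 or 2
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)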

Related

Is there an equivalent function or an implementation of skimage.feature.peak_local_max in OpenCV?

I have been trying to segment biological cells in an image using the watershed algorithm. I found an excellent article on pyimagesearch which gives a clear overview of the algorithm and its implementation in Python. The code uses both OpenCV and scikit-image for processing the image.
My goal is to convert the whole pipeline to pure OpenCV. The issue is that scikit-image's skimage.feature.peak_local_max does the job of finding local peaks in an image very efficiently, and I couldn't find or devise an equivalent function in OpenCV.
Original code (I have documented this snippet according to my understanding, please correct me if I am wrong):
# import the necessary packages
from skimage.feature import peak_local_max
from skimage.morphology import watershed
from scipy import ndimage
import numpy as np
import argparse
import imutils
import cv2
from matplotlib import pyplot as plt
# load the image and perform pyramid mean shift filtering
# to aid the thresholding step
image = cv2.imread("test2.png")
shifted = cv2.pyrMeanShiftFiltering(image, 21, 51)
# Apply grayscale
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
# Convert to binary
thresh = cv2.threshold(gray, 0, 255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Watershed starts from here
# compute the exact Euclidean distance from every binary
# pixel to the nearest zero pixel, then find peaks in this
# distance map
D = ndimage.distance_transform_edt(thresh)
localMax = peak_local_max(D, indices=False, min_distance=10,labels=thresh)
# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the Watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
# Apply segmentation
labels = watershed(-D, markers, mask=thresh)
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))
cv2.imwrite("labels.png",labels)
# Contouring
for label in np.unique(labels):
    # if the label is zero, we are examining the 'background'
    # so simply ignore it
    if label == 0:
        continue
    # otherwise, allocate memory for the label region and draw
    # it on the mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)
    # draw an approximated contour enclosing the object
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.018 * peri, True)
    cv2.drawContours(image, [approx], -1, (0, 0, 255), 2)
cv2.imwrite("output.jpg", image)
Pure OpenCV code up to the distance map:
# import the necessary packages
import numpy as np
import cv2
# load the image and perform pyramid mean shift filtering
# to aid the thresholding step
image = cv2.imread("1.png")
shifted = cv2.pyrMeanShiftFiltering(image, 21, 51)
# Apply grayscale
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
# Convert to binary
thresh = cv2.threshold(gray, 0, 255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Watershed starts from here
# compute the exact Euclidean distance from every binary
# pixel to the nearest zero pixel, then find peaks in this
# distance map
D = cv2.distanceTransform(thresh,cv2.DIST_L2,0)
Up to the point where D is computed, the original code and my pure OpenCV code produce exactly the same output. The issue is that I don't have a clear idea of how to implement peak_local_max in OpenCV so that it gives results identical to scikit-image's function.
It would be really helpful if someone with relevant knowledge could explain how this function finds peaks in such a fine-grained manner.
Input Image:
peak_local_max output in scikit-image (BGR format image):
Required output:
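For illustration, one common way to approximate local maxima in pure OpenCV/NumPy is to compare the distance map with a grayscale-dilated (neighbourhood-maximum) copy of itself. This is only a sketch, not an exact equivalent of peak_local_max (plateaus and the min_distance semantics differ), and the helper name and parameter values are assumptions:
import cv2
import numpy as np

def local_peaks(dist, min_distance=10, threshold=0.0):
    # Neighbourhood maximum via grayscale dilation: keep a pixel when it
    # equals the maximum of its (2*min_distance+1) elliptical window and
    # exceeds a small absolute threshold.
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * min_distance + 1, 2 * min_distance + 1))
    dilated = cv2.dilate(dist, kernel)
    peaks = (dist == dilated) & (dist > threshold)
    return peaks.astype(np.uint8)

# usage with the distance map D and binary mask thresh from the code above:
# localMax = local_peaks(D, min_distance=10)
# localMax[thresh == 0] = 0   # restrict peaks to the foreground, like labels=thresh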

How can I count outlier and inlier points after applying RANSAC?

I have gone through the code below and would like to know how I can count the outlier and inlier points after using RANSAC. Could you point me to good example code showing how this is done?
Second question: which feature matching approach is better, BFMatcher.knnMatch() with a ratio test, or bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True) with shortest distance? Is there any reference for this comparison?
# BFMatcher with default params
# (img1/img2 and their keypoints/descriptors kp1, des1, kp2, des2 are assumed
#  to have been computed already with a detector such as SIFT or ORB)
import numpy as np
import cv2 as cv

bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
# Apply ratio test
good_matches = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good_matches.append([m])
# Draw matches
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good_matches, None,
                         flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imwrite('matches.jpg', img3)
# Select good matched keypoints
ref_matched_kpts = np.float32([kp1[m[0].queryIdx].pt for m in good_matches])
sensed_matched_kpts = np.float32([kp2[m[0].trainIdx].pt for m in good_matches])
# Compute homography
H, status = cv.findHomography(sensed_matched_kpts, ref_matched_kpts, cv.RANSAC, 5.0)
Count the number of outliers and inliers from the status mask returned by cv.findHomography:
# number of detected outliers
num_outliers = len(status) - np.sum(status)
# number of detected inliers
num_inliers = np.sum(status)
# inlier ratio: number of inliers / number of matches
inlier_ratio = float(np.sum(status)) / float(len(status))
Feature Matching Algorithm
I would say that if you are using a sparse feature-based algorithm (SIFT or SURF), BFMatcher.knnMatch() with the ratio test is preferred, while bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True) is used for binary descriptor algorithms (ORB, FAST, etc.). My suggestion would be to try both approaches on your project and see which one works better.
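As a minimal illustration of the two strategies discussed above (a sketch: des1 and des2 are assumed to be descriptors computed elsewhere, and the choice of NORM_L2 vs NORM_HAMMING depends on the descriptor type):
import cv2 as cv

# 1) knnMatch + Lowe's ratio test (typical for float descriptors such as SIFT/SURF)
bf_ratio = cv.BFMatcher(cv.NORM_L2)
knn = bf_ratio.knnMatch(des1, des2, k=2)
ratio_good = [m for m, n in knn if m.distance < 0.75 * n.distance]

# 2) cross-check matching (typical for binary descriptors such as ORB/BRIEF)
bf_cross = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
cross_good = sorted(bf_cross.match(des1, des2), key=lambda m: m.distance)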

OpenCV: best way to match spot patterns

I'm trying to write an app for wild leopard classification and conservation in South Asia. The main challenge is to identify individual leopards by the spot pattern on their foreheads.
The current approach I am using is:
Store the known leopard forehead images as a base list
Get the user-provided leopard image and crop the forehead of the leopard
Pre-process the images with the bilateral filter to reduce the noise
Identify the keypoints using the SIFT algorithm
Use FLANN matcher to get KNN matches
Select good matches based on the ratio threshold
Sample code:
import cv2 as cv

# Pre-process & reduce noise (baseImg and userImage are assumed to be loaded already).
img1 = cv.bilateralFilter(baseImg, 9, 75, 75)
img2 = cv.bilateralFilter(userImage, 9, 75, 75)
detector = cv.xfeatures2d_SIFT.create()
keypoints1, descriptors1 = detector.detectAndCompute(img1, None)
keypoints2, descriptors2 = detector.detectAndCompute(img2, None)
# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # or pass an empty dictionary
matcher = cv.FlannBasedMatcher(index_params, search_params)
knn_matches = matcher.knnMatch(descriptors1, descriptors2, 2)
allmatchpointcount = len(knn_matches)
# Keep only matches that pass Lowe's ratio test
ratio_thresh = 0.7
good_matches = []
for m, n in knn_matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append(m)
goodmatchpointcount = len(good_matches)
print("Good match count : ", goodmatchpointcount)
matchsuccesspercentage = goodmatchpointcount / allmatchpointcount * 100
print("Match percentage : ", matchsuccesspercentage)
Problems I have with this approach:
The method has a medium-low success rate and tends to break when there is a new user image.
The user images are sometimes taken from different angles where some key patterns are not visible or warped.
The user image quality affects the match result significantly.
I appreciate any suggestions to get this improved in any manner.
Sample Images
Base Image
Above is matching to below: (Incorrect pattern matched)
More sample images as requested.

How can I improve recognition?

I set myself the task of recognizing passports, but I can't reliably recognize all of the fields. Tell me, what could help? I have tried different filtering and the Canny algorithm, but something is still missing. The code cannot recognize the document series and number, nor small characters, and sometimes it cannot recognize the first or last name at all.
# import the necessary packages
from PIL import Image
import pytesseract
import argparse
import cv2
import os
import numpy as np
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image")
ap.add_argument("-p", "--preprocess", type=str, default="thresh")
args = vars(ap.parse_args())
# load the example image and convert it to grayscale
image = cv2.imread("pt.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.Canny(image, 300, 300, apertureSize=3)
# check to see if we should apply thresholding to preprocess the
# image
if args["preprocess"] == "thresh":
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
# make a check to see if median blurring should be done to remove
# noise
elif args["preprocess"] == "blur":
    gray = cv2.medianBlur(gray, 3)
# write the grayscale image to disk as a temporary file so we can
# apply OCR to it
filename = "{}.png".format(os.getpid())
cv2.imwrite(filename, gray)
# apply OCR and then delete the temporary file
# (note: OCR is run on the original image here, not on the saved
#  preprocessed file)
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
text = pytesseract.image_to_string(image, lang='rus+eng')
os.remove(filename)
print(text)
os.system('python gon.py > test.txt')  # doc output file
# show the output images
cv2.imshow("Image", image)
cv2.imshow("Output", gray)
cv2.waitKey(0)
It's easier (and faster) for Tesseract to recognize text when you provide it only with the regions that contain the text you want to interpret, in your case the big, black letters in the middle, for example:
I'm referring to running Tesseract only on the regions marked in green. Since the document's structure is predictable, you could easily find these regions as follows:
binarize and invert the image (black = empty)
use opencv's connectedComponentsWithStats() function to get a list of all connected components with their positions and size
you can hardcode thresholds to filter only the characters you want, or get a histogram of areas and use statistics to define the thresholds dynamically
on the remaining connected components use morphological operations (e.g. dilation with a horizontal kernel) to connect letters together horizontally
get the bounding box of the final connected components
Optional: postprocessing will work much better in these isolated boxes
Feed each bounding box to Tesseract as a separate Mat; it will greatly simplify the problem.
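A rough sketch of those steps, assuming a hypothetical input file name and illustrative, untuned thresholds and kernel sizes:
import cv2
import numpy as np

img = cv2.imread("passport.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# 1) binarize and invert so text becomes white on a black background
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# 2) connected components with stats (label 0 is the background)
num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)

# 3) keep only components whose area looks like a character
chars = np.zeros_like(binary)
for i in range(1, num):
    if 50 < stats[i, cv2.CC_STAT_AREA] < 5000:   # hardcoded example thresholds
        chars[labels == i] = 255

# 4) connect the surviving characters horizontally into words/lines
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 3))
lines = cv2.dilate(chars, kernel, iterations=1)

# 5) bounding boxes of the merged components; feed each crop to Tesseract
num, labels, stats, _ = cv2.connectedComponentsWithStats(lines, connectivity=8)
for i in range(1, num):
    x, y, w, h, _ = stats[i]
    roi = img[y:y + h, x:x + w]
    # text = pytesseract.image_to_string(roi, lang='rus+eng')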

Interpret ORB matches in opencv Python

I need to evaluate the results after using ORB and BFMatcher in OpenCV, such that I interpret the matches after comparing img1 to img3, and img2 to img3. I understand that ORB matches contain a list of hamming distances, but I want to convert this vector into a scalar value of similarity.
I thought of two scenarios:
1) use the number of matches, where a higher count indicates greater similarity. But how do we deal with the case where the length of matches1 equals the length of matches2? In that case, we can 2) add up all the distances, and the smaller sum is preferable.
Can we combine all cases into one metric?
Here is a minimal version of my code:
orb = cv2.ORB_create()  # cv2.ORB() no longer exists in OpenCV 3+
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)
return len(matches)
Thanks
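For illustration, one heuristic that folds both ideas (match count and distance quality) into a single score; this is only a sketch with an assumed weighting, not an established metric, and the helper name is hypothetical:
import numpy as np

def similarity_score(matches):
    # More matches and a smaller mean Hamming distance both push the score
    # up; the exact combination below is only an illustrative choice.
    if not matches:
        return 0.0
    mean_dist = np.mean([m.distance for m in matches])
    return len(matches) / (1.0 + mean_dist)

# e.g. compare img1 -> img3 against img2 -> img3 by the larger similarity_score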
