Given an image, how can I create an algorithm using image processing techniques to identify the sections where no products are present? I also need to create a bounding box, with coordinates, for each empty space where products are not present.
I need to accomplish this using OpenCV, and I cannot use deep learning here.
I have used a Canny edge detector, and the empty spaces are identified well by it.
Should I run contour detection on the results of the Canny edge detector?
Any help would be appreciated.
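(For illustration, here is a minimal sketch of the pipeline the question describes: Canny, then contours, then bounding boxes. The file name, kernel size, and all thresholds below are placeholder assumptions that would need tuning.)

    import cv2
    import numpy as np

    img = cv2.imread("shelf.jpg")                     # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # thresholds need tuning per scene

    # Products are edge-dense; empty shelf space is edge-free. Dilating the edge
    # map merges product edges into solid blobs, and inverting it leaves the
    # empty regions as white areas.
    kernel = np.ones((15, 15), np.uint8)              # kernel size is a guess to tune
    mask = cv2.bitwise_not(cv2.dilate(edges, kernel))

    # [-2] keeps this working across the OpenCV 3.x/4.x findContours signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)              # coordinates of an empty region
        if w * h > 1000:                              # skip tiny gaps (tunable)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)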
Yes, you can do that with basic image processing techniques (OpenCV or MATLAB will both work).
Use two images.
One is of the empty shelves.
The second one is the current image.
Subtract the first image from the second.
The result will show you which shelf regions are empty.
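A rough sketch of this subtraction idea, assuming a fixed camera so the two images are already aligned (the file names and threshold value are placeholders):

    import cv2

    empty = cv2.imread("empty_shelf.jpg", cv2.IMREAD_GRAYSCALE)    # reference image
    current = cv2.imread("current_shelf.jpg", cv2.IMREAD_GRAYSCALE)

    # Absolute difference highlights where the two images disagree; regions that
    # stay near zero match the empty reference, i.e. they are still empty.
    diff = cv2.absdiff(current, empty)
    _, changed = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # 30 is a guess

    # Pixels below the threshold are unchanged relative to the empty reference.
    still_empty = cv2.bitwise_not(changed)

The still_empty mask can then go through the same contour/bounding-box step as in the earlier sketch to get coordinates for the empty regions.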
Related
I am fairly new to image processing. To build a Content-Based Image Retrieval (CBIR) system, I have to match feature information of a query image against that of the images in an image database, to find database images that are the same as or similar to the query image. I have selected Sobel edge detection as the feature for now.
I can extract edge information from a subject image in the form of an edge image using the Sobel edge detection algorithm. The result is a black picture with white pixels representing the edges of the original image. (These descriptions might seem very basic and unnecessary, but I want to clarify exactly what data I have in hand.)
I have to compare the edge information of two images to find out how similar/dissimilar they are. Actually, I need to compare the query image with all the images in the database this way, to find similar images and how similar they are to the query. I need a numeric measurement of the distance between two images after comparison (like Manhattan distance, chi-square distance, etc.).
So, after extracting the edges by applying the Sobel operator, how should I 'compare' two edge images? Should I make a histogram from each edge image and calculate the difference between the two histograms? Or should I follow some other method?
I need suggestions. Every paper I find online describes the same thing again and again: what edge detection is and how to do it. I can't find any useful, concrete suggestion on what to do after detecting edges in order to use them in a CBIR system. Also, any software- or language-specific answer is not going to be useful for me; I need an algorithm, and I will implement it myself.
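Purely as an illustration of the histogram idea mentioned above (the asker wants a language-agnostic algorithm, so treat the OpenCV calls as placeholders for any gradient operator): build a normalized histogram of Sobel edge magnitudes per image, then compare histograms with chi-square distance. The bin count and magnitude range here are arbitrary assumptions.

    import cv2
    import numpy as np

    def edge_histogram(img_gray, bins=32):
        # Sobel gradients in x and y, combined into an edge-magnitude map.
        gx = cv2.Sobel(img_gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img_gray, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        # Fixed range covers the maximum 3x3 Sobel response on 8-bit images,
        # so histograms from different images are directly comparable.
        hist, _ = np.histogram(mag, bins=bins, range=(0.0, 1500.0))
        return hist / (hist.sum() + 1e-6)      # normalize so image size cancels out

    def chi_square(h1, h2):
        # Chi-square distance between two normalized histograms: smaller = more similar.
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-6))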
On your images, first apply a contourlet transform and extract the mean and variance values; these become the edge features of your image. Then apply any similarity test to these edge features; the best one is the Euclidean distance metric.
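OpenCV has no contourlet transform, so as a simplified stand-in, the sketch below takes mean and variance of Sobel edge responses over a grid of blocks and compares the resulting feature vectors with Euclidean distance; the grid size is an arbitrary choice:

    import cv2
    import numpy as np

    def block_edge_features(img_gray, grid=4):
        # Edge-magnitude map (a stand-in for contourlet subbands).
        gx = cv2.Sobel(img_gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img_gray, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)

        h, w = mag.shape
        feats = []
        for i in range(grid):
            for j in range(grid):
                block = mag[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid]
                feats += [block.mean(), block.var()]   # mean and variance per block
        return np.array(feats)

    def euclidean(f1, f2):
        return np.linalg.norm(f1 - f2)                 # smaller = more similar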
Is there any recommended algorithm for doing this? My project uses single-channel (black-and-white) images, and it involves two of them. The first image is user-defined: a "map" of an area (let's say a room). The second image is the sensor output (from an RPLIDAR 360-degree laser scanner). The second image contains only some parts of the first one. The goal is to find the corresponding position in the first image.
I'm familiar with OpenCV 2.4.11 and am working on a Raspberry Pi 2.
This is the raw RPLIDAR input, already converted to an image.
Using the erode and dilate functions for filtering, HoughLinesP to improve the line results, SURF feature detection (I have already tried ORB), and the FLANN matcher, here is the result so far:
Mismatched feature points
One of the expected results
I hope I have made my question clear. Thanks in advance.
You can try writing your own corner detection instead. The logic is: observe three points and calculate the angle they create.
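As a minimal sketch of that angle test (the points are assumed to be consecutive samples along a detected line or contour, and the 135-degree cutoff is an arbitrary assumption):

    import numpy as np

    def angle_at(p_prev, p, p_next):
        # Angle at point p formed by the segments p->p_prev and p->p_next, in degrees.
        v1 = np.array(p_prev, dtype=float) - np.array(p, dtype=float)
        v2 = np.array(p_next, dtype=float) - np.array(p, dtype=float)
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    # A sharp angle (far from 180 degrees) suggests a corner.
    if angle_at((0, 0), (5, 0), (5, 5)) < 135:
        print("corner")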
I have an image I want to extract lines from (a vascular network), using the Hough line algorithm. First I preprocess the image, then use Canny edge detection to generate the binary image.
I want to get a polygon/an array of joined line segments representing the shape of the vascular network. However, applying the Hough line transform directly to this image yields mediocre results, partly because edge detection means each vessel is represented by two lines, one on each side, instead of a single centerline.
I'm new to OpenCV and image processing in general, so I'm probably going about this the wrong way. Any suggestions, or any recommended literature?
I suggest not using Canny edge detection.
Instead, first use a binary threshold to get a binary image of the vascular network (see http://docs.opencv.org/3.1.0/d7/d4d/tutorial_py_thresholding.html#gsc.tab=0 for applying a binary threshold). Then, pixels that are "on" should be points inside the network and those that are "off" should be outside.
Then use the findContours method:
http://opencvexamples.blogspot.com/2013/09/find-contour.html
This method gives you an array of contours, each of which is a list of points. A list of points will represent the list of line segments you are looking for (it will represent a contour, and if you are lucky it might be a polygon!).
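A compact sketch of this threshold-then-contours suggestion (the file name, Otsu thresholding, and approximation tolerance below are assumptions to adapt):

    import cv2

    gray = cv2.imread("vessels.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

    # Binary threshold: vessels become white ("on"), background black ("off").
    # Otsu picks the cutoff automatically; use THRESH_BINARY_INV if vessels are dark.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Each contour is a list of points outlining one connected region.
    # [-2] keeps this working across the OpenCV 3.x/4.x findContours signatures.
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    # Optionally simplify each contour into fewer joined line segments / a polygon
    # (2.0 px tolerance, closed curves).
    polygons = [cv2.approxPolyDP(c, 2.0, True) for c in contours]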
Hough may not be the best tool for this job. Hough will give you straight lines or other geometric shapes. It is not designed to follow a detailed pattern like this.
Given the image, I would read research papers which already solve this. Here are a few examples from a search on Google Scholar. If they don't work for you, look up the citations as they should lead you down other paths.
https://scholar.google.com/scholar?hl=en&q=retina+computer+vision+vascular
http://ijesat.org/Volumes/2012_Vol_02_Iss_04/IJESAT_2012_02_04_25.pdf
http://www.vision.cs.rpiscrews.us/publications/pdfs/shen_itbm_submitted.pdf
I ran findContours on the following image:
And got the following contour image (I'm showing only "parent" contours according to the hierarchy):
As you can see, there are many different contours around each object (each in a different color). Now, I want to unify the contours around the person to obtain one enclosing contour, so I can segment her out of the image.
I'm not sure that it can be done, but I thought I should ask here.
Is there any method to intelligently unify the contours in the image so I could segment different objects out?
Thanks,
Gil.
First, do you want to achieve this result only on this image, or on any other image where different people appear in different poses and different clothing?
If you want to segment only this image, then you can achieve it with some color thresholding or with some morphology operations. But to make it work for any image with different people, you would probably need to pursue a PhD in computer vision.
If your task is only segmentation, then I would suggest a semi-automatic segmentation technique like GrabCut or graph cut. These are very popular segmentation algorithms that are readily available in OpenCV and MATLAB, and they work very well on all kinds of images. Here is the result of the GrabCut algorithm on your image.
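For reference, a bare-bones GrabCut call in OpenCV/Python looks roughly like this; the initial rectangle around the subject is a manual guess, which is what makes the method semi-automatic:

    import cv2
    import numpy as np

    img = cv2.imread("person.jpg")                     # hypothetical input
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)          # internal model buffers
    fgd_model = np.zeros((1, 65), np.float64)

    rect = (50, 20, 300, 450)                          # (x, y, w, h) around the subject
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Keep definite and probable foreground; everything else becomes background.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    segmented = img * fg[:, :, None]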
There is a lot of work on contour-based segmentation in the literature.
The ultrametric contour map produces a hierarchy of contours, which are segmentations of the objects in an input image.
Pub: "Contour Detection and Hierarchical Image Segmentation", Pablo Arbelaez, Michael Maire, Charless Fowlkes, Jitendra Malik.
I have a processed binary image of dimensions 300x300. This processed image contains a few objects (people or vehicles).
I also have an RGB image of the same scene, of dimensions 640x480, taken from a different position.
Note: the two cameras are not the same.
I can detect objects to some extent in the first image using background subtraction. I want to detect the corresponding objects in the second image. I went through these OpenCV functions:
getAffineTransform
getPerspectiveTransform
findHomography
estimateRigidTransform
All these functions require corresponding points (coordinates) in the two images.
In the first (binary) image, I only have the information that an object is present; it does not have features that closely match the second (RGB) image.
I thought that conventional feature matching to determine corresponding control points, which could then be used to estimate the transformation parameters, is not feasible, because I don't think I can detect and match features between a binary image and an RGB image (am I right?).
If I am wrong, what features could I use, and how should I proceed with feature matching, finding corresponding points, and estimating the transformation parameters?
The solution I tried was more of a manual marking approach to estimate the transformation parameters (please correct me if I am wrong):
Note: neither camera moves.
Manually marked rectangles around objects in the processed (binary) image.
Noted down the coordinates of the rectangles.
Manually marked rectangles around objects in the second (RGB) image.
Noted down the coordinates of the rectangles.
Repeated the above steps for different samples of the binary and RGB images.
Now that I have some 20 corresponding points, I used them in the function as:
findHomography(src_pts, dst_pts, 0);
So once I detect an object in the first image,
I draw a bounding box around it,
transform the coordinates of its vertices using the transformation found above,
and finally draw a box in the second (RGB) image with the transformed coordinates as vertices.
But this doesn't place the box in the second RGB image exactly over the person/object; instead, it is drawn somewhere else. Even though I took several sample binary and RGB images and used many corresponding points to estimate the transformation parameters, they do not seem to be accurate enough.
What do the CV_RANSAC and CV_LMEDS options and the ransacReprojThreshold parameter mean, and how should I use them?
Is my approach good? What should I modify or do to make the registration accurate?
Is there an alternative approach I could use?
I'm fairly new to OpenCV myself, but my suggestions would be:
Seeing as you have the objects identified in the first image, I shouldn't think it would be hard to get keypoints and extract features? (or maybe you have this already?)
Identify features in the 2nd image
Match the features using OpenCV's FlannBasedMatcher or similar
Highlight the matching features in the 2nd image, or whatever else you want to do.
I'd hope that because all your features in the first image should be positives (you know they are the features you want), it'll be relatively straightforward to get accurate matches.
Like I said, I'm new to this so the ideas may need some elaboration.
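To make this concrete, and to touch on the CV_RANSAC question above, here is a sketch of matching plus robust homography estimation. ORB and a brute-force matcher stand in for whatever detector/matcher you prefer (whether a binary mask yields enough stable keypoints is exactly the asker's concern, so treat this as mechanics only). cv2.RANSAC corresponds to CV_RANSAC in the old C API, and 5.0 is the ransacReprojThreshold: the reprojection error in pixels above which a point pair is treated as an outlier. cv2.LMEDS is the least-median-of-squares alternative, which needs more than 50% of the pairs to be inliers.

    import cv2
    import numpy as np

    img1 = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
    img2 = cv2.imread("rgb.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()                                  # cv2.ORB() in OpenCV 2.4
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck prunes weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC fits the homography to inlier correspondences only, ignoring outliers.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Map a bounding box from image 1 into image 2 using the estimated homography.
    box = np.float32([[10, 10], [110, 10], [110, 210], [10, 210]]).reshape(-1, 1, 2)
    box_in_img2 = cv2.perspectiveTransform(box, H)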
It might be a little late to answer this, and the asker might not see it, but if the first image is originally grayscale then this could be done:
1.) 2nd image -> grayscale -> gray2ndimg
2.) Find point-to-point correspondences between gray1stimg and gray2ndimg by matching features.