Calculating a bottle height and two diameters with OpenCV - Edge Detection - opencv

I have a graduate project about measurement software built with OpenCV. First, a grey photo is taken with a webcam, then the edges of the bottle are detected, in particular the top and bottom edges. From those, the bottle height is calculated in pixels. Using the top and bottom as references, two offsets a and b (in pixels) are taken, and the diameters at those rows are measured. The edge detection method does not matter.
This picture may make the problem clearer: http://speedy.sh/vRgTy/w.JPG
I must get all of these values for different bottles and cans with an OpenCV solution. How can I solve this problem? It would be very good if anybody could give a hint on how to carry out this project.

Since this is a graduate project, you should probably do some reading in the literature. Specifically about object detection/recognition. There is a lot of literature out there on the topic, and Google Scholar can be your friend.
You should also take a look at this post, there is a large amount of info there which is related to what you want to do.
Lastly, there is a wonderful free book on Machine Vision you can read here, and they go into detail on object recognition in chapter 15.
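As a concrete starting point (not a full solution), here is a minimal Python/OpenCV sketch of the pipeline described in the question: Canny edges on the grey frame, height from the topmost and bottommost edge pixels, and diameters read at two reference rows. The file name, thresholds, and the offsets a and b are all assumptions to tune:

```python
import cv2
import numpy as np

# hypothetical: a grey webcam frame of the bottle on a plain background
img = cv2.imread('bottle_grey.png', cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(img, 50, 150)       # thresholds are assumptions; tune them
ys, xs = np.nonzero(edges)

top, bottom = ys.min(), ys.max()      # topmost and bottommost edge pixels
height_px = bottom - top
print('bottle height (pixels):', height_px)

def diameter_at(offset):
    """Width of the bottle at a row `offset` pixels below the top edge."""
    cols = np.flatnonzero(edges[top + offset])
    return cols[-1] - cols[0] if cols.size >= 2 else 0

# a and b are reference offsets from the top, as in the question (assumed values)
a, b = int(0.1 * height_px), int(0.8 * height_px)
print('neck diameter (pixels):', diameter_at(a))
print('body diameter (pixels):', diameter_at(b))
```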
Good Luck!

Related

Which image processing technique is used to solve this problem?

I have a problem and a solution, but I wonder whether my answer is correct, and if not, where I went wrong. Thanks for your help. The question is:
A professor of archeology doing research on currency exchange practices became aware that four Roman coins crucial to his research are listed in the holdings of a museum. Unfortunately, he learned that the coins had recently been stolen. Fortunately for him, the museum keeps photographs of every item in its holdings. Unfortunately, the photos of the coins he is interested in are blurred to the point where the date and other markings are not readable. The original camera used to take the photographs is available, as well as other representative coins from the same era.
You are hired as an image processing consultant to help the professor determine whether computer processing can be utilized to process the images to the point where the professor can read the markings.
Propose a step-by-step solution to this problem.
Then my solution:
First, using the coins and the camera that I have, I would reproduce the settings that cause the blurred photos to occur, and select a starting method accordingly. After using deconvolution to remove the blur, I would use a high-pass filter to reveal the perimeter of the coin, or use thresholding to reveal its shape. In a 2D photograph it is easy to see where the coin is and which pixels I have to work with, so instead of applying every operation to the whole image, I would work only on the pixels that were identified. Then, to find which parts of the image might carry the necessary information, I would take the gradient in the x and y directions. I would then apply dilation to the relevant part so that it becomes clearer. Once the region is clearer, I would read the required area with OCR (optical character recognition) or template matching. Any comments would be very much appreciated; thanks a lot.
Take a picture of a single, sharp black dot with the old camera to estimate its PSF.
Now use a supervised deblurring algorithm using that PSF.
If the settings of the camera were lost, you can try to reproduce similarly blurred images of the available coins by varying the focus and optimizing the correlation to the available pictures.
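To make the supervised-deblurring step concrete, here is a minimal Wiener-deconvolution sketch in Python with NumPy/OpenCV. The file names and the noise-to-signal ratio k are assumptions; the photographed dot serves as the measured PSF:

```python
import cv2
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Wiener deconvolution in the frequency domain.
    k is an assumed noise-to-signal power ratio."""
    psf = psf / psf.sum()                    # normalize PSF energy
    H = np.fft.fft2(psf, s=blurred.shape)    # PSF transfer function
    G = np.fft.fft2(blurred)
    # Wiener filter: F = G * conj(H) / (|H|^2 + k)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    # note: the result may be circularly shifted by the PSF's offset
    return np.real(np.fft.ifft2(F))

# hypothetical file names
blurred = cv2.imread('coin_blurred.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
psf = cv2.imread('dot_psf.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

restored = wiener_deconvolve(blurred, psf)
restored = cv2.normalize(restored, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite('coin_restored.png', restored.astype(np.uint8))
```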

Detect and determine the width of a human hand in opencv

I want to detect a human hand and determine its width. Is there a way to do it in OpenCV, or any technique to do that?
I've tried searching Google but couldn't find any solution.
My segmentation result:
As your question is too broad to be answered without writing a 10-page essay, I'll give you at least a few ideas.
Option 1:
Detect the finger tips and fit a hand model (there should be plenty of papers, open-source code and other resources available online that do hand and gesture detection). You can then position your line using that hand model.
Option 2:
The dimension you are looking for should be the shortest cross-section of any hand. Place a scan line over your hand, rotate it around its centroid and measure the distance between the hand/background transitions on both ends, then pick the minimum (see the sketch after this answer). Make sure you have the right rotation center. Maybe cut the fingers off using morphological operations to move the centroid a bit further down so you don't get a wrong measurement.
Option 3: Estimate the width of the hand from its total size. Human proportions usually follow some rules. See if you can find some correlation between that measure and other hand features. If you don't need very exact measurements (your image resolution suggests this), this should be the quickest and easiest solution.
There are many other options. Take a ruler and your hand and start thinking. Or do more research on gesture recognition in general. I'm sure you can apply many of the things they use to get your width.
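To make Option 2 concrete, here is a minimal Python/OpenCV sketch that rotates a scan line through the centroid of a binary hand mask and keeps the narrowest crossing. The mask file name and the angular step are assumptions:

```python
import cv2
import numpy as np

# hypothetical input: a binary segmentation mask of the hand (white on black)
mask = cv2.imread('hand_mask.png', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# centroid from image moments
m = cv2.moments(mask, binaryImage=True)
cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']

h, w = mask.shape
best = np.inf
for angle in range(0, 180, 2):          # rotate the scan line in 2-degree steps
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(mask, rot, (w, h))
    row = rotated[int(round(cy))] > 0   # scan line through the centroid
    xs = np.flatnonzero(row)
    if xs.size:
        # distance between the outermost hand/background transitions
        best = min(best, xs[-1] - xs[0] + 1)

print('estimated hand width (pixels):', best)
```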

OpenCV compare similar hand drawn images

I am trying to compare two monochrome, basic hand-drawn images, captured electronically. The scale may be different but the essence of the images is the same. I want to compare one hand-drawn image to a saved library of images and get a relative score of how similar they are. Think of several basic geometric shapes, lines, and curves that make up a drawing.
I have tried several techniques without much luck. Pixel-based comparisons are too exact. I have tried scaling and cropping images, and that did not give accurate results.
I have tried OpenCV with C# and have had a little success. I have experimented with SURF and it works for a few images, but not others that the eye can tell are very similar.
So now my question: Are there any examples of using openCV or commercial software that can support comparing drawings that are not exact? I prefer C# but I am open to any solutions.
Thanks in advance for any guidance.
(I have been working on this for over a month and have searched the internet and Stack Overflow without success. I of course could have missed something)
You need to extract features from these images; after that, a basic Euclidean distance would be enough to calculate similarity. But hand-drawn things are not easy to extract features from. For example, companies that work on face recognition generally have much lower accuracy on drawn face portraits.
I have a suggestion for you. For a machine learning homework, one of my friends got a signature recognition assignment. I do not fully know how he achieved high accuracy, but I know the feature extraction part. First he converted the image to a binary image. Then he calculated each row's black-pixel count, and used those features to train a neural network.
So you can use this same approach to extract features, then use Euclidean distance to calculate similarities.
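A minimal Python/OpenCV sketch of that feature extraction plus Euclidean comparison (the question mentions C#, but the idea carries over directly; file names and the fixed size are assumptions):

```python
import cv2
import numpy as np

def row_profile(path, size=64):
    """Binarize a drawing and return its per-row black-pixel counts,
    resized to a fixed size so profiles are comparable across scales."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (size, size))
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return (binary > 0).sum(axis=1).astype(np.float64)

# hypothetical file names
a = row_profile('drawing_query.png')
b = row_profile('drawing_library_01.png')

distance = np.linalg.norm(a - b)   # Euclidean distance; smaller = more similar
print('similarity score (lower is better):', distance)
```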

2D images to 3D (reconstruction)

I need your help and advice. The question is the following: there are pictures from a camera that stands in a strictly fixed place in a room (the camera rotates about its axis). How do I combine all these pictures into one so that the effect is as though we see the room with our own eyes? There are pictures of all the views (left, right, top, bottom and others) of the room from that one point. I think that I need to use 3D calibration and reconstruction in Emgu CV (OpenCV). Your help and advice are needed, and also some examples of usage. Maybe someone has already faced such a problem. I'll be grateful for your help.
There are various methods for 2D to 3D reconstruction; the most commonly used are:
Stereo vision (this method requires two cameras placed at some offset)
Laser-projection based, such as Kinect, LiDAR, or line-laser systems
SFM (structure from motion)
Taking all shots from one point won't give you any 3D information, since there needs to be some parallax to determine the difference in depth (unless you are using laser projection).
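That said, if the goal is only to merge the views from that single rotating viewpoint into one wide image (with no depth recovered), panorama stitching does exactly this. A minimal Python sketch using OpenCV's high-level stitcher (available in OpenCV >= 3.4; the file pattern is hypothetical):

```python
import cv2
import glob

# hypothetical file pattern for the shots taken from the fixed viewpoint
paths = sorted(glob.glob('view_*.jpg'))
images = [cv2.imread(p) for p in paths]

# the high-level stitcher handles feature matching, homography
# estimation, and blending for a rotating-camera panorama
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('room_panorama.jpg', panorama)
else:
    print('stitching failed, status code:', status)
```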
It is better if you self-study the relevant topics first before asking questions on the forum, to show others that you really did your part.

How to align two different pictures in such a way, that they match as close as possible?

I need to automatically align an image B on top of another image A in such a way that the contents of the images match as well as possible.
The images can be shifted in x/y directions and rotated up to 5 degrees on z, but they won't be distorted (i.e. scaled or keystoned).
Maybe someone can recommend some good links or books on this topic, or share some thoughts how such an alignment of images could be done.
If there wasn't the rotation problem, then I could simply try to compare rows of pixels with a brute-force method until I find a match, and then I know the offset and can align the image.
Do I need AI for this?
I'm having a hard time finding resources on image processing which go into detail how these alignment-algorithms work.
What people often do in this case is first find points in the images that match, then compute the best transformation matrix with least squares. The point matching is not particularly simple, and oftentimes you just use human input for this task; you have to do it all the time when calibrating cameras. Anyway, if you want to fully automate this process you can use feature extraction techniques to find matching points; there are volumes of research papers written on this topic, and any standard computer vision text will have a chapter on it. Once you have N matching points, solving for the least-squares transformation matrix is pretty straightforward and, again, can be found in any computer vision text, so I'll assume you have that covered.
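As an illustration of that points-then-least-squares pipeline, here is a minimal Python/OpenCV sketch using ORB features and OpenCV's robust similarity-transform estimator (file names are hypothetical; estimateAffinePartial2D recovers rotation, translation, and uniform scale):

```python
import cv2
import numpy as np

# hypothetical file names for the two images to align
imgA = cv2.imread('image_a.png', cv2.IMREAD_GRAYSCALE)
imgB = cv2.imread('image_b.png', cv2.IMREAD_GRAYSCALE)

# 1. find matching points with ORB features
orb = cv2.ORB_create(1000)
kpA, desA = orb.detectAndCompute(imgA, None)
kpB, desB = orb.detectAndCompute(imgB, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desB, desA)
matches = sorted(matches, key=lambda m: m.distance)[:100]

ptsB = np.float32([kpB[m.queryIdx].pt for m in matches])
ptsA = np.float32([kpA[m.trainIdx].pt for m in matches])

# 2. least-squares (RANSAC-robustified) rotation + translation + scale
M, inliers = cv2.estimateAffinePartial2D(ptsB, ptsA)

# warp B onto A using the recovered transform
aligned = cv2.warpAffine(imgB, M, (imgA.shape[1], imgA.shape[0]))
cv2.imwrite('b_aligned_to_a.png', aligned)
```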
If you don't want to find point correspondences, you could directly optimize the rotation and translation using steepest descent. The trouble is that this is non-convex, so there are no guarantees you will find the correct transformation. You could do random restarts, simulated annealing, or any other global optimization tricks on top of this; that would most likely work. I can't find any references to this problem, but it's basically a digital image stabilization algorithm. I had to implement it when I took computer vision, but that was many years ago; here are the relevant slides though, look at "stabilization revisited". Yes, I know those slides are terrible, I didn't make them :) However, the method for determining the gradient is quite an elegant one, since finite differences are clearly intractable.
Edit: I finally found the paper that went over how to do this here, it's a really great paper and it explains the Lucas-Kanade algorithm very nicely. Also, this site has a whole lot of material and source code on image alignment that will probably be useful.
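In the same direct-optimization spirit, OpenCV ships an ECC-based aligner (findTransformECC) that iteratively refines a Euclidean (rotation + translation) warp. This is not the Lucas-Kanade code from the paper, just a readily available relative; a minimal sketch with hypothetical file names:

```python
import cv2
import numpy as np

# hypothetical file names; same images as above
imgA = cv2.imread('image_a.png', cv2.IMREAD_GRAYSCALE)
imgB = cv2.imread('image_b.png', cv2.IMREAD_GRAYSCALE)

# start from the identity and let ECC refine rotation + translation
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
cc, warp = cv2.findTransformECC(imgA, imgB, warp,
                                cv2.MOTION_EUCLIDEAN, criteria, None, 5)

# warp B into A's frame with the recovered Euclidean transform
aligned = cv2.warpAffine(imgB, warp, (imgA.shape[1], imgA.shape[0]),
                         flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
cv2.imwrite('b_aligned_ecc.png', aligned)
```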
For aligning the two images you have to carry out an image registration technique.
In MATLAB, write functions for image registration and select your desired reference features, called 'control points', using the Control Point Selection Tool to register the images.
Read more about image registration in the MATLAB help to understand it properly.
