I am doing a measurement project in which I have to measure the width of the moon's crescent every night and plot the values over time. I searched the web but didn't find anything useful. I want to know what resources (MATLAB, image processing, ...) I should study.
I know how to do general work in MATLAB, but I don't know anything about image processing in it. Please help me!
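In case it helps to get started, here is a minimal sketch of one possible first step in Python with OpenCV (the same operations exist in MATLAB's Image Processing Toolbox, e.g. imread, imbinarize and simple matrix sums). It assumes a photo in which the crescent's cusps point roughly up and down, separates the lit part from the dark sky with a threshold, and takes the longest horizontal run of lit pixels as a crude width in pixels; the file name and the orientation assumption are mine, and you would still need to convert pixels to an angular width.

```python
import cv2

# Placeholder file name for one night's photo.
img = cv2.imread("moon.jpg", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks a threshold that separates the bright crescent from the dark sky.
_, lit = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# If the terminator is roughly vertical, the lit-pixel count in each row is the
# crescent thickness at that row; the maximum over rows is a crude crescent width.
widths = (lit > 0).sum(axis=1)
print("Crescent width (pixels):", widths.max())
```

Repeating this for each night's photo and plotting the result against the date gives the time series you describe.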
Related
I am new to image processing in deep learning, and my task is to measure the width of nanofibers in Scanning Electron Microscope (SEM) images.
I am wondering if I can use Detectron2 to find the area and length of the detected fibers by object detection and key-point detection, respectively, and then divide them to get the desired width.
Here is one example image
I would greatly appreciate it if someone could kindly give me some advice.
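Whether Detectron2 is the right tool for this I can't say, but the area-divided-by-length idea itself is easy to sketch once you have a binary mask for a fiber. Below is a small Python sketch (not Detectron2-specific) that assumes you already have such a mask, skeletonizes it to get the centerline, and divides the pixel area by the centerline length to estimate the mean width; the function name and the use of scikit-image are my assumptions.

```python
import numpy as np
from skimage.morphology import skeletonize

def fiber_width(mask: np.ndarray) -> float:
    """Estimate mean fiber width as (mask area) / (centerline length).

    `mask` is a boolean array that is True on fiber pixels, e.g. the output
    of a segmentation model or a simple threshold of the SEM image.
    """
    area = mask.sum()                 # number of fiber pixels
    centerline = skeletonize(mask)    # 1-pixel-wide skeleton of the fiber
    length = centerline.sum()         # crude length in pixels (ignores diagonal steps)
    return area / max(length, 1)      # mean width in pixels
```

To get a width in nanometres you would multiply the result by the pixel size given by the SEM image's scale bar.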
My school project is to measure the distances and angles between the center and several mark points (basically a panel with a big red spot). My idea is to use one camera and detect the red spot on each panel, since the red color can be distinguished from the surrounding colors. I have googled some material and I know it can be implemented with Python, the OpenCV library, and an OS-based computer. But I think my image processing is quite simple (just reading the color and pixels of the spot), so I may not need such an OS-based computer. Is this possible? I still have zero knowledge about image processing. Thanks! I appreciate it a lot.
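Since you mention Python and OpenCV, here is a minimal sketch of that processing step: threshold the frame in HSV to isolate the red spot, take the largest red blob's centroid, and compute its distance and angle from the image centre. The file name and the HSV ranges are placeholder assumptions that need tuning for your camera and lighting.

```python
import math
import cv2

# Placeholder file name; in practice you would grab frames from the camera.
frame = cv2.imread("panel.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two ranges (rough guesses to tune).
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

# Take the largest red blob and use its centroid as the mark point.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
spot = max(contours, key=cv2.contourArea)
m = cv2.moments(spot)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

# Distance (in pixels) and angle from the image centre to the spot.
h, w = frame.shape[:2]
dx, dy = cx - w / 2, cy - h / 2
print("distance:", math.hypot(dx, dy), "angle:", math.degrees(math.atan2(dy, dx)))
```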
How do I use feature detection to measure dimensions and locate a circle/line/rectangle in an image in LabVIEW?
For example, let's say I load an image into LabVIEW; I want LabVIEW to detect whether it contains any such shape.
I used LabVIEW in high school on a robotics team, and we developed a lot of real-time image tracking code that did pretty much that.
What we did was create a little system that took an image, checked it for pixels with a specific hue, saturation, and luminance, then grouped them together by creating a convex hull around those pixels. We then scored the convex hull on its linear averages, and that was compared against the expected results.
Once you have the convex hull, you can perform some particle analysis, and a few calculations later you have your dimensions.
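LabVIEW code is graphical, but in case it helps to see the same steps written out, here is a rough Python/OpenCV sketch of that threshold-then-convex-hull idea (OpenCV thresholds in HSV rather than HSL, and the file name and threshold values are placeholders).

```python
import cv2

img = cv2.imread("target.jpg")                      # placeholder image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only pixels inside a hue/saturation/value window (values are made up
# and would be tuned to the colour you are tracking).
mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))

# Group the matching pixels, wrap the largest group in a convex hull,
# then read simple "particle" measurements off it.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hull = cv2.convexHull(max(contours, key=cv2.contourArea))
x, y, w, h = cv2.boundingRect(hull)
print("hull area:", cv2.contourArea(hull), "bounding box:", (w, h))
```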
You could use the function "IMAQ Find Circles" to locate circles. For lines and rectangles I'd probably write something on my own. Segment the image using IMAQ Threshold and let "IMAQ Particle Analysis" give you characteristics of the resulting blobs.
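If you want to prototype the circle-finding step outside LabVIEW first, a Hough circle detector does roughly what "IMAQ Find Circles" does; here is a small Python/OpenCV sketch (the file name and the Hough parameters are assumptions you would tune).

```python
import cv2
import numpy as np

gray = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
gray = cv2.medianBlur(gray, 5)                          # the Hough transform is noise-sensitive

# Detect circles and report centre + radius; the parameters are typical starting values.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=100)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}) with radius {r} px")
```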
A sample image of what you're trying to achieve would help to understand the problem you're facing. Please upload one.
Also refer to the image processing manuals for LabVIEW. These two are pretty good and give a lot of examples on how to process images:
NI Vision for LabVIEW User Manual:
http://www.ni.com/pdf/manuals/371007b.pdf
NI Vision Concepts Manual:
http://www.ni.com/pdf/manuals/372916e.pdf
LabVIEW has some methods for finding shapes; you can find all of them in the Vision Assistant, under the Machine Vision section, but you will need some post-processing after applying such an algorithm.
You can use geometric matching and other VIs to find any shape you want.
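As a rough, text-based analogue of that kind of shape matching, OpenCV's Hu-moment shape comparison can tell you how similar each blob in a scene is to a reference contour. This is only a sketch with placeholder file names and an arbitrary cut-off, not a substitute for LabVIEW's geometric matching.

```python
import cv2

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)   # clean image of the reference shape
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

_, t_bin = cv2.threshold(template, 128, 255, cv2.THRESH_BINARY)
_, s_bin = cv2.threshold(scene, 128, 255, cv2.THRESH_BINARY)

t_contours, _ = cv2.findContours(t_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
s_contours, _ = cv2.findContours(s_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

ref = max(t_contours, key=cv2.contourArea)
for c in s_contours:
    # Lower score = more similar shape (comparison is based on Hu moments).
    score = cv2.matchShapes(ref, c, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:                       # arbitrary cut-off to tune
        print("match at", cv2.boundingRect(c), "score", score)
```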
I have a graduate project about measurement software with OpenCV. First, I take a grayscale photo with a webcam and detect the edges of a bottle, in particular the edges at the top and at the bottom. From these I calculate the bottle height in pixels. Using the top and bottom as references, I also take the a and b pixel values and get the diameters. The edge detection method does not matter.
This picture may make the problem easier to understand: http://speedy.sh/vRgTy/w.JPG
I have to obtain all of these values for different bottles and cans with an OpenCV solution. How can I solve this problem? It would be very good if anybody could give me a hint on how to carry out this project.
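Here is a minimal Python/OpenCV sketch of the pixel-measurement part, just to show the shape of a solution: detect edges, take the largest outline as the bottle, and read the height in pixels off its bounding box. The file name and Canny thresholds are assumptions, and the two diameters would be measured the same way at your a and b reference rows.

```python
import cv2

# Placeholder file name for one webcam frame.
gray = cv2.imread("bottle.jpg", cv2.IMREAD_GRAYSCALE)

# Any edge detector will do, as you say; Canny is a common choice.
edges = cv2.Canny(gray, 50, 150)
edges = cv2.dilate(edges, None)   # thicken edges so the outline forms one connected blob

# Treat the largest connected outline as the bottle and take its bounding box;
# the box height is the bottle height in pixels.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
bottle = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(bottle)
print("height (px):", h, "bounding-box width (px):", w)
```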
Since this is a graduate project, you should probably do some reading in the literature. Specifically about object detection/recognition. There is a lot of literature out there on the topic, and Google Scholar can be your friend.
You should also take a look at this post; there is a lot of information there related to what you want to do.
Lastly, there is a wonderful free book on Machine Vision you can read here, and they go into detail on object recognition in chapter 15.
Good Luck!
I'm trying to build an application which, among other things, is able to recognize chess positions on a computer screen from screenshots. I have very limited experience with image processing techniques and don't wish to invest a great amount of time in studying this, as it is just a pet project of mine.
Can anyone recommend me one or more image processing techniques that would yield me a good result?
The conditions are:
The image is always crisp and clean, with no noise, poor lighting conditions, etc. (since it's a screenshot)
I'm expecting a very low impact on computer performance while processing 1 image per second
I've thought of two modes to start the process:
Feed the piece shapes to the program (so that it knows what a queen, king, etc. look like)
Just feed the program an initial image containing the starting position, from which the program can (after it recognizes the position of the board) pick out each chess piece
The process should be relatively easy to understand, as I don't have a very good grasp of image processing techniques (yet)
I'm not interested in using any specific technology, so technology-agnostic documentation would be ideal (C/C++, C#, Java examples would also be fine).
Thanks for taking the time to read this, and I hope to get some good answers.
It's an interesting problem, but you need to specify a lot more than in your original question in order to get an acceptable answer.
On the input images: "screenshots" is quite a vague category. Can you assume that the chessboard will always be entirely in view? Will you have multiple views of the same board? Can you assume that no pieces will be partially or completely occluded in all views?
On the imaged objects and the capture system: will the same chessboard and pieces be used, under very similar illumination? Will the same lens/camera/digitization pipeline be used?
Hi Andrei,
I have implemented a coin-counting algorithm that works from a picture, so the process should be helpful.
The algorithm is called the generalized Hough transform.
Make the picture black and white; it is easier that way.
Take the image of one piece and "slide it over the screenshot".
For each position, calculate the number of common pixels in the two images.
Where you have the largest number, there you have the piece (see the sketch below).
Hope this helps.
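The "slide the piece image over the screenshot and count the common pixels" step is essentially template matching, which OpenCV already implements; here is a minimal Python sketch of it (the file names are placeholders, and in practice you would run it once per piece image, or per board square).

```python
import cv2

board = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
piece = cv2.imread("white_queen.png", cv2.IMREAD_GRAYSCALE)

# Slide the piece image over the screenshot and score the overlap at every
# position; the best-scoring position is where the piece sits.
scores = cv2.matchTemplate(board, piece, cv2.TM_CCOEFF_NORMED)
_, best_score, _, (x, y) = cv2.minMaxLoc(scores)
print("best match score", best_score, "at top-left corner", (x, y))
```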
Yeah, go with the answer above.
Convert the picture to grayscale.
Slice it into 64 squares and store them in an array.
Using MATLAB you can identify the pieces easily.
The color can be obtained by calculating the percentage of dark (black) pixels:
ratio = no. of black pixels / (no. of black pixels + no. of white pixels)
If the value is above a chosen threshold then the piece is WHITE, else BLACK.
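The slicing and the black-pixel ratio are straightforward in MATLAB (e.g. mat2cell and imbinarize); here is the same idea sketched in Python for concreteness. The file name, the 128 darkness cut-off, and the 0.2 ratio threshold are placeholder assumptions, and which side of the threshold means a white or a black piece depends on your colour scheme.

```python
import cv2

def piece_colour(square, cutoff=0.2):
    """Classify one board square by its fraction of dark pixels.

    The 0.2 cut-off is a guess to tune; which side means WHITE or BLACK
    depends on the board's colour scheme, as noted above.
    """
    dark_ratio = (square < 128).mean()          # fraction of dark pixels
    return "WHITE" if dark_ratio > cutoff else "BLACK"

# Placeholder file name; the image is assumed to be cropped to just the board.
board = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
h, w = board.shape
squares = [board[r * h // 8:(r + 1) * h // 8, c * w // 8:(c + 1) * w // 8]
           for r in range(8) for c in range(8)]
print([piece_colour(s) for s in squares])
```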
I'm working on a similar project in C#; finding which piece is which isn't the hard part for me. The first step is to find a rectangle that shows just the board and cuts everything else out. I first hard-coded it to search for the colors of the squares, but I would like to make it more robust and reliable regardless of the color scheme. I'm trying to make it find squares of pixels that match within a certain threshold and extrapolate the board location from that.