Width measurement of carbon fiber with Detectron2?

I am new to image processing in deep learning, and my task is to measure the width of nanofibers in Scanning Electron Microscope (SEM) images.
I am wondering if I can use Detectron2 to find the area and length of the detected fibers via object detection and keypoint detection, respectively, and then divide the former by the latter to get the desired width.
Here is one example image.
I would greatly appreciate it if someone could kindly give me some advice.
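For reference, here is a minimal sketch of the area-divided-by-length idea, assuming you already have a binary mask for one fiber (e.g. one instance from a Detectron2 Mask R-CNN prediction) and using a morphological skeleton as a stand-in for a keypoint-based length estimate:

```python
# Minimal sketch: mean fiber width = mask area / centerline length.
# `mask` is assumed to be a binary (boolean) mask of a single fiber,
# e.g. one instance from Detectron2's pred_masks output.
import numpy as np
from skimage.morphology import skeletonize

def mean_fiber_width(mask: np.ndarray) -> float:
    area = mask.sum()                  # fiber area in pixels
    length = skeletonize(mask).sum()   # approximate centerline length in pixels
    return area / length if length else 0.0
```

To convert the result to nanometres, you would multiply by the pixel size recorded in the SEM image's scale bar.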

Related

Face size using image processing

I'm trying to find some measures which would allow me to quantify (approximately) the size of a face (small, medium, large). What measures can I consider? (For example, the distance between the nose and the mouth.)
Can you provide some references, if any exist?
Thank you
To classify faces as small, medium, or large, you can measure facial landmarks; OpenCV will help you solve this problem.
Please see:
https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python/
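As a rough illustration, here is a minimal sketch of that landmark-based measurement, assuming dlib with its 68-point predictor (the model file path and image name are placeholders):

```python
# Minimal sketch: measure face proportions from dlib's 68 facial landmarks.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# The 68-point model must be downloaded separately; the path is a placeholder.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):
    shape = predictor(gray, rect)
    nose = shape.part(33)    # nose tip
    chin = shape.part(8)     # bottom of the chin
    size = ((nose.x - chin.x) ** 2 + (nose.y - chin.y) ** 2) ** 0.5
    print("nose-to-chin distance (px):", size)
```

Thresholding such distances (ideally normalized by the detection box size) would give the small/medium/large classification.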
Another option might be to detect faces by fitting circles to them, and then measure each circle's circumference.

Image processing with neural networks

I have a video of a welding process, and for each frame I need to detect the edges of the welded area in order to measure this area.
The problem is that the quality of the video is poor, so I can't detect the edges of the frames with standard methods; image enhancement doesn't work either.
I posted this question on Stack Overflow before, and one person managed to enhance my area of interest using neural networks.
I need to select the area as follows (outlined in Paint).
The person who managed to do it gave me the following explanation:
"I trained a neural network to learn the pattern of the area you want to isolate. I took blocks of data centered on a pixel that is either in or out of the region you marked as your region of interest. So, the network learns that if a pixel has neighbors with that pattern, it will mark that pixel as 1, else as 0. The output of the neural network after learning the pattern is this. Then you can simply use Sobel to get the edges of the result, which is the area you want."
So this is the exact result, as described, that I want.
But I don't know how to work with neural networks at all, or how to do any of this.
Does someone know how?
If someone knows how to measure the area of my outlined region for each frame (with code for the whole video), I would appreciate it very much.
Thank you very much, guys.
Here is the link to the original question: ImageProcessing MatLab
After detecting edges using Sobel or Canny, you can use the findContours method from OpenCV to detect the boundary of the ROI. Then you can use the contourArea method to calculate the area bounded by that contour.
You can extract each frame from the video into independent images.
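Combining the two suggestions above, a minimal per-frame sketch might look like this (the file name and Canny thresholds are placeholders, and it assumes the ROI ends up as the largest closed contour):

```python
# Minimal sketch: edge detection + contour area for every frame of a video.
import cv2

cap = cv2.VideoCapture("welding.mp4")   # placeholder file name
areas = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)    # thresholds must be tuned
    # Close small gaps so the ROI boundary forms a single contour
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    # OpenCV 4-style return value; OpenCV 3 returns three values here
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        roi = max(contours, key=cv2.contourArea)   # assume ROI is the largest blob
        areas.append(cv2.contourArea(roi))
cap.release()
print(areas)   # one area value per frame
```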

Which algorithm to choose for object detection?

I am interested in detecting a single object, more precisely a fire extinguisher, which has little intra-class variability (all fire extinguishers look the same). However, the application is supposed to run in real time, i.e., a robot is exploring the environment, and whenever it sees the object of interest it should be able to detect it and report its pixel coordinates.
My question is: which algorithm would be a good choice for this task?
1. Is this a classification problem, and should we use features (SIFT/SURF, etc.) + BoW + SVM?
2. Some other solution (no idea yet).
Any kind of input will be appreciated.
Thanks.
(P.S. Bear with me, I am a newbie to computer vision and Stack Overflow.)
Update 1:
Height varies: all extinguishers are mounted on the wall, but at different heights. I tried SIFT features with BoW, but extracting BoW descriptors at test time is expensive. Moreover, I have no idea how to locate the object (pixel coordinates) inside the image after it has been classified as positive.
Update 2:
I finally used SIFT + BoW + SVM and am able to classify the object. But with this technique, I only get output in terms of whether the object is present in the scene or not.
How can I localize the object, i.e., get its bounding box or centre? What approach is compatible with the above method for achieving these results?
Thank you all.
I would suggest using color as the main feature to look for, and only try other features as needed. The fire extinguisher red is very distinctive, and should not occur too often elsewhere in an office environment. Other, more computationally expensive tests can then be performed only in regions of the right color.
Here is a good tutorial for color detection that also explains how to find good thresholds for your desired color.
I would suggest the following approach:
denoise your image with a median filter
convert the image to HSV format (Hue, Saturation, Value)
select pixels close to that particular shade of red with inRange()
Now you have a binary image that contains only the red pixels.
count the number of red pixels with countNonZero()
If that number is too small, abort
remove noise from the binary image by morphological opening / closing
find contours of all blobs in your picture with findContours or the CvBlob library
check if there are blobs of the correct width, correct height and correct width/height ratio
since your fire extinguishers are vertical cylinders, the width/height ratio will be constant from every angle. The width and height will of course vary somewhat with distance to the camera.
if the width and height do not match, abort
repeat these steps to find the black-colored part on the bottom of the extinguisher,
abort if there is no black region with correct width/height below the red region
(perhaps also repeat these steps for the metallic top and the yellow rectangle)
These tests should all be very fast. If they are too slow, you could reduce the resolution of your input images.
Depending on your environment, this may already be a robust enough test. If not, you can proceed with SIFT/SURF feature matching, but only in a small region around the blobs with the correct color. You also do not necessarily have to do that for every frame; every n-th frame should be enough for confirmation.
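A minimal sketch of this color pipeline, assuming OpenCV in Python (the HSV thresholds, pixel-count cutoff, and aspect-ratio bounds are placeholders to be tuned):

```python
# Minimal sketch of the color-based candidate test described above.
import cv2

img = cv2.imread("frame.jpg")                      # placeholder file name
img = cv2.medianBlur(img, 5)                       # denoise
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0 in OpenCV's 0-179 hue range, so combine two ranges
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (179, 255, 255))

if cv2.countNonZero(mask) > 500:                   # abort threshold is a guess
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 0.2 < w / h < 0.5:            # vertical cylinder: taller than wide
            print("candidate extinguisher at", (x, y, w, h))
```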
This is an old question, but I would still like to recommend the YOLO algorithm for this problem.
YOLO fits this scenario very well.
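For illustration, a minimal detection sketch with a modern YOLO implementation (this assumes the ultralytics package; "extinguisher.pt" is a placeholder for weights fine-tuned on your own fire-extinguisher images):

```python
# Minimal sketch: run a YOLO detector and report box centres in pixels.
from ultralytics import YOLO

model = YOLO("extinguisher.pt")            # placeholder: your fine-tuned weights
results = model("frame.jpg")               # placeholder image

for box in results[0].boxes.xyxy:          # one (x1, y1, x2, y2) box per detection
    x1, y1, x2, y2 = box.tolist()
    print("centre:", ((x1 + x2) / 2, (y1 + y2) / 2))
```

This also addresses update 2: the detector returns a bounding box, from which the centre follows directly.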

Measuring the shape of the moon with MATLAB

I am doing a measurement project in which I have to measure the width of the moon's crescent every night and plot it over time. I searched the web but didn't find anything useful. I want to know what resources (MATLAB, image processing, ...) I should study.
I know how to do general work in MATLAB, but I don't have any knowledge of image processing in it. Please help me!

Overlapping face detection in OpenCV

First let me give some information about what I'm trying to do.
I'm working on a face verification problem using profile faces, and my first step is face detection. I'm using the OpenCV face detector with 'haarcascade_profileface.xml'. The problem is that the detector does not find faces consistently. By not consistent I mean it finds a face in some region, but sometimes it finds the face bigger, sometimes smaller, and sometimes both. I want it to find the same region as a face every time.
I'm adding some images to tell my problem better. You can find them here.
What should I do to overcome this multiple face detection in the same area (overlapping face detection)?
The first thing that came to mind was increasing the minNeighbors parameter, but that causes the detection rate to drop, so I don't want to do it. Then I thought of applying some image stabilization algorithm to the facial images, but I think it would be too expensive. If anyone could give me some advice on overcoming this problem, I would be glad.
I should mention that I'm using OpenCV 2.4.5, I set the minNeighbors parameter to 4 and scaleFactor to 1.75, and I did not set any size limitation.
Thanks in advance,
Regards,
Güney
If you're detecting faces in a video, you can apply a filter to the bounding box so that it changes smoothly. This will reduce those "inconsistencies" in the face bounding box:
CurrentFrameBoundingBox = a*PrevFrameBoundingBox + (1-a)*DetectedBoundingBox
The larger a is, the more weight is given to the previous frame's bounding box, which reduces the inconsistencies.
You do this for every coordinate of the bounding box.
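A minimal sketch of that smoothing, with made-up (x, y, w, h) detections and a as a tuning parameter:

```python
# Minimal sketch of the exponential smoothing formula above.
def smooth_box(prev_box, detected_box, a=0.8):
    # Blend every coordinate of the previous box with the new detection
    return tuple(a * p + (1 - a) * d for p, d in zip(prev_box, detected_box))

detections = [(100, 80, 60, 60), (104, 78, 70, 70), (98, 82, 55, 55)]  # made-up boxes
box = detections[0]
for det in detections[1:]:
    box = smooth_box(box, det)
    print(box)
```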
Maybe you can run a customized mean-shift clustering that suits your needs on the raw detection bounding boxes. If I recall correctly, OpenCV already filters or clusters these raw results, because the classifier fires multiple times for the same object. If you are not satisfied with the routine in OpenCV, you can try other density-based clustering methods. Or you can simply take the median of the raw results.
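If you prefer not to write your own clustering, OpenCV's built-in grouping can serve as a starting point; a minimal sketch (the rectangles and the eps value are made up):

```python
# Minimal sketch: merge overlapping raw detections with cv2.groupRectangles.
import cv2

raw = [(100, 80, 60, 60), (102, 82, 58, 58), (300, 40, 50, 50)]  # made-up (x, y, w, h)
# groupRectangles rejects clusters with groupThreshold or fewer rectangles,
# so duplicate the list to keep singleton detections
grouped, weights = cv2.groupRectangles(raw * 2, groupThreshold=1, eps=0.2)
print(grouped)
```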
