Ratchet head tracking with OpenCV and Python

I need to track a ratchet head in real time using OpenCV + Python on a Raspberry Pi. Please refer to the images below.
I have made some progress, but I can't figure out how to handle the complex outer shape together with the shapes inside it (there are circles and screws within).
I will also need to detect the ratchet from the side, but to begin with I would like help with detecting the top view of the ratchet only.
So far I have been able to detect circles in the image, which was very straightforward. I also tried template matching, but that doesn't help in this case. I feel that if there is some way to detect the ratchet head by its outer shape together with the big circle inside it, the job would be done.
Please note that the sizes of the ratchet heads will change, but the shape remains roughly the same.
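A minimal sketch of that outer-shape-plus-inner-circle idea, assuming OpenCV 4 with Python bindings; the reference image name, the matchShapes threshold and the area/radius limits are made-up placeholders that would need tuning on the real Raspberry Pi frames:

    import cv2
    import numpy as np

    # Load a reference outline once: a photo of the ratchet head top view on a
    # plain background ("ratchet_ref.png" is a placeholder name, not from the question).
    ref = cv2.imread("ratchet_ref.png", cv2.IMREAD_GRAYSCALE)
    _, ref_bin = cv2.threshold(ref, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ref_contours, _ = cv2.findContours(ref_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ref_contour = max(ref_contours, key=cv2.contourArea)

    def find_ratchet_head(frame_gray):
        """Return the contour whose outer shape best matches the reference,
        provided it also contains a large circle (the drive hole)."""
        _, bin_img = cv2.threshold(frame_gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(bin_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        best, best_score = None, 0.15          # empirical matchShapes threshold, tune
        for c in contours:
            if cv2.contourArea(c) < 500:       # skip tiny blobs; tune for your resolution
                continue
            # matchShapes compares Hu moments, so it is scale/rotation invariant,
            # which suits "sizes change but the shape stays roughly the same".
            score = cv2.matchShapes(c, ref_contour, cv2.CONTOURS_MATCH_I1, 0.0)
            if score < best_score:
                x, y, w, h = cv2.boundingRect(c)
                roi = cv2.medianBlur(frame_gray[y:y + h, x:x + w], 5)
                circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2,
                                           minDist=w, param1=100, param2=30,
                                           minRadius=w // 6, maxRadius=w // 2)
                if circles is not None:        # require the big inner circle
                    best, best_score = c, score
        return best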

Related

Is there an easy way of removing unwanted features from a meshed 3D scanned part with well defined edges to prepare it for 3D printing?

I am working with a part that was taken from a *.acs point cloud to a meshed part through Autodesk ReCap. The object had hidden features that I was unable to erase (or see) while converting from *.acs (point cloud) to *.rcp (ReCap file) to *.obj (mesh), and I have not found an intuitive way to remove the "flash" from the sides without creating holes that I then cannot fill. I tried manually removing these edges in MeshMixer and Meshlab, but I feel like there must be an easier way than what I have tried so far. Image for reference
Thanks!

How to detect text in a photo

I am researching the best way to detect text in a photo using open source libraries.
I think the standard way is as follows (note: steps 1 - 4 all use OpenCV):
1) detect the outline of the document
2) transform the document so it's flat and cropped, using said outline
3) make the background of the document white, using a filter
4) feed the resulting image to Tesseract
Is this the optimum process, or is there a better way, or better tools?
Also, what happens in the case where the photo doesn't have a document outline (it's possible that steps 1 & 2 are redundant)?
Is there any way to automatically detect document orientation (i.e. portrait / landscape)?
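For what it's worth, here is a minimal sketch of steps 1-4, assuming OpenCV 4 plus pytesseract; the filename, output size, Canny thresholds and the order_corners helper are all illustrative, not taken from the question:

    import cv2
    import numpy as np
    import pytesseract

    def order_corners(pts):
        """Order 4 points as top-left, top-right, bottom-right, bottom-left."""
        pts = pts.reshape(4, 2).astype("float32")
        s, d = pts.sum(axis=1), np.diff(pts, axis=1).ravel()
        return np.array([pts[s.argmin()], pts[d.argmin()],
                         pts[s.argmax()], pts[d.argmax()]], dtype="float32")

    img = cv2.imread("photo.jpg")                      # placeholder filename
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # 1) outline: largest contour that approximates to 4 points
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    doc = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            doc = approx
            break

    if doc is not None:
        # 2) warp so the document is flat and cropped
        src = order_corners(doc)
        w, h = 1000, 1400                              # arbitrary output size
        dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype="float32")
        M = cv2.getPerspectiveTransform(src, dst)
        flat = cv2.warpPerspective(gray, M, (w, h))
    else:
        flat = gray                                    # no outline found: skip steps 1-2

    # 3) make the background white via thresholding
    _, clean = cv2.threshold(flat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 4) feed the result to Tesseract
    print(pytesseract.image_to_string(clean))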
I think your process is fine. I've used a similar process for an Android project.
I think that the only way you can discover if a document is portrait/landscape is to reason with the length of the sides of the bounding box of your outline.
I don't think there's an automatic way to do this; maybe you can find the most external contour that can be approximated by a 4-segment polyline (all doable in OpenCV). To get this you'll have to work with the contour hierarchy and contour approximation (see cv2.approxPolyDP).
This is how I would go for automatic outline detection. As I said, the rest of your algorithm seems just fine to me.
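A minimal sketch of that idea, again assuming OpenCV 4: keep only top-level contours via the hierarchy, take the largest one that approximates to 4 points, and guess portrait/landscape from its bounding-box sides (the Canny and approximation thresholds are placeholders):

    import cv2

    def outline_and_orientation(gray):
        """Find the outermost 4-point contour and guess portrait/landscape
        from the lengths of its bounding-box sides."""
        edges = cv2.Canny(gray, 50, 150)
        contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE,
                                               cv2.CHAIN_APPROX_SIMPLE)
        if hierarchy is None:
            return None, None
        best = None
        for i, c in enumerate(contours):
            if hierarchy[0][i][3] != -1:       # keep only top-level (outermost) contours
                continue
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 4 and (best is None or
                                     cv2.contourArea(approx) > cv2.contourArea(best)):
                best = approx
        if best is None:
            return None, None
        _, _, w, h = cv2.boundingRect(best)
        return best, ("portrait" if h >= w else "landscape")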
PS. I'll leave my Android project's GitHub link. I don't know if it will be useful to you, but in it I specify the outline by dragging some handles, then transform the image and feed it to Tesseract, using Java and OpenCV. Yes, it's a very bad idea to do that in the main thread of an Android app, and no, the app is not finished. I just wanted to experiment with OCR, so I didn't care much about performance or usability; it was meant for studying rather than real use.
Look up the stroke width transform.
What it does is detect strokes whose two opposite edges are roughly a constant distance apart. That picks up things like drainpipes (which can be eliminated in a later pass) but also the majority of text. Whilst conceptually it's similar to a distance transform, the published method uses rather ad hoc normal-projection steps on top of Canny edge detection.

image processing close edge matlab

I have the following image
What I need to do is connect the edges in MATLAB that obviously belong to the same object, in order to use regionprops later. By 'obviously' I mean the edges of the inside object and those of the outside one. What I thought is that I somehow have to keep the pixels of each edge in a struct, then for each edge find the one that is closest to it and apply some fitting (polynomial, B-spline, etc.). The problem is that I have to do this for thousands of such images, so I need a robust algorithm and cannot do it by hand for all of them. Is there a way for somebody to help me? The image from which the previous image is obtained is this one. Ideally I have to catch the two interfaces shown there.
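The question is about MATLAB, but if exact edge tracing isn't required, one common shortcut is to bridge the small gaps morphologically before labelling; below is a rough sketch of that alternative in Python/OpenCV (the MATLAB equivalents would be imclose, bwlabel and regionprops). The filename, kernel size and area threshold are placeholders:

    import cv2

    # Assumed input: a binary edge image whose gaps need bridging.
    edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename
    _, edges = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)

    # Morphological closing bridges small gaps between edge fragments that belong
    # to the same interface (the kernel size sets the largest gap that gets closed).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    # Label connected components; each closed interface becomes one component,
    # which is what regionprops-style measurements need afterwards.
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
    # Keep only large components (the two interfaces), dropping noise specks.
    keep = [i for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] > 200]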
Thank you very much in advance

Find corner of field

I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be fine.
I have following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them by colour tags. If needed, I can replace all tags with tags of the same colour. (Note that the light in the room changes, so the colours might be a bit different next time.)
Edge detection might be OK too, but I am afraid I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a bit more info.
I am already able to detect and identify robots on the field and get their position and rotation. But for that I have to set the corners of the field manually first. So I was looking for an automatic way, but I was worried I would not be able to distinguish the colour tags from the background because the light in the room changes quite often.
And as for the camera angle: the point is that the camera can be at a different (reasonable) angle every time.
I would start by searching for the colours. The LEDs won't be much help to you as they're not much brighter than anything else in the scene; I would look for the rectangular pieces of coloured tape instead. Try segmenting the image based on colour. That may allow you to retrieve the corner tape pieces directly without needing to know their exact colour in advance. After that, you can look for pairs of blobs of the same colour that are close to each other; those pairs define the corners. Knowing what kinds of camera angles you have to handle is also very important -- if you need this to work when viewing from the side, then this approach certainly won't work, but if it's almost top down it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
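A minimal sketch of that colour-segmentation idea in Python/OpenCV (the question uses C#/Emgu CV, but the calls map across directly); the HSV range and the minimum blob area are made-up placeholders to tune under your lighting:

    import cv2
    import numpy as np

    frame = cv2.imread("field.jpg")                    # placeholder filename
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Example range for a saturated tape colour; the exact bounds must be tuned.
    # Working in HSV makes the segmentation less sensitive to lighting changes.
    lower = np.array([100, 120, 80])                   # assumed: blue-ish tape
    upper = np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))

    # Each sufficiently large blob is a candidate corner tag; its centroid is a corner.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    corners = []
    for c in contours:
        if cv2.contourArea(c) > 100:                   # minimum tag size in pixels, tune
            m = cv2.moments(c)
            corners.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))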

Parsing / Scraping information from an image

I am looking for a library that would help scrape the information from the image below.
I need the current value so it would have to recognise the values on the left and then estimate the value of the bottom line.
Any ideas if there is a library out there that could do something like this? Language isn't really important but I guess Python would be preferable.
Thanks
I don't know of any "out of the box" solution for this and I doubt one exists. If all you have is the image, then you'll need to do some image processing. A simple binarization method (like Otsu binarization) would make it easier to process:
The binarization makes it easier because now the pixels are either "on" or "off."
The locations for the lines can be found by searching for some number of pixels that are all on horizontally (5 on in a row while iterating on the x axis?).
Then a possible solution would be to pass the image to an OCR engine to get the numbers (tesseract-ocr is an open-source OCR engine hosted at Google, written in C++). You'd still have to find out where the numbers are in the image by iterating through it.
Then, you'd have to find where the lines are relative to the keys on the left and do a little math and you can get your answer.
OpenCV is a beefy computer vision library that has things like the binarization built in. It is also a C++ library (with Python bindings).
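As a rough sketch of the binarization and line-finding steps in Python/OpenCV (the filename and the run-length threshold are placeholders; OCR of the axis labels would follow as described above):

    import cv2

    gray = cv2.imread("chart.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename

    # Otsu binarization: every pixel becomes either "on" or "off".
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Candidate horizontal lines: rows containing a long run of "on" pixels.
    # 5 matches the suggestion above, but a longer run avoids matching text.
    min_run = 5
    line_rows = []
    for y in range(binary.shape[0]):
        on = binary[y] > 0
        run = best = 0
        for px in on:
            run = run + 1 if px else 0
            best = max(best, run)
        if best >= min_run:
            line_rows.append(y)
    # Adjacent rows in line_rows belong to the same line and should be merged.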
Hope that helps.
