I'm new to image processing and I'm trying to extract the area within a test tube.
The tube will be placed in front of a camera and then the image is captured. The image obtained will be as shown below. However, there will be slight movements of the tube, so it's not exactly in the same position each time. The idea is to recognize the tube's position and extract an image corresponding to the inside of the test tube (which holds some reaction). Can somebody please guide me on how to achieve this using OpenCV or any other image processing library? Thanks a lot!
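Not a complete answer, but one plausible starting point in OpenCV (Python here for brevity) is to threshold the image and crop the bounding box of the largest blob. This is a minimal sketch: the file names are placeholders, and it assumes the tube stands out with reasonably high contrast against a fairly uniform background.

```python
import cv2

# Minimal sketch, assuming the tube is the largest high-contrast blob.
# "tube.png" is a placeholder file name.
img = cv2.imread("tube.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's method picks a threshold that separates tube from background.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Take the largest contour as the tube and crop its bounding box.
# (findContours returns two values in OpenCV 4.x.)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
tube = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(tube)
inside = img[y:y + h, x:x + w]
cv2.imwrite("inside_tube.png", inside)
```

Since the tube only moves slightly between captures, re-detecting the bounding box per frame like this should be enough; no tracking is needed.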
I've been trying to implement some code where, given an image with some depth data, it returns specifically the part of the image that is in focus/closest to the camera. If it's a person, that would be their face; if it's a plant, the branch. Effectively, I'm trying to get the part of the image that would be focused on if the photo were taken using Portrait mode in the camera app.
I've been reading this documentation, but I've not been able to think of a way to manipulate the data here. My guess would be to use embedsDepthDataInPhoto and then use the depth data in some way to discard all parts of the image that are a certain distance or more from the camera. I'm quite new to this, so any help would be greatly appreciated.
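The masking step itself, independent of how the depth data is obtained on iOS, can be sketched generically: keep pixels closer than a cutoff, black out the rest. In this Python sketch the depth array is a stand-in for whatever AVDepthData would give you, and both file names and the 0.5 m margin are arbitrary placeholders.

```python
import cv2
import numpy as np

# Generic depth-masking sketch. "image.png" and "depth.npy" are
# placeholders; on iOS the depth map would come from AVDepthData.
img = cv2.imread("image.png")
depth = np.load("depth.npy")          # per-pixel distance, same size as img

# Keep only pixels within 0.5 m of the closest point (arbitrary margin),
# and black out everything farther away.
near = depth < (depth.min() + 0.5)
masked = img.copy()
masked[~near] = 0
cv2.imwrite("foreground.png", masked)
```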
I am trying to use Tesseract on frames captured by OpenCV from the Windows screen. I am not using a camera feed here; instead, I am trying to capture a certain message that appears in a message box of a certain color. I am able to crop out the part of the screen that shows the message box, and I want to use Tesseract to read the message. I can read the message with Tesseract from a screenshot of the same cropped message-box image, but when I try to do the same on the real-time screen capture, it gives really bad output.
The screenshot is saved using OpenCV's imwrite() on the same Mat image that is passed to Tesseract.
Can anyone explain why this is happening?
How can I make it work?
Regards
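Since the code isn't shown, this is only a guess, but a common culprit is that the live-captured Mat has a different channel layout (e.g. BGRA from a screen grab) or lower effective resolution than the saved PNG. A typical preprocessing pass before OCR, sketched here in Python with pytesseract and a placeholder file standing in for the live frame, looks like this:

```python
import cv2
import pytesseract

# "capture.png" is a placeholder for the Mat captured from the screen.
frame = cv2.imread("capture.png")

# Normalize to grayscale, upscale, then binarize: Tesseract usually
# performs far better on clean, large, black-on-white text.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(binary)
print(text)
```

If the saved screenshot OCRs fine but the live frame doesn't, comparing the two Mats' type and channel count is a good first diagnostic.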
I am able to show a square preview of the recording using the AVCaptureVideoPreviewLayer class, but it saves the video to the library as a rectangle. Instead of a rectangle, I want a square video. I have used a composition class to crop the video, and it works, but it takes too much time. I checked the Vine app, which has square video output.
Please give me a suggestion.
It's a late answer, but it may be useful for others. See my answer:
https://github.com/piemonte/PBJVision
It records video in a square format.
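The question is iOS-specific, so this is not an AVFoundation solution, but the cropping idea itself is easy to illustrate generically with OpenCV in Python: center-crop every frame to a square and re-encode. File names and the codec are placeholders.

```python
import cv2

# Generic sketch: center-crop every frame of a video to a square.
cap = cv2.VideoCapture("input.mp4")               # placeholder path
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
side = min(w, h)
x0, y0 = (w - side) // 2, (h - side) // 2

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("square.mp4", fourcc, cap.get(cv2.CAP_PROP_FPS),
                      (side, side))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame[y0:y0 + side, x0:x0 + side])
cap.release()
out.release()
```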
I am trying to make an app for image recognition with OpenCV. I want to implement something like this, but I don't know how I should do it. Can anyone give me any help on where I should begin? I have downloaded OpenCV for iOS from here.
I have a hard copy of an image as an example, which I want to scan through the camera, and I have imported the images (markers) into the project. Now, when I scan the image through the camera, it should overlay the markers on the image, and when I tap/select a marker, it should show that marker's info.
Here is my image:
It's just an example I have taken (a square, a circle, and a triangle as markers).
So now, when the image is scanned, the markers should come up as an overlay, and on tapping a marker I should get its name (if the overlay image over the circle named "Air" is tapped, it should show me "Air" in an alert, or if the square named "Tiger" is tapped, it should say "Tiger").
My problem is that the markers follow a similar pattern but each one means something different, so I don't know how I should approach this.
Can anyone help me out by suggesting an idea? If anyone has done something like this, please tell me how I should implement it.
I have to start from scratch, so any help is appreciated.
Can this be achieved using OpenCV, or do I have to use another SDK such as Vuforia or Layar?
Maybe you should search a little before asking for help...
Anyway, the shapes you want to find do not seem to change (in scale or rotation), so you can look at the template matching methods implemented in OpenCV (see the OpenCV tutorial); a sketch follows below.
If the shapes do change, you should look at more powerful methods such as SIFT or SURF. Both are already implemented in OpenCV (the link from aishack is a tutorial on re-implementing SIFT; you can find on the same website a tutorial on using the OpenCV method).
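For reference, template matching in OpenCV is only a few lines. This sketch is in Python rather than the iOS bindings; "scene.png", "marker.png", and the 0.8 confidence threshold are my placeholders.

```python
import cv2

# Template-matching sketch: works when the marker's scale and rotation
# are fixed, as the answer above assumes.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, marker, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)   # best match and its location

if score > 0.8:                       # confidence threshold: an assumption
    h, w = marker.shape
    print("marker found at", top_left, "size", (w, h))
```

Running one match per marker template ("Air", "Tiger", ...) gives you both the overlay position and which name to show on tap.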
My basic task is to capture a part of an image and then use it. To give you an overview, I am creating an app based on OCR, and I allow the user to take a picture using the camera. However, rather than processing the entire image, I only want some part of it to be selected and sent for processing (preferably a rectangle). So, to sum it up, I want an overlay to be provided, and I want the image inside that overlay to be used further, rather than the entire captured image.
Now, by my understanding, I realize that AVFoundation is the tool to capture the image; however, in my app I have used UIImagePicker. I am totally confused, since I am a newbie and not sure how to proceed. I appreciate all the help. Thanks again.
There is a fine open-source library for OCR on iOS:
https://github.com/nolanbrown/Tesseract-iPhone-Demo
This will work best if the image resolution is 150 × 150. For more information on other libraries, you can also refer to this SO question: OCR Lib.
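The crop-then-OCR step itself is tiny once you have the overlay rectangle's coordinates. Sketched here in Python rather than iOS code; the file name and rectangle values are arbitrary placeholders.

```python
import cv2
import pytesseract

# Sketch of "OCR only the region under the overlay rectangle".
img = cv2.imread("photo.png")         # placeholder for the captured image
x, y, w, h = 100, 200, 400, 120      # the overlay rectangle, in pixels

roi = img[y:y + h, x:x + w]           # array slicing crops the region
text = pytesseract.image_to_string(roi)
print(text)
```

The same idea carries over to iOS: crop the UIImage (or pixel buffer) to the overlay's rect first, then hand only that region to the OCR engine.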