Distortion when loading a picture in ios using opencv - ios

The first question I want to ask is:
when I link OpenCV to my iOS project like this:
there is distortion when loading the picture into the app:
and the original picture is:
Do you see the difference here? Why does this happen? And is there any way to avoid it?
And another question is:
Are there any image-processing books/tutorials/websites for OpenCV that use Mat-style code? I read an image-processing book that uses IplImage-style code, so my current workflow for image processing in iOS is to first load a picture as an IplImage, convert it to a Mat, and finally convert that to a UIImage to show in the view. But Mat is newer than IplImage; can you recommend anything?
Thank you very much!

Related

Image processing library to make a curl image from a flat image in iOS or C#

I want to create an iOS app and a CMS (using C#) to manage image data. The image processing can be implemented on the client side or the server side.
The question: is there any image-processing library to make a curl image from a flat image in iOS or C#?
Any ideas will be appreciated.
Thank you!
Input a flat image:
Output will be a curl image:
You can use the GPUImage library to achieve that. Specifically, you can use GPUImageTransformFilter (affineTransform/transform3D) to transform the flat image. It's not possible to do the shadow and magazine pages that way, but you can have a template image that you superimpose with the transformed flat image to achieve this.
If it's too much for an iPhone, you can use ImageMagick on the server side to do that for you.
Hope this helps.

OpenCV + Tesseract for text recognition on video frames in realtime

I am trying to use Tesseract on frames captured by OpenCV from the Windows screen. I am not using a camera feed here; instead I am trying to capture a certain message that appears in a message box of a certain color. I am able to crop out the part of the screen that shows the message box, and I want to use Tesseract to read the message. Tesseract reads the message fine from a screenshot of the same cropped message-box image, but when I try the same thing on the real-time screen capture, the output is really bad.
The screenshot is saved using OpenCV's imwrite() from the same Mat image that is passed to Tesseract.
Can anyone explain why this is happening?
How can I make it work?
Regards

OpenCV: Making GPU pyrlk_optical_flow.cpp work on video input

It seems that the pyrlk_optical_flow.cpp sample code (opencv\samples\gpu) only works on two still images.
If so, does anyone know of examples of how to convert the code from still-image input to streaming video or webcam input?
Any help is appreciated. Thank you.

Get Started with Open CV image recognition

I am trying to make an app for image recognition with OpenCV. I want to implement something like this, but I don't know how I should do it. Can anyone give me help on where I should begin? I have downloaded OpenCV for iOS from here.
I have a hardcopy of an image as an example, which I want to scan with the camera, and I have imported the marker images into the project. When I scan the image with the camera, the app should overlay the markers on the image, and when I tap/select a marker it should show that marker's info.
Here is my image:
It's just an example I have made (square, circle, and triangle as markers).
So when the image is scanned, the markers should come up as an overlay, and on tapping a marker I should get its name (if the overlay image over the circle named "Air" is tapped it should show "Air" in an alert, or if the square named "Tiger" is tapped it should say "Tiger").
My problem is that the markers follow roughly the same pattern but the result is different for every part, so I don't know how I should approach this.
Can anyone help me out by suggesting an idea, or, if anyone has done something like this, tell me how I should implement it?
I have to start from scratch, so any help is appreciated.
Can this be achieved using OpenCV, or do I have to use another SDK such as Vuforia or Layar?
Maybe you should search a little before asking for help...
Anyway, the shapes you want to find do not seem to change (scale, rotation), so you can look at the template-matching methods implemented in OpenCV (see the OpenCV tutorial).
If the shapes do change, you should look at more powerful methods such as SIFT or SURF. Both are already implemented in OpenCV (the link from aishack is a tutorial on re-implementing SIFT; you can find on the same website a tutorial on using the OpenCV method).

cv::VideoCapture in android native code

I am using cv::VideoCapture in native code and I am having issues with it:
In Android Java code, VideoCapture gives a YUV420 frame; in native code it's a BGR one. Since I need a gray image, having a YUV image would be better (I read there is no cost in converting YUV to gray).
Here are my questions:
I am using an Asus TF201, and acquiring a frame takes about 26 ms, which is a lot. As the standard Android camera API gives YUV, does the native version of VideoCapture perform a conversion? (That would explain the time cost.)
Is it possible to change the format with CV_CAP_PROP_FORMAT? Whenever I try mycapture.get(CV_CAP_PROP_FORMAT), my app crashes...
EDIT: Andrey Kamaev answered this one. I have to use the grab/retrieve methods, adding an argument to the second one:
capture.retrieve(frame, CV_CAP_ANDROID_GREY_FRAME);
Thanks
Look at the OpenCV samples for Android. Most of them get the gray image from a VideoCapture object:
capture.retrieve(mGray, Highgui.CV_CAP_ANDROID_GREY_FRAME);
Internally, this gray image is "converted" from the YUV420 frame in the most efficient way, even without extra copying.
