Canny edge algorithm vs OpenCV - Windows Phone 8

I'm making an app that gets the stream from the camera and uses the Canny algorithm to display the edges.
On Android everything worked fine: I used OpenCV to get the edges and it ran in real time. Then I moved on to developing for WP8 and found out that WP8 doesn't support OpenCV yet. Since my only problem was the Canny edge algorithm, I got an implementation from the internet and adapted the code to Silverlight, but it was a complete mess. It wasn't real-time at all; it took about a second to display each frame. I searched a bit for alternatives and found EmguCV (but nothing about the Canny edge algorithm) and some people who tried to compile an OpenCV subset for W8 ARM. I even tried the second option, but ended up failing. My questions now are:
Why is it running so slowly?
If I manage to get an OpenCV library, will it be quicker?
Do you have any other alternatives/suggestions?

I think they're bringing out Windows Phone 8 support. With EmguCV you can access all OpenCV functions, and I'm sure Canny edge is one of them. Apparently OpenCvSharp is good too. As for speed: no idea.
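For reference, a minimal sketch of the Canny step in question using the OpenCV C++ API (the file name, blur kernel, and thresholds below are illustrative, not from the question):

    // Minimal OpenCV C++ sketch of the Canny edge step discussed above.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat frame = cv::imread("frame.png");  // stand-in for one camera frame
        if (frame.empty()) return 1;

        cv::Mat gray, edges;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);     // Canny needs single-channel input
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5); // reduce noise before edge detection
        cv::Canny(gray, edges, 50, 150);                   // low/high hysteresis thresholds

        cv::imshow("edges", edges);
        cv::waitKey(0);
        return 0;
    }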

Related

Find surfaces in 3D image

I'm working on a C++ project using a ToF camera. The camera is inside a room and has to detect walls, doors, or other big planar surfaces. I'm currently using OpenCV, but answers using other C++ libraries are also okay. What is a good algorithm to detect the surfaces, even if they are rotated and aren't facing the camera directly? I've heard of approaches like building a point cloud and using RANSAC. If you suggest doing that, please explain it in detail or provide a resource for explanation, because I don't know much about this topic (I'm a beginner in computer vision).
Thanks for your responses.
Are you familiar with PCL?
This tutorial shows how to find planar segments in a point cloud using PCL.
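To give an idea of what the tutorial covers, here is a hedged sketch of RANSAC plane segmentation with PCL (the input file and distance threshold are illustrative):

    // Sketch of detecting one planar surface with PCL's RANSAC segmentation.
    #include <pcl/ModelCoefficients.h>
    #include <pcl/point_types.h>
    #include <pcl/io/pcd_io.h>
    #include <pcl/segmentation/sac_segmentation.h>

    int main() {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        if (pcl::io::loadPCDFile("room.pcd", *cloud) < 0) return 1; // ToF frame saved as PCD

        pcl::SACSegmentation<pcl::PointXYZ> seg;
        seg.setModelType(pcl::SACMODEL_PLANE); // look for planar surfaces (walls, doors)
        seg.setMethodType(pcl::SAC_RANSAC);    // robust to outliers and arbitrary orientation
        seg.setDistanceThreshold(0.02);        // points within 2 cm count as plane inliers
        seg.setInputCloud(cloud);

        pcl::ModelCoefficients coeffs;         // plane equation ax + by + cz + d = 0
        pcl::PointIndices inliers;             // indices of the points on the plane
        seg.segment(inliers, coeffs);

        // To find several planes, remove the inliers from the cloud and segment again.
        return 0;
    }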

Find lines in image with endpoints using GPUImage

I'm pretty new to computer vision, but I'm basically trying to find all the straight lines in a hand-drawn image as seen by the camera. I'm using the GPUImage library by Brad Larson, and I'm able to find the lines using a Hough transform, but this gives me slopes and intercepts. Is there a way to get the endpoints? It's not really possible to determine the lengths via intersections due to the nature of the images. I've started looking into the full OpenCV library, but that seems considerably more complicated.
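For context, OpenCV's probabilistic Hough transform (cv::HoughLinesP) returns segment endpoints directly rather than slopes and intercepts; a sketch with illustrative file name and parameters:

    // Probabilistic Hough transform: each detected line comes back as
    // endpoint coordinates (x1, y1, x2, y2) instead of slope/intercept.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat img = cv::imread("drawing.png", cv::IMREAD_GRAYSCALE);
        if (img.empty()) return 1;

        cv::Mat edges;
        cv::Canny(img, edges, 50, 150); // Hough expects a binary edge map

        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180,
                        50,   // accumulator threshold
                        30,   // minimum segment length in pixels
                        10);  // maximum gap to join collinear segments

        for (const cv::Vec4i& l : lines)
            cv::line(img, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(255), 2);

        cv::imshow("lines", img);
        cv::waitKey(0);
        return 0;
    }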

Facedetection in iOS

I'm currently working on a project where I need to detect a face and then take a photo with the camera (after the camera has focused correctly).
Is something like this possible in iOS?
Are there any good tutorials on this?
I would suggest using OpenCV for this, as it has proven algorithms and is fast enough to work on images as well as video:
https://github.com/aptogo/FaceTracker
https://github.com/mjp/FaceRecognition
This solution will also work on Android, using the OpenCV port for Android.
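Both projects build on OpenCV's Haar cascade detector; a hedged C++ sketch of that underlying call (the cascade file and input image are illustrative, and on iOS the frames would come in through the wrapper):

    // Sketch of OpenCV Haar-cascade face detection on a single image.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::CascadeClassifier face_cascade;
        // This cascade file ships with OpenCV; the path depends on your install.
        if (!face_cascade.load("haarcascade_frontalface_default.xml")) return 1;

        cv::Mat frame = cv::imread("photo.jpg");
        if (frame.empty()) return 1;

        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray); // helps detection under uneven lighting

        std::vector<cv::Rect> faces;
        face_cascade.detectMultiScale(gray, faces, 1.1, 3); // scale step, min neighbors

        for (const cv::Rect& f : faces)
            cv::rectangle(frame, f, cv::Scalar(0, 255, 0), 2);
        cv::imwrite("detected.jpg", frame);
        return 0;
    }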
Use GPUImage for face detection. A face detection example is also available in GPUImage; see the last point in the FilterShowCase example project.
iOS 10 and Swift 3
You can check Apple's example for detecting a face:
https://developer.apple.com/library/content/samplecode/AVCamBarcode/Introduction/Intro.html
You can select the face metadata to make the camera track the face and show a yellow box on it; it has better performance than this example:
https://github.com/wayn/SquareCam-Swift

Finding multiple objects in an image

I am currently trying to detect P plates in images made from panoramas taken off the top of a car (so the P plates could be coming from in front of or behind me, and may be distorted). There may be more than two P plates, so I need the ability to detect more than one at a time. I have used OpenCV template matching with mixed success: it doesn't seem to cope well with P plates at an angle, and I cannot get it to recognise two in one image. I have also tried SURF, but with no luck. Does anyone have recommendations for the kind of algorithm I should use here (preferably one that is integrated into OpenCV)?
You may want to go with SIFT using Rob Hess' SIFT library. It uses OpenCV and is pretty fast.
Another way is to detect squares first and then use the content of each square for further processing.
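On the "can't find two matches" point: cv::minMaxLoc only reports the single best match, so one option is to threshold the whole score map from cv::matchTemplate instead. A sketch (file names and the 0.8 threshold are illustrative, and this does not fix the rotation problem):

    // Find multiple template matches by thresholding the score map
    // rather than taking only the single best cv::minMaxLoc hit.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat scene = cv::imread("panorama.png", cv::IMREAD_GRAYSCALE);
        cv::Mat tmpl  = cv::imread("p_plate.png", cv::IMREAD_GRAYSCALE);
        if (scene.empty() || tmpl.empty()) return 1;

        cv::Mat scores;
        cv::matchTemplate(scene, tmpl, scores, cv::TM_CCOEFF_NORMED);

        // Keep every location scoring above the threshold, not just the maximum.
        cv::Mat hits;
        cv::threshold(scores, hits, 0.8, 1.0, cv::THRESH_BINARY);
        hits.convertTo(hits, CV_8U, 255);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(hits, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours) {
            cv::Rect r = cv::boundingRect(c); // one cluster of strong responses
            cv::rectangle(scene, cv::Rect(r.x, r.y, tmpl.cols, tmpl.rows),
                          cv::Scalar(255), 2); // box at the template's size
        }
        cv::imwrite("matches.png", scene);
        return 0;
    }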

Speed up stitching of 2 images?

I am working with two fly cameras and trying to stitch their images together.
I am working with OpenCV and C++ here.
Since I am trying to cover a large region using both cameras (and to do contour detection later on), I am wondering if there's a fast way to stitch the images from both cameras together.
Currently here's what I am doing:
Subtracting a previously stored background image from each camera's image (to speed up contour detection later on)
Undistorting each image using the cvRemap function
And finally setting the ROI of the images for stitching them together.
My question is: is it possible to speed this up even more? Currently these steps take around 60 ms, and with additional functionality it slows down to 0.1 seconds.
Have I been using the slower functions of OpenCV ? Or are there any tricks to gain more speed ?
Take the latest OpenCV snapshot from here and try the stitching module implemented here. They have been working on stitching performance lately, so it's possible to get some good improvements.
By the way, which step takes the most time? Did you profile your app? Take a look at the profiling results and you'll be able to see exactly where to optimize, and maybe how to do it.
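A hedged sketch of the stitching module's high-level interface (cv::Stitcher; newer OpenCV releases create it with Stitcher::create, while the 2.4-era API used Stitcher::createDefault — the inputs are illustrative):

    // High-level use of OpenCV's stitching module on two camera images.
    #include <opencv2/opencv.hpp>
    #include <opencv2/stitching.hpp>

    int main() {
        std::vector<cv::Mat> inputs = {
            cv::imread("cam_left.png"),  // frame from the first camera
            cv::imread("cam_right.png"), // frame from the second camera
        };
        for (const cv::Mat& m : inputs)
            if (m.empty()) return 1;

        cv::Mat pano;
        cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
        if (stitcher->stitch(inputs, pano) != cv::Stitcher::OK) return 1;

        cv::imwrite("pano.png", pano);
        return 0;
    }

Since the two cameras presumably don't move relative to each other, another option is to estimate the alignment once offline and only warp and blend per frame, which is far cheaper than running feature matching on every frame.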
