OpenCV from Windows to iOS - opencv

I want to train OpenCV on a server and send the XML generated by OpenCV to an iOS device, where an app will recognize faces using the XML trained by the server. I will use OpenCV in both apps, but the server runs Windows (training) and the device runs iOS (recognition).
So my main question is very simple:
Can the XML generated with the Windows version of OpenCV be used with the iOS version without any trouble? Has somebody done something similar who can give me some tips?
On Windows I will use .NET.
I think there won't be any trouble because both sides use the same library (OpenCV), so I suppose they share the same internal algorithms, but I want to be sure before starting the project.
Best regards, and thanks for your time.

There is no problem, but you must train with images taken from your devices. It is normal to have multiple XML sets depending on your different cameras. Normally you release these with the binary rather than as a download, but still...
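For what it's worth, the cascade XML is plain serialized data, so the same file loads on any platform. Below is a minimal sketch using OpenCV's Java bindings purely for illustration (the file names are hypothetical, and it assumes you mean a detection cascade trained with opencv_traincascade); the equivalent cv::CascadeClassifier calls in the C++ API you would use on iOS read the same XML.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.objdetect.CascadeClassifier;

    public class CascadeSmokeTest {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            // Load the XML produced by the training run on the Windows server.
            CascadeClassifier cascade = new CascadeClassifier("cascade.xml"); // hypothetical path
            if (cascade.empty()) {
                System.err.println("Could not load cascade.xml");
                return;
            }

            // Detect on a test frame captured with the target device's camera.
            Mat image = Imgcodecs.imread("test_face.jpg"); // hypothetical path
            Mat gray = new Mat();
            Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY);
            Imgproc.equalizeHist(gray, gray);

            MatOfRect faces = new MatOfRect();
            cascade.detectMultiScale(gray, faces);

            for (Rect r : faces.toArray()) {
                System.out.println("Face at " + r.x + "," + r.y + " (" + r.width + "x" + r.height + ")");
            }
        }
    }

If a cascade trained on server-side images detects poorly on device frames, that is the point of the answer above: retrain (or add training samples) with images from the actual device cameras.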

Related

Jmyron and Windows 8

I am running into hardware issues that perhaps someone here knows a workaround for. I am using a PC and Windows.
For several years I have been making interactive installations using video tracking with the JMyron library in Processing, which has functioned marvelously for me. I use this setup: CCTV-type micro cameras feed a multiplexer, then I digitize this signal via a FireWire cable to a PCI card. Processing reads these quads (sometimes more) as a single window, and it has always worked (from Windows XP all the way to 7).
Then comes Windows 8: Processing seems to prefer the built-in webcam to the FireWire bus. On previous versions of Windows, the FireWire bus would naturally override the webcam, provided I had first opened a video capture in Windows Maker and then shut it down before running the Processing sketch. In Windows 7, which had no native video capture software, I used a great open-source video editor called Capture Flux. The webcam never interfered.
With Windows 8, no matter what I try, Processing defaults to the webcam, which for my purposes is useless. I have an exhibition coming up very soon, and there is no way I am going to have the time to rewrite all that code for OpenCV or other newer libraries.
I am curious if anyone has had similar problems and found a workaround. Is there a way of disabling the webcam in Windows 8 (temporarily, of course, because I need it to be operational for other applications), or some other solution?
Thank you!
Try this:
type "windows icon+x" choose device manager (or use run/command line: "mmc devmgmt.msc")
look for imaganing devices, find your integrated webcamera
right click on it and choose disable - now processing should skip the device.
Repeat the steps to reenable the device.
Other solution would be using commands in processing:
println (Capture.list()); (google it on processing.org) this way you will get all avaliable devices and you can choose the particular one based on its name.
Hope this helps.
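A minimal sketch of that second option, written against the Processing 2 video library (the "FireWire" substring is a placeholder; use whatever name Capture.list() actually prints for your capture card):

    import processing.video.*;

    Capture cam;

    void setup() {
      size(640, 480);

      // Print every capture device Processing can see (webcam, FireWire card, ...).
      String[] devices = Capture.list();
      println(devices);

      // Pick the FireWire source by (part of) its name instead of the default webcam.
      String wanted = null;
      for (int i = 0; i < devices.length; i++) {
        if (devices[i].indexOf("FireWire") >= 0) {
          wanted = devices[i];
          break;
        }
      }

      cam = (wanted != null) ? new Capture(this, 640, 480, wanted)
                             : new Capture(this, 640, 480);
      cam.start();
    }

    void draw() {
      if (cam.available()) {
        cam.read();
      }
      image(cam, 0, 0);
    }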

Porting Wireshark to a QNX-based system

I am a newbie to the industry, and as part of my internship I have been assigned the above project. I have no experience in porting an application to a different OS.
So far, I have tried to understand the basic structure of a component (that's what an application is called in IOS-XR), but as far as I can understand, porting Wireshark will also require porting the libpcap library to XR.
Can someone please shed some light on how I should approach this?
I know nothing about QNX; however, I will note that Wireshark has a lot of dependencies on various libraries. Some examples:
libgLib
libgtk
libffi-5
libfontconfig-1
libfreetype-6
libintl-8
libjasper-1
libjpeg-8
liblzma-5
libpixman-1-0
libpng15-15
libtiff-5
libxml2-2
...
Are these libraries available on QNX?
With respect to libpcap:
libpcap is needed for packet capture. If it is not available, it would certainly need to be ported. I could imagine that this might be a large effort, given that the code is presumably quite dependent on the exact OS capabilities used to access network-level data.
For information about developing Wireshark (on Windows and *nix) see the
Wireshark Developer's Guide.

Fingerprint thinning code preserving continuity of ridges

I am trying to develop a project that involves fingerprint matching. I am stuck at the fingerprint thinning stage. I am coding the project using OpenCV and C++ in Visual Studio 2010.
I tried the erode() function, but it doesn't preserve the continuity of the ridge lines. I also tried the following implementation of the Zhang-Suen thinning algorithm,
http://opencv-code.com/quick-tips/implementation-of-thinning-algorithm-in-opencv/
but this throws an exception at a memory location. I don't know how to proceed, and I am stuck at this step.
Kindly help me with code for fingerprint thinning that also preserves the continuity of ridges.
If you're just looking for a code example of extraction, SourceAFIS (BSD license) goes from full greyscale to binarized and thinned, with some artifact trimming as well, and then identifies minutiae. It's written in C#, but it might give you some bright ideas.
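If you would rather keep your own thinning code, here is a plain-Java sketch of the Zhang-Suen algorithm that the linked page implements, written on a simple 0/1 int array so the logic is easy to port to your C++/OpenCV code. (Crashes with that kind of thinning code are often caused by feeding it a multi-channel or non-binary image, so binarize to a single-channel 0/1 image first.) Zhang-Suen keeps ridges 8-connected, which is usually what fingerprint pipelines need, though it can still leave small spurs to prune afterwards.

    public class ZhangSuenThinning {

        // img: 1 = ridge (foreground), 0 = background; thinned in place.
        public static void thin(int[][] img) {
            boolean changed = true;
            while (changed) {
                // Use | (not ||) so both sub-iterations always run.
                changed = subIteration(img, 0) | subIteration(img, 1);
            }
        }

        private static boolean subIteration(int[][] img, int pass) {
            int rows = img.length, cols = img[0].length;
            java.util.List<int[]> toClear = new java.util.ArrayList<int[]>();

            for (int y = 1; y < rows - 1; y++) {
                for (int x = 1; x < cols - 1; x++) {
                    if (img[y][x] != 1) continue;

                    // Neighbours p2..p9, clockwise, starting with the pixel above.
                    int[] n = {
                        img[y - 1][x], img[y - 1][x + 1], img[y][x + 1], img[y + 1][x + 1],
                        img[y + 1][x], img[y + 1][x - 1], img[y][x - 1], img[y - 1][x - 1]
                    };

                    int b = 0;                      // number of foreground neighbours
                    for (int i = 0; i < 8; i++) b += n[i];

                    int a = 0;                      // 0 -> 1 transitions around the circle
                    for (int i = 0; i < 8; i++) {
                        if (n[i] == 0 && n[(i + 1) % 8] == 1) a++;
                    }

                    boolean cond = (pass == 0)
                            ? (n[0] * n[2] * n[4] == 0 && n[2] * n[4] * n[6] == 0)   // p2*p4*p6, p4*p6*p8
                            : (n[0] * n[2] * n[6] == 0 && n[0] * n[4] * n[6] == 0);  // p2*p4*p8, p2*p6*p8

                    if (b >= 2 && b <= 6 && a == 1 && cond) {
                        toClear.add(new int[] { y, x });  // mark now, delete after the pass
                    }
                }
            }

            for (int i = 0; i < toClear.size(); i++) {
                int[] p = toClear.get(i);
                img[p[0]][p[1]] = 0;
            }
            return !toClear.isEmpty();
        }
    }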

Detect edge of a document in windows phone

I need to detect the corners/edges of a document in a captured image on Windows Phone 7. I cannot use OpenCV, as WP7 does not support native code libraries.
Can anybody suggest an algorithm or open-source library that I can use for this purpose?
I want to do the same thing in a Windows Phone app as posted in another SO question: DETECT the Edge of a Document in iPhoneSDK
You can try using AForge for this task.
AForge.NET has great functionality, and I think it will not be a problem to use it on Windows Phone.
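If it helps to see the algorithmic idea independently of any library, a rough sketch of the usual first step is below (plain Java, on a grayscale array, purely to illustrate; AForge ships ready-made edge-detection filters that do the same work): compute the Sobel gradient magnitude, threshold it into an edge map, and then fit the four strongest straight borders (for example with a Hough transform or a quadrilateral fit) to get the document corners.

    public class SobelEdges {

        // gray: grayscale image as 0..255 values; returns true where the gradient is strong.
        public static boolean[][] edgeMap(int[][] gray, int threshold) {
            int h = gray.length, w = gray[0].length;
            boolean[][] edges = new boolean[h][w];
            for (int y = 1; y < h - 1; y++) {
                for (int x = 1; x < w - 1; x++) {
                    // Horizontal and vertical Sobel responses.
                    int gx = -gray[y - 1][x - 1] + gray[y - 1][x + 1]
                           - 2 * gray[y][x - 1] + 2 * gray[y][x + 1]
                           - gray[y + 1][x - 1] + gray[y + 1][x + 1];
                    int gy = -gray[y - 1][x - 1] - 2 * gray[y - 1][x] - gray[y - 1][x + 1]
                           + gray[y + 1][x - 1] + 2 * gray[y + 1][x] + gray[y + 1][x + 1];
                    edges[y][x] = Math.abs(gx) + Math.abs(gy) > threshold;
                }
            }
            return edges;
        }
    }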

Dealing with large BlackBerry applications (lots of pictures)

I have a BlackBerry application with lots of images that was built for pre-OS 7 handsets. I have to bring it up to date with the new screen sizes, and my 5 MB app will be almost twice as big, which puts it over the size limit.
What is the best way to handle that in the BlackBerry Java Plug-in for Eclipse?
I've come to the conclusion that I have two choices:
Including the new images as a cod (or is it jar?) library in my current project, but I didn't manage to do that. Most of what I read was for the JDE anyway, and I'd like to do it in Eclipse.
Having a second bundle for new handsets, but how do I do that without having two different projects?
Downloading the new images on install would be another option, but it's not possible for this project.
Details and/or links appreciated, as I'm quite new to BlackBerry development.
Many thanks
From my point of view, the better way is to ship only the biggest images in the project and scale them down proportionally for every device at runtime.
When you scale an image down, its quality (almost) does not degrade. There are exceptions, sure, but in general this rule works.
You may also use the preprocessor to build different cod files for different devices with different screens.
You can keep the bigger images and get rid of the smaller ones, handling lower-resolution devices via image scaling. This way your application becomes smaller.
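A minimal sketch of that runtime-scaling approach (the resource name is whatever you bundle; EncodedImage.scaleImage32 needs OS 4.2 or later):

    import net.rim.device.api.math.Fixed32;
    import net.rim.device.api.system.Bitmap;
    import net.rim.device.api.system.Display;
    import net.rim.device.api.system.EncodedImage;

    public final class ImageScaler {

        // Scale a bundled image to the current screen width, keeping the aspect ratio.
        public static Bitmap scaledToScreenWidth(String resourceName) {
            EncodedImage img = EncodedImage.getEncodedImageResource(resourceName);

            int currentWidth  = Fixed32.toFP(img.getWidth());
            int requiredWidth = Fixed32.toFP(Display.getWidth());
            // scaleImage32 takes the ratio current/required as a Fixed32 factor.
            int scale = Fixed32.div(currentWidth, requiredWidth);

            return img.scaleImage32(scale, scale).getBitmap();
        }
    }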
I suggest you make the app for BlackBerry OS 7.0 only, because it has its own set of resolutions; if you try to support every BlackBerry OS version in one application, it will become much larger, and you may not be able to upload it to BlackBerry App World.
Remove all the graphics for previous OS versions, keep only the BlackBerry OS 7 ones, and upload that to the market so OS 7.0 users download the latest app.
