Detect edge of a document in Windows Phone - image-processing

I need to detect the corners/edges of a document in a captured image on Windows Phone 7. I cannot use OpenCV, as WP7 does not support native code libraries.
Can anybody suggest an algorithm or open-source library that I can use for this purpose?
I want to do the same thing in a Windows Phone app as posted in another SO question: DETECT the Edge of a Document in iPhoneSDK

You can try using AForge.NET for this task.
AForge.NET has great functionality, and I think it will not be a problem to use it on Windows Phone.
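For a concrete starting point, here is a minimal sketch against the desktop AForge.NET imaging API (file names are illustrative). Note that AForge's filters work on System.Drawing bitmaps, which WP7 lacks, so on the phone you would port the same logic to WriteableBitmap pixel buffers:

```csharp
// Minimal sketch using the desktop AForge.NET imaging API.
// On WP7, port this to operate on WriteableBitmap pixels instead.
using System;
using System.Drawing;
using AForge;
using AForge.Imaging;
using AForge.Imaging.Filters;

class DocumentEdges
{
    static void Main()
    {
        // The captured photo (file name is illustrative).
        var photo = new Bitmap("capture.jpg");

        // AForge edge/corner detectors expect 8bpp grayscale input.
        Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(photo);

        // Canny yields thin, clean edges - a good base for tracing
        // the document outline.
        Bitmap edges = new CannyEdgeDetector().Apply(gray);
        edges.Save("edges.png");

        // SUSAN corner detection can propose candidate document corners.
        var susan = new SusanCornersDetector();
        foreach (IntPoint corner in susan.ProcessImage(gray))
            Console.WriteLine("corner at ({0}, {1})", corner.X, corner.Y);
    }
}
```

From the Canny edge map, a Hough line transform (AForge ships HoughLineTransformation) can then recover the four dominant borders of the page and their intersections give you the corners.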

Related

How can I create hybrid apps on iOS?

I see on Android that apps exist to create Android apps. I understand nothing like this exists on iOS because of Apple's terms. On iOS, however, some apps, Pythonista for example, allow the user to create scripts that run similarly to apps. Is this functionality currently available for hybrid frameworks, i.e. PhoneGap/Cordova, React Native, etc.? Barring this, is there some method whereby I can code and test such apps on my iPhone/iPad?
Bottom line: I want to code apps while commuting, etc., on iOS. I understand I need a computer to compile the final product; that's OK, it's just the coding/testing process I want to do on iOS.
I am up for any hack you can think of to make this work, so long as it is accessible with VoiceOver, Apple's screen reader, as I cannot see at all. One example of something I thought of that won't work is remote desktop software: no such software is accessible, since it works from an image of the remote screen, which I have no access to.
I am looking forward to your creativity; so far this has me stumped.
Thanks in advance.
Similar to the Playgrounds answer, but if you wanted to use Xamarin you could use Continuous .NET. It's a C# IDE for iOS. You could then use Working Copy to keep the version on your computer in line.
The other option is to VNC into your computer at home, but if you’re on the train that might not be a great option.
It's not a solution to your problem, but if you have an iPad, you can write parts of apps in Swift Playgrounds. There you have access to all the UIKit stuff. Unfortunately, some of the frameworks you can use on iOS are missing.

JMyron and Windows 8

I am running into hardware issues for which perhaps someone here knows a workaround. I am using a PC and Windows.
For several years I have been making interactive installations using video tracking: the JMyron library in Processing, which has functioned marvelously for me. I use this setup: CCTV-type microcameras into a multiplexer, then I digitize this signal via a FireWire cable to a PCI card. Processing then reads these quads (sometimes more) as a single window, and it has always worked (from Windows XP all the way to 7).
Then comes Windows 8: Processing seems to prefer the built-in webcam to the FireWire bus. On previous versions of Windows, the FireWire bus would naturally override the webcam, provided I had first opened a video capture in Windows Movie Maker and then shut it down before running the Processing sketch. In Windows 7, which had no native video capture software, I used a great open-source video tool called CaptureFlux. The webcam never interfered. With Windows 8, no matter what I try, Processing defaults to the webcam, which for my purposes is useless.
I have an exhibition coming up real soon, and there is no way I am going to have the time to rewrite all that code for OpenCV or other newer libraries.
I am curious if anyone has had similar problems and found a workaround. Is there a way of disabling the webcam in Windows 8 (temporarily, of course, because I need it to be operational for other applications), or some other solution?
Thank you!
Try this:
type "windows icon+x" choose device manager (or use run/command line: "mmc devmgmt.msc")
look for imaganing devices, find your integrated webcamera
right click on it and choose disable - now processing should skip the device.
Repeat the steps to reenable the device.
Other solution would be using commands in processing:
println (Capture.list()); (google it on processing.org) this way you will get all avaliable devices and you can choose the particular one based on its name.
Hope this helps.

IBM Worklight - Is there an embedded rendering engine? Can we change it?

I'm working on software that includes 3D graphics. These are heavy enough that I decided to use OpenGL to keep the animation fluid. I selected the THREE.js graphics library (WebGL).
Reading the HTML through a web browser works very well: the WebGL functions are recognized. I tried it on my desktop (Win32/Firefox17, please do not judge me on my configuration!) and on a Nexus 10 (Android 4.3, FF24 and FF25 Beta; tried with Chrome 30 Beta but no joy...). But I need to access native data, like the file system, to get information for my program. So I wrapped my code with WL and deployed it as an app on my Nexus 10... and the WebGL capability disappeared. :(
So I looked for a reason for that:
I found two different ideas on the IBM site: in one place, I understand that a JS engine is embedded; in another, that WL uses the engine of the tablet's default web browser (which is what I understood the first time)...
Let's be precise about the different engines: on the Nexus, the Firefoxes obviously have Gecko engines, and Chrome 30 is Blink (WebKit-like, version 537.36). Those are the ones reported by window.navigator.userAgent as I read it directly in the browser; no surprise. In the Eclipse/WL preview, I got different interpreters depending on the browser I selected, FF or IE (not Safari, which I don't have installed), but not the one from my desktop (the ones used are even older than my own FF...). But when I detect the one used in the app (after wrapping it in an APK), it returns AppleWebKit 534.30/Worklight/6.0...
Maybe I'm wrong (tell me), but if 'Worklight' appears in the engine's version string, and if WebKit is used even when I remove Chrome from my tablet (the version is different, but who knows...), I suspect that, for this app as it's configured, the engine is embedded by Cordova or WL.
If so, I agree it allows the code to be read by a fully compatible interpreter regardless of the browser installed on the hardware. But when a WebKit engine does not please you for the functions it supports (like WebGL, which it supports only very partially), that looks like a problem to me.
Does anybody have confirmation of how this works? If the engine is wrapped with the app, do you know if we can choose the one to be included, or configure it (like enabling WebGL ;))? Any other ideas?
Thanks,
Vincent.
Worklight applications do not bundle an interpreter. The application will use what is bundled in the OS.
In other words, the default WebView in Worklight is the one that the OS provides; in the case of Android, it uses the bundled WebKit.
This is not something Worklight controls whatsoever.
You could, maybe, somehow bundle the Firefox engine libraries in your app and hook it all up together, but the task of doing so is incredibly large and complex... and not supported by IBM Worklight. I also do not know whether Cordova supports this (it is used in Worklight to interface with native functionality).
As for the user agent, the string "Worklight" is attached to it as part of the support for IBM WebSphere Portal.

OpenCV from Windows to iOS

I want to train OpenCV on a server and send the XML generated by OpenCV to an iOS device, where an app will recognize faces using the XML trained by the server. I will use OpenCV in both apps, but the server runs Windows (training) and the device runs iOS (recognition).
So my main question is very simple:
Can the XML generated by the Windows version of OpenCV be used in the iOS version without any trouble? Has somebody done something similar who can give me some tips?
On Windows I will use .NET.
I think there won't be trouble, because they are the same library (OpenCV), so I suppose they use the same internal algorithms, but I want to be sure before starting the project.
Best regards, and thanks for your time.
There is no problem, but you must train with images taken from your devices. It is normal to have multiple XML sets depending on your different cameras. Normally you release these with the binary, and not as a download, but still...
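As a sanity check, you can verify on the Windows side that the trained file loads and detects before shipping it. A minimal sketch, assuming the Emgu CV .NET wrapper on Windows (the wrapper choice and file names are illustrative); on iOS the very same XML loads through cv::CascadeClassifier:

```csharp
// Minimal sketch of the Windows side, assuming the Emgu CV .NET wrapper.
// The same cascade.xml is then shipped to the iOS app, where
// cv::CascadeClassifier::load("cascade.xml") reads it unchanged.
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

class CascadeCheck
{
    static void Main()
    {
        // Load the cascade produced by OpenCV's training tools.
        var cascade = new CascadeClassifier("cascade.xml");

        // Run it against a test image taken with the target camera.
        using (var image = new Image<Bgr, byte>("test.jpg"))
        {
            var gray = image.Convert<Gray, byte>();
            Rectangle[] faces = cascade.DetectMultiScale(
                gray, 1.1, 10, new Size(20, 20), Size.Empty);
            Console.WriteLine("{0} face(s) detected", faces.Length);
        }
    }
}
```

The cascade XML is plain data with no platform-specific content, which is why the same file works on both sides; what does vary is the camera, hence the advice above to train per device.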

Render video to Direct3D 9.0c texture

I have been trying to play a video in my Direct3D application with the help of DirectShow. My problem is that I cannot figure out how to get the frame data to put into a texture (ISampleGrabber won't install).
Does anyone know of any methods or examples of this being done?
ISampleGrabber has been available in all versions of Windows released in the last 15+ years.
Look for "Microsoft® DirectX® 9.0 SDK Update (October 2004)" which contains sample app, which does exactly what you want:
Texture3D Sample Description
Draws video on a Microsoft® Direct3D texture surface.
Note This sample does not support changing the display properties of
the monitor while the sample is running.
Path
Source: (SDK root)\Samples\C++\DirectShow\Players\Texture3D
Executable: (SDK root)\Samples\C++\DirectShow\Bin\Texture3D.exe
UPDATE. Even though the Sample Grabber existed through many, many versions of Windows, it was finally removed, along with the other filters hosted by qedit.dll, in the most recent versions of the operating system (Windows Server 2008 in particular). Those whose applications depend on this API should consider building a replacement using the Grabber sample from older SDKs. The same applies to those who need this filter because of the many references on the Internet and tutorials on how to use it to get access to media streams.
The filter was removed silently and without any replacement. Microsoft suggests Media Foundation as an alternate option and the successor to DirectShow, which is, however, hardly helpful.
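For reference, on a system where the Sample Grabber is still present, the classic wiring looks roughly like this. A minimal sketch in C#, assuming the third-party DirectShowLib bindings (the original Texture3D sample is C++; the file name and RGB24 choice here are illustrative):

```csharp
// Minimal sketch assuming the third-party DirectShowLib .NET bindings and
// a Windows version where qedit.dll still registers the Sample Grabber.
using System;
using System.Runtime.InteropServices;
using DirectShowLib;

class FrameGrab
{
    static void Main()
    {
        var graph = (IGraphBuilder)new FilterGraph();
        var grabber = (ISampleGrabber)new SampleGrabber();

        // Ask the grabber for uncompressed RGB24 video frames.
        var mt = new AMMediaType { majorType = MediaType.Video,
                                   subType = MediaSubType.RGB24 };
        grabber.SetMediaType(mt);
        DsUtils.FreeAMMediaType(mt);

        // Intelligent Connect prefers filters already in the graph, so
        // the decoded video stream gets routed through the grabber.
        graph.AddFilter((IBaseFilter)grabber, "Sample Grabber");
        graph.RenderFile("clip.avi", null);    // file name is illustrative

        grabber.SetBufferSamples(true);        // keep a copy of each frame
        ((IMediaControl)graph).Run();

        // Per frame: query the buffer size, copy the pixels out, then
        // blit them into a locked Direct3D texture surface.
        int size = 0;
        grabber.GetCurrentBuffer(ref size, IntPtr.Zero);   // size query
        IntPtr pixels = Marshal.AllocCoTaskMem(size);
        grabber.GetCurrentBuffer(ref size, pixels);
        // ... IDirect3DTexture9::LockRect + copy of 'pixels' goes here ...
        Marshal.FreeCoTaskMem(pixels);
    }
}
```

On systems where the filter is gone, the same buffering idea has to be rebuilt from the Grabber sample source in the older SDKs, as noted above.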
