I need a list of augmented reality toolkits that work with .NET. I want to implement this with basic .NET for Windows, not with WPF or Silverlight.
This may be helpful to you: SLARToolkit - Silverlight and Windows Phone Augmented Reality Toolkit.
Features:
Direct Support for Silverlight's CaptureSource
Support for Windows Phone's Photo Camera class
Built-in support for Silverlight 5's hardware accelerated 3D API
Flexible through a generic and a WriteableBitmap detector
Multiple marker detection
Simple black square markers
Custom markers
Real time performance
Easy to use
Documentation including a step by step Beginner's Guide
Based on established algorithms and techniques
Uses the Matrix3DEx library
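To give a feel for what this looks like in practice, here is a rough sketch of marker detection with the CaptureSourceMarkerDetector, loosely following the toolkit's Beginner's Guide; treat the exact names and signatures as approximate and defer to the documentation.

```csharp
// Rough sketch of SLARToolkit marker detection in Silverlight, based on the
// Beginner's Guide sample; names/signatures are approximate, check the docs.
using System.Collections.Generic;
using System.Windows.Media;
using SLARToolKit;

public partial class MainPage
{
    private CaptureSourceMarkerDetector arDetector;

    private void InitializeAr(CaptureSource captureSource)
    {
        // Load the printed black-square marker pattern shipped with the samples.
        var marker = Marker.LoadFromResource(
            "data/Marker_SLAR_16x16segments_80width.pat", 16, 16, 80.0);

        // The detector hooks into Silverlight's CaptureSource webcam feed.
        arDetector = new CaptureSourceMarkerDetector();
        arDetector.Initialize(captureSource, 1, 4000, new List<Marker> { marker });

        // Each detection result carries a 3D transformation matrix that can be
        // applied to XAML elements (via Matrix3DEx) or Silverlight 5's 3D API.
        arDetector.MarkersDetected += (s, e) =>
        {
            var results = e.DetectionResults;
            if (results.HasResults)
            {
                var transformation = results[0].Transformation;
                // Apply 'transformation' to the overlaid content here.
            }
        };
    }
}
```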
Info seems to be scarce; I'm hoping someone can point me to an SDK, library, or code to get the infrared frame from the Windows Hello camera in the Surface Pro.
Does OpenCV support this?
More info: the camera is listed as Intel AVStream Camera 2500 in the Device Manager of the Surface Pro.
To the best of my knowledge, the Media Foundation API has no support for infrared cameras. Microsoft did not extend the public API to such inputs, even though it appears technically possible through undocumented interfaces.
You can read infrared frames through a newer API offered for UWP development: Process media frames with MediaFrameReader; the key item there is MediaFrameSourceKind.Infrared. This API is built on top of the Media Foundation and Sensor APIs and exposes infrared cameras even though Media Foundation alone has no equivalent public interface.
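As a concrete starting point, here is a minimal sketch of that UWP pipeline: it finds a source group exposing an infrared source, initializes MediaCapture against it, and reads frames with MediaFrameReader. Error handling and format negotiation are omitted.

```csharp
// Minimal sketch (UWP, C#): enumerate frame sources and read infrared frames.
// Assumes a device whose driver exposes an IR stream; error handling omitted.
using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;

public static class IrCaptureSketch
{
    public static async Task StartAsync()
    {
        // Find a source group that contains an infrared source.
        var groups = await MediaFrameSourceGroup.FindAllAsync();
        var group = groups.FirstOrDefault(g =>
            g.SourceInfos.Any(i => i.SourceKind == MediaFrameSourceKind.Infrared));
        if (group == null) return; // no IR camera exposed to this API

        var capture = new MediaCapture();
        await capture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            SourceGroup = group,
            SharingMode = MediaCaptureSharingMode.ExclusiveControl,
            MemoryPreference = MediaCaptureMemoryPreference.Cpu,
            StreamingCaptureMode = StreamingCaptureMode.Video
        });

        var irSource = capture.FrameSources.Values.First(
            s => s.Info.SourceKind == MediaFrameSourceKind.Infrared);
        var reader = await capture.CreateFrameReaderAsync(irSource);
        reader.FrameArrived += (sender, args) =>
        {
            using (var frame = sender.TryAcquireLatestFrame())
            {
                var bitmap = frame?.VideoMediaFrame?.SoftwareBitmap;
                // Process the IR bitmap here (e.g. hand off to OpenCV interop).
            }
        };
        await reader.StartAsync();
    }
}
```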
Given that this is a UWP API, you might have trouble fitting it all together with OpenCV if you need the latter. UWP/OpenCV bridging might be of help there: Create a helper Windows Runtime component for OpenCV interop.
Since OpenCV interfaces directly with the traditional Windows APIs, DirectShow and Media Foundation, it is highly unlikely that it can capture an infrared stream out of the box, unless, of course, the driver itself presents the stream as normal video. Being "properly" marked as infrared on the Surface Pro thus hides the sensor from the mentioned APIs and, consequently, from OpenCV.
I want to export Navisworks 3D navigation models to my iPhone. Is there any API available to achieve this? I want to create my own app to read models on iOS, similar to the Navisworks Freedom viewer for iOS.
I have searched a lot on the internet but couldn't find anything useful.
There is no Navisworks viewer for iOS, but there is a WebGL viewer that can be embedded in mobile apps (or on the web or desktop too).
There is a live sample at https://360.autodesk.com/viewer
See the API at http://developer.autodesk.com
iOS sample at https://github.com/Developer-Autodesk/workflow-ios-view.and.data.api
I recommend developing your own native or web app to build a mobile 3D model viewer.
Web app - you could use Unity3D or Three.js. These communities are strong and there are plenty of resources available. The benefit here is that it would work on the desktop too.
Native app - you could make a model viewer in Swift using Apple's Metal library. I am not familiar with Android 3D shader libraries.
Both of these endeavors involve a huge amount of work. I hope you will keep an eye out for code you can copyright (or open-source), and perhaps even patent if you develop a new, complex algorithm for converting or displaying 3D data.
I am using an Xbox Kinect with the Kinect for Windows SDK. I want to make an application that will augment a 3D mask (a 3D model of a mask made in 3DS Max) onto the face of anyone using the application. The application will be used in an exhibit locally. I have not tried much because I don't know where to start. So what I want to know is: is it currently possible to augment a 3DS Max model onto a live video stream using the facial recognition and skeletal tracking features in the newest Kinect for Windows SDK, and if so, how/where should I start trying to implement this? Any pointer in the right direction would be great. Thank you!
PS: Yes, I have read the UI guidelines and the facial tracking documentation. My problem is not knowing where to start programming, not a failure to understand the fundamental concepts. Thanks!
If you are serious about getting into developing for the Kinect I would recommend getting this book:
http://www.amazon.com/Programming-Kinect-Windows-Software-Development/dp/0735666814
This goes through developing with the Kinect for Windows SDK from the ground up. There are face tracking and augmented reality examples, so I'm pretty sure you will be able to achieve your goal quite easily.
All the code from the book is here:
http://kinecttoolbox.codeplex.com/
Alternatively, there is an example here which pretty much is what you want to achieve:
http://www.codeproject.com/Articles/213034/Kinect-Getting-Started-Become-The-Incredible-Hulk
It is developed using the beta version of the SDK, but the same principles apply.
You can also check out the quick start videos here:
http://channel9.msdn.com/Series/KinectQuickstart
In summary, based on my own experience, I would spend some time going through the beginner examples, either in the videos or the book (I found the book very good), just to get familiar with how to set up a simple Kinect project and how the different parts of the SDK work.
When you have developed some throwaway apps with the Kinect, I would then try tackling your project (although the Incredible Hulk project above should get you most of the way there!). A sketch of the face tracking core loop follows below.
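To make the starting point more concrete, here is a minimal sketch of the face tracking loop using the FaceTracker class from the SDK's face tracking toolkit (Kinect for Windows SDK 1.x). The rendering side, drawing your 3DS Max mask with, say, XNA or WPF 3D, is left as a hypothetical RenderMask call.

```csharp
// Minimal sketch (Kinect for Windows SDK 1.x, C#) of the core loop:
// track the face, then use its pose to place a 3D mask over it.
// Rendering of the 3DS Max model is assumed to live elsewhere.
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.FaceTracking;

public class MaskOverlay
{
    private readonly KinectSensor sensor;
    private readonly FaceTracker faceTracker;

    public MaskOverlay(KinectSensor sensor)
    {
        this.sensor = sensor;
        // Requires color, depth, and skeleton streams to be enabled.
        this.faceTracker = new FaceTracker(sensor);
    }

    // Call once per frame with the latest color, depth, and skeleton data.
    public void Update(byte[] colorPixels, short[] depthPixels, Skeleton skeleton)
    {
        FaceTrackFrame frame = this.faceTracker.Track(
            this.sensor.ColorStream.Format, colorPixels,
            this.sensor.DepthStream.Format, depthPixels,
            skeleton);

        if (frame.TrackSuccessful)
        {
            // Head pose in camera space: feed these into your renderer's
            // world transform so the mask follows the user's face.
            Vector3DF position = frame.Translation; // meters
            Vector3DF rotation = frame.Rotation;    // pitch/yaw/roll in degrees
            // RenderMask(position, rotation);      // hypothetical rendering call
        }
    }
}
```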
Best of luck with your project
Is it possible to use the MS Ink API to analyze a scanned image of handwriting, or does it only work with tablet pen input?
If it is possible, where is the relevant documentation?
No, it's not possible.
Windows Ink is a technology that uses information from the writing process, such as pen up and down events and the direction of movement. This makes handwriting recognition relatively easy, but only because it has access to the live writing data.
Analyzing previously written handwriting is very different and much more difficult, and requires machine learning; in this case recognition works in much the same way it does for humans. Check out the developments in Intelligent Character Recognition and Intelligent Word Recognition. It requires a lot of processing power, which is why services such as Google Goggles have to send the image to a 'trained' machine, and even then it can't really read handwriting well. It's cutting-edge technology, not at all ready for mass deployment.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms698159(v=VS.85).aspx
This application demonstrates how you can build a handwriting recognition application. The Windows Vista SDK provides versions of this sample in C# and Visual Basic .NET as well. This topic refers to the Visual Basic .NET sample, but the concepts are the same across versions.
You should also investigate the main class, InkAnalyzer.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb969147(v=VS.85).aspx
This sample shows how to use Ink Analysis to create a form-filling application, where the form is based on a scanned paper form.
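For orientation, here is a minimal WPF sketch of the InkAnalyzer pattern (assuming a reference to the Ink Analysis assemblies, e.g. IAWinFX.dll). Note that it consumes Stroke data captured from a digitizer, not pixels; in the form-filling sample above, the scanned form is only the background the user writes on, which is consistent with the first answer.

```csharp
// Minimal WPF sketch of InkAnalyzer usage. It works on live Stroke data
// (e.g. from an InkCanvas), not on scanned images of handwriting.
using System.Windows.Ink;

public static class InkRecognitionSketch
{
    public static string RecognizeStrokes(StrokeCollection strokes)
    {
        var analyzer = new InkAnalyzer();
        analyzer.AddStrokes(strokes);                // strokes from an InkCanvas
        AnalysisStatus status = analyzer.Analyze();  // synchronous analysis
        return status.Successful ? analyzer.GetRecognizedString() : string.Empty;
    }
}
```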
Does anyone know of an open source framework for augmented reality on BlackBerry, or a good tutorial for creating an augmented reality application from scratch?
Here is an interface prototype for the free LayarPlayer for third-party BlackBerry 7 apps: https://gist.github.com/1219438. Not sure if Wikitude will have a lib or not.
If you want to roll your own AR library (not recommended unless you have tons of time and energy), OpenGL ES is platform independent; just use ComponentCanvas to overlay it on top of the camera view.
BlackBerry OS 7 SDK apparently includes APIs to assist in developing augmented reality applications.
I am working on an OpenGL application for BlackBerry and I too have realised there are not many OpenGL tutorials for it. But you can always use Android ones. They are not really very different.
And I think we should take advantage of the new BlackBerry graphics card and CPU to create some exciting 3D applications for the platform.
You can find OpenGL basic samples on the BlackBerry website and in the BlackBerry SDK.
Note: all BlackBerry devices that run OS 7 have a dedicated graphics card and a 1.2 GHz CPU.