How to render ink on screen using DirectX?
I am aware of the InkManager in C# for rendering ink on a canvas.
I am trying to achieve similar functionality using SharpDX, but I don't have any sample code to refer to.
Is there any tutorial or sample code that explains how to render ink using DirectX?
Either C++ or C# is fine.
The closest technology in DirectX that would provide a basic infrastructure for "ink rendering" is Direct2D. This is probably what InkManager uses internally, at least for the drawing part. There is no handwriting recognition in Direct2D, and because Direct2D is a low-level API you will have to manage lots of details.
There are dozens of Direct2D samples in SharpDX (both desktop samples, where the drawing part is still valid on WinRT, and plain WinRT ones), but there is no dedicated "Ink" sample, so you will have to dig into this yourself.
Also, the only source of information for Direct2D is MSDN. There is no book and very few tutorials about this API.
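To give an idea of what the drawing part could look like, here is a minimal sketch in plain C++/Direct2D (the SharpDX API maps almost one-to-one onto it): one captured stroke is turned into a path geometry and drawn with round caps and joins so it reads as ink. The InkPoint struct and the pFactory/pRenderTarget parameters are assumptions for this illustration; the call belongs between BeginDraw and EndDraw, and error handling is omitted.

```cpp
#include <d2d1.h>
#include <vector>

// Hypothetical sample type: one captured pointer position.
struct InkPoint { float x, y; };

void DrawStroke(ID2D1Factory* pFactory,
                ID2D1RenderTarget* pRenderTarget,
                const std::vector<InkPoint>& points,
                float width)
{
    if (points.size() < 2) return;

    // Build a path geometry from the captured pointer samples.
    ID2D1PathGeometry* pGeometry = nullptr;
    pFactory->CreatePathGeometry(&pGeometry);

    ID2D1GeometrySink* pSink = nullptr;
    pGeometry->Open(&pSink);
    pSink->BeginFigure(D2D1::Point2F(points[0].x, points[0].y),
                       D2D1_FIGURE_BEGIN_HOLLOW);
    for (size_t i = 1; i < points.size(); ++i)
        pSink->AddLine(D2D1::Point2F(points[i].x, points[i].y));
    pSink->EndFigure(D2D1_FIGURE_END_OPEN);
    pSink->Close();
    pSink->Release();

    // Round caps and joins give the stroke an ink-like appearance.
    ID2D1StrokeStyle* pStrokeStyle = nullptr;
    pFactory->CreateStrokeStyle(
        D2D1::StrokeStyleProperties(D2D1_CAP_STYLE_ROUND,
                                    D2D1_CAP_STYLE_ROUND,
                                    D2D1_CAP_STYLE_ROUND,
                                    D2D1_LINE_JOIN_ROUND),
        nullptr, 0, &pStrokeStyle);

    ID2D1SolidColorBrush* pBrush = nullptr;
    pRenderTarget->CreateSolidColorBrush(
        D2D1::ColorF(D2D1::ColorF::Black), &pBrush);

    // Draw inside a BeginDraw/EndDraw pair on the render target.
    pRenderTarget->DrawGeometry(pGeometry, pBrush, width, pStrokeStyle);

    pBrush->Release();
    pStrokeStyle->Release();
    pGeometry->Release();
}
```

A real ink renderer would also smooth the samples (e.g. with Bézier segments) and vary the width with pen pressure, but the geometry-plus-stroke-style approach above is the basic building block.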
Have a look at:
http://code.msdn.microsoft.com/windowsapps/XAML-SwapChainPanel-00cb688b
Specifically scenarios 2 and 4. I have used this example to render and scale ink strokes from C#.
Related
I'm using the Elm WebGL library found here to make WebGL graphics for my website. I would like to use certain graphics techniques, such as shadow mapping, which require the ability to use the results of operations performed on the graphics card: a write to a renderbuffer backed by a texture, if I recall my OpenGL ES terminology correctly, which is then used by the shader that draws to the screen.
Looking at the API provided, it doesn't look like this is possible, because the only thing in the API that can actually execute or hold the result of a WebGL pipeline/Entity is of type Element.
My question is whether it is possible to use techniques like shadow mapping and SSAO, which require more than one pass to draw the scene, with the standard Elm WebGL library, and how I might accomplish this.
Sadly, the answer is indeed: No, you cannot do multiple passes and generate textures using the graphics card yet. The WebGL library is pretty new, so this is a feature that was only requested for the first time 6 days ago on the elm-discuss mailing list.
The author of the WebGL library has yet to respond, but I expect the features described in the linked post will become available at some point.
I'm creating a simple Breakout style game and would like a simple way to display the score.
I've been doing some research and found several ways to do text in OpenGL ES, but most methods look fairly complicated.
This looks like it would do the trick, but I couldn't get it to work.
I've looked into FTGL and FreeType, but they look complicated.
I've also read that one can display a UILabel over the EAGLContext, but I'm not sure how that would fare in the performance department.
I could probably get any of these options to work, I'm just wondering what the best solution is for this situation.
For simple use cases like you're describing, on even vaguely modern hardware (i.e. iPhone 3GS and later, I think), the compositing penalty for layering UIKit/CoreAnimation content on top of OpenGL ES content is negligible. (You can see this if you run your app in Instruments with the "Color OpenGL ES fast path blue" option turned on.)
They say premature optimization is the root of all evil — it's pretty easy to try UILabel, see if it makes a significant difference to your app's performance, and look into third-party libraries and more complicated solutions only if it does.
(Also, it sounds like you might be trying to manage your own CAEAGLLayer. For common use cases, it's a lot easier to use GLKView, plus GLKViewController for animation.)
I'd recommend checking out the Print3D functionality of the PowerVR SDK's PVRTools framework. Print3D is free to use, cross-platform (iOS, Android, Linux, Windows, OS X etc.) and it efficiently renders text within OpenGL ES 1.x, 2.0 & 3.0 applications. The SDK includes an example application with source that demonstrates how to use the Print3D framework (IntroducingPrint3D).
The PowerVR Graphics SDK can be downloaded for free from Imagination's website: http://www.imgtec.com/powervr/insider/sdkdownloads/index.asp
An overview of the source included in the SDK can be found here: http://www.imgtec.com/powervr/insider/sdkdownloads/learn_more.asp
I'm looking into a solution that will allow me to use OpenStreetMap data to render a 2D top-view, vector-based map in iOS, instead of using pre-rendered tiles from a server, similar to Apple and Google Maps in iOS 6+.
I've done extensive research on this matter but didn't find much information.
There are a number of iOS apps that do this, but no information on how they implement it. A couple of these apps are:
ForeverMap 2 by skobbler
Galileo Offline Maps
OffMaps 2
The first two apps work similarly to Apple and Google Maps: the map is drawn in real time whenever the zoom changes.
The last one appears to use a slightly different approach. It renders the vector data at specific zoom levels and creates tiles, which are then used like normal tiles downloaded from a tile server. So the rendering engine could actually be a tile source for the Route-Me library, except that instead of downloading the tiles it renders them on the fly.
The first method is preferred.
[Q] I guess one could switch between methods fairly easily once the OpenGL ES renderer is in place. I mean, you could use the renderer as a source for Route-Me to create tiles, or you could use it as a real-time drawer, similar to a game. Am I right?
The closest solution I found is OpenStreetPad. However, it is using Core Graphics instead of OpenGL ES, so the rendering is not hardware accelerated.
Mapbox has stated they are working on vector tiles and will probably provide an iOS solution for rendering, but it may use Mapnik, so I am not sure how efficient that will be. And there has been no ETA since mid-2013.
[Q] Do you know of any other libraries, papers, guides, examples, or some other useful information on how to approach this? Basically how to handle the OSM data and how to actually use OpenGL ES / GLKit to draw that data on the device. Maybe some of the people who have done it can share a few things?
Old question, but there's a new answer.
WhirlyGlobe-Maply will render tile based vector maps on iOS. http://mousebirdconsulting.blogspot.com/2014/03/vector-maps-introduction.html
The technology that powered skobbler's ForeverMap 2 and their current GPS Nav & Maps app is now available on a pay-per use basis. See their developer platform.
Note: they also have a free tier that can be used to develop/launch small apps.
They render the map using OpenGL and "vector data tiles". These vector data tiles contain information regarding road geometry (so you can have routing), POI data, and other map features (e.g. boundary limits).
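To give a feel for what such an engine does at the lowest level, here is a minimal, hypothetical sketch (not skobbler's actual code): it takes one road polyline decoded from a vector data tile, already projected to tile-local coordinates, and draws it with OpenGL ES 2.0. The program handle, the "position" attribute name, and the Vec2 struct are assumptions; a real renderer would tessellate wide roads into triangles, batch many features into shared VBOs, and style them per zoom level.

```cpp
#include <OpenGLES/ES2/gl.h>
#include <vector>

struct Vec2 { GLfloat x, y; };

// Draws one road polyline (already projected to tile-local coordinates)
// using an already-compiled shader program with a "position" attribute.
void DrawRoad(GLuint program, const std::vector<Vec2>& polyline)
{
    if (polyline.size() < 2) return;

    // Upload the projected polyline; in practice you would cache the VBO
    // instead of creating and deleting it every frame.
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 polyline.size() * sizeof(Vec2),
                 polyline.data(), GL_STATIC_DRAW);

    glUseProgram(program);
    GLint positionAttrib = glGetAttribLocation(program, "position");
    glEnableVertexAttribArray(positionAttrib);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE,
                          sizeof(Vec2), nullptr);

    // Thin roads can be drawn as line strips; wide roads are usually
    // extruded into triangle strips instead.
    glLineWidth(2.0f);
    glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)polyline.size());

    glDisableVertexAttribArray(positionAttrib);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDeleteBuffers(1, &vbo);
}
```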
There is a list of OSM-based applications for iOS. It also includes a few open source projects, for example Navit. Navit seems to render the map using SDL/OpenGL. See the Navit iOS wiki page for more information.
I am new to iOS programming and am interested in working with images. Basically, I want to be able to obtain the RGB values (0-255 per channel) of every pixel in a given image. What would be the best way of doing this? Would I need to use OpenGL, or something similar?
Thanks
If you want to work with images, get a copy of Apple's 'Quartz 2D Programming Guide'. If you want an even more detailed how-to, get a copy of the "Programming with Quartz" book on Amazon (it says Mac in the title as it predates iOS).
Essentially you are going to take images, draw them into bitmap contexts, and then read the RGBA values of each pixel back from the context's buffer.
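As a rough illustration of that approach (CopyRGBAPixels is a hypothetical helper; Core Graphics is a plain C API, so the same calls work from C, C++, Objective-C, or Swift): draw a CGImage into an RGBA bitmap context you own, then read the bytes.

```cpp
#include <CoreGraphics/CoreGraphics.h>
#include <cstdint>
#include <vector>

// 'image' is assumed to come from elsewhere, e.g. someUIImage.CGImage.
std::vector<uint8_t> CopyRGBAPixels(CGImageRef image,
                                    size_t& width, size_t& height)
{
    width  = CGImageGetWidth(image);
    height = CGImageGetHeight(image);

    const size_t bytesPerPixel = 4;
    const size_t bytesPerRow   = bytesPerPixel * width;
    std::vector<uint8_t> pixels(bytesPerRow * height);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        pixels.data(), width, height,
        8,              // bits per component
        bytesPerRow,
        colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    // Rendering the image into the context fills 'pixels' with RGBA data.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // pixels[(y * width + x) * 4 + 0..3] = R, G, B, A (premultiplied alpha),
    // with y counted from the top row.
    return pixels;
}
```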
If you want to use system resources to assist you in making certain types of changes to images, there is an OS X framework recently moved to iOS called the Accelerate framework, and it has a lot of functions for image manipulation (vImage).
For reading and writing images to the file system look at Apple's 'Image I/O Guide'. For advanced filtering there is Core Image, which allows you to apply filters to images.
EDIT: If you have any interest in really fast, GPU-accelerated code that performs sophisticated filtering, check out Brad Larson's GPUImage project on GitHub.
Is it possible to use the MS Ink API to analyze a scanned image of handwriting, or does it only work with tablet pen input?
If it is possible, where is the relevant documentation?
No, it's not possible.
Windows Ink is a technology that uses information from the writing process, such as pen up and down, as well as the direction of movement. This makes handwriting recognition relatively easy, but only because it can get the live writing data.
Analyzing previously written handwriting is very different and much more difficult, and requires machine learning. In this case it is essentially using the same method of recognition that humans use. Check out the developments in Intelligent Character Recognition and Intelligent Word Recognition. It requires a lot of processing power, and that's why services such as Google Goggles have to send the image to a 'trained' machine, and even then it can't really read handwriting well. It's cutting-edge technology, not at all ready for mass deployment.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms698159(v=VS.85).aspx
This application demonstrates how you can build a handwriting recognition application. The Windows Vista SDK provides versions of this sample in C# and Visual Basic .NET as well. This topic refers to the Visual Basic .NET sample, but the concepts are the same between versions.
You should also investigate the main class, InkAnalyzer.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb969147(v=VS.85).aspx
This sample shows how to use Ink Analysis to create a form-filling application, where the form is based on a scanned paper form.