I'm developing an augmented reality iOS app with Vuforia. I need to replace the teapot (I'm working from the ImageTargets sample) with a UIWebView that will provide some links and some static text.
Is there a way to render a UIWebView as a texture with OpenGL-ES and draw it instead of the Teapot?
Well, yes, you can do that. I'm speaking from a Vuforia Unity standpoint, but a similar construction can be used on iOS and the logic stays the same.
Construct a cube and apply a texture with the text on it. For this you'll have to create the texture using OpenGL.
Then replace the teapot on top of your ImageTarget with this cube.
If your words need to be clickable, you can always use Virtual Buttons. I've explained the same in this question
Related
I am making an augmented reality app to demonstrate the features of a MacBook, and I used the Vuforia SDK.
Here is my problem:
1) I tried the Vuforia sample core features and used Image Targets. With image targets it shows only one image at a time. I attached the output in the image below.
2) My expectation is to show multiple texts or images while capturing the real MacBook, like in the image below.
Please guide me to achieve this.
The Vuforia iOS SDK uses OpenGL ES to load 3D objects, which is unfriendly to work with.
You can use SceneKit instead: put your objects in a scene, set up a rectangle node, and place the model at the four corners of the rectangle. When image tracking succeeds, load your scene.
How to use SceneKit with Vuforia? Check this: https://github.com/yshrkt/VuforiaSampleSwift
I'm surprisingly struggling a lot to display text with OpenGL ES 2.0. There are a ton of posts on Stack Overflow debating the subject, showing a few lines of code, or linking to articles from 2010 that target OpenGL ES 1.x (not compatible).
But they are quite vague, and to my knowledge there is no complete code or convenient way to display text with version 2.
Do you know if there is a modern way to display text? Like just adding a pod and writing something like this?
[font drawText:@"This is a text" size:12];
Thanks a lot in advance, any help would be very much appreciated.
EDIT 1: I have to use OpenGL ES 2.0, and I can't use something else, for internal reasons.
EDIT 2: I've found two libraries that do it:
FTGLES: it crashes at runtime when I try to use it.
https://github.com/chandl34/public/tree/master/personal/c%2B%2B/Font
This one is simple, but it's written for ES1, so I need to port the code to ES2.
EDIT 3: ES1 and ES2 are different: ES2 works with shaders.
Since you are targeting ES2, where everything goes through shaders, it might be hard to find a tool that does this for you. And even if you found one, it could conflict with your pipeline in many ways: binding its own shaders and buffers, changing blending or depth-buffer settings... Even then you might not have good control over how and where the text is drawn (on a 3D rotating box, for instance).
But from your question (its snippet) it seems all you really want is a 2D overlay with text, which would look exactly like a UILabel. If that's the case, then I suggest you actually use UILabel to draw these texts. You can easily add such views on top of the view that shows your OpenGL content.
In the other case, where you still need to draw text on a 3D object and want to do it easily, I still suggest UILabel: take a screenshot of it and push that into a new (or atlas) texture. Then you can draw it like any other object. UILabel will handle all fonts, alignment, colors, multiline text and wrapping, font size adjustment... So if you already have a system for drawing a texture in the scene, you are not far from a tool that draws text on screen, since a texture is all you need to transfer the data.
Nothing has changed since OpenGL ES 1. Text is usually displayed in a planar projection by creating quads that are textured from a font texture. There are many tutorials on this topic.
One example how to do it.
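As a rough sketch of that quad approach (all names here are illustrative, and it assumes a monospace font atlas laid out as a 16×16 grid of ASCII glyphs):

```javascript
// Build interleaved [x, y, u, v] vertices for a text string, assuming a
// monospace font atlas arranged as a 16x16 grid of glyphs indexed by
// ASCII code. Two triangles (six vertices) per glyph.
function buildTextQuads(text, glyphSize) {
  const verts = [];
  const cell = 1 / 16; // UV size of one atlas cell
  for (let i = 0; i < text.length; i++) {
    const code = text.charCodeAt(i);
    const u = (code % 16) * cell;          // left edge of the glyph in the atlas
    const v = Math.floor(code / 16) * cell; // top edge of the glyph in the atlas
    const x = i * glyphSize;               // advance one glyph width per character
    verts.push(
      x, 0, u, v + cell,
      x + glyphSize, 0, u + cell, v + cell,
      x, glyphSize, u, v,
      x + glyphSize, 0, u + cell, v + cell,
      x + glyphSize, glyphSize, u + cell, v,
      x, glyphSize, u, v
    );
  }
  // Upload with gl.bufferData and draw with gl.TRIANGLES using a trivial
  // textured-quad shader; the projection is a plain 2D ortho matrix.
  return new Float32Array(verts);
}
```

The only ES2-specific part is the shader pair that samples the font texture; the geometry itself is the same as in the ES1 tutorials.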
However, this is quite a lot of work when you start from scratch.
There might be a better way, but depending on what you plan to do it might not be suitable: you can mix UIKit with your OpenGL view and have text drawn with UIKit as an overlay (UILabel, UIButton, etc.).
I'm trying to make a very simple 3D model viewer in a PhoneGap app for use on an iPhone 4. I'm using three.js, which works fine when I make a simple website. The problem is that when I try it on the phone, the 3D object doesn't appear. Simple geometric shapes like a cube and a cylinder will load on the canvas, but .obj files won't.
I use an OBJLoader to bring in the .obj file and have all the relevant files in the same directory in the app, just in case. I think the problem might lie with using WebGL on iOS, but I'm not really sure.
Thanks very much for your help. If anyone has any suggestions for building a model viewer in phonegap for display in iOS I'd be delighted to hear them.
As very few mobile browsers support WebGL at the moment, I opted to use the canvas to render the 3D models. I used a simple web 3D object viewer called JSC3D to create a model viewer in PhoneGap on iOS. It can use WebGL, but I just went with rendering on the 2D canvas.
I tested my app on an iPhone 4 and the result was that the model took between 2 and 5 seconds to load up and when you go to rotate the object it takes some time to redraw it depending on how complex it is. While not the most satisfactory result it did do the job. I'm going to try it out on a more advanced phone in Android and I'll let you know the result.
I suggest you try Intel XDK if you are packaging for iPhone, but for Android use AIDE PhoneGap. Make sure you use only var renderer = new THREE.CanvasRenderer(); and avoid anything that has to do with WebGL, since it's not supported on most devices except the BB PlayBook and BB10.
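A common way to follow that advice is to feature-detect WebGL and fall back to the canvas renderer. A minimal sketch (function names are mine, and note that THREE.CanvasRenderer only exists in older three.js builds):

```javascript
// Return true if the device can actually create a WebGL context.
// Some browsers expose the API but fail at context creation, hence try/catch.
function supportsWebGL(canvas) {
  try {
    return !!(canvas.getContext('webgl') ||
              canvas.getContext('experimental-webgl'));
  } catch (e) {
    return false;
  }
}

// Pick the renderer class name to instantiate for this device.
function pickRenderer(canvas) {
  return supportsWebGL(canvas) ? 'THREE.WebGLRenderer' : 'THREE.CanvasRenderer';
}
```

In the app you would then do something like `var renderer = new THREE[name]();` with the chosen name, keeping the rest of the scene code identical for both paths.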
I think iOS rendering ability is better than Android's; some scenes render well on iOS but not on Android.
Usually the mobile browser has more rendering ability than a WebView inside an app. I used Firefox to render the three.js OBJ demo and it worked well, but when I used a WebView in the app, it rendered nothing.
I've made an Android app to render STL models. At first I used the mobile browser to render the scene; it did not render the full scene, but when I removed the shadow effect, it rendered. Then I tried a WebView with WebGLRenderer and CanvasRenderer; neither worked. Finally I replaced the WebView with the XWalkView from Crosswalk, a web engine that can be embedded in the app, and with the shadow effect turned off it rendered well.
You can refer to this answer for more info.
Here is the render result.
You definitely should not use the WebGL renderer of three.js, as it's not supported on iOS. Try the Canvas or SVG renderer.
I have developed a Canvas prototype of a game (kind of), and even though I have it running at a decent 30 FPS in a desktop browser, the performance on iOS devices is not what I hoped (lots of unavoidable pixel-level manipulation in nested x/y loops, already optimized as far as possible).
So, I'll have to convert it to a mostly native ObjC app.
I have no knowledge of ObjC or Cocoa Touch, but a solid generic C background. Now, I guess I have two options -- can anyone recommend one of them and whether they are at all possible?
1) Put the prototype into a UIWebView, and JUST do the pixel buffer filling loops in C. Can I get a pointer to a Canvas pixel array living in a web view "into C", and would I be allowed to write to it?
2) Make it all native. The caveat here is that I use quite a few 2D drawing functions too (bezierCurveTo etc.), and I wouldn't want to recode those, or find drawing libraries. So, is there a Canvas-compatible drawing API available in iOS that can work outside a web view?
Put the prototype into a UIWebView, and JUST do the pixel buffer filling loops in C
Nah. Then just embed a web view into your app and continue coding in JavaScript. It's already JITted so you don't really have to worry about performance.
Can I get a pointer to a Canvas pixel array living in a web view "into C"
No, not directly. If you are a hardcore assembly hacker and reverse engineer, then you may be able to do it by looking at the memory layout and call stack of a UIWebView drawing to a canvas, but it's just nonsense.
and would I be allowed to write to it?
Once you program your way down to that, I'm sure you would.
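There is no raw pointer, but one practical compromise (my own suggestion, not part of the answer above) is to serialize the canvas pixels on the JavaScript side with getImageData and let native code pull the string across the bridge via UIWebView's stringByEvaluatingJavaScriptFromString:. A sketch of the JS half:

```javascript
// Encode a flat RGBA byte array (as produced by getImageData().data)
// as base64 so it can be handed to native code as a plain string.
function serializePixels(rgba) {
  let out = '';
  for (let i = 0; i < rgba.length; i++) {
    out += String.fromCharCode(rgba[i]); // one char per byte, values 0..255
  }
  return btoa(out); // base64 keeps the payload safe to pass as a string
}

// Usage inside the web view (sketch):
// const ctx = canvas.getContext('2d');
// const data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
// window.pixelPayload = serializePixels(data);
```

The native side would evaluate `window.pixelPayload` and base64-decode it into a C buffer. This copies rather than shares the buffer, so it only makes sense for occasional transfers, not per-frame updates.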
Make it all native.
If you wish so...
The caveat here is that I use quite a few 2D drawing functions too (bezierCurveTo etc.), and I wouldn't want to recode those, or find drawing libraries.
You wouldn't have to do either of those; instead you could just read the documentation of the uber awsum graphix libz called CoreGraphics. It comes with iOS by default (as well as OS X, for the record).
So, is there a Canvas-compatible drawing API available in iOS that can work outside a web view?
No, I don't know of one.
Translating your JavaScript code to Objective-C sounds like a daunting task. How about a 3rd option where you don't have to change your JavaScript code?
http://impactjs.com/ejecta
Ejecta is like a Browser without the Browser. It's specially crafted for Games and Animations. It has no DIVs, no Tables, no Forms – only Canvas and Audio elements. This focus makes it fast.
Yes, it's open source
https://github.com/phoboslab/Ejecta
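Because Ejecta implements the standard Canvas 2D API, drawing code can stay untouched. For example, a frame function like this (illustrative, not from the original answer) runs both in a browser and in Ejecta:

```javascript
// Minimal render-loop body using only standard Canvas 2D calls, so the
// same code runs in a browser and inside Ejecta: clear, then draw a
// circle whose x position oscillates with time t (in seconds).
function drawFrame(ctx, t) {
  ctx.clearRect(0, 0, 320, 480);
  ctx.fillStyle = '#3399ff';
  ctx.beginPath();
  ctx.arc(160 + 100 * Math.cos(t), 240, 20, 0, 2 * Math.PI);
  ctx.fill();
}

// In the page (sketch):
// const ctx = document.getElementById('canvas').getContext('2d');
// (function loop(ms) { drawFrame(ctx, ms / 1000); requestAnimationFrame(loop); })(0);
```

Since the prototype already runs on canvas, porting it to Ejecta is mostly a matter of packaging, not rewriting.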
I want to develop a graphics application for the iPad (ported from some existing code). The application centers around creation/manipulation of a 2D image at the pixel level, but otherwise would look more like a 2D game than a standard 'business' application, so I could probably save a lot of time using a 2D game framework.
Ideally I think my best solution would be a 2D game engine that allows image manipulation. Does such an engine exist? I looked at the Corona SDK, but that doesn't support image processing.
Alternatively, do there exist any OpenGL frameworks that include a widget set suitable for creating bespoke controls?