I have made an augmented reality app in XNA that displays a 3D model over an SLAR marker (Silverlight Augmented Reality Toolkit).
I would like to display a rectangle floating over the 3D model showing a texture, but I'm not sure what object to use.
Texture2D doesn't have any 3D-space rendering capabilities that I know of; is there anything else I can use?
The texture I would like to show is generated using the SetData method, so it needs to be a Texture2D or an object with a similar SetData method.
One solution is to render a textured quad.
This basically means you're drawing a rectangle in 3d space, and texturing it with whatever texture you'd like.
The link I supplied covers the basics of doing this.
Note that the link covers XNA 4.0 (I'm not sure whether anything changed from previous releases), but the sample also exists for earlier versions of XNA.
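For the actual API, follow the linked sample; the key idea is just a vertex format that carries a position and a texture coordinate. Purely to illustrate the shape of the data involved, here is a rough sketch of the same idea expressed in WebGL/JavaScript terms rather than XNA (the attribute and uniform names are invented, and it assumes an already-compiled shader program):

// two triangles forming a rectangle in 3D space, interleaved as x, y, z, u, v
function drawTexturedQuad(gl, program, texture) {
  const vertices = new Float32Array([
    -1, -1, 0,  0, 0,
     1, -1, 0,  1, 0,
    -1,  1, 0,  0, 1,
    -1,  1, 0,  0, 1,
     1, -1, 0,  1, 0,
     1,  1, 0,  1, 1,
  ]);
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

  gl.useProgram(program);
  const stride = 5 * 4; // five floats per vertex, four bytes per float

  const positionLoc = gl.getAttribLocation(program, 'a_position');
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, stride, 0);

  const uvLoc = gl.getAttribLocation(program, 'a_uv');
  gl.enableVertexAttribArray(uvLoc);
  gl.vertexAttribPointer(uvLoc, 2, gl.FLOAT, false, stride, 3 * 4);

  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.uniform1i(gl.getUniformLocation(program, 'u_texture'), 0);

  gl.drawArrays(gl.TRIANGLES, 0, 6);
}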
I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen some custom libraries like
https://github.com/BradLarson/GPUImage
but I can't find any camera lens customization example.
Any kind of suggestion would be helpful.
Thanks in advance.
You can do it with a custom shader written in OpenGL (or Metal, which is iOS-only); then you can apply your shader to do interesting stuff like the image in the link above.
I suggest you take a look at how to use the OpenGL framework in iOS.
Basically the flow would look like this:
Use whatever framework to capture an image (even in real time).
Use some framework to modify the image. (The magic occurs here.)
Use something else to present the image.
You should learn how to obtain an OpenGL context, draw an image onto it, write a custom shader, apply the shader, and read back the output in order to "distort the image". Honestly, the hardest part is taking the "effect" in your mind and describing it with a formula.
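To make the "formula" part concrete, here is one possible distortion sketched in GLSL (kept as a JavaScript string; the same GLSL runs under OpenGL ES 2.0 on iOS, only the host code differs). The uniform names and the bulge formula are invented for this example, and it assumes a pass-through vertex shader that supplies v_uv:

const bulgeFragmentShader = `
  precision mediump float;
  uniform sampler2D u_image;   // the captured camera frame
  uniform vec2 u_center;       // distortion center in 0..1 texture space
  uniform float u_strength;    // how hard pixels are pulled toward the center
  varying vec2 v_uv;

  void main() {
    vec2 offset = v_uv - u_center;
    float dist = length(offset);
    // the "formula": sample from a squeezed position near the center,
    // which makes that region look inflated; tweak the exponent for the feel
    vec2 warped = u_center + offset * pow(dist, u_strength);
    gl_FragColor = texture2D(u_image, warped);
  }
`;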
This is quite similar to the Photoshop mesh warp (Edit->Transform->Warp). Basically you treat your image as a texture and then render it onto a mesh (a Bézier patch), that is, a grid that has been distorted into Bézier curves, while leaving the texture coordinates as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit might work as well.
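As a rough sketch of the grid idea in plain JavaScript (the names and the simple radial falloff are made up; a real Bézier patch gives smoother control):

// Build an N x N grid: texture coordinates stay on the regular grid,
// positions get pulled toward a set of control points.
function buildWarpedGrid(size, controlPoints) {
  const positions = [];
  const uvs = [];
  for (let y = 0; y <= size; y++) {
    for (let x = 0; x <= size; x++) {
      const u = x / size;
      const v = y / size;
      uvs.push(u, v);          // UVs stay un-warped
      let px = u;
      let py = v;
      for (const cp of controlPoints) {
        // cp = { x, y, dx, dy, radius }: move nearby vertices by (dx, dy)
        const d = Math.hypot(u - cp.x, v - cp.y);
        const weight = Math.max(0, 1 - d / cp.radius);
        px += cp.dx * weight;
        py += cp.dy * weight;
      }
      positions.push(px, py);  // warped positions "pull" the image
    }
  }
  return { positions, uvs };
}

// Example: drag the area around the middle of the image up and to the right.
const grid = buildWarpedGrid(16, [{ x: 0.5, y: 0.5, dx: 0.1, dy: -0.05, radius: 0.3 }]);

Triangulate the grid with an index buffer and draw it with the image as its texture; leaving the texture coordinates regular is what produces the warp.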
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. Core Image has basic facial recognition that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.
I would like to do something like this:
Have the camera on and tap on the screen to get the color of that area, and then replace that color with a texture. I have done something similar by replacing the color on the screen with another color (that is still not working right, though), but replacing it with a texture is more complex, I think.
So please, can somebody give me a hint on how I can do this?
Also, how to create the texture.
Thank you,
Alex
Basically you will want to do this with a boolean operation in the fragment shader.
You'll need to feed two textures to the shader: one is the camera image, the other is the replacement image. Then you need a function that determines whether the per-fragment color from the camera texture is within a certain color range (which you choose), and depending on that, show either the camera texture or the other texture.
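As a rough sketch of what that shader could look like, written as a GLSL string (the uniform names and the simple RGB-distance test are choices made for this example):

const replaceFragmentShader = `
  precision mediump float;
  uniform sampler2D u_camera;       // live camera image
  uniform sampler2D u_replacement;  // texture to show where the color matches
  uniform vec3 u_targetColor;       // the color picked when the user tapped
  uniform float u_tolerance;        // how close a pixel must be to count as a match
  varying vec2 v_uv;

  void main() {
    vec4 cameraColor = texture2D(u_camera, v_uv);
    // the "boolean operation": is this pixel close enough to the target color?
    if (distance(cameraColor.rgb, u_targetColor) < u_tolerance) {
      gl_FragColor = texture2D(u_replacement, v_uv);
    } else {
      gl_FragColor = cameraColor;
    }
  }
`;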
Your question is a bit vague, so you should try to break it down into smaller problems. The tricky part, if you haven't done this before, is getting the OpenGL boilerplate code right.
You need to know:
how to write, compile and use basic GLSL shaders
how to load images into OpenGL textures and use them in your shaders (search for sampler2D)
A good first step is to do the following:
figure out how to show a texture as a flat fullscreen image using 2D geometry. You'll need to render two triangles, and map the texture's coordinates (UV) onto the triangle points.
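For that first step, something along these lines is enough (coordinates are in clip space; the names are illustrative):

// two triangles covering the whole screen, interleaved as x, y, u, v
const fullscreenVertices = new Float32Array([
  -1, -1,  0, 0,
   1, -1,  1, 0,
  -1,  1,  0, 1,
  -1,  1,  0, 1,
   1, -1,  1, 0,
   1,  1,  1, 1,
]);

// pass-through vertex shader that hands the UVs to the fragment shader
const passthroughVertexShader = `
  attribute vec2 a_position;
  attribute vec2 a_uv;
  varying vec2 v_uv;
  void main() {
    v_uv = a_uv;
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
`;

Upload fullscreenVertices to a buffer, point the two attributes at it with the right stride and offsets, and pair it with a fragment shader like the one sketched above.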
If you follow this tutorial you'll be able to do what you want:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
What's the simplest way to render the "inside" of a polyhedron in WebGL?
In other words, I want an octagonal prism (like a cylinder) that's "hollow" and I want to be "standing" inside of it. That's all.
I am guessing at your problem, but typically polygons are drawn single-sided -- that is, they don't render when you see them from behind. This is more efficient in most cases.
If the polygons are disappearing when the camera is inside the solid prism, consider either rendering them two-sided (if you want to render BOTH inside and outside at different times), or just reverse the winding (vertex order) of your polygons, or reverse the one-sided polygonal culling state (back/front) of OpenGL to show you the backs rather than the fronts.
Most WebGL frameworks turn on culling so WebGL/OpenGL culls back facing triangles.
You can make sure culling is off with
gl.disable(gl.CULL_FACE);
You can enable culling with
gl.enable(gl.CULL_FACE);
If culling is on you can choose which faces WebGL culls. By default a back-facing triangle is one whose vertices go in clockwise order in screen space. For most 3D models, those are the faces you see when the camera is inside the model. To cull those back-facing triangles you can use
gl.cullFace(gl.BACK);
Or you can tell WebGL to cull front-facing triangles with:
gl.cullFace(gl.FRONT);
This article has a section on culling.
Depending on the framework you use, you might use something like below:
Draw the object.
Ensure culling is disabled (GL_CULL_FACE). In three.js you can use the DoubleSide material option for the object (see the sketch after this list).
Translate the viewer position (or the object) so that the viewpoint is inside the object, for example with three.sphere.translate.
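For example, with three.js (a rough sketch; exact constructor arguments can vary between three.js versions):

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// CylinderGeometry with 8 radial segments is an octagonal prism.
const geometry = new THREE.CylinderGeometry(5, 5, 10, 8);
// DoubleSide renders both faces; BackSide would render only the inside.
const material = new THREE.MeshNormalMaterial({ side: THREE.DoubleSide });
scene.add(new THREE.Mesh(geometry, material));

// The camera is at the origin, which is inside the prism.
camera.position.set(0, 0, 0);
renderer.render(scene, camera);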
Is there any demo or article about how to paint solid colors on a 3D model using the Delphi GLScene component or FireMonkey?
In the GLScene Demos\interface\hfpick demo, you have an example of painting on a height field.
Painting on a more complex mesh will require looking up the texture coordinates of the point under the cursor, which, depending on how the texture was mapped, may be more or less complex.
A quick hack for small textures that takes advantage of the GPU is to use the texture coordinates as an RGB color: you can do that in a fragment shader, writing u,v into R,G (for instance) and the texture index into the B channel. Render that to an off-screen buffer and look up the color of the point under the cursor; that will give you the texture and the coordinates, with trivial support for 256x256 textures (and even up to 4096x4096 if you use the texture index wisely).
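Sketched in GLSL/WebGL terms purely for illustration (GLScene would host an equivalent shader from Delphi; the names here are made up):

// u/v go into R/G and a per-material texture index into B.
const pickFragmentShader = `
  precision mediump float;
  uniform float u_textureIndex;  // 0..255, identifies the texture used by this part of the mesh
  varying vec2 v_uv;
  void main() {
    gl_FragColor = vec4(v_uv, u_textureIndex / 255.0, 1.0);
  }
`;

// After rendering with this shader into an off-screen buffer, reading the
// pixel under the cursor recovers the texture coordinates and texture index.
function readPickedTexel(gl, x, y) {
  const pixel = new Uint8Array(4);
  gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  return { u: pixel[0] / 255, v: pixel[1] / 255, textureIndex: pixel[2] };
}

With 8-bit channels this gives 256 distinct values per axis, which matches the 256x256 case mentioned above.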
To do it in a mathematically correct way (using the CPU), you'll have to modify the mesh RayCast methods so that instead of just finding the triangle being hit, they also return the texture coordinates of the point being hit.
This blog article, Visualizing wave interference using FireMonkey, published yesterday, may be of interest.
A couple of screenshots are included in the original post (source: embarcadero.com).
I'm relatively new to WebGL (and OpenGL too, for that matter), but in recent days I've filled up most of my time writing a little game with it. However, when I wanted to implement something like heat waves, or any sort of distortion, I was left stuck.
Now, I can make a texture ripple using the fragment shader, but I feel like I'm missing something when it comes to distorting the content behind an object. Is there any way to grab the color of a pixel that's already been rendered within the fragment shader?
I've tried rendering to a texture and then having the object's texture be that, but it appears that if you choose to render your scene to a texture, you cannot also render it to the screen. And beyond that, if you want to render to a texture, that texture must be a power of two (which many screen resolutions do not quite fit into).
Any help would be appreciated.
You're going to have to render to a texture and draw that texture onto the screen while distorting it. Also, there's no requirement that framebuffer objects must be of a power-of-two size in OpenGL ES 2.0 (which is the graphics API WebGL uses). But non-power-of-two textures can't have mipmapping or texture-wrapping.
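A rough sketch of such a render target, sized to the canvas rather than to a power of two (plain WebGL; the helper name is made up):

// Create a texture-backed framebuffer of any size. Non-power-of-two sizes are
// fine as long as wrapping is clamped and no mipmaps are requested.
function createRenderTarget(gl, width, height) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  // NPOT-safe parameters: clamp instead of wrap, linear instead of mipmapped filtering.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { framebuffer, texture };
}

Each frame, bind target.framebuffer, draw the scene, bind null to get back to the canvas, and then draw a fullscreen quad that samples target.texture through your distortion shader.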
I believe you can modify individual canvas pixels directly. That might be a good way to ripple a small area, but it might not be GPU-accelerated.