OpenGL pixel manipulation for graphics - iOS

I want to simulate stroking a carpet: you would have a graphic of a furry carpet, and with your finger you could move around and stroke it. I need to shift pixels and create some fake distortion around where I am touching.
Anyone have any tips?
Firstly, I guess: do I have enough to work with, assuming I have one JPEG of the material? There is no skeleton or 3D file, just a flat image.

This can also be improved with 'fur rendering'.
Here are some examples:
http://www.ozone3d.net/benchmarks/fur/
http://www.xbdev.net/directx3dx/specialX/Fur/index.php
Or a newer demo from NVIDIA:
http://www.youtube.com/watch?v=2Fp5N-pOxKA - the fur shows up around 35 seconds in
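For context, a common technique behind demos like these is "shell" fur: the mesh (or, for a flat carpet, a quad) is drawn several times, each pass extruded slightly along the normals and alpha-tested against a fur density texture. A hedged GLSL sketch, with all attribute, uniform, and texture names illustrative:

    // Vertex shader for one fur "shell": the geometry is drawn several times,
    // each pass pushed a bit further out along the surface normal.
    attribute vec3 a_position;
    attribute vec3 a_normal;
    attribute vec2 a_texCoord;
    uniform mat4 u_mvp;          // combined model-view-projection matrix
    uniform float u_shellOffset; // grows each pass: 0.0, h, 2h, ...
    varying vec2 v_texCoord;

    void main() {
        vec3 extruded = a_position + a_normal * u_shellOffset;
        gl_Position = u_mvp * vec4(extruded, 1.0);
        v_texCoord = a_texCoord;
    }

    // Fragment shader: discard fragments where the density texture says
    // there is no hair, so the stack of shells reads as individual strands.
    precision mediump float;
    varying vec2 v_texCoord;
    uniform sampler2D u_furDensity; // noise texture: bright texels = hair
    uniform sampler2D u_diffuse;    // the carpet photo

    void main() {
        if (texture2D(u_furDensity, v_texCoord).r < 0.5) discard;
        gl_FragColor = texture2D(u_diffuse, v_texCoord);
    }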

Sounds like a typical task to be solved with OpenGL shaders.

As MrTJ says: shaders are your key here.
Apart from your diffuse texture, use a second texture as your "carpet" map that you modify. Maybe use it like a normal map, storing a directional vector per texel.
Sample your "carpet" map in your shader and distort however you like to create your desired carpet effect.

Saturn-like rings node in SceneKit

I am creating a model of Saturn and I'm having problems when creating the rings. I found this asset,
but when I try to set it as the diffuse, it projects like this:
How can I control the way a texture projects over a geometry?
I found a solution: after replacing the cylinder with a torus and rotating the image 90 degrees, Xcode did the mapping itself.
But there must be a better way.
This isn't specifically a SceneKit or iOS issue; the same would apply in any 3D package.
You can control the way a texture projects over a geometry by using UV mapping. In practice, that means you map the vertices and faces of the model onto the texture in software such as Blender. The texture you are using now is meant to be tiled, but because the lines on it are perfectly straight, it will never look optimal.
To save yourself some trouble, use a texture that shows the entire ring from the top/above.
I think the best way is to use SCNTube.

How to make custom camera lens effects in iOS

I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen custom libraries like
https://github.com/BradLarson/GPUImage
but I can't find any camera lens customization example.
Any kind of suggestions would be helpful.
Thanks in advance
You can do it with a custom shader written in OpenGL (or Metal, which is iOS-only); you can then apply your shader to do interesting stuff like the images in the link above.
I suggest you take a look at how to use the OpenGL framework in iOS.
Basically the flow would look like this:
Use whatever framework to capture an image (even in real time).
Use some framework to modify the image. (The magic occurs here.)
Use another piece to present the image.
You should learn how to obtain an OpenGL context, draw an image onto it, write a custom shader, apply the shader, and get the output in order to "distort the image". Honestly, the hardest part is creating the "effect" in your mind by describing it with a formula.
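To make the "formula" part concrete, here is a minimal GLSL fragment-shader sketch of one such effect, a bulge centered on a touch point. All the uniform names (u_image, u_center, u_radius) are illustrative, not from any particular library:

    // Fragment shader sketch: "bulge" lens distortion around a touch point.
    precision mediump float;

    varying vec2 v_texCoord;
    uniform sampler2D u_image;  // the captured camera frame
    uniform vec2 u_center;      // touch point in UV space (0..1)
    uniform float u_radius;     // radius of the effect in UV units

    void main() {
        vec2 offset = v_texCoord - u_center;
        float d = length(offset);
        if (d < u_radius) {
            // Shrink the sampling offset near the center, which magnifies
            // the image there; the factor reaches 1.0 at the rim, so the
            // effect blends smoothly into the undistorted surroundings.
            float k = d / u_radius;
            offset *= k * k;
        }
        gl_FragColor = texture2D(u_image, u_center + offset);
    }

Swapping out the formula inside the if-block is where each "lens" would differ.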
This is quite similar to Photoshop's mesh warp (Edit->Transform->Warp). Basically you treat your image as a texture and then render it onto a mesh (a Bezier patch): a grid that has been distorted into Bezier curves, while you leave the texture coordinates as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit might work as well.
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. CoreImage has basic facial recognition that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.

iOS: how to replace a color from the camera preview with a texture

I would like to do something like this:
Have the camera on, tap on the screen to get the color of that area, and then replace that color with a texture. I have done something similar by replacing the color on the screen with another color (though that is still not working right), but replacing it with a texture is more complex, I think.
So please, can somebody give me a hint on how I can do this?
Also, on how to create the texture.
Thank you,
Alex
Basically you will want to do this with a boolean operation in the fragment shader.
You'll need to feed two textures to the shader: one is the camera image, the other is the replacement image. Then you need a function which determines whether the per-fragment color from the camera texture is within a certain color range (which you choose), and depending on that, shows either the camera texture or the other texture.
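A minimal GLSL sketch of that test, assuming the key color and threshold come from your tap-handling code (all names here are illustrative):

    // Fragment shader sketch: replace pixels near a picked color with a texture.
    precision mediump float;

    varying vec2 v_texCoord;
    uniform sampler2D u_camera;      // live camera frame
    uniform sampler2D u_replacement; // texture to paint in
    uniform vec3 u_keyColor;         // color sampled at the tap location
    uniform float u_threshold;       // how close a pixel must be to match

    void main() {
        vec4 cam = texture2D(u_camera, v_texCoord);
        vec4 rep = texture2D(u_replacement, v_texCoord);
        // Distance in RGB space decides whether this fragment "matches":
        // 1.0 when within the threshold, 0.0 otherwise.
        float match = step(distance(cam.rgb, u_keyColor), u_threshold);
        gl_FragColor = mix(cam, rep, match);
    }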
Your question is a bit vague; you should try to break it down into smaller problems. The tricky part, if you haven't done this before, is getting the OpenGL boilerplate code right.
You need to know:
how to write, compile, and use basic GLSL shaders
how to load images into OpenGL textures and use them in your shaders (search for sampler2D)
A good first step is the following:
figure out how to show a texture as a flat fullscreen image using 2D geometry. You'll need to render two triangles and map the texture's coordinates (UV) onto the triangle points; a sketch of the shader pair for that step follows the tutorial link below.
If you follow this tutorial, you'll be able to do the thing you want:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
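As a sketch of that first step, here is a minimal pass-through shader pair for a fullscreen textured quad (attribute and uniform names are illustrative):

    // Vertex shader: pass-through for a fullscreen quad made of two triangles.
    attribute vec2 a_position;  // clip-space corners, from (-1,-1) to (1,1)
    attribute vec2 a_texCoord;  // matching UVs, from (0,0) to (1,1)
    varying vec2 v_texCoord;

    void main() {
        v_texCoord = a_texCoord;
        gl_Position = vec4(a_position, 0.0, 1.0);
    }

    // Fragment shader: simply show the texture.
    precision mediump float;
    varying vec2 v_texCoord;
    uniform sampler2D u_texture;

    void main() {
        gl_FragColor = texture2D(u_texture, v_texCoord);
    }

Once this works, replacing the fragment shader with the color-range test above gives you the full effect.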

Colors not blending properly in OpenGL ES

I'm trying to render two (light) circles in 2D in OpenGL ES. The middle is white, the border is black. It works fine as long as they don't overlap:
But as soon as they do, I get this artifact:
I'm using glBlendFunc(GL_ONE, GL_ONE), with blending enabled of course.
What could be causing this? Is there a way to fix it?
I'd like them to blend more like this:
Thanks!
Are your circles currently linear gradients? You might get less of an artifact with a different falloff curve.
Based on your example, though, it looks like you want the maximum intensity of the two circles, not the sum of the intensities. It appears that Apple's OpenGL ES 2.0 implementation supports the EXT_blend_minmax extension, which lets you specify that the resulting fragment values should be the maximum of the incoming and existing values. Maybe try that?
The result you're seeing is exactly what should come out for linear gradients. Hint: open up Photoshop or the GIMP, draw two radial gradients into two layers, and set them to "Addition" blending mode. It will look exactly like your picture.
An effect like the one you desire is given by squared gradients. If your gradient is in the range 0…1, take the square of the value and draw that. You may apply a sqrt later if you want to linearize the individual gradients.
Note that this is not something easily done in the blending stage; it can be done using multiple passes, but then it's actually more straightforward to use a shader to combine passes from two FBOs.
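A minimal GLSL sketch of the squared-gradient suggestion, assuming each circle is drawn as a quad with UVs from 0 to 1 (names illustrative):

    // Fragment shader sketch: emit the radial gradient squared, so additive
    // blending (glBlendFunc(GL_ONE, GL_ONE)) sums squared values instead of
    // linear ones.
    precision mediump float;
    varying vec2 v_texCoord; // quad UVs; circle centered at (0.5, 0.5)

    void main() {
        // Linear gradient: 1.0 at the center, 0.0 at the rim.
        float g = 1.0 - clamp(2.0 * length(v_texCoord - vec2(0.5)), 0.0, 1.0);
        // Square before blending; an optional fullscreen sqrt pass afterwards
        // linearizes single circles, giving sqrt(a*a + b*b) where they overlap.
        gl_FragColor = vec4(vec3(g * g), 1.0);
    }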

Transform texture position on DirectX meshes

Is there a way that I can adjust the UV positions of textures on my mesh when rendering, without manually recalculating and changing all the UVs? If so, which would be more efficient? I read something about a transformation that might work, but it sounded like it might not work if I'm also transforming the position and size of my mesh. That would be a problem, so please take it into account when considering any possibilities.
(Sorry if I'm talking weird. I'm a bit sleep deprived at the moment.)
Thanks
"I'm a bit sleep deprived" - I suggest getting some sleep instead of programming with DirectX.
"Is there a way that I can adjust the UV positions of textures" - Yes: IDirect3DDevice9::SetTransform(D3DTS_TEXTURE0, &matrix). In shaders, the transformation must be done manually.
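For the shader route, here is a minimal vertex-shader sketch of such a texture transform, shown in GLSL for consistency with the other answers on this page (the HLSL version is analogous, and all names are illustrative):

    // Vertex shader sketch: transform UVs with a matrix instead of editing them.
    attribute vec3 a_position;
    attribute vec2 a_texCoord;

    uniform mat4 u_modelViewProjection; // positions the mesh as usual
    uniform mat3 u_texMatrix;           // scales/offsets/rotates the UVs

    varying vec2 v_texCoord;

    void main() {
        gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
        // The texture matrix is independent of the model transform, so moving
        // or scaling the mesh does not disturb the UV adjustment.
        v_texCoord = (u_texMatrix * vec3(a_texCoord, 1.0)).xy;
    }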
