How to make custom camera lens effects in iOS

I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen some custom libraries like
https://github.com/BradLarson/GPUImage
but could not find any camera lens customization example.
Any kind of suggestion would be helpful.
Thanks in advance.

You can do this with a custom shader written in OpenGL (or Metal, which is iOS-only), and then apply your shader to do interesting stuff like the images in the link above.
I suggest you take a look at how to use the OpenGL framework on iOS.
Basically the flow would look like this:
Use whatever framework to capture an image (even in real time).
Use some framework to modify the image. (The magic happens here.)
Use another framework to present the image.
You should learn how to obtain an OpenGL context, draw an image onto it, write a custom shader, apply the shader, and read back the output to "distort the image". Honestly, the hardest part is creating that "effect" in your mind and describing it with a formula.
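As a concrete starting point, here is a minimal sketch of the capture → filter → present flow using one of Core Image's built-in distortion filters instead of a hand-written shader; `photo` is a placeholder for whatever image your capture step produces:

```swift
import UIKit
import CoreImage

// Minimal sketch: apply a built-in Core Image distortion to an image.
// `photo` is a placeholder; in a real app it would come from the camera.
func applyBulge(to photo: UIImage) -> UIImage? {
    guard let input = CIImage(image: photo) else { return nil }

    let filter = CIFilter(name: "CIBulgeDistortion")!
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: input.extent.midX, y: input.extent.midY),
                    forKey: kCIInputCenterKey)
    filter.setValue(input.extent.width / 3, forKey: kCIInputRadiusKey)
    filter.setValue(0.8, forKey: kCIInputScaleKey)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()   // GPU-backed by default
    guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

A custom OpenGL or Metal shader replaces the middle step, but the surrounding capture and present code stays the same.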

This is quite similar to Photoshop's mesh warp (Edit > Transform > Warp). Basically you treat your image as a texture and then render it onto a mesh (a Bézier patch), i.e. a grid that has been distorted into Bézier curves, but you leave the texture coordinates as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit would work as well.
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. Core Image has basic facial detection that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.
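For the facial-detection part, Core Image's CIDetector gives you those positions directly; a minimal sketch (how you map the points onto mesh nodes is up to you):

```swift
import CoreImage
import UIKit

// Minimal sketch: find eye and mouth positions with Core Image's face detector.
// These points could then drive the nodes of a warp mesh.
func facePoints(in image: CIImage) -> [CGPoint] {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!
    var points: [CGPoint] = []
    for case let face as CIFaceFeature in detector.features(in: image) {
        if face.hasLeftEyePosition  { points.append(face.leftEyePosition) }
        if face.hasRightEyePosition { points.append(face.rightEyePosition) }
        if face.hasMouthPosition    { points.append(face.mouthPosition) }
    }
    return points
}
```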

Related

Augmented Reality – Lighting Real-World objects with Virtual light

Is it possible to import a virtual lamp object into the AR scene that projects a light cone which illuminates the surrounding space in the room and the real objects in it, e.g. a table, floor, walls?
For ARKit, I found this SO post.
For ARCore, there is an example of a relighting technique, and this source code.
It has also been suggested to me that post-processing can be used to brighten the whole scene.
However, these examples are from a while ago, and perhaps there is a newer or more straightforward solution to this problem?
At a low level, RealityKit is only responsible for rendering virtual objects and overlaying them on top of the camera frame.
If you want to illuminate the real scene, you need to post-process the camera frame.
Here are some tutorials on how to do post-processing:
Tutorial 1
Tutorial 2
If all you need is an effect like this, then all you need to do is add a CGImage-based post-processing effect for the virtual objects (the lights).
More specifically, add a bloom filter to the rendered image (you can also simulate a bloom filter with a Gaussian blur).
In this way, the code revolves entirely around UIImage and CGImage, so it's pretty simple 😎
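A minimal sketch of that bloom suggestion, assuming the rendered frame is already available as a CIImage; the first variant uses the built-in bloom filter, the second simulates it with a Gaussian blur composited back over the frame:

```swift
import CoreImage

// Option 1: Core Image's built-in bloom, applied to a rendered frame.
func bloom(_ frame: CIImage) -> CIImage {
    let filter = CIFilter(name: "CIBloom")!
    filter.setValue(frame, forKey: kCIInputImageKey)
    filter.setValue(10.0, forKey: kCIInputRadiusKey)
    filter.setValue(1.0, forKey: kCIInputIntensityKey)
    return filter.outputImage!.cropped(to: frame.extent)
}

// Option 2: simulate bloom by blurring the frame and adding it back on top,
// so bright areas bleed into their surroundings.
func fakeBloom(_ frame: CIImage) -> CIImage {
    let blurred = frame.applyingFilter("CIGaussianBlur",
                                       parameters: [kCIInputRadiusKey: 12.0])
    return blurred.applyingFilter("CIAdditionCompositing",
                                  parameters: [kCIInputBackgroundImageKey: frame])
        .cropped(to: frame.extent)
}
```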
If you want to be more realistic, consider using the depth map provided by LiDAR to calculate which areas can be illuminated, for more detailed brightness.
Or, if you're a true explorer, you can use Metal to create a real-world digital-twin point cloud in real time to simulate the occlusion of light.
As of 2021, there is nothing new in relighting techniques based on 3D compositing principles. At the moment, when you're working with RealityKit or SceneKit, you have to implement the relighting functionality yourself with the help of two additional render passes (an RGB pass is always needed): a normals pass and a point-position pass. Both AOVs must be 32-bit.
However, in the near future, when Apple engineers finally implement texture capturing in Scene Reconstruction, any inexperienced AR developer will be able to apply a relighting procedure.
Watch this Vimeo video to find out how relighting can be achieved in The Foundry's NUKE.
A crucial point here, when implementing the relighting effect, is the presence of a LiDAR scanner (or an iToF sensor if you're using ARCore). In other words, today's relighting solution for iOS is Metal + RealityKit.

Can Metal dynamically generate a 2D texture at run-time?

I'm working on a university project for which I want to texture meshes, generated using the LiDAR sensor on recent Apple devices, at run-time.
I've read some research papers about texturing 3D reconstructions, and I want to try a 'simple' approach where mesh faces (triangles) are transformed into the image space of key-frames that were taken with the device's camera. Then I want to perform a pixel lookup at the position of the triangle in the image and add that part of the image to a texture map. If I keep track of all triangles in the mesh and determine their corresponding colors in a relevant key-frame, I should be able to infer the texture / color of all triangles in the mesh.
Here is a demonstration of the aforementioned principle:
I need to do this natively in Swift and therefore I will be using Metal. However, I have not worked with Metal before, and my question is whether it is possible to "stitch together" a texture from different image sources dynamically at run-time?
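For what it's worth, Metal does support this: a texture created with render-target usage can be drawn into repeatedly, accumulating patches from different sources. A minimal sketch of the setup described in the question (the pipeline state and the draw calls that copy projected triangles from key-frame images are omitted):

```swift
import Metal

// Minimal sketch: create a texture that can be rendered into at run-time,
// so triangle patches from different key-frames can be accumulated into it.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                    width: 2048,
                                                    height: 2048,
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]   // draw into it, then sample from it
let atlas = device.makeTexture(descriptor: desc)!

let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].texture = atlas
pass.colorAttachments[0].loadAction = .load   // use .clear on the very first pass
pass.colorAttachments[0].storeAction = .store

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass)!
// ... set a pipeline state and draw the projected triangles here,
//     sampling from the key-frame image bound as a source texture ...
encoder.endEncoding()
commandBuffer.commit()
```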

UIImage nonlinear resizing

I have a unicolor image and I need to resize some parts of it with different scales. The desired result is shown in the image.
I've looked at applying a grid mesh in OpenGL ES, but I could not find sample code or a more detailed tutorial.
I've also looked at imgwrap, but as far as I can see that library requires the Qt framework. Any ideas, sample code, or links for further reading would be appreciated, thanks.
The problem you are facing is called "image warping" in computer graphics. First you have to define some control points in the original image and corresponding points in a sample destination image. Then you have to calculate a dense displacement field (in this application also called a warping grid) and simply apply this field to the original image.
More practically, your best bet on iOS is to create a 2D grid of vertices in OpenGL, map your original image as a texture over this grid, and deform the grid by displacing some of its points. Then you simply take a screenshot of the resulting image with glReadPixels.
I do not know of any CIFilter that implements displacement field mapping of this kind.
UPDATE: I also found example code that uses 8 control points to morph images with OpenCV: http://engineeering.blogspot.it/2008/07/image-morphing-with-opencv.html
OpenCV has working ports to iOS, so you could simply experiment with the code linked above on a target device.
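The core idea of the grid approach, independent of the rendering API: vertex positions get displaced while texture coordinates stay on the undistorted grid. A sketch of building such a mesh, where the hypothetical `displace` callback stands in for your control-point logic:

```swift
import CoreGraphics

struct WarpVertex {
    var position: CGPoint   // displaced: where the vertex is drawn
    var texCoord: CGPoint   // NOT displaced: where the image is sampled
}

// Build an n x n warp grid over the unit square. `displace` is a hypothetical
// callback implementing your control-point / displacement-field logic.
func makeWarpGrid(n: Int, displace: (CGPoint) -> CGPoint) -> [WarpVertex] {
    var vertices: [WarpVertex] = []
    for row in 0...n {
        for col in 0...n {
            let uv = CGPoint(x: CGFloat(col) / CGFloat(n),
                             y: CGFloat(row) / CGFloat(n))
            vertices.append(WarpVertex(position: displace(uv), texCoord: uv))
        }
    }
    return vertices
}

// Example: scale up the central region while leaving the borders alone.
let grid = makeWarpGrid(n: 16) { p in
    let d = CGPoint(x: p.x - 0.5, y: p.y - 0.5)
    let r = sqrt(d.x * d.x + d.y * d.y)
    let s = 1.0 + 0.3 * max(0, 0.5 - r)   // stronger scaling near the center
    return CGPoint(x: 0.5 + d.x * s, y: 0.5 + d.y * s)
}
```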
I am not sure, but I suggest that if you want to do this type of work, you crop the relevant part of the image, apply your resize functionality to that cropped part, and then put it back in its original position. This is just my opinion; I'm not sure whether it works for your case.
I'll also give you a link to a question that might be helpful in your case:
How to scale only specific parts of image in iPhone app?
A few ideas:
Copy parts of the UIImage into different CGContexts using CGBitmapContextCreateImage(), move the parts around, scale them individually, and put them back together (a sketch follows this list).
Use CIFilter effects on parts of your image, masking the parts you want to scale. (Core Image Programming Guide)
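A minimal sketch of the first idea, with placeholder region values and assuming an image scale of 1:

```swift
import UIKit

// Minimal sketch: cut a region out of an image, scale it, and composite it
// back over the original. The rect values passed in are placeholders.
func scalePart(of image: UIImage, region: CGRect, scale: CGFloat) -> UIImage? {
    guard let cgImage = image.cgImage,
          let part = cgImage.cropping(to: region) else { return nil }

    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(at: .zero)                              // original as background
        let scaled = CGRect(x: region.midX - region.width  * scale / 2,
                            y: region.midY - region.height * scale / 2,
                            width:  region.width  * scale,
                            height: region.height * scale)
        UIImage(cgImage: part).draw(in: scaled)            // enlarged part on top
    }
}
```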
I suggest you check out Brad Larson's GPUImage project on GitHub. Under Visual effects you will find filters such as GPUImageBulgeDistortionFilter, which you should be able to adapt to your needs.
You might want to try this example using thin plate splines and OpenCV. This is, in my opinion, the easiest solution to try that is available online.
You'll probably want to look at OpenGL shaders. What I'd do is load the image in as a texture, apply a fragment shader (a small program that lets you distort the image), render the result back to a texture, and either display that texture or save it as a bitmap.
It's not going to be simple, but there is some sample code out there for other distortions. Here's a swirl in shaders:
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
I don't think there is an easier way to do this without involving OpenGL, and you probably wouldn't get great performance doing this outside of the GPU either.
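For a sense of scale, the shader itself is short. A minimal sketch of a swirl-style fragment shader, written here in the Metal Shading Language and compiled from a Swift string at run-time (the surrounding render pipeline and texture setup are omitted):

```swift
import Metal

// Minimal sketch: a swirl-style distortion as a Metal fragment shader.
let shaderSource = """
#include <metal_stdlib>
using namespace metal;

fragment float4 swirl(float4 pos [[position]],
                      texture2d<float> image [[texture(0)]],
                      constant float &strength [[buffer(0)]])
{
    constexpr sampler s(address::clamp_to_edge, filter::linear);
    float2 size = float2(image.get_width(), image.get_height());
    float2 d = pos.xy / size - 0.5;             // offset from the image center
    float angle = strength * (0.5 - length(d)); // twist falls off with radius
    float2x2 rot = float2x2(float2(cos(angle),  sin(angle)),
                            float2(-sin(angle), cos(angle)));
    return image.sample(s, rot * d + 0.5);
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try device.makeLibrary(source: shaderSource, options: nil)
let fragmentFunction = library.makeFunction(name: "swirl")
```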

OpenGL pixel manipulation for graphic

I want to simulate stroking a carpet: you would have a graphic of a furry carpet, and with your finger you could move around and stroke it. I need to shift pixels and create some fake distortion around where I am touching.
Anyone have any tips?
Firstly, I guess: do I have enough to work with, assuming I have one JPEG of the material? No skeleton or 3D file, just a flat image.
This can also be improved with 'fur rendering'.
I have some examples:
http://www.ozone3d.net/benchmarks/fur/
http://www.xbdev.net/directx3dx/specialX/Fur/index.php
or a newer demo from NVIDIA:
http://www.youtube.com/watch?v=2Fp5N-pOxKA (around 0:35)
Sounds like a typical task to be solved with OpenGL shaders.
As MrTJ says, shaders are your key here.
Apart from your diffuse texture, use a second texture as your "carpet" map that you modify. Maybe use it like a normal map, storing a directional vector per texel.
Use your "carpet" map in your shader and distort it however you like to create your desired carpet effect.

How can I create a corner pin effect in XNA 4.0?

I am trying to write a strategy game using XNA 4.0 with a dynamically generated map, and it's really difficult to create all the ground textures, since I'd have to distort them individually in Photoshop.
So what I want to do is create a flat image and then apply the distortion programmatically to simulate perspective, by moving the corners of the image.
Here is an example done in Photoshop:
How can I do that in XNA?
My answer isn't XNA-specific, as I've never actually used the library; however, the concept should still apply.
In general, the best way to get a good perspective effect is to actually supply 3D coordinates and transformations and let DirectX/OpenGL handle the rest. This has great benefits over attempting to do it yourself: specifically, ease of use, performance (much of the work is passed on to your graphics card), and perspective-correct texturing. And nothing's stopping you from doing 3D and 2D in the same scene, if that's a concern. There are numerous tutorials online for getting set up with 3D in XNA. I'd suggest heading over to MSDN.
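Not XNA, but for readers coming from the iOS threads above: the same corner-pin effect is a one-filter job with Core Image's CIPerspectiveTransform, which is a handy reference for what the 3D transform should produce. The corner coordinates here are placeholder values:

```swift
import CoreImage

// Minimal sketch: pin the four corners of a flat tile to simulate perspective.
func cornerPin(_ tile: CIImage) -> CIImage {
    let filter = CIFilter(name: "CIPerspectiveTransform")!
    filter.setValue(tile, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: 80,  y: 400), forKey: "inputTopLeft")
    filter.setValue(CIVector(x: 420, y: 400), forKey: "inputTopRight")
    filter.setValue(CIVector(x: 0,   y: 0),   forKey: "inputBottomLeft")
    filter.setValue(CIVector(x: 500, y: 0),   forKey: "inputBottomRight")
    return filter.outputImage!
}
```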