I have obtained a 3D Wavefront model of the face inside a portrait, and that face is then transformed using mesh_numpy and saved as a 2D transformed image (both images are attached). Now I want to warp this transformed image back onto the original image. It is like a face swap where I have both the source and the target image. What is the best way to achieve this?
Original:
Transformed:
Update: in case anyone faces this issue in the future, I managed to achieve this by using a homography.
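The post does not include the code, but as a rough illustration of the homography idea, here is a sketch using Swift and Core Image rather than whatever toolkit the poster actually used. It assumes the 3×3 matrix `H` (mapping transformed-face pixel coordinates into the original portrait) has already been estimated elsewhere, e.g. from corresponding facial landmarks; the function and its names are purely illustrative.

```swift
import CoreImage
import simd

// Warp the transformed face back into the original portrait, given a
// homography H estimated from corresponding landmark points (assumption:
// H maps transformed-image coordinates to original-image coordinates).
func warpBack(transformedFace: CIImage, onto original: CIImage,
              with H: simd_double3x3) -> CIImage {
    // Apply the homography to a single point (homogeneous coordinates).
    func map(_ p: CGPoint) -> CGPoint {
        let v = H * simd_double3(Double(p.x), Double(p.y), 1)
        return CGPoint(x: v.x / v.z, y: v.y / v.z)
    }

    let r = transformedFace.extent
    // CIPerspectiveTransform takes the warped positions of the four corners.
    let warp = CIFilter(name: "CIPerspectiveTransform", parameters: [
        kCIInputImageKey: transformedFace,
        "inputTopLeft":     CIVector(cgPoint: map(CGPoint(x: r.minX, y: r.maxY))),
        "inputTopRight":    CIVector(cgPoint: map(CGPoint(x: r.maxX, y: r.maxY))),
        "inputBottomLeft":  CIVector(cgPoint: map(CGPoint(x: r.minX, y: r.minY))),
        "inputBottomRight": CIVector(cgPoint: map(CGPoint(x: r.maxX, y: r.minY)))
    ])!

    // Composite the warped face over the original portrait.
    return warp.outputImage!.composited(over: original)
}
```

In a real face swap you would additionally mask the composite to the face region (and feather or seamlessly blend the edges) rather than pasting the full warped rectangle.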
I'm using CATiledLayer to render some 2D graphics in the draw(in:) callback.
The scene is composed of elements such as open and filled paths, images, etc., that are drawn procedurally using the painter's model. Some areas need to be blurred, and non-blurred graphics may then be drawn on top of them.
I believe that Gaussian blurs need a CIImage to apply to, but don't know what the best way is to create a CIImage in this scenario. I've spent a fair bit of time searching for a solution, but haven't come up with anything. I would like to avoid having to compose the scene using one or more offscreen bitmaps, and having to blit the result back to the CALayer.
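For context, here is an illustrative sketch of the kind of draw(in:) callback described above; the drawing content is a placeholder rather than the actual scene, and the marked line is where the blur would have to be applied.

```swift
import UIKit

// A CATiledLayer subclass that draws its scene procedurally, back to front
// (painter's model). Everything drawn here is placeholder content.
final class SceneTiledLayer: CATiledLayer {
    override func draw(in ctx: CGContext) {
        let tile = ctx.boundingBoxOfClipPath   // the tile being rendered

        // 1. Filled background shapes for this tile.
        ctx.setFillColor(UIColor(white: 0.9, alpha: 1).cgColor)
        ctx.fill(tile)

        // 2. Graphics that should end up blurred.
        ctx.setStrokeColor(UIColor.blue.cgColor)
        ctx.addEllipse(in: tile.insetBy(dx: 20, dy: 20))
        ctx.strokePath()
        // <-- the Gaussian blur would need to apply to everything drawn so
        //     far, which is the crux of the question.

        // 3. Sharp graphics drawn on top of the blurred area.
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.addRect(tile.insetBy(dx: 40, dy: 40))
        ctx.strokePath()
    }
}
```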
I have a unicolor image and I need to resize some parts of it with different scales. The desired result is shown in the attached image.
I've looked at applying a grid mesh in OpenGL ES, but I could not find any sample code or a more detailed tutorial.
I've also looked at imgwrap, but as far as I can see that library requires the Qt framework. Any ideas, sample code, or links for further reading would be appreciated, thanks.
The problem you are facing is called "image warping" in computer graphics. First you have to define some control points in the original image and the corresponding points in a sample destination image. Then you have to calculate a dense displacement field (in this application also called a warping grid) and simply apply this field to the original image.
More practically: your best bet on iOS will be to create a 2D grid of vertices in OpenGL. Map your original image as a texture over this grid and deform the grid by displacing some of its points. Then you simply read back the resulting image with glReadPixels.
I do not know of any CIFilter that implements displacement-field mapping of this kind.
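To make the grid idea concrete, here is a small sketch (in Swift, independent of any particular OpenGL setup) of building such a deformed grid: the texture coordinates stay fixed while the vertex positions are displaced by the field. All names are illustrative.

```swift
import simd

struct WarpVertex {
    var position: simd_float2   // where the vertex is drawn (displaced)
    var texCoord: simd_float2   // where it samples the original image (fixed)
}

/// `displacement` is the dense field described above: for a normalized grid
/// point it returns how far that point should move, in normalized units.
func makeWarpGrid(resolution n: Int,
                  displacement: (simd_float2) -> simd_float2) -> [WarpVertex] {
    var vertices: [WarpVertex] = []
    for row in 0...n {
        for col in 0...n {
            let uv = simd_float2(Float(col) / Float(n), Float(row) / Float(n))
            vertices.append(WarpVertex(position: uv + displacement(uv),
                                       texCoord: uv))
        }
    }
    return vertices
}

// Example field: bulge the centre of the image outwards.
let grid = makeWarpGrid(resolution: 32) { uv in
    let d = uv - simd_float2(0.5, 0.5)
    let r = simd_length(d)
    return r < 0.25 ? d * (0.25 - r) : .zero
}
```

The resulting vertices would then be rendered as indexed triangles with the original image bound as the texture, as described above.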
UPDATE: I also found example code that uses 8 control points to morph images with OpenCV: http://engineeering.blogspot.it/2008/07/image-morphing-with-opencv.html
OpenCV has working ports to iOS, so you could simply experiment with the code at the link above on a target device as well.
I am not sure, but I would suggest that for this type of work you crop the relevant part of the image, apply your resize feature to that cropped part, and then put it back at its original position. This is just my opinion; I am not sure whether it applies to your case.
Here is a link to a related question; please read it, as it might be helpful in your case:
How to scale only specific parts of image in iPhone app?
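As a rough sketch of that crop, resize, and put-back idea using UIKit and Core Graphics (the function and its names are illustrative, and for simplicity it assumes a scale-1 image so that point and pixel coordinates coincide):

```swift
import UIKit

// Enlarge `region` of `image` by `scale`, keeping the rest of the image as is.
func enlarge(region: CGRect, of image: UIImage, by scale: CGFloat) -> UIImage {
    // 1. Crop the region out of the bitmap.
    guard let cropped = image.cgImage?.cropping(to: region) else { return image }
    let part = UIImage(cgImage: cropped, scale: image.scale,
                       orientation: image.imageOrientation)

    // 2. Redraw the full image, then draw the cropped part back, scaled up
    //    around its original position.
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(at: .zero)
        let target = CGRect(x: region.midX - region.width  * scale / 2,
                            y: region.midY - region.height * scale / 2,
                            width:  region.width  * scale,
                            height: region.height * scale)
        part.draw(in: target)
    }
}
```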
A few ideas:
Copy parts of the UIImage into different bitmap contexts (CGBitmapContextCreateImage()), scale the parts individually, and put them back together.
Use CIFilter effects on parts of your image, masking the parts you want to scale (Core Image Programming Guide).
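For the second idea, one ready-made Core Image filter in this direction is CIBumpDistortion, which magnifies a circular region and blends it back into the surrounding pixels (not quite the masking approach suggested above, but a similar local-scaling effect); a minimal sketch with illustrative names:

```swift
import UIKit
import CoreImage

// Magnify a circular area of `image` around `center` (in pixel coordinates,
// origin at the bottom-left, as Core Image expects).
func bump(_ image: UIImage, at center: CGPoint, radius: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIBumpDistortion", parameters: [
              kCIInputImageKey:  input,
              kCIInputCenterKey: CIVector(cgPoint: center),
              kCIInputRadiusKey: radius,
              kCIInputScaleKey:  0.5        // > 0 enlarges, < 0 shrinks
          ]),
          let output = filter.outputImage else { return nil }

    let context = CIContext()
    guard let cg = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cg)
}
```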
I suggest you check out Brad Larson's GPUImage project on GitHub. Under Visual effects you will find filters such as GPUImageBulgeDistortionFilter, which you should be able to adapt to your needs.
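A hedged sketch of what that looks like, assuming the Objective-C GPUImage framework is bridged into Swift; the property names follow GPUImageBulgeDistortionFilter.h, so verify them against the version you link:

```swift
import GPUImage

let bulge = GPUImageBulgeDistortionFilter()
bulge.center = CGPoint(x: 0.5, y: 0.5)  // normalized image coordinates
bulge.radius = 0.25                     // fraction of the image affected
bulge.scale  = 0.5                      // > 0 bulges outward, < 0 pinches
// Feed it from a GPUImagePicture (for a still image) or a camera source,
// as in the framework's SimpleImageFilter and FilterShowcase samples.
```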
You might want to try this example using thin plate splines and OpenCV. In my opinion, it is the easiest solution to try that is available online.
You'll probably want to look at OpenGL shaders. What I'd do is load the image in as a texture, apply a fragment shader (a small program that lets you distort the image), render the result back to a texture, and either display that texture or save it as a bitmap.
It's not going to be simple, but there is some sample code out there for other distortions. Here's a swirl in shaders:
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
I don't think there is an easier way to do this without involving OpenGL, and you probably wouldn't find great performance in doing this outside of the GPU either.
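As an illustration of that approach (a simple magnification rather than the swirl from the link above), here is a small GLSL ES 2.0 fragment shader held as a Swift string; the texture and framebuffer setup around it is omitted, and the uniform names are placeholders:

```swift
let distortionFragmentShader = """
precision mediump float;

varying vec2 textureCoordinate;   // passed through from the vertex shader
uniform sampler2D inputTexture;
uniform vec2  center;             // normalized 0-1 coordinates
uniform float radius;
uniform float strength;           // 0..1, how strongly to magnify

void main() {
    vec2 offset = textureCoordinate - center;
    float d = length(offset);
    if (d < radius) {
        // Pull the sample coordinates toward the center to magnify the
        // region, fading the effect out toward the edge of the radius.
        float k = 1.0 - strength * (1.0 - d / radius);
        offset *= k;
    }
    gl_FragColor = texture2D(inputTexture, center + offset);
}
"""
```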
My problem is simple: I have to process each frame of a video. The process computes a zone to crop on the original frame. To get better performance, I have to downscale the original frame. At the moment this is done with a dedicated library, but it is slow. We are wondering whether it is possible to downscale the frame using OpenGL ES 2.0 GLSL.
David
If you're using AV Foundation to load the video from disk or to pull video from the camera, you could use my open source GPUImage framework to handle the underlying OpenGL ES processing for you.
Specifically, you can use a GPUImageCropFilter to crop out a selected region of the input video using normalized 0.0-1.0 coordinates in a CGRect. The FilterShowcase example shows how this works in practice for live video from the camera. With this, you don't need to touch any manual OpenGL ES API calls if you don't want to.
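A short sketch of that pipeline from Swift, assuming the Objective-C GPUImage framework is bridged in; class and method names follow the framework's headers, so check them against the version you use:

```swift
import UIKit
import AVFoundation
import GPUImage

// Live camera -> crop filter -> on-screen view.
let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                 cameraPosition: .back)
// Keep the centre half of each frame; the crop region is a CGRect in
// normalized 0.0-1.0 coordinates, as described above.
let crop = GPUImageCropFilter(cropRegion: CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5))
let filteredView = GPUImageView(frame: UIScreen.main.bounds)

camera.addTarget(crop)
crop.addTarget(filteredView)
camera.startCameraCapture()
```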
Finally, I will use a framebuffer object to render my texture. I will set the viewport to the desired size and render the texture as usual. To get the downsampled image back, I will use glReadPixels.
David
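Here is a sketch of that render-to-texture path using raw OpenGL ES 2.0 from Swift; error checking, the EAGL context, the shader program, and the actual textured-quad draw are omitted, and all names are illustrative:

```swift
import OpenGLES

// Render `sourceTexture` into a smaller texture attached to an FBO, with the
// viewport set to the reduced size, then read the downsampled pixels back.
func downscale(texture sourceTexture: GLuint,
               to width: GLsizei, _ height: GLsizei) -> [GLubyte] {
    // 1. Destination texture at the reduced size.
    var destTexture: GLuint = 0
    glGenTextures(1, &destTexture)
    glBindTexture(GLenum(GL_TEXTURE_2D), destTexture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    // 2. Attach it to a framebuffer object and shrink the viewport, so the
    //    full-size source gets drawn downsampled.
    var fbo: GLuint = 0
    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), destTexture, 0)
    glViewport(0, 0, width, height)

    // 3. Draw a full-screen quad sampling `sourceTexture` here, using the
    //    usual pass-through shader program (omitted).
    glBindTexture(GLenum(GL_TEXTURE_2D), sourceTexture)
    // ... glDrawArrays(...) for the two-triangle quad ...

    // 4. Read the downsampled image back.
    var pixels = [GLubyte](repeating: 0, count: Int(width) * Int(height) * 4)
    glReadPixels(0, 0, width, height, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &pixels)

    glDeleteFramebuffers(1, &fbo)
    glDeleteTextures(1, &destTexture)
    return pixels
}
```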
I have a complex pre-rendered scene which I would like to use as a backdrop in a 3D iPad game which uses a static camera.
On each frame redraw, the screen will be cleared to this background. The part I do not know how to do is setting the depth buffer to the depth values stored with this pre-rendered image, so that dynamic 3D objects respect the depth information in that image.
Is there any way to achieve this on an iPad using OpenGL ES 2.0?
I have looked into several approaches, but have not found anything suitable so far.