I have a unicolor image and I need to resize some parts of it at different scales. The desired result is shown in the image.
I've looked at applying a grid mesh in OpenGL ES, but I could not find any sample code or a more detailed tutorial.
I've also looked at imgwrap, but as far as I can see that library requires the Qt framework. Any ideas, sample code, or links for further reading will be appreciated, thanks.
The problem you are facing is called "image warping" in computer graphics. First you define some control points in the original image and corresponding points in a sample destination image. Then you calculate a dense displacement field (in this application also called a warping grid) and simply apply this field to the original image.
More practically: your best bet on iOS will be to create a 2D grid of vertices in OpenGL. Map your original image as a texture over this grid and deform the grid by displacing some of its points. Then simply take a screenshot of the resulting image with glReadPixels.
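For illustration, here is a minimal CPU-side sketch of the displacement-field idea in Python with OpenCV, using cv2.remap in place of an OpenGL grid; the filename, band position, and strength are made-up values:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")  # placeholder filename
h, w = img.shape[:2]

# Start from the identity mapping: each output pixel samples itself.
map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))

# Deform the field: compress sampling inside a horizontal band around
# the middle row, which locally enlarges that part of the image.
center, strength, radius = h / 2.0, 0.5, h / 4.0
dy = map_y - center
band = np.abs(dy) < radius
map_y[band] = center + dy[band] * (1.0 - strength * (1.0 - np.abs(dy[band]) / radius))

warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("warped.png", warped)
```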
I do not know of any CIFilter that implements displacement-field mapping of this kind.
UPDATE: I also found example code that uses 8 control points to morph images with OpenCV: http://engineeering.blogspot.it/2008/07/image-morphing-with-opencv.html
OpenCV has working ports to iOS, so you could simply experiment with the code from the link above on a target device as well.
I am not sure, but I suggest that for this type of work you could crop the relevant part of the image, apply your resize functionality to that cropped part, and then put it back in its original position. This is just my opinion; I am not sure whether it fits your case.
Here is also a link to a question that might be helpful in your case:
How to scale only specific parts of image in iPhone app?
A few ideas:
- Copy parts of the UIImage into different CGContexts using CGBitmapContextCreateImage(), move the parts around, scale them individually, and put them back together (sketched below).
- Use CIFilter effects on parts of your image, masking the parts you want to scale (see the Core Image Programming Guide).
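A minimal sketch of the first idea, written in Python/OpenCV rather than Core Graphics so it stays library-neutral; the filenames, region, and scale factor are placeholders:

```python
import cv2

img = cv2.imread("input.png")  # placeholder filename

# Region to enlarge (x, y, width, height) and its scale factor;
# bounds checks are omitted to keep the sketch short.
x, y, w, h, scale = 100, 80, 60, 40, 2

part = cv2.resize(img[y:y + h, x:x + w], (w * scale, h * scale),
                  interpolation=cv2.INTER_LINEAR)

# Paste the enlarged part back, centred on the original region.
nx, ny = x - (w * scale - w) // 2, y - (h * scale - h) // 2
img[ny:ny + h * scale, nx:nx + w * scale] = part

cv2.imwrite("output.png", img)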
I suggest you check out Brad Larson's GPUImage project on GitHub. Under Visual effects you will find filters such as GPUImageBulgeDistortionFilter, which you should be able to adapt to your needs.
You might want to try this example using thin plate splines and OpenCV. This, in my opinion, is the easiest-to-try solution that is online.
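For reference, OpenCV ships a thin-plate-spline transformer in its shape module (available via opencv-contrib-python); a rough sketch with placeholder control points might look like this:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")  # placeholder filename

# Control points: where features are in the source and where they
# should land in the output. The shape module wants (1, N, 2) arrays.
src_pts = np.float32([[50, 50], [200, 50], [50, 200], [200, 200]]).reshape(1, -1, 2)
dst_pts = np.float32([[60, 40], [190, 60], [40, 210], [210, 190]]).reshape(1, -1, 2)
matches = [cv2.DMatch(i, i, 0) for i in range(src_pts.shape[1])]

tps = cv2.createThinPlateSplineShapeTransformer()
# warpImage() uses backward mapping, so the usual trick is to estimate
# the transformation with the point sets swapped.
tps.estimateTransformation(dst_pts, src_pts, matches)
warped = tps.warpImage(img)
cv2.imwrite("warped.png", warped)
```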
You'll probably want to look at OpenGL shaders. What I'd do is load the image in as a texture, apply a fragment shader (a small program that lets you distort the image), render the result back to a texture, and either display that texture or save it as a bitmap.
It's not going to be simple, but there is some sample code out there for other distortions. Here's a swirl in shaders:
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
I don't think there is an easier way to do this without involving OpenGL, and you probably wouldn't get great performance doing it outside of the GPU either.
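To make the idea concrete without any GPU setup, here is what such a swirl distortion computes, prototyped per-pixel in Python/OpenCV; the center, radius, and angle constants are invented for illustration:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")  # placeholder filename
h, w = img.shape[:2]
cx, cy, radius, angle = w / 2.0, h / 2.0, min(w, h) / 2.0, 2.0

xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
dx, dy = xs - cx, ys - cy
r = np.sqrt(dx * dx + dy * dy)

# Rotation falls off from `angle` at the center to zero at `radius`.
theta = angle * np.clip(1.0 - r / radius, 0.0, 1.0) ** 2
cos_t, sin_t = np.cos(theta), np.sin(theta)
map_x = (cx + dx * cos_t - dy * sin_t).astype(np.float32)
map_y = (cy + dx * sin_t + dy * cos_t).astype(np.float32)

swirled = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("swirl.png", swirled)
```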
I have a three.js canvas to which I uploaded a 2730x4096 resolution image. I provide options to download images at different resolutions from the canvas. When I download a lower-resolution image, say 960x1440, I get a very blurry image with jagged edges.
I tried to increase the sharpness by setting anisotropy to the maximum (16 in my case) and also tried map.minFilter = THREE.LinearFilter, which sharpened the image further, but the edges are still jagged.
I tried running it through an FXAA anti-aliasing composer, but the anti-aliasing is not great; maybe I'm not passing the correct parameters.
All the while, antialiasing on the renderer is active (renderer.antialias = true).
When I do the same in OpenCV, using cv2.INTER_AREA interpolation to downsize the 2730x4096 image, I get a very sharp image with absolutely no jagged edges.
So I was wondering whether implementing INTER_AREA interpolation for the minFilter instead of THREE.LinearFilter might yield better results. Is there something already in three.js that I'm not utilizing, or, if I have to use this new interpolation method, how would I go about it?
Illustration:
PFA two files: one is downloaded directly from the three.js canvas at 960x1440 resolution (bottom), and the other is an image downsized from 2730x4096 to 960x1440 using OpenCV (top). In the OpenCV-downsized image, the details are sharper and the edges are cleaner than in the three.js image. I'm starting to believe this is because of the INTER_AREA interpolation used for downsizing in OpenCV. Is that replicable in three.js?
The original high resolution image can be downloaded from here
Not sure what your setup is, but by enabling THREE.LinearFilter, you're no longer using Mipmapping, which is very effective at downsampling large textures. The default is LinearMipMapLinearFilter but you have several other options, as outlined in the texture constants docs page. I've found that using anything but the default gives you crunchy textures.
Also, make sure you're enabling antialiasing with renderer.antialias = true; if you need it. Not sure if you're already doing this step, given the limited scope of the code in your question.
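One way to see why the mipmapped default holds up: minification through a mip chain behaves roughly like repeated 2x box downsampling, which is close to what INTER_AREA computes. A purely illustrative Python comparison, with placeholder filenames:

```python
# Illustrative only: approximate a mip chain by repeated 2x box
# downsampling, then compare with a direct INTER_AREA resize.
import cv2

img = cv2.imread("tile_2730x4096.png")

mip = img
while mip.shape[0] // 2 >= 1440:
    mip = cv2.resize(mip, (mip.shape[1] // 2, mip.shape[0] // 2),
                     interpolation=cv2.INTER_AREA)
mip = cv2.resize(mip, (960, 1440), interpolation=cv2.INTER_LINEAR)

direct = cv2.resize(img, (960, 1440), interpolation=cv2.INTER_AREA)
# The two results should look very similar; THREE.LinearFilter alone
# skips the mip chain entirely, which is what causes the aliasing.
```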
I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen some custom libraries like
https://github.com/BradLarson/GPUImage
but I could not find any camera lens customization example.
Any kind of suggestion would be helpful.
Thanks in advance.
You can do it with a custom shader written in OpenGL (or Metal, on iOS only), and then apply your shader to do interesting stuff like the images in the link above.
I suggest you take a look at how to use the OpenGL framework on iOS.
Basically the flow would look like this:
1. Use whatever framework to capture an image (even in real time).
2. Use some framework to modify the image. (The magic occurs here.)
3. Use something else to present the image.
You should learn how to obtain an OpenGL context, draw an image on it, write a custom shader, apply the shader, and get the output in order to "distort the image". Honestly, the hardest part is creating the "effect" you have in mind by describing it with a formula.
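To make "describing it with a formula" concrete: a lens-style bulge, for example, is just a radial remapping of sampling coordinates. A fragment shader would evaluate this per pixel; here is the same formula prototyped in Python/OpenCV, with invented constants:

```python
import cv2
import numpy as np

img = cv2.imread("face.png")  # placeholder filename
h, w = img.shape[:2]
cx, cy, radius, power = w / 2.0, h / 2.0, min(w, h) / 2.0, 2.0

xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
dx, dy = xs - cx, ys - cy
r = np.sqrt(dx * dx + dy * dy) / radius

# Inside the radius, sample closer to the centre (r ** (power - 1) < 1
# for power > 1), which magnifies, i.e. bulges, the middle.
scale = np.where(r < 1.0, r ** (power - 1.0), 1.0)
map_x = (cx + dx * scale).astype(np.float32)
map_y = (cy + dy * scale).astype(np.float32)

cv2.imwrite("bulged.png", cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR))
```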
This is quite similar to the Photoshop mesh warp (Edit -> Transform -> Warp). Basically you treat your image as a texture and render it onto a mesh (a Bezier patch): a grid that has been distorted into Bezier curves, while the texture coordinates are left as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit might work as well.
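A simplified CPU version of that idea, interpolating a coarse control grid bilinearly rather than as a true Bezier patch; the grid size and displacements are made up:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")  # placeholder filename
h, w = img.shape[:2]
rows, cols = 5, 5

# Control grid at its rest positions...
gx, gy = np.meshgrid(np.linspace(0, w - 1, cols).astype(np.float32),
                     np.linspace(0, h - 1, rows).astype(np.float32))
# ...then displace the middle node, as dragging a warp handle would.
# Note remap samples backwards, so moving the sample point shifts the
# visible content the opposite way; negate offsets for a forward "pull".
gx[2, 2] += w * 0.08
gy[2, 2] -= h * 0.05

# Upsample the sparse node coordinates into a dense per-pixel map.
map_x = cv2.resize(gx, (w, h), interpolation=cv2.INTER_LINEAR)
map_y = cv2.resize(gy, (w, h), interpolation=cv2.INTER_LINEAR)

warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("mesh_warped.png", warped)
```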
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. Core Image has basic facial recognition that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.
I would like some advice on how to approach this problem. I am making an app where users retrieve photos of faces from the camera roll or a camera capture (assume they are always portraits), and I want to make it appear as though the face images are talking (e.g., moving the pixels around the mouth up and down) using any known image manipulation techniques. The resulting animation of the photo will appear in a separate view. I have started learning OpenGL and researched OpenCV, Core Image, GPUImage, and other frameworks. I have been given a small timeframe, and generally my experience with graphics processing is limited. I would appreciate it if anybody could instruct me on what to do using any of the frameworks or libraries I have mentioned above.
Since all you need is some animation of the image, I don't think it is a good idea to move the pixels around as you said. It's very complicated, and the result of moving the pixels around might look bad.
A much simpler approach is to use a GIF image. All you need to do is create the talking animation as a GIF and then use it in your app.
Please refer to the following question.
I need to paint a square image, mapped or transformed to an unknown-at-compile-time four-sided polygon. How can I do this?
Longer explanation
The specific problem is rendering a map tile with a non-rectangular map projection. Suppose I have the following tile:
and I know the four corner points need to be here:
Given that, I would like to get the following output:
The square tile may be:
- rotated; and/or
- narrower at one end than at the other.
I think the second item means this requires a non-affine transformation.
Random extra notes
Four-sided? It is plausible that, to be completely correct, the tile should be mapped to a polygon with more than four points, but for our purposes and at the scale it is drawn, a square -> other four-cornered-polygon transformation should be enough.
Why preferably GDI only? All rendering so far is done using GDI, and I want to keep the code (a) fast and (b) requiring as few extra libraries as possible. I am aware of some support for transformations in GDI and have been experimenting with it today, but even so I'm not sure whether it is flexible enough for this purpose. If it is, I haven't managed to figure it out, and so I'd really appreciate some sample code. GDI+ is also OK since we use it elsewhere, but I know it can be slow, and speed is important here.
One other alternative is anything Delphi- / C++Builder-specific; this program is written mostly in C++ using the VCL, and the graphics in question are currently painted to a TCanvas with a mix of TCanvas methods and raw WinAPI/GDI calls.
Overlaying images: one final caveat is that one colour in the tile may be used for color-key transparency: that is, all the white (say) squares in the above tile should be transparent when drawn over whatever is underneath. Currently, tiles are drawn to square or axis-aligned rectangular targets using TransparentBlt.
I'm sorry for all the extra caveats that make this question more complicated than "what algorithm should I use?" But I will happily accept answers with only algorithmic information too.
You might also want to have a look at Graphics32.
The screenshot below shows what the transform demo in GR32 looks like.
Take a look at 3D Lab Vector graphics (especially "Football field" in the demo).
Another cool resource is AggPas with full source included (download)
AggPas is an open-source, free-of-charge 2D vector graphics library. It is a native Object Pascal port of the Anti-Grain Geometry library (AGG), originally written by Maxim Shemanarev in C++. AggPas doesn't depend on any graphics API or technology; basically, you can think of AggPas as a rendering engine that produces pixel images in memory from vectorial data.
Here is what the perspective demo looks like:
After transformation:
The general technique is described in George Wolberg's "Digital Image Warping". It looks like this abstract contains the relevant math, as does this paper. You need to create a perspective matrix that maps from one quad to another; the links above show how to build it. Once you have the matrix, you can scan your output buffer and perform the transformation (or possibly its inverse, depending on which one they give you), and that will give you points in the original image that you can copy from.
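If you can tolerate OpenCV as a dependency (the question prefers plain GDI, so treat this only as a reference implementation of the matrix math), the quad-to-quad mapping looks like this; the corner coordinates are placeholders:

```python
import cv2
import numpy as np

tile = cv2.imread("tile.png")  # placeholder filename
h, w = tile.shape[:2]

src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])             # square tile
dst = np.float32([[20, 10], [300, 40], [280, 320], [0, 300]])  # target quad

# Build the perspective (homography) matrix and warp with it.
M = cv2.getPerspectiveTransform(src, dst)
out = cv2.warpPerspective(tile, M, (320, 340))
cv2.imwrite("warped_tile.png", out)
```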
It might be easier to use OpenGL to draw a textured quad between the 4 points, but that doesn't use GDI like you wanted.
I was wondering what the steps would be to convert a photo into a pencil drawing. People usually suggest the following (sketched in code below):
1. invert the image (make a negative);
2. apply a Gaussian blur;
3. blend the two images using linear dodge or color dodge.
See here: Convert Image to Pencil Sketch
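A minimal Python/OpenCV sketch of that recipe; the kernel size and filenames are placeholders:

```python
import cv2

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
inverted = 255 - gray
blurred = cv2.GaussianBlur(inverted, (21, 21), 0)

# Color dodge: result = base / (1 - blend), done here with cv2.divide.
sketch = cv2.divide(gray, 255 - blurred, scale=256)
cv2.imwrite("pencil.jpg", sketch)
```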
Are there other methods? I'd be particularly interested in methods that emphasise the stroke of the pencil, like this iPhone app: Snap and Sketch.
I'd be very grateful for any suggestions of how to get started.
I think you will have to iterate through all the pixels in the image and implement the algorithm you mentioned in the question yourself. There is no default image filtering library on the iPhone (Core Image exists, but at this point only for the Mac). I think your options are:
1. A third-party library called ImageMagick exists, and these people seem to have ported it successfully to the iPhone. Something to ponder over.
2. Another simple image filtering library (specifically for the iPhone) does some basic image filtering; Gaussian blur is one of them.
3. Implement your own methods by going through each pixel in the image. This thread is very useful for anyone looking into image filtering on the iPhone; they implement some filters, and at the very least you will get information about how to go through every pixel of an image.
EDIT: Core Image is now present on the iPhone. I have not had a chance to play with it yet. This is the documentation.