GPUImage smudge-tool-like effect - iOS

I am looking to achieve an effect in my app similar to the Smudge Tool, driven by user touches, and I've seen articles like this on how to implement it, if I port the code to iOS that is.
I've already integrated GPUImage into the app and it works well for filtering images. Can the same effect be achieved using the GPUImage library somehow?

I would think so, yes. You'd probably want to apply a Gaussian blur filter with a fairly small radius to a circular area under the user's finger, and apply it repeatedly along the path of the finger drag.
Blurs are pretty easy to apply as convolution filters, which are tailor-made for the GPU. I suggest searching on "blur convolution filter".
As for how to implement the UI for a smudge tool, you'll have some original work to do.
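Purely as a sketch, not working smudge code: a small Gaussian blur expressed as a GPUImage convolution might look like this. The touch handling, and confining the blur to a circular region under the finger (via something like GPUImageGaussianBlurPositionFilter or a custom shader), are left to you.

    #import "GPUImage.h"

    // A 3x3 Gaussian kernel (weights sum to 1). Re-applied to the
    // area under the finger as the touch moves, this soft blur
    // approximates the local smearing of a smudge tool.
    // inputImage is whatever UIImage you are editing.
    GPUImage3x3ConvolutionFilter *blur = [[GPUImage3x3ConvolutionFilter alloc] init];
    blur.convolutionKernel = (GPUMatrix3x3){
        {1.0f/16.0f, 2.0f/16.0f, 1.0f/16.0f},
        {2.0f/16.0f, 4.0f/16.0f, 2.0f/16.0f},
        {1.0f/16.0f, 2.0f/16.0f, 1.0f/16.0f}
    };
    UIImage *blurred = [blur imageByFilteringImage:inputImage];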

Related

Face image manipulation iOS

I would like some advice on how to approach this problem. I am making an app where users retrieve photos of faces from the camera roll or from camera capture (assume they are always portraits), and I want to make it appear as though the face images are talking (e.g. moving the pixels around the mouth up and down) using any known image manipulation techniques. The resulting animation of the photo will appear on a separate view. I have started learning OpenGL and have researched OpenCV, Core Image, GPUImage and other frameworks. I have been given a small timeframe, and my experience with graphics processing is limited. I would appreciate it if anybody could point me at what to do using any of the frameworks or libraries I have mentioned above.
Since all you need is some animation of the image, I don't think it is a good idea to move the pixels around as you describe. It's very complicated, and the result of moving the pixels around might look bad.
A much simpler approach is to use a GIF image. All you need to do is create the talking animation as a GIF and then use it in your app.
Please refer to the following question.
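If you go the GIF/frame-animation route, a minimal sketch using plain UIKit frame animation follows; the talking_N image names are hypothetical placeholders for your own exported frames.

    // Collect the animation frames exported from your GIF or
    // animation tool (names here are placeholders).
    NSMutableArray *frames = [NSMutableArray array];
    for (NSUInteger i = 0; i < 8; i++) {
        [frames addObject:[UIImage imageNamed:
            [NSString stringWithFormat:@"talking_%lu", (unsigned long)i]]];
    }

    UIImageView *faceView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    faceView.animationImages = frames;   // frames to cycle through
    faceView.animationDuration = 0.8;    // seconds per full cycle
    faceView.animationRepeatCount = 0;   // 0 = repeat forever
    [self.view addSubview:faceView];
    [faceView startAnimating];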

Auto-Detecting blurry regions of an image

I am working on images that are partially blurred in some sections. This is noise that should be taken care of, but here is the problem:
Are there methods to detect whether an image is blurred, or partially blurred in some sections? For instance, take a look at the sample image below:
You can see in the image that there are 3 sections that are visually blurred: bottom-left, near the center, and top-right. Now, is it possible to detect that some portion of an image is blurred, either programmatically or mathematically?
As lain_b pointed out, with an image like this you can use an edge detector and look for an absence of edges. I tried it on your image and it seems to work pretty well. First I used the kernel
[0,1,0,
1,-4,1,
0,1,0]
which is a simple edge detector. Its result was an edge map. Then I used a threshold to get a binary image, and then I closed and opened that image to clean it up.
This is obviously not a finished version; the top-right portion was not detected well at all. Perhaps you could improve it by blurring before thresholding, or by choosing better values for the threshold and for the radii of the opening and closing operations. A lot of the decisions you will need to make depend on the constraints you can put on your problem, but I think this technique will work for you.
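The steps above were presumably done in a desktop tool; purely as an illustration, the same Laplacian-then-threshold pipeline can be expressed on iOS with GPUImage (my substitution, not part of the original answer), using the same kernel:

    #import "GPUImage.h"

    GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];

    // The Laplacian edge-detection kernel from above.
    GPUImage3x3ConvolutionFilter *laplacian = [[GPUImage3x3ConvolutionFilter alloc] init];
    laplacian.convolutionKernel = (GPUMatrix3x3){
        {0.0f,  1.0f, 0.0f},
        {1.0f, -4.0f, 1.0f},
        {0.0f,  1.0f, 0.0f}
    };

    // Binarize the edge map; tune the threshold for your images.
    GPUImageLuminanceThresholdFilter *threshold =
        [[GPUImageLuminanceThresholdFilter alloc] init];
    threshold.threshold = 0.2;

    [source addTarget:laplacian];
    [laplacian addTarget:threshold];
    [source processImage];
    // Read the output back, or chain GPUImageClosingFilter /
    // GPUImageOpeningFilter for the morphological cleanup steps.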
Edit
If you are looking for blur detection on arbitrary images, you are going to have to investigate a wide variety of techniques. Things are much easier if you can make assumptions about your set of input images; without any assumptions I don't know what will work best for you. Here is some reading on the topic:
Image Blur Metrics
Research paper on using the Haar wavelet transform
Similar SO question; also look at the question that question links to
Blur detection is a very active research field and there is no one answer. You will just need to try all the methods you can find (the above were found by googling "detect blur in image").
This paper may be of some help. It does blur estimation (mostly for out-of-focus blur, though I think it covers other kinds of blur as well) in order to recreate a similarly blurred object in the image.
I think you should be able to use it to detect the blurred areas and how blurred they are. It should be especially relevant to your problem, as it is designed to work with real-world images.

Determine the corners of a sheet of paper with iOS 5 AVFoundation and Core Image in real time

I am currently building a camera app prototype which should recognize sheets of paper lying on a table. The catch is that it should do the recognition in real time, so I capture the camera's video stream, which in iOS 5 can easily be done with AVFoundation. I looked here and here.
They are doing some basic object recognition there.
I have found that using the OpenCV library in this real-time environment does not perform well.
So what I need is an algorithm to determine the edges of an image without OpenCV.
Does anyone have sample code snippets that lay out how to do this, or can you point me in the right direction?
Any help would be appreciated.
You're not going to be able to do this with the current Core Image implementation in iOS, because corner detection requires some operations that Core Image doesn't yet support there. However, I've been developing an open source framework called GPUImage that does have the required capabilities.
For finding the corners of an object, you can use a GPU-accelerated implementation of the Harris corner detection algorithm that I just got working. You might need to tweak the thresholds, sensitivities, and input image size to work for your particular application, but it's able to return corners for pieces of paper that it finds in a scene:
It also finds other corners in that scene, so you may need to use a binary threshold operation or some later processing to identify which corners belong to a rectangular piece of paper and which to other objects.
I describe the process by which this works over at Signal Processing, if you're interested, but to use this in your application you just need to grab the latest version of GPUImage from GitHub and make the GPUImageHarrisCornerDetectionFilter the target for a GPUImageVideoCamera instance. You then just have to add a callback to handle the corner array that's returned to you from this filter.
On an iPhone 4, the corner detection process itself runs at ~15-20 FPS on 640x480 video, but my current CPU-bound corner tabulation routine slows it down to ~10 FPS. I'm working on replacing that with a GPU-based routine which should be much faster. An iPhone 4S currently handles everything at 20-25 FPS, but again I should be able to significantly improve the speed there. Hopefully, that would qualify as being close enough to realtime for your application.
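To make the setup Brad describes concrete, here is a minimal sketch; the threshold value is a guess, and exact property names can shift between GPUImage versions:

    #import "GPUImage.h"

    GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc]
        initWithSessionPreset:AVCaptureSessionPreset640x480
               cameraPosition:AVCaptureDevicePositionBack];
    camera.outputImageOrientation = UIInterfaceOrientationPortrait;

    GPUImageHarrisCornerDetectionFilter *cornerFilter =
        [[GPUImageHarrisCornerDetectionFilter alloc] init];
    cornerFilter.threshold = 0.20;   // raise to reject weaker corners

    [cornerFilter setCornersDetectedBlock:^(GLfloat *cornerArray,
                                            NSUInteger cornersDetected,
                                            CMTime frameTime) {
        // cornerArray holds normalized (x, y) pairs, one per corner.
        for (NSUInteger i = 0; i < cornersDetected; i++) {
            CGFloat x = cornerArray[i * 2];
            CGFloat y = cornerArray[i * 2 + 1];
            // Decide here which corners belong to the sheet of paper.
        }
    }];

    [camera addTarget:cornerFilter];
    [camera startCameraCapture];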
I use Brad's GPUImage library to do that; the result isn't perfect, but it's good enough.
Among the detected Harris corners, my idea is to select (a rough sketch follows the list):
The one nearest the upper left as the top-left corner of the sheet
The one nearest the upper right as the top-right corner of the sheet
etc.
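A minimal sketch of that selection, assuming you already have the normalized cornerArray and cornersDetected values from the detection callback shown above:

    // Pick extreme corners from the candidates. Top-left is the
    // point minimizing x + y; top-right maximizes x - y; the two
    // bottom corners follow the same pattern with the signs flipped.
    CGPoint topLeft = CGPointZero, topRight = CGPointZero;
    CGFloat bestTL = CGFLOAT_MAX, bestTR = -CGFLOAT_MAX;
    for (NSUInteger i = 0; i < cornersDetected; i++) {
        CGPoint p = CGPointMake(cornerArray[i * 2], cornerArray[i * 2 + 1]);
        if (p.x + p.y < bestTL) { bestTL = p.x + p.y; topLeft  = p; }
        if (p.x - p.y > bestTR) { bestTR = p.x - p.y; topRight = p; }
    }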
@Mirco - Have you found a better solution?
@Brad - In your screenshot, what parameters do you use for the Harris filter to get just 5 corners detected? I get a lot more than that...

How can I create a corner pin effect in XNA 4.0?

I am trying to write a strategy game using XNA 4.0 with a dynamically generated map, and it's really difficult to create all the ground textures when each one has to be distorted individually in Photoshop.
So what I want to do is create a flat image and then apply the distortion programmatically to simulate perspective, by moving the corners of the image.
Here is an example done in Photoshop:
How can I do that in XNA?
My answer isn't XNA-specific, as I've never actually used the library; however, the concept should still apply.
In general, the best way to get a good perspective effect is to actually give 3d coordinates and transformations and let DirectX/OpenGL handle the rest. This has great benefits over attempting to do it yourself - specifically, ease of use, performance (much of the work is passed on to your graphics card), and perspective-correct texturing. And nothing's stopping you from doing 3d and 2d in the same scene, if that's a concern. There are numerous tutorials online for getting set up in the third dimension with XNA. I'd suggest heading over to MSDN.
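For intuition about why the 3D route gives you the distortion for free: under a simple pinhole projection with focal length f, each vertex maps to the screen as

    (x,\; y,\; z) \;\longmapsto\; \Bigl(\tfrac{f\,x}{z},\; \tfrac{f\,y}{z}\Bigr)

so tilting a textured quad so that its far edge sits at a larger z shrinks that edge on screen, which is exactly the trapezoid in the Photoshop example, and texture coordinates are interpolated perspective-correctly along the way.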

iPhone: How to convert a photo into a pencil drawing

I was wondering what the steps would be to convert a photo into a pencil drawing. People usually suggest to:
invert the image (make a negative)
apply a Gaussian blur
blend the two images above by linear dodge or color dodge
See here: Convert Image to Pencil Sketch
Are there other methods? I'd be particularly interested in methods which emphasise the stroke of the pencil, like this iPhone App: Snap and Sketch
I'd be very grateful for any suggestions of how to get started.
I think you will have to iterate through all the pixels in the image and implement the algorithm you mentioned in the question itself. There is no default image filtering library on the iPhone (Core Image exists, but only for the Mac). I think your options are:
A third-party library, ImageMagick, exists, and these people seem to have ported it successfully to the iPhone. Something to ponder over.
Another simple image filtering library (specifically for the iPhone) does some basic image filtering; Gaussian blur is one of the filters it offers.
Implement your own methods by going through each pixel in the image. This thread is very useful for anyone looking into image filtering on the iPhone; they implement some filters there, and at the very least you will learn how to go through every pixel of an image.
EDIT: Core Image is now present on the iPhone. I have not had a chance to play with it yet. This is the documentation.
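As a further option, not among those listed above: the question's three-step recipe maps one-to-one onto filters in the GPUImage library discussed elsewhere on this page. A hedged sketch, assuming inputImage is your source photo:

    #import "GPUImage.h"

    // original ----------------------------------.
    //                                             > color dodge -> sketch
    // original -> invert -> Gaussian blur -------'
    GPUImagePicture *photo = [[GPUImagePicture alloc] initWithImage:inputImage];

    GPUImageColorInvertFilter *invert = [[GPUImageColorInvertFilter alloc] init];
    GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
    GPUImageColorDodgeBlendFilter *dodge =
        [[GPUImageColorDodgeBlendFilter alloc] init];

    [photo addTarget:invert];
    [invert addTarget:blur];

    [photo addTarget:dodge];  // first blend input: the untouched photo
    [blur addTarget:dodge];   // second blend input: the blurred negative

    [photo processImage];
    // Read the result back from the dodge filter (the exact capture
    // call varies between GPUImage versions).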
