I would like some advice on how to approach this problem. I am making an app where users will retrieve photos of faces from the camera roll or camera capture (assuming they are always portrait), and I want to make it appear as though the face images are talking (e.g., moving pixels around the mouth up and down) using any known image manipulation techniques. The resulting animation of the photo will appear on a separate view. I have started learning OpenGL and have researched OpenCV, Core Image, GPUImage, and other frameworks. I have been given a small timeframe and, in general, my experience with graphics processing is limited. I would appreciate it if anybody could instruct me on what to do using any of the frameworks or libraries I have mentioned above.
Since all you need is some animation of the image, I don't think it is a good idea to move the pixels around as you said. It's very complicated, and the result of moving the pixels around might look bad.
A much simpler approach is to use a GIF image. All you need to do is create the talking animation as a GIF and then use it in your app.
Please refer to the following related question.
I need to detect where objects (mostly people) are in relation to a wall. I can have a fixed-position camera in the ceiling, so I thought I would capture an image of the space with nothing in it, then use the difference between that and the current camera image to get an image containing just the objects. Then I think I can do blob detection to get the positions (I only need x).
Does this seem sound? I'm not very accomplished in OpenCV so am looking for some advice.
That would be one way of going about it, but it is not very robust: the video feed won't produce consistent, precise images, so the background will never be subtracted out cleanly, and people walking through the scene will occlude light and could also match parts of your background.
This process of removing the background from a video is simply dubbed "background subtraction" and there are built-in OpenCV methods for it.
OpenCV has tutorials on their site showing the basics, for both Python and C++.
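To make that concrete, here is a rough C++ sketch using one of those built-in methods (cv::createBackgroundSubtractorMOG2, assuming the OpenCV 3.x API), followed by simple contour-based blob detection to pull out x positions. The history length, thresholds, and minimum blob area are placeholder values you would have to tune for your ceiling camera:

```cpp
// Rough sketch: background subtraction + blob x-positions (assumes OpenCV 3.x, C++ API).
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                                  // fixed ceiling camera
    auto subtractor = cv::createBackgroundSubtractorMOG2(500, 16.0, true);

    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        subtractor->apply(frame, fgMask);                     // update model, get foreground mask

        // MOG2 marks shadows as 127; keep only confident foreground, then clean up noise.
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);
        cv::morphologyEx(fgMask, fgMask, cv::MORPH_OPEN,
                         cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));

        // Blob detection via contours; only the horizontal position is needed.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(fgMask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours) {
            if (cv::contourArea(c) < 500) continue;           // ignore tiny specks
            cv::Rect box = cv::boundingRect(c);
            int x = box.x + box.width / 2;                    // x position of the person
            (void)x;                                          // ...relate x to the wall here
        }
    }
    return 0;
}
```

The subtractor keeps updating its background model as frames come in, which is what makes it more robust than comparing against a single empty-room reference image.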
I have a unicolor image and I need to resize some parts of it at different scales. The desired result is shown in the image.
I've looked at applying a grid mesh in OpenGL ES, but I could not find any sample code or a more detailed tutorial.
I've also looked at imgwrap, but as far as I can see this library requires the Qt framework. Any ideas, sample code, or links for further reading will be appreciated, thanks.
The problem you are facing is called "image warping" in computer graphics. First you have to define some control points in the original image and corresponding points in a sample destination image. Then you have to calculate a dense displacement field (in this application also called a warping grid) and simply apply this field to the original image.
More practically: your best bet on iOS will be to create a 2D grid of vertices in OpenGL, map your original image as a texture over this grid, and deform the grid by displacing some of its points. Then you simply take a screenshot of the resulting image with glReadPixels.
I do not know any CIFilter that would implement displacement field mapping of this kind.
UPDATE: I also found some example code that uses 8 control points to morph images with OpenCV: http://engineeering.blogspot.it/2008/07/image-morphing-with-opencv.html
OpenCV has working ports to iOS, so you could simply experiment with the code from the link above on a target device as well.
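To make the displacement-field idea concrete, here is a rough C++ sketch using OpenCV's cv::remap: you fill two maps that tell every destination pixel where to sample from in the source. The radial "bulge" field below is invented purely for illustration (it locally magnifies one region); in practice you would interpolate the maps from your deformed control-point grid instead:

```cpp
// Rough sketch of displacement-field warping with cv::remap (OpenCV, C++).
// The radial bulge is only an illustrative field; real control-point warping
// would interpolate the maps from a deformed grid instead.
#include <opencv2/opencv.hpp>
#include <cmath>

cv::Mat bulgeWarp(const cv::Mat& src, cv::Point2f center, float radius, float strength) {
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
    for (int y = 0; y < src.rows; ++y) {
        for (int x = 0; x < src.cols; ++x) {
            float dx = x - center.x, dy = y - center.y;
            float d = std::sqrt(dx * dx + dy * dy);
            float scale = 1.0f;
            if (d < radius) {
                // Pull sample positions toward the center -> local magnification.
                scale = 1.0f - strength * (1.0f - d / radius);
            }
            mapX.at<float>(y, x) = center.x + dx * scale;     // where this pixel samples from
            mapY.at<float>(y, x) = center.y + dy * scale;
        }
    }
    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_REPLICATE);
    return dst;
}
```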
I am not sure, but I suggest that if you want to do this type of work, you crop the relevant part of the image, apply your resize functionality to that cropped part, and then put it back in its original position. I am just giving my opinion; I am not sure whether it applies to your case.
Here is also a link to a question that might be helpful in your case:
How to scale only specific parts of image in iPhone app?
A few ideas:
Copy parts of the UIImage into different CGContexts using CGBitmapContextCreateImage(), copy the parts around, scale them individually, and put them back together.
Use CIFilter effects on parts of your image, masking the parts you want to scale (Core Image Programming Guide).
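To make the first idea a little more concrete, here is a rough Core Graphics sketch (plain C, so it also compiles as C++). The rectangles and the 1.5x scale factor are made up, and coordinate flipping and retina scale are glossed over:

```cpp
// Rough sketch of idea 1: crop a region of a CGImage and redraw it scaled up
// on top of the original in a bitmap context. Rect and scale are arbitrary;
// Core Graphics' flipped coordinates and retina scale are not handled here.
#include <CoreGraphics/CoreGraphics.h>

CGImageRef scalePartOfImage(CGImageRef original, CGRect part) {
    size_t width  = CGImageGetWidth(original);
    size_t height = CGImageGetHeight(original);

    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, cs,
                                             kCGImageAlphaPremultipliedLast);

    // Draw the whole image, then the cropped part scaled up over it.
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), original);
    CGImageRef cropped = CGImageCreateWithImageInRect(original, part);
    CGRect enlarged = CGRectMake(part.origin.x - part.size.width * 0.25,
                                 part.origin.y - part.size.height * 0.25,
                                 part.size.width * 1.5,
                                 part.size.height * 1.5);
    CGContextDrawImage(ctx, enlarged, cropped);

    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGImageRelease(cropped);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    return result;   // caller releases with CGImageRelease
}
```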
I suggest you check out Brad Larson's GPUImage project on GitHub. Under Visual effects you will find filters such as GPUImageBulgeDistortionFilter, which you should be able to adapt to your needs.
You might want to try this example using thin plate splines and OpenCV. In my opinion, this is the easiest solution available online to try.
You'll probably want to look at OpenGL shaders. What I'd do is load the image in as a texture, apply a fragment shader (a small program that lets you distort the image), render the result back to a texture, and either display that texture or save it as a bitmap.
It's not going to be simple, but there is some sample code out there for other distortions. Here's a swirl in shaders:
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
I don't think there is an easier way to do this without involving OpenGL, and you probably wouldn't find great performance in doing this outside of the GPU either.
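If it helps to see the math outside GLSL, here is the usual swirl distortion written as a plain C++ loop over an RGBA buffer. This is only an illustration of what the fragment shader computes per pixel, not the code from the link above, and on the device you would keep this on the GPU for performance:

```cpp
// CPU illustration of a swirl distortion over a tightly packed RGBA8 buffer.
// In a fragment shader the same few lines of math run per fragment on the GPU.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

void swirl(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
           int width, int height, float radius, float angle) {
    const float cx = width * 0.5f, cy = height * 0.5f;
    dst.assign(src.size(), 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float dx = x - cx, dy = y - cy;
            float d = std::sqrt(dx * dx + dy * dy);
            // The rotation angle falls off linearly with distance from the center.
            float theta = (d < radius) ? angle * (1.0f - d / radius) : 0.0f;
            int sx = (int)std::lround(cx + dx * std::cos(theta) - dy * std::sin(theta));
            int sy = (int)std::lround(cy + dx * std::sin(theta) + dy * std::cos(theta));
            sx = std::max(0, std::min(sx, width - 1));
            sy = std::max(0, std::min(sy, height - 1));
            std::copy_n(&src[(sy * width + sx) * 4], 4, &dst[(y * width + x) * 4]);
        }
    }
}
```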
I am working on images that are partially blurred in some sections. This is noise that should be taken care of, but here is the problem:
Are there methods to detect whether an image is blurred, or partially blurred in some sections? For instance, take a look at the sample image below:
You can see in the image that there are 3 sections that are visually blurred: bottom-left, near the center, and top-right. Now, is it possible to detect that a portion of an image is blurred, programmatically or mathematically?
As lain_b pointed out, with an image like this you can use an edge detector and look for an absence of edges. I tried it on your image and it seems to work pretty well. First I used the kernel
[0,1,0,
1,-4,1,
0,1,0]
which is a simple edge detector (the discrete Laplacian). Its result was
Then I used a threshold to get
Then I closed the image and opened it to get
This is obviously not a finished version; the top-right portion was not detected well at all. Perhaps you could improve it by blurring before thresholding, or by choosing better values for the threshold and the radii of the opening and closing operations. A lot of the decisions you will need to make depend on the constraints you can put on your problem. I think this technique will work for you, though.
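For reference, the whole pipeline above (Laplacian kernel, threshold, closing, opening) looks roughly like this in C++ with OpenCV; the threshold value and the structuring-element size are guesses you would need to tune:

```cpp
// Rough sketch of the steps above: Laplacian kernel -> threshold -> close -> open.
// The threshold and kernel size are arbitrary and need tuning per image set.
#include <opencv2/opencv.hpp>

cv::Mat sharpnessMask(const cv::Mat& input) {
    cv::Mat gray, edges, mask;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);

    // The 3x3 kernel [0 1 0; 1 -4 1; 0 1 0] is the discrete Laplacian.
    cv::Mat kernel = (cv::Mat_<float>(3, 3) << 0,  1, 0,
                                               1, -4, 1,
                                               0,  1, 0);
    cv::filter2D(gray, edges, CV_32F, kernel);
    edges = cv::abs(edges);
    edges.convertTo(edges, CV_8U);

    // Sharp regions are dense in strong edges; blurred regions are not.
    cv::threshold(edges, mask, 10, 255, cv::THRESH_BINARY);

    // Close, then open, to turn scattered edge pixels into solid "sharp" regions.
    cv::Mat se = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, se);
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, se);

    return mask;   // white = sharp, black = likely blurred
}
```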
Edit
If you are looking for blur detection of arbitrary images you are going to have to investigate a wide variety of techniques. Things are much easier if you can make assumptions about your set of input images. Without any assumptions I don't know what will work best for you. Here is some reading on the topic
Image Blur Metrics
Research paper on using the Haar wavelet transform
Similar SO question (also look at the question that it links to)
Blur detection is a very active research field, there is no one answer. You will just need to try all the methods you can find (these were found by googling detect blur in image).
This paper may be of some help. It does blur estimation (mostly for out-of-focus blur, but I think it handles other blur as well) in order to recreate a similarly blurred object in the image.
I think you should be able to use it to detect the blurred areas and how blurred they are. It should be especially relevant to your problem, as it is designed to work with real-world images.
I am currently building a camera app prototype which should recognize sheets of paper lying on a table. The catch is that it should do the recognition in real time, so I capture the camera's video stream, which in iOS 5 can easily be done with AVFoundation. I looked here and here.
They are doing some basic object recognition there.
I have found that using the OpenCV library in this real-time environment does not perform well.
So what I need is an algorithm to determine the edges of an image without OpenCV.
Does anyone have some sample code snippets that lay out how to do this, or can anyone point me in the right direction?
Any help would be appreciated.
You're not going to be able to do this with the current Core Image implementation in iOS, because corner detection requires some operations that Core Image doesn't yet support there. However, I've been developing an open source framework called GPUImage that does have the required capabilities.
For finding the corners of an object, you can use a GPU-accelerated implementation of the Harris corner detection algorithm that I just got working. You might need to tweak the thresholds, sensitivities, and input image size to work for your particular application, but it's able to return corners for pieces of paper that it finds in a scene:
It also finds other corners in that scene, so you may need to use a binary threshold operation or some later processing to identify which corners belong to a rectangular piece of paper and which to other objects.
I describe the process by which this works over at Signal Processing, if you're interested, but to use this in your application you just need to grab the latest version of GPUImage from GitHub and make the GPUImageHarrisCornerDetectionFilter the target for a GPUImageVideoCamera instance. You then just have to add a callback to handle the corner array that's returned to you from this filter.
On an iPhone 4, the corner detection process itself runs at ~15-20 FPS on 640x480 video, but my current CPU-bound corner tabulation routine slows it down to ~10 FPS. I'm working on replacing that with a GPU-based routine which should be much faster. An iPhone 4S currently handles everything at 20-25 FPS, but again I should be able to significantly improve the speed there. Hopefully, that would qualify as being close enough to realtime for your application.
I use Brad's GPUImage library to do that; the result is not perfect, but good enough.
Among detected Harris corners, my idea is to select:
The one furthest to the upper left as the top-left corner of the sheet
The one furthest to the upper right as the top-right corner of the sheet
etc.
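In case it is useful, here is a small C++ sketch of that selection heuristic, independent of GPUImage: for each of the four image corners, keep the detected corner closest to it. The names are made up, and you still need to sanity-check that the four picks form a plausible quadrilateral:

```cpp
// Sketch of the heuristic above: from all detected corners, pick the one closest
// to each image corner as the sheet's top-left / top-right / bottom-right / bottom-left.
#include <array>
#include <limits>
#include <vector>

struct Pt { float x, y; };

std::array<Pt, 4> pickSheetCorners(const std::vector<Pt>& corners, float width, float height) {
    const std::array<Pt, 4> anchors = {{ {0, 0}, {width, 0}, {width, height}, {0, height} }};
    std::array<Pt, 4> best{};
    for (int i = 0; i < 4; ++i) {
        float bestDist = std::numeric_limits<float>::max();
        for (const Pt& c : corners) {
            float dx = c.x - anchors[i].x, dy = c.y - anchors[i].y;
            float d = dx * dx + dy * dy;                  // squared distance is enough
            if (d < bestDist) { bestDist = d; best[i] = c; }
        }
    }
    return best;   // caller should verify the picks actually form a quadrilateral
}
```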
@Mirco - Have you found a better solution?
@Brad - In your screenshot, what parameters do you use for the Harris filter to detect just 5 corners? I get a lot more than that...
I was wondering what the steps would be to convert a photo into a pencil drawing. People usually suggest to:
invert the image (make negative)
apply Gaussian Blur
blend the above images by linear dodge or color dodge.
See here: Convert Image to Pencil Sketch
Are there other methods? I'd be particularly interested in methods which emphasise the stroke of the pencil, like this iPhone App: Snap and Sketch
I'd be very grateful for any suggestions of how to get started.
I think you will have to iterate through all the pixels in the image and implement the algorithm you mentioned in the question itself. There is no default image filtering library on the iPhone (Core Image exists, but only for the Mac). I think your options are:
A third-party library named ImageMagick is available, and these people seem to have ported it successfully to the iPhone. Something to ponder over.
Another simple image filtering library (especially for the iPhone) does some basic image filtering; Gaussian blur is one of them.
Implement your own methods by going through each pixel of the image. This thread is very useful for anyone looking into image filtering on the iPhone. They implement some filters, and at the very least you will learn how to go through every pixel of an image; a rough sketch of the three steps from the question follows below.
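For what it is worth, here is a rough sketch of the three steps from the question (invert, Gaussian blur, color dodge blend) written with OpenCV in C++. This is just one possible toolkit, not something built into the iPhone SDK, and the blur kernel size is arbitrary (it controls how soft the strokes look):

```cpp
// Rough sketch of the question's steps: invert -> Gaussian blur -> color dodge blend.
// Kernel size is arbitrary; pixels where the blurred negative is pure white come out
// as 0 here and may need special handling.
#include <opencv2/opencv.hpp>

cv::Mat pencilSketch(const cv::Mat& colorInput) {
    cv::Mat gray, inverted, blurred, sketch;
    cv::cvtColor(colorInput, gray, cv::COLOR_BGR2GRAY);

    // 1. Invert the image (make negative).
    cv::bitwise_not(gray, inverted);

    // 2. Apply Gaussian blur to the negative.
    cv::GaussianBlur(inverted, blurred, cv::Size(21, 21), 0);

    // 3. Color dodge blend: result = base * 255 / (255 - blend).
    cv::Mat invBlur = 255 - blurred;
    cv::divide(gray, invBlur, sketch, 256.0);

    return sketch;   // light background with dark, pencil-like lines
}
```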
EDIT: Core Image is now present on the iPhone. I have not had a chance to play with it yet. Here is the documentation.