I have a problem: I need to apply a filter like Pixelate or Blur to an entire UIView, like the eBay iPad app does.
I thought of using GPUImage, but I don't know how to do it.
Is there a way to apply a filter to a GPUImageView directly, without passing a UIImage?
The primary problem is that taking a screenshot of a large UIView on a 3rd-generation iPad is too expensive (2 seconds for the UIWindow grab). So the perfect solution would be to apply the filter directly to the views, just like the eBay app does, but... how?
Thanks to all!
To pull a view into GPUImage, you can use a GPUImageUIElement source, which takes a UIView or CALayer as input. There's an example of this in the FilterShowcase sample application.
This does rely on the -renderInContext: method of the underlying CALayer, which can be expensive for redrawing the view. However, if the view is static, you only need to trigger this update once, and the resulting image will be cached on the GPU as a texture. Filter actions applied to it after that point will be very fast.
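A minimal sketch of that pipeline, with a pixellate filter picked as an example; `sourceView` and `filteredView` (a GPUImageView already in your hierarchy) are placeholder names:

```objc
#import "GPUImage.h"

// Sketch: render `sourceView` through GPUImage and pixellate it into
// `filteredView`, a GPUImageView already added to your view hierarchy.
GPUImageUIElement *uiElement = [[GPUImageUIElement alloc] initWithView:sourceView];
GPUImagePixellateFilter *pixellate = [[GPUImagePixellateFilter alloc] init];
pixellate.fractionalWidthOfAPixel = 0.02; // block size, as a fraction of image width

[uiElement addTarget:pixellate];
[pixellate addTarget:filteredView];

[uiElement update]; // triggers -renderInContext:; call again only when the view changes
```

For a static view, that single call to -update is the only expensive step; subsequent filter changes operate on the cached texture.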
You might be able to achieve the look you are after by applying CIFilters to your view's layer.filters property. Check the docs for more info:
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html
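For what it's worth, the property is just an array of Core Image filters, something like the sketch below. One caveat: the linked page is the macOS CALayer reference, and per those docs layer filters are not applied on iOS, so verify this on your target platform.

```objc
// Hypothetical sketch of the layer.filters suggestion; note the CALayer
// docs state this property is only honored on macOS. `myView` is a placeholder.
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:@5.0 forKey:kCIInputRadiusKey];
myView.layer.filters = @[blur];
```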
Maybe this is something for you? I haven't tried it myself, but I read about it in a post once:
StackBlur
Oh, sorry. I read your post again, and this extension is about blurring a UIImage, which you said was something you didn't want...
Well, I'll leave it here anyway in case people come googling for how to blur an image.
Sorry :(
Related
I'm trying to continually update a grid of colors on my iPhone screen (testing with 50x50, but I would like to scale up later). I have done some research but can't seem to find an agreed-upon solution. I've tested CAShapeLayers, UIBezierPaths, and colored UIViews, but everything is slow. Is there another option besides diving into OpenGL or Metal? It doesn't need to be crazy fast, just faster than the aforementioned options. Thanks. I'm working in Objective-C.
If you don't want to dive into Metal, then what I found much quicker, for an app I wrote years ago, was to put my data into a byte array and then use that array to render a bitmap image.
I don't have all the details to hand now. It used something like an "image provider" and various other parts, but it was much quicker than any other method I tried.
I was able to draw over 5,000 "pixels" per frame using it, so it should be fast enough for you now.
Then you can either draw the image into a view in drawRect: or put it into a UIImageView.
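A minimal sketch of that idea, assuming the "image provider" was CGDataProviderCreateWithData feeding CGImageCreate (the buffer name, `imageView`, and the 50x50 dimensions are placeholders):

```objc
// Sketch: 50x50 RGBA pixel buffer -> CGImage -> UIImageView.
const size_t width = 50, height = 50;
const size_t bytesPerRow = width * 4;  // 4 bytes per RGBA pixel

static uint8_t pixels[50 * 50 * 4];    // fill with your grid's colors each frame

CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, pixels, sizeof(pixels), NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width, height,
                                   8,     // bits per component
                                   32,    // bits per pixel
                                   bytesPerRow,
                                   colorSpace,
                                   kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                   provider, NULL, false, kCGRenderingIntentDefault);

imageView.image = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
```

Updating the byte array and rebuilding the CGImage each frame avoids the per-cell overhead of thousands of separate views or layers.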
I'm trying to make that effect, but I don't know how to do it, or even the name of the effect.
That's not "an effect" as such, but it could be accomplished in a few different ways. If it were me, I'd look into Core Image filters (CIFilters). If you've never used them, start with this Apple example:
https://developer.apple.com/library/prerelease/ios/samplecode/CIFunHouse/Introduction/Intro.html
Then look into changing the size and position of the type, applying pixellation and maybe a bit of blur.
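A minimal sketch of the filtering step with Core Image, assuming a placeholder `sourceImage`; CIPixellate and CIGaussianBlur are both built-in filter names:

```objc
// Sketch: pixellate a UIImage with Core Image, then add a slight blur.
CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];

CIFilter *pixellate = [CIFilter filterWithName:@"CIPixellate"];
[pixellate setValue:input forKey:kCIInputImageKey];
[pixellate setValue:@20.0 forKey:kCIInputScaleKey]; // size of the blocks

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:pixellate.outputImage forKey:kCIInputImageKey];
[blur setValue:@2.0 forKey:kCIInputRadiusKey];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgOutput = [context createCGImage:blur.outputImage fromRect:input.extent];
UIImage *result = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);
```

Cropping the output to the input's extent is needed because the blur expands the image's edges.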
I need to display a sequence of procedurally generated images as a video sequence, preferably with built-in controls (controls would be nice to have, but they're not a requirement), and I'm just looking for a bit of guidance on which API to use. There seem to be a number of options, but I'm not sure which one is best suited to my needs: GPUImage, Core Video, Core Animation, OpenGL ES, or something else?
Targeting just iOS 6 and up would be no problem, if that helps.
Update: I'd prefer something that lets me display the video frames directly rather than writing them to a temporary movie.
Check out the animationImages property of UIImageView. This may do what you are looking for. Basically, you store all of your images in this array so the image view can handle the animation for you. UIImageView Reference
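A minimal sketch of that approach, with `frames` standing in for your generated NSArray of UIImages; note this gives you playback but no built-in transport controls, and all frames are held in memory at once, so it suits short sequences:

```objc
// Sketch: play generated frames with UIImageView's built-in animation.
UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
imageView.animationImages = frames;
imageView.animationDuration = frames.count / 30.0; // ~30 frames per second
imageView.animationRepeatCount = 1;                // 0 would loop forever
[self.view addSubview:imageView];
[imageView startAnimating];
```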
I want to use a UIImagePicker to display a camera preview. Over this preview I want to place an overlay view with controls.
Is it possible to apply any effects to the preview that the camera displays? In particular, I need to apply a blur effect to the camera preview.
So I want a blurred preview from the camera, plus an overlay view with controls. If I decide to capture a still image from the camera, I need it to be the original, without the blur effect. So the blur must be applied only to the preview.
Is this possible with such a configuration, or with AVFoundation for accessing the camera preview, or in some other way, or is it impossible altogether?
With AVFoundation you could do almost everything you want, since you can obtain single frames from the camera and process them. But it could lead you to a dead end: applying a blur to an image in real time is a pretty intensive task, with laggy video as the result, and it could make you waste hours of coding. I would suggest you use James Webster's solution or OpenGL shaders. Take a look at this awesome free library written by one of my favorite gurus, Brad: http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework. Even if you don't find the right filter there, it will probably lead you to a correct implementation of what you want to do.
The right filter is a Gaussian blur, of course. I don't know if it is supported, but you could write one yourself.
I almost forgot to say that in iOS 5 you have full access to Apple's Accelerate framework; you should look into that as well.
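For reference, a minimal sketch of the GPUImage route; the class and property names below are from GPUImage (in older versions the radius property is blurSize rather than blurRadiusInPixels), so check your version:

```objc
#import "GPUImage.h"

// Sketch: a blurred live camera preview, while stills stay unfiltered.
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
blur.blurRadiusInPixels = 8.0;

GPUImageView *previewView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:previewView]; // put your controls overlay above this view

[camera addTarget:blur];            // only the preview path goes through the blur
[blur addTarget:previewView];
[camera startCameraCapture];
```

Because only the preview target is filtered, a still capture taken from the camera source remains the original, unblurred image.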
From the reasonably limited amount of work I've done with UIImagePicker, I don't think it is possible to apply the blur to the image you see using programmatic filters.
What you might be able to do is use the overlay to approximate the blur. You could do this, for example, by adding an overlay that contains an image of semi-transparent frosted glass.
I'm trying to generate an image, and I have found UIGraphicsBeginImageContext() in the Apple docs, which looks perfect. I've been looking at some Quartz tutorials, and in each one they use a custom view to do their drawing, which doesn't seem necessary in this case, but I don't know enough to be sure. What's the best way to do my drawing using UIGraphicsBeginImageContext()?
Well, you probably want to use UIGraphicsBeginImageContextWithOptions() with scale = 0.0 to get resolution independence, but yes, once you call the function, the framework will have set up a normal graphics context that you can use just as in the tutorials. You can get at it with UIGraphicsGetCurrentContext().
When you have finished drawing, you will likely want to use UIGraphicsGetImageFromCurrentImageContext() to actually capture what you have drawn.
And don't forget to call UIGraphicsEndImageContext() when finished.
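Putting those three steps together, a minimal sketch (the size and the ellipse are just example drawing):

```objc
// Sketch: draw into an offscreen image context and capture a UIImage.
CGSize size = CGSizeMake(200.0, 200.0);
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // scale 0.0 = screen scale

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
CGContextFillEllipseInRect(context, CGRectMake(20.0, 20.0, 160.0, 160.0));

UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```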