Animated characters on an overlay of camera capture - iOS

I was wondering how the characters in this app are animated on screen. Is it possible to have a video with a transparent background to use as the overlay of a camera capture? Or is this just a set of UIImages animated together? These characters seem more animated than simple GIFs.

That is most likely an OpenGL animation you are seeing overlaid on the camera display. See one of the often-cited answers by Brad Larson on how to do that - it includes a linked example project (that dude rocks).
To achieve that effect, you take the camera input, put it on a planar object as a texture, and render your stuff (highly animated characters or even naked, dancing robot women) on top of it. Presto.
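For a UIKit-level illustration of the layering idea (not the full OpenGL texturing pipeline the answer describes), here is a minimal sketch: the live camera preview sits underneath, and a transparent, frame-animated overlay sits on top. The class name and the "character_N" image names are hypothetical.

```swift
import UIKit
import AVFoundation

// A minimal sketch: camera preview as the bottom layer, a transparent
// animated overlay on top. The frame-based UIImageView animation is the
// simplest stand-in; the OpenGL approach would replace it with a GL view.
// (Requires NSCameraUsageDescription in Info.plist.)
final class CameraOverlayViewController: UIViewController {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Feed the back camera into the session (error handling elided).
        if let camera = AVCaptureDevice.default(for: .video),
           let input = try? AVCaptureDeviceInput(device: camera) {
            session.addInput(input)
        }

        // Camera preview as the bottom layer.
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        // Transparent overlay playing a frame sequence above the preview.
        // "character_0"... are hypothetical PNGs with alpha channels.
        let overlay = UIImageView(frame: view.bounds)
        overlay.animationImages = (0..<8).compactMap { UIImage(named: "character_\($0)") }
        overlay.animationDuration = 0.8
        overlay.startAnimating()
        view.addSubview(overlay)

        session.startRunning()
    }
}
```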

Related

Unity3D: Displaying a RenderTexture overlaid on top of another Camera

To put it simply: I have a RenderTexture from one camera, and I need to overlay it onto another camera, either through:
a) A RenderTexture of that camera
or
b) directly to the cameras rendering
What I'm trying to do can also be seen in this representation:
fig. 1 shows the main render, fig. 2 the desired overlay to be applied, fig. 3 the two combined as an overlay, and fig. 4 post-processing applied to the newly combined image
The first box is the main camera, and the second is what I want overlaid onto it as a RenderTexture in OnRenderObject(), i.e. when these two get rendered. Then in OnPostRender() the two are combined, with the overlay on top. Finally, in OnRenderImage(), image effects can freely modify the combined image.
To list what I need help with: I do not know how to either:
Access the camera's rendering directly
or
Set a RenderTexture as a camera's rendering in OnPostRender()
I also need help, through explanation, in correctly overlaying a RenderTexture onto either one of the above (this would use the depth rendered to the RenderTexture as alpha), just as shown in fig. 3 of the image.
This is the method I've thought up in order to overlay a forward rendering onto a deferred one for image effects. If you have any other solutions or ideas, it would be much appreciated if you could post them as a comment.
Just to clarify: I'm not asking for source code, just methods and/or links to Unity3D's documentation for the methods I'm asking about.
Thank you very much in advance. :)

Darken an opaque UIView without blending

My app's background is an opaque UIImageView. Under some circumstances I would like to darken it in an animated way, from full brightness down to about 50%. Currently I lower the alpha property of the view, and this works well: because nothing is behind the view, the background image simply becomes dark.
However, I've been profiling using the Core Animation Instrument and when I do this, I see that the whole background shows as being blended. I'd like to avoid this if possible.
It seems to me that this should be achievable during compositing. If a view is opaque, it is possible to mix it with black without anything behind showing through. It's not necessary to blend it, just adjust the pixel values.
I wondered if this was something that UIKit's GPU compositing supports. While blending isn't great, it's probably a lot better than updating the image on the CPU, so I think a CPU approach is probably not a good substitute.
Another question asks about this, and a few ideas are suggested, including setting the alpha. No one has brought up a mechanism for avoiding blending, though.
An important question here is whether you want the change to a darkened background to be animated.
Not animated
Prepare two different background images and simply swap between them. The UIImage+imageEffects library could help with generating the darkened image, or give you some leads.
Animated
Take a look at GPUImage - "An open source iOS framework for GPU-based image and video processing". Based on this, you could render the background into the scene in a darkened way.
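A minimal sketch covering both branches of this answer, assuming the background is a plain UIImageView: the darkened copy is generated once on the CPU (standing in for UIImage+imageEffects), and a short cross-dissolve swaps it in, so blending only happens during the brief transition rather than on every composited frame. The function names here are illustrative.

```swift
import UIKit

// Darken the image once, off the render loop. Filling with black at
// alpha (1 - brightness) scales every pixel toward black, so 0.5 gives
// the "about 50%" brightness from the question.
func darkened(_ image: UIImage, to brightness: CGFloat = 0.5) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        context.cgContext.setFillColor(UIColor.black.withAlphaComponent(1 - brightness).cgColor)
        context.cgContext.fill(CGRect(origin: .zero, size: image.size))
    }
}

// Usage: `backgroundView` is the opaque UIImageView from the question.
// A cross-dissolve animates the swap; once it finishes, the view is
// opaque again and no per-frame blending remains.
func dimBackground(_ backgroundView: UIImageView) {
    guard let original = backgroundView.image else { return }
    let dark = darkened(original)
    UIView.transition(with: backgroundView, duration: 0.3,
                      options: .transitionCrossDissolve,
                      animations: { backgroundView.image = dark })
}
```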

How to dim/blur everything outside given rect in iOS?

I'm currently developing an iOS app that uses OCR.
Currently I'm using AVFoundation to preview the video from the camera (using Apple's AVCam sample).
For a good user experience I want to lay a rectangle over the preview layer. The image inside this rectangle will be the image parsed by the OCR engine. My problem is that I would also like to "dim" everything outside this rectangle, and I'm currently out of ideas on how to solve this. Does anybody know how to do this?
Edit
This is what I would like to accomplish (image taken from the app Horizon):
http://i.imgur.com/MuuJNS9.png
You can use two black images covering the top and bottom areas that you want to "dim", and set the alpha of those images to a certain value, like 0.5.
Why not add a subview that covers the entire screen and set its background color to a semi-transparent gray - your gray overlay?
Then add the view containing the image to be parsed by the OCR engine as a subview of this gray overlay, in the center of it.
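A minimal sketch of the gray-overlay suggestion, with one addition not spelled out in the answers: a CAShapeLayer even-odd mask punches a clear hole over the OCR rect, achieving in one view what the two black images achieve with several. All names are illustrative, and `clearRect` is assumed to be in the host view's coordinate space.

```swift
import UIKit

// Full-screen semi-transparent overlay with a clear cutout. The even-odd
// fill rule leaves the region covered twice (the cutout) unmasked, so the
// preview shows through undimmed there.
func addDimmingOverlay(to view: UIView, around clearRect: CGRect) {
    let overlay = UIView(frame: view.bounds)
    overlay.backgroundColor = UIColor.black.withAlphaComponent(0.5)
    overlay.isUserInteractionEnabled = false

    // One path containing both the full bounds and the clear rect.
    let path = UIBezierPath(rect: overlay.bounds)
    path.append(UIBezierPath(rect: clearRect))

    let mask = CAShapeLayer()
    mask.path = path.cgPath
    mask.fillRule = .evenOdd
    overlay.layer.mask = mask

    view.addSubview(overlay)
}
```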

Xcode custom overlay capture

I am working on an OCR recognition app, and I want to give the user the option to manually select the area (during camera capture) on which to perform the OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method; however, despite there being a rectangle, the camera still processes the entire captured area rather than just the part within the rectangle.
In other words, I do not want the entire picture to be sent for processing, but only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no functionality yet. I do not want the entire screen area to be captured, only the area under the rectangle.
I hope this makes sense, since I have tried my best to explain it.
Thanks and let me know
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera into the proper place... now use a UIGraphics image context to take a "screenshot" of this area and send that UIImage's CGImage in for processing.
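A sketch of the "screenshot" step, using UIGraphicsImageRenderer as a modern stand-in for the UIGraphics image context the answer mentions. One caveat worth hedging: live AVFoundation preview layers do not always appear in view snapshots, so grabbing a frame from the AVCaptureOutput itself, as the answer suggests, is the more robust path. The function name is illustrative.

```swift
import UIKit

// Snapshot the view, then crop to the on-screen rectangle before handing
// the image to the OCR engine.
func croppedSnapshot(of view: UIView, in rect: CGRect) -> UIImage? {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    let snapshot = renderer.image { _ in
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }

    // CGImage cropping works in pixels, so scale the point-based rect by
    // the snapshot's screen scale first.
    let scale = snapshot.scale
    let pixelRect = CGRect(x: rect.origin.x * scale,
                           y: rect.origin.y * scale,
                           width: rect.size.width * scale,
                           height: rect.size.height * scale)
    guard let cropped = snapshot.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cropped, scale: scale, orientation: snapshot.imageOrientation)
}
```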

Circular White Pinch Gesture Overlay commonly used for Blurring Images in iOS

I have implemented the ability to blur images in my iOS app using the pinch gesture; however, I would like to implement the circular white overlay that is commonly used as a reference point with the pinch gesture, so that the user can adjust the amount of blur, just like the image below:
The image above was from: https://media.tumblr.com/tumblr_lutwauVUW31qm4rc3.png
How can I implement this feature?
Thanks!
The GPUImageGaussianSelectiveBlurFilter in the GPUImage lib may help you a lot. Well, here is the GitHub source.
I think it is not hard to use; I hope you will enjoy it.
You can use a GPUImageVignetteFilter, and set the vignette color to white.
I'm guessing you're implementing the blur with GPUImageGaussianSelectiveBlurFilter within GPUImage (because I see you tagged GPUImage in your question). If you are, you'll notice that the properties on GPUImageGaussianSelectiveBlurFilter don't exactly translate over to GPUImageVignetteFilter, so you'll have to do a bit of math to translate to a new "coordinate" system, but it's fairly trivial.
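A sketch of that translation, using the GPUImage property names as they appear in the Objective-C library. The exact mapping between excludeCircleRadius and the vignette radii is an assumption here: both are expressed in normalized texture coordinates, but the selective blur also involves an aspect ratio, so the numbers may need adjusting to line up visually.

```swift
import GPUImage

// Selective blur with a circular clear region, as in the question.
let blurFilter = GPUImageGaussianSelectiveBlurFilter()
blurFilter.excludeCirclePoint = CGPoint(x: 0.5, y: 0.5)
blurFilter.excludeCircleRadius = 0.25

// White vignette matched (approximately) to the blur's exclude circle:
// vignetteStart aligns with the circle edge, vignetteEnd adds a soft rim.
let vignette = GPUImageVignetteFilter()
vignette.vignetteColor = GPUVector3(one: 1.0, two: 1.0, three: 1.0) // white
vignette.vignetteCenter = blurFilter.excludeCirclePoint
vignette.vignetteStart = blurFilter.excludeCircleRadius        // circle edge
vignette.vignetteEnd = blurFilter.excludeCircleRadius + 0.05   // feathering
```

If the white ring does not sit exactly on the blur boundary, scaling one of the radii by the image's aspect ratio is the likely fix; that is the "bit of math" this answer refers to.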
