Darken an opaque UIView without blending

My App's background is an opaque UIImageView. Under some circumstances I would like to darken this down in an animated way from full brightness to about 50%. Currently I lower the alpha property of the view and this works well. Because nothing is behind the view, the background image just becomes dark.
However, I've been profiling using the Core Animation Instrument and when I do this, I see that the whole background shows as being blended. I'd like to avoid this if possible.
It seems to me that this should be achievable during compositing. If a view is opaque, it is possible to mix it with black without anything behind showing through. It's not necessary to blend it, just adjust the pixel values.
I wondered if this was something that UIKit's GPU compositing supports. While blending isn't great, it's probably a lot better than updating the image on the CPU, so I don't think a CPU approach is a good substitute.
Another question asks about this, and a few ideas are suggested, including setting the alpha. No one has brought up a mechanism for avoiding blending, though.

An important question here is whether you want the change to a darkened background to be animated.
Not animated
Prepare two different background images and simply swap between them. The UIImage+ImageEffects category could help with generating the darkened image, or give you some leads (see the sketch below for a plain Core Graphics approach).
Animated
Take a look at GPUImage - "An open source iOS framework for GPU-based image and video processing". Based on this, you could render the background into the scene in a darkened way.
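For the non-animated route, the darkened copy doesn't strictly need a library. Here is a minimal sketch using plain Core Graphics (not part of the answer above; the function name and the 0.5 factor are just examples):

```swift
import UIKit

// Draw the original image, then fill black over it at the desired strength.
func darkenedImage(from image: UIImage, amount: CGFloat = 0.5) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        UIColor.black.withAlphaComponent(amount).setFill()
        context.fill(CGRect(origin: .zero, size: image.size))
    }
}
```

Swapping the image view's image property between the original and this pre-darkened copy then involves no per-frame blending at all.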

Related

CGColor - Determining the blended color from two UIViews that are on top of each other

I am trying to obtain the resultant CGColor (or UIColor) that would be displayed as a result of two (or perhaps more) views that sit on top of each other, each with a differing colour. Obviously the view(s) nearer the foreground have an alpha value of less than 1, allowing the colour of the views behind to bleed through.
Essentially, I guess I'm trying to mimic exactly what the UIView compositing process does when it prepares to paint a scene.
NB: I'd like to steer away from a manual programmatic blend algorithm, as it will likely not be the same as the Cocoa blend mechanism.
I have just found this... Can I mix two UIColor together? but thought there must be an iOS/Cocoa equivalent
Here's a tutorial-ish thing:
Blending Modes in iOS
And here is the Apple Documentation that covers blend modes. Should get you on the right track. The exact formulas are shown for each blend mode option.
Finally, see this answer:
How to get the color of a displayed pixel
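If you do end up reasoning about it by hand, the case of plain alpha-blended views is ordinary source-over compositing. Here is a rough sketch of that formula (it assumes both colours are expressed in the same RGB colour space, which the real compositor does not guarantee):

```swift
import UIKit

// Porter-Duff "source over": result = src * srcA + dst * dstA * (1 - srcA),
// with the result un-premultiplied at the end. `top` is the foreground view's colour.
func composite(top: UIColor, over bottom: UIColor) -> UIColor {
    var (tr, tg, tb, ta) = (CGFloat(0), CGFloat(0), CGFloat(0), CGFloat(0))
    var (br, bg, bb, ba) = (CGFloat(0), CGFloat(0), CGFloat(0), CGFloat(0))
    guard top.getRed(&tr, green: &tg, blue: &tb, alpha: &ta),
          bottom.getRed(&br, green: &bg, blue: &bb, alpha: &ba) else { return bottom }

    let outA = ta + ba * (1 - ta)
    guard outA > 0 else { return .clear }
    let outR = (tr * ta + br * ba * (1 - ta)) / outA
    let outG = (tg * ta + bg * ba * (1 - ta)) / outA
    let outB = (tb * ta + bb * ba * (1 - ta)) / outA
    return UIColor(red: outR, green: outG, blue: outB, alpha: outA)
}
```

For example, `composite(top: UIColor.red.withAlphaComponent(0.5), over: .white)` gives the pink you would expect to see on screen.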

Interactive blurring of UIImage and UIView like iOS 8 spotlight

iOS 8 introduces some pretty snazzy interactive blurring. Most notably, there's the interactive blur when you pull down for spotlight, but there's also the animation when opening and closing Siri (though that's not interactive). I've only noticed this interactive blur in one other place: the official Twitter app when pulling down on a profile view (parallax header image zooms and blurs sometimes).
I've attempted to animate something basic with a UISlider using both Core Image and GPUImage (based on the answer to this question, and also Apple's UIImage+ImageEffects), but nothing seems performant enough to animate the blur interactively (i.e. blurring an image to a single value works quickly once, but not at a framerate fast enough to blur continuously).
How can I implement these methods in a way that they are performant enough to both blur and unblur a UIImage (and ideally a UIView or CIContext snapshot) interactively?
There is no single, simple way of doing it, but it's definitely doable if you follow these steps (some are optional):
1. Most important: downsample the image. Full resolution is not very important for a Gaussian blur; if you downsample to just half resolution, the amount of data drops to a quarter.
2. Define the end target blur radius.
3. Retrieve the device architecture with the help of C functions and use different values for the saturation-delta parameter per architecture, according to processing power.
4. Experiment with creating the blur using Apple's provided library, stepping the radius according to the parameter you're tracking (KVO on the contentOffset property, for example). Do the work with dispatch_async, and don't forget to call back to the main queue with the blurred image (see the sketch after this list).
5. The methods above will almost certainly cover every architecture from armv7s onwards, but you might still have some issues with armv7 (the iPhone 4S).
6. If you still have issues, as with the aforementioned armv7, double the contentOffset change required before producing the next blur at the next radius. Then, instead of changing the image property on the UIImageView, create a new UIImageView with the newly blurred UIImage and fade its alpha from 0 to 1 while the next blurred image is being created.
7. You can use a number of tricks, such as creating all the blurred images one after another for the full interactive range, caching them in a collection, and using them with the method described in point 6.
There are also many other techniques if the animation is not interactive, but is instead timed over a fixed duration.
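As a rough illustration of the downsample-then-blur step above, here is a sketch using Core Image rather than Apple's UIImage+ImageEffects category. The function name, the half-resolution factor and the queue choice are just examples, and the radius would come from whatever parameter you are tracking:

```swift
import UIKit
import CoreImage

let blurContext = CIContext()   // reuse one context; creating it per frame is expensive

// Downsample to half size, blur on a background queue, hand the result back on the main queue.
func blurred(_ image: UIImage, radius: CGFloat, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInteractive).async {
        // Step 1: downsample – half the resolution is a quarter of the data.
        let smallSize = CGSize(width: image.size.width * 0.5, height: image.size.height * 0.5)
        let small = UIGraphicsImageRenderer(size: smallSize).image { _ in
            image.draw(in: CGRect(origin: .zero, size: smallSize))
        }

        // Step 2: Gaussian blur with Core Image.
        guard let input = CIImage(image: small),
              let filter = CIFilter(name: "CIGaussianBlur") else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)

        var result: UIImage?
        if let output = filter.outputImage,
           let cgImage = blurContext.createCGImage(output, from: input.extent) {
            result = UIImage(cgImage: cgImage)
        }
        DispatchQueue.main.async { completion(result) }
    }
}
```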

Implement Photoshop blending mode in iOS

There have been numerous situations where designers gave me PSD files in which two layers were set to the Multiply blending mode. I could never implement that behaviour, so I just used a different colour for the front view with its opacity set to, for instance, 0.5, to roughly simulate the blending mode. Now, however, I want to implement exactly what the designer has given me.
For instance, let us take a UITableView. Here is a screenshot of the custom design.
Here the section header isn't half opaque; its blending mode is set to Multiply. And here is its actual colour:
If I set a custom view as the section header, with its background colour set to the one shown above, how can I make the section view "blend" with the background of the UITableView?
This was just an example of a problem I've stumbled on multiple times. In general, I have a front view and a rear view, and in Photoshop the front view is set to, for instance, the Multiply blend mode over the rear view, and I want the same effect in iOS. Is there any way to implement this?
Thanks for the answers.
You can use the JS image processing script from Pixastic: http://www.pixastic.com/lib/docs/actions/blend/
I've used this script for several projects before; it works really great :)
Hope it helps you.
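If you would rather stay native than pull in a JS library, one option (not from the answer above) is to bake the Multiply result with Core Graphics, multiplying the header colour over an image of whatever sits behind the header. A minimal sketch, with illustrative names:

```swift
import UIKit

// Multiply a flat colour over a backdrop image and return the baked result,
// which can then be used as the section header's background.
func multiplied(_ color: UIColor, over background: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: background.size)
    return renderer.image { context in
        background.draw(at: .zero)
        color.setFill()
        // Photoshop-style Multiply of the flat colour over the backdrop.
        context.fill(CGRect(origin: .zero, size: background.size), blendMode: .multiply)
    }
}
```

This obviously only helps when the content behind the header is static enough to snapshot once.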

Blending vs. offscreen rendering: which is worse for Core Animation performance?

Blending and offscreen rendering are both expensive in Core Animation.
One can see them in the Core Animation instrument in Instruments, using its Debug Options:
Here is my case:
I display 50x50 PNG images in UIImageViews and want to round them with a 6-point corner radius. The first method is to set cornerRadius and masksToBounds on the UIImageView's layer, which causes offscreen rendering. The second method is to make PNG copies with transparent corners, which causes blending (because of the alpha channel).
I've tried both, but I can't see a significant performance difference. However, I'd still like to know which is worse in theory, and what the best practices are, if any.
Thanks a lot!
Well, short answer, the blending has to occur either way to correctly display the transparent corner pixels. However, this should typically only be an issue if you want the resulting view to also animate in some way (and remember, scrolling is the most common type of animation). Also, I'm able to recreate situations where "cornerRadius" will cause rendering errors on older devices (iPhone 3G in my case) when my views become complex. For situations where you do need performant animations, here are the recommendations I follow.
First, if you only need the resources with a single curve for the rounded corners (different scales are fine, as long as the desired curvature is the same), save them that way to avoid the extra calculation of "cornerRadius" at runtime (see the sketch at the end of this answer).
Second, don't use transparency anywhere you don't need it (e.g. when the background is actually a solid color), and always specify the correct value for the "opaque" property to help the system more efficiently calculate the drawing.
Third, find ways to minimize the size of transparent views. For example, for a large border view with transparent elements (e.g. rounded corners), consider splitting the view into 3 (top, middle, bottom) or 7 (4 corners, top middle, middle, bottom middle) parts, keeping the transparent portions as small as possible and marking the rectangular portions as opaque, with solid backgrounds.
Fourth, in situations where you're drawing lots of text in scrollViews (e.g. a highly customized UITableViewCell), consider using the "drawRect:" method to render these portions more efficiently. Continue using subviews for image elements, in order to split the render time for the overall view between pre-drawing (subviews) and "just-in-time" drawing (drawRect:). Obviously, experimentation (frames per second while scrolling) could show that violating this "rule-of-thumb" may be optimal for your particular views.
Finally, making sure you have plenty of time to experiment using the profiling tools (especially CoreAnimation) is key. I find that it's easiest to see improvements using the slowest device you want to target, and the results look great on newer devices.
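As an illustration of the first recommendation above, baking the corner curve into the image once, instead of paying for "cornerRadius" and "masksToBounds" every frame, might look roughly like this (a sketch; the 6-point radius matches the question):

```swift
import UIKit

// Bake the rounded corners into the image itself, so only the corner pixels blend.
func roundedCopy(of image: UIImage, cornerRadius: CGFloat = 6) -> UIImage {
    let rect = CGRect(origin: .zero, size: image.size)
    return UIGraphicsImageRenderer(size: image.size).image { _ in
        UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius).addClip()
        image.draw(in: rect)
    }
}

// imageView.image = roundedCopy(of: thumbnail)
```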
After watching the WWDC videos and experimenting with Xcode and Instruments, I can say that blending is better than offscreen rendering. Blending means the system needs additional time to calculate the colour of pixels on transparent layers; the more transparent layers you have (and the bigger those layers are), the more time blending takes.
Offscreen rendering means the system will make more than one rendering pass. In the first pass the system renders without visualization, just to calculate the bounds and shape of the area that needs to be rendered; in subsequent passes it does the regular rendering (depending on the calculated shape), including blending if required.
Also, for offscreen rendering the system creates a separate graphics context and destroys it after rendering.
So you should avoid offscreen rendering; it's better to replace it with blending.

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree.
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... (I will delete all of this, or possibly move it into several answers, as a solution unfolds...)
I can see 3 potential solution paths:
GL
It looks as though GL has everything it takes to get complete, fine-grained control over the process, although the functions exposed by Core Graphics come tantalisingly close, and using them would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
Core Graphics, and Core Animation to accomplish the blending
The documentation goes on to say:
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
So I can chroma-key mask the left image, setting {0,0,0}, i.e. black, as the key.
This at least secures a transparent background; now I have to work on making it yellow instead of grey.
So maybe I could have started instead by setting a yellow background colour for my image context, then used CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN used chroma-key masking to get a transparent background (a sketch of this appears after this list).
OK, this covers at least getting the basic unlit image on-screen.
Now I could overlay the sparkly image using some blend mode; maybe I could keep it in its current greyscale state, and that would just boost the colours of the original.
The only problem with this is that it is a lot of heavy real-time blending.
So maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
If this allows me to set the blend mode to additive blending, then I could just composite the glowing image over the original image with an appropriate alpha setting.
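For what it's worth, here is a rough sketch of the yellow-context-plus-chroma-key idea above, using Core Graphics. The names are illustrative, and it assumes the intermediate image is rendered without an alpha channel, since the masking-colours call rejects images that have one:

```swift
import UIKit

// Fill the context with the characteristic colour, multiply the greyscale
// diamond on top, then chroma-key the near-black background away.
func tintedDiamond(from greyDiamond: UIImage, tint: UIColor) -> UIImage? {
    let format = UIGraphicsImageRendererFormat()
    format.opaque = true            // no alpha channel in the intermediate image
    let rect = CGRect(origin: .zero, size: greyDiamond.size)

    let flat = UIGraphicsImageRenderer(size: greyDiamond.size, format: format).image { ctx in
        tint.setFill()
        ctx.fill(rect)
        greyDiamond.draw(in: rect, blendMode: .multiply, alpha: 1.0)
    }

    // Chroma-key: treat pixels with all components in 0...16 (out of 255) as transparent.
    guard let cgImage = flat.cgImage,
          let keyed = cgImage.copy(maskingColorComponents: [0, 16, 0, 16, 0, 16]) else {
        return nil
    }
    return UIImage(cgImage: keyed, scale: flat.scale, orientation: .up)
}
```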
After digging through a lot of documentation, the optimal solution seems to be to use Core Graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader.
The obvious solution might seem to simply use
out_rgb = in_rgb * ((1 - glow_factor) * inertPixel + glow_factor * shinyPixel)
(where inertPixel is the appropriate pixel of the inert diamond etc)...
It looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After having looked at this problem a little more, I can see several solutions:
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
This has the obvious benefit that a graphic designer could construct the entire sequence and I could load it in as a bunch of PNG files.
Another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on-the-fly.
However, it has the potential drawback of sending a lot of data from RAM to VRAM.
This can be optimised by using glTexSubImage2D; several frames could be sent simultaneously and then unpacked from within GL... in fact, maybe the entire sequence. If so, it would make sense to use PVRTC texture compression.
iOS: playing a frame-by-frame greyscale animation in a custom colour
Solution B -- load the glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend.
This has the advantage that it is close to the wire and can be tweaked in all sorts of ways; it is also going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain (a sketch of such a shader appears after this list).
Solution C -- set the GL blend mode (glBlendFunc) to perform additive blending.
Then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex.
Then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
This has the advantage that it can be achieved with a more generic code structure, i.e. a very general 'draw textured quad / sprite' class.
The disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds, and at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render EVERYTHING (just setting the alpha appropriately on anything that is glowing), and then on the second pass I could draw the glowing sprite again with the appropriate alpha for everything that IS glowing.
It is worth noting that if I pursue Solution A, it would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.
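As a sketch of the shader mentioned in Solution B (ES 2.0 GLSL carried as a Swift string constant; all names are illustrative), assuming the inert and shiny greyscale images are packed into the red and green channels of one texture. If the texture were uploaded as LUMINANCE_ALPHA instead, you would sample .ra rather than .rg:

```swift
// glow_factor picks the mix between the two greyscale states, then the
// result is tinted with the diamond's characteristic colour.
let glowFragmentShader = """
precision mediump float;

uniform sampler2D u_diamond;    // r = inert greyscale, g = shiny greyscale
uniform float     u_glowFactor; // 0.0 = inert, 1.0 = fully glowing
uniform vec3      u_tint;       // characteristic colour of this diamond
varying vec2      v_texCoord;

void main() {
    vec2 grey = texture2D(u_diamond, v_texCoord).rg;
    float lum = mix(grey.r, grey.g, u_glowFactor); // (1 - f) * inert + f * shiny
    gl_FragColor = vec4(u_tint * lum, 1.0);
}
"""
```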

Resources