Implement Photoshop blending mode in iOS

There have been numerous situations where designers gave me PSD files in which two layers were set to the Multiply blending mode. I could never implement that behaviour, so I would just use a different colour for the front view with its opacity set to, for instance, 0.5, to roughly simulate the blending. Now, however, I want to implement exactly what the designer has given me.
For instance, let us take a UITableView. Here is a screenshot of the custom design.
Here the section header isn't half opaque; its blending mode is set to Multiply. And here is its actual colour.
If I set a custom view as the section header, with its background colour set to the one shown above, how can I make the section view "blend" with the background of the UITableView?
This was just an example of a problem that I've stumbled over multiple times. In general, I always have a front view and a rear view; in Photoshop the front view is set to, for instance, the Multiply blend mode against the rear view, and I want to have the same effect in iOS. Is there any way to implement this?
Thanks for the answers.

You can use the JS image processing script from Pixastic: http://www.pixastic.com/lib/docs/actions/blend/
I have used this script for several projects before, and it works really great :)
Hope it helps you.
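If you need a native approach, Core Graphics exposes Photoshop-style blend modes directly via CGBlendMode. Below is a minimal sketch (the helper name is made up) that composites a front image over a base image with the multiply mode; for the section-header case you would first snapshot the part of the table view's background that sits behind the header and use it as the base.

```swift
import UIKit

// Hypothetical helper: composites `top` over `base` using the multiply blend
// mode, mimicking Photoshop's "Multiply" layer setting.
// Both images are assumed to be the same size.
func multiplyComposite(base: UIImage, top: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: base.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: base.size)
        // Draw the background layer normally.
        base.draw(in: rect)
        // Draw the front layer with the multiply blend mode.
        top.draw(in: rect, blendMode: .multiply, alpha: 1.0)
    }
}
```

UIImage.draw(in:blendMode:alpha:) accepts the other Photoshop-style modes as well (.screen, .overlay, and so on), so the same sketch covers most layer blend settings a designer might hand over.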

Related

CALayer or other technique to actually modify the colors of layers behind, rather like clipping

Summary: in iOS, how can you have a view that modifies the pixels of all the views behind it?
Say you have a view, well any views, but let's consider a collection view which happens to just be some color blocks:
Say we added a view on top, CleverView, which either just blocks that view (so, white - trivial) or even "cuts a hole" in that layer (relatively easy to do, google).
No problem so far: so here's CleverView just a white square:
But what if we want to do this:
CleverView is changing all the saturation below it,
Or perhaps this:
CleverView is changing the hue or whatever below it.
Note that in the examples it works in a pixel-wise fashion; it's not ("just") flagging each collection view cell to change the whole cell's color.
So ideally CleverView would do this to anything at all that happens to be behind it (i.e., whatever bunch of views it covers or partly covers, hence the collection view example, which is just 'many views').
Naturally both the underneath stuff, and the shape of CleverView, can be animating, moving, in real time.
Is there a way this could be done in iOS?
(In that specific example, what I do is just have two of the collection views: a bottom one, and a top one with the new color values. Then I carefully clip the top one to achieve the effect. But obviously that's not as Clever as a view that actually "modifies the values of all the pixels behind it".)
{Note too that, obviously, you could just take a screenshot, munge that image, and show it; not really a great solution.}
CALayer has a backgroundFilters property to which you could normally add a CIFilter that would do the job. But the documentation states:
Special Considerations: This property is not supported on layers in iOS.
That's annoying, but that's all we have. It's probably due to performance considerations.
I would suggest looking into SceneKit: its primitives are very similar to Core Animation's and are also animatable with CAAnimation, but they provide advanced tools to configure and control many more aspects of the rendering.
For example, SCNNode has filters: https://developer.apple.com/documentation/scenekit/scnnode/1407949-filters?language=objc
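As a minimal sketch of that property (the particular filter chosen here is just an example), you attach Core Image filters to a node and SceneKit applies them to that node's rendered output:

```swift
import SceneKit
import CoreImage

// Illustrative only: a node whose rendered contents get run through a
// Core Image filter that pulls the saturation down.
let node = SCNNode(geometry: SCNPlane(width: 2, height: 2))

if let desaturate = CIFilter(name: "CIColorControls") {
    desaturate.setValue(0.0, forKey: kCIInputSaturationKey)
    // SceneKit applies every filter in this array when rendering the node.
    node.filters = [desaturate]
}
```

The node would of course need to live in an SCNScene rendered by an SCNView for the filter to show up.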

CGColor - Determining the blended color from two UIViews that are on top of each other

I am trying to obtain the resultant CGColor (or UIColor) that would be displayed as a result of two (or perhaps more) views sitting on top of each other, each with a different colour. Obviously the view(s) nearer the foreground have an alpha value of less than 1, allowing the colour of the views behind to bleed through.
Essentially, I guess I'm trying to mimic exactly what the UIView compositing process does when it prepares to paint a scene.
NB: I'd like to steer away from a manual programmatic blend algorithm, as it will likely not be the same as the Cocoa blend mechanism.
I have just found this: Can I mix two UIColor together? But I thought there must be an iOS/Cocoa equivalent.
Here's a tutorial-ish thing:
Blending Modes in iOS
And here is the Apple Documentation that covers blend modes. Should get you on the right track. The exact formulas are shown for each blend mode option.
Finally, see this answer:
How to get the color of a displayed pixel
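Following that last link's approach, one way to get the colour UIKit actually composited (rather than re-implementing the blend math by hand) is to render the container's layer into a one-pixel bitmap and read it back. A rough sketch, with a made-up helper name:

```swift
import UIKit

// Renders the container's layer into a 1x1 bitmap at the given point and
// reads back the colour that Core Animation composited there.
func compositedColor(in container: UIView, at point: CGPoint) -> UIColor? {
    var pixel = [UInt8](repeating: 0, count: 4)
    let rendered: Bool = pixel.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: 1,
                                      height: 1,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
            return false
        }
        // Shift the context so the requested point lands on the single pixel,
        // then let Core Animation composite the whole hierarchy into it.
        context.translateBy(x: -point.x, y: -point.y)
        container.layer.render(in: context)
        return true
    }
    guard rendered else { return nil }
    // Components are premultiplied by alpha; over an opaque backdrop
    // (alpha == 255) they can be used directly.
    return UIColor(red: CGFloat(pixel[0]) / 255,
                   green: CGFloat(pixel[1]) / 255,
                   blue: CGFloat(pixel[2]) / 255,
                   alpha: CGFloat(pixel[3]) / 255)
}
```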

Darken an opaque UIView without blending

My App's background is an opaque UIImageView. Under some circumstances I would like to darken this down in an animated way from full brightness to about 50%. Currently I lower the alpha property of the view and this works well. Because nothing is behind the view, the background image just becomes dark.
However, I've been profiling using the Core Animation Instrument and when I do this, I see that the whole background shows as being blended. I'd like to avoid this if possible.
It seems to me that this would be achievable during compositing. If a view is opaque, it is possible to mix it with black without anything behind showing through. It's not necessary to blend it, just adjust the pixel values.
I wondered if this was something that UIKit's GPU compositing supports. While blending isn't great, it's probably a lot better than updating the image on the CPU, so I think a CPU approach is probably not a good substitute.
Another question asks about this, and a few ideas are suggested including setting the Alpha. No one has brought up a mechanism for avoiding blending though.
An important question here is whether you want the change to a darkened background to be animated.
Not animated
Prepare two different background images and simply swap between them. The UIImage+imageEffects library could help with generating the darkened image, or give you some leads (a sketch of this follows below).
Animated.
Take a look at GPUImage - "An open source iOS framework for GPU-based image and video processing". Based on this you could render the background in to the scene in a darkened way.
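As a sketch of the image-swapping route (the helper name is made up), the darkened copy can be generated once with Core Graphics and then swapped in; if you want the swap itself animated, a cross-dissolve blends only for the duration of the transition rather than for the lifetime of the darkened state:

```swift
import UIKit

// Hypothetical helper: returns an opaque, pre-darkened copy of an image by
// painting translucent black over it once, up front.
func darkened(_ image: UIImage, by amount: CGFloat = 0.5) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        let rect = CGRect(origin: .zero, size: image.size)
        image.draw(in: rect)
        UIColor.black.withAlphaComponent(amount).setFill()
        context.fill(rect)
    }
}

// Usage: swap the opaque images with an animated transition.
// UIView.transition(with: backgroundImageView,
//                   duration: 0.3,
//                   options: .transitionCrossDissolve,
//                   animations: { backgroundImageView.image = darkened(originalImage) })
```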

iOS: draw an interactive map

I need to draw an interactive map for an iOS application. For example, it could be a map of the US showing the states. It will need to show all the states in different colors (I'll get this from a delegate colorForStateNo:). It will need to allow the user to select a state by touching it; the color will then change, a "stick out" effect should be shown, and maybe even a symbol animated to appear over the selected state. Also, the color of some states will need to change depending on external events. This color change will mean an animation like a circle starting in the middle of the state and progressing towards the edges, changing the color from the current one to the one inside the circle.
Can this be done easily in Core Graphics, or is it only possible with OpenGL ES? What is the easiest way to do this? I have worked with Core Graphics and it doesn't seem to handle animation very well; I just redrew the entire screen when something needed to move... Also, how could I use an external image to draw the map? Setting up a lot of drawLineToPoint calls seems like a lot of work to draw only one state, let alone the whole map...
You could create the map using vector graphics and then have that converted to OpenGL calls.
Displaying SVG in OpenGL without intermediate raster
EDIT: The link applies to C++, but you may be able to find a similar solution.
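If you would rather stay in UIKit/Core Animation than go through OpenGL, a different, purely illustrative sketch is to give each state its own CAShapeLayer built from a vector path; recolouring, hit-testing and implicit fade animations then come almost for free. All names here, and the source of the paths, are hypothetical:

```swift
import UIKit

// Illustrative only: one CAShapeLayer per state, so each region can be
// recolored, hit-tested and animated independently by Core Animation.
final class MapView: UIView {
    private var stateLayers: [String: CAShapeLayer] = [:]

    // `path` would come from your vector data (e.g. traced from an SVG).
    func addState(named name: String, path: UIBezierPath, color: UIColor) {
        let shape = CAShapeLayer()
        shape.path = path.cgPath
        shape.fillColor = color.cgColor
        layer.addSublayer(shape)
        stateLayers[name] = shape
    }

    // Recolor a state; the implicit CALayer animation gives a smooth fade.
    func setColor(_ color: UIColor, forState name: String) {
        stateLayers[name]?.fillColor = color.cgColor
    }

    // Hit-test touches against each state's path.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        guard let point = touches.first?.location(in: self) else { return }
        for (name, shape) in stateLayers {
            if let path = shape.path, path.contains(point) {
                print("Selected state: \(name)")
            }
        }
    }
}
```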

Blending vs. offscreen rendering: which is worse for Core Animation performance?

Blending and offscreen-rendering are both expensive in Core Animation.
One can see them in the Core Animation instrument in Instruments, with Debug Options:
Here is my case:
Display 50x50 PNG images in UIImageViews. I want to round the images with a 6-point corner radius. The first method is to set the UIImageView.layer's cornerRadius and masksToBounds, which causes offscreen rendering. The second method is to make PNG image copies with transparent corners, which causes blending (because of the alpha channel).
I've tried both, but I can't see a significant performance difference. However, I still want to know which is worse in theory, and the best practices if any.
Thanks a lot!
Well, short answer, the blending has to occur either way to correctly display the transparent corner pixels. However, this should typically only be an issue if you want the resulting view to also animate in some way (and remember, scrolling is the most common type of animation). Also, I'm able to recreate situations where "cornerRadius" will cause rendering errors on older devices (iPhone 3G in my case) when my views become complex. For situations where you do need performant animations, here are the recommendations I follow.
First, if you only need the resources with a single curve for the rounded corners (different scales are fine, as long as the desired curvature is the same), save them that way to avoid the extra calculation of "cornerRadius" at runtime (a sketch of this appears after this answer).
Second, don't use transparency anywhere you don't need it (e.g. when the background is actually a solid color), and always specify the correct value for the "opaque" property to help the system more efficiently calculate the drawing.
Third, find ways to minimize the size of transparent views. For example, for a large border view with transparent elements (e.g. rounded corners), consider splitting the view into 3 (top, middle, bottom) or 7 (4 corners, top middle, middle, bottom middle) parts, keeping the transparent portions as small as possible and marking the rectangular portions as opaque, with solid backgrounds.
Fourth, in situations where you're drawing lots of text in scrollViews (e.g. highly customized UITableViewCell), consider using the "drawRect:" method to render these portions more efficiently. Continue using subviews for image elements, in order to split the render time between the overall view between pre-drawing (subviews) and "just-in-time" drawing (drawRect:). Obviously, experimentation (frames per second while scrolling) could show that violating this "rule-of-thumb" may be optimal for your particular views.
Finally, making sure you have plenty of time to experiment using the profiling tools (especially CoreAnimation) is key. I find that it's easiest to see improvements using the slowest device you want to target, and the results look great on newer devices.
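As a sketch of that first recommendation (the helper name is invented), the corner curve can be baked into the image once with Core Graphics, so at display time only the ordinary alpha blending of the transparent corner pixels remains:

```swift
import UIKit

// Hypothetical helper: pre-renders an image with its corners clipped to the
// given radius, so cornerRadius + masksToBounds is not needed at runtime.
func roundedImage(from image: UIImage, cornerRadius: CGFloat, size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: size)
        // Clip once while drawing; the result is a plain image with
        // transparent corners.
        UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius).addClip()
        image.draw(in: rect)
    }
}

// Usage, for the 50x50 images with a 6-point radius described above:
// imageView.image = roundedImage(from: avatar, cornerRadius: 6,
//                                size: CGSize(width: 50, height: 50))
```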
After watching WWDC videos and doing some experiments with Xcode and Instruments, I can say that blending is better than offscreen rendering. Blending means that the system requires some additional time to calculate the color of pixels on transparent layers. The more transparent layers you have (and the bigger these layers are), the more time blending takes.
Offscreen rendering means that the system will make more than one rendering iteration. In the first iteration the system renders without visualization, just to calculate the bounds and shape of the area that should be rendered. In the following iterations the system does the regular rendering (depending on the calculated shape), including blending if required.
Also, for offscreen rendering the system creates a separate graphics context and destroys it after rendering.
So you should avoid offscreen rendering; it's better to replace it with blending.
