Trim UIImageView to fit content - ios

Does anyone know how I could trim a UIImageView displaying an image with surrounding transparency down to just barely fit the content, cropping off the transparent edges?

You can try using this category:
https://github.com/Clstroud/UIImage-Trim
Here is a usage example from their docs:
UIImage-Trim: a category for trimming the transparent pixels of a UIImage object.
How to use
Add the UIImage+Trim files to your project. Include UIImage+Trim.h in
the files where you want to trim your images.
Trimming is pretty straightforward:
[yourImage imageByTrimmingTransparentPixels];
Optionally, you may want to consider any non-opaque pixels as being
transparent (for instance, cropping out a light drop shadow). This can
be achieved by using the alternate method:
[yourImage imageByTrimmingTransparentPixelsRequiringFullOpacity:YES];
Additionally, if you merely desire to know the UIEdgeInsets of the
transparency around the image, you may want to use the following:
[yourImage transparencyInsetsRequiringFullOpacity:YES];
This call works based on the same principles as the "advanced" trim
method, with the boolean dictating whether non-opaque pixels should be
considered transparent.
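To address the original question of shrinking the view itself (not just the image), here is a minimal sketch, assuming the category above is imported and yourImageView is a hypothetical existing UIImageView:
#import "UIImage+Trim.h"

// Trim the transparent border, then shrink the image view to the trimmed size.
UIImage *trimmed = [yourImageView.image imageByTrimmingTransparentPixels];
yourImageView.image = trimmed;
CGRect frame = yourImageView.frame;
frame.size = trimmed.size; // the view now just fits the visible content
yourImageView.frame = frame;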

Related

Performance: UIImage vs UIView

Out of sheer interest:
Is there any difference, specifically in (theoretical) performance or memory usage, between using a UIImage of 25x25 pixels (square, one color, PNG) on the one hand, and a UIView of the same size and color on the other?
Consider the Unread bullet in Mail.app. Would you use an image for that? Or a UIView with rounded edges?
An image takes more space, resides in a UIImageView, and has a resolution dependency, but on the other hand, once it is loaded, it wouldn't make too much difference, would it?
If you use a UIImageView, it requires an image, whereas a UIView needs no image at all, which keeps your application lightweight. Second, a large image takes more memory to load. So it is generally beneficial to use a UIView instead of an image wherever possible: it keeps your application lightweight and can give better performance!
A UIImageView, being a subclass of UIView, instantiates a regular view plus all the extensions that UIKit's developers have built to support displaying an image inside a plain UIView. By using one you're creating an extended (and thus possibly heavier) version of a standard UIView.
That said, using a UIView for simple UI elements (like Mail.app's bullet icon) also lets you forget about the resolution of the graphical asset, since you don't have to provide @2x or @3x versions, which also results in a smaller project size.
Of course you'll only save kilobytes when it comes to simple shapes, but reusing this pattern all across the app adds up in the long term.
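As a concrete illustration, a minimal sketch of a Mail-style unread bullet drawn with a plain UIView and a corner radius, with no image asset at all (the size, color, and cell variable are assumptions for the example):
#import <QuartzCore/QuartzCore.h>

// A 12x12 circular "unread" bullet built from a plain UIView.
UIView *bullet = [[UIView alloc] initWithFrame:CGRectMake(10.0, 15.0, 12.0, 12.0)];
bullet.backgroundColor = [UIColor colorWithRed:0.0 green:0.48 blue:1.0 alpha:1.0];
bullet.layer.cornerRadius = 6.0; // half the side length makes it a circle
[cell.contentView addSubview:bullet];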

How to show images as pixelated instead of blurry on iOS [duplicate]

When you put a UIImage into a UIImageView that is smaller than the view, and the content mode of the view is ScaleToFit, iOS enlarges the bitmap, as expected. What I never expected is that it blurs the edges of the pixels it has scaled up. I can see this might be a nice touch if you're looking at photographs, but in many other cases I want to see those nasty, hard, straight edges!
Does anyone know how you can configure a UIImage or UIImageView to enlarge with sharp pixel edges? That is: let it look pixelated, not blurred.
Thanks!
If you want to scale up an image in a UIImageView with sharp edges, use the following property of CALayer:
imageview.layer.magnificationFilter = kCAFilterNearest;
It seems that magnificationFilter affects the interpolation method used for a view's contents. I recommend reading the explanation of the property in CALayer.h:
/* The filter types to use when rendering the `contents' property of
* the layer. The minification filter is used when to reduce the size
* of image data, the magnification filter to increase the size of
* image data. Currently the allowed values are `nearest' and `linear'.
* Both properties default to `linear'. */
@property(copy) NSString *minificationFilter, *magnificationFilter;
I hope that my answer is useful for you.
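For completeness, a minimal sketch of the whole setup (the sprite image and the 8x scale factor are made-up values for the example):
UIImage *sprite = [UIImage imageNamed:@"sprite"]; // small pixel-art image
UIImageView *pixelView = [[UIImageView alloc] initWithImage:sprite];
pixelView.frame = CGRectMake(0, 0, sprite.size.width * 8, sprite.size.height * 8);
pixelView.layer.magnificationFilter = kCAFilterNearest; // hard pixel edges when enlarging
pixelView.layer.minificationFilter = kCAFilterNearest;  // and when shrinking, if you also need that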

iOS - What is the equivalent of "stretchableImageWithLeftCapWidth:0 topCapHeight:0"?

What is the equivalent value of
resizableImageWithCapInsets: for stretchableImageWithLeftCapWidth:0 topCapHeight:0?
Also, is there any simple way to convert values from stretchableImageWithLeftCapWidth:topCapHeight: to resizableImageWithCapInsets:?
I ask because the stretchableImageWithLeftCapWidth: method was deprecated in iOS 5.
Thanks
I guess you're looking for resizableImageWithCapInsets:UIEdgeInsetsZero?
By using:
stretchableImageWithLeftCapWidth:0 topCapHeight:0
You're telling iOS that you want the entire image to be vertically stretchable and you want the entire image to also be horizontally stretchable (wording copied directly from Apple's docs). In other words, no part of the image will be preserved unstretched.
In converting your code to use the new method introduced in iOS 5, though, you'll have to keep in mind that the new method differs from the old one in how it resizes the part of the image not contained in the caps (hence the move away from using "stretch" in the method name itself).
While the old method used pure (and not always efficient) scaling of every pixel not contained in the caps as a whole, the new method defaults to the more efficient tiling approach. As a consequence, the results of the old and new methods will be very different if the part of the image you're stretching (the part not contained in the caps) measures more than one pixel in the direction(s) being stretched and isn't uniform.
Since you haven't provided screenshots of the actual image you're working with or of the results you're getting, it's impossible to say exactly why you're not getting the results you want; but it does sound like you specifically want your image resized with the UIImageResizingModeStretch resizing mode (rather than the default tiling mode). If so, it's likely that you should be using this method instead:
resizableImageWithCapInsets:(UIEdgeInsets)capInsets resizingMode:(UIImageResizingMode)resizingMode
The answer to your second question also depends on how you want the stretchable part of your image stretched.
Heuristically:
If your old caps were non-zero and the portion being stretched was either 1x1 pixel or at least uniform in the direction(s) of stretching, you'd probably be able to use UIEdgeInsetsMake(top, left, bottom, right) with your LeftCap value as left and right and your TopCap value as top and bottom, given that the old method assumed you wanted to stretch the image symmetrically.
If your old caps were 0, or the portion being stretched was larger than 1x1 pixel in a direction being resized and non-uniform, you could still convert the caps the same way, but you'd want to use the method variant that lets you control the resizingMode, as in the sketch below.
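A hedged sketch of the conversion, assuming the symmetric caps the old API implied (the image name and cap values are made up for the example):
UIImage *image = [UIImage imageNamed:@"button-bg"];
// Old, deprecated call: caps of 12 pt (left/right) and 8 pt (top/bottom):
// [image stretchableImageWithLeftCapWidth:12 topCapHeight:8];
// New equivalent: mirror the caps on both sides.
UIEdgeInsets insets = UIEdgeInsetsMake(8, 12, 8, 12); // top, left, bottom, right
// The default resizing mode tiles the middle section...
UIImage *tiled = [image resizableImageWithCapInsets:insets];
// ...so to reproduce the old stretching behavior, ask for stretch explicitly (iOS 6+):
UIImage *stretched = [image resizableImageWithCapInsets:insets
                                           resizingMode:UIImageResizingModeStretch];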

How to figure out, If an imagefile (base64) has transparency?

I wonder how I could figure out whether an image has transparency. Is there any way in JavaScript or HTML5? I have a Base64-encoded image. Is there a way to read out the transparency information (alpha channel)? For example, if I load a PNG image, then convert it to Base64, then drop it to html5-canvas, how can I know whether it has transparency?
Thanks a lot
okyo
When you say 'drop it to html5-canvas', I assume you mean using an image element with the 'data:' URI scheme. Also, let's take it as given that you don't want to write JavaScript code to parse the image files.
You could do something like this pseudo-code:
create 2 off-screen canvases
color one opaque white and the other opaque black
draw the image on both of them
call getImageData on each canvas, using the image bounds
compare the image data
If the image has any transparent or partially-transparent pixels, then presumably the two canvases will end up at least a little different. One exception would be if the image has the transparency feature enabled but is entirely opaque anyway. Another would be if the non-opaque pixels are only very slightly transparent - not enough to alter a white or black background. But this technique would catch images where transparency is noticeable.
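A minimal JavaScript sketch of that idea, assuming img is an already-loaded Image element (the function name is made up for the example):
function hasVisibleTransparency(img) {
  // Draw the image over a solid background and return the resulting pixels.
  function drawOn(color) {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.fillStyle = color;
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(img, 0, 0);
    return ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  }
  var onWhite = drawOn('#ffffff');
  var onBlack = drawOn('#000000');
  // Any difference means a background showed through somewhere.
  for (var i = 0; i < onWhite.length; i++) {
    if (onWhite[i] !== onBlack[i]) return true;
  }
  return false;
}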

How to apply a soft shaped shadow to graphics which have transparent areas in them?

Normally I'm using CALayer shadowRadius, but now I also need to use UIImage and apply shaped shadows to it based on the content in the image.
For example when I have a layer with text in it and I set a shadow, it works automatically on the text and not just on the rectangle of the layer.
In Photoshop this is known as "layer style" and it automatically works based on the shape of the image content.
I am afraid that I need to implement some Harvard-Stanford-MIT-NASA kind of hardcore logic to apply a shadow to a "shaped image", i.e. an image of a round icon where the areas around the icon are fully transparent.
I'm able to manipulate images on a per-pixel level, as I'm already doing that to draw charts, so if there were an open-source implementation of such an algorithm, that would be fantastic. And if not: how does this basically work? My guess is I would "just" try to blur a grayscaled version of my image somehow and then overlay it with the non-blurred version.
My guess is I would "just" try to blur a grayscaled version of my image somehow and then overlay it with the non-blurred version.
That's pretty much it, actually. Except instead of blurring a grayscaled version of the image, blur a solid-colored version of it (i.e. keep the alpha channel, but make all pixels black). Although CALayer's shadowing should already do this for you.
If your images are already composited onto a background (i.e. without real transparency), you have a harder problem, as you first need to "remove" the background before you have the shape of the object from which to generate the shadow.
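A minimal sketch of letting CALayer derive the shadow shape from the image's alpha channel (the image name and the shadow values are assumptions for the example):
UIImageView *iconView =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"round-icon"]];
iconView.layer.shadowColor = [UIColor blackColor].CGColor;
iconView.layer.shadowOffset = CGSizeMake(0.0, 2.0);
iconView.layer.shadowRadius = 4.0;  // the soft blur, like Photoshop's layer style
iconView.layer.shadowOpacity = 0.5; // the shadow is invisible until opacity is set
iconView.layer.masksToBounds = NO;  // don't clip the shadow away
// Because no shadowPath is set, Core Animation shapes the shadow from the
// non-transparent pixels of the layer's contents (slower, but shaped).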
