When you put a UIImage that is smaller than its UIImageView into that view, and the content mode of the view is ScaleToFit, iOS enlarges the bitmap, as expected. What I never expected is that it blurs the edges of the pixels it has scaled up. I can see this might be a nice touch if you're looking at photographs, but in many other cases I want to see those nasty, hard, straight edges!
Does anyone know how you can configure a UIImage or UIImageView to enlarge with sharp pixel edges? That is: Let it look pixellated, not blurred.
Thanks!
If you want to scale up an image in a UIImageView with sharp pixel edges, use the following CALayer property:
imageview.layer.magnificationFilter = kCAFilterNearest;
It seems that magnificationFilter controls the interpolation method used for a view's layer contents. I recommend reading the explanation of the property in CALayer.h:
/* The filter types to use when rendering the `contents' property of
* the layer. The minification filter is used when to reduce the size
* of image data, the magnification filter to increase the size of
* image data. Currently the allowed values are `nearest' and `linear'.
* Both properties default to `linear'. */
@property(copy) NSString *minificationFilter, *magnificationFilter;
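Putting it together, here is a minimal sketch; the image name "sprite", the frame, and the content mode are illustrative placeholders rather than anything from the question:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Enlarge a small image with nearest-neighbor sampling so the pixels stay
// sharp instead of being blurred by the default linear interpolation.
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 320)];
imageView.image = [UIImage imageNamed:@"sprite"];
imageView.contentMode = UIViewContentModeScaleAspectFit;

imageView.layer.magnificationFilter = kCAFilterNearest;  // used when scaling up
imageView.layer.minificationFilter  = kCAFilterNearest;  // optional, used when scaling down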
I hope that my answer is useful for you.
Can I get the pixel values of an image and crop out its black part? For instance, I have this image:
[image]
And I want something like this, without the black part:
[image]
Is there any possible solution for how to do this? Any libraries or code?
I am using Objective-C.
I have seen a solution to a similar question, but I don't understand it in detail. Please kindly provide the steps in detail. Thanks.
Probably the fastest way of doing this is to iterate through the image and find the border pixels that are not black, then redraw the image into a new context, clipping to the rect defined by those border pixels.
By border pixels I mean the left-most, top-most, bottom-most, and right-most non-black pixels. You can get the raw RGBA buffer from the UIImage and then iterate over its width and height, updating the border values as appropriate. For instance, to get leftMostPixel you would first set it to some large value (or to the image width) and then, during the iteration, if a pixel is not black and leftMostPixel > x, set leftMostPixel = x.
Now that you have the four bounding values, you can create a frame from them. To redraw just the target rectangle you can use various context-based tools, but probably the easiest is to create a view the size of the bounding rect, place an image view the size of the original image on it, and take a screenshot of that view. The image view's origin must be the negative of the bounding rect's origin, though (we push it partly offscreen).
You may encounter some issues with the orientation of the image, though. If the image has an orientation other than up, the raw data will not respect it. So you need to take that into account when creating the bounding rect... or redraw the image first so that it is oriented correctly... or you can even create a sub-buffer of the RGBA data, create a CGImage from it, and apply the same orientation to the output UIImage as the input had.
So after getting the bounds there are quite a few ways to finish the job. Some are slower, some take more memory, some are simply hard to code and have edge cases.
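Here is a minimal sketch of the simplest variant: draw the image into a known RGBA buffer, scan for the non-black bounds, and crop with CGImageCreateWithImageInRect. It assumes an up-oriented image, and the helper name and black threshold are illustrative only.

#import <UIKit/UIKit.h>

static UIImage *ImageByTrimmingBlackBorders(UIImage *image, uint8_t threshold) {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw into a known RGBA8888 buffer so the byte layout is predictable.
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // Scan for the left-most, right-most, top-most and bottom-most non-black pixels.
    size_t left = width, right = 0, top = height, bottom = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const uint8_t *p = pixels + (y * width + x) * 4;
            if (p[0] > threshold || p[1] > threshold || p[2] > threshold) {
                if (x < left)   left = x;
                if (x > right)  right = x;
                if (y < top)    top = y;
                if (y > bottom) bottom = y;
            }
        }
    }
    CGContextRelease(ctx);
    free(pixels);

    if (right < left || bottom < top) return image; // the image is entirely black

    // Crop the original CGImage to the bounding rect of the non-black content.
    CGRect cropRect = CGRectMake(left, top, right - left + 1, bottom - top + 1);
    CGImageRef cropped = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}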
So in my scenario, I have a square that is (for understanding's sake) 100x100 and need to display an image that is 300x800 inside of it.
What I want to do is be able to have the image scale just as it would with UIViewContentMode.ScaleAspectFill so that the width scales properly to 100.
However, after that, I would like to "move" the image up to the top of the image view instead of having it centered, which is basically what UIViewContentMode.Top does; that mode, however, doesn't scale the image first.
Is there any way to get this behavior with the built-in tools? Any way to combine multiple content modes?
I already had a helper function that scales an image to a specific size, so I wrote a function that calculates the scaled size that fills the smaller square (similar to what AspectFill would do) and then crops the result to the rectangle I need, anchored at (0, 0).
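A minimal sketch of that approach; the helper name is hypothetical and assumes the image should stay horizontally centered while being pinned to the top edge:

#import <UIKit/UIKit.h>

// Scale the image as aspect-fill would, then keep its top edge at y == 0 so
// everything below the target height is cropped away.
static UIImage *TopAlignedAspectFillImage(UIImage *image, CGSize targetSize) {
    CGFloat scale = MAX(targetSize.width / image.size.width,
                        targetSize.height / image.size.height);
    CGSize scaledSize = CGSizeMake(image.size.width * scale,
                                   image.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(targetSize, NO, image.scale);
    [image drawInRect:CGRectMake((targetSize.width - scaledSize.width) / 2.0,
                                 0,
                                 scaledSize.width,
                                 scaledSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

For the 300x800 image in the question you would call this with a 100x100 target size; the width scales to 100 and only the top 100 points of the scaled height remain.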
I am using the TilingView from Apple's PhotoScroller example to tile some images. This works great for most of my images, but a few of them produce weird scale values. I set the levels of detail to 4. My images are pre-scaled to 100%, 50%, 25%, and 12.5% and then tiled into 256x256 tiles at each of those levels.
In the TilingView drawRect: method, the scale I read should be one of those 4 values, normally 1.0, 0.50, 0.25, or 0.125. Since I store my images based on these scale values, when I get a weird scale value the lookup breaks and the tiles cannot be loaded. For example, I have an image where, at 0.50 scale, the actual value I get is 0.499798.
Any ideas what's going on here? If I tell the CATiledLayer to have 4 levels of detail, how do I end up with these odd values?
CGFloat scale = CGContextGetCTM(context).a;
NSLog(@"scale = %f", scale);
CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
CGSize tileSize = tiledLayer.tileSize;
How can I ensure that, for any image size I specify, the scale I get back is actually one of the 4 values 100%, 50%, 25%, or 12.5%?
There are several bugs in that sample's code, one of which involves proper rounding of those scale values, which leads to the issue you are seeing. But there are also other subtle issues. Please have a look at this question, where those issues (and the fixes) are described in more detail.
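As a sketch of the rounding part of the fix (my own assumption based on the description above, not the exact code from the linked question), you can snap the CTM scale to the nearest power-of-two level of detail before using it to look up tiles:

#import <UIKit/UIKit.h>
#include <math.h>

// Snap a CTM scale such as 0.499798 back to the intended level of detail
// (1.0, 0.5, 0.25, 0.125, ...) by rounding its base-2 exponent.
static CGFloat SnappedTileScale(CGContextRef context) {
    CGFloat rawScale = CGContextGetCTM(context).a;
    return pow(2.0, round(log2(rawScale)));
}

Inside drawRect: you would then use SnappedTileScale(context) instead of reading CGContextGetCTM(context).a directly when building the tile file names.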
Does anyone know how I could trim an image that has surrounding transparency, displayed in a UIImageView, down to just barely fit the content by cropping off the transparent edges?
You can try using this category:
https://github.com/Clstroud/UIImage-Trim
Here is a usage example from their docs:
UIImage-Trim: a category for trimming transparent pixels of a UIImage object.
How to use
Add the UIImage+Trim files to your project. Include UIImage+Trim.h in the files where you want to trim your images.
Trimming is pretty straightforward:
[yourImage imageByTrimmingTransparentPixels];
Optionally, you may want to consider any non-opaque pixels as being transparent (for instance, cropping out a light drop shadow). This can be achieved by using the alternate method:
[yourImage imageByTrimmingTransparentPixelsRequiringFullOpacity:YES];
Additionally, if you merely desire to know the UIEdgeInsets of the transparency around the image, you may want to use the following:
[yourImage transparencyInsetsRequiringFullOpacity:YES];
This call works based on the same principles as the "advanced" trim method, with the boolean dictating whether non-opaque pixels should be considered transparent.
In our iOS app, we have subclassed UISlider in order to use a custom design. However, no matter what we try, the images used in the UISlider come out distorted. For instance, the original thumb rect is 156x44. We have overridden this to be 15x22, which results in a blown-up image. Other permutations result in the thumb going off-track, and still blown up. How can we correctly set the image size for the thumb and track, either by padding the images or by overriding the appropriate methods?
I think that UISlider was designed for one resolution and one only, and making the elements smaller will negatively affect the user experience. But if you really want to, try one of these:
Use the bigger images, but set the transform property of the UISlider instance to something like CGAffineTransformMakeScale(scaleX, scaleY).
Add transparent padding to your images, so they look small, but are actually of a proper size.
That is, if you really want your UISlider smaller. If all you are bothered by is the distortion caused by automatic upscaling, consider loading the UIImages as stretchable, for instance with
- (UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets // iOS >=5.0
or
- (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight // iOS <5.0
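For example, a minimal sketch of the stretchable-track route; the image names and cap insets here are illustrative assumptions, not values from the question:

#import <UIKit/UIKit.h>

UISlider *slider = [[UISlider alloc] initWithFrame:CGRectMake(20, 100, 280, 44)];

// Only the middle of each track image stretches; the 5 pt end caps keep their
// pixels, so the rounded ends are not distorted when the track is resized.
UIImage *minTrack = [[UIImage imageNamed:@"track-min"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(0, 5, 0, 5)];
UIImage *maxTrack = [[UIImage imageNamed:@"track-max"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(0, 5, 0, 5)];

[slider setMinimumTrackImage:minTrack forState:UIControlStateNormal];
[slider setMaximumTrackImage:maxTrack forState:UIControlStateNormal];
[slider setThumbImage:[UIImage imageNamed:@"thumb"] forState:UIControlStateNormal];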