I thought this would be rather straightforward, but it seems it's not.
Things I have noticed when trying to crop an image like this:
#import "C4Workspace.h"
#implementation C4WorkSpace{
C4Image *image;
C4Image *copiedImage;
}
-(void)setup {
image=[C4Image imageNamed:#"C4Sky.png"];
//image.width=200;
image.origin=CGPointMake(0, 20);
C4Log(#" image width %f", image.width);
//[self.canvas addImage:image];
copiedImage=[C4Image imageWithImage:image];
[copiedImage crop:CGRectMake(50, 0, 200, 200)];
copiedImage.origin=CGPointMake(0, 220);
[self.canvas addObjects:#[image, copiedImage]];
C4Log(#"copied image width %f", copiedImage.width);
}
#end
The origin of CGRectMake (the x and y coordinates) does not start from the upper-left corner but from the lower-left, and the height then goes up instead of down.
The size of the cropped image is actually the same as that of the original image. I suppose the image doesn't really get cropped but only masked?
Different scales: in the example above I'm not actually specifying any scale, yet the original and the cropped image do NOT have the same scale. Why?
I'm actually wondering how this function can be useful at all then... It seems it would make more sense to go into the raw image data to crop part of an image, rather than having to guess which area has been cropped/masked, so that I'd know where exactly the image actually remains...
Or maybe I'm doing something wrong? (I couldn't find any example of cropping an image, so this is what I came up with...)
What you have found is a bug in the expected implementation of the crop: filter being run on your image.
1) The crop: method is actually implemented by running a Core Image filter (CIFilter) on your original image. In Core Image, as in Core Graphics, (0,0) is the bottom-left corner of the image. This is why the origin is off (see the sketch after this list).
2) Yes. I'm not sure whether this should be considered a bug or a feature, something for me to think about... This actually has to do with the way that "filters" are designed.
3) Because of the bug in the way crop: is built, the filter doesn't account for the fact that the image scale should be 2.0, and it re-renders at 1.0 (which it shouldn't do).
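In the meantime, you can work around point 1 by flipping the y origin of your crop rect yourself. A minimal sketch (treating copiedImage.height as a height-in-points property is an assumption on my part):

// Hypothetical workaround: convert a top-left-origin rect (UIKit style) to the
// bottom-left-origin coordinates that crop: actually uses.
CGRect wanted = CGRectMake(50, 0, 200, 200);        // rect expressed in top-left coordinates
CGFloat imageHeight = copiedImage.height;           // assumed height-in-points property
CGRect flipped = CGRectMake(wanted.origin.x,
                            imageHeight - wanted.origin.y - wanted.size.height,
                            wanted.size.width,
                            wanted.size.height);
[copiedImage crop:flipped];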
Finally, you've found a bug. I've logged it to be fixed here:
https://github.com/C4Framework/C4iOS/issues/110
The reason for much of the confusion, I believe, is that I built the filter methods for C4Image when I was originally working on a device / simulator that wasn't retina. I haven't had the opportunity to revisit how those are built, and there also haven't been any questions about this issue before!
I have been searching the internet for the last two days and have checked many source codes, but none of them produced the result I want.
The image rotation would have perspective, but there would still be no change in the heights of the left and right sides of the image.
I want to set the image inside the laptop screen.
Please help me out, Thanks.
So you want a 2D perspective drawing of a laptop screen (on an iOS device?) and want to put a 2D image on that screen, transformed so its perspective looks correct on the laptop screen, right?
What you need to do is add an image view on top of your laptop image view. Let's call it laptopScreenImageView.
Then apply a CATransform3D to the laptopScreenImageView's layer.
The trick to getting 3D perspective out of a CALayer is to modify the .m34 value of the transform. Typically you set the .m34 value to a very small negative number, somewhere around -1/200 to -1/500. (The denominator in the fraction is the z coordinate of the "eye position" for viewing the perspective image, in pixels, i.e. how many pixels "above" the image the viewer's eye should seem to be. I don't fully understand it, to be honest; I fiddle with the .m34 value until I get something that looks right.)
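For example, something along these lines; the angle and the .m34 value are just placeholders to fiddle with until it matches your laptop image:

// A minimal sketch: tilt laptopScreenImageView's layer back around the x axis,
// with a small negative .m34 providing the perspective.
CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0 / 300.0;                                    // "eye" roughly 300 points above the layer
t = CATransform3DRotate(t, 15.0 * M_PI / 180.0, 1.0, 0.0, 0.0);
laptopScreenImageView.layer.transform = t;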
Alternately you could try adding a CATransformLayer to your laptop image view's layer, and then adding a CALayer containing your image as a sublayer of the CATransformLayer. I haven't used CATransformLayers before, but the docs say they are supposed to support layers with 3D perspective, giving you the same effect as modifying the .m34 component of a layer's transform.
The screenshot below was taken from the 3.5-inch simulator.
These are a bunch of UIButtons, and the border is created programmatically like this:
btn.layer.cornerRadius = btn.frame.size.width / 2;
I don't know why, but now all fonts and UIButtons in the app are pixelated. Mostly everything got pixelated.
I checked every setting in Xcode.
I tried to clean the project, then cleaned the DerivedData folder.
I tried building the app on another machine.
I tried the app on a real device. Same problem.
Nothing has worked so far.
An easy way to get pixellation on retina devices is rasterizing layers without setting the right rasterizationScale.
view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = UIScreen.mainScreen.scale;
Without the second line, stuff will look fine for non-retina devices but it'll look awful on retina devices.
...hard to say if that's the problem you're seeing without more of the code, but it's a common enough bug that it merits posting.
Whether or not you should actually be rasterizing is a separate question...there are performance tradeoffs of which to be aware.
It could be that the resulting frame isn't aligned on an even integer. i.e. Moving/changing width is causing the frame to be something like (100.5, 50.0, 50.0, 50.0). When you are drawing on a half-pixel boundary, some of the drawing routines are going to make things look blurry to try and make it appear in the correct place. I would print out the frame after the animation and check:
NSLog(#"%#", NSStringFromCGRect(yourButton.frame));
If you see any non-integer values, use one of the floor() functions to modify the resulting frame in order to snap it to a boundary.
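Something like this, for instance (CGRectIntegral is a more compact alternative if its rounding behavior suits you):

// Snap the frame to whole-point boundaries after the animation/resize.
CGRect f = yourButton.frame;
f.origin.x = floor(f.origin.x);
f.origin.y = floor(f.origin.y);
f.size.width = floor(f.size.width);
f.size.height = floor(f.size.height);
yourButton.frame = f;
// or: yourButton.frame = CGRectIntegral(yourButton.frame);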
I had the same problem with my UILabel (when I changed its frame).
Before using the floor() method:
And after:
The post title may be a little weird, but here's what I'm trying to do:
Look at the image in the middle: it's displaying only a part of the original image. It's still high definition; it's not really cropped, only masked, with the center of the image at the center of the view.
So basically they put a bigger image behind a smaller view (I did that in the past to get circular image views). But how can I achieve that exactly?
Is there a CocoaPod or something that does this, or should I build it myself? Any suggestions on how to structure the code?
The main goal here is to keep a static space to display images so they're always the same width/height. Doing this effect seems like a good way to achieve that.
EDIT: Here's a little sketch of an idea I just had to mimic that behavior:
Thanks a lot and have a nice day.
If you're asking what I think you're asking, you don't have to look far to find this functionality. Use UIView's built in contentMode property, specifically in this case, UIViewContentModeScaleAspectFill.
[imageView setContentMode:UIViewContentModeScaleAspectFill];
Then to crop of the parts of the image extending out of the frame, be sure to use clipsToBounds:
[imageView setClipsToBounds:YES];
Here is another solution, but make sure your image's width and height are greater than the image view's:
imageView.contentMode = UIViewContentModeCenter;
imageView.clipsToBounds = YES;
imageView.clearsContextBeforeDrawing = NO;
I have tried the popular UIImage+Resize category, with varying interpolation settings. I have tried scaling via CG methods and CIFilters. However, I can never get a downsized image that does not look either slightly soft in focus or full of jagged artifacts. Is there another solution, or a third-party library, that would let me get a very crisp image?
It must be possible on the iPhone, because for instance the Photos app will show a crisp image even when pinching to scale it down.
You said CG, but did not specify your approach.
Using drawing or bitmap context:
CGContextSetInterpolationQuality(gtx, kCGInterpolationHigh);
CGContextSetShouldAntialias(gtx, true);   // default varies by context type
CGContextDrawImage(gtx, rect, image);
and make sure your views and their layers are not resizing the image again. I've had good results with this. It's possible other views are affecting your view or the context. If it does not look good, try it in isolation to sanity check whether or not something is distorting your view/image.
If you are drawing to a bitmap, then you create the bitmap with the target dimensions, then draw to that.
Ideally, you will maintain the aspect ratio.
Also note that this can be quite CPU intensive -- repeatedly drawing/scaling in HQ will cost a lot of time, so you may want to create a resized copy instead (using CGBitmapContext).
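For instance, a resized copy could be made roughly like this. This is a sketch, assuming sourceImage (a UIImage) and targetSize (in pixels) are defined by you; it ignores UIImage's imageOrientation and skips error handling:

// Create a bitmap context at the target pixel size and draw the image into it
// with high-quality interpolation, then read the result back as a UIImage.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         (size_t)targetSize.width,
                                         (size_t)targetSize.height,
                                         8,                  // bits per component
                                         0,                  // let CG compute bytes per row
                                         colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
CGContextSetShouldAntialias(ctx, true);
CGContextDrawImage(ctx, CGRectMake(0, 0, targetSize.width, targetSize.height), sourceImage.CGImage);

CGImageRef scaledRef = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:scaledRef];

CGImageRelease(scaledRef);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);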
Here is the routine that I wrote to do this. There is a bit of soft focus, though depending on how far you are scaling the original image it's not too bad. I'm scaling programmatic screenshot images.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
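If the softness shows up mainly on Retina devices, part of it may be that UIGraphicsBeginImageContext always creates a 1.0-scale context. A hedged variant of the routine above that picks up the device's screen scale instead:

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Same drawing as above, but with a scale-aware context; passing 0.0 for
    // the scale asks UIKit to use the main screen's scale.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}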
CGContextSetInterpolationQuality is what you are looking for.
You should try these category additions:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
When an image is scaled down, it is often a good idea to apply some sharpening.
The problem is, Core Image on iOS does not (yet) implement the sharpening filters (CISharpenLuminance, CIUnsharpMask), so you would have to roll your own. Or nag Apple till they implement these filters on iOS, too.
However, Sharpen luminance and Unsharp mask are fairly advanced filters, and in previous projects I have found that even a simple 3x3 kernel would produce clearly visible and satisfactory results.
Hence, if you feel like working at the pixel level, you could get the image data out of a graphics context, bit mask your way to R, G and B values and code graphics like it is 1999. It will be a bit like re-inventing the wheel, though.
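If you do go that route, a minimal sketch of a 3x3 sharpening pass over raw RGBA bytes might look like the following. The pixels and output buffers are assumed to be width * height * 4 bytes pulled from (and written back to) a bitmap context; edge pixels are skipped for brevity:

// Simple sharpening kernel: boost the center pixel, subtract the neighbors.
static const int kSharpenKernel[3][3] = {
    {  0, -1,  0 },
    { -1,  5, -1 },
    {  0, -1,  0 }
};

void SharpenRGBA(const uint8_t *pixels, uint8_t *output, int width, int height) {
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            for (int c = 0; c < 3; c++) {                 // R, G, B; alpha is copied as-is
                int sum = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        int idx = ((y + ky) * width + (x + kx)) * 4 + c;
                        sum += kSharpenKernel[ky + 1][kx + 1] * pixels[idx];
                    }
                }
                output[(y * width + x) * 4 + c] = (uint8_t)MAX(0, MIN(255, sum));
            }
            output[(y * width + x) * 4 + 3] = pixels[(y * width + x) * 4 + 3];   // keep alpha
        }
    }
}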
Maybe there are some standard graphics libraries around that can do this, too (ImageMagick?)
Greetings,
I'm working on an application inspired by the "ZoomingPDFViewer" example that comes with the iOS SDK. At some point I found the following bit of code:
// to handle the interaction between CATiledLayer and high resolution
// screens, we need to manually set the tiling view's
// contentScaleFactor to 1.0. (If we omitted this, it would be 2.0
// on high resolution screens, which would cause the CATiledLayer
// to ask us for tiles of the wrong scales.)
pageContentView.contentScaleFactor = 1.0;
I tried to learn more about contentScaleFactor and what it does. After reading everything in Apple's documentation that mentions it, I searched Google and never found a definite answer to what it actually does.
Here are a few things I'm curious about:
It seems that contentScaleFactor has some kind of effect on the graphics context when a UIView's/CALayer's contents are being drawn. This seems to be relevant to high resolution displays (like the Retina Display). What kind of effect does contentScaleFactor really have and on what?
When using a UIScrollView and setting it up to zoom, let's say, my contentView, all subviews of contentView are scaled too. How does this work? Which properties does UIScrollView modify to make even video players become blurry and scale up?
TL;DR: How does UIScrollView's zooming feature work "under the hood"? I want to understand how it works so I can write proper code.
Any hints and explanations are highly appreciated! :)
Coordinates are expressed in points, not pixels. contentScaleFactor defines the relation between points and pixels: if it is 1, points and pixels are the same, but if it is 2 (as on Retina displays), every point corresponds to two pixels in each direction.
In normal drawing, working with points means you don't have to worry about resolution: on an iPhone 3 (scale factor 1) and an iPhone 4 (scale factor 2 and 2x resolution), you can use the same coordinates and drawing code. However, if you are drawing an image (directly, as a texture, ...) and just using normal coordinates (points), you can't trust that the pixel-to-point mapping is 1 to 1. If you do, then every pixel of the image will correspond to 1 point but 4 pixels if the scale factor is 2 (2 in the x direction, 2 in y), so images can become a bit blurred.
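A tiny sketch of what that relation means in code (view here stands for any UIView of yours):

// Points vs. pixels for a view: the backing store is bounds * contentScaleFactor.
CGFloat scale = view.contentScaleFactor;          // typically [UIScreen mainScreen].scale
CGSize sizeInPoints = view.bounds.size;           // e.g. 100 x 100 points
CGSize sizeInPixels = CGSizeMake(sizeInPoints.width * scale,
                                 sizeInPoints.height * scale);   // 200 x 200 pixels when scale == 2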
Working with CATiledLayer you can get some unexpected results with scale factor 2. I guess that having the UIView's contentScaleFactor == 2 and the layer's contentsScale == 2 confuses the system and sometimes multiplies the scale. Maybe something similar happens with the scroll view.
Hope this clarifies it a bit.
Apple has a section about this on its "Supporting High-Resolution Screens" page in the iOS developer documentation.
The page says:
Updating Your Custom Drawing Code

When you do any custom drawing in your application, most of the time you should not need to care about the resolution of the underlying screen. The native drawing technologies automatically ensure that the coordinates you specify in the logical coordinate space map correctly to pixels on the underlying screen. Sometimes, however, you might need to know what the current scale factor is in order to render your content correctly. For those situations, UIKit, Core Animation, and other system frameworks provide the help you need to do your drawing correctly.

Creating High-Resolution Bitmap Images Programmatically

If you currently use the UIGraphicsBeginImageContext function to create bitmaps, you may want to adjust your code to take scale factors into account. The UIGraphicsBeginImageContext function always creates images with a scale factor of 1.0. If the underlying device has a high-resolution screen, an image created with this function might not appear as smooth when rendered. To create an image with a scale factor other than 1.0, use the UIGraphicsBeginImageContextWithOptions function instead. The process for using this function is the same as for the UIGraphicsBeginImageContext function:

Call UIGraphicsBeginImageContextWithOptions to create a bitmap context (with the appropriate scale factor) and push it on the graphics stack.

Use UIKit or Core Graphics routines to draw the content of the image.

Call UIGraphicsGetImageFromCurrentImageContext to get the bitmap's contents.

Call UIGraphicsEndImageContext to pop the context from the stack.

For example, the following code snippet creates a bitmap that is 200 x 200 pixels. (The number of pixels is determined by multiplying the size of the image by the scale factor.)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0,100.0), NO, 2.0);
See it here: Supporting High-Resolution Screens
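To tie the four quoted steps together, here is a minimal sketch of the whole sequence; the drawing in the middle is just a placeholder:

// Step 1: 100 x 100 point context at scale 2.0 -> a 200 x 200 pixel bitmap.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 2.0);

// Step 2: any UIKit / Core Graphics drawing goes here.
[[UIColor redColor] setFill];
UIRectFill(CGRectMake(0, 0, 100.0, 100.0));

// Step 3: grab the result (size is 100 x 100 points, scale 2.0, so 200 x 200 pixels).
UIImage *bitmap = UIGraphicsGetImageFromCurrentImageContext();

// Step 4: pop the context off the stack.
UIGraphicsEndImageContext();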