I used a very simple approach: I create a bitmap-based image context and draw the image into it, scaled. But I always get empty images back.
- (UIImage *)resizeImage:(UIImage *)image to:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0.0, 0.0, newSize.width, newSize.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Any hint as to what is going wrong? I also tried flipping the coordinate system, but it still does not work:
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, newSize.height);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
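For completeness, this is how I call the method (self.photoView is just a placeholder for the image view in my code):
// Placeholder call site; self.photoView stands in for my actual UIImageView.
UIImage *scaled = [self resizeImage:self.photoView.image to:CGSizeMake(100.0, 100.0)];
self.photoView.image = scaled; // the result comes back empty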
I am having a problem with UIImage resizing. Image masking works fine, but after applying the mask the UIImage is stretched; the problem is with scaling, as the image is not scaled properly.
CCClippingNode *clippingNode = [[CCClippingNode alloc] initWithStencil:pMaskingFrame];
pTobeMasked.scaleX = (float)pMaskingFrame.contentSize.width / (float)pTobeMasked.contentSize.width;
pTobeMasked.scaleY = (float)pMaskingFrame.contentSize.height / (float)pTobeMasked.contentSize.height;
clippingNode.alphaThreshold = 0;
[pContainerNode addChild:clippingNode];
pTobeMasked.position = ccp(pMaskingFrame.position.x, pMaskingFrame.position.y);
[clippingNode addChild:pTobeMasked];
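I wonder if the stretching happens because scaleX and scaleY are computed independently; would a single uniform factor like this preserve the aspect ratio (just a sketch of what I mean, untested)?
// Guess: one uniform scale keeps the aspect ratio (letterboxing inside the mask).
float uniformScale = MIN((float)pMaskingFrame.contentSize.width / (float)pTobeMasked.contentSize.width,
                         (float)pMaskingFrame.contentSize.height / (float)pTobeMasked.contentSize.height);
pTobeMasked.scale = uniformScale;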
In one of my projects I used the function below to resize an image:
/*
 method parameter definitions
 image : the original image to be resized
 size  : the new size
*/
+ (UIImage *)resizeImage:(UIImage *)image size:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // newImage is the scaled image, drawn at the size specified
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
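To avoid a stretched result, the caller can compute a target size that keeps the aspect ratio first. A sketch (ImageUtil is a placeholder for whatever class holds the method, and the 200-point width is just an example value):
// Fit the image into a 200-point-wide box while preserving its aspect ratio.
CGFloat targetWidth = 200.0;
CGFloat aspect = image.size.height / image.size.width;
UIImage *fitted = [ImageUtil resizeImage:image size:CGSizeMake(targetWidth, targetWidth * aspect)];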
This will work like a charm. It's similar to the already posted answer, but it has some more options:
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    //UIGraphicsBeginImageContext(newSize);
    // In the next line, pass 0.0 to use the current device's pixel scaling factor
    // (and thus account for Retina resolution). Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
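For example, a call like this stays sharp on Retina screens (ImageHelper is a placeholder class name, since the snippet above doesn't show one; photo is the source UIImage):
// 80x80 points; with a scale of 0.0 this becomes 160x160 pixels on a 2x device.
UIImage *thumb = [ImageHelper imageWithImage:photo scaledToSize:CGSizeMake(80.0, 80.0)];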
My code:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The first parameter, image, is a screenshot of my view controller.
The second parameter, newSize, is smaller than the image's size while keeping the same aspect ratio. The resulting image looks good, but the text (UILabel) is somewhat blurry.
Any idea how I can solve this?
Assuming newSize is in points:
UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
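In other words, the only change to your method is the scale argument. A sketch of your method with that line swapped in:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    // Match the source image's scale so a 2x screenshot stays 2x after resizing,
    // which keeps rendered UILabel text crisp.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}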
I am writing a method that takes an image and an alpha value, then returns a UIImage with that alpha applied.
I have found some code like this:
+ (UIImage *)imageByApplyingAlpha:(UIImage *)originalImage andAlpha:(CGFloat)alpha {
    UIGraphicsBeginImageContextWithOptions(originalImage.size, NO, 0.0f);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, originalImage.size.width, originalImage.size.height);
    // Flip the context vertically; CGContextDrawImage uses a bottom-left origin.
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);
    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextSetAlpha(ctx, alpha);
    CGContextDrawImage(ctx, area, originalImage.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
but this only works on a single image; it generates a malformed GIF when I use the returned images to build a GIF.
Please help me find an easy way to get a UIImage with alpha.
Thanks all!
Unfortunately, the GIF format doesn't support alpha channels, so you won't be able to set alpha values on one. Transparency in GIF uses index transparency, which is all or nothing.
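If you only need the appearance of partial transparency in a GIF, one workaround is to pre-composite the alpha onto a solid background before encoding each frame. A sketch, assuming a caller-supplied background color (the method name is hypothetical):
+ (UIImage *)flattenImage:(UIImage *)image withAlpha:(CGFloat)alpha onColor:(UIColor *)background
{
    // Opaque context: the alpha is baked in against the background color.
    UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
    CGRect area = CGRectMake(0, 0, image.size.width, image.size.height);
    [background setFill];
    UIRectFill(area);
    [image drawInRect:area blendMode:kCGBlendModeNormal alpha:alpha];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}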
I'm working on an app that uses gesture recognizers. With gestures it is possible to select a UIImage (such as rectangle.png), and a UIPopoverView then lets the user change the color of that image by picking a color for it.
The image sits in a UIImageView, and I think the best solution is to mask the image and put a colored image with the same size and frame in its place.
Is this the right way? How can I optimize my approach?
What would be the best practice for this requirement?
- (UIImage *)maskImage:(UIColor *)maskColor
{
    CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the context so the mask is not applied upside down.
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Clip to the image's own alpha, then fill with the tint color.
    CGContextClipToMask(context, rect, self.CGImage);
    CGContextSetFillColorWithColor(context, maskColor.CGColor);
    CGContextFillRect(context, rect);
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return maskedImage;
}
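A possible call site, assuming the method lives in a UIImage category (selectedImageView and chosenColor are placeholder names):
// Tint the currently selected image with the color picked in the popover.
UIImage *tinted = [self.selectedImageView.image maskImage:chosenColor];
self.selectedImageView.image = tinted;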
How does one scale down a UIImage equally on both dimensions by half?
I have a UIImage that is double the size of its UIImageView and would like it to be the same size, without using any of the content modes for filling, scaling, etc.
Just make yourself a new image (expressed as a category method of UIImage):
- (UIImage *)scaleToSize:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the context vertically; CGContextDrawImage uses a bottom-left origin.
    CGContextTranslateCTM(context, 0.0, size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, size.width, size.height), self.CGImage);
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
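So, to halve the image (a sketch; bigImage and imageView are placeholders, and note that UIGraphicsBeginImageContext gives the result a scale of 1.0):
// Scale both dimensions by half.
UIImage *halved = [bigImage scaleToSize:CGSizeMake(bigImage.size.width / 2.0,
                                                   bigImage.size.height / 2.0)];
imageView.image = halved;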