Set size of an image programmatically - iOS

Is there any way I can set the size of an image in my navigation bar programmatically instead of changing it manually?
Here is the code which creates the button:
UIBarButtonItem *Share = [[UIBarButtonItem alloc] initWithImage:[UIImage imageNamed:@"fb_share.png"] style:UIBarButtonItemStyleBordered target:self action:@selector(Share:)];
My image's size is 256×256, so it shows up at that resolution and the whole screen layout gets messed up.

You just want to change the size of your image? If so, something like this should work:
// Given a UIImage image and a CGSize newSize:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then use newImage for the image of the UIBarButtonItem.
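For example (reusing the question's selector and style; the variable names here are mine):
UIBarButtonItem *shareItem = [[UIBarButtonItem alloc] initWithImage:newImage style:UIBarButtonItemStyleBordered target:self action:@selector(Share:)];
self.navigationItem.rightBarButtonItem = shareItem;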

Normally you would use contentMode (in this case UIViewContentModeScaleAspectFit); every UIView subclass has this property, and you would usually use it on a UIImageView. But I'm almost sure that does not work with UIBarButtonItem. Try it, but you might hit a wall.
If my prediction is right, you have to resize the image programmatically:
UIImage *myImage = [UIImage imageNamed:@"fb_share"];
CGFloat maximumHeight = 44;
CGFloat ratio = maximumHeight / myImage.size.height;
CGSize imageNewSize = CGSizeMake(myImage.size.width * ratio, maximumHeight);
UIGraphicsBeginImageContextWithOptions(imageNewSize, NO, 0.0);
[myImage drawInRect:CGRectMake(0, 0, imageNewSize.width, imageNewSize.height)];
UIImage *resizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIBarButtonItem *Share = [[UIBarButtonItem alloc] initWithImage:resizedImage style:UIBarButtonItemStyleBordered target:self action:@selector(Share:)];
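If you do want to lean on contentMode after all, one possible workaround (my own suggestion, not part of the original answer) is to wrap a UIImageView in a custom-view bar button item:
// Hypothetical alternative: use a custom view so contentMode applies.
UIImageView *shareImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"fb_share"]];
shareImageView.frame = CGRectMake(0, 0, 44, 44);
shareImageView.contentMode = UIViewContentModeScaleAspectFit;
UIBarButtonItem *shareItem = [[UIBarButtonItem alloc] initWithCustomView:shareImageView];
Note that a plain UIImageView won't fire the Share: action by itself; you'd need a UIButton or a tap gesture recognizer for that.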

Related

How do I set the height and width of the images in an NSArray animation?

I have an animation of images, and some of the images are different widths and heights. What I am looking to do is set the width and height of each of these images.
jumperguy.animationImages = [NSArray arrayWithObjects:
[UIImage imageNamed:@"jumperguy_a_1.png"],
[UIImage imageNamed:@"jumperguy_a_2.png"],
[UIImage imageNamed:@"jumperguy_a_3.png"],
[UIImage imageNamed:@"jumperguy_a_4"], nil];
[jumperguy setAnimationRepeatCount:1];
jumperguy.animationDuration = 1;
[jumperguy startAnimating];
UIImageView animation does a "flip book" style animation where each image is drawn into the same frame. I don't believe it will handle images of different sizes between frames. As Wain suggests, you should scale your images to fit in the same frame in an image editor before putting them into your app.
As I mentioned in my comment, you can programmatically scale each image in your array like so:
...
CGFloat width = whateverWidth;
CGFloat height = whateverHeight;
jumperguy.animationImages = [NSArray arrayWithObjects:
[self imageWithImage:[UIImage imageNamed:@"jumperguy_a_1.png"] scaledToSize:CGSizeMake(width, height)],
[self imageWithImage:[UIImage imageNamed:@"jumperguy_a_2.png"] scaledToSize:CGSizeMake(width, height)],
[self imageWithImage:[UIImage imageNamed:@"jumperguy_a_3.png"] scaledToSize:CGSizeMake(width, height)],
[self imageWithImage:[UIImage imageNamed:@"jumperguy_a_4.png"] scaledToSize:CGSizeMake(width, height)], nil];
[jumperguy setAnimationRepeatCount:1];
jumperguy.animationDuration = 1;
[jumperguy startAnimating];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
//UIGraphicsBeginImageContext(newSize);
// In next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
// Pass 1.0 to force exact pixel size.
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}

Text watermark in Xcode

Hi all, I have looked at answers to similar questions and none seem to work for me. I am trying to watermark an image from the camera (image in the code below), adding both an image and text as the watermark. The code below works perfectly for adding the image, but I have no idea how to do the text.
WmarkImage = [UIImage imageNamed:@"60.png"];
UIGraphicsBeginImageContext(image.size);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
[WmarkImage drawInRect:CGRectMake(image.size.width - WmarkImage.size.width, image.size.height - WmarkImage.size.height, WmarkImage.size.width, WmarkImage.size.height)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[imageView setImage:image];
You could convert your text to an image and then merge them. Here is some code for this:
NSString* kevin = @"Hello";
UIFont* font = [UIFont systemFontOfSize:12.0f];
CGSize size = [kevin sizeWithFont:font];
// Create a bitmap context into which the text will be rendered.
UIGraphicsBeginImageContext(size);
// Render the text
[kevin drawAtPoint:CGPointMake(0.0, 0.0) withFont:font];
// Retrieve the image
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIImage *MergedImage = [UIImage imageNamed:@"mark.png"];
CGSize newSize = CGSizeMake(200, 400);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[MergedImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imageView = [[UIImageView alloc]initWithFrame:CGRectMake(20, 20, 300, 400)];
[imageView setImage:newImage];
[self.view addSubview:imageView];
This might help..
CATextLayer *theTextLayer = [CATextLayer layer];
theTextLayer.string = @"Your Text here";
theTextLayer.font = @"Helvetica";
theTextLayer.fontSize = 12.0;
theTextLayer.alignmentMode = kCAAlignmentCenter;
theTextLayer.bounds = CGRectMake(0, 0, 40, 40); // give whatever width and height you want
[imageView.layer addSublayer:theTextLayer];
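Alternatively (my own sketch, not from either answer), the text could be drawn straight into the image context the question already sets up, using the same NSString drawing API:
// Inside the question's UIGraphicsBeginImageContext(image.size) block,
// after drawing image and WmarkImage:
NSString *caption = @"Your text here";   // hypothetical watermark text
UIFont *font = [UIFont systemFontOfSize:12.0f];
[[UIColor whiteColor] set];              // the text is drawn in the current fill color
[caption drawAtPoint:CGPointMake(10.0, 10.0) withFont:font];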

Merge 2 images into one just like Instagram

I am trying to figure out how to get a single image out of 2 image views, like Instagram does.
These are the 2 images. Thanks in advance.
UIImageView *photoImageView = [[UIImageView alloc] initWithFrame:CGRectMake(20.0f, 42.0f, 280.0f, 280.0f)];
[photoImageView setBackgroundColor:[UIColor blackColor]];
[photoImageView setImage:self.image];
[photoImageView setContentMode:UIViewContentModeScaleAspectFit];
//Add overlay
UIImage *overlayGraphic = [UIImage imageNamed:@"chiu"];
UIImageView *overlayGraphicView = [[UIImageView alloc] initWithImage:overlayGraphic];
overlayGraphicView.frame = CGRectMake(30, 100, 260, 200);
[photoImageView addSubview:overlayGraphicView];
I'm not sure you can do this directly with just a UIImageView control. I think you're going to have to get into low-level drawing routines to get this done.
Option 1 (not recommended, just trying to answer the original question):
Have you tried placing the overlay UIImageView on top of the "main" UIImageView and setting its opacity to something less than 1 (say 0.4)? It's a crude hack, but it might get you somewhere.
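Roughly, building on the question's own views (a sketch):
// Crude version of option 1: make the overlay translucent and stack the views.
overlayGraphicView.alpha = 0.4;
[photoImageView addSubview:overlayGraphicView];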
Option 2 (probably the better path to travel):
Create an image context and then draw your "base" and "overlay" images into it. Then you'll have a UIImage you can output, and you will only need 1 UIImageView. Something like this (NOTE: this is a basic outline; you will need to put in a lot of work to get exactly what you want out of it!):
UIImage *baseImage = [UIImage imageNamed:@"base"];
UIImage *overlayImage = [UIImage imageNamed:@"overlay"];
UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, 0);
CGRect rect = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
// Draw with UIImage's drawInRect: rather than CGContextDrawImage so the
// images aren't rendered upside down in UIKit's flipped coordinate system.
[baseImage drawInRect:rect];
[overlayImage drawInRect:rect];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
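The combined image can then go into a single image view; for example (using the question's photoImageView):
// One image view is enough now: the overlay is baked into the image itself.
[photoImageView setImage:combined];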

Strange behaviour with CGContext - iOS

The following code splits an image into 2. It seems to work fine on non-Retina devices; however, it gives different output on Retina devices. Could someone please help me fix it? Thanks.
My Code
UIImage *img = [UIImage imageNamed:@"apple.png"];
CGSize sz = [img size];
UIGraphicsBeginImageContextWithOptions(CGSizeMake(sz.width/2, sz.height), NO, 0);
[img drawAtPoint:CGPointMake(-sz.width/2, 0)];
UIImage *right = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
rightView = [[[UIImageView alloc] initWithImage:right] autorelease];
rightView.frame = CGRectMake(self.view.frame.size.width/2, 0, self.view.frame.size.width/2, self.view.frame.size.height);
CGImageRef leftRef = CGImageCreateWithImageInRect([img CGImage],CGRectMake(0,0,sz.width/2,sz.height));
UIGraphicsBeginImageContextWithOptions(CGSizeMake(sz.width/2, sz.height), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0,0,sz.width/2.0,sz.height), leftRef);
UIImage *left = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *rotatedImage = [left imageRotatedByDegrees:180.0];
leftView = [[[UIImageView alloc] initWithImage:rotatedImage] autorelease];
leftView.frame = CGRectMake(0, 0, self.view.frame.size.width/2, self.view.frame.size.height);
leftView.transform = CGAffineTransformMake(-1,0,0,1,0,0);
CGImageRelease(leftRef);
[self.view addSubview:leftView];
[self.view addSubview:rightView];
(Screenshots: non-Retina output vs. Retina output.)
PS: I don't know if this is important, but apple.png has an @2x version.
The -[UIImage size] property returns the size in points, not in pixels. You probably need to also call -[UIImage scale] to figure out how the image is scaled.
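For example, the pixel dimensions could be computed like this (a quick sketch using the question's img):
// Pixel size = point size multiplied by the image's scale factor (2.0 on Retina).
CGSize pixelSize = CGSizeMake(img.size.width * img.scale,
                              img.size.height * img.scale);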
When you create the left view with
leftView = [[[UIImageView alloc] initWithImage:rotatedImage] autorelease];,
you're not specifying the correct scale. Rather than creating your UIImage this way:
UIImage *left = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *rotatedImage = [left imageRotatedByDegrees:180.0];
try creating a CGImageRef and then initializing the UIImage using
[UIImage imageWithCGImage:scale:orientation:]
while specifying the correct scale. There are a number of ways to convert the raw image data from the context to a CGImageRef, or you can use the image you've created and use the CGImage property of UIImage.
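For example, something along these lines might work (a sketch using the question's leftRef and its imageRotatedByDegrees: category, untested):
// Wrap the CGImage with the source image's scale so Retina devices match.
UIImage *left = [UIImage imageWithCGImage:leftRef
                                    scale:[img scale]
                              orientation:UIImageOrientationUp];
UIImage *rotatedImage = [left imageRotatedByDegrees:180.0];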

Create UIImage from 2 UIImages and label

I've got one big UIImage. Over this UIImage I've got one more, which is actually a mask. And one more: I've got a UILabel over this mask, which is the text for the picture.
I want to combine all these parts into one UIImage so I can save it to the Camera Roll!
How should I do it?
UPD: How should I add a UITextView?
I found:
[[myTextView layer] renderInContext:UIGraphicsGetCurrentContext()];
But this method doesn't place myTextView in the right place.
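One common fix (my own note, not from the answer below) is to translate the context to the text view's origin before rendering its layer:
// Render myTextView's layer at its frame origin instead of at (0, 0).
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, myTextView.frame.origin.x, myTextView.frame.origin.y);
[[myTextView layer] renderInContext:ctx];
CGContextRestoreGState(ctx);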
Create two UIImage objects and one UILabel object, then use the drawInRect: method:
//create image 1
UIImage *img1 = [UIImage imageNamed:@"image1.png"];
//create image 2
UIImage *img2 = [UIImage imageNamed:@"image2.png"];
//create label
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 50, 50)];
//set your label text
[label setText:@"Hello"];
// use UIGraphicsBeginImageContext() to draw them on top of each other
//start drawing
UIGraphicsBeginImageContext(img1.size);
//draw image1
[img1 drawInRect:CGRectMake(0, 0, img1.size.width, img1.size.height)];
//draw image2
[img2 drawInRect:CGRectMake((img1.size.width - img2.size.width) /2, (img1.size.height- img2.size.height)/2, img2.size.width, img2.size.height)];
//draw label
[label drawTextInRect:CGRectMake((img1.size.width - label.frame.size.width)/2, (img1.size.height - label.frame.size.height)/2, label.frame.size.width, label.frame.size.height)];
//get the final image
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The resultImage, which is a UIImage, contains all of your images and labels as one image. After that you can save it wherever you want.
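Since the question mentions the Camera Roll, saving could look like this (a minimal sketch):
// Write the combined image to the device's photo library / Camera Roll.
UIImageWriteToSavedPhotosAlbum(resultImage, nil, NULL, NULL);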
Hope this helps.
