First of all, I'm using kingpin to display an array of annotations, and everything works just as expected.
My question is: How can I create a custom view for an annotation?
And I'm not referring to replacing the image of MKPinAnnotationView.
if (annotationView == nil) {
annotationView = [[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"cluster"];
annotationView.canShowCallout = YES;
annotationView.image = [UIImage imageNamed:@"map_cluster"];
}
With this I'm only able to replace the default pin with an image, but that's not all I need.
http://imgur.com/nhUIvdx
I come from an Android background where I solved this problem by inflating a layout as a cluster (aka annotation in iOS). In this XML layout I'm positioning a container (the white frame), a picture and a counter.
The result can be seen in the picture below:
http://imgur.com/QoAQQES
How can I do this in iOS?
Thanks in advance for any suggestions!
Here you can find how to create a custom annotation:
custom annotation for maps
If you have any questions, you can ask me. :)
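In case the link goes stale, here is a rough, hedged sketch of what a custom annotation view usually boils down to; the class name ClusterAnnotationView is made up, and only the map_cluster asset name comes from the question above:

#import <MapKit/MapKit.h>

@interface ClusterAnnotationView : MKAnnotationView
@end

@implementation ClusterAnnotationView

- (instancetype)initWithAnnotation:(id<MKAnnotation>)annotation reuseIdentifier:(NSString *)reuseIdentifier {
    self = [super initWithAnnotation:annotation reuseIdentifier:reuseIdentifier];
    if (self) {
        // Give the view its own size and build it out of ordinary subviews.
        self.frame = CGRectMake(0, 0, 55, 65);
        self.backgroundColor = [UIColor whiteColor];

        UIImageView *pictureView = [[UIImageView alloc] initWithFrame:CGRectInset(self.bounds, 5, 5)];
        pictureView.image = [UIImage imageNamed:@"map_cluster"];
        [self addSubview:pictureView];
    }
    return self;
}

@end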
A rather late response, but I guess other people would still be interested.
In an MKAnnotationView subclass:
First, define some dimensions:
#define ckImageHeight 65
#define ckImageWidth 55
#define kBorder 5
Then define the Annotation View's frame:
self.frame = CGRectMake(0, 0, ckImageWidth, ckImageHeight);
If you want the background to be an image and not just a color:
self.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"map_checkin"]];
Then make a placeholder for the image
CGRect checkInPictureRect = CGRectMake(kBorder, kBorder, ckImageWidth - 9 , ckImageWidth - 9);
UIView *checkInPictureView = [[UIView alloc]initWithFrame:checkInPictureRect];
Then the fun starts:
// Crop image
UIImage *croppedImage = [ImageHelper centerCropImage:image];
// Resize image
CGSize checkInPictureSize = CGSizeMake(checkInPictureRect.size.width*1.5, checkInPictureRect.size.height*1.5);
UIGraphicsBeginImageContext(checkInPictureSize);
[croppedImage drawInRect:CGRectMake(0, 0, checkInPictureRect.size.width*1.5, checkInPictureRect.size.height*1.5)];
UIImage* resizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imageView = [[UIImageView alloc] initWithImage:resizedImage];
imageView.frame = checkInPictureView.bounds;
[checkInPictureView addSubview:imageView];
[self addSubview:checkInPictureView];
// Counter
UIView *counterView = [[UIView alloc]initWithFrame:CGRectMake(45, -2, 15, 15)];
counterView.opaque = YES;
counterView.backgroundColor = checkIn.isNow ? [UIColor enloopBlue] : [UIColor enloopGreen];
counterView.layer.cornerRadius = 8;
self.counterLabel = [[UILabel alloc] init];
self.counterLabel.frame = CGRectMake(4, 2, 10, 10);
if (self.count >= 10) {
counterView.frame = CGRectMake(45, -2, 18, 18);
self.counterLabel.frame = CGRectMake(3, 3, 12, 12);
}
[self.counterLabel setTextColor:[UIColor whiteColor]];
[self.counterLabel setFont:[UIFont fontWithName:@"Trebuchet MS" size:11.0f]];
[self.counterLabel setText:[[NSString alloc] initWithFormat:@"%lu", (unsigned long)self.count]];
[counterView addSubview:self.counterLabel];
[counterView bringSubviewToFront:self.counterLabel];
[self addSubview:counterView];
As for the centerCropImage helper, nothing special:
+ (UIImage *)centerCropImage:(UIImage *)image {
// Use smallest side length as crop square length
CGFloat squareLength = MIN(image.size.width, image.size.height);
// Center the crop area
CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2, (image.size.height - squareLength) / 2, squareLength, squareLength);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
UIImage * croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return croppedImage;
}
I know there are quite a few things to improve, but until then I hope it will help others. :)
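For completeness, a subclass like the one above is typically handed back from the map view delegate roughly like this (a hedged sketch; CheckInAnnotationView and the reuse identifier are placeholder names, not from the answer above):

- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation {
    if ([annotation isKindOfClass:[MKUserLocation class]]) {
        return nil; // keep the default view for the user's location
    }
    static NSString *identifier = @"checkin";
    MKAnnotationView *view = [mapView dequeueReusableAnnotationViewWithIdentifier:identifier];
    if (view == nil) {
        view = [[CheckInAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:identifier];
        view.canShowCallout = YES;
    } else {
        view.annotation = annotation;
    }
    return view;
}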
Related
I use two different methods to crop the center square out of this image. One works, one doesn't. My question is why.
Here are the two results:
Clearly, the left is buggy and the right works.
The image you see on the left uses only CGImageCreateWithImageInRect to select the region of the image, where the rect is scaled by the ratio of the original image's dimensions to the view's dimensions. Why doesn't this method work?
The image you see on the right translates the image and then selects the region of interest with the origin at (0, 0) using CGImageCreateWithImageInRect.
Here's the code that draws both images:
- (UIImage *)cropImage:(UIImage *)original inRect:(CGRect)rect {
CGFloat heightScale = original.size.height / self.view.frame.size.height;
CGFloat widthScale = original.size.width / self.view.frame.size.width;
CGRect scaledRect = CGRectMake(rect.origin.x * widthScale, rect.origin.y * heightScale, rect.size.width * widthScale, rect.size.height * heightScale);
UIGraphicsBeginImageContextWithOptions(original.size, YES, 1.0);
[original drawAtPoint:CGPointMake(-scaledRect.origin.x, -scaledRect.origin.y)];
UIImage *translatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the context opened above
CGRect finalRect = CGRectMake(0, 0, scaledRect.size.width, scaledRect.size.height);
CGImageRef imageRefForRightImage = CGImageCreateWithImageInRect([translatedImage CGImage], finalRect);
CGImageRef imageRefForLeftImage = CGImageCreateWithImageInRect([original CGImage], scaledRect);
UIImage *croppedRightImage = [UIImage imageWithCGImage:imageRefForRightImage];
UIImage *croppedLeftImage = [UIImage imageWithCGImage:imageRefForLeftImage];
CGImageRelease(imageRefForRightImage);
CGImageRelease(imageRefForLeftImage);
UIImageView *colorImageView = [[UIImageView alloc] initWithFrame:self.view.frame];
colorImageView.backgroundColor = [UIColor purpleColor];
[self.view addSubview:colorImageView];
CGRect rectLeft = CGRectMake(0, 0, 160, 160);
CGRect rectRight = CGRectMake(160, 0, 160, 160);
UIImageView *croppedImageViewLeft = [[UIImageView alloc] initWithFrame:rectLeft];
UIImageView *croppedImageViewRight = [[UIImageView alloc] initWithFrame:rectRight];
croppedImageViewLeft.image = croppedLeftImage;
croppedImageViewRight.image = croppedRightImage;
croppedImageViewLeft.contentMode = UIViewContentModeScaleAspectFit;
croppedImageViewRight.contentMode = UIViewContentModeScaleAspectFit;
[self.view addSubview:croppedImageViewLeft];
[self.view addSubview:croppedImageViewRight];
croppedImageViewRight.image = croppedRightImage;
croppedImageViewLeft.image = croppedLeftImage;
return croppedRightImage;
}
Hi all, I have looked at answers to similar questions and none seem to work for me. I am trying to watermark an image from the camera and add both an image and text as a watermark. The code below works perfectly for adding the image, but I have no idea how to do the text.
WmarkImage = [UIImage imageNamed:@"60.png"];
UIGraphicsBeginImageContext(image.size);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
[WmarkImage drawInRect:CGRectMake(image.size.width - WmarkImage.size.width, image.size.height - WmarkImage.size.height, WmarkImage.size.width, WmarkImage.size.height)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[imageView setImage:image];
You should convert your text to an image and then merge them. Here is some code for this; please check it:
NSString* kevin = @"Hello";
UIFont* font = [UIFont systemFontOfSize:12.0f];
CGSize size = [kevin sizeWithFont:font];
// Create a bitmap context into which the text will be rendered.
UIGraphicsBeginImageContext(size);
// Render the text
[kevin drawAtPoint:CGPointMake(0.0, 0.0) withFont:font];
// Retrieve the image
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIImage *MergedImage = [UIImage imageNamed:@"mark.png"];
CGSize newSize = CGSizeMake(200, 400);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[MergedImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imageView = [[UIImageView alloc]initWithFrame:CGRectMake(20, 20, 300, 400)];
[imageView setImage:newImage];
[self.view addSubview:imageView];
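Note that sizeWithFont: and drawAtPoint:withFont: used above were deprecated in iOS 7; if you target newer SDKs, the attributed-string variants do the same job. A minimal sketch of just the text-rendering part:

NSDictionary *attributes = @{ NSFontAttributeName : [UIFont systemFontOfSize:12.0f] };
CGSize size = [kevin sizeWithAttributes:attributes];
UIGraphicsBeginImageContext(size);
[kevin drawAtPoint:CGPointZero withAttributes:attributes];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();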
This might help..
CATextLayer *theTextLayer = [CATextLayer layer];
theTextLayer.string = @"Your Text here";
theTextLayer.font = (__bridge CFTypeRef)@"Helvetica"; // a font name string works; a CGFontRef does too
theTextLayer.fontSize = 12.0;
theTextLayer.alignmentMode = kCAAlignmentCenter;
theTextLayer.bounds = CGRectMake(0, 0, 40, 40);//give whatever width or height you want
[imageview.layer addSublayer:theTextLayer];
I create a UIImageView with two UILabels as subviews:
UIImageView *img=[[UIImageView alloc]initWithFrame:CGRectMake(0, 0, 320, [[UIScreen mainScreen] bounds].size.height)];
img.backgroundColor=self.view.backgroundColor;
UILabel *textOne=[[UILabel alloc]initWithFrame:CGRectMake(10, 10, 290, 250)];
textOne.text=_textOneLabel.text;
textOne.textColor=_textOneLabel.textColor;
UILabel *textTwo = [[UILabel alloc] initWithFrame:CGRectMake(100,180,200,100)];
textTwo.text=_textTwoLabel.text;
textTwo.textColor=_textTwoLabel.textColor;
[img addSubview:textOne];
[img addSubview:textTwo];
UIImageWriteToSavedPhotosAlbum(img.image, nil, nil, nil); //img.image is null
I want to create a UIImage to save to the camera roll containing the UIImageView and the two created UILabels. img.image returns null because there is no image assigned to the UIImageView.
Do you have an idea of how to achieve that?
Use the method below to convert your UIImageView to a UIImage.
UIImage *image = [self ChangeImageViewToImage:img];
//Method
- (UIImage *)ChangeImageViewToImage:(UIImageView *)view {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
This should help you
/* Creates an image with a home-grown graphics context, burns the supplied string into it. */
- (UIImage *)burnTextIntoImage:(NSString *)text :(UIImage *)img {
UIGraphicsBeginImageContext(img.size);
CGRect aRectangle = CGRectMake(0,0, img.size.width, img.size.height);
[img drawInRect:aRectangle];
[[UIColor redColor] set]; // set text color
NSInteger fontSize = 14;
if ( [text length] > 200 ) {
fontSize = 10;
}
UIFont *font = [UIFont boldSystemFontOfSize: fontSize]; // set text font
[text drawInRect:aRectangle // render the text
        withFont:font
   lineBreakMode:UILineBreakModeTailTruncation // clip overflow from end of last line
       alignment:UITextAlignmentCenter];
UIImage *theImage=UIGraphicsGetImageFromCurrentImageContext(); // extract the image
UIGraphicsEndImageContext(); // clean up the context.
return theImage;
}
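It could be called like this, for example (cameraImage and imageView are just placeholder names):

UIImage *stamped = [self burnTextIntoImage:@"Taken on holiday" :cameraImage];
imageView.image = stamped;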
I have a UITextView and a UIImageView that I want to combine into a single image to share.
SaveImageView is the image view where I want to save the image of the text view.
The text view can be moved across the screen, so I decided to save its final position and give it to SaveImageView.
First I convert the UITextView into an image and save its position.
Then I want to join the two image views into a single image.
self.SaveTextView.frame = self.textView.frame;
UIGraphicsBeginImageContext (self.textView.bounds.size);
[self.textView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.SaveTextView.image=resultingImage;
//join two uiimageview
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.BaseImageView.image drawInRect:CGRectMake(0, 0, self.BaseImageView.frame.size.width, self.BaseImageView.frame.size.height)];
[self.SaveTextView.image drawInRect:CGRectMake(self.SaveTextView.frame.origin.x, self.SaveTextView.frame.origin.y, self.SaveTextView.frame.size.width, self.SaveTextView.frame.size.height)];
_SaveImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The problem is that the place where I write in the text view is not the same position in the saved image; the image with the text is slightly shifted, when it should be in the exact spot where I wrote. I can't find where the error is.
Any help?
Try implementing this method and calling it on the superview:
@implementation UIView (ScreenShot)
- (UIImage *)screenShot
{
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 0.0);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
@end
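With that category in place, the call site would look roughly like this (a hedged sketch reusing the property names from the question):

// Snapshot the superview that contains both BaseImageView and the text view.
UIImage *composed = [self.view screenShot];
UIImageWriteToSavedPhotosAlbum(composed, nil, nil, nil);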
Try this...
//Make Label
UILabel *_labelTimeOutName = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 428, 32)];
_labelTimeOutName.text = @"Your label text here";
_labelTimeOutName.minimumScaleFactor = 0.5;
_labelTimeOutName.numberOfLines = 1;
_labelTimeOutName.textColor = [UIColor whiteColor];
_labelTimeOutName.backgroundColor = [UIColor clearColor];
_labelTimeOutName.font = [UIFont fontWithName:@"TrebuchetMS-Bold" size:24];
_labelTimeOutName.textAlignment = NSTextAlignmentCenter;
//add label inside a temp View
UIView *tempView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 428, 32)];
tempView.backgroundColor = [UIColor clearColor];
[tempView addSubview:_labelTimeOutName];
//render image
UIGraphicsBeginImageContextWithOptions(tempView.bounds.size, tempView.opaque, 0.0);
// The view to be rendered
[[tempView layer] renderInContext:UIGraphicsGetCurrentContext()];
// Get the rendered image
UIImage *FinalTextIamge = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Enjoy!
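FinalTextIamge still has to be drawn on top of the photo itself; one hedged way to finish the composition (cameraImage stands in for your source photo, and the drawing rects are arbitrary):

UIGraphicsBeginImageContextWithOptions(cameraImage.size, NO, cameraImage.scale);
[cameraImage drawInRect:CGRectMake(0, 0, cameraImage.size.width, cameraImage.size.height)];
// Position the text image wherever you like; bottom-left corner here.
[FinalTextIamge drawInRect:CGRectMake(10, cameraImage.size.height - FinalTextIamge.size.height - 10, FinalTextIamge.size.width, FinalTextIamge.size.height)];
UIImage *watermarked = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();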
I have a UIImage and a CGPoint which tells me in what direction I should move it to create another image. The background can be anything.
Given the initial UIImage, how can I create the new one? What is the most efficient way of doing it?
Here is what I'm doing:
int originalWidth = image.size.width;
int originalHeight = image.size.height;
float xDifference = [[coords objectAtIndex:0] floatValue];
float yDifference = [[coords objectAtIndex:1] floatValue];
UIView *tempView = [[UIView alloc] initWithFrame:CGRectMake(0, 240, originalWidth, originalHeight)];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.contentMode = UIViewContentModeTopLeft;
CGRect imageFrame = imageView.frame;
imageFrame.origin.x = xDifference;
imageFrame.origin.y = -yDifference;
imageView.frame = imageFrame;
UIGraphicsBeginImageContext(tempView.bounds.size);
[tempView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Is there a more optimal version?
You can use CGImageCreateWithImageInRect(). You can get a CGImage from a UIImage with the property of the same name.
Basically what you end up doing is cropping the existing image to extract the portion you need, like so:
// oldImage is a CGImageRef (e.g. someUIImage.CGImage); context is the current CGContextRef.
CGRect myImageArea = CGRectMake(xOrigin, yOrigin, myWidth, myHeight); // region to extract
CGImageRef mySubimage = CGImageCreateWithImageInRect(oldImage, myImageArea);
CGRect myRect = CGRectMake(0, 0, myWidth * 2, myHeight * 2);
CGContextDrawImage(context, myRect, mySubimage);
CGImageRelease(mySubimage); // CGImageCreateWithImageInRect returns a +1 reference
This post gives a good idea of how to use this property.
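At the UIImage level, a hedged version of the whole thing might look like this (xDifference and yDifference are the offsets from the question; image scale and the sign convention of the offsets are left for you to adjust):

// Crop the shifted region straight out of the source image's CGImage.
CGRect cropArea = CGRectMake(xDifference, yDifference,
                             image.size.width - xDifference,
                             image.size.height - yDifference);
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropArea);
UIImage *finalImage = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);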