I have an MKMapView that can contain multiple ARDVenueAnnotationView instances (a subclass of MKAnnotationView) with a custom image at the same coordinates, so these annotations overlap. To handle this, I change the anchorPoint of each annotation view's layer. This works as in the image below (the 4 centered annotations share the same coordinates):
In addition, I would like each annotation to change its image orientation so that the image's little tail points at the coordinate (don't mind the annotation order):
Here is my issue: when I call setImage: on my annotation view, constructing the image with + (UIImage *)imageWithCGImage:scale:orientation:, the orientation does not change. Here is the code that updates the image:
- (void)updateImage
{
    UIImage *selectedImage = [UIImage imageNamed:@"redPin"];
    if (!self.isCluster && self.selected) {
        selectedImage = [UIImage imageNamed:@"whitePin"];
    }

    UIImageOrientation orientation;
    switch (self.anchorCorner) {
        case ARDVenueAnnotationAnchorCornerBottomLeft:
            orientation = UIImageOrientationUpMirrored;
            break;
        case ARDVenueAnnotationAnchorCornerTopLeft:
            orientation = UIImageOrientationDown;
            break;
        case ARDVenueAnnotationAnchorCornerTopRight:
            orientation = UIImageOrientationDownMirrored;
            break;
        default:
            orientation = UIImageOrientationUp;
            break;
    }

    UIImage *image = [[UIImage alloc] initWithCGImage:selectedImage.CGImage scale:selectedImage.scale orientation:orientation];
    [self setImage:image];
}
Here, anchorCorner is the property I set when I want the annotation view to shift so that the image's little tail points at the coordinate.
This method never changes the image orientation (the default image has the tail at the bottom right), and it keeps rendering as in the first picture above.
When I add a UIImageView as a subview of my annotation view, it shows the correct image orientation (as in the second picture).
My questions:
Why does setImage: not honor the image orientation? Or am I doing something wrong?
How can I achieve this without adding a UIImageView as a subview? After all, the image property is there for a reason.
Try creating a new image from the original, redrawing it with the desired rotation:
static inline double radians(double degrees) { return degrees * M_PI / 180; }

- (UIImage *)image:(UIImage *)src withOrientation:(UIImageOrientation)orientation
{
    // Quarter turns swap the canvas width and height
    BOOL quarterTurn = (orientation == UIImageOrientationRight || orientation == UIImageOrientationLeft);
    CGSize canvasSize = quarterTurn ? CGSizeMake(src.size.height, src.size.width) : src.size;

    UIGraphicsBeginImageContextWithOptions(canvasSize, NO, src.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate around the canvas center so the result stays inside the bitmap
    CGContextTranslateCTM(context, canvasSize.width / 2, canvasSize.height / 2);
    if (orientation == UIImageOrientationRight) {
        CGContextRotateCTM(context, radians(90));
    } else if (orientation == UIImageOrientationLeft) {
        CGContextRotateCTM(context, radians(-90));
    } else if (orientation == UIImageOrientationDown) {
        CGContextRotateCTM(context, radians(180));
    } // UIImageOrientationUp: no rotation needed

    [src drawAtPoint:CGPointMake(-src.size.width / 2, -src.size.height / 2)];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
You may also need to flip the image; for that, use:
UIImage* flippedImage = [UIImage imageWithCGImage:image.CGImage
scale:image.scale
orientation:UIImageOrientationUpMirrored];
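Since MKAnnotationView appears to ignore the orientation metadata, you can bake the rotation into the bitmap before handing the image over. A hypothetical wiring inside updateImage from the question, assuming the helper above lives on the annotation view:
// Redraw the pin with the rotation baked into the pixels, then set a plain
// Up-oriented image so setImage: has nothing to misinterpret.
UIImage *rotated = [self image:selectedImage withOrientation:orientation];
[self setImage:rotated];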
Related
I am using the library linked here to resize a camera-captured image with the following code:
UIImage *image = [[UIImage alloc] initWithData:imageData];
CGFloat squareLength = SC_APP_SIZE.width;
CGFloat headHeight = _previewLayer.bounds.size.height - squareLength; // _previewLayer's frame is (0, 44, 320, 320 + 44)
CGSize size = CGSizeMake(squareLength * 2, squareLength * 2);
UIImage *scaledImage = [image resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:size interpolationQuality:kCGInterpolationHigh];
CGRect cropFrame = CGRectMake((scaledImage.size.width - size.width) / 2, (scaledImage.size.height - size.height) / 2 + headHeight, size.width, size.height);
UIImage *croppedImage = [scaledImage croppedImage:cropFrame];

UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
if (orientation != UIDeviceOrientationPortrait) {
    CGFloat degree = 0;
    if (orientation == UIDeviceOrientationPortraitUpsideDown) {
        degree = 180; // M_PI
    } else if (orientation == UIDeviceOrientationLandscapeLeft) {
        degree = -90; // -M_PI_2
    } else if (orientation == UIDeviceOrientationLandscapeRight) {
        degree = 90; // M_PI_2
    }
    croppedImage = [croppedImage rotatedByDegrees:degree];
}
The code works regardless of orientation, back camera or front camera, when the image is captured directly through the camera.
The problem happens when I use the same code (except reading the image's EXIF orientation instead of the device orientation) on an image that was originally camera-captured but is accessed through the iOS camera roll.
Case: the image saved in the camera roll was captured in landscape orientation by the BACK camera. The image then carries an EXIF orientation indicating it was captured in portrait mode, yet it is still a widescreen image.
The code crops the image disproportionately, leaving a black bar of empty space at the edge. I can't figure out what the problem is. Can someone point me in the right direction?
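A likely cause: a camera-roll UIImage keeps its pixels in sensor orientation and carries the rotation only as metadata, so the CGImage being cropped does not match the size you measured. A minimal sketch of a workaround, under that assumption: redraw the image once so its orientation becomes Up before resizing and cropping.
// A minimal sketch: redraw the image so the pixel data matches the logical
// size/orientation reported by -size, then run the resize/crop as before.
- (UIImage *)normalizedImage:(UIImage *)image
{
    if (image.imageOrientation == UIImageOrientationUp) {
        return image; // pixels already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}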
I am using AVCapture to capture images from the camera. Everything works fine except for this issue:
I need the final captured image to match what is visible in the camera preview, but the captured image shows a larger area than the preview does. How can I get the final stillImageOutput to match exactly what is visible?
Any help would be highly appreciated.
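Without seeing the session setup, here is a hedged sketch of one common approach: ask the preview layer which normalized region of the sensor output it actually displays, and crop the captured image to that region. previewLayer and capturedImage are placeholder names, and the sketch assumes the captured image's pixels are already upright (orientation Up).
// Crop the still image to the region shown by the preview layer.
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
CGImageRef cgImage = capturedImage.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
CGRect cropRect = CGRectMake(outputRect.origin.x * width,
                             outputRect.origin.y * height,
                             outputRect.size.width * width,
                             outputRect.size.height * height);
CGImageRef croppedRef = CGImageCreateWithImageInRect(cgImage, cropRect);
UIImage *visibleImage = [UIImage imageWithCGImage:croppedRef
                                            scale:capturedImage.scale
                                      orientation:capturedImage.imageOrientation];
CGImageRelease(croppedRef);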
Use your view/image view object name instead of contentScrollview. This will render the view and give you an image.
For reference: https://charangiri.wordpress.com/2014/09/18/how-to-render-screen-taking-screen-shot-programmatically/
- (UIImage *)createScreenshotOfCompleteScreen
{
    UIImage *image = nil;
    UIGraphicsBeginImageContext(contentScrollview.contentSize);
    {
        CGPoint savedContentOffset = contentScrollview.contentOffset;
        CGRect savedFrame = contentScrollview.frame;

        contentScrollview.contentOffset = CGPointZero;
        contentScrollview.frame = CGRectMake(0, 0, contentScrollview.contentSize.width, contentScrollview.contentSize.height);

        // drawViewHierarchyInRect:afterScreenUpdates: is available from iOS 7
        if ([[[UIDevice currentDevice] systemVersion] intValue] >= 7)
        {
            [contentScrollview drawViewHierarchyInRect:contentScrollview.bounds afterScreenUpdates:YES];
        }
        else
        {
            [contentScrollview.layer renderInContext:UIGraphicsGetCurrentContext()];
        }

        image = UIGraphicsGetImageFromCurrentImageContext();

        contentScrollview.contentOffset = savedContentOffset;
        contentScrollview.frame = savedFrame;
    }
    UIGraphicsEndImageContext();
    return image;
}
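Hypothetical usage, assuming contentScrollview is connected:
// Capture the scroll view's full content and save it to the photo album.
UIImage *snapshot = [self createScreenshotOfCompleteScreen];
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil);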
This is my code:
- (void)scrollViewDidEndScrollingAnimation:(UIScrollView *)scrollView
{
    // at this point the web view has scrolled to the next section;
    // I save the offset to make the code a little easier to read
    CGFloat offset = _webPage.scrollView.contentOffset.y;

    UIGraphicsBeginImageContextWithOptions(_webPage.bounds.size, NO, 0);
    [_webPage.layer renderInContext:UIGraphicsGetCurrentContext()];
    viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);

    // if we are not done yet, scroll to the next section
    if (offset < _webPage.scrollView.contentSize.height)
    {
        [_webPage.scrollView scrollRectToVisible:CGRectMake(0, _webPage.frame.size.height + offset, _webPage.frame.size.width, _webPage.frame.size.height) animated:YES];
    }
}
In this code I save an undefined number of screenshots (UIImages) by scrolling the web view. This works: my photo gallery ends up with all the parts of the web page.
But I don't want parts, I want ONE long UIImage. So how do I put my UIImages together (one by one?)?
You can write a UIImage category to do that
UIImage+Combine.h
#import <UIKit/UIKit.h>

@interface UIImage (Combine)

+ (UIImage *)imageByCombiningImage:(UIImage *)firstImage withImage:(UIImage *)secondImage;

@end
UIImage+Combine.m
#import "UIImage+Combine.h"
#implementation UIImage (Combine)
+ (UIImage*)imageByCombiningImage:(UIImage*)firstImage withImage:(UIImage*)secondImage {
UIImage *image = nil;
CGSize newImageSize = CGSizeMake(MAX(firstImage.size.width, secondImage.size.width), firstImage.size.height + secondImage.size.height);
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, [[UIScreen mainScreen] scale]);
} else {
UIGraphicsBeginImageContext(newImageSize);
}
[firstImage drawAtPoint:CGPointMake(roundf((newImageSize.width-firstImage.size.width)/2), 0)];
[secondImage drawAtPoint:CGPointMake(roundf(((newImageSize.width-secondImage.size.width)/2) ),
roundf((newImageSize.height-secondImage.size.height)))];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Then you can call the function in your code with:
UIImage *img = [UIImage imageByCombiningImage:image1 withImage:image2];
This will draw a new image with the width of the wider of the two images and the height of both images combined. image1 will be at the top and image2 below it.
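For the original problem (an unknown number of screenshots), a hedged sketch: fold the captured pages into one tall image with the category above. pageImages is a hypothetical NSArray holding the captures in top-to-bottom order.
// Stitch the screenshots one by one into a single long image.
UIImage *longImage = [pageImages firstObject];
for (NSUInteger i = 1; i < pageImages.count; i++) {
    longImage = [UIImage imageByCombiningImage:longImage withImage:pageImages[i]];
}
UIImageWriteToSavedPhotosAlbum(longImage, nil, nil, nil);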
I am trying to check the current orientation of an image and flip it to the opposite of what it currently is.
This code works if I only have one element on my screen, but as soon as I grow the array it's a 50/50 chance of getting the orientation right.
- (IBAction)flipBtn:(id)sender
{
    UIImage *flippedImage;
    if (flipBtn.tag == FLIP_BUTTON_TAG)
    {
        UIImage *sourceImage1 = self.outletImageView.image;
        flippedImage = [UIImage imageWithCGImage:sourceImage1.CGImage scale:1.0 orientation:UIImageOrientationUpMirrored];
        flipBtn.tag = FLIP_BUTTON_TAG2;
    }
    else
    {
        UIImage *sourceImage1 = self.outletImageView.image;
        flippedImage = [UIImage imageWithCGImage:sourceImage1.CGImage scale:1.0 orientation:UIImageOrientationUp];
        flipBtn.tag = FLIP_BUTTON_TAG;
    }
    NSInteger index1 = [imgViewArray indexOfObject:self.outletImageView];
    [imgViewArray removeObject:self.outletImageView];
    self.outletImageView.image = flippedImage;
    [imgViewArray insertObject:self.outletImageView atIndex:index1];
}
Instead of: [imgViewArray removeObject:self.outletImageView];
Write: [imgViewArray removeObjectAtIndex:index1];
Afterwards you need to check flippedImage.imageOrientation == UIImageOrientationUp to see whether it is flipped or not, and make the appropriate change.
You can verify which orientation the picture has by accessing the info dictionary: [info objectForKey:@"Orientation"].
Here is a method that returns the new desired orientation based on that check (you'd pass flippedImage.imageOrientation as the argument):
- (UIImageOrientation)orientation:(int)orientation
{
    UIImageOrientation newOrientation;
    switch (orientation)
    {
        case 1:
            newOrientation = UIImageOrientationUp;
            break;
        case 2:
            newOrientation = UIImageOrientationUpMirrored;
            break;
        default:
            newOrientation = UIImageOrientationUp; // fall back so the variable is never uninitialized
            break;
    }
    return newOrientation;
}
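As an alternative sketch, you could skip the button tag entirely (it goes stale once several image views share one button) and derive the target orientation from the image itself:
// Flip based on the image's own orientation so each image view in
// imgViewArray stays independent of the shared button tag.
UIImage *src = self.outletImageView.image;
UIImageOrientation target = (src.imageOrientation == UIImageOrientationUpMirrored)
                                ? UIImageOrientationUp
                                : UIImageOrientationUpMirrored;
self.outletImageView.image = [UIImage imageWithCGImage:src.CGImage
                                                 scale:src.scale
                                           orientation:target];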
I have two view controller classes. On the first I have an image view with a text field inside it; on the second view controller there is an image view. The first view controller has a done button, and on tapping it I want to pass the image view together with its text field to the second view controller's image view.
Is there any way to do it?
Please suggest.
- (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
{
    // Create a new context the size of the frame
    CGSize targetImageSize = CGSizeMake(frame.size.width, frame.size.height);
    // Check for the retina image rendering option
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(targetImageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(targetImageSize);

    CGContextRef context2 = UIGraphicsGetCurrentContext();

    // Render the view into the context
    [[view layer] renderInContext:context2];

    // Get the rendered image
    UIImage *original_image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return original_image;
}
Use the function above to render the image, and the one below to merge the images:
- (UIImage *)mergeImage:(UIImage *)img
{
    CGSize offScreenSize = CGSizeMake(206, 432);
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(offScreenSize, NO, 0);
    else
        UIGraphicsBeginImageContext(offScreenSize);

    CGRect rect = CGRectMake(0, 0, 206, 432);
    [imgView.image drawInRect:rect];
    [img drawInRect:rect];

    UIImage *imagez = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return imagez;
}
You can change the coordinates, height, and width according to your requirements.
Example:
This method is declared in ClassB.h:
- (void)setProperties:(UIImage *)myImage MyLabel:(UILabel *)myLabel;
It is implemented in ClassB.m:
- (void)setProperties:(UIImage *)myImage MyLabel:(UILabel *)myLabel
{
    // assign myImage and myLabel here
}
Then in ClassA:
ClassB *classB = [[ClassB alloc] init];
[classB setProperties:myImage MyLabel:myLabel];
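A hedged sketch tying the pieces together, with hypothetical names (containerView for the view holding the image view and text field, SecondViewController with a capturedImage property for the destination):
// Render the image view plus its text field into one UIImage, then pass it on.
UIImage *rendered = [self renderImageFromView:self.containerView
                                     withRect:self.containerView.bounds];
SecondViewController *secondVC = [[SecondViewController alloc] init];
secondVC.capturedImage = rendered; // assign to the image view in viewDidLoad
[self.navigationController pushViewController:secondVC animated:YES];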