I tried implementing this answer: https://stackoverflow.com/a/22716610 to the problem of adding overlays to an MKMapSnapshotter snapshot on iOS 7 (where you can't use the renderInContext: method). I did this as shown below, but the returned image contains only the map, with no overlays. Forgive me, I'm quite new to this. Thanks.
-(void)mapViewDidFinishRenderingMap:(MKMapView *)mapView fullyRendered:(BOOL)fullyRendered
{
if (mapView.tag == 100) {
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.size = mapView.frame.size;
options.scale = [[UIScreen mainScreen] scale];
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
if (error) {
NSLog(#"[Error] %#", error);
return;
}
UIImage *image = snapshot.image;
UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
{
[image drawAtPoint:CGPointMake(0, 0)];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextSetLineWidth(context,5.0f);
CGContextBeginPath(context);
bool first = YES;
NSArray *overlays = mapView.overlays;
for (id <MKOverlay> overlay in overlays) {
CGPoint point = [snapshot pointForCoordinate:overlay.coordinate];
if(first)
{
first = NO;
CGContextMoveToPoint(context,point.x, point.y);
}
else{
CGContextAddLineToPoint(context,point.x, point.y);
}
}
UIImage *compositeImage = UIGraphicsGetImageFromCurrentImageContext();
NSData *data = UIImagePNGRepresentation(compositeImage);
placeToSave = data;
NSLog(#"MapView Snapshot Saved.");
//show image for debugging
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 200, 320, 320)];
imageView.image = compositeImage;
[self.view addSubview:imageView];
}
UIGraphicsEndImageContext();
}];
[mapView setHidden:YES];
}
}
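For reference, two things stand out in the drawing code above, and either could explain the missing overlays: the path is built but never stroked (nothing appears without a CGContextStrokePath call), and overlay.coordinate yields only each overlay's center point rather than its geometry. Below is a sketch of the drawing loop under the assumption that the overlays are MKPolyline instances; other overlay types would need their own handling.
for (id <MKOverlay> overlay in mapView.overlays) {
    if ([overlay isKindOfClass:[MKPolyline class]]) {
        MKPolyline *polyline = (MKPolyline *)overlay;
        NSUInteger count = polyline.pointCount;
        // Copy out every vertex of the polyline, not just its center coordinate.
        CLLocationCoordinate2D *coords = malloc(count * sizeof(CLLocationCoordinate2D));
        [polyline getCoordinates:coords range:NSMakeRange(0, count)];
        for (NSUInteger i = 0; i < count; i++) {
            CGPoint point = [snapshot pointForCoordinate:coords[i]];
            if (i == 0) {
                CGContextMoveToPoint(context, point.x, point.y);
            } else {
                CGContextAddLineToPoint(context, point.x, point.y);
            }
        }
        free(coords);
    }
}
CGContextStrokePath(context); // without this, the path is never actually drawn
UIGraphicsGetImageFromCurrentImageContext() should then be called only after the stroke, as in the original code.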
I have implemented zoom functionality in the camera with GPUImage, but when I capture an image from the camera with zoom and save it, it is still saved as a normal picture (no zoom applied). I want the image to be saved to the album exactly as it was captured, whatever the zoom. How can I solve this problem? Any suggestion would be great. Thanks, guys. My code:
- (void)viewDidLoad {
[super viewDidLoad];
self.library = [[ALAssetsLibrary alloc] init];
[self setViewLayOut];
[self setupFilter];
[self setZoomFunctionlityOnCamera];
}
- (void)setupFilter;
{
videoCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
switch (filterType)
{
case GPUIMAGE_COLORINVERT:
{
self.title = #"Color Negative";
filter = [[GPUImageColorInvertFilter alloc] init];
};
break;
case GPUIMAGE_GRAYSCALE:
{
self.title = #"Black and White Positive";
filter = [[GPUImageGrayscaleFilter alloc] init];
};
break;
default:
filter = [[GPUImageFilter alloc] init];
self.title = @"Color Positive";
break;
}
videoCamera.runBenchmark = YES;
filterView = (GPUImageView *)cameraView;
[filter addTarget:filterView];
[videoCamera addTarget:filter];
[videoCamera startCameraCapture];
}
- (IBAction)clickPhotoBtn:(id)sender {
if (!isCameraPermissionAccessed) {
[self showAccessDeniedMessage:@"Camera permission denied" withMessage:@"To enable, please go to settings and allow camera permission for this app."];
return;
}
[videoCamera capturePhotoAsJPEGProcessedUpToFilter:filter withCompletionHandler:^(NSData *processedJPEG, NSError *error){
if (error!=nil)
{
[self showErrorMessage:#"Unable to capture image" ];
return ;
}
else {
UIImage *image = [UIImage imageWithData:processedJPEG];
if (filterType == GPUIMAGE_GRAYSCALE) {
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
GPUImageColorInvertFilter *stillImageFilter = [[GPUImageColorInvertFilter alloc] init];
[stillImageSource addTarget:stillImageFilter];
[stillImageFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentFramebuffer];
UIImageWriteToSavedPhotosAlbum(currentFilteredVideoFrame, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
else{
UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
}
}];
}
Use the code below; it may be helpful to you:
+(UIImage*)croppedImageWithImage:(UIImage *)image zoom:(CGFloat)zoom
{
CGFloat zoomReciprocal = 1.0f / zoom;
CGPoint offset = CGPointMake(image.size.width * ((1.0f - zoomReciprocal) / 2.0f), image.size.height * ((1.0f - zoomReciprocal) / 2.0f));
CGRect croppedRect = CGRectMake(offset.x, offset.y, image.size.width * zoomReciprocal, image.size.height * zoomReciprocal);
CGImageRef croppedImageRef = CGImageCreateWithImageInRect([image CGImage], croppedRect);
UIImage* croppedImage = [[UIImage alloc] initWithCGImage:croppedImageRef scale:[image scale] orientation:[image imageOrientation]];
CGImageRelease(croppedImageRef);
return croppedImage;
}
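To tie the two together, here is a hedged sketch of how the helper could be wired into the capture handler above; currentZoomLevel is a hypothetical property tracking the zoom factor applied to the preview, and the helper is assumed to live on the same class.
// Inside capturePhotoAsJPEGProcessedUpToFilter:'s completion handler:
UIImage *image = [UIImage imageWithData:processedJPEG];
// Crop to match the on-screen zoom before saving (currentZoomLevel is assumed).
UIImage *zoomedImage = [[self class] croppedImageWithImage:image zoom:self.currentZoomLevel];
UIImageWriteToSavedPhotosAlbum(zoomedImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);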
I'm currently using MKMapSnapshotter, but I noticed (similar to MKMapView) that it holds on to a large amount of memory and never releases it for the duration of the app. I've tried releasing the memory, but to no avail:
-(void)viewWillDisappear:(BOOL)animated{
[super viewWillDisappear:animated];
[self releaseMKMapSnapshotMem];
}
-(void)releaseMKMapSnapshotMem{
self.snapshotter = nil; // MKMapSnapshotter
self.options = nil; // MKMapSnapshotOptions
}
Any help is greatly appreciated.
Update
Includes more detail
MKMapSnapshotOptions * snapOptions= [[MKMapSnapshotOptions alloc] init];
self.options=snapOptions;
CLLocation * salonLocation = [[CLLocation alloc] initWithLatitude:self.lat longitude:self.long];
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(location.coordinate, 300, 300);
self.options.region = region;
self.options.size = self.view.frame.size;
self.options.scale = [[UIScreen mainScreen] scale];
MKMapSnapshotter * mapSnapShot = [[MKMapSnapshotter alloc] initWithOptions:self.options];
self.snapshotter =mapSnapShot;
[self.snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
if (error) {
NSLog(#"[Error] %#", error);
return;
}
UIImage *image = snapshot.image;
self.mapImage = image;
NSData *data = UIImagePNGRepresentation(image);
[self saveMapDataToCache:data WithKey:mapName];
}];
Try this:
MKMapSnapshotOptions * snapOptions= [[MKMapSnapshotOptions alloc] init];
CLLocation * salonLocation = [[CLLocation alloc] initWithLatitude:self.lat longitude:self.long];
// the snippet above references an undefined 'location'; salonLocation is what is created here
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(salonLocation.coordinate, 300, 300);
snapOptions.region = region;
snapOptions.size = self.view.frame.size;
snapOptions.scale = [[UIScreen mainScreen] scale];
MKMapSnapshotter * mapSnapShot = [[MKMapSnapshotter alloc] initWithOptions: snapOptions];
[mapSnapShot startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
if (error) {
NSLog(#"[Error] %#", error);
return;
}
UIImage *image = snapshot.image;
NSData *data = UIImagePNGRepresentation(image);
[self saveMapDataToCache:data WithKey:mapName];
}];
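One more thing worth checking (an assumption on my part, not something shown in the question): if the snapshotter lives in a property and its completion block captures self strongly, the two keep each other alive and the memory is never returned. A weak reference breaks the cycle:
__weak typeof(self) weakSelf = self;
[self.snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"[Error] %@", error);
        return;
    }
    NSData *data = UIImagePNGRepresentation(snapshot.image);
    [weakSelf saveMapDataToCache:data WithKey:mapName]; // no strong capture of self
    weakSelf.snapshotter = nil; // allow the snapshotter to deallocate when done
}];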
I'm doing pretty much the same as you're doing with only two differences:
options.region = _mapView.region;
options.size = _mapView.frame.size;
my _mapView is the map being displayed in the view controller...
I want to develop a small jigsaw puzzle game but am having problems combining the image pieces. I can split the image, but I cannot combine the pieces as required. Here is what I am doing.
For cropping:
[customImageView setImage:[self cropImage:self.mainImage withRect:mCropFrame]];
- (UIImage *) cropImage:(UIImage*)originalImage withRect:(CGRect)rect
{
CGImageRef croppedRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef); // the CGImage from a Create function is not managed by ARC; release it to avoid a leak
return cropped;
}
For Clipping:
[self setClippingPath:[pieceBezierPathsMutArray_ objectAtIndex:i]:view];
- (UIImageView *) setClippingPath:(UIBezierPath *)clippingPath : (UIImageView *)imgView;
{
if (![[imgView layer] mask])
{
[[imgView layer] setMask:[CAShapeLayer layer]];
}
[(CAShapeLayer*) [[imgView layer] mask] setPath:[clippingPath CGPath]];
return imgView;
}
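For illustration only, here is how the clipping helper might be invoked with a simple stand-in path; the real entries of pieceBezierPathsMutArray_ would be the puzzle-piece outlines.
// Illustrative mask: a rounded rectangle in place of a real puzzle-piece outline.
UIBezierPath *piecePath = [UIBezierPath bezierPathWithRoundedRect:imgView.bounds cornerRadius:12.0f];
[self setClippingPath:piecePath :imgView];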
For Combining:
-(id)initByCombining:(id)oneView andOther:(id)twoView withRegularSize:(CGSize)pieceSize;
{
CustomImageView *one = oneView;//[oneView copy];
CustomImageView *two = twoView;
CGPoint onepoint, twopoint;
if (one.frame.origin.x < two.frame.origin.x)
{
onepoint.x = 0;
twopoint.x = onepoint.x + one.frame.size.width;
}
else
{
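// note: onepoint.x is read on the next line before it has been assigned (perhaps onepoint.x = two.frame.size.width was intended)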
onepoint.x = onepoint.x + one.frame.size.width;
twopoint.x = 0;
}
if (one.frame.origin.y < two.frame.origin.y)
{
onepoint.y = 0;
twopoint.y = 0;
}
else
{
onepoint.y = 0;
twopoint.y = 0;
}
CGRect frame;
frame.origin = CGPointZero;
frame.size.width = onepoint.x + one.frame.size.width + two.frame.size.width;
frame.size.height = MAX(one.frame.size.height , two.frame.size.height);
if (self = [self initWithFrame:frame])
{
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
UIGraphicsBeginImageContext(frame.size);
[one.image drawAtPoint:onepoint];
[two.image drawAtPoint:twopoint];
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
self.image = UIGraphicsGetImageFromCurrentImageContext();
self.backgroundColor = [UIColor redColor];
UIGraphicsEndImageContext();
UIGraphicsPopContext();
self.center = one.center;
self.transform = CGAffineTransformScale(incomingTransform, 0.5, 0.5);
self.previousRotation = self.transform;
}
return self;
}
My initial image is this:
After cropping and clipping it becomes like this:
It should look like this after combining.
But it is becoming like this.
When you want to combine the images, I would suggest placing the clippings inside another UIView, so that the UIView becomes a superview of all the placed clippings. After the images have been placed, you can do something like this:
UIGraphicsBeginImageContextWithOptions(superView.bounds.size, superView.opaque, 0.0);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * CombinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return CombinedImage;
and then just save it as follows:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[CombinedImage CGImage] orientation:(ALAssetOrientation)[SAVEIMAGE imageOrientation] completionBlock:^(NSURL *assetURL, NSError *error){
if (error) {
// TODO: error handling
UIAlertView *al = [[UIAlertView alloc] initWithTitle:@"" message:@"Error saving image, please try again" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil, nil];
[al show];
} else {
NSData *imageData = UIImagePNGRepresentation(CombinedImage);
UIImage *finalImage=[UIImage imageWithData:imageData];
}
Hope this helps.
Based on this answer:
Snapshot of MKMapView
I tried to convert my map to picture, but the App never enters the snapshotter block.
Why?
//Get location and then get a picture of the map.
CLLocation *userLoc = self.map.userLocation.location; //self.map is an MKMapView;
CLLocationCoordinate2D punto = userLoc.coordinate;
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(punto, 500, 500);
[self.map setRegion:(region)];
[self.map setShowsUserLocation:YES];
//Place a Pin in actual location.
MKPointAnnotation *pin = [[MKPointAnnotation alloc]init];
pin.coordinate = punto;
pin.title = #"LocalizaciĆ³n";
[self.map addAnnotation:pin];
//Convert map to picture.
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = self.map.region;
options.scale = [UIScreen mainScreen].scale;
options.size = self.map.frame.size;
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
NSLog(#"Entering the block."); //Never prints!
// get the image associated with the snapshot
UIImage *image = snapshot.image;
NSLog(#"imagen %#",image); //Niether do this!
// Get the size of the final image
CGRect finalImageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Get a standard annotation view pin. Clearly, Apple assumes that we'll only want to draw standard annotation pins!
MKAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:@""];
UIImage *pinImage = pin.image;
// ok, let's start to create our final image
UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
// first, draw the image from the snapshotter
[image drawAtPoint:CGPointMake(0, 0)];
// now, let's iterate through the annotations and draw them, too
for (id<MKAnnotation>annotation in self.map.annotations)
{
CGPoint point = [snapshot pointForCoordinate:annotation.coordinate];
if (CGRectContainsPoint(finalImageRect, point)) // this is too conservative, but you get the idea
{
CGPoint pinCenterOffset = pin.centerOffset;
point.x -= pin.bounds.size.width / 2.0;
point.y -= pin.bounds.size.height / 2.0;
point.x += pinCenterOffset.x;
point.y += pinCenterOffset.y;
[pinImage drawAtPoint:point];
}
}
// grab the final image
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
NSLog(#"Picture inside the block %#",finalImage); //Never prints.
UIGraphicsEndImageContext();
// and save it
NSData *data = UIImagePNGRepresentation(finalImage);
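// note: PNG bytes are written under a .jpg name, and a bare relative path like this is not writable in the iOS sandbox; build a path under NSDocumentDirectory instead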
[data writeToFile:#"Picture.jpg" atomically:YES];
if (error) {
NSLog(#"Error"); //This is not printed.
}else{
NSLog(#"Success!"); //Neither do this.
self.fotoParaEnviar = finalImage;
}
}];
NSLog(#"Picture outside the block %#",self.fotoParaEnviar); //This is allway NULL
Everything looks like it is instantiated fine.
So why is the block never executed?
If you are already displaying the map, then no magic is required to save it into an image; Snapshot of MKMapView in iOS7 gets it almost right. I don't understand why they get a black image, but I do not pass 0.0 as the rendering scale, I pass 1.0 or 2.0 (retina), and maybe their code is not on the main thread, as it should be for graphics.
Anyway, I've just tried this on 7.1 and got the correct image with user blue dot and annotation pins:
[ObCommons createJPEGfromView:self.map withSize:self.map.bounds.size toPath:[ObCommons getPathInDocuments:@"test.jpg"] quality:1.0f];
+(UIImage*) createImageFromView:(UIView*)newt withSize:(CGSize)rensize {
UIGraphicsBeginImageContextWithOptions(rensize, NO, 2.0); // 1.0 or 2.0 for retina (get it from UIScreen)
[newt.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
+(UIImage*) createJPEGfromView:(UIView*)newt withSize:(CGSize)rensize toPath: (NSString*)filePath quality:(float)quality{
UIImage *ximage = [ObCommons createImageFromView:newt withSize:rensize];
NSData *imageData = UIImageJPEGRepresentation(ximage, quality);
if (filePath!=nil) {
[imageData writeToFile:filePath atomically:YES];
}
return ximage;
}
+(CGFloat)retinaFactor {
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] > 1) {
return [[UIScreen mainScreen]scale];
} else {
return 1.0f;
}
}
To be more readable, here is gist of associated methods: https://gist.github.com/quentar/d92e95728ce0d950db65
What if you change this:
[snapshotter startWithQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
To this:
[snapshotter startWithQueue:dispatch_get_main_queue()
And this NSLog(#"Picture inside the block %#",self.fotoParaEnviar); will be always NULL as snapshotter is async and by the time you reach your NSLog the code above still executs its block
Or you might also try this instead:
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
UIImage *image = snapshot.image;
// and so on
}];
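Either way, the snapshot only exists once the handler runs, so anything that needs the image belongs inside the block. A sketch (fotoParaEnviar is the asker's own property):
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"[Error] %@", error);
        return;
    }
    // Hop back to the main queue before touching UI or view-controller state.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.fotoParaEnviar = snapshot.image;
    });
}];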
I will close this question with the following solution:
FIRST let the MKMapView load completely in the view, and then enter the block to convert it to a UIImage.
Thank you all for your help.
I have a problem capturing the full screen of an iCarousel: it captures only the current index of the carousel.
UIGraphicsBeginImageContext(captureView.bounds.size);
[captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try something like this:
- (void) getFullScreenScreenShot
{
AppDelegate* appDelegate = (AppDelegate*)[[UIApplication sharedApplication] delegate];
UIView* superView = appDelegate.viewController.view;
CGRect fullScreenFrame = superView.frame;
UIGraphicsBeginImageContextWithOptions(fullScreenFrame.size, YES, 0.0f);
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0f, 0.0f);
[superView.layer renderInContext: UIGraphicsGetCurrentContext()];
UIImageView* screenShot = [[UIImageView alloc] initWithImage: UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
NSData* imageData = UIImageJPEGRepresentation(screenShot.image, 1.0);
NSString* previewFileNamePath = [[CPFileManager documentsPath] stringByAppendingString: @"image.jpg"];
if ([imageData writeToFile: previewFileNamePath
atomically: NO])
{
NSLog(#"See filename:%#", previewFileNamePath);
}
else
{
NSLog(#"Error: %#", previewFileNamePath);
}
}
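As an aside, and a variation rather than part of the original answer: on iOS 7 and later, drawViewHierarchyInRect:afterScreenUpdates: can replace the renderInContext: call and captures what is actually on screen:
// iOS 7+ alternative to renderInContext:
UIGraphicsBeginImageContextWithOptions(superView.bounds.size, YES, 0.0f);
[superView drawViewHierarchyInRect:superView.bounds afterScreenUpdates:YES];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();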