UIImage becomes NSNumber - iOS

I am trying to save a UIImage to the app delegate. When I load that UIViewController for the first time, it returns a UIImage (see first screenshot). But when I redirect back to that controller, the UIImage becomes an NSNumber (see second screenshot).
Help me! I've been stuck here for almost 2 days.
Before I leave the controller, I set the image on the app delegate as follows:
UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, YES, 0.0);
[self.imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
del.getImage = resultingImage; // del is the app delegate

This problem is most easily solved using the debugger. First, as the comments said, change the property name to, say, myImage. Then, in your UIViewController class, provide your own setter:
- (void)setMyImage:(UIImage *)myImage
{
    NSLog(@"Image class: %@", NSStringFromClass([myImage class]));
    if ([myImage isKindOfClass:[NSNumber class]]) {
        NSLog(@"YIKES: a number!");
    }
    _myImage = myImage; // assumes ARC
}
Put a breakpoint on the second NSLog, or on both. When you hit the second breakpoint, look at the call stack and you will find out who is setting the property to an NSNumber.

Related

Take a snapshot of a view without adding it to the screen

I'm trying to load a view from a nib, configure it, and take a snapshot of it, without adding it to the screen:
+ (void)composeFromNibWithImage:(UIImage *)catImage completion:(CompletionBlock)completion {
    NSArray *nibContents = [[NSBundle mainBundle] loadNibNamed:@"CatNib" owner:nil options:nil];
    CatView *catView = [nibContents firstObject];
    // here, catView is the correct size from the nib, but is blank if inspected with the debugger
    catView.catImageView.image = catImage;
    catView.furButton.selected = YES;
    UIImage *composite = [UIImage snapshot:catView];
    completion(composite);
}
where snapshot is the typical:
+ (UIImage *)snapshot:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
However, both catView and composite are sized correctly but blank as I step through the code. How can I create a UIImage from a view that is loaded from a nib, without adding the view to the screen?
I had a similar issue some time ago, and as far as I can tell it is not possible to take a snapshot of a view that is not actually on the screen. As a workaround, I placed the view I wanted to snapshot outside of the current view controller's view bounds, so that it isn't visible. In that case it was possible to create a valid snapshot. Hope this helps :)
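A minimal sketch of that workaround, reusing the names from the question (CatNib, catView, and the snapshot: helper; the offscreen offset is arbitrary):
NSArray *nibContents = [[NSBundle mainBundle] loadNibNamed:@"CatNib" owner:nil options:nil];
CatView *catView = [nibContents firstObject];
// Park the view outside the visible bounds: the user never sees it,
// but it is attached to a window and can render.
CGRect frame = catView.frame;
frame.origin = CGPointMake(-frame.size.width, -frame.size.height);
catView.frame = frame;
[self.view addSubview:catView];
catView.catImageView.image = catImage;
UIImage *composite = [UIImage snapshot:catView]; // no longer blank
[catView removeFromSuperview];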
This feels a bit hacky, but I tried it and it does get the job done. If you load the view from your nib behind the main view, you can take a screenshot of just that layer. As long as it's not a memory-intensive view, the user will never know it was added to the superview.
UIGraphicsBeginImageContext(CGSizeMake(catView.frame.size.width, catView.frame.size.height));
CGContextRef context = UIGraphicsGetCurrentContext();
[catView.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So run this code after you load your nib behind your main view. Once you have the screenshot, you can remove the view with [catView removeFromSuperview]. The same pattern captures the view controller's main view:
UIGraphicsBeginImageContext(self.view.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIImage loaded from managed object data doesn't show up in UIImageView

I've got a pretty simple iOS app (adapted from basic master-detail boilerplate).
I've got RestKit set up to load data from a server. If an object's image URL gets updated, I download the image (using an AFHTTPClient subclass) and save its data using UIImagePNGRepresentation(image). Pretty straightforward.
So now, I've got a database that's already populated with objects - including their imageData. But for some reason, though I can get a UIImage instance from the data, that UIImage won't show up in a UIImageView.
I've got a category on the auto-generated NSManagedObject subclass, which (among other things) pulls the image data, and returns a UIImage instance:
@implementation Artwork (Helpers)
// ...
- (UIImage *)image {
    if (self.imageData) {
        return [UIImage imageWithData:self.imageData];
    }
    return nil;
}
@end
In my detail view, I have a UIImageView, whose image is set from the above method. Here's the relevant bit from my detail view controller. It gets called just before the segue, and works fine for setting the description text, but doesn't set the image correctly.
- (void)configureView {
    // Update the user interface for the detail item (an Artwork instance in this case).
    if (self.detailItem) {
        // this works just fine
        self.detailDescriptionText.text = self.detailItem.rawDescription;
        // ... but this doesn't! Nothing shows up in the image view
        UIImage *image = self.detailItem.image;
        if (image) {
            // Yes, the UIImage *is* there
            NSLog(@"UIImage instance: %@, size: %fx%f", image, image.size.width, image.size.height);
            // ... but this doesn't seem to have any effect
            self.imageView.image = image;
        }
    }
}
The NSLog call prints:
UIImage instance: <UIImage: 0x109a0d090>, size: 533.000000x300.000000
so it certainly seems like the UIImage object exists and has been unpacked from the data just like it should. But nothing shows up in the UIImageView.
Interestingly, if I set up a simple touch-listener on the detail view controller, I can show the image using the exact same code:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *image = self.detailItem.image;
    if (image) {
        NSLog(@"UIImage instance: %@, size: %fx%f", image, image.size.width, image.size.height);
        self.imageView.image = image;
    }
}
That works perfectly - tap the screen and the image shows up immediately, and the NSLog call prints:
UIImage instance: <UIImage: 0x10980a7e0>, size: 533.000000x300.000000
So there really is image data, and it does get unpacked into a proper UIImage - but it won't show up.
So, all in all, it seems like there's some sort of timing or threading issue. But here I'm drawing a blank.
Make sure to set your image on the main thread :)
dispatch_async(dispatch_get_main_queue(), ^(void) {
    /* your code here */
});
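Applied to the configureView method from the question, a minimal sketch might look like this (it assumes configureView can be called from a background queue, e.g. a network completion handler):
- (void)configureView {
    if (self.detailItem) {
        dispatch_async(dispatch_get_main_queue(), ^{
            // All UIKit access happens on the main queue
            self.detailDescriptionText.text = self.detailItem.rawDescription;
            self.imageView.image = self.detailItem.image;
        });
    }
}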

Draw text and add to a UIImage iOS 5/6

I have a textbox, where I want the written text to be added to a UIImage.
How can I draw an NSString onto a UIImage?
I've searched and found lots of examples, but none of them works; Xcode just gives me lots of errors.
Simply put, I want to draw an NSString onto a UIImage. The UIImage should be the same size as a predefined UIImageView, and be placed at its center.
Any ideas?
I think the solution is here; I have used this method in one of my applications.
Here lblText is my label object (and iv is the target image view). This method creates a new image with the given text and returns the new image object.
- (UIImage *)drawText:(NSString *)text inImage:(UIImage *)image atPoint:(CGPoint)point
{
    UIFont *font = lblText.font;
    UIGraphicsBeginImageContext(iv.frame.size);
    [image drawInRect:CGRectMake(0, 0, iv.frame.size.width, iv.frame.size.height)];
    CGRect rect = CGRectMake(point.x, point.y, lblText.frame.size.width, lblText.frame.size.height);
    [lblText.textColor set];
    [text drawInRect:CGRectIntegral(rect) withFont:font];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
In your comment below you passed iv.center as the point; try setting it manually like this:
CGPoint point;
point.x = 30; // for example, lblText.frame.origin.x
point.y = 40; // lblText.frame.origin.y
You have to call the above method like this:
UIImage *img = [self drawText:@"Test String" inImage:originalImage atPoint:point];
Here, img is another UIImage instance that stores the new image with the text. After calling this method, use img for your further work, for example:
[newimageviewObj setImage:img];
I hope this will help you.
UIImage is not a subclass of UIView, so you can't add a subview to it. Neither is NSString. If you want to show things on the screen, they should inherit from UIView.
So try this:
Create a UIImageView - set its image property to your UIImage instance.
Create a UILabel - set its text property to your NSString.
Add the UILabel as a subview of your UIImageView.
Finally, add your UIImageView to your current view controller's view.
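A minimal sketch of that hierarchy (myImage and myString stand in for your actual image and text):
UIImageView *imageView = [[UIImageView alloc] initWithImage:myImage];
UILabel *label = [[UILabel alloc] initWithFrame:imageView.bounds];
label.text = myString;
label.textAlignment = NSTextAlignmentCenter; // UITextAlignmentCenter on iOS 5
label.backgroundColor = [UIColor clearColor];
[imageView addSubview:label];     // text sits on top of the image
[self.view addSubview:imageView]; // show the composite on screen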
Emm, here are some thoughts.
I think one simple way is like this:
Put a UIImageView on aView;
Add a UITextView on aView;
Get a screenshot from aView.
This may work fine, but it may come with the problem that the screenshot is not sharp.
In that case, after steps 1 and 2, we can instead render a new image with UIGraphics.
(At that point we already know the position where the text should be on the image, so this should be ok.)
PS: Some images may not have an attribute like CGImage, and a CGImage is needed for this, so transform them first.

Merging two UIImages into one to be saved to library

I would like to know how I can merge 2 UIImages into 1. I would like to save the end product to the library. For saving images I'm using a UIButton. Here's a snippet of how I save a UIImageView's image:
- (IBAction)getPhoto:(id)sender {
    UIImage *imageToSave = imageOverlay.image;
    UIImageWriteToSavedPhotosAlbum(imageToSave, nil, nil, nil);
}
I've looked online and read about UIGraphicsBeginImageContext. I found an example, but I couldn't understand how to actually apply it to my case. Here's the one I've got so far:
- (UIImage *)addImage:(UIImage *)image secondImage:(UIImage *)image2
{
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    [image2 drawInRect:CGRectMake(10, 10, image2.size.width, image2.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Right now I have 2 UIImageViews, imageOverlay and imageView. If I use the method above, how do I assign the return value so I can pass it to UIImageWriteToSavedPhotosAlbum? Hope someone can point me in the right direction.
Thanks very much.
It seems like you don't have the declaration for the method in your header file. In the .h file of the class/view controller from which you're running the IBAction, add the first line of your method, - (UIImage *)addImage:..., and end it with a ; before the @end. That should work, though you'd have to implement it in the .m file that corresponds to that .h file.
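In other words, the declaration in the .h would look something like this (the class name is just a placeholder):
@interface MyViewController : UIViewController
- (UIImage *)addImage:(UIImage *)image secondImage:(UIImage *)image2;
@end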
Try this:
- (IBAction)getPhoto:(id)sender {
    UIImage *imageToSave = [self addImage:imageOverlay.image secondImage:imageView.image];
    UIImageWriteToSavedPhotosAlbum(imageToSave, nil, nil, nil);
}
Create an NSLog and pass them through it like this, where image1 and image2 are your images:
NSLog(@"testing images aren't nil. Details for image1: %@ image2: %@", image1, image2);
I'm a newbie too, and I don't really know a more sophisticated way to check whether an image is nil, but if you pass a nil image to the statement above, you'll just get nil instead of the values of those two images. If you get lots of information, even if it's a lot of output you may not understand (as I usually don't), at least you can test whether the images are nil or not.

Make a UIImage from a MKMapView

I want to create a UIImage from an MKMapView. My map is correctly displayed in the view; however, the UIImage produced is just a gray image. Here's the relevant snippet.
UIGraphicsBeginImageContext(mapView.bounds.size);
[mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Anyone know how to make a UIImage using MapKit?
I am using the same code, tested with iOS SDK 4.1, and it works fine. When the map is already displayed to the user and the user presses the button, this action will be called:
UIImage *image = [mapView renderToImage];
and here is the wrapper function, realized as a UIView extension:
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
So, the problem is not in that code part.
On iOS 7, there is a new MapKit API for this called MKMapSnapshotter, so you don't need to create a map view, load the tiles, and set up a graphics context to do the capturing yourself.
Take a look at https://developer.apple.com/library/ios/documentation/MapKit/Reference/MKMapSnapshotter_class/Reference/Reference.html
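A minimal sketch of that API (assuming you already have the mapView whose region you want to capture):
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;             // capture the same region the map shows
options.size = mapView.bounds.size;
options.scale = [UIScreen mainScreen].scale;
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (snapshot) {
        UIImage *mapImage = snapshot.image; // ready to use, no render pass needed
    }
}];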
Here is the improved function for retina display:
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    // IMPORTANT: using weak link on UIKit
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
Hey Loren, there are multiple layers in the mapView. I think the first one is the map and the second one is the Google layer. They might have changed something in MapKit after 3.1. You can try:
[[[mapView.layer sublayers] objectAtIndex:1] renderInContext:UIGraphicsGetCurrentContext()];
You can also try
CGRect rect = [mapView bounds];
CGImageRef mapImage = [mapView createSnapshotWithRect:rect];
Hope this helps.
Note that the mapView may not have finished loading, so the image may be gray. Since mapViewDidFinishLoadingMap: will not always be called, you should get the UIImage in mapViewDidFinishRenderingMap:fullyRendered:. So the code looks like this:
- (UIImage *)renderToImage:(MKMapView *)mapView
{
    UIGraphicsBeginImageContext(mapView.bounds.size);
    [mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
If you are calling this immediately after initializing the map (maybe in viewDidLoad?), you could get a gray image because the map has not finished drawing yet.
Try:
Calling the capture code using performSelector:withObject:afterDelay: with a short delay (even 0 seconds might work, so it fires right after the current method finishes)
If you are adding annotations, calling the capture code in the didAddAnnotationViews delegate method
Edit:
On the simulator, using performSelector, a zero delay works. On the device, a longer delay is required (about 5 seconds).
However, if you add annotations (and you capture in the didAddAnnotationViews method), it works immediately on both the simulator and the device.
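A minimal sketch of the annotation route (renderToImage is the UIView category from the earlier answer; handleMapImage: is a hypothetical handler, replace it with your own):
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray *)views
{
    // By the time this delegate method fires, the tiles and annotations
    // have been drawn, so the capture is no longer gray.
    UIImage *mapImage = [mapView renderToImage];
    [self handleMapImage:mapImage];
}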
