UIImage loaded from managed object data doesn't show up in UIImageView

I've got a pretty simple iOS app (adapted from basic master-detail boilerplate).
I've got RestKit set up to load data from a server. If an object's image URL gets updated, I download the image (using an AFHTTPClient subclass) and save its data using UIImagePNGRepresentation(image). Pretty straightforward.
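For context, the save step described above is roughly this (a sketch only; artwork and downloadedImage are hypothetical names, not from the original code):
// Hypothetical download-completion handler (names are placeholders)
NSData *pngData = UIImagePNGRepresentation(downloadedImage);
artwork.imageData = pngData;
NSError *saveError = nil;
if (![artwork.managedObjectContext save:&saveError]) {
    NSLog(@"Core Data save failed: %@", saveError);
}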
So now, I've got a database that's already populated with objects - including their imageData. But for some reason, though I can get a UIImage instance from the data, that UIImage won't show up in a UIImageView.
I've got a category on the auto-generated NSManagedObject subclass, which (among other things) pulls the image data, and returns a UIImage instance:
@implementation Artwork (Helpers)
// ...
- (UIImage *)image {
    if (self.imageData) {
        return [UIImage imageWithData:self.imageData];
    }
    return nil;
}
@end
In my detail view, I have a UIImageView, whose image is set from the above method. Here's the relevant bit from my detail view controller. It gets called just before the segue, and works fine for setting the description text, but doesn't set the image correctly.
- (void)configureView {
    // Update the user interface for the detail item (an Artwork instance in this case).
    if (self.detailItem) {
        // this works just fine
        self.detailDescriptionText.text = self.detailItem.rawDescription;
        // ... but this doesn't! Nothing is shown in the UIImageView.
        UIImage *image = self.detailItem.image;
        if (image) {
            // Yes, the UIImage *is* there
            NSLog(@"UIImage instance: %@, size: %fx%f", image, image.size.width, image.size.height);
            // ... but this doesn't seem to have any effect
            self.imageView.image = image;
        }
    }
}
The NSLog call prints:
UIImage instance: <UIImage: 0x109a0d090>, size: 533.000000x300.000000
so it certainly seems like the UIImage object exists and has been unpacked from the data just like it should. But nothing shows up in the UIImageView.
Interestingly, if I set up a simple touch-listener on the detail view controller, I can show the image using the exact same code:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *image = self.detailItem.image;
    if (image) {
        NSLog(@"UIImage instance: %@, size: %fx%f", image, image.size.width, image.size.height);
        self.imageView.image = image;
    }
}
That works perfectly - tap the screen and the image shows up immediately, and the NSLog call prints:
UIImage instance: <UIImage: 0x10980a7e0>, size: 533.000000x300.000000
So there really is image data, and it does get unpacked into a proper UIImage - but it won't show up.
So, all in all, it seems like there's some sort of timing or threading issue. But here I'm drawing a blank.

Make sure to set your image on the main thread :)
dispatch_async(dispatch_get_main_queue(), ^(void) {
    /* your code here */
});
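Applied to the question's configureView, that would look roughly like this (a sketch under the assumption that configureView can be invoked from a background callback, e.g. a RestKit or AFNetworking completion block):
- (void)configureView {
    if (self.detailItem) {
        dispatch_async(dispatch_get_main_queue(), ^{
            // UIKit views must only be touched on the main thread
            self.detailDescriptionText.text = self.detailItem.rawDescription;
            self.imageView.image = self.detailItem.image;
        });
    }
}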

Related

Saving an image displayed in a UIImageView

I have an array of images that are displayed in a UICollectionView.
When a cell in the collection view is pressed, that image is pushed to a view controller and displayed in a UIImageView.
I want to then be able to press a button and save the image to the user's camera roll.
But I'm having some trouble doing so...
I think I'm on the right lines with the code but can't get it all to work together:
- (IBAction)onClickSavePhoto:(id)sender {
    UIImage *img = [UIImage imageNamed:@"which ever image is being currently displayed in the image view"];
    UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
}
How can I change the code to allow the user to save the image displayed in the image view?
Thanks in advance!
UPDATE:
Found the solution to the problem in another post:
Save image in UIImageView to iPad Photos Library
How to save an image to the library:
You can use this function:
UIImageWriteToSavedPhotosAlbum(UIImage *image,
                               id completionTarget,
                               SEL completionSelector,
                               void *contextInfo);
You only need completionTarget, completionSelector and contextInfo if you want to be notified when the UIImage is done saving, otherwise you can pass in nil.
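As a sketch of how the pieces fit together (the imageView outlet name is an assumption, not from the original post), the displayed image can be read straight off the image view, and the completion selector must have exactly the signature UIKit expects:
- (IBAction)onClickSavePhoto:(id)sender {
    UIImage *img = self.imageView.image; // hypothetical outlet name
    if (img) {
        UIImageWriteToSavedPhotosAlbum(img, self,
            @selector(image:didFinishSavingWithError:contextInfo:), NULL);
    }
}
// UIKit calls back with this exact selector signature
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    if (error) {
        NSLog(@"Save failed: %@", error);
    } else {
        NSLog(@"Saved to the camera roll");
    }
}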
Supposedly a faster way to save an image to the library than using UIImageWriteToSavedPhotosAlbum:
There's a much faster way than UIImageWriteToSavedPhotosAlbum, using the AssetsLibrary framework (iOS 4.0+):
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage]
                          orientation:(ALAssetOrientation)[image imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        // TODO: error handling
    } else {
        // TODO: success handling
    }
}];
// for non-ARC projects:
// [library release];
Get image of whatever is in the UIImageView as a screenshot:
iOS 7 has a new method that allows you to draw a view hierarchy into the current graphics context. This can be used to get a UIImage very quickly.
This is a category method on UIView to get the view as a UIImage:
- (UIImage *)takeSnapShot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [UIScreen mainScreen].scale);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    // old style: [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
It is considerably faster than the existing renderInContext: method.
Reference: https://developer.apple.com/library/ios/qa/qa1817/_index.html
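Usage is then a single call on whichever view you want to capture (the view name here is hypothetical):
UIImage *snapshot = [self.containerView takeSnapShot];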
UPDATE FOR SWIFT: An extension that does the same:
extension UIView {
    func takeSnapshot() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, UIScreen.mainScreen().scale)
        self.drawViewHierarchyInRect(self.bounds, afterScreenUpdates: true)
        // old style: self.layer.renderInContext(UIGraphicsGetCurrentContext())
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}

How do I programmatically combine multiple UIImages into one long UIImage?

I'm using the code from "How can I programmatically put together some UIImages to have one big UIImage?" to combine multiple screenshots into one large vertical image. However, I'm having trouble calling this function to combine multiple images together using:
imageContainer = [UIImage imageByCombiningImage:imageContainer withImage:viewImage];
How do I call this UIImage+combine category to merge the library of images together?
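For reference, here is a sketch of what such a category method might look like (a vertical-stacking implementation written from the method name; this is an assumption, not a copy of the linked code):
@implementation UIImage (Combine)
+ (UIImage *)imageByCombiningImage:(UIImage *)firstImage withImage:(UIImage *)secondImage {
    // Stack the second image below the first
    CGSize newSize = CGSizeMake(MAX(firstImage.size.width, secondImage.size.width),
                                firstImage.size.height + secondImage.size.height);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [firstImage drawAtPoint:CGPointZero];
    [secondImage drawAtPoint:CGPointMake(0, firstImage.size.height)];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}
@end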
Here's my Code:
- (void)printScreen {
    // this is the method that starts the screenshot-taking process
    // this line is necessary to capture the completion of the scrollView animation
    _webView.scrollView.delegate = self;
    // save the first image
    UIGraphicsBeginImageContextWithOptions(_webView.bounds.size, NO, 0);
    [_webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
    // scroll to the next section
    [_webView.scrollView scrollRectToVisible:CGRectMake(0, _webView.frame.size.height, _webView.frame.size.width, _webView.frame.size.height) animated:YES];
}
- (void)scrollViewDidEndScrollingAnimation:(UIScrollView *)scrollView {
    // at this point the _webView has scrolled to the next section
    // I save the offset to make the code a little easier to read
    CGFloat offset = _webView.scrollView.contentOffset.y;
    UIGraphicsBeginImageContextWithOptions(_webView.bounds.size, NO, 0);
    // note that the line below will not work if you replace _webView.layer with _webView.scrollView.layer
    [_webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image1 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(image1, nil, nil, nil);
    // if we are not done yet, scroll to the next section
    if (offset < _webView.scrollView.contentSize.height) {
        [_webView.scrollView scrollRectToVisible:CGRectMake(0, _webView.frame.size.height + offset, _webView.frame.size.width, _webView.frame.size.height) animated:YES];
    }
    UIImage *image = [UIImage imageByCombiningImage:image1 withImage:image2];
    [[self theNewImageView] setImage:image];
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    NSLog(@"%@ printScreen Image", image);
    NSLog(@"%@ printScreen Image2", image2);
}
Edit:
At this point I've tried a lot of different things. I'm new to Objective-C development, so I've been digging deep into the Apple developer docs and other good resources like Stack Overflow and Lynda.
I'm seeing two different objects in the logs (printScreen Image2 and printScreen Image), but I just can't get the function to call the new category.
I'm able to write all the images in separate pieces to the photo album, and I can merge image1 or image2 into one image, but not both images or all images.
I figured it out! It was right in front of my face the entire time. I needed to add the call to the category within scrollViewDidEndScrollingAnimation: and then save the result with UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);. I had probably tried a couple of different versions of this solution earlier, but I renamed some variables early on to align with my project, which screwed everything up.
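A sketch of that fix, with the running image kept in a property (imageContainer is the name from the question; treat the rest as hypothetical):
- (void)scrollViewDidEndScrollingAnimation:(UIScrollView *)scrollView {
    UIGraphicsBeginImageContextWithOptions(_webView.bounds.size, NO, 0);
    [_webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *latest = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // append each new screenshot to the running container, then save
    self.imageContainer = self.imageContainer
        ? [UIImage imageByCombiningImage:self.imageContainer withImage:latest]
        : latest;
    UIImageWriteToSavedPhotosAlbum(self.imageContainer, nil, nil, nil);
}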

GPUImage output image is missing in screen capture

I am trying to capture a portion of the screen to post an image on social media.
I am using the following code to capture the screen.
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The above code works perfectly for capturing the screen.
Problem:
My UIView contains a GPUImageView with the filtered image. When I try to capture the screen using the above code, the portion covered by the GPUImageView does not contain the filtered image.
I am using GPUImageSwirlFilter with a static image (no camera). I have also tried
UIImage *outImage = [swirlFilter imageFromCurrentFramebuffer];
but it's not giving an image.
Note: The following is working code, which gives a perfect swirl effect on screen, but I want the same image in a UIImage object.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];
    swirlLevel = 4;
    [swirlFilter setAngle:(float)swirlLevel / 10];
    UIImage *inputImage = [UIImage imageNamed:gi.wordImage];
    GPUImagePicture *swirlSourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
    inputImage = nil;
    [swirlSourcePicture addTarget:swirlFilter];
    dispatch_async(dispatch_get_main_queue(), ^{
        [swirlFilter addTarget:imgSwirl];
        [swirlSourcePicture processImage];
        // This works perfectly and I have the filtered image in my imgSwirl.
        // But I want the filtered image in a UIImage, to use elsewhere,
        // like posting on social media.
        sharingImage = [swirlFilter imageFromCurrentFramebuffer]; // This also returns nothing.
    });
});
1) Am I doing something wrong with GPUImage's imageFromCurrentFramebuffer?
2) Why does the screen capture code not include the GPUImageView portion in the output image?
3) How do I get the filtered image into a UIImage?
First, -renderInContext: won't work with a GPUImageView, because a GPUImageView renders using OpenGL ES. -renderInContext: does not capture from CAEAGLLayers, which are used to back views presenting OpenGL ES content.
Second, you're probably getting a nil image in the latter code because you've forgotten to call -useNextFrameForImageCapture on your filter before triggering -processImage. Without that, your filter won't hang on to its backing framebuffer long enough to capture an image from it. This is due to a recent change in the way framebuffers are handled in memory (although this change does not seem to have been communicated very well).
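Applied to the question's code, the capture sequence would look roughly like this (a sketch; the variable names are taken from the question above):
[swirlSourcePicture addTarget:swirlFilter];
// Tell the filter to keep its framebuffer around for capture
[swirlFilter useNextFrameForImageCapture];
[swirlSourcePicture processImage];
UIImage *sharingImage = [swirlFilter imageFromCurrentFramebuffer];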

Change Back And Forth UIImages

Hi, I'm trying to get an image to change on a method call, and if the method is called again, the original image should come back.
- (void)change
{
    if ((player.image = [UIImage imageNamed:@"first.png"]))
    {
        player.image = [UIImage imageNamed:@"second.png"];
    }
    else
    {
        player.image = [UIImage imageNamed:@"first.png"];
    }
}
This works for changing the "first.png" image to "second.png", but when it's called again nothing happens.
Where am I going wrong?
As Michael Dautermann pointed out, the way you're comparing two UIImages is wrong. Couldn't you simply keep an NSString property that tells you which image you're showing?
- (void)change
{
    if ([self.displayedImageTitle isEqualToString:@"first.png"]) {
        player.image = [UIImage imageNamed:@"second.png"];
        self.displayedImageTitle = @"second.png";
    }
    else {
        player.image = [UIImage imageNamed:@"first.png"];
        self.displayedImageTitle = @"first.png";
    }
}
In your code above, you're making an invalid assumption that [UIImage imageNamed:@"first.png"] will always return the same UIImage object when you do a comparison. That isn't true: every time you call the imageNamed: API, you may get a brand-new UIImage object.
Now, since a UIImage object doesn't know anything about file names or image titles, you need to keep track of which image is being displayed and toggle on that.
I'd suggest creating a UIImage subclass with a filename or title property (which you would also need to set), or keeping your two UIImages as properties or ivars loaded into memory (that is, if they aren't giant memory-hogging images) and then doing your toggling.
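Another minimal option, sketched here with a hypothetical BOOL property, avoids string bookkeeping entirely:
// Assumes a declared property: @property (nonatomic) BOOL showingFirst;
- (void)change
{
    self.showingFirst = !self.showingFirst;
    player.image = [UIImage imageNamed:(self.showingFirst ? @"first.png" : @"second.png")];
}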

UIImage becomes NSNumber

I am trying to save a UIImage to the app delegate. When I load that UIViewController for the first time, it returns a UIImage (see first screenshot).
But when I redirect back to that controller, the UIImage becomes an NSNumber (see second screenshot).
Help me! I've been stuck on this for almost 2 days.
Before I leave the controller, I set the image on the app delegate as follows:
UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, YES, 0.0);
[self.imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
del.getImage = resultingImage;
This problem is most easily solved using the debugger. First, as the comments said, change the property name to, say, myImage. Then, in your UIViewController class, provide your own setter:
- (void)setMyImage:(UIImage *)myImage
{
    NSLog(@"Image class: %@", NSStringFromClass([myImage class]));
    if ([myImage isKindOfClass:[NSNumber class]]) {
        NSLog(@"YIKES: a number!");
    }
    _myImage = myImage; // assumes ARC
}
Put a breakpoint on the second NSLog, or both. When you hit the second breakpoint, look at the call stack and you will find out who is setting it.
