I am using AV Foundation and have created a main layer and a sublayer. The main layer displays a live preview of what the camera sees before the user takes a photo. After the user takes a photo, I want to set the value of the sublayer's contents property to the captured photo. Everything works perfectly, except for setting the contents of the sublayer.
I know the sublayer itself is working, because when I give it a blue background color and take a photo in the app, the sublayer successfully turns blue.
Here is my code where I am trying to set the sublayer to be the captured image:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
CALayer *subLayer = [CALayer layer];
subLayer.contents = (id)[UIImage imageWithData:imageData];
subLayer.frame = _previewLayer.frame;
[_previewLayer addSublayer:subLayer];
I have tried several different ways of setting the sublayer's contents property, like these, but none of them work:
subLayer.contents = (id) [UIImage imageWithData:imageData];
subLayer.contents = [UIImage imageWithData:imageData];
subLayer.contents = image;
Also, I know the sublayer is set up properly, because if I add this statement it turns the sublayer completely blue when I take a photo:
subLayer.backgroundColor = [UIColor blueColor].CGColor;
Any ideas how I can update the sublayer and make it display the photo that is being captured?
On a whim I tried appending .CGImage to the imageWithData: method call, and it is now working. I sure wish the Xcode documentation listed that as a requirement.
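For reference, the working assignment looks like this; a layer's contents property expects a CGImage, not a UIImage:
subLayer.contents = (id)[UIImage imageWithData:imageData].CGImage; // under ARC, cast with (__bridge id) instead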
I want to show download progress in the form of an image getting filled in. A simple example is installing an app from the App Store: the icon fills up as the download completes. I know the total bytes to be downloaded, and I want to show progress by filling in the icon image according to the bytes completed. Has anybody done that kind of progress indicator? Please help.
You need an image sequence running from an empty image through partially filled images to a fully filled one. Animating that sequence makes it look like the image is filling up.
Sample code would look like this:
UIImageView *animationView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)]; // specify your size here
animationView.contentMode = UIViewContentModeCenter;
animationView.animationImages = [NSArray arrayWithObjects:
                                 [UIImage imageNamed:@"1.png"],
                                 [UIImage imageNamed:@"2.png"],
                                 [UIImage imageNamed:@"3.png"],
                                 [UIImage imageNamed:@"4.png"],
                                 [UIImage imageNamed:@"5.png"],
                                 [UIImage imageNamed:@"6.png"],
                                 [UIImage imageNamed:@"7.png"],
                                 [UIImage imageNamed:@"8.png"],
                                 [UIImage imageNamed:@"9.png"],
                                 [UIImage imageNamed:@"10.png"], nil];
animationView.animationDuration = 1.5f;
animationView.animationRepeatCount = 0;
[animationView startAnimating];
[self.view addSubview:animationView];
If you're not downloading the image, then why not use a CAShapeLayer as a mask over your image? In other words, your image is there the whole time, but more of it becomes visible as your mask changes.
If your mask is a single slice of a circle, you could then use the NSURLSession/NSURLConnection delegate to recalculate the mask's bezier path around the centre point of the circle as bytes arrive. Apply the new mask before the delegate returns, and you end up with a circular progress view where more of your image becomes visible with each call to the delegate.
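A minimal sketch of that idea, assuming a UIImageView property named imageView and a progress value between 0 and 1 (both names are hypothetical):
// Create the mask once, e.g. in viewDidLoad
CAShapeLayer *maskLayer = [CAShapeLayer layer];
self.imageView.layer.mask = maskLayer;

// Call this from your NSURLSession/NSURLConnection delegate as bytes arrive
- (void)updateProgress:(CGFloat)progress
{
    CGPoint center = CGPointMake(CGRectGetMidX(self.imageView.bounds),
                                 CGRectGetMidY(self.imageView.bounds));
    CGFloat radius = hypot(center.x, center.y); // long enough to reach the corners
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:center];
    [path addArcWithCenter:center
                    radius:radius
                startAngle:-M_PI_2
                  endAngle:-M_PI_2 + progress * 2.0 * M_PI
                 clockwise:YES];
    [path closePath];
    ((CAShapeLayer *)self.imageView.layer.mask).path = path.CGPath;
}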
I am trying to capture a portion of the screen to post the image on social media. I am using the following code to capture the screen:
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The above code works fine for capturing the screen.
Problem:
My UIView contains a GPUImageView with a filtered image. When I try to capture the screen using the above code, the portion covered by the GPUImageView does not contain the filtered image.
I am using GPUImageSwirlFilter with a static image (no camera). I have also tried
UIImage *outImage = [swirlFilter imageFromCurrentFramebuffer];
but it does not return an image.
Note: the following code works and gives perfect swirl output on screen, but I want the same image in a UIImage object.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];
    swirlLevel = 4;
    [swirlFilter setAngle:(float)swirlLevel / 10];
    UIImage *inputImage = [UIImage imageNamed:gi.wordImage];
    GPUImagePicture *swirlSourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
    inputImage = nil;
    [swirlSourcePicture addTarget:swirlFilter];
    dispatch_async(dispatch_get_main_queue(), ^{
        [swirlFilter addTarget:imgSwirl];
        [swirlSourcePicture processImage];
        // This works perfectly, and I have the filtered image in my imgSwirl.
        // But I want the filtered image in a UIImage to use elsewhere, like
        // posting on social media.
        sharingImage = [swirlFilter imageFromCurrentFramebuffer]; // This also returns nothing.
    });
});
1) Am I doing something wrong with GPUImage's imageFromCurrentFramebuffer?
2) Why does the screen capture code not include the GPUImageView portion in the output image?
3) How do I get the filtered image into a UIImage?
First, -renderInContext: won't work with a GPUImageView, because a GPUImageView renders using OpenGL ES. -renderInContext: does not capture from CAEAGLLayers, which are used to back views presenting OpenGL ES content.
Second, you're probably getting a nil image in the latter code because you've forgotten to set -useNextFrameForImageCapture on your filter before triggering -processImage. Without that, your filter won't hang on to its backing framebuffer long enough to capture an image from it. This is due to a recent change in the way that framebuffers are handled in memory (although this change did not seem to get communicated very well).
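Applied to the snippet in the question, the capture sequence would look something like this:
[swirlFilter addTarget:imgSwirl];
[swirlFilter useNextFrameForImageCapture]; // keep the framebuffer around so it can be read back
[swirlSourcePicture processImage];
UIImage *sharingImage = [swirlFilter imageFromCurrentFramebuffer];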
The user picks an image with the picker, and the following code manipulates the image:
UIImage *image = info[UIImagePickerControllerOriginalImage];
_imageView.image = image;
_imageView.contentMode = UIViewContentModeScaleAspectFill;
_imageView.layer.cornerRadius = 10;
_imageView.clipsToBounds = YES;
_imageView.layer.shouldRasterize = YES;
_imageView.layer.rasterizationScale = [[UIScreen mainScreen] scale];
When I send the image to the server, it sends the original image and not the new one. I am guessing this is because UIImageView does not actually change the image, but just makes on-the-fly visual changes to the original image.
Can someone lead me in the right direction as to what I need to learn in order to make permanent changes to an image (or create a new image), or is there a simple way to make the changes I've made permanent?
Thanks
I suppose you're talking about getting a new image as it appears on screen (with the corner radius applied, etc.), so you might want to try something like this:
UIGraphicsBeginImageContextWithOptions(self.imageView.bounds.size, NO, 0.0);
[self.imageView drawViewHierarchyInRect:self.imageView.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Can anyone help me understand how to apply a background image to a UIView?
I have created a background image which is a blurred version of a background, and I would like to apply it as the background of a UIView in the foreground, which would ideally mask the background image.
I have the following code so far:
_blurImage = [source stackBlur:50];
[_HPBlurView.backgroundColor = [UIColor colorWithPatternImage:[_blurImage]]];
I would like to apply the image object (_blurImage) as the background image of _HPBlurView, but I'm struggling to get it working!
At first glance, you are using too many brackets. Here is a working version of your code:
_blurImage = [source stackBlur:50];
_HPBlurView.backgroundColor = [UIColor colorWithPatternImage:_blurImage];
I can't see what stackBlur:50 returns, so start from the beginning. colorWithPatternImage: takes a UIImage as a parameter, so start by adding a picture, any picture, to your application. Let's imagine the image is called image.png. This is one way to do it:
UIImage *image = [UIImage imageNamed:@"image.png"];
_HPBlurView.backgroundColor = [UIColor colorWithPatternImage:image];
This should help get you going.
Create an image and add it to the background:
UIImage *image = [UIImage imageNamed:@"yourImage"];
self.view.backgroundColor = [UIColor colorWithPatternImage:image];
That's it.
To make sure everything resizes properly, no matter the rotation, device size, or iOS version, I just set up a UIImageView:
// Create the UIImageView
UIImageView *backgroundImageView = [[UIImageView alloc] initWithFrame:self.view.frame]; // or, in your case, use your _blurView
backgroundImageView.image = [UIImage imageNamed:@"image.png"];
// Set it as a subview
[self.view addSubview:backgroundImageView]; // in your case, again, use _blurView
// Just in case
[self.view sendSubviewToBack:backgroundImageView];
I have a UIImageView that can be moved/scaled (self.imageForEditing). On top of this image view I have an overlay with a hole cut out, which is static and can't be moved. I need to save just the part of the underlying image that is visible through the hole at the time a button is pressed. My current attempt:
- (IBAction)saveImage
{
    UIImage *image = self.imageForEditing.image;
    CGImageRef originalMask = [UIImage imageNamed:@"picOverlay"].CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(originalMask),
                                        CGImageGetHeight(originalMask),
                                        CGImageGetBitsPerComponent(originalMask),
                                        CGImageGetBitsPerPixel(originalMask),
                                        CGImageGetBytesPerRow(originalMask),
                                        CGImageGetDataProvider(originalMask), nil, YES);
    CGImageRef maskedImageRef = CGImageCreateWithMask(image.CGImage, mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    UIImageView *test = [[UIImageView alloc] initWithImage:maskedImage];
    [self.view addSubview:test];
}
As a test I'm just trying to add the newly created image to the top left of the screen. Theoretically it should be a small round image (the part that was visible through the overlay), but I'm just getting the whole image again. What am I doing wrong? And how can I account for the fact that self.imageForEditing can be moved around?
CGImageCreateWithMask returns an image of the same size as the original.
That is why you get the original image (I assume) with the mask applied.
You can apply the mask and then remove the invisible border. Use the advice from this question: iOS: How to trim an image to the useful parts (remove transparent border)
Find the bounds of the non-transparent part of the image and redraw it into a new image.
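If you can compute the hole's rect in the image's coordinate space (taking any movement/scaling of self.imageForEditing into account), a simpler route is to crop the masked image directly. A minimal sketch, assuming a holeRect you calculate yourself (the name is hypothetical):
- (UIImage *)cropImage:(UIImage *)maskedImage toRect:(CGRect)holeRect
{
    // CGImageCreateWithImageInRect works in pixels, so convert from points
    CGFloat scale = maskedImage.scale;
    CGRect pixelRect = CGRectMake(holeRect.origin.x * scale,
                                  holeRect.origin.y * scale,
                                  holeRect.size.width * scale,
                                  holeRect.size.height * scale);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(maskedImage.CGImage, pixelRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:scale
                                     orientation:maskedImage.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}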