UIImage *sticky = [UIImage imageNamed:@"Radio.png"];
[_imgViewSticky setImage:sticky];
CIImage *outputImage = [self.originalImage CIImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImg = [context createCGImage:outputImage fromRect:[outputImage extent]];
float widthRatio = [outputImage extent].size.width / 320;
float heightRatio = [outputImage extent].size.height / 480;
CGPoint cgStickyPoint = CGPointMake(_imgViewSticky.frame.origin.x * widthRatio, _imgViewSticky.frame.origin.y * heightRatio);
cgImg = [self setStickyForCGImage:cgImg withPosition:cgStickyPoint];
The last line returns a CGImageRef object, and I'm assigning that value to the final image like this:
UIImage *finalImage = [UIImage imageWithCGImage:cgImg];
Yet I'm not getting the image. Any ideas why? Any help is much appreciated.
I notice that your CIContext isn't receiving any drawing, which could be why you're not getting an image. I don't have a clear picture of what you want, but this code will superimpose one UIImage on top of another UIImage:
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, 0.0); // Create an image context
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)]; // Draw the first UIImage
[stickerImage drawInRect:stickerRect]; // Draw the second UIImage wherever you want on top of the first image
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext(); // Get the final UIImage result
UIGraphicsEndImageContext(); // Clean up the context
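Applied to the names from your question, a rough, untested sketch might look like this (it assumes self.originalImage has scale 1.0, so the extent-based widthRatio/heightRatio line up with the drawing context):
UIGraphicsBeginImageContextWithOptions(self.originalImage.size, NO, 0.0); // Context the size of the photo
[self.originalImage drawInRect:CGRectMake(0, 0, self.originalImage.size.width, self.originalImage.size.height)]; // Draw the photo
CGRect stickerRect = CGRectMake(cgStickyPoint.x, cgStickyPoint.y, _imgViewSticky.frame.size.width * widthRatio, _imgViewSticky.frame.size.height * heightRatio); // Scaled sticker frame
[sticky drawInRect:stickerRect]; // Draw the sticker on top
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();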
Hello, I'd like to create the following black-and-white Photoshop effect on a UIImage:
https://drive.google.com/file/d/0B5dHxpdDwpPec3dPTWdLVnNhZFk/view?usp=sharing
In that effect you can change the brightness of each of the six colors (reds, yellows, greens, cyans, blues, magentas).
I used the following to make the image black and white, but it doesn't let me adjust the individual colors:
self.imageView.image = chosenImage;
CIImage *beginImage = [CIImage imageWithCGImage:chosenImage.CGImage];
CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, @"inputEV", [NSNumber numberWithFloat:0.7], nil].outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage];
self.imageView.image = newImage;
Thank you for your time.
I think you can accomplish that effect with the following function:
- (UIImage *)grayScaleImageWith:(UIImage *)image blackPoint:(const CGFloat *)blackPoint whitePoint:(const CGFloat *)whitePoint andGamma:(CGFloat)gamma {
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateCalibratedGray(whitePoint, blackPoint, gamma);
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
Then call it, filling the black and white values with the values selected in the UI:
CGFloat black[3] = { 0, 0, 0 };       // replace with values from the interface
CGFloat white[3] = { 100, 100, 100 }; // replace with values from the interface
UIImage *result = [self grayScaleImageWith:image blackPoint:black whitePoint:white andGamma:1.8f];
I have not tested this code yet but I hope at least it points you in the right direction.
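If you also need per-color control closer to Photoshop's black-and-white sliders, a hedged Core Image sketch (not part of the code above, and only an approximation covering the red, green and blue contributions) is to weight the channels with CIColorMatrix before rendering; rWeight, gWeight and bWeight are hypothetical slider values that should sum to roughly 1.0:
CIImage *input = [CIImage imageWithCGImage:chosenImage.CGImage];
CIFilter *mixer = [CIFilter filterWithName:@"CIColorMatrix"];
[mixer setValue:input forKey:kCIInputImageKey];
// Write the same weighted sum of R, G and B into every output channel to get a gray image
[mixer setValue:[CIVector vectorWithX:rWeight Y:gWeight Z:bWeight W:0] forKey:@"inputRVector"];
[mixer setValue:[CIVector vectorWithX:rWeight Y:gWeight Z:bWeight W:0] forKey:@"inputGVector"];
[mixer setValue:[CIVector vectorWithX:rWeight Y:gWeight Z:bWeight W:0] forKey:@"inputBVector"];
[mixer setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:1] forKey:@"inputAVector"]; // keep alpha as-is
CIImage *gray = mixer.outputImage; // render with a CIContext as in your original code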
I am new to mobile programming. I am working on H.264 video rendering in an iOS application using the VideoToolbox framework. It has a feature to take a snapshot while rendering the video. Whenever I take a snapshot, I get a black screen only.
I tried
1. renderInContext,
2. drawViewHierarchyInRect,
3. the snapshotViewAfterScreenUpdates method
to capture the rendered video, but each returns a black screen only.
// snapshot code
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
mImageView.image = snapshotImage;
UIImageWriteToSavedPhotosAlbum(snapshotImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
Check this out; the following chunk of code works for me to take a snapshot of the screen:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(APP_DELEGATE.window.bounds.size, NO, [[UIScreen mainScreen] scale]);
else
UIGraphicsBeginImageContext(APP_DELEGATE.window.bounds.size);
[APP_DELEGATE.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * data = UIImagePNGRepresentation(image);
I hope it helps you; let me know if it does.
I've not worked with video yet, but a simple snapshot of a UIView with subviews on it works fine for me:
+ (UIImage *)makeSnapShot:(UIView *)view image:(UIImageView *)imageView
{
    CGFloat offset_x = /*your_value*/;
    CGFloat offset_y = /*your_value*/;
    // Render the whole view hierarchy into an image context
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Crop the snapshot down to the region occupied by the image view
    CGRect rect = CGRectMake(offset_x, offset_y, imageView.bounds.size.width, imageView.bounds.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
    image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
Not sure if this is what you're looking for, but if you need to get a snapshot of the VTDecompressionSession, you can send the CVImageBuffer that you get from the decodeFrame callback into this method to get a UIImage. You can also add your CIContext to the parameters list instead of using the temporaryContext.
+ (UIImage *) UIImageFromCVImageBufferRef:(CVImageBufferRef)imageBuf
{
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuf];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
createCGImage:ciImage
fromRect:CGRectMake(0, 0,
CVPixelBufferGetWidth(imageBuf),
CVPixelBufferGetHeight(imageBuf))];
UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
CGImageRelease(videoImage);
return image;
}
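For context, a hedged sketch of wiring that helper into a VTDecompressionSession output callback could look like the following; the callback name, YourDecoderClass and the notification are illustrative placeholders, not part of the original answer:
// Assumes VideoToolbox, CoreMedia and CoreImage are linked and imported.
static void didDecompressFrame(void *decompressionOutputRefCon,
                               void *sourceFrameRefCon,
                               OSStatus status,
                               VTDecodeInfoFlags infoFlags,
                               CVImageBufferRef imageBuffer,
                               CMTime presentationTimeStamp,
                               CMTime presentationDuration)
{
    if (status != noErr || imageBuffer == NULL) return;
    // YourDecoderClass stands for whatever class declares the method above.
    UIImage *snapshot = [YourDecoderClass UIImageFromCVImageBufferRef:imageBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hand the snapshot to the UI (UIKit work must happen on the main thread).
        [[NSNotificationCenter defaultCenter] postNotificationName:@"SnapshotReady" object:snapshot];
    });
}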
func takeScreenshot(_ shouldSave: Bool = true) {
var screenshotImage :UIImage?
let layer = UIApplication.shared.keyWindow!.layer
let scale = UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
self.view.drawHierarchy(in: self.view.bounds, afterScreenUpdates: true)
screenshotImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
if let image = screenshotImage, shouldSave {
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
}
So I have a UIImage which I want to crop. I looked around and found the imageByCroppingToRect method on CIImage, so I converted the data to a CIImage instead of a UIImage, cropped it using that method, converted the resulting CIImage back to a UIImage, and then displayed it in a UIImageView.
My code is
NSData *data = [[NSData alloc] initWithData:[def objectForKey:@"imageData"]];
//UIImage *normalImage = [[UIImage alloc]initWithData:data];
CIImage *originalImage = [CIImage imageWithData:data];
[originalImage imageByCroppingToRect:CGRectMake(10, 72, 300, 300)];
self.imageView.image = [UIImage imageWithCIImage:originalImage];
The problem is the image gets rotated by 90 degrees and I am not sure if it is being cropped. This image is captured using the device's camera. I use AVFoundation to access the camera. My session preset is AVCaptureSessionPresetPhoto. I think this is why I get the zooming.
CGRect rect = CGRectMake(10, 72, 300, 300);
CGImageRef imref = CGImageCreateWithImageInRect([yourOriginalImage CGImage], rect);
UIImage *newSubImage = [UIImage imageWithCGImage:imref];
Try this; it may help you.
EDIT:
First, fix your image orientation:
refs: https://github.com/j3r3miah/mapmatic-ios/blob/master/Mapmatic/UIImage+FixOrientation.m
then use the code above to crop the image to the specified rect.
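If you'd rather not pull in that whole category, a minimal hedged sketch of the same idea (redrawing the image so its orientation metadata gets baked into the pixels) is:
// Drawing respects imageOrientation, so the redrawn copy always ends up "up".
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}
Run your UIImage through that first, then crop it with CGImageCreateWithImageInRect as shown above.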
Not really an answer to your question, but an answer to your problem
https://github.com/mbcharbonneau/UIImage-Categories
especially this file : https://github.com/mbcharbonneau/UIImage-Categories/blob/master/UIImage%2BResize.m
- (UIImage *)croppedImage:(CGRect)bounds {
CGFloat scale = MAX(self.scale, 1.0f);
CGRect scaledBounds = CGRectMake(bounds.origin.x * scale, bounds.origin.y * scale, bounds.size.width * scale, bounds.size.height * scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], scaledBounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:UIImageOrientationUp];
CGImageRelease(imageRef);
return croppedImage;
}
You will find everything you need to crop your image there.
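For example, a hedged usage sketch with the rect from your question (originalUIImage here stands for whatever UIImage you start from, and the category file above must be added to your project):
UIImage *cropped = [originalUIImage croppedImage:CGRectMake(10, 72, 300, 300)];
self.imageView.image = cropped;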
Is there an easy way of selecting a region of a UIImage instance and then creating a new image out of it? I tried using a pan gesture recognizer to get the coordinates that the user selected, but with that approach there is no visible feedback, and I also have to transform the coordinates. Is there maybe a nice plugin or a best practice for doing this?
try this:
UIGraphicsBeginImageContext(self.view.bounds.size);
// retrieve the current graphics context
CGContextRef context = UIGraphicsGetCurrentContext();
// render view into context
[self.view.layer renderInContext:context];
// create image from context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
image = [self cropImage:image];
UIGraphicsEndImageContext();
- (UIImage *)cropImage:(UIImage *)oldImage {
CGSize imageSize = oldImage.size;
UIGraphicsBeginImageContextWithOptions(CGSizeMake( imageSize.width,imageSize.height - 150),NO,0.); //set your height and width
[oldImage drawAtPoint:CGPointMake( 0, -80) blendMode:kCGBlendModeCopy alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return croppedImage;
}
Like this:
CGImageRef imageRef = CGImageCreateWithImageInRect([bigImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
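Since the question also mentions having to transform the coordinates: here is a hedged sketch of mapping a selection rect from the image view's coordinate space into the image's pixel space, assuming the image view uses UIViewContentModeScaleAspectFit (the method name is my own):
- (CGRect)imageRectForViewRect:(CGRect)viewRect inImageView:(UIImageView *)imageView {
    UIImage *image = imageView.image;
    // Points in the view per point of the image under aspect-fit scaling
    CGFloat scale = MIN(imageView.bounds.size.width / image.size.width,
                        imageView.bounds.size.height / image.size.height);
    CGSize displayedSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
    // Origin of the letterboxed image inside the view
    CGPoint offset = CGPointMake((imageView.bounds.size.width - displayedSize.width) / 2.0,
                                 (imageView.bounds.size.height - displayedSize.height) / 2.0);
    CGFloat pixelScale = image.scale; // image points -> image pixels
    return CGRectMake((viewRect.origin.x - offset.x) / scale * pixelScale,
                      (viewRect.origin.y - offset.y) / scale * pixelScale,
                      viewRect.size.width / scale * pixelScale,
                      viewRect.size.height / scale * pixelScale);
}
The resulting rect can then be passed straight to CGImageCreateWithImageInRect as in the snippet above.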
I want to change the hue of an image using a slider. What I did is:
float slideValue = sldHueChange.value;
beginImage = [CIImage imageWithCGImage:imgBorder.image.CGImage];
context = [CIContext contextWithOptions:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:kCIContextUseSoftwareRenderer]];
filter = [CIFilter filterWithName:@"CIHueAdjust"];
[filter setValue:beginImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:slideValue] forKey:@"inputAngle"];
CIImage *outputImage = [filter outputImage];
CGImageRef cgimg =[context createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *newImg = [UIImage imageWithCGImage:cgimg];
[imgBorder setImage:newImg];
CGImageRelease(cgimg);
It's working, but it is not smooth. I want to change the hue of the image very smoothly.
If you have an idea for making a smooth hue changer, please share it. I really need it. Thanks in advance.
I would suggest using Brad Larson's excellent GPUImage framework, which has a lot of filters. Alternatively, you can let Core Image render on the GPU by setting the kCIContextUseSoftwareRenderer boolean to NO in the options dictionary when you create the CIContext.
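As a rough sketch of that second suggestion (the other key point is to create the context once, for example in viewDidLoad, and reuse it for every slider change instead of rebuilding it each time):
// GPU-backed CIContext; reuse this single instance for every hue update.
context = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @NO }];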
Now what I'm doing is:
CGRect rect = CGRectMake(0, 0, selectedBorderForChangingHue.size.width, selectedBorderForChangingHue.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context1 = UIGraphicsGetCurrentContext();
CGContextClipToMask(context1, rect, selectedBorderForChangingHue.CGImage);
CGContextSetFillColorWithColor(context1, [[UIColor colorWithHue:sldHueChange.value/255.0f saturation:1.0f brightness:1.0f alpha:1.0f] CGColor]);
CGContextFillRect(context1, rect);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *flippedImage = [UIImage imageWithCGImage:img.CGImage scale:1.0 orientation: UIImageOrientationDownMirrored];
imgBorder.image = flippedImage;
It is working fine for me, and it is very smooth.