I want to do selective masking between two images in iOS, similar to the mask function in Blender. There are two images, 1 and 2 (resized to the same dimensions). Initially only image 1 is visible, but wherever the user touches image 1, it becomes transparent and image 2 becomes visible in those regions.
I created a mask-like image using Core Graphics in touchesMoved:. It is basically a full black image with white portions wherever I touched; the alpha is 1.0 throughout. I could use this image as a mask and do the necessary compositing with my own image-processing methods, iterating over each pixel, checking it, and setting values accordingly. But that method would be called inside touchesMoved:, so it could slow the entire process down (especially for 8 MP camera images).
I want to know how this can be achieved with Quartz Core or Core Graphics efficiently enough to work on big images.
The code I have so far:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    mouseSwiped = YES;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:staticBG];

    UIGraphicsBeginImageContext(staticBG.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [maskView.image drawInRect:CGRectMake(0, 0, maskView.frame.size.width, maskView.frame.size.height)];
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, 20.0);
    // Stroke opaque white into the mask (with an alpha of 0.0 the stroke would be invisible).
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
    CGContextStrokePath(context);
    maskView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    lastPoint = currentPoint;
    mouseMoved++;
    if (mouseMoved == 10)
        mouseMoved = 0;

    staticBG.image = [self maskImage:staticBG.image withMask:maskView.image];
    //maskView.hidden = NO;
}
- (UIImage *)maskImage:(UIImage *)baseImage withMask:(UIImage *)maskImage
{
    CGImageRef imgRef = [baseImage CGImage];
    CGImageRef maskRef = [maskImage CGImage];
    CGImageRef actualMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                              CGImageGetHeight(maskRef),
                                              CGImageGetBitsPerComponent(maskRef),
                                              CGImageGetBitsPerPixel(maskRef),
                                              CGImageGetBytesPerRow(maskRef),
                                              CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask(imgRef, actualMask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    // The Create functions return owned references; release them to avoid leaking.
    CGImageRelease(actualMask);
    CGImageRelease(masked);
    return result;
}
The maskImage: method is not working, as it creates the mask depending on alpha values.
I went through this link: Creating Mask from Path, but I cannot understand the answer.
First of all, I will mention a few things that I hope you know already:
Masking works by taking the alpha values only.
Creating an image with Core Graphics on every touchesMoved: is a pretty huge overhead; you should try to avoid it or find some other way of doing things.
Try to use a static mask image.
I would like to propose that you look at this from an inverted point of view: instead of trying to make a hole in the top image to see the bottom one, why not place the bottom image on top and mask it, so that it shows up at the user's touch points, covering the top view at just those parts?
I've done an example for you to get an idea over here > http://goo.gl/Zlu31T
Good luck, and do post back if anything is not clear. I do believe there are much better and more optimised ways of doing this, though.
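To make the inverted approach concrete, here is a minimal sketch of my own (not the code behind the link above; topImageView, touchPath, and touchMask are assumed names, and the 20-point brush width mirrors the question's stroke width). Image 2 sits on top of image 1 and is masked by a CAShapeLayer, so it shows only along the path the user has dragged; no bitmap is redrawn per touch, since Core Animation composites the mask.

// Setup, e.g. in viewDidLoad. touchPath (UIBezierPath *), touchMask (CAShapeLayer *),
// and topImageView (UIImageView * holding image 2) are assumed ivars.
- (void)setupMask
{
    touchPath = [UIBezierPath bezierPath];
    touchMask = [CAShapeLayer layer];
    touchMask.frame = topImageView.bounds;
    touchMask.strokeColor = [UIColor whiteColor].CGColor; // any opaque color works; only alpha matters
    touchMask.fillColor = [UIColor clearColor].CGColor;
    touchMask.lineWidth = 20.0;
    touchMask.lineCap = kCALineCapRound;
    touchMask.lineJoin = kCALineJoinRound;
    topImageView.layer.mask = touchMask; // image 2 is visible only along the stroked path
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [touchPath moveToPoint:[[touches anyObject] locationInView:topImageView]];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    [touchPath addLineToPoint:[[touches anyObject] locationInView:topImageView]];
    touchMask.path = touchPath.CGPath; // no pixel iteration; the GPU applies the mask
}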
Since you're doing this in real time, I suggest you fake it while editing; if you need to output the image later, you can mask it for real then, since that might take some time (not much, just not fast enough to run in real time). By faking I mean: put image 1 as the background and, on top of it, the hidden image 2. Once the user touches a point, set the frame of the image 2 UIImageView to

CGRect rect = CGRectMake(touch.x - desiredRect.size.width / 2,
                         touch.y - desiredRect.size.height / 2,
                         desiredRect.size.width,
                         desiredRect.size.height);

and make it visible.
desiredRect is the portion of image 2 that you want to show. Upon lifting the finger, just hide the image 2 UIImageView so that image 1 is fully visible again. This is the fastest way I can think of right now, if your goal isn't to output the image at that very moment.
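A minimal sketch of this fake approach (my own; imageView2 is the assumed name of the hidden top image view, and the fixed 100x100 reveal size stands in for desiredRect):

// imageView2 starts hidden above imageView1; touching reveals a window of it.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint touchPoint = [[touches anyObject] locationInView:self.view];
    CGSize desiredSize = CGSizeMake(100, 100); // the portion of image 2 to show
    imageView2.frame = CGRectMake(touchPoint.x - desiredSize.width / 2,
                                  touchPoint.y - desiredSize.height / 2,
                                  desiredSize.width,
                                  desiredSize.height);
    // To show the matching region of image 2 under the finger, also offset its
    // content, e.g. via imageView2.layer.contentsRect (unit coordinates).
    imageView2.hidden = NO;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    imageView2.hidden = YES; // image 1 fully visible again
}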
Use this code; it will help for compositing the two UIImages (note that it blends them with an alpha value rather than masking):

CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext(newSize);

// Use existing opacity as is
[backGroundImageView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// Apply supplied opacity
[self.drawImage.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
                       blendMode:kCGBlendModeNormal
                           alpha:0.8];

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *imageData = UIImagePNGRepresentation(newImage);
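One caveat: plain UIGraphicsBeginImageContext renders at 1x, so the result is blurry on Retina screens. If that matters, the options variant is a drop-in replacement for the first line (passing 0.0 for the scale means "use the device's screen scale"):

// Same compositing as above, rendered at the screen's scale instead of 1x.
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);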
Related
I would like to rotate a UIImage from file (a JPEG on the file system) by a computed number of radians, around a given point in the image, while keeping the original size of the image (with transparent gaps where image data no longer exists, and cropping image data that has moved outside the original frame). I would then like to store and display the resulting UIImage. I haven't found any resources for this task; any help would be much appreciated!
The closest thing I have found so far (with some slight modifications) is as follows:
- (UIImage *)rotateImage:(UIImage *)image aroundPoint:(CGPoint)point radians:(float)radians newSize:(CGRect)newSize
{
    CGRect imageRect = { point, image.size };
    UIGraphicsBeginImageContext(image.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, imageRect.origin.x, imageRect.origin.y);
    CGContextRotateCTM(context, radians);
    CGContextTranslateCTM(context, -imageRect.origin.x, -imageRect.origin.y);
    CGContextDrawImage(context, (CGRect){ CGPointZero, imageRect.size }, [image CGImage]);
    UIImage *returnImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return returnImg;
}
Unfortunately, this rotates the image incorrectly (in my tests, somewhere in the neighborhood of 180 degrees more than desired).
To rotate that UIImage, let's say by 90 degrees, you can easily do:

imageView.transform = CGAffineTransformMakeRotation(M_PI / 2);

To rotate it multiple times you can use the UIView animateKeyframesWithDuration:... method, and to anchor the rotation to some point you can use:

[imageView.layer setAnchorPoint:CGPointMake(....)];
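A minimal sketch of my own putting those two pieces together (rotateView:byRadians:aroundPoint: is an assumed helper, not an existing API). The key detail is that anchorPoint is in unit coordinates, and position is defined as the superlayer location of the anchor point, so moving the anchor shifts the view on screen unless you compensate:

// Rotate `view` by `radians` around `point`, given in the view's own coordinates.
- (void)rotateView:(UIView *)view byRadians:(CGFloat)radians aroundPoint:(CGPoint)point
{
    CGSize size = view.bounds.size;
    CGPoint oldAnchor = view.layer.anchorPoint;
    CGPoint newAnchor = CGPointMake(point.x / size.width, point.y / size.height);

    // position tracks the anchor point, so shift it to keep the view visually still.
    CGPoint delta = CGPointApplyAffineTransform(
        CGPointMake((newAnchor.x - oldAnchor.x) * size.width,
                    (newAnchor.y - oldAnchor.y) * size.height),
        view.transform);
    view.layer.anchorPoint = newAnchor;
    view.layer.position = CGPointMake(view.layer.position.x + delta.x,
                                      view.layer.position.y + delta.y);

    view.transform = CGAffineTransformRotate(view.transform, radians);
}

As for the original rotateImage: coming out roughly 180 degrees off: CGContextDrawImage uses a bottom-left origin inside a UIKit image context, so the image lands vertically flipped; drawing with UIImage's drawInRect: instead avoids the flip.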
When the user hovers over the image, tapping on a certain body part presents a magnified image of that area. I wanted to know of any third-party frameworks that provide this kind of feature, or code snippets (such as which gesture recognizers to use) that can help me implement it.
Question 2: I also have to add a dynamic, clickable label at the point where the touch happens and ends (like the wrist label in the image), so that I can take the user from this screen to a separate view when the label is clicked. How can I make this possible?
In your drawRect: method, mask off a circle (using a monochrome bitmap containing the 'mask' of your magnifying glass) and draw your subject view in there with a 2x scale transform. Then draw a magnifying glass image over that and you're done.

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect bounds = self.bounds;
    CGImageRef mask = [UIImage imageNamed:@"loupeMask"].CGImage;
    UIImage *glass = [UIImage imageNamed:@"loupeImage"];

    CGContextSaveGState(context);
    CGContextClipToMask(context, bounds, mask);
    CGContextFillRect(context, bounds);
    CGContextScaleCTM(context, 2.0, 2.0);
    // draw your subject view here
    CGContextRestoreGState(context);

    [glass drawInRect:bounds];
}
Check this link for a complete example.
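On the gesture-recognizer part of the question, a minimal sketch of my own (loupeView is an assumed view whose drawRect: works like the one above): a UILongPressGestureRecognizer delivers continuous location updates, which fits a press-and-drag magnifier.

// In viewDidLoad: show the loupe while a finger is held down on the image.
UILongPressGestureRecognizer *press =
    [[UILongPressGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(handlePress:)];
press.minimumPressDuration = 0.1;
[self.imageView addGestureRecognizer:press];

- (void)handlePress:(UILongPressGestureRecognizer *)press
{
    CGPoint p = [press locationInView:self.view];
    if (press.state == UIGestureRecognizerStateBegan ||
        press.state == UIGestureRecognizerStateChanged) {
        self.loupeView.center = CGPointMake(p.x, p.y - 60); // hover above the finger
        self.loupeView.hidden = NO;
        [self.loupeView setNeedsDisplay];
    } else {
        // Ended or cancelled: hide the loupe. For "Question 2", this is also the
        // point where a tappable label (e.g. a UIButton) could be added at p.
        self.loupeView.hidden = YES;
    }
}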
I'm trying to zoom and translate an image on the screen.
here's my drawRect:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetShouldAntialias(context, NO);
    CGContextScaleCTM(context, senderScale, senderScale);
    [self.image drawAtPoint:CGPointMake(imgposx, imgposy)];
    CGContextRestoreGState(context);
}
When senderScale is 1.0, moving the image (imgposx/imgposy) is very smooth. But if senderScale has any other value, performance takes a big hit and the image stutters when I move it.
The image I am drawing is a UIImage object. I create it with
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
and draw a simple UIBezierPath (stroked):
self.image = UIGraphicsGetImageFromCurrentImageContext();
Am I doing something wrong? Turning off the anti-aliasing did not improve things much.
Edit:
I tried this:
rectImage = CGRectMake(0, 0, self.frame.size.width * senderScale, self.frame.size.height * senderScale);
[image drawInRect:rectImage];
but it was just as slow as the other method.
If you want this to perform well, you should let the GPU do the heavy lifting by using Core Animation instead of drawing the image in your -drawRect: method. Try creating a view and doing:

myView.layer.contents = (id)self.image.CGImage;

Then zoom and translate it by manipulating the UIView relative to its superview. If you draw the image in -drawRect:, you're making it do the hard work of blitting the image for every frame. Doing it via Core Animation blits only once, and then lets the GPU zoom and translate the layer.
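A minimal sketch of that approach (my own example; senderScale is carried over from the question, and the pan deltas dx/dy are assumed to come from the gesture handling):

// One-time setup: hand the pre-rendered image to the layer. No -drawRect: needed.
myView.layer.contents = (id)self.image.CGImage;

// Zoom: the GPU rescales the cached layer contents; nothing is redrawn.
myView.transform = CGAffineTransformMakeScale(senderScale, senderScale);

// Pan by (dx, dy): again just a property change, no blit.
myView.center = CGPointMake(myView.center.x + dx, myView.center.y + dy);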
I'm trying to add a feature in my iPhone app that allows users to record the screen. To do this, I used an open source class called ScreenCaptureView that compiles a series of screenshots of the main view through the renderInContext: method. However, this method ignores masked CALayers, which is key to my app.
EDIT:
I need a way to record the screen so that the masks are included. Although this question specifically asks for a way to create the illusion of an image mask, I'm open to any other modifications I can make to record the screen successfully.
APP DESCRIPTION
In the app, I take a picture and create the effect of a moving mouth by animating the position of the jaw region, as shown below.
Currently, I have the entire face as one CALayer, and the chin region as a separate CALayer. To make the chin layer, I mask the chin region from the complete face using a CGPath. (This path is an irregular shape and must be dynamic).
- (CALayer *)getChinLayerFromPic:(UIImage *)pic frame:(CGRect)frame
{
    // p0 and p2 are the endpoints of the chin curve; v1 and v2 are the curve's
    // anchor points (all CGPoints defined elsewhere in my code).
    CGMutablePathRef mPath = CGPathCreateMutable();
    CGPathMoveToPoint(mPath, NULL, p0.x, p0.y);
    CGPoint midpt = CGPointMake((p2.x + p0.x) / 2, (p2.y + p0.y) / 2);
    CGPoint c1 = CGPointMake(2 * v1.x - midpt.x, 2 * v1.y - midpt.y); // control points
    CGPoint c2 = CGPointMake(2 * v2.x - midpt.x, 2 * v2.y - midpt.y);
    CGPathAddQuadCurveToPoint(mPath, NULL, c1.x, c1.y, p2.x, p2.y);
    CGPathAddQuadCurveToPoint(mPath, NULL, c2.x, c2.y, p0.x, p0.y);

    CALayer *chin = [CALayer layer];
    CAShapeLayer *chinMask = [CAShapeLayer layer];
    chin.frame = frame;
    chin.contents = (id)[pic CGImageWithProperOrientation];
    chinMask.path = mPath;
    chin.mask = chinMask;
    CGPathRelease(mPath);
    return chin;
}
I then animate the chin layer with a path animation.
As mentioned before, the renderInContext: method ignores the mask, and returns an image of the entire face instead of just the chin. Is there any way I can create an illusion of masking the chin? I would like to use CALayers if possible, since it would be most convenient for animations. However, I'm open to any ideas, including other ways to capture the video. Thanks.
EDIT:
I'm turning the cropped chin into a UIImage, and then setting that new image as the layer's contents instead of directly masking the layer. However, the cropped region is the reverse of the specified path.
CALayer *chin = [CALayer layer];
chin.frame = frame;

CGImageRef imageRef = [pic CGImage];
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
int targetWidth = frame.size.width;
int targetHeight = frame.size.height;
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGContextRef bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight,
                                            CGImageGetBitsPerComponent(imageRef),
                                            CGImageGetBytesPerRow(imageRef),
                                            colorSpaceInfo, bitmapInfo);

// Note: a CGBitmapContext has a bottom-left origin, so a path built in UIKit
// (top-left) coordinates lands vertically flipped here unless the CTM is flipped first.
CGContextAddPath(bitmap, mPath);
CGContextClip(bitmap);
CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);

CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *chinPic = [UIImage imageWithCGImage:ref];
chin.contents = (id)[chinPic CGImageWithProperOrientation];
// Release the owned references to avoid leaking per call.
CGContextRelease(bitmap);
CGImageRelease(ref);
Why don't you draw the CALayer of the chin into a separate CGImage and make a new UIImage with that?
Then you can add this image to a separate UIImageView, which you can just move around, with a UIPanGestureRecognizer for example.
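A minimal sketch of that suggestion (my own; it assumes the path is in frame-local UIKit coordinates, and it sidesteps the flipped result from the EDIT above, because a UIKit image context already has a top-left origin):

// Render just the chin region of `pic` into a standalone UIImage, clipped to `path`.
- (UIImage *)chinImageFromPic:(UIImage *)pic frame:(CGRect)frame path:(CGPathRef)path
{
    UIGraphicsBeginImageContextWithOptions(frame.size, NO, pic.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, path);
    CGContextClip(ctx);
    // Offset so the part of the picture under `frame` lands in this context.
    [pic drawAtPoint:CGPointMake(-frame.origin.x, -frame.origin.y)];
    UIImage *chinImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return chinImage;
}

// Put it in a view the user (or an animation) can move around.
UIImageView *chinView = [[UIImageView alloc] initWithImage:chinImage];
chinView.frame = frame;
chinView.userInteractionEnabled = YES;
[chinView addGestureRecognizer:
    [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(dragChin:)]];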
I suggest that you draw each part separately (you don't need masks): the face without the chin, and the chin, both with alpha pixels around them; then just draw the chin on top and move it along your path.
Please inform me if this wasn't helpful to you. Regards.
If you just need this in order to capture the screen, it is much easier to use a program such as
http://www.airsquirrels.com/reflector/
that connects your device to the computer via AirPlay and then records the stream on the computer. This particular program has screen recording built in, which is extremely convenient. I think there is a trial version you can use for recordings of up to 10 minutes.
Have you tried taking a look at the question "layer.renderInContext doesn't take layer.mask into account?"
In particular (notice the coordinate flip):
// Make the drawing right with a coordinate switch.
// cHeight is the context height; mod and modTwo are scale factors from the linked answer.
CGContextTranslateCTM(context, 0, cHeight);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context,
                    CGRectMake(maskLayer.frame.origin.x * mod,
                               maskLayer.frame.origin.y * modTwo,
                               maskLayer.frame.size.width * mod,
                               maskLayer.frame.size.height * modTwo),
                    maskLayer.image.CGImage);

// Reverse the coordinate switch by restoring the CTM to the identity.
CGAffineTransform ctm = CGContextGetCTM(context);
ctm = CGAffineTransformInvert(ctm);
CGContextConcatCTM(context, ctm);
I have this code to color in a view:
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:drawImage];
UIGraphicsBeginImageContext(drawImage.frame.size);
[drawImage.image drawInRect:CGRectMake(0, 0, drawImage.frame.size.width, drawImage.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), size);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), r, g, b, a);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
My problem is that I don't want to paint with a plain point; I want to paint with a particular image that is repeated (a PNG image). Is it possible?
It's easy to load a single UIImage and draw it:
UIImage *brushImage = [UIImage imageNamed:@"brush.png"];
[brushImage drawAtPoint:CGPointMake(currentPoint.x - brushImage.size.width / 2,
                                    currentPoint.y - brushImage.size.height / 2)];
This will draw the image just once per cycle, not a continuous line. If you want solid lines of your brush picture, see Objective C: Using UIImage for Stroking.
This could end up loading the image file every time this method is called, making it slow. While the results of [UIImage imageNamed:] are often cached, my code above could be improved by storing the brush for later reuse.
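A small sketch of that caching (the property name brushImage is an assumption):

@property (nonatomic, strong) UIImage *brushImage; // in the class extension

// Lazily load and cache the brush so touchesMoved: never hits the file system.
- (UIImage *)brushImage
{
    if (_brushImage == nil) {
        _brushImage = [UIImage imageNamed:@"brush.png"];
    }
    return _brushImage;
}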
Speaking of performance, test this on older devices. As written, it worked on my second generation iPod touch, but any additions could make it stutter on second and third generation devices. Apple's GLPaint example, recommended by @Armaan, uses OpenGL for fast drawing, but much more code is involved.
Also, you seem to be doing your interaction (touchesBegan:, touchesMoved:, touchesEnded:) in a view that then draws its contents to drawImage, presumably a UIImageView. It is possible to store the painting image, then set this view's layer contents to be that image. This will not alter performance, but the code will be cleaner. If you continue using drawImage, access it through a property instead of using its ivar.
You can start with Apple's sample code: GLPaint.