How do I alpha-mask a SKSpriteNode? - ios

I know there's SKCropNode, but it will only fully include or exclude pixels based on an alpha threshold of the maskNode. What I already tried is using an SKEffectNode with a CIBlendWithAlphaMask filter, but the result I'm getting is invisible, and I'm also not sure how I would move the mask around. Here's the code:
SKSpriteNode* overlay = [SKSpriteNode spriteNodeWithImageNamed:@"Overlay.png"];
//...
SKEffectNode* blendNode = [[SKEffectNode alloc] init];
blendNode.filter = [CIFilter filterWithName:@"CIBlendWithAlphaMask"];
UIImage* maskImage = [UIImage imageNamed:@"MaskImage.png"];
[blendNode.filter setValue:[maskImage CIImage] forKey:@"inputMaskImage"];
[blendNode addChild:overlay];
[self addChild:blendNode];
I had already figured out what seemed like the perfect solution: using an SKLightNode to light the texture and then blend it into the framebuffer using SKBlendModeAdd, but for some reason it won't work on the iPhone 5 (here's a related topic). I'm aware of this topic too, which didn't help either.
Any solution to properly alpha-mask a SKSpriteNode would be greatly appreciated!
Running iOS 8.

Using a dynamically generated CIImage as the mask does the trick. It can be created with a CIFilter, using whatever gradient or image you need as the mask, and setting the filter's inputCenter to the position you want.
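For example, a minimal sketch of that idea (the radial gradient, the radii, and maskCenter below are purely illustrative choices; any CIFilter-generated image works as the mask):

SKSpriteNode *overlay = [SKSpriteNode spriteNodeWithImageNamed:@"Overlay.png"];
CGPoint maskCenter = CGPointMake(50, 50); // wherever the mask should sit, in the overlay's coordinates

// Build the mask with a CIFilter; a radial gradient is just one option.
CIFilter *gradient = [CIFilter filterWithName:@"CIRadialGradient"];
[gradient setValue:[CIVector vectorWithX:maskCenter.x Y:maskCenter.y] forKey:@"inputCenter"];
[gradient setValue:@40 forKey:@"inputRadius0"];
[gradient setValue:@80 forKey:@"inputRadius1"];
[gradient setValue:[CIColor colorWithRed:1 green:1 blue:1 alpha:1] forKey:@"inputColor0"];
[gradient setValue:[CIColor colorWithRed:1 green:1 blue:1 alpha:0] forKey:@"inputColor1"];
// The gradient's output has infinite extent, so crop it to the overlay's size.
CIImage *maskImage = [gradient.outputImage imageByCroppingToRect:CGRectMake(0, 0, overlay.size.width, overlay.size.height)];

SKEffectNode *blendNode = [[SKEffectNode alloc] init];
blendNode.filter = [CIFilter filterWithName:@"CIBlendWithAlphaMask"];
[blendNode.filter setValue:maskImage forKey:@"inputMaskImage"];
[blendNode addChild:overlay];
[self addChild:blendNode];

To move the mask around, regenerate the gradient with a new inputCenter (or transform the CIImage) and assign it to the filter again.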

Related

IOS: Cannot Change Color of CAShapeLayer

I want to create a layer to act as a mask for my UIImageView, which was created in Interface Builder. However, no matter what I do, the layer remains white. The code is pretty straightforward; any ideas what is causing this behavior?
UIImage* image = [UIImage imageNamed:@"face"];
self.imageView.image = image;
CAShapeLayer *maskLayer = [[CAShapeLayer alloc] init];
maskLayer.frame = self.imageView.bounds;
maskLayer.fillColor = [[UIColor blackColor] CGColor];
maskLayer.path = CGPathCreateWithRect(self.imageView.bounds, NULL);
self.imageView.layer.mask = maskLayer;
self.maskLayer = maskLayer;
I edited the code and added the path but it still does not work.
You seem to have a misconception as to what a mask is. It is a set of instructions, in effect, for injecting transparency into a view, thus "punching a hole" through the view to a greater or lesser extent (depending on the degree of transparency). It has no color. You are seeing white because that is the color of what is behind your image view — you have punched a hole through the entire image view and made it invisible.
From the docs:
The layer’s alpha channel determines how much of the layer’s content and background shows through. Fully or partially opaque pixels allow the underlying content to show through but fully transparent pixels block that content.
Black fill (interior of the closed path) = opaque pixels
Area outside of the closed path = transparent pixels
The image will appear in the interior region of your path and will not appear outside of it. The actual colour is irrelevant: once you set a layer as a mask, all that matters is opacity.
The path is ostensibly the same size as the imageView, so you won't see any difference, since the mask matches the imageView bounds. Additionally, if you run this code before the geometry is fully set, such as in viewDidLoad, you may not get the results you expect.
As matt suggests, you need to think about what result you are after.
If you want a "black translucent color", or something similar, consider adding another translucent-colored layer to the mix. But that is not a mask.
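To make both points concrete, here is a small hedged sketch (the oval inset and the 0.4 alpha are arbitrary values): the mask path is smaller than the view so the cut-out is actually visible, it is set once the geometry is known, and a separate tint layer provides the translucent wash if that is what you were after.

- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];

    // Mask: only the inset oval stays visible; everything outside it is cut away.
    CAShapeLayer *maskLayer = [CAShapeLayer layer];
    maskLayer.frame = self.imageView.bounds;
    maskLayer.path = [UIBezierPath bezierPathWithOvalInRect:CGRectInset(self.imageView.bounds, 20, 20)].CGPath;
    self.imageView.layer.mask = maskLayer; // the fill color is irrelevant; only its alpha counts

    // Separate tint layer (not a mask) for a dark translucent wash.
    CALayer *tintLayer = [CALayer layer];
    tintLayer.frame = self.imageView.bounds;
    tintLayer.backgroundColor = [UIColor colorWithWhite:0 alpha:0.4].CGColor;
    [self.imageView.layer addSublayer:tintLayer];
}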

GPUImage Harris Corner Detection on an existing UIImage gives a black screen output

I've successfully added a crosshair generator and harris corner detection filter onto a GPUImageStillCamera output, as well as on live video from GPUImageVideoCamera.
I'm now trying to get this working on a photo set on a UIImageView, but continually get a black screen as the output. I have been reading the issues listed on GitHub against Brad Larson's GPUImage project, but they seemed to be more in relation to blend type filters, and following the suggestions there I still face the same problem.
I've tried altering every line of code to follow various examples I have seen, and to follow Brad's example code in the Filter demo projects, but the result is always the same.
My current code, run once I've taken a photo (which I check at this point to make sure it is not just a black image), is:
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
GPUImageCrosshairGenerator *crossGen = [[GPUImageCrosshairGenerator alloc] init];
crossGen.crosshairWidth = 15.0;
[crossGen forceProcessingAtSize:self.photoView.frame.size];
[cornerFilter1 setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
[crossGen renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];
[stillImageSource addTarget:crossGen];
[crossGen addTarget:cornerFilter1];
[crossGen prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [crossGen imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I've tried prepareForImageCapture on both filters, on neither, adding the two targets in the opposite order, calling imageFromCurrentlyProcessedOutput on either filter, I've tried it without the crosshair generator, I've tried using local variables and variables declared in the .h file. I've tried with and without forceProcessingAtSize on each of the filters.
I can't think of anything else that I haven't tried to get the output. The app is running on iOS 7.0, built in Xcode 5.0.1. The standard filters work on the photo, e.g. the simple GPUImageSobelEdgeDetectionFilter included in the SimpleImageFilter test app.
Any suggestions? I am saving the output to the camera roll so I can check it's not just me failing to display it correctly. I suspect it's a stupid mistake somewhere but am at a loss as to what else to try now.
Thanks.
Edited to add: the corner detection is definitely working, as depending on the threshold I set, it returns between 6 and 511 corners.
The problem with the above is that you're not chaining filters in the proper order. The Harris corner detector takes in an input image, finds the corners within it, and provides the callback block to return those corners. The GPUImageCrosshairGenerator takes in those points and creates a visual representation of the corners.
What you have in the above code is image -> GPUImageCrosshairGenerator -> GPUImageHarrisCornerDetectionFilter, which won't really do anything.
The code in your answer does go directly from the image to the GPUImageHarrisCornerDetectionFilter, but you don't want to use the image output from that. As you saw, it produces an image where the corners are identified by white dots on a black background. Instead, use the callback block, which processes that and returns an array of normalized corner coordinates for you to use.
If you need these to be visible, you could then take that array of coordinates and feed it into the GPUImageCrosshairGenerator to create visible crosshairs, but that image will need to be blended with your original image to make any sense. This is what I do in the FilterShowcase example.
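A rough Objective-C sketch of that chain, reusing the calls from the question and adding a GPUImageAlphaBlendFilter to composite the crosshairs over the source image (treat it as an approximation of the FilterShowcase setup, not a drop-in):

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];

// 1. Detector: only its callback output is useful here, not its rendered image.
GPUImageHarrisCornerDetectionFilter *cornerFilter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter setThreshold:0.1f];

// 2. Crosshair generator: turns the corner array into a visible overlay.
GPUImageCrosshairGenerator *crossGen = [[GPUImageCrosshairGenerator alloc] init];
crossGen.crosshairWidth = 15.0;
[crossGen forceProcessingAtSize:self.photoView.image.size];

[cornerFilter setCornersDetectedBlock:^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
    [crossGen renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];

// 3. Blend the generated crosshairs over the original picture for display.
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 1.0;
[stillImageSource addTarget:blendFilter];
[crossGen addTarget:blendFilter];

[stillImageSource addTarget:cornerFilter]; // feeds the detector (and thus the callback)
[blendFilter prepareForImageCapture];
[stillImageSource processImage];

UIImage *result = [blendFilter imageFromCurrentlyProcessedOutput];
[self.photoView setImage:result];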
I appear to have fixed the problem by trying different variations again: the returned image is now black, but there are white dots at the locations of the found corners. I removed the GPUImageCrosshairGenerator altogether. The code that got this working was:
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
[stillImageSource addTarget:cornerFilter1];
[cornerFilter1 prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [cornerFilter1 imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I do not need to add the crosshairs for the purposes of my app; I simply want to parse the locations of the corners to provide some cropping, but I required the dots to be visible to check that the corners were being detected correctly. I'm not sure if the white dots on black are the expected outcome of this filter, but I presume so.
Updated code for Swift 2:
let stillImageSource: GPUImagePicture = GPUImagePicture(image: image)
let cornerFilter1: GPUImageHarrisCornerDetectionFilter = GPUImageHarrisCornerDetectionFilter()
cornerFilter1.threshold = 0.1
cornerFilter1.forceProcessingAtSize(image.size)
stillImageSource.addTarget(cornerFilter1)
stillImageSource.processImage()
let tmp: UIImage = cornerFilter1.imageByFilteringImage(image)
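For the cropping use case, a hedged Objective-C sketch (mirroring the question's Objective-C code; cropBounds and sourceImage are illustrative names) that reads the normalized corner coordinates from the callback instead of rendering dots; set the block before calling processImage:

__block CGRect cropBounds = CGRectNull;
UIImage *sourceImage = self.photoView.image;

[cornerFilter1 setCornersDetectedBlock:^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
    // The callback delivers normalized (0..1) x/y pairs; scale them into image points.
    for (NSUInteger i = 0; i < cornersDetected; i++) {
        CGPoint corner = CGPointMake(cornerArray[i * 2] * sourceImage.size.width,
                                     cornerArray[i * 2 + 1] * sourceImage.size.height);
        cropBounds = CGRectUnion(cropBounds, (CGRect){corner, CGSizeZero});
    }
    // cropBounds now encloses every detected corner, ready to drive the crop.
}];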

Implement Blur over parts of view

How can I implement the image below programmatically, meaning the digits can change at runtime or even be replaced with a movie?
Just add a blurred UIView on top of your thing.
For example: make a UIImage of your desired view size, blur it using a CIFilter, and then add it to your view. It should achieve the desired effect.
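A hedged sketch of that suggestion (the blur radius and the view variable are illustrative): snapshot the region, blur it with CIGaussianBlur, and lay the result on top.

// Snapshot the region you want to obscure.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Blur it with Core Image.
CIImage *inputImage = [CIImage imageWithCGImage:snapshot.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:inputImage forKey:kCIInputImageKey];
[blur setValue:@8.0 forKey:kCIInputRadiusKey];

CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:blur.outputImage fromRect:inputImage.extent]; // crop back to the original extent

// Lay the blurred copy over the original view.
UIImageView *blurOverlay = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:cgImage]];
CGImageRelease(cgImage);
blurOverlay.frame = view.bounds;
[view addSubview:blurOverlay];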
This is generally the same question and is answered by quite a few methods. Anyway, I would propose one more:
Get the image from UIView
+ (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Or rather, play around a bit with this to grab just the part of the view you want as the image. Now create a new view and add several image views to it (each with the image you got from the layer). Then offset the centers of the image views to approximate a Gaussian blur, take the image from this layer again, and place it back on the original view.
The center offsets are driven by a radius fragment (I'd start with 0.5f) and a resample count:
for (int i = 1; i < resampleCount; i++) {
    // Offset one copy in each direction by a growing fraction of the blur radius.
    view1.center = CGPointMake(view1.center.x + radiusFragment * i, view1.center.y);
    view2.center = CGPointMake(view2.center.x - radiusFragment * i, view2.center.y);
    view3.center = CGPointMake(view3.center.x, view3.center.y + radiusFragment * i);
    view4.center = CGPointMake(view4.center.x, view4.center.y - radiusFragment * i);
    // add the subviews
}
// get the image from the view
All the subviews need to have their alpha set to 1.0f/(resampleCount*4).
This method might not be the fastest, but it is extremely easy to implement, and if you tune the radius and resample range down to minimal fragments it should do pretty well.
Use a UIView with a white background and decrease the alpha property (UIColor components run from 0 to 1):
blurView.backgroundColor = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:0.3];

renderInContext flips the origin of colorWithPatternImage

I have a UIView with its backgroundColor set with colorWithPatternImage. As expected, the background image is drawn starting at the top left corner.
Problem appears when I'm doing renderInContext on that view: the background image is drawn starting at the bottom left corner. Everything else seems to render fine.
Here are the source and destination images:
Here is the code:
// here is the layer to be rendered into an image
UIView *src = [[UIView alloc] initWithFrame:(CGRect){{0, 0}, {100, 100}}];
src.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"background.png"]];
[self.view addSubview:src];
// here we'll display the image
UIImageView *dest = [[UIImageView alloc] initWithFrame:(CGRect){{110, 0}, src.bounds.size}];
[self.view addSubview:dest];
// render `src` to an image in `dest`
UIGraphicsBeginImageContext(src.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[src.layer renderInContext:context];
dest.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Is there a way to keep the image tiling in the right direction, as in the src view?
I managed to work around this by calling CGContextSetPatternPhase with a height of modf(viewHeight, patternHeight)
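Applied to the code above, the workaround looks roughly like this (I'm assuming the intended C function is fmod rather than modf; adjust the phase math to your pattern):

UIGraphicsBeginImageContext(src.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();

// Shift the pattern phase so the tile origin matches UIKit's top-left convention.
CGFloat patternHeight = [UIImage imageNamed:@"background.png"].size.height;
CGContextSetPatternPhase(context, CGSizeMake(0, fmod(src.bounds.size.height, patternHeight)));

[src.layer renderInContext:context];
dest.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();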
https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_overview/dq_overview.html#//apple_ref/doc/uid/TP30001066-CH202-CJBBAEEC
This doc explains why this happens, in particular this part:
Important: The above discussion is essential to understand if you plan to write applications that directly target Quartz on iOS, but it is not sufficient. On iOS 3.2 and later, when UIKit creates a drawing context for your application, it also makes additional changes to the context to match the default UIKit conventions. In particular, patterns and shadows, which are not affected by the CTM, are adjusted separately so that their conventions match UIKit’s coordinate system. In this case, there is no equivalent mechanism to the CTM that your application can use to change a context created by Quartz to match the behavior for a context provided by UIKit; your application must recognize what kind of context it is drawing into and adjust its behavior to match the expectations of the context.

iOS animate a blur for a view

I would like to quickly animate a blur on a UIView to use as a transition in my app. I'm having trouble knowing where to start. I believe Core Image is the proper tool for the job. Can anyone point me to a sample of how to blur a UIView? I'm assuming I will need to convert the view into a single UIImage, but I don't know how to proceed from there.
Thanks in advance!
Taking a snapshot of the view and using GPUImage from Brad Larson (the GPUImageGaussianBlurFilter) got me some nice results.
To animate the view, I created a UIImageView with the blurred image and animated its alpha from 0 to 1 to make the blur appear progressively.
Alternatively, I presume it's possible to increase the blurSize per frame.
#import "GPUImage.h"
...
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
...
GPUImageGaussianBlurFilter * filter = [[GPUImageGaussianBlurFilter alloc] init];
filter.blurSize = 0.5;
UIImage * blurred = [filter imageByFilteringImage:image];
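And a small sketch of the fade described above (names are illustrative): overlay an image view containing the blurred snapshot and animate its alpha in.

UIImageView *blurView = [[UIImageView alloc] initWithImage:blurred];
blurView.frame = view.bounds;
blurView.alpha = 0.0;
[view addSubview:blurView];
[UIView animateWithDuration:0.3 animations:^{
    blurView.alpha = 1.0; // blur appears progressively
}];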
The rasterizationScale of a UIView's layer is what you need. Here is the code for adding a blur effect to a UIView:
CALayer *layer = [self.blurView layer];
[layer setRasterizationScale:0.3];
[layer setShouldRasterize:YES];
For details, refer to the Apple documentation for CALayer. This tutorial might also help you. Hope that helps.
I recently did some tests with blurring a series of images at different blur settings and animating them simply with UIImageView. You might want to take a look:
AnimatedGaussianBlur
