Adding GPUImage Vignette Filter with Opacity - iOS

I am attempting to create an effect on an image using GPUImage. I am adding a vignette to an image to produce an Instagram-inspired filter. Currently I am using a GPUImageVignetteFilter to achieve this. The filter works, but I am looking for a way to either decrease the opacity of this filter, or blend it in, similar to a Photoshop layer effect. Current code:
// Load the source image (GPUImagePicture takes a UIImage, not a file name).
let sourceImage = GPUImagePicture(image: UIImage(named: "Nothing.png"))
// Configure the vignette.
let vignetteFilter = GPUImageVignetteFilter()
vignetteFilter.vignetteColor = GPUVector3(one: 77.0 / 255.0, two: 3.0 / 255.0, three: 188.0 / 255.0)
vignetteFilter.vignetteStart = 0.0
vignetteFilter.vignetteEnd = 1.2
// Process the image and capture the filtered output.
sourceImage?.addTarget(vignetteFilter)
vignetteFilter.useNextFrameForImageCapture()
sourceImage?.processImage()
let newImage = vignetteFilter.imageFromCurrentFramebuffer()
Current Effect:
Desired Effect:
Original Photo:
Any help would be appreciated!

For anyone looking into adding vignettes with alpha: this is not supported in the main GPUImage library at the time of writing. There is a fork by Drew Wilson (https://github.com/drewwilson/GPUImage) which adds a vignetteAlpha property to the filter, and it worked like a charm. Hopefully it will be merged into the main branch in the future!
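A minimal sketch of how that fork's property might be used; the 0.0–1.0 range for vignetteAlpha is an assumption not stated above, so check the fork's source:
// Requires Drew Wilson's fork of GPUImage, which exposes vignetteAlpha.
let vignetteFilter = GPUImageVignetteFilter()
vignetteFilter.vignetteColor = GPUVector3(one: 77.0 / 255.0, two: 3.0 / 255.0, three: 188.0 / 255.0)
vignetteFilter.vignetteStart = 0.0
vignetteFilter.vignetteEnd = 1.2
vignetteFilter.vignetteAlpha = 0.5 // assumed 0.0–1.0 range; roughly a half-strength vignette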

Related

Metal alphaBlendOperation .max weird behavior

I'm using Metal to draw some lines. My drawing canvas is a texture attached to an MTLRenderPassDescriptor, and when I draw into it, blending is enabled on the MTLRenderPipelineDescriptor with alphaBlendOperation = .max:
// Render pass: draw into (and keep) the canvas texture.
renderPassDescriptor = MTLRenderPassDescriptor()
let passAttachment = renderPassDescriptor?.colorAttachments[0]
passAttachment?.texture = self.texture
passAttachment?.loadAction = .load
passAttachment?.storeAction = .store
// Render pipeline: enable blending with .max for both RGB and alpha.
let rpd = MTLRenderPipelineDescriptor()
let pipelineAttachment = rpd.colorAttachments[0]!
pipelineAttachment.pixelFormat = .rgba8Unorm
pipelineAttachment.isBlendingEnabled = true
pipelineAttachment.rgbBlendOperation = .max
pipelineAttachment.alphaBlendOperation = .max
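For context, a minimal sketch of how a pipeline state might be built from that descriptor, assuming device is your MTLDevice; the shader function names are placeholders, not from the question:
// Hypothetical shader function names; substitute your own vertex/fragment functions.
let library = device.makeDefaultLibrary()
rpd.vertexFunction = library?.makeFunction(name: "vertex_main")
rpd.fragmentFunction = library?.makeFunction(name: "fragment_main")
let pipelineState = try? device.makeRenderPipelineState(descriptor: rpd)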
I can change the brush properties (size, opacity, hardness/"blur"). The first two brushes work really well, as in the image below.
But there is one weird behavior: when I use a blurred brush with faded edges, the faded areas where lines connect do not blend as expected, and a small empty line appears at the connection. The image below shows this issue; compare the single line and the single point with the connections and you can see the behavior very clearly.
With .max blending, the pass should keep whichever alpha is larger, the alpha already in the underlying texture or the brush alpha, but when I tap the second and third points it produces an empty line instead of choosing one of the alphas. It is as if the alpha became zero in these areas.
This is my faded brush; you can see there is a gradient of color, but I don't know if there is a problem with it.
Please share any ideas you have to solve it.

Blend material onto another material in SceneKit using PBR iOS

I have already added a material to the geometry of my SCNNode, and now I want to add another material to it and set its blend mode to 'multiply'.
I have tried a lot but am unable to find a way to do this. Blending the texture as multiply works for other lighting setups but not with PBR.
// material is the SCNMaterial already assigned to the node's geometry.
material.lightingModel = .physicallyBased
let image = UIImage(named: "1.PNG")
material.multiply.contents = image
material.multiply.contentsTransform = SCNMatrix4MakeScale(10, 10, 0)
material.multiply.wrapT = .repeat
material.multiply.wrapS = .repeat
material.multiply.intensity = 1.0
Any help on this?
Thanks

Glass effect in SceneKit material

I want to make a glass effect in SceneKit.
I searched on Google but there's no perfect answer.
So I'm looking for a SceneKit warrior who can solve my problem clearly.
Here's an image of what I'm going to make.
It should look real.
The glass effect, reflection, and shadow are the main points here.
I have obj and dae files already.
So, is there anyone who can help me?
Create an SCNMaterial, configure the following properties, and assign it to the bottle geometry of an SCNNode:
.lightingModel = .blinn
.transparent.contents = // an image/texture whose alpha channel defines
// the area of partial transparency (the glass)
// and the opaque part (the label).
.transparencyMode = .dualLayer
.fresnelExponent = 1.5
.isDoubleSided = true
.specular.contents = UIColor(white: 0.6, alpha: 1.0)
.diffuse.contents = // texture image including the label (rest can be gray)
.shininess = // somewhere between 25 and 100
.reflective.contents = // glass won’t look good unless it has something
// to reflect, so also configure this as well.
// To at least a gray color with value 0.7
// but preferably an image.
Depending on what else is in your scene, the background, and the lighting used, you will probably have to tune the values above to get the desired results. If you want a bottle without the label, use the .transparency property (set its contents to a gray color) instead of the .transparent property.
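Putting those properties together, a minimal sketch; the texture names and bottleNode are placeholders, not from the answer:
let glassMaterial = SCNMaterial()
glassMaterial.lightingModel = .blinn
glassMaterial.transparencyMode = .dualLayer
glassMaterial.fresnelExponent = 1.5
glassMaterial.isDoubleSided = true
glassMaterial.shininess = 50 // somewhere between 25 and 100
glassMaterial.specular.contents = UIColor(white: 0.6, alpha: 1.0)
glassMaterial.diffuse.contents = UIImage(named: "bottleDiffuse") // placeholder: texture including the label
glassMaterial.transparent.contents = UIImage(named: "bottleAlphaMask") // placeholder: alpha mask for the glass/label
glassMaterial.reflective.contents = UIImage(named: "environment") // placeholder: something for the glass to reflect
bottleNode.geometry?.firstMaterial = glassMaterial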

MTKView Displaying Wide Gamut P3 Colorspace

I'm building a real-time photo editor based on CIFilters and MetalKit. But I'm running into an issue with displaying wide gamut images in an MTKView.
Standard sRGB images display just fine, but Display P3 images are washed out.
I've tried setting the CIContext.render colorspace as the image colorspace, and still experience the issue.
Here are snippets of the code:
// Rendering the filtered CIImage into the MTKView's drawable:
guard let inputImage = CIImage(mtlTexture: sourceTexture!) else { return }
let outputImage = imageEditor.processImage(inputImage)
print(colorSpace)
context.render(outputImage,
               to: currentDrawable.texture,
               commandBuffer: commandBuffer,
               bounds: inputImage.extent,
               colorSpace: colorSpace)
commandBuffer?.present(currentDrawable)
// In the UIImagePickerController delegate, capturing the picked image's color space:
let pickedImage = info[UIImagePickerControllerOriginalImage] as! UIImage
print(pickedImage.cgImage?.colorSpace)
if let cspace = pickedImage.cgImage?.colorSpace {
    colorSpace = cspace
}
I have found a similar issue on the Apple developer forums, but without any answers: https://forums.developer.apple.com/thread/66166
In order to support the wide color gamut, you need to set the colorPixelFormat of your MTKView to either .bgra10_xr or .bgra10_xr_srgb. I suspect the colorSpace property of macOS MTKViews won't be supported on iOS, because color management in iOS is not active but targeted (read "Best practices for color management").
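For example, assuming metalView is your MTKView outlet:
// Extended-range, sRGB-encoded drawable; the view applies the gamma encoding on write.
metalView.colorPixelFormat = .bgra10_xr_srgb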
Without seeing your images and their actual values, it is hard to diagnose, but I'll explain my findings & experiments. I suggest you start like I did, by debugging a single color.
For instance, what's the reddest point in P3 color space? It can be defined through a UIColor like this:
UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
Add a UIButton to your view with the background set to that color for debugging purposes. You can either get the components in code to see what those values become in sRGB,
var fRed: CGFloat = 0
var fGreen: CGFloat = 0
var fBlue: CGFloat = 0
var fAlpha: CGFloat = 0
let c = UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
c.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha)
or you can use the Calculator in the macOS ColorSync Utility.
Make sure you select Extended Range, otherwise the values will be clamped to 0 and 1.
So, as you can see, your P3(1, 0, 0) corresponds to (1.0930, -0.2267, -0.1501) in extended sRGB.
Now, back to your MTKView,
If you set the colorPixelFormat of your MTKView to .bgra10_xr, then you obtain the brightest red if the output of your shader is,
(1.0930, -0.2267, -0.1501)
If you set the colorPixelFormat of your MTKView to .bgra10_xr_srgb, then you obtain the brightest red if the output of your shader is,
(1.22486, -0.0420312, -0.0196301)
because you have to write a linear RGB value, since this texture format will apply the gamma correction for you. Be careful when applying the inverse gamma, since there are negative values. I use this function,
let f = { (c: Float) -> Float in
    if abs(c) <= 0.04045 {
        return c / 12.92
    }
    // Preserve the sign, since extended-range components can be negative.
    let s: Float = c < 0 ? -1 : 1
    return s * powf((abs(c) + 0.055) / 1.055, 2.4)
}
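Applied to the extended sRGB triple from above, this function reproduces the linear values quoted earlier:
// (1.0930, -0.2267, -0.1501) -> roughly (1.2249, -0.0420, -0.0196)
let linearRed = [Float(1.0930), -0.2267, -0.1501].map(f)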
The last missing piece is creating a wide gamut UIImage. Set the color space to CGColorSpace.displayP3 and copy the data over. But what data, right? The brightest red in this image will be
(1, 0, 0)
or (65535, 0, 0) in 16-bit ints.
What I do in my code is use .rgba16Unorm textures to manipulate images in the displayP3 color space, where (1, 0, 0) is the brightest red in P3. This way, I can directly copy their contents over to a UIImage. Then, for display, I pass a color transform to the shader to convert from P3 to extended sRGB (so colors are not saturated) before displaying. I work in linear color, so my transform is just a 3x3 matrix. I set my view to .bgra10_xr_srgb, so the gamma is applied automatically for me.
That (column-major) matrix is,
1.2249 -0.2247 0
-0.0420 1.0419 0
-0.0197 -0.0786 1.0979
You can read about how I generated it here: Exploring the display-P3 color space
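As a sketch, that transform could be set up with simd in Swift, reading the values above as the rows of the matrix so that multiplying P3 red reproduces the extended sRGB red quoted earlier:
import simd
// P3 -> linear extended sRGB, rows as printed above.
let p3ToLinearExtendedSRGB = float3x3(rows: [
    SIMD3<Float>( 1.2249, -0.2247,  0),
    SIMD3<Float>(-0.0420,  1.0419,  0),
    SIMD3<Float>(-0.0197, -0.0786,  1.0979)
])
// P3 (1, 0, 0) -> approximately (1.2249, -0.0420, -0.0197)
let red = p3ToLinearExtendedSRGB * SIMD3<Float>(1, 0, 0)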
Here's an example I built using UIButtons and an MTKView, screen-captured on an iPhone X.
The button on the left is the brightest red on sRGB, while the button on the right is using a displayP3 color. At the center, I placed an MTKView that outputs the transformed linear color as described above.
Same experiment for green,
Now, if you see this on a recent iPhone or iPad, you should see both the square in the center and the button on the right show the same bright color. If you see this on a Mac that can't display them, the buttons will appear to be the same color. If you see this on a Windows machine or in a browser without proper color management, the left button may also appear to be a different color, but that's only because the whole image is interpreted as sRGB and obviously those pixels have different values... but the appearance won't be correct.
If you want more references, check the testP3UIColor unit test I added here: ColorTests.swift,
my functions to initialize the UIImage: Image.swift,
and a sample app to try out the conversions: SampleColorPalette
I haven't experimented with CIImages, but I guess the same principles apply.
I hope this information is of some help. It also took me a long time to figure out how to display colors properly, because I couldn't find any explicit reference to displayP3 support in the Metal SDK documentation.

Apply Core Image Filter (CIBumpDistortion) to only one part of an image + change radius of selection and intensity of CIFilter

I would like to copy some of the features displayed here:
So I would like the user to apply a CIBumpDistortion filter to an image and let him choose
1) where exactly he wants to apply it by letting him just touch the respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the image above)
I read some previously asked questions, but they were not really helpful, and some of the solutions sounded far from user-friendly (e.g. cropping the needed part, then reapplying it to the old image). I hope I am not asking for too much at once. Objective-C would be preferred, but any help/hint would be much appreciated! Thank you in advance!
I wrote a demo (iPad) project that lets you apply most supported CIFilters. It interrogates each filter for the parameters it needs and has built-in support for float values as well as points and colors. For the bump distortion filter it lets you select a center point, a radius, and an input scale.
The project is called CIFilterTest. You can download the project from Github at this link: https://github.com/DuncanMC/CIFilterTest
There is quite a bit of housekeeping in the app to support the general-purpose ability to use any supported filter, but it should give you enough information to implement your own bump filter as you're asking to do.
The approach I worked out for applying a filter and getting it to render without extending outside the bounds of the original image is: first apply a clamp filter (CIAffineClamp) set to the identity transform, feed the output of that filter into the input of your "target" filter (the bump distortion filter in this case), and then feed that output into a crop filter (CICrop) whose bounds are set to the original image size.
The method to look for in the sample project is called showImage, in ViewController.m
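A minimal Swift sketch of that clamp → bump → crop chain (the question prefers Objective-C, but the same filter chain applies; center, radius, and scale are the values you would drive from the touch location and the two sliders):
func bumpDistort(_ input: CIImage, center: CGPoint, radius: CGFloat, scale: CGFloat) -> CIImage? {
    guard let clamp = CIFilter(name: "CIAffineClamp"),
          let bump = CIFilter(name: "CIBumpDistortion") else { return nil }
    // 1. Clamp: extend the edge pixels infinitely so the distortion has data to pull from.
    clamp.setValue(input, forKey: kCIInputImageKey)
    clamp.setValue(NSValue(cgAffineTransform: .identity), forKey: kCIInputTransformKey)
    // 2. Bump distortion on the clamped image.
    bump.setValue(clamp.outputImage, forKey: kCIInputImageKey)
    bump.setValue(CIVector(x: center.x, y: center.y), forKey: kCIInputCenterKey)
    bump.setValue(radius, forKey: kCIInputRadiusKey)
    bump.setValue(scale, forKey: kCIInputScaleKey)
    // 3. Crop back to the original image's extent.
    return bump.outputImage?.cropped(to: input.extent)
}
Pass the user's touch location (converted to the image's coordinate space, where the origin is at the bottom left) as center, and drive radius and scale from the two sliders.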
You wrote:
1) where exactly he wants to apply it by letting him just touch the
respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the
image above)
Well, CIBumpDistortion has those attributes:
inputCenter is the center of the effect
inputRadius is the size of the circle selection
inputScale is the intensity
Simon
To show the bump:
You have to pass the location (kCIInputCenterKey) on the image, along with the radius (the white circle in your case).
func appleBumpDistort(toImage currentImage: UIImage, radius: Float, intensity: Float) -> UIImage? {
    let context = CIContext()
    // CIFilter(name:) and CIImage(image:) are both failable.
    guard let currentFilter = CIFilter(name: "CIBumpDistortion"),
          let beginImage = CIImage(image: currentImage) else { return nil }
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(radius, forKey: kCIInputRadiusKey)
    currentFilter.setValue(intensity, forKey: kCIInputScaleKey)
    currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2),
                           forKey: kCIInputCenterKey)
    guard let image = currentFilter.outputImage else { return nil }
    if let cgimg = context.createCGImage(image, from: image.extent) {
        return UIImage(cgImage: cgimg)
    }
    return nil
}
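Usage, for example (originalImage and imageView are placeholders; the radius and intensity values would come from your sliders):
if let distorted = appleBumpDistort(toImage: originalImage, radius: 300, intensity: 0.5) {
    imageView.image = distorted
}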
