SCNMaterial prevent stretch - ios

I have a custom geometry quadrangle and my texture image is displaying on it, but I want it to display as an Aspect Fill, rather than stretching or compressing to fit the space.
I'm applying the same texture to multiple walls in a room so if the image is wallpaper, it has to look correct.
Is there a way to use the following and also determine how it fills?
quadNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "wallpaper3.jpg")
Thanks
[UPDATE]
let quadNode = SCNNode(geometry: quad)
let (min, max) = quadNode.boundingBox
let width = CGFloat(max.x - min.x)
let height = CGFloat(max.y - min.y)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "wallpaper3.jpg")
material.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(width), Float(height), 1)
material.diffuse.wrapS = SCNWrapMode.repeat
material.diffuse.wrapT = SCNWrapMode.repeat
quadNode.geometry?.firstMaterial = material

I think this might help you. It's in Objective-C, but it should be understandable:
CGFloat width = self.planeGeometry.width;
CGFloat height = self.planeGeometry.length;
SCNMaterial *material = self.planeGeometry.materials[4];
material.diffuse.contentsTransform = SCNMatrix4MakeScale(width, height, 1);
material.diffuse.wrapS = SCNWrapModeRepeat;
material.diffuse.wrapT = SCNWrapModeRepeat;
Plane Geometry is defined as follows:
self.planeGeometry = [SCNBox boxWithWidth:width height:planeHeight length:length chamferRadius:0];
//planeHeight = 0.01;
I'm using this to show horizontal planes, and the material it's made of doesn't get stretched out, but merely extends. Hoping that's what you need.
The dimensions of the plane are defined as follows (in case it is needed):
float width = anchor.extent.x;
float length = anchor.extent.z;
This is being done in the initWithAnchor: method, which uses the ARPlaneAnchor found on a plane.
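For the original aspect-fill question, here is a rough sketch (not part of the answer above) of computing a contentsTransform that crops the image to the quad's aspect ratio instead of tiling it. The helper name is mine, and it assumes the quad's texture coordinates span 0 to 1; depending on your UV layout you may need to adjust the translation.
import SceneKit
import UIKit

// Hypothetical helper: make `image` aspect-fill a quad of size quadWidth x quadHeight.
func applyAspectFill(_ image: UIImage, to material: SCNMaterial, quadWidth: CGFloat, quadHeight: CGFloat) {
    material.diffuse.contents = image
    let imageAspect = image.size.width / image.size.height
    let quadAspect = quadWidth / quadHeight
    var scaleX: CGFloat = 1
    var scaleY: CGFloat = 1
    if quadAspect > imageAspect {
        // Quad is relatively wider than the image: use the full width, crop top and bottom.
        scaleY = imageAspect / quadAspect
    } else {
        // Quad is relatively taller than the image: use the full height, crop left and right.
        scaleX = quadAspect / imageAspect
    }
    var transform = SCNMatrix4MakeScale(Float(scaleX), Float(scaleY), 1)
    transform.m41 = Float((1 - scaleX) / 2)   // center the sampled region horizontally
    transform.m42 = Float((1 - scaleY) / 2)   // center the sampled region vertically
    material.diffuse.contentsTransform = transform
    material.diffuse.wrapS = .clamp
    material.diffuse.wrapT = .clamp
}

// Usage with the bounding-box width/height from the update above:
// applyAspectFill(UIImage(named: "wallpaper3.jpg")!, to: material, quadWidth: width, quadHeight: height)
Note that this crops each quad's image independently, so the wallpaper will not be continuous across separate walls.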

Mosaic light show CAReplicatorLayer animation

I'm trying to achieve this mosaic light show effect for my background view with the CAReplicatorLayer object:
https://downloops.com/stock-footage/mosaic-light-show-blue-illuminated-pixel-grid-looping-background/
Each tile/CALayer is a single image that was replicated horizontally & vertically. That part I have done.
It seems to me this task is broken into at least 4 separate parts:
Pick a random tile
Select a random range of color offset for the selected tile
Apply that color offset over a specified duration in seconds
If the random color offset exceeds a specific threshold then apply a glow effect with the color offset animation.
But I'm not actually sure this would be the correct algorithm.
My current code was taken from this tutorial:
https://www.swiftbysundell.com/articles/ca-gems-using-replicator-layers-in-swift/
Animations are not my strong suit, and I don't actually know how to apply a continuous/repeating animation on all tiles. Here is my current code:
@IBOutlet var animationView: UIView!
func cleanUpAnimationView() {
    self.animationView.layer.removeAllAnimations()
    self.animationView.layer.sublayers?.removeAll()
}
/// Start a background animation with a replicated pattern image in tiled formation.
func setupAnimationView(withPatternImage patternImage: UIImage, animate: Bool = true) {
    // Tutorial: https://www.swiftbysundell.com/articles/ca-gems-using-replicator-layers-in-swift/
    let imageSize = patternImage.size.halve
    self.cleanUpAnimationView()
    // Animate pattern image
    let replicatorLayer = CAReplicatorLayer()
    replicatorLayer.frame.size = self.animationView.frame.size
    replicatorLayer.masksToBounds = true
    self.animationView.layer.addSublayer(replicatorLayer)
    // Give the replicator layer a sublayer to replicate
    let imageLayer = CALayer()
    imageLayer.contents = patternImage.cgImage
    imageLayer.frame.size = imageSize
    replicatorLayer.addSublayer(imageLayer)
    // Tell the replicator layer how many copies (or instances) of the image needs to be rendered. But we won't see more than one since they are, per default, all rendered/stacked on top of each other.
    let instanceCount = self.animationView.frame.width / imageSize.width
    replicatorLayer.instanceCount = Int(ceil(instanceCount))
    // Instance offsets & transforms is needed to move them
    // 'CATransform3D' transform will be used on each instance: shifts them to the right & reduces the red & green color component of each instance's tint color.
    // Shift each instance by the width of the image
    replicatorLayer.instanceTransform = CATransform3DMakeTranslation(imageSize.width, 0, 0)
    // Reduce the red & green color component of each instance, effectively making each copy more & more blue while horizontally repeating the gradient pattern
    let colorOffset = -1 / Float(replicatorLayer.instanceCount)
    replicatorLayer.instanceRedOffset = colorOffset
    replicatorLayer.instanceGreenOffset = colorOffset
    //replicatorLayer.instanceBlueOffset = colorOffset
    //replicatorLayer.instanceColor = UIColor.random.cgColor
    // Extend the original pattern to also repeat vertically using another tint color gradient
    let verticalReplicatorLayer = CAReplicatorLayer()
    verticalReplicatorLayer.frame.size = self.animationView.frame.size
    verticalReplicatorLayer.masksToBounds = true
    verticalReplicatorLayer.instanceBlueOffset = colorOffset
    self.animationView.layer.addSublayer(verticalReplicatorLayer)
    let verticalInstanceCount = self.animationView.frame.height / imageSize.height
    verticalReplicatorLayer.instanceCount = Int(ceil(verticalInstanceCount))
    verticalReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(0, imageSize.height, 0)
    verticalReplicatorLayer.addSublayer(replicatorLayer)
    guard animate else { return }
    // Set both the horizontal & vertical replicators to add a slight delay to all animations applied to the layer they're replicating
    let delay = TimeInterval(0.1)
    replicatorLayer.instanceDelay = delay
    verticalReplicatorLayer.instanceDelay = delay
    // This will make the image layer change color
    let animColor = CABasicAnimation(keyPath: "instanceRedOffset")
    animColor.duration = animationDuration
    animColor.fromValue = verticalReplicatorLayer.instanceRedOffset
    animColor.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
    animColor.autoreverses = true
    animColor.repeatCount = .infinity
    replicatorLayer.add(animColor, forKey: "colorshift")
    let animColor1 = CABasicAnimation(keyPath: "instanceGreenOffset")
    animColor1.duration = animationDuration
    animColor1.fromValue = verticalReplicatorLayer.instanceGreenOffset
    animColor1.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
    animColor1.autoreverses = true
    animColor1.repeatCount = .infinity
    replicatorLayer.add(animColor1, forKey: "colorshift1")
    let animColor2 = CABasicAnimation(keyPath: "instanceBlueOffset")
    animColor2.duration = animationDuration
    animColor2.fromValue = verticalReplicatorLayer.instanceBlueOffset
    animColor2.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
    animColor2.autoreverses = true
    animColor2.repeatCount = .infinity
    replicatorLayer.add(animColor2, forKey: "colorshift2")
}
let imageSize = patternImage.size.halve
and
animColor.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
both generated errors.
I removed the halve and commented out the animColor lines, and the code runs and animates. I could not get ANY replicator layer to display or animate at all (not even the most basic Apple or tutorial code) until I used your code. Thank you so much!
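For anyone hitting the same two compile errors, here is a hedged guess at what those lines were probably meant to be: halve looks like a custom CGSize extension that isn't shown, and Int.random needs a range in current Swift.
// Assuming `halve` was a convenience for half of the pattern image's size:
let imageSize = CGSize(width: patternImage.size.width / 2,
                       height: patternImage.size.height / 2)

// Assuming the intent was a random instance index; Int.random(in:) requires a range,
// and max(1, ...) guards against an empty range when there is only one instance:
animColor.toValue = -1 / Float(Int.random(in: 1...max(1, replicatorLayer.instanceCount - 1)))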

How do you play a video with alpha channel using AVFoundation?

I have an AR application which uses SceneKit and imports a video onto the scene using AVPlayer, adding it to the scene as a child SKVideoNode.
The video is visible as it is supposed to be, but the transparency in the video is not achieved.
Code as follows:
let spriteKitScene = SKScene(size: CGSize(width: self.sceneView.frame.width, height: self.sceneView.frame.height))
spriteKitScene.scaleMode = .aspectFit
guard let fileURL = Bundle.main.url(forResource: "Triple_Tap_1", withExtension: "mp4") else {
    return
}
let videoPlayer = AVPlayer(url: fileURL)
videoPlayer.actionAtItemEnd = .none
let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)
videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
videoSpriteKitNode.size = spriteKitScene.size
videoSpriteKitNode.yScale = -1.0
videoSpriteKitNode.play()
spriteKitScene.backgroundColor = .clear
spriteKitScene.addChild(videoSpriteKitNode)
let background = SCNPlane(width: CGFloat(2), height: CGFloat(2))
background.firstMaterial?.diffuse.contents = spriteKitScene
let backgroundNode = SCNNode(geometry: background)
backgroundNode.position = position
backgroundNode.constraints = [SCNBillboardConstraint()]
backgroundNode.rotation.z = 0
self.sceneView.scene.rootNode.addChildNode(backgroundNode)
// Create a transform with a translation of 0.2 meters in front of the camera.
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul((self.session.currentFrame?.camera.transform)!, translation)
// Add a new anchor to the session.
let anchor = ARAnchor(transform: transform)
self.sceneView.session.add(anchor: anchor)
What could be the best way to implement the transparency of the Triple_Tap_1 video in this case?
I have gone through some Stack Overflow questions on this topic, and found the only solution to be a KittyBoom repository that was created somewhere in 2013, using Objective-C.
I'm hoping that the community can reveal a better solution to this problem. The GPUImage library is not something I could get to work.
I've come up with two ways of making this possible. Both utilize surface shader modifiers. Detailed information on shader modifiers can be found in the Apple Developer Documentation.
Here's an example project I've created.
1. Masking
You would need to create another video that represents a transparency mask. In that video, black = fully opaque and white = fully transparent (or any other way you would like to represent transparency; you would just need to tinker with the surface shader).
Create an SKScene with this video just like you do in the code you provided and put it into material.transparent.contents (the same material that you put the diffuse video contents into):
let spriteKitOpaqueScene = SKScene(...)
let spriteKitMaskScene = SKScene(...)
... // creating SKVideoNodes and AVPlayers for each video etc
let material = SCNMaterial()
material.diffuse.contents = spriteKitOpaqueScene
material.transparent.contents = spriteKitMaskScene
let background = SCNPlane(...)
background.materials = [material]
Add a surface shader modifier to the material. It is going to "convert" black color from the mask video (well, actually red color, since we only need one color component) into alpha.
let surfaceShader = "_surface.transparent.a = 1 - _surface.transparent.r;"
material.shaderModifiers = [ .surface: surfaceShader ]
That's it! Now the white color on the masking video is going to be transparent on the plane.
However, you would have to take extra care to synchronize these two videos, since the AVPlayers will probably get out of sync. Sadly, I didn't have time to address that in my example project (yet; I will get back to it when I have time). Look into this question for a possible solution.
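As a rough sketch of one way to keep the two players in step (the function name and the 0.5-second start delay are my assumptions, and both player items should be ready to play before calling it), you can anchor both players to the same host time:
import AVFoundation

func startInSync(_ opaquePlayer: AVPlayer, _ maskPlayer: AVPlayer) {
    // setRate(_:time:atHostTime:) needs precise timing, so disable automatic waiting first.
    opaquePlayer.automaticallyWaitsToMinimizeStalling = false
    maskPlayer.automaticallyWaitsToMinimizeStalling = false
    // Start both players at media time zero, anchored to the same host time slightly in the future.
    let startDelay = CMTime(value: 1, timescale: 2) // 0.5 s
    let hostTime = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()), startDelay)
    opaquePlayer.setRate(1.0, time: CMTime.zero, atHostTime: hostTime)
    maskPlayer.setRate(1.0, time: CMTime.zero, atHostTime: hostTime)
}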
Pros:
No artifacts (if synchronized)
Precise
Cons:
Requires two videos instead of one
Requires synchronization of the AVPlayers
2. Chroma keying
You would need a video that has a vibrant color as a background that would represent parts that should be transparent. Usually green or magenta are used.
Create an SKScene for this video like you normally would and put it into material.diffuse.contents.
Add a chroma key surface shader modifier which will cut out the color of your choice and make those areas transparent. I've borrowed this shader from GPUImage, and I don't really know how it actually works, but it seems to be explained in this answer.
let surfaceShader =
"""
uniform vec3 c_colorToReplace = vec3(0, 1, 0);
uniform float c_thresholdSensitivity = 0.05;
uniform float c_smoothing = 0.0;
#pragma transparent
#pragma body
vec3 textureColor = _surface.diffuse.rgb;
float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
float maskCb = 0.5647 * (c_colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
float a = blendValue;
_surface.transparent.a = a;
"""
material.shaderModifiers = [ .surface: surfaceShader ]
To set the uniforms, use the setValue(_:forKey:) method:
let vector = SCNVector3(x: 0, y: 1, z: 0) // represents float RGB components
material.setValue(vector, forKey: "c_colorToReplace")
material.setValue(0.3 as Float, forKey: "c_smoothing")
material.setValue(0.1 as Float, forKey: "c_thresholdSensitivity")
The as Float part is important; otherwise Swift will cast the value to a Double and the shader will not be able to use it.
But to get precise masking from this you would have to really tinker with the c_smoothing and c_thresholdSensitivity uniforms. In my example project I ended up with a little green rim around the shape, but maybe I just didn't use the right values.
Pros:
only one video required
simple setup
Cons:
possible artifacts (green rim around the border)

Rotate my SceneKit material

I'm taking images with AVCapturePhotoOutput and then using their JPEG representation as the texture on a SceneKit SCNPlane that is the same aspect ratio as the image:
let image = UIImage(data: dataImage!)
let rectangle = SCNPlane(width:9, height:12)
let rectmaterial = SCNMaterial()
rectmaterial.diffuse.contents = image
rectmaterial.isDoubleSided = true
rectangle.materials = [rectmaterial]
let rectnode = SCNNode(geometry: rectangle)
let pos = sceneSpacePosition(inFrontOf: self.pictCamera, atDistance: 16.5) // 16.5 is arbitrary, but makes the rectangle the same size as the camera
rectnode.position = pos
rectnode.orientation = self.pictCamera.orientation
pictView.scene?.rootNode.addChildNode(rectnode)
sceneSpacePosition is a bit of code that can be found here on SO that maps CoreMotion into SceneKit orientation. It is used to place the rectangle, which does indeed appear at the right location with the right size. All very cool.
The problem is that the image is rotated 90 degrees to the rectangle. So I did the obvious:
rectmaterial.diffuse.contentsTransform = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
This does not work properly; the resulting image is unrecognizable. It appears that one small part of the image has been stretched to a huge size. I thought it might be the axis, but I tried all three with the same result.
Any ideas?
You are rotating around the upper left corner, as suggested by Alain T.
If you move your image down, you may get the rotation you were expecting.
Try this:
let translation = SCNMatrix4MakeTranslation(0, -1, 0)
let rotation = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
let transform = SCNMatrix4Mult(translation, rotation)
rectmaterial.diffuse.contentsTransform = transform
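If you would rather change the pivot than shift the image, here is an alternative sketch (not part of the answer above) that rotates about the texture's center at (0.5, 0.5). SceneKit's matrix concatenation order can be confusing, so if the result ends up offset, swap the arguments to SCNMatrix4Mult.
let toCenter = SCNMatrix4MakeTranslation(0.5, 0.5, 0)
let centerRotation = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
let fromCenter = SCNMatrix4MakeTranslation(-0.5, -0.5, 0)
// Move the pivot to the texture center, rotate, then move it back.
rectmaterial.diffuse.contentsTransform = SCNMatrix4Mult(SCNMatrix4Mult(fromCenter, centerRotation), toCenter)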

Drawing a confidence ellipse on top of a scatter plot

I'm currently working on an iOS app where I'm using the CorePlot library (version 2.1) to draw a scatter plot. My scatter plot draws fine, and in the next step I'd like to draw a translucent confidence ellipse on top of the plot. I've written a class computing the main and minor axes and the required rotation angle of my ellipse. My ConfidenceEllipse class implements a getPath() method which returns a CGPath representing the ellipse to draw.
func getPath() -> CGPath
{
    var ellipse: CGPath
    var transform = CGAffineTransformIdentity
    transform = CGAffineTransformTranslate(transform, CGFloat(-self.meanX), CGFloat(-self.meanY))
    transform = CGAffineTransformRotate(transform, CGFloat(self.angle))
    transform = CGAffineTransformTranslate(transform, CGFloat(self.meanX), CGFloat(self.meanY))
    ellipse = CGPathCreateWithEllipseInRect(CGRectMake(CGFloat(-self.mainHalfAxis), CGFloat(-self.minorHalfAxis), CGFloat(2 * self.mainHalfAxis), CGFloat(2 * self.minorHalfAxis)), &transform)
    return ellipse
}
After searching the web for a while, it appears that Annotations are the way to go, so I tried this:
let graph = hostView.hostedGraph!
let space = graph.defaultPlotSpace
let ellipse = ConfidenceEllipse(chiSquare: 5.991)
ellipse.analyze(self.samples)
let annotation = CPTPlotSpaceAnnotation (plotSpace: space!, anchorPlotPoint: [0,0])
let overlay = CPTBorderedLayer (frame: graph.frame)
overlay.outerBorderPath = ellipse.getPath()
let fillColor = CPTColor.yellowColor()
overlay.fill = CPTFill (color: fillColor)
annotation.contentLayer = overlay
annotation.contentLayer?.opacity = 0.5
graph.addAnnotation(annotation)
Doing this will give me the following:
Screenshot
As you can see, the overlay takes up the full size of the frame, which seems logical given the fact that I passed the frame's dimensions when creating the CPTBorderedLayer object. I also tried leaving the constructor empty, but then the overlay doesn't show at all. So I'm wondering, is there anything I'm missing here?
You need to scale the ellipse to match the plot. Use the plot area bounds for the frame of the annotation layer and attach the annotation to the plot area. Scale the ellipse in the x- and y-directions to match the transform used by the plot space to fit plots in the plot area.
Edit:
After looking into how bordered layers work, I realized my suggestion above won't work. CPTBorderedLayer sets the outerBorderPath automatically whenever the layer bounds change. Instead of trying to affect the layer border, draw the ellipse into an image and use that as the fill for the bordered layer. You should size the layer so the ellipse just fits inside.
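A rough sketch of that suggestion follows. It uses current UIKit API rather than the Swift 2 style of the question, the helper name is mine, and the CPTImage/CPTFill initializers in the commented line are from Core Plot 2.x and may differ in your version.
import UIKit

/// Render a CGPath into a translucent yellow image sized to the path's bounding box.
func ellipseFillImage(for path: CGPath) -> UIImage {
    let bounds = path.boundingBoxOfPath
    let renderer = UIGraphicsImageRenderer(size: bounds.size)
    return renderer.image { context in
        let ctx = context.cgContext
        // Shift the path so its bounding box starts at the image origin.
        ctx.translateBy(x: -bounds.origin.x, y: -bounds.origin.y)
        ctx.addPath(path)
        ctx.setFillColor(UIColor.yellow.withAlphaComponent(0.25).cgColor)
        ctx.fillPath()
    }
}

// Then, roughly, using the variables from the annotation code above:
// overlay.fill = CPTFill(image: CPTImage(cgImage: ellipseFillImage(for: ellipse.getPath()).cgImage!, scale: UIScreen.main.scale))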
After failing to get the Annotations to work properly, I decided to take a different road. My final solution consists of overlaying my original scatter plot with a second one, which only contains one data point, namely the center of my confidence ellipse. Here's the code:
func drawConfidenceEllipse() {
    let graph = hostView.hostedGraph!
    let plotSpace = graph.defaultPlotSpace as! CPTXYPlotSpace
    let scaleX = (graph.bounds.size.width - graph.paddingLeft - graph.paddingRight) / CGFloat(plotSpace.xRange.lengthDouble)
    let scaleY = (graph.bounds.size.height - graph.paddingTop - graph.paddingBottom) / CGFloat(plotSpace.yRange.lengthDouble)
    let analysis = ConfidenceEllipse(chiSquare: 5.991)
    analysis.analyze(self.samples)
    let unscaledPath = analysis.getPath()
    let bounds = CGPathGetBoundingBox(unscaledPath)
    var scaler = CGAffineTransformIdentity
    scaler = CGAffineTransformScale(scaler, scaleX, scaleY)
    scaler = CGAffineTransformTranslate(scaler, CGFloat(-bounds.origin.x), CGFloat(-bounds.origin.y))
    let scaledPath = CGPathCreateCopyByTransformingPath(unscaledPath, &scaler)
    let scaledBounds = CGPathGetPathBoundingBox(scaledPath)
    let symbol = CPTPlotSymbol()
    symbol.symbolType = CPTPlotSymbolType.Custom
    symbol.customSymbolPath = scaledPath
    symbol.fill = CPTFill(color: CPTColor.yellowColor().colorWithAlphaComponent(0.25))
    symbol.size = CGSize(width: scaledBounds.size.width, height: scaledBounds.size.height)
    let lineStyle = CPTMutableLineStyle()
    lineStyle.lineWidth = 1
    lineStyle.lineColor = CPTColor.yellowColor()
    symbol.lineStyle = lineStyle
    let ellipse = CPTScatterPlot(frame: hostView.frame)
    ellipse.title = "Confidence Ellipse"
    ellipse.delegate = self
    ellipse.dataSource = self
    ellipse.plotSymbol = symbol
    ellipse.dataLineStyle = nil
    graph.addPlot(ellipse)
}
Here's a screenshot of the final result:
95% Confidence Ellipse on top of scatter plot
Hope this helps

GPUImage crop to CGRect and rotate

Given a CGRect, I want to use GPUImage to crop a video. For example, if the rect is (0, 0, 50, 50), the video would be cropped at (0,0) with a length of 50 on each side.
What's throwing me is that GPUImageCropFilter doesn't take a rectangle, but rather a normalized crop region with values ranging from 0 to 1. My intuition was to do this:
let assetSize = CGSizeApplyAffineTransform(videoTrack.naturalSize, videoTrack.preferredTransform)
let cropRect = CGRect(x: frame.minX/assetSize.width,
                      y: frame.minY/assetSize.height,
                      width: frame.width/assetSize.width,
                      height: frame.height/assetSize.height)
to calculate the crop region based on the size of the incoming asset. Then:
// Filter
let cropFilter = GPUImageCropFilter(cropRegion: cropRect)
let url = NSURL(fileURLWithPath: "\(NSTemporaryDirectory())\(String.random()).mp4")
let movieWriter = GPUImageMovieWriter(movieURL: url, size: assetSize)
movieWriter.encodingLiveVideo = false
movieWriter.shouldPassthroughAudio = false
// add targets
movieFile.addTarget(cropFilter)
cropFilter.addTarget(movieWriter)
cropFilter.forceProcessingAtSize(frame.size)
cropFilter.setInputRotation(kGPUImageRotateRight, atIndex: 0)
What should the movie writer size be? Shouldn't it be the size of the frame I want to crop with? And should I be using forceProcessingAtSize with the size value of my crop frame?
A complete code example would be great; I've been trying for hours and I can't seem to get the section of the video that I want.
FINAL:
if let videoTrack = self.asset.tracks.first {
    let movieFile = GPUImageMovie(asset: self.asset)
    let transformedRegion = CGRectApplyAffineTransform(region, videoTrack.preferredTransform)
    // Filters
    let cropFilter = GPUImageCropFilter(cropRegion: transformedRegion)
    let url = NSURL(fileURLWithPath: "\(NSTemporaryDirectory())\(String.random()).mp4")
    let renderSize = CGSizeApplyAffineTransform(videoTrack.naturalSize, CGAffineTransformMakeScale(transformedRegion.width, transformedRegion.height))
    let movieWriter = GPUImageMovieWriter(movieURL: url, size: renderSize)
    movieWriter.transform = videoTrack.preferredTransform
    movieWriter.encodingLiveVideo = false
    movieWriter.shouldPassthroughAudio = false
    // add targets
    // http://stackoverflow.com/questions/37041231/gpuimage-crop-to-cgrect-and-rotate
    movieFile.addTarget(cropFilter)
    cropFilter.addTarget(movieWriter)
    movieWriter.completionBlock = {
        observer.sendNext(url)
        observer.sendCompleted()
    }
    movieWriter.failureBlock = { _ in
        observer.sendFailed(.VideoCropFailed)
    }
    disposable.addDisposable {
        cropFilter.removeTarget(movieWriter)
        movieWriter.finishRecording()
    }
    movieWriter.startRecording()
    movieFile.startProcessing()
}
As you note, the GPUImageCropFilter takes in a rectangle in normalized coordinates. You're on the right track, in that you just need to convert your CGRect in pixels to normalized coordinates by dividing the X components (origin.x and size.width) by the width of the image and the Y components by the height.
You don't need to use forceProcessingAtSize(), because the crop will automatically output an image of the appropriate cropped size. The movie writer's size should be matched to this cropped size, which you should know from your original CGRect.
The one complication you introduce is the rotation. If you need to apply a rotation in addition to your crop, you might want to check and make sure that you don't need to swap your X and Y for your crop region. This should be apparent in the output if the two need to be swapped.
There were some bugs with applying rotation at the same time as a crop a while ago, and I can't remember if I fixed all those. If I didn't, you could insert a dummy filter (gamma or brightness set to default values) before or after the crop and apply the rotation at that stage.
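A minimal sketch of that dummy-filter workaround, reusing movieFile and movieWriter from the question's code and assuming normalizedCropRegion is the 0-to-1 crop region computed earlier:
let cropFilter = GPUImageCropFilter(cropRegion: normalizedCropRegion)
// A gamma filter left at its default value of 1.0 passes pixels through unchanged,
// so here it only serves as a stage to hang the rotation on.
let passthrough = GPUImageGammaFilter()
passthrough.setInputRotation(kGPUImageRotateRight, atIndex: 0)

movieFile.addTarget(cropFilter)
cropFilter.addTarget(passthrough)
passthrough.addTarget(movieWriter)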
