I'm currently working on an iOS app where I'm using the CorePlot library (version 2.1) to draw a scatter plot. My scatter plot draws fine, and in the next step I'd like to draw a translucent confidence ellipse on top of it. I've written a class that computes the major and minor half-axes and the required rotation angle of my ellipse. My ConfidenceEllipse class implements a getPath() method which returns a CGPath representing the ellipse to draw:
func getPath() -> CGPath {
    var transform = CGAffineTransformIdentity
    transform = CGAffineTransformTranslate(transform, CGFloat(-self.meanX), CGFloat(-self.meanY))
    transform = CGAffineTransformRotate(transform, CGFloat(self.angle))
    transform = CGAffineTransformTranslate(transform, CGFloat(self.meanX), CGFloat(self.meanY))
    let ellipse = CGPathCreateWithEllipseInRect(
        CGRectMake(CGFloat(-self.mainHalfAxis), CGFloat(-self.minorHalfAxis),
                   CGFloat(2 * self.mainHalfAxis), CGFloat(2 * self.minorHalfAxis)),
        &transform)
    return ellipse
}
After searching the web for a while, it appears that Annotations are the way to go, so I tried this:
let graph = hostView.hostedGraph!
let space = graph.defaultPlotSpace
let ellipse = ConfidenceEllipse(chiSquare: 5.991)
ellipse.analyze(self.samples)
let annotation = CPTPlotSpaceAnnotation(plotSpace: space!, anchorPlotPoint: [0, 0])
let overlay = CPTBorderedLayer(frame: graph.frame)
overlay.outerBorderPath = ellipse.getPath()
let fillColor = CPTColor.yellowColor()
overlay.fill = CPTFill(color: fillColor)
annotation.contentLayer = overlay
annotation.contentLayer?.opacity = 0.5
graph.addAnnotation(annotation)
Doing this gives me the following:
Screenshot
As you can see, the overlay takes up the full size of the frame, which seems logical given that I passed the frame's dimensions when creating the CPTBorderedLayer object. I also tried leaving the constructor empty, but then the overlay doesn't show at all. So I'm wondering, is there anything I'm missing here?
You need to scale the ellipse to match the plot. Use the plot area bounds for the frame of the annotation layer and attach the annotation to the plot area. Scale the ellipse in the x- and y-directions to match the transform used by the plot space to fit plots in the plot area.
Edit:
After looking into how bordered layers work, I realized my suggestion above won't work. CPTBorderedLayer sets the outerBorderPath automatically whenever the layer bounds change. Instead of trying to affect the layer border, draw the ellipse into an image and use that as the fill for the bordered layer. You should size the layer so the ellipse just fits inside.
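A rough sketch of that image approach (untested; it uses the question's ConfidenceEllipse and Swift 2-era APIs, and the exact CPTImage/CPTFill initializer names may differ in your CorePlot version):

```swift
// Render the ellipse path into a transparent image sized to the
// path's bounding box, then use that image as the layer fill.
let path = ellipse.getPath()
let box = CGPathGetBoundingBox(path)

UIGraphicsBeginImageContextWithOptions(box.size, false, 0)
let context = UIGraphicsGetCurrentContext()!
// Shift the path so its bounding box starts at the image origin
CGContextTranslateCTM(context, -box.origin.x, -box.origin.y)
CGContextAddPath(context, path)
CGContextSetFillColorWithColor(context, UIColor.yellowColor().colorWithAlphaComponent(0.5).CGColor)
CGContextFillPath(context)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// Size the bordered layer so the ellipse just fits inside it
let overlay = CPTBorderedLayer(frame: CGRect(origin: CGPointZero, size: box.size))
overlay.fill = CPTFill(image: CPTImage(CGImage: image.CGImage, scale: image.scale))
annotation.contentLayer = overlay
```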
After failing to get the annotations to work properly, I decided to take a different road. My final solution consists of overlaying my original scatter plot with a second one that contains only one data point: the center of my confidence ellipse. Here's the code:
func drawConfidenceEllipse() {
    let graph = hostView.hostedGraph!
    let plotSpace = graph.defaultPlotSpace as! CPTXYPlotSpace
    let scaleX = (graph.bounds.size.width - graph.paddingLeft - graph.paddingRight) / CGFloat(plotSpace.xRange.lengthDouble)
    let scaleY = (graph.bounds.size.height - graph.paddingTop - graph.paddingBottom) / CGFloat(plotSpace.yRange.lengthDouble)

    let analysis = ConfidenceEllipse(chiSquare: 5.991)
    analysis.analyze(self.samples)

    // Scale the ellipse path from plot coordinates to screen points
    let unscaledPath = analysis.getPath()
    let bounds = CGPathGetBoundingBox(unscaledPath)
    var scaler = CGAffineTransformIdentity
    scaler = CGAffineTransformScale(scaler, scaleX, scaleY)
    scaler = CGAffineTransformTranslate(scaler, -bounds.origin.x, -bounds.origin.y)
    let scaledPath = CGPathCreateCopyByTransformingPath(unscaledPath, &scaler)
    let scaledBounds = CGPathGetPathBoundingBox(scaledPath)

    // Use the scaled path as a custom plot symbol
    let symbol = CPTPlotSymbol()
    symbol.symbolType = CPTPlotSymbolType.Custom
    symbol.customSymbolPath = scaledPath
    symbol.fill = CPTFill(color: CPTColor.yellowColor().colorWithAlphaComponent(0.25))
    symbol.size = CGSize(width: scaledBounds.size.width, height: scaledBounds.size.height)

    let lineStyle = CPTMutableLineStyle()
    lineStyle.lineWidth = 1
    lineStyle.lineColor = CPTColor.yellowColor()
    symbol.lineStyle = lineStyle

    // Overlay a one-point scatter plot whose only symbol is the ellipse
    let ellipse = CPTScatterPlot(frame: hostView.frame)
    ellipse.title = "Confidence Ellipse"
    ellipse.delegate = self
    ellipse.dataSource = self
    ellipse.plotSymbol = symbol
    ellipse.dataLineStyle = nil
    graph.addPlot(ellipse)
}
Here's a screenshot of the final result:
95% Confidence Ellipse on top of scatter plot
Hope this helps
I have set up a shadow in ARKit, but the results aren't satisfactory: I need the same soft shadow that Quick Look produces in Safari. Please help me set this up. I have attached two images.
Code
var light = SCNLight()
var lightNode = SCNNode()
light.castsShadow = true
light.automaticallyAdjustsShadowProjection = true
light.maximumShadowDistance = 20.0
light.orthographicScale = 1
light.type = .directional
light.shadowMapSize = CGSize(width: 2048, height: 2048)
light.shadowMode = .deferred
light.shadowSampleCount = 128
light.shadowRadius = 3
light.shadowBias = 32
light.zNear = 1
light.zFar = 1000
light.shadowColor = UIColor.black.withAlphaComponent(0.36)
lightNode.light = light
lightNode.eulerAngles = SCNVector3(-Float.pi / 2, 0, 0)
self.sceneView.scene.rootNode.addChildNode(lightNode)
Provide shadow offset and increase the shadow radius. Play with these values to get the desired output.
light.shadowOffset = CGSize(width: 1, height: 1) //controls spread
light.shadowOpacity = 0.5 // controls opacity
light.shadowRadius = 5.0 // controls blur level
If you need blurrier shadows in your scene, use greater values for the shadowRadius instance property. shadowRadius specifies the sample radius used to render the receiver's shadow.
The default value is 3.0.
var shadowRadius: CGFloat { get set }
...in real code it looks like this:
lightNode.light?.shadowRadius = 20.0
Apple documentation says:
shadowRadius is a number that specifies the amount of blurring around the edges of shadows cast by the light. SceneKit produces soft-edged shadows by rendering the silhouettes of geometry into a 2D shadow map and then using several weighted samples from the shadow map to determine the strength of the shadow at each pixel in the rendered scene. This property controls the radius of shadow map sampling. Lower numbers result in shadows with sharply defined, pixelated edges, higher numbers result in blurry shadows.
Also, use a spotlight instead of a directional light, because a spotlight produces nice, blurry shadows.
lightNode.light?.type = .spot
And one more tip: keep your spotlight at a distance of more than 2 meters from the model, and assign a value of 179 degrees to the spotOuterAngle instance property:
lightNode.light?.spotOuterAngle = 179.0 /* default is 45 degrees */
P.S.
If you want to know how to use blurred shadows in RealityKit, please read this post.
I'm trying to achieve this mosaic light show effect for my background view with the CAReplicatorLayer object:
https://downloops.com/stock-footage/mosaic-light-show-blue-illuminated-pixel-grid-looping-background/
Each tile/CALayer is a single image that was replicated horizontally & vertically. That part I have done.
It seems to me this task is broken into at least 4 separate parts:
Pick a random tile
Select a random range of color offset for the selected tile
Apply that color offset over a specified duration in seconds
If the random color offset exceeds a specific threshold then apply a glow effect with the color offset animation.
But I'm not actually sure this would be the correct algorithm.
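As a rough sketch, steps 1–3 might look like this (hypothetical; it assumes the tiles are individual CALayers collected in a tileLayers array, since the instances of a CAReplicatorLayer can't be addressed one at a time):

```swift
// Pick a random tile and animate its background color by a random
// offset over a random duration (steps 1–3 of the list above).
func animateRandomTile(tileLayers: [CALayer]) {
    guard let tile = tileLayers.randomElement() else { return }
    let anim = CABasicAnimation(keyPath: "backgroundColor")
    anim.fromValue = tile.backgroundColor
    anim.toValue = UIColor(hue: CGFloat.random(in: 0.5...0.7),   // random blue-ish tint
                           saturation: 1,
                           brightness: CGFloat.random(in: 0.3...1),
                           alpha: 1).cgColor
    anim.duration = Double.random(in: 0.5...2.0)
    anim.autoreverses = true
    tile.add(anim, forKey: "colorshift")
}
```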
My current code was taken from this tutorial:
https://www.swiftbysundell.com/articles/ca-gems-using-replicator-layers-in-swift/
Animations are not my strong suit & I don't actually know how to apply a continuous/repeating animation to all tiles. Here is my current code:
@IBOutlet var animationView: UIView!

func cleanUpAnimationView() {
    self.animationView.layer.removeAllAnimations()
    self.animationView.layer.sublayers?.removeAll()
}
/// Start a background animation with a replicated pattern image in tiled formation.
func setupAnimationView(withPatternImage patternImage: UIImage, animate: Bool = true) {
    // Tutorial: https://www.swiftbysundell.com/articles/ca-gems-using-replicator-layers-in-swift/
    let imageSize = patternImage.size.halve
    self.cleanUpAnimationView()

    // Animate pattern image
    let replicatorLayer = CAReplicatorLayer()
    replicatorLayer.frame.size = self.animationView.frame.size
    replicatorLayer.masksToBounds = true
    self.animationView.layer.addSublayer(replicatorLayer)

    // Give the replicator layer a sublayer to replicate
    let imageLayer = CALayer()
    imageLayer.contents = patternImage.cgImage
    imageLayer.frame.size = imageSize
    replicatorLayer.addSublayer(imageLayer)

    // Tell the replicator layer how many copies (instances) of the image need to be
    // rendered. We won't see more than one until they are offset, since by default
    // they are all rendered/stacked on top of each other.
    let instanceCount = self.animationView.frame.width / imageSize.width
    replicatorLayer.instanceCount = Int(ceil(instanceCount))

    // Instance offsets & transforms are needed to move the copies.
    // This 'CATransform3D' is applied to each instance: it shifts each copy to the
    // right by the width of the image and reduces the red & green color components
    // of each instance's tint, making each copy more and more blue while
    // horizontally repeating the gradient pattern.
    replicatorLayer.instanceTransform = CATransform3DMakeTranslation(imageSize.width, 0, 0)
    let colorOffset = -1 / Float(replicatorLayer.instanceCount)
    replicatorLayer.instanceRedOffset = colorOffset
    replicatorLayer.instanceGreenOffset = colorOffset
    //replicatorLayer.instanceBlueOffset = colorOffset
    //replicatorLayer.instanceColor = UIColor.random.cgColor

    // Extend the original pattern to also repeat vertically using another tint color gradient
    let verticalReplicatorLayer = CAReplicatorLayer()
    verticalReplicatorLayer.frame.size = self.animationView.frame.size
    verticalReplicatorLayer.masksToBounds = true
    verticalReplicatorLayer.instanceBlueOffset = colorOffset
    self.animationView.layer.addSublayer(verticalReplicatorLayer)

    let verticalInstanceCount = self.animationView.frame.height / imageSize.height
    verticalReplicatorLayer.instanceCount = Int(ceil(verticalInstanceCount))
    verticalReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(0, imageSize.height, 0)
    verticalReplicatorLayer.addSublayer(replicatorLayer)

    guard animate else { return }

    // Give both the horizontal & vertical replicators a slight delay between the
    // animations applied to each copy of the replicated layer
    let delay = TimeInterval(0.1)
    replicatorLayer.instanceDelay = delay
    verticalReplicatorLayer.instanceDelay = delay

    // This will make the image layer change color
    let animColor = CABasicAnimation(keyPath: "instanceRedOffset")
    animColor.duration = animationDuration
    animColor.fromValue = verticalReplicatorLayer.instanceRedOffset
    animColor.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
    animColor.autoreverses = true
    animColor.repeatCount = .infinity
    replicatorLayer.add(animColor, forKey: "colorshift")

    let animColor1 = CABasicAnimation(keyPath: "instanceGreenOffset")
    animColor1.duration = animationDuration
    animColor1.fromValue = verticalReplicatorLayer.instanceGreenOffset
    animColor1.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
    animColor1.autoreverses = true
    animColor1.repeatCount = .infinity
    replicatorLayer.add(animColor1, forKey: "colorshift1")

    let animColor2 = CABasicAnimation(keyPath: "instanceBlueOffset")
    animColor2.duration = animationDuration
    animColor2.fromValue = verticalReplicatorLayer.instanceBlueOffset
    animColor2.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
    animColor2.autoreverses = true
    animColor2.repeatCount = .infinity
    replicatorLayer.add(animColor2, forKey: "colorshift2")
}
let imageSize = patternImage.size.halve
and
animColor.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
both generated errors.
I removed the halve and commented out the animColor lines, and the code runs and animates. I could not get ANY replicator layer to display or animate at all (not even the most basic Apple or tutorial code) until I used your code. Thank you so much!
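For reference, the two failing lines can be written as valid Swift roughly like this (a sketch; halve looks like a custom CGSize extension, so the halving is inlined):

```swift
// Halve the pattern image's size without the custom `halve` extension
let imageSize = CGSize(width: patternImage.size.width / 2,
                       height: patternImage.size.height / 2)

// Int.random needs a range; clamp to avoid a zero divisor
let randomIndex = Int.random(in: 1...max(replicatorLayer.instanceCount - 1, 1))
animColor.toValue = -1 / Float(randomIndex)
```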
I have a custom geometry quadrangle and my texture image is displaying on it, but I want it to display as an Aspect Fill, rather than stretching or compressing to fit the space.
I'm applying the same texture to multiple walls in a room so if the image is wallpaper, it has to look correct.
Is there a way to use the following and also determine how it fills?
quadNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "wallpaper3.jpg")
Thanks
[UPDATE]
let quadNode = SCNNode(geometry: quad)
let (min, max) = quadNode.boundingBox
let width = CGFloat(max.x - min.x)
let height = CGFloat(max.y - min.y)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "wallpaper3.jpg")
material.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(width), Float(height), 1)
material.diffuse.wrapS = SCNWrapMode.repeat
material.diffuse.wrapT = SCNWrapMode.repeat
quadNode.geometry?.firstMaterial = material
I think this might help you. It is in Objective-C, but it should be understandable:
CGFloat width = self.planeGeometry.width;
CGFloat height = self.planeGeometry.length;
SCNMaterial *material = self.planeGeometry.materials[4];
material.diffuse.contentsTransform = SCNMatrix4MakeScale(width, height, 1);
material.diffuse.wrapS = SCNWrapModeRepeat;
material.diffuse.wrapT = SCNWrapModeRepeat;
Plane Geometry is defined as follows:
self.planeGeometry = [SCNBox boxWithWidth:width height:planeHeight length:length chamferRadius:0];
//planeHeight = 0.01;
I'm using this to show horizontal planes, and the material it's made of doesn't get stretched out, but merely extends (tiles). Hoping that's what you need.
The dimensions of the plane are defined as follows (in case it is needed):
float width = anchor.extent.x;
float length = anchor.extent.z;
This is being done in the initWithAnchor method, which uses the ARPlaneAnchor found on a plane.
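If you specifically want an aspect fill (cropping the image instead of tiling it), one approach is to scale and center the texture coordinates based on the two aspect ratios. A sketch (untested; depending on SceneKit's matrix conventions you may need to swap the multiplication order):

```swift
let image = UIImage(named: "wallpaper3.jpg")!
let material = SCNMaterial()
material.diffuse.contents = image

let planeAspect = width / height                       // quad size from its bounding box
let imageAspect = image.size.width / image.size.height

if imageAspect > planeAspect {
    // Image is relatively wider: sample a centered horizontal slice of the texture
    let scale = Float(planeAspect / imageAspect)
    material.diffuse.contentsTransform = SCNMatrix4Mult(
        SCNMatrix4MakeScale(scale, 1, 1),
        SCNMatrix4MakeTranslation((1 - scale) / 2, 0, 0))
} else {
    // Image is relatively taller: sample a centered vertical slice of the texture
    let scale = Float(imageAspect / planeAspect)
    material.diffuse.contentsTransform = SCNMatrix4Mult(
        SCNMatrix4MakeScale(1, scale, 1),
        SCNMatrix4MakeTranslation(0, (1 - scale) / 2, 0))
}
quadNode.geometry?.firstMaterial = material
```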
I'm taking images with AVCapturePhotoOutput and then using their JPEG representation as the texture on a SceneKit SCNPlane that is the same aspect ratio as the image:
let image = UIImage(data: dataImage!)
let rectangle = SCNPlane(width:9, height:12)
let rectmaterial = SCNMaterial()
rectmaterial.diffuse.contents = image
rectmaterial.isDoubleSided = true
rectangle.materials = [rectmaterial]
let rectnode = SCNNode(geometry: rectangle)
let pos = sceneSpacePosition(inFrontOf: self.pictCamera, atDistance: 16.5) // 16.5 is arbitrary, but makes the rectangle the same size as the camera
rectnode.position = pos
rectnode.orientation = self.pictCamera.orientation
pictView.scene?.rootNode.addChildNode(rectnode)
sceneSpacePosition is a bit of code that can be found here on SO that maps CoreMotion into SceneKit orientation. It is used to place the rectangle, which does indeed appear at the right location with the right size. All very cool.
The problem is that the image is rotated 90 degrees to the rectangle. So I did the obvious:
rectmaterial.diffuse.contentsTransform = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
This does not work properly; the resulting image is unrecognizable. It appears that one small part of the image has been stretched to a huge size. I thought it might be the axis, but I tried all three with the same result.
Any ideas?
You are rotating around the upper-left corner, as suggested by Alain T.
If you move your image down, you may get the rotation you were expecting.
Try this:
let translation = SCNMatrix4MakeTranslation(0, -1, 0)
let rotation = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
let transform = SCNMatrix4Mult(translation, rotation)
rectmaterial.diffuse.contentsTransform = transform
I'm using a PaintCode StyleKit to generate a bunch of gradients, but PaintCode exports them as CGGradients. I wanted to add these gradients to a layer. Is it possible to convert a CGGradient to a CAGradientLayer?
No. The point of a CAGradientLayer is that you get to describe to it the gradient you want and it draws the gradient for you. You are already past that step; you have already described the gradient you want (to PaintCode instead of to a CAGradientLayer) and thus you already have the gradient you want. Thus, it is silly for you even to want to use a CAGradientLayer, since if you were going to do that, why did you use PaintCode in the first place? Just draw the CGGradient, itself, into an image, a view, or even a layer.
You can't get the colors out of a CGGradient, but you can use the same values to set the CAGradientLayer's colors and locations properties. Perhaps it would help for you to modify the generated PCGradient class to keep the colors and locations around as NSArrays that you can pass into CAGradientLayer.
This can be important if you have a library of gradients and only occasionally need to use a gradient in one of the two formats.
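One way to apply that suggestion is a small wrapper that keeps the raw values and can produce either representation on demand (a sketch; the type and property names are illustrative, not part of PaintCode's generated code):

```swift
struct GradientSpec {
    let colors: [UIColor]
    let locations: [CGFloat]

    // Same values, rendered as a CGGradient for Core Graphics drawing
    var cgGradient: CGGradient {
        return CGGradient(colorsSpace: nil,
                          colors: colors.map { $0.cgColor } as CFArray,
                          locations: locations)!
    }

    // Same values, rendered as a CAGradientLayer
    var gradientLayer: CAGradientLayer {
        let layer = CAGradientLayer()
        layer.colors = colors.map { $0.cgColor }
        layer.locations = locations.map { NSNumber(value: Double($0)) }
        return layer
    }
}
```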
Yes, it is possible, but requires some math to convert from a regular coordinate system (with x values from 0 to the width and y values from 0 to the height) to the coordinate system used by CAGradientLayer (with x values from 0 to 1 and y values from 0 to 1). And it requires some more math (quite complex) to get the slope right.
The distance from 0 to 1 for x will depend on the width of the original rectangle. And the distance from 0 to 1 for y will depend on the height of the original rectangle. So:
let convertedStartX = startX/Double(width)
let convertedStartY = startY/Double(height)
let convertedEndX = endX/Double(width)
let convertedEndY = endY/Double(height)
let intermediateStartPoint = CGPoint(x:convertedStartX,y:convertedStartY)
let intermediateEndPoint = CGPoint(x:convertedEndX,y:convertedEndY)
This works if your original rectangle was a square. If not, the slope of the line that defines the angle of the gradient will be wrong! To fix this see the excellent answer here: CAGradientLayer diagonal gradient
If you pick up the utility from there, then you can set your final converted start and end points as follows, starting with the already-adjusted point values from above:
let fixedStartEnd:(CGPoint,CGPoint) = LinearGradientFixer.fixPoints(start: intermediateStartPoint, end: intermediateEndPoint, bounds: CGSize(width:width,height:height))
let myGradientLayer = CAGradientLayer()
myGradientLayer.startPoint = fixedStartEnd.0
myGradientLayer.endPoint = fixedStartEnd.1
Here's code for a full struct that you can use to store gradient data and get back a CGGradient or a CAGradientLayer as needed:
import UIKit

struct UniversalGradient {

    // Doubles are more precise than CGFloats for the
    // calculations needed to convert start and end
    // to CAGradientLayer's 0...1 format
    var startX: Double
    var startY: Double
    var endX: Double
    var endY: Double

    let options: CGGradientDrawingOptions = [.drawsBeforeStartLocation, .drawsAfterEndLocation]

    // for CAGradientLayer
    var colors: [UIColor]
    var locations: [Double]

    // computed conversions
    private var myCGColors: [CGColor] {
        return self.colors.map { color in color.cgColor }
    }

    private var myCGFloatLocations: [CGFloat] {
        return self.locations.map { location in CGFloat(location) }
    }

    // computed properties
    var gradient: CGGradient {
        return CGGradient(colorsSpace: nil, colors: myCGColors as CFArray, locations: myCGFloatLocations)!
    }

    var start: CGPoint {
        return CGPoint(x: startX, y: startY)
    }

    var end: CGPoint {
        return CGPoint(x: endX, y: endY)
    }

    // We can't use a computed property here, since we need the specific
    // environment's bounds to be passed in, so this is an instance function.
    func gradientLayer(width: CGFloat, height: CGFloat) -> CAGradientLayer {
        // Convert x and y values from full coordinates to 0...1 for start and end.
        // This works great for position, but it gets the slope incorrect if the
        // view is not square: the gradient is not drawn at the final scale; it is
        // drawn while the layer is square and then stretched, changing the angle.
        // https://stackoverflow.com/questions/38821631/cagradientlayer-diagonal-gradient
        let convertedStartX = startX / Double(width)
        let convertedStartY = startY / Double(height)
        let convertedEndX = endX / Double(width)
        let convertedEndY = endY / Double(height)
        let intermediateStartPoint = CGPoint(x: convertedStartX, y: convertedStartY)
        let intermediateEndPoint = CGPoint(x: convertedEndX, y: convertedEndY)

        let fixedStartEnd: (CGPoint, CGPoint) = LinearGradientFixer.fixPoints(
            start: intermediateStartPoint,
            end: intermediateEndPoint,
            bounds: CGSize(width: width, height: height))

        let myGradientLayer = CAGradientLayer()
        myGradientLayer.startPoint = fixedStartEnd.0
        myGradientLayer.endPoint = fixedStartEnd.1
        myGradientLayer.locations = self.locations.map { location in NSNumber(value: location) }
        myGradientLayer.colors = self.colors.map { color in color.cgColor }
        return myGradientLayer
    }
}
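Usage might look like this (illustrative values; it assumes the LinearGradientFixer utility from the linked answer is available):

```swift
let gradient = UniversalGradient(startX: 0, startY: 0,
                                 endX: 200, endY: 100,
                                 colors: [.red, .blue],
                                 locations: [0, 1])

// CGGradient for drawing into a CGContext
let cgGradient = gradient.gradient

// CAGradientLayer sized for a 200×100 view
let layer = gradient.gradientLayer(width: 200, height: 100)
layer.frame = CGRect(x: 0, y: 0, width: 200, height: 100)
someView.layer.addSublayer(layer)   // `someView` is whatever view hosts the gradient
```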