MapBox - detect zoomLevel changes - ios

How can I simply detect zoom level changes? Is that even possible?
I simply need to hide my annotation views when the zoom level isn't high enough.
regionDidChange(_:animated:) isn't a good fit for my use case. Is there any other way?
I need to hide my labels when the map is zoomed out and show them again once it is zoomed in far enough (screenshots omitted).
This is what I currently do with my labels:
class CardAnnotation: MGLPointAnnotation {
    var card: Card

    init(card: Card) {
        self.card = card
        super.init()

        let coordinates = card.border.map { $0.coordinate }
        let sumLatitudes = coordinates.map { $0.latitude }.reduce(0, +)
        let sumLongitudes = coordinates.map { $0.longitude }.reduce(0, +)
        let averageLatitude = sumLatitudes / Double(coordinates.count)
        let averageLongitude = sumLongitudes / Double(coordinates.count)
        coordinate = CLLocationCoordinate2D(latitude: averageLatitude, longitude: averageLongitude)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
var annotations = [CardAnnotation]()
mapView.addAnnotations(annotations)

Of the two main ways to add overlays to an MGLMapView, the runtime styling API is better suited for text labels and also for varying the appearance based on the zoom level. While you’re at it, you might as well create the polygons using the same API too.
Start by creating polygon features for the areas you want shaded in:
var cards: [MGLPolygonFeature] = []
var coordinates: [CLLocationCoordinate2D] = […]
let card = MGLPolygonFeature(coordinates: &coordinates, count: UInt(coordinates.count))
card.attributes = ["address": 123]
// …
cards.append(card)
Within any method that runs after the map finishes loading, such as MGLMapViewDelegate.mapView(_:didFinishLoading:), add a shape source containing these features to the current style:
let cardSource = MGLShapeSource(identifier: "cards", features: cards, options: [:])
mapView.style?.addSource(cardSource)
With the shape source in place, create a style layer that renders the polygon features as mauve fills:
let fillLayer = MGLFillStyleLayer(identifier: "card-fills", source: cardSource)
fillLayer.fillColor = NSExpression(forConstantValue: #colorLiteral(red: 0.9098039216, green: 0.8235294118, blue: 0.9647058824, alpha: 1))
mapView.style?.addLayer(fillLayer)
Then create another style layer that renders labels at each polygon feature’s centroid. (MGLSymbolStyleLayer automatically calculates the centroids, accounting for irregularly shaped polygons.)
// Same source as the fillLayer.
let labelLayer = MGLSymbolStyleLayer(identifier: "card-labels", source: cardSource)
// Each feature’s address is an integer, but text has to be a string.
labelLayer.text = NSExpression(format: "CAST(address, 'NSString')")
// Smoothly interpolate from transparent at z16 to opaque at z17.
labelLayer.textOpacity = NSExpression(format: "mgl_interpolate:withCurveType:parameters:stops:($zoomLevel, 'linear', nil, %@)",
                                      [16: 0, 17: 1])
mapView.style?.addLayer(labelLayer)
As you customize these style layers, pay particular attention to the options on MGLSymbolStyleLayer that control whether nearby symbols are automatically hidden due to collision. You may find that the automatic collision detection makes it unnecessary to specify the textOpacity property.
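For reference, a minimal sketch of those collision-related properties on the labelLayer above might look like this (the values are illustrative, not a recommendation):
// Collision behavior for the label layer (illustrative values).
// Symbols that would collide with already-placed symbols are hidden by default.
labelLayer.textAllowsOverlap = NSExpression(forConstantValue: false) // keep labels from overlapping
labelLayer.textOptional = NSExpression(forConstantValue: true)       // drop colliding text but keep any icon
labelLayer.textPadding = NSExpression(forConstantValue: 4)           // extra spacing used for collision checks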
When you create the source, one of the options you can pass into the MGLShapeSource initializer is MGLShapeSourceOption.clustered. However, in order to use that option, you’d have to create MGLPointFeatures, not MGLPolygonFeatures. Fortunately, MGLPolygonFeature has a coordinate property that lets you find the centroid without manual calculations:
var cardCentroids: [MGLPointFeature] = []
var coordinates: [CLLocationCoordinate2D] = […]
let card = MGLPolygonFeature(coordinates: &coordinates, count: UInt(coordinates.count))
let cardCentroid = MGLPointFeature()
cardCentroid.coordinate = card.coordinate
cardCentroid.attributes = ["address": 123]
cardCentroids.append(cardCentroid)
// …
let cardCentroidSource = MGLShapeSource(identifier: "card-centroids", features: cardCentroids, options: [.clustered: true])
mapView.style?.addSource(cardCentroidSource)
This clustered source can only be used with MGLSymbolStyleLayer or MGLCircleStyleLayer, not MGLFillStyleLayer. This example shows how to work with clustered points in more detail.
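For example, a small sketch of a circle layer driven by the clustered centroid source might look like this (the identifier and styling values are illustrative):
// Draw clusters from the clustered centroid source as circles.
let clusterLayer = MGLCircleStyleLayer(identifier: "card-clusters", source: cardCentroidSource)
clusterLayer.circleRadius = NSExpression(forConstantValue: 18)
clusterLayer.circleColor = NSExpression(forConstantValue: UIColor.purple)
// Only show features that the source has actually clustered.
clusterLayer.predicate = NSPredicate(format: "cluster == YES")
mapView.style?.addLayer(clusterLayer)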

One option is to add the labels as an MGLSymbolStyleLayer, then determine the textOpacity based on the zoom level.
If you are using the current version of the Maps SDK for iOS, you could try something like:
symbols.textOpacity = NSExpression(format: "mgl_interpolate:withCurveType:parameters:stops:($zoomLevel, 'linear', nil, %@)", [16.9: 0, 17: 1])
The dynamically styled interactive points example shows one approach to this.

Is the problem that when you zoom out, your annotations are too close together? If so, it is better to group them together than to hide them entirely. See Decluttering a Map with MapKit Annotation Clustering.
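If you do go the MapKit route, a minimal sketch of annotation clustering looks roughly like this (the class name and identifiers are illustrative, not from your code):
import MapKit

// MapKit clusters annotation views that share a clusteringIdentifier.
final class CardAnnotationView: MKMarkerAnnotationView {
    override var annotation: MKAnnotation? {
        didSet { clusteringIdentifier = "cards" }
    }
}

// In the MKMapViewDelegate:
func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
    // Returning nil for cluster annotations lets MapKit provide its default cluster view.
    guard !(annotation is MKClusterAnnotation) else { return nil }
    return CardAnnotationView(annotation: annotation, reuseIdentifier: "card")
}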

Related

swift gestures RealityKit like in SceneKit

I created an AR scene with RealityComposer.
let boxScene = try! Experience.loadBox()
But I don't like the gestures that Apple has provided for this method.
boxScene.generateCollisionShapes(recursive: true)
let box = boxScene.box as? Entity & HasCollision
arView.installGestures(for: box!) // Add gestures
I would like to use the same gestures as in SceneKit. I created a cube with the help of RealityKit and added a textEntity to each of its faces.
let textEntity_0: Entity = boxScene.self.children[0].children[0].children[0].children[0].children[0].children[0]
var textModelComp_0: ModelComponent = (textEntity_0.components[ModelComponent.self])!

var material_0 = SimpleMaterial()
material_0.baseColor = .color(.red)
textModelComp_0.materials[0] = material_0
textModelComp_0.mesh = .generateText("testText",
                                     extrusionDepth: 0.001,
                                     font: .systemFont(ofSize: 0.03),
                                     containerFrame: CGRect(),
                                     lineBreakMode: .byCharWrapping)
boxScene.self.children[0].children[0].children[0].children[0].children[0].children[0].components.set(textModelComp_0)
I do not need it to stand on the surface or be strictly attached to the surface, but it is necessary that all its faces be visible during rotation. It is also necessary to implement the method:
arView.scene.anchors.append(boxScene)
let anchor = AnchorEntity(.image(group: "LogoTypes", name: "logo"))
anchor.addChild(boxScene)
This method is from RealityKit. It only shows my 3D cube when it sees a marker. I need the surrounding space from the camera in the background, like in AR.
How can I use SceneKit gestures in this case?

Clustering MGLAnnotations Images on iOS mapbox using MGLPointCollectionFeature

I'm using the latest Mapbox iOS SDK and have custom MGLAnnotations with different images that all appear correctly. I am adding the annotations like so:
self.mapView.addAnnotations(annotations)
self.setupClustering(for: annotations)
I'm trying to implement the clustering example provided by Mapbox, but using an MGLPointCollectionFeature instead of a GeoJSON file.
I'm hardcoding the width to a large number to make sure that clustering occurs; the actual width of the icon is 29.
EDIT: No clustering occurs, and the circle with the count never appears.
let clusterSource = MGLShapeSource(identifier: "clusterSource", shape: nil, options: [.clustered: true, .clusterRadius: 100])

func setupClustering(for annotations: [MGLAnnotation]) {
    let coordinates = annotations.map { $0.coordinate }
    let allAnnotationsFeature = MGLPointCollectionFeature(coordinates: coordinates, count: UInt(coordinates.count))
    self.clusterSource.shape = allAnnotationsFeature
}

func mapView(_ mapView: MGLMapView, didFinishLoading style: MGLStyle) {
    self.mapView.style?.addSource(self.clusterSource)

    // Color clustered features based on clustered point counts.
    let stops = [
        2: UIColor.lightGray,
        5: UIColor.orange,
        10: UIColor.red
    ]

    // Show clustered features as circles. The `point_count` attribute is built into
    // clustering-enabled source features.
    let circlesLayer = MGLCircleStyleLayer(identifier: "clusteredItems", source: self.clusterSource)
    circlesLayer.circleRadius = NSExpression(forConstantValue: NSNumber(value: Double(100) / 2))
    circlesLayer.circleOpacity = NSExpression(forConstantValue: 0.75)
    circlesLayer.circleStrokeColor = NSExpression(forConstantValue: UIColor.white.withAlphaComponent(0.75))
    circlesLayer.circleStrokeWidth = NSExpression(forConstantValue: 2)
    circlesLayer.circleColor = NSExpression(format: "mgl_step:from:stops:(point_count, %@, %@)", UIColor.lightGray, stops)
    circlesLayer.predicate = NSPredicate(format: "cluster == YES")
    self.mapView.style?.addLayer(circlesLayer)

    // Label cluster circles with a layer of text indicating feature count. The value for
    // `point_count` is an integer. In order to use that value for the
    // `MGLSymbolStyleLayer.text` property, cast it as a string.
    let numbersLayer = MGLSymbolStyleLayer(identifier: "clusterNumbers", source: self.clusterSource)
    numbersLayer.textColor = NSExpression(forConstantValue: UIColor.white)
    numbersLayer.textFontSize = NSExpression(forConstantValue: NSNumber(value: Double(100) / 2))
    numbersLayer.iconAllowsOverlap = NSExpression(forConstantValue: true)
    numbersLayer.text = NSExpression(format: "CAST(point_count, 'NSString')")
    numbersLayer.predicate = NSPredicate(format: "cluster == YES")
    self.mapView.style?.addLayer(numbersLayer)
}
It looks like you're initializing your MGLShapeSource with shapes instead of features (https://docs.mapbox.com/ios/api/maps/6.0.0/Classes/MGLShapeSource.html#/c:objc(cs)MGLShapeSource(im)initWithIdentifier:features:options:). This is preventing your points from clustering because any MGLFeature instance passed into this initializer is treated as an ordinary shape, causing any attributes to be inaccessible to an MGLVectorStyleLayer when evaluating a predicate or configuring certain layout or paint attributes.
Additionally, for clustering, you would want to use point shapes, as mentioned here. At the moment, you're using an MGLPointCollectionFeature instead of MGLPointFeatures, which means that instead of having five separate point features, each with its own attributes, you have a single point collection feature containing five points that all share one set of attributes.
If you create an array of MGLPointFeatures instead, you can pass it to the features parameter of the shape source initializer.
I’ve included an example below:
var unclusteredFeatures = [MGLPointFeature]()

for coordinate in coordinates {
    let point = MGLPointFeature()
    point.coordinate = coordinate
    unclusteredFeatures.append(point)
}

let source = MGLShapeSource(identifier: "Features",
                            features: unclusteredFeatures,
                            options: [.clustered: true, .clusterRadius: 20])
style.addSource(source)

Very slow scrolling/zooming experience with GMUClusterRenderer (Google Maps Clustering) on iOS

I will try to explain my issue, and what I have done so far.
Introduction:
I am using the iOS Utils Library from Google Maps in order to display around 300 markers on the map.
The algorithm used for the Clustering is the GMUNonHierarchicalDistanceBasedAlgorithm.
Basically, our users can send us the weather they observe through their window, so that we can display the real time weather around the world.
It enables us to improve and/or adjust the weather forecasts.
But my scrolling/zooming experience isn't smooth at all. By the way I am testing it with an iPhone X ...
Let's get to the heart of the matter:
Here is how I configure the ClusterManager
private func configureCluster(array: [Observation]) {
    let iconGenerator = GMUDefaultClusterIconGenerator()
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = GMUDefaultClusterRenderer(mapView: mapView,
                                             clusterIconGenerator: iconGenerator)
    renderer.delegate = self

    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm,
                                       renderer: renderer)
    clusterManager.add(array)
    clusterManager.cluster()
    clusterManager.setDelegate(self, mapDelegate: self)
}
Here is my Observation class; I tried to keep it simple:
class Observation: NSObject, GMUClusterItem {
    static var ICON_SIZE = 30

    let timestamp: Double
    let idObs: String
    let position: CLLocationCoordinate2D
    let idPicto: [Int]
    let token: String
    let comment: String
    let altitude: Double

    init(timestamp: Double, idObs: String, coordinate: CLLocationCoordinate2D, idPicto: [Int], token: String, comment: String, altitude: Double) {
        self.timestamp = timestamp
        self.idObs = idObs
        self.position = coordinate
        self.idPicto = idPicto
        self.token = token
        self.comment = comment
        self.altitude = altitude
    }
}
And finally, the delegate method for the rendering:
func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) {
    if let cluster = marker.userData as? GMUCluster {
        if let listObs = cluster.items as? [Observation] {
            if listObs.count > 1 {
                let sortedObs = listObs.sorted(by: { $0.timestamp > $1.timestamp })
                if let mostRecentObs = sortedObs.first {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: mostRecentObs)
                    }
                }
            } else {
                if let obs = listObs.last {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: obs)
                    }
                }
            }
        }
    }
}
Users can send only one observation, but that observation can be composed of several weather phenomena (like Clouds + Rain + Wind) or just one (Rain) if they want.
To differentiate them, if there is only one phenomenon, the marker.iconView property is set directly.
On the other hand, if the observation has multiple phenomena, I create a view containing all the images representing those phenomena.
func setIconViewForMarker(marker: GMSMarker, obs: Observation) {
    let isYourObs = Observation.isOwnObservation(id: obs.idObs)

    if isYourObs {
        marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
    } else {
        // Observation with more than 1 phenomenon
        if obs.idPicto.count > 1 {
            marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
        // Observation with only 1 phenomenon
        } else if obs.idPicto.count == 1 {
            if let id = obs.idPicto.last {
                marker.iconView = Observation.setImageForPhenomenom(id: id)
            }
        }
    }
}
And the last piece of code, to show you how I build this custom view (I think my issue is probably here)
class func viewForPhenomenomArray(ids: [Int], isYourObs: Bool) -> UIView {
    let popupView = UIView()
    popupView.frame = CGRect(x: 0, y: 0, width: (ICON_SIZE * ids.count) + ((ids.count + 1) * 5), height: ICON_SIZE)

    if isYourObs {
        popupView.backgroundColor = UIColor(red: 0.25, green: 0.61, blue: 0.20, alpha: 1)
    } else {
        popupView.backgroundColor = UIColor(red: 0.00, green: 0.31, blue: 0.57, alpha: 1)
    }
    popupView.layer.cornerRadius = 12

    for (index, element) in ids.enumerated() {
        let imageView = UIImageView(image: Observation.getPictoFromID(id: element))
        imageView.frame = CGRect(x: ((index + 1) * 5) + index * ICON_SIZE, y: 0, width: ICON_SIZE, height: ICON_SIZE)
        popupView.addSubview(imageView)
    }
    return popupView
}
I also tried with very small images, to work out whether the issue comes from rendering a lot of PNGs on the map, but seriously, it's an iPhone X; it should be able to render a few simple weather icons on a map.
Do you think I am doing something wrong? Or is it a known issue in the Google Maps SDK? (I have read that it is fixed at 30 fps.)
Do you think rendering a lot of images (as marker.iconView) on a map takes that much GPU, to the point where the experience isn't acceptable at all?
If you have any advice, I'll take it.
I was facing the same issue. After a lot of debugging, and even checking Google's code, I came to the conclusion that the issue was with GMUDefaultClusterIconGenerator. This class creates images at runtime for each cluster size being displayed, so when you zoom the map in or out, the cluster sizes change and the class creates a new image for each new number (it does cache images, so the same number is not generated twice).
The solution I found is to use buckets. You might be surprised by this new term, so let me explain the bucket concept with a simple example.
Suppose you keep the bucket sizes 10, 20, 50, 100, 200, 500, 1000.
Now, if your cluster size is 3, it will show 3.
If cluster size = 8, show = 8.
If cluster size = 16, show = 10+.
If cluster size = 22, show = 20+.
If cluster size = 48, show = 20+.
If cluster size = 91, show = 50+.
If cluster size = 177, show = 100+.
If cluster size = 502, show = 500+.
If cluster size = 1200004, show = 1000+.
Now, for any cluster size, the marker images that get rendered will come only from the set 1, 2, 3, 4, 5, 6, 7, 8, 9, 10+, 20+, 50+, 100+, 200+, 500+, 1000+. As these images are cached and reused, the time and CPU previously spent creating new images drops sharply (only a handful of images ever need to be created).
You should have the idea about buckets now: when a cluster is very small, the exact size matters, but as it grows, the bucket is enough to convey the cluster size.
Now, the question is how to achieve this.
Actually, the GMUDefaultClusterIconGenerator class already has this functionality implemented; you just need to change its initialization to this:
let iconGenerator = GMUDefaultClusterIconGenerator(buckets: [ 10, 20, 50, 100, 200, 500, 1000])
The GMUDefaultClusterIconGenerator class has other init methods that let you give different buckets different background colors, different background images, and more.
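For example, a hedged sketch of the buckets-plus-colors initializer (the bucket boundaries and colors here are arbitrary; check the initializer signature in your version of Google-Maps-iOS-Utils):
// Pair each bucket with its own background color (arbitrary example values).
let buckets: [NSNumber] = [10, 20, 50, 100, 200, 500, 1000]
let colors: [UIColor] = [.systemBlue, .systemTeal, .systemGreen, .systemYellow,
                         .systemOrange, .systemRed, .systemPurple]
let iconGenerator = GMUDefaultClusterIconGenerator(buckets: buckets, backgroundColors: colors)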
Let me know if any further help is required.

ARKit hide objects behind walls

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls/ behind real objects? Currently the 3D added objects can be seen through walls when you leave a room and/ or in front of objects that they should be behind. So is it possible to use the data ARKit gives me to provide a more natural AR experience without the objects appearing through walls?
You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure the occluding node renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or for finer control, set the renderingOrder values of several nodes to a sequence of values.)
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden show through, either by not being hidden quite right at the edge of the wall, or by being visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
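Putting the two parts together, here is a minimal sketch under the assumption of an ARSCNView-based app: it enables vertical plane detection and turns each detected vertical plane into invisible occlusion geometry (the wallNode name and the plane sizing are illustrative, not from the original answer).
// Sketch only: enable vertical plane detection.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)

// In your ARSCNViewDelegate:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }

    // Size an SCNPlane to the anchor's estimated extent and make it an occluder:
    // it draws nothing, but still writes depth, hiding virtual content behind it.
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.colorBufferWriteMask = []

    let wallNode = SCNNode(geometry: plane)
    wallNode.simdPosition = planeAnchor.center
    wallNode.eulerAngles.x = -.pi / 2   // lay the plane flat along the anchor's extent
    wallNode.renderingOrder = -1        // render before the content it should hide
    node.addChildNode(wallNode)
}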
To create an occlusion material (also known as a blackhole material or blocking material) you have to use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer, and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}

plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order to present this box in front of the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
It worked for me.
But in my case I wanted to set up the walls in code. So if you don't want the user to place the walls, use plane detection to detect them and set up the walls programmatically.
Alternatively, within a range of about 4 meters the iPhone's depth sensing works, and you can detect obstacles with an ARHitTest.
ARKit 6.0 and LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall's geometry. iPhones and iPad Pros equipped with a LiDAR scanner can reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which allows you to occlude or remove humans from the AR scene.
LiDAR also improves features such as object occlusion, motion tracking, and raycasting. With a LiDAR scanner you can reconstruct a scene even in an unlit environment, or in a room with plain white walls and no features at all. 3D reconstruction of the surrounding environment became possible thanks to the sceneReconstruction instance property (introduced in ARKit 3.5). With a reconstructed mesh of your walls it's now very easy to hide any object behind the real walls.
To activate the sceneReconstruction option, use the following code:
@IBOutlet var arView: ARView!

arView.automaticallyConfigureSession = false

guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }

let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh

arView.debugOptions.insert(.showSceneUnderstanding)
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.session.run(config)
Also if you're using SceneKit try the following approach:
@IBOutlet var sceneView: ARSCNView!

func renderer(_ renderer: SCNSceneRenderer,
              nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return nil }

    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    geometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)

    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}

func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return }

    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}

extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer, vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count *
                        source.indexCountPerPrimitive *
                        source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer,
                        count: byteCount,
                        deallocator: .none)
        self.init(data: data, primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}

extension SCNGeometryPrimitiveType {
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: return .triangles
        }
    }
}

Converting a CGGradient to a CAGradientLayer

I'm using a PaintCode StyleKit to generate a bunch of gradients, but PaintCode exports them as CGGradients. I wanted to add these gradients to a layer. Is it possible to convert a CGGradient to a CAGradientLayer?
No. The point of a CAGradientLayer is that you get to describe to it the gradient you want and it draws the gradient for you. You are already past that step; you have already described the gradient you want (to PaintCode instead of to a CAGradientLayer) and thus you already have the gradient you want. Thus, it is silly for you even to want to use a CAGradientLayer, since if you were going to do that, why did you use PaintCode in the first place? Just draw the CGGradient, itself, into an image, a view, or even a layer.
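As a minimal sketch of that last suggestion, you can draw the generated CGGradient in a custom view's draw(_:); the gradient property here stands in for whatever PaintCode generated:
import UIKit

// Draw a CGGradient directly instead of converting it to a CAGradientLayer.
final class GradientView: UIView {
    var gradient: CGGradient?   // assign the PaintCode-generated gradient here

    override func draw(_ rect: CGRect) {
        guard let gradient = gradient,
              let context = UIGraphicsGetCurrentContext() else { return }
        context.drawLinearGradient(gradient,
                                   start: CGPoint(x: bounds.midX, y: 0),
                                   end: CGPoint(x: bounds.midX, y: bounds.maxY),
                                   options: [])
    }
}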
You can't get the colors out of a CGGradient, but you can use the same values to set the CAGradientLayer's colors and locations properties. Perhaps it would help for you to modify the generated PCGradient class to keep the colors and locations around as NSArrays that you can pass into CAGradientLayer.
This can be important if you have a library of gradients and only occasionally need to use a gradient in one of the two formats.
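As a sketch of that idea, assuming you keep the same colors and locations you fed to PaintCode:
// The values that built the CGGradient can configure a CAGradientLayer directly.
let colors: [UIColor] = [.red, .blue]     // same colors given to PaintCode
let locations: [NSNumber] = [0.0, 1.0]    // same stop locations

let gradientLayer = CAGradientLayer()
gradientLayer.colors = colors.map { $0.cgColor }
gradientLayer.locations = locations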
Yes, it is possible, but requires some math to convert from a regular coordinate system (with x values from 0 to the width and y values from 0 to the height) to the coordinate system used by CAGradientLayer (with x values from 0 to 1 and y values from 0 to 1). And it requires some more math (quite complex) to get the slope right.
The distance from 0 to 1 for x will depend on the width of the original rectangle. And the distance from 0 to 1 for y will depend on the height of the original rectangle. So:
let convertedStartX = startX/Double(width)
let convertedStartY = startY/Double(height)
let convertedEndX = endX/Double(width)
let convertedEndY = endY/Double(height)
let intermediateStartPoint = CGPoint(x:convertedStartX,y:convertedStartY)
let intermediateEndPoint = CGPoint(x:convertedEndX,y:convertedEndY)
This works if your original rectangle was a square. If not, the slope of the line that defines the angle of the gradient will be wrong! To fix this see the excellent answer here: CAGradientLayer diagonal gradient
If you pick up the utility from there, then you can set your final converted start and end points as follows, starting with the already-adjusted point values from above:
let fixedStartEnd:(CGPoint,CGPoint) = LinearGradientFixer.fixPoints(start: intermediateStartPoint, end: intermediateEndPoint, bounds: CGSize(width:width,height:height))
let myGradientLayer = CAGradientLayer()
myGradientLayer.startPoint = fixedStartEnd.0
myGradientLayer.endPoint = fixedStartEnd.1
Here's code for a full Struct that you can use to store gradient data and get back CGGradients or CAGradientLayers as needed:
import UIKit
struct UniversalGradient {
//Doubles are more precise than CGFloats for the
//calculations needed to convert start and end
//to CAGradientLayers 1...0 format
var startX: Double
var startY: Double
var endX: Double
var endY: Double
let options: CGGradientDrawingOptions = [.drawsBeforeStartLocation, .drawsAfterEndLocation]
//for CAGradientLayer
var colors: [UIColor]
var locations: [Double]
//computed conversions
private var myCGColors: [CGColor] {
return self.colors .map {color in color.cgColor}
}
private var myCGFloatLocations: [CGFloat] {
return self.locations .map {location in CGFloat(location)}
}
//computed properties
var gradient: CGGradient {
return CGGradient(colorsSpace: nil, colors: myCGColors as CFArray, locations: myCGFloatLocations)!
}
var start: CGPoint {
return CGPoint(x: startX,y: startY)
}
var end: CGPoint {
return CGPoint(x: endX,y: endY)
}
//can't use computed property here
//since we need details of the specific environment's bounds to be passed in
//so this will be an instance function
func gradientLayer(width: CGFloat, height: CGFloat) -> CAGradientLayer {
//convert location x and y values from full coordinates to 0...1 for start and end
//this works great for position, but it gets the slope incorrect if the view is not square
//this is because the gradient is not drawn with the final scale
//it is drawn while the view is square, and then it gets stretched, changing the angle
//https://stackoverflow.com/questions/38821631/cagradientlayer-diagonal-gradient
let convertedStartX = startX/Double(width)
let convertedStartY = startY/Double(height)
let convertedEndX = endX/Double(width)
let convertedEndY = endY/Double(height)
let intermediateStartPoint = CGPoint(x:convertedStartX,y:convertedStartY)
let intermediateEndPoint = CGPoint(x:convertedEndX,y:convertedEndY)
let fixedStartEnd:(CGPoint,CGPoint) = LinearGradientFixer.fixPoints(start: intermediateStartPoint, end: intermediateEndPoint, bounds: CGSize(width:width,height:height))
let myGradientLayer = CAGradientLayer()
myGradientLayer.startPoint = fixedStartEnd.0
myGradientLayer.endPoint = fixedStartEnd.1
myGradientLayer.locations = self.locations .map {location in NSNumber(value: location)}
myGradientLayer.colors = self.colors .map {color in color.cgColor}
return myGradientLayer
}
}
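A possible usage sketch, assuming the LinearGradientFixer utility from the linked answer is available and view is some UIView you are adding the layer to (the concrete values are arbitrary):
// Store one gradient definition, then use it in either format.
let universal = UniversalGradient(startX: 0, startY: 0,
                                  endX: 200, endY: 300,
                                  colors: [.red, .blue],
                                  locations: [0, 1])

// As a CAGradientLayer:
let layer = universal.gradientLayer(width: view.bounds.width, height: view.bounds.height)
layer.frame = view.bounds
view.layer.addSublayer(layer)

// Or as a CGGradient, e.g. inside draw(_:):
// context.drawLinearGradient(universal.gradient, start: universal.start,
//                            end: universal.end, options: universal.options)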
