How can I know which tiles will be rendered in Google Maps? (iOS)

I want to render a color for each tile on a Google Map according to the local air pollution level.
I have learned how to customize a tile layer from Google's documentation on tile layers; mine looks similar to the following code:
class TestTileLayer: GMSSyncTileLayer {
    override func tileForX(x: UInt, y: UInt, zoom: UInt) -> UIImage! {
        // On every even-x tile, render an image; otherwise render no tile.
        if x % 2 == 0 {
            return UIImage(named: "australia")
        } else {
            return kGMSTileLayerNoTile
        }
    }
}
Basically, I want to avoid making one request to the server for every tile. I would like to know which tiles will be rendered so I can make a single request for the air pollution levels of all of them at once.
How do I know which tiles will be rendered in Google Maps, given the user's current position and zoom level? Is there an API that does this already? I can't seem to find a direct one.
Thank you!
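For what it's worth, GMSTileLayer follows the standard Web Mercator tile scheme, so the tile indices can be computed directly from the visible region rather than waiting for the SDK to request them. A minimal Swift sketch (it assumes 256 px tiles; clamping at the antimeridian and poles is omitted):
import CoreLocation
import GoogleMaps

// Standard Web Mercator math: map a coordinate to its tile index at a zoom level.
func tileCoordinate(for coordinate: CLLocationCoordinate2D, zoom: UInt) -> (x: UInt, y: UInt) {
    let n = pow(2.0, Double(zoom))
    let x = UInt((coordinate.longitude + 180.0) / 360.0 * n)
    let latRad = coordinate.latitude * .pi / 180.0
    let y = UInt((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / .pi) / 2.0 * n)
    return (x, y)
}

// Enumerate every tile covering the current viewport, so a single batched
// request can fetch pollution levels for all of them.
func visibleTiles(in mapView: GMSMapView, zoom: UInt) -> [(x: UInt, y: UInt)] {
    let bounds = GMSCoordinateBounds(region: mapView.projection.visibleRegion())
    let topLeft = tileCoordinate(for: CLLocationCoordinate2D(latitude: bounds.northEast.latitude,
                                                             longitude: bounds.southWest.longitude),
                                 zoom: zoom)
    let bottomRight = tileCoordinate(for: CLLocationCoordinate2D(latitude: bounds.southWest.latitude,
                                                                 longitude: bounds.northEast.longitude),
                                     zoom: zoom)
    var tiles: [(x: UInt, y: UInt)] = []
    for x in topLeft.x...bottomRight.x {
        for y in topLeft.y...bottomRight.y {
            tiles.append((x: x, y: y))
        }
    }
    return tiles
}
For the current zoom you can pass UInt(mapView.camera.zoom); note that with fractional zoom the SDK may also request tiles from the adjacent integer level.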

Related

Optimization for many markers with the Google Maps iOS SDK?

I'm using the following code to see when markers enter the screen:
let visibleRegion = mapView.projection.visibleRegion()
let bounds = GMSCoordinateBounds(region: visibleRegion)
for marker in markers {
    if bounds.contains(marker.position) {
        print("Is present on screen")
        print(marker.position)
    } else {
        // Marker not on the screen
    }
}
This works: when I scroll the map over a marker I get the printout.
I have 30k markers that I potentially need to place on the map. The markers show up at different zoom levels, and only need to be loaded once the user can actually see them.
Each marker is a rounded image view, so, as you can imagine, loading 30 thousand images onto a map is a huge task.
I have JSON from which I load each marker's lon/lat/image URL.
Do I need to deinit markers as they leave the screen and init them as they come onto the screen? Are Google Maps annotations reused like table view cells? Should I only create a marker once a location from my JSON is within the bounds of the map, or can I create them all up front and only add them to the map once they're in the bounds? What sort of optimization tools should I use?
Thanks for any tips here
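There is no cell-style reuse pool for GMSMarker, so a common approach is to keep the raw data lightweight and only create/remove marker objects as the camera settles. A hedged sketch of that idea (MarkerItem and allItems are hypothetical names for the decoded JSON):
import CoreLocation
import GoogleMaps

// Hypothetical lightweight record decoded from the JSON feed.
struct MarkerItem {
    let position: CLLocationCoordinate2D
    let imageURL: URL
}

final class MarkerCuller: NSObject, GMSMapViewDelegate {
    let allItems: [MarkerItem]                       // all 30k records; no GMSMarker yet
    private var markerByIndex: [Int: GMSMarker] = [:]

    init(items: [MarkerItem]) {
        self.allItems = items
    }

    // Fires once the camera stops moving. A linear scan is fine as a sketch;
    // a spatial index (e.g. a quadtree) can replace it if this gets slow.
    func mapView(_ mapView: GMSMapView, idleAt position: GMSCameraPosition) {
        let bounds = GMSCoordinateBounds(region: mapView.projection.visibleRegion())
        for (index, item) in allItems.enumerated() {
            let onScreen = bounds.contains(item.position)
            if onScreen, markerByIndex[index] == nil {
                let marker = GMSMarker(position: item.position)
                // Load the rounded image asynchronously, then assign marker.icon.
                marker.map = mapView
                markerByIndex[index] = marker
            } else if !onScreen, let marker = markerByIndex[index] {
                marker.map = nil        // removes it from the map; no reuse pool exists
                markerByIndex[index] = nil
            }
        }
    }
}
For 30k points it is also worth looking at the clustering in Google-Maps-iOS-Utils (GMUClusterManager), which collapses dense areas into single markers at low zoom levels.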

How to fit route bounds using CARTO-Mobile-SDK

I calculate the route between two points and get the polygon produced by the separation of these two points. I create the polygon this way:
let polygon = NTPolygon(poses: vector, style: NTPolygonStyleBuilder().buildStyle())
I am building a feature where, when the route between these two points is too long, you can press a button and the map will zoom out to show the bounding box of the route. For that I get the bounding box from the polygon with polygon.getBounds(), and I am trying to use map.move(toFit: NTMapBounds!, screenBounds: NTScreenBounds!, integerZoom: Bool, durationSeconds: Float), but I don't know how to get the NTScreenBounds.
Any help with this issue is appreciated; any approach other than using map.move is also welcome.
Thanks in advance
NTScreenBounds, in this context, corresponds to the layout of your NTMapView.
Here's an example from Xamarin.iOS; you should get the gist of it:
public ScreenBounds FindScreenBounds()
{
    var min = new ScreenPos(Frame.X, Frame.Y);
    var max = new ScreenPos(Frame.Width, Frame.Height);
    return new ScreenBounds(min, max);
}
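A rough Swift translation of the same idea, assuming the CARTO iOS SDK exposes NTScreenPos(x:y:) and NTScreenBounds(min:max:) the way the Xamarin binding does:
// Sketch: build NTScreenBounds from the map view's own layout.
func findScreenBounds(for mapView: NTMapView) -> NTScreenBounds {
    let min = NTScreenPos(x: Float(mapView.frame.minX), y: Float(mapView.frame.minY))
    let max = NTScreenPos(x: Float(mapView.frame.width), y: Float(mapView.frame.height))
    return NTScreenBounds(min: min, max: max)
}

// Usage with the route polygon's bounds (the 0.3 s duration is arbitrary):
// map.move(toFit: polygon.getBounds(), screenBounds: findScreenBounds(for: map),
//          integerZoom: false, durationSeconds: 0.3)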

Adding custom view to ARKit

I just started looking at Apple's ARKitExample and I am still studying it. I need to build something like an interactive guide. For example, when we detect something (like a QR code), can I show a label in that area?
Is it possible to add a custom view (such as a UIView or UILabel) to a surface?
Edit
I saw an example that adds a line. I still need to find out how to add an additional view or image.
let mat = SCNMatrix4FromMat4(currentFrame.camera.transform)
let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
let currentPosition = pointOfView.position + (dir * 0.1)
if button!.isHighlighted {
    if let previousPoint = previousPoint {
        let line = lineFrom(vector: previousPoint, toVector: currentPosition)
        let lineNode = SCNNode(geometry: line)
        lineNode.geometry?.firstMaterial?.diffuse.contents = lineColor
        sceneView.scene.rootNode.addChildNode(lineNode)
    }
}
I think code like this should be able to add a custom image, but I need to find the whole sample.
func updateRenderer(_ frame: ARFrame) {
    drawCameraImage(withPixelBuffer: frame.capturedImage)
    let viewMatrix = simd_inverse(frame.camera.transform)
    let projectionMatrix = frame.camera.projectionMatrix
    updateCamera(viewMatrix, projectionMatrix)
    updateLighting(frame.lightEstimate?.ambientIntensity)
    drawGeometry(forAnchors: frame.anchors)
}
ARKit isn't a rendering engine — it doesn't display any content for you. ARKit provides information about real-world spaces for use by rendering engines such as SceneKit, Unity, and any custom engine you build (with Metal, etc), so that they can display content that appears to inhabit real-world space. Thus, any "how do I show" question for ARKit is actually a question for whichever rendering engine you use with ARKit.
SceneKit is the easy out-of-the-box, no-additional-software-required way to display 3D content with ARKit, so I presume you're asking about that.
SceneKit can't render a UIView as part of a 3D scene. But it can render planes, cubes, or other shapes, and texture-map 2D content onto them. If you want to draw a text label on a plane detected by ARKit, that's the direction to investigate — follow the example's, um, example to create SCNPlane objects corresponding to detected ARPlaneAnchors, get yourself an image of some text, and set that image as the plane geometry's diffuse contents.
Yes, you can add a custom view to an ARKit scene.
Just make an image of your view and add it wherever you want.
You can use the following code to get an image from a UIView:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
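Tying the two answers together: a hedged sketch that snapshots a view with the image(with:) helper above and texture-maps it onto an SCNPlane when ARKit detects a plane (labelView is a hypothetical UIView you configured elsewhere):
import ARKit
import SceneKit

// ARSCNViewDelegate callback: fires when ARKit adds a node for a new anchor.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let snapshot = image(with: labelView) else { return }   // labelView is hypothetical

    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = snapshot   // texture-map the view snapshot

    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2   // SCNPlane stands vertical by default; lay it flat
    node.addChildNode(planeNode)
}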

GPUImage: How to determine an average color within the hexagon?

I’m doing video processing with GPUImage2. When the app starts, I create a hexagonal grid and add it to my cameraView. The grid is fullscreen and consists of about 100 hexagons.
In general, what I’m trying to achieve is:
For each frame, find the average color (in RGB, or even better HSV) within each cell of the grid.
Once the color is determined, draw something in the center of each hexagon depending on its average color.
I have an array of hexagons, each of which knows its vertices’ coordinates and its center.
I also have an array of UIBezierPaths containing the bounds of these hexagons (just in case).
So my code looks like this
class ViewController: UIViewController {
    @IBOutlet weak var cameraView: RenderView!   // GPUImage2 RenderView from the storyboard
    var camera: Camera!
    var hexagons = [HKHexagon]()
    var hexagonsBounds = [UIBezierPath]()
    let averageColorExtractor = AverageColorExtractor()

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            camera = try Camera(sessionPreset: AVCaptureSessionPreset1920x1080)
            camera.delegate = self
            cameraView.orientation = .landscapeLeft
            camera --> cameraView
            camera.startCapture()
            drawGrid()
        } catch {
            fatalError("Could not initialize rendering pipeline: \(error)")
        }
    }
}

extension ViewController: CameraDelegate {
    func didCaptureBuffer(_ sampleBuffer: CMSampleBuffer) {
        for hexagon in hexagons {
            // ...
        }
    }
}
I guess didCaptureBuffer() should be the place to apply the averageColorExtractor to each hexagon, but I don’t have an idea what to do next.
I am new to iOS development and it’s the first time I’ve used GPUImage2. Please guide me in the right direction.
I don't code for your platform at all, but GPU architecture allows you to do it like this:
1. Pass the image as a texture.
2. Render only the center points, as points.
3. In the fragment shader, compute the average color of the hex around the actual position.
This is the hardest and most performance-demanding part. If you compute just the inscribed circle it is easy, but for a hexagon you need to compute which texels are inside and which are not. For axis-aligned hexagons you can divide the hex into regions (2x rectangle, 4x triangle); for rotated hexes you need to add a transformation matrix.
4. Compute/render the output inside the center point.
I do not know what your framework can do for you here. If your rendered output is bigger than just the center point, then you need either another pass in your renderer or a bigger primitive than points in #2, but that means you will compute the average color for each rendered pixel, which can slow things down a lot.
Take a look at this GLSL shader that uses the technique (for an entirely different task, but the technique is the same):
How to implement 2D raycasting light effect in GLSL
If this is not adaptable to your platform then ignore this answer ...
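Back in GPUImage2 terms, one way to approximate this without writing a shader is to run one Crop --> AverageColorExtractor chain per hexagon, averaging each hexagon's bounding box rather than its exact shape. A hedged sketch only: boundingBoxInPixels and update(averageColor:) are hypothetical, the exact GPUImage2 signatures may differ, and with ~100 chains per frame the performance caveats above still apply:
import GPUImage

func attachExtractors(camera: Camera, hexagons: [HKHexagon]) {
    for hexagon in hexagons {
        let box = hexagon.boundingBoxInPixels   // hypothetical CGRect in frame pixels

        // Crop the frame down to this hexagon's bounding box...
        let crop = Crop()
        crop.cropSizeInPixels = Size(width: Float(box.width), height: Float(box.height))
        crop.locationOfCropInPixels = Position(Float(box.minX), Float(box.minY))

        // ...and average what's left; the callback fires once per processed frame.
        let extractor = AverageColorExtractor()
        extractor.extractedColorCallback = { color in
            hexagon.update(averageColor: color)   // hypothetical: draw in the hex center
        }
        camera --> crop --> extractor
    }
}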

How to implement MKAnnotationViews that rotate with the map

I have an app with some annotations on it, and up until now they have just been symbols that look good with the default behavior (e.g. numbers and letters). Their orientation is fixed relative to the device, which is appropriate.
Now, however, I need some annotations whose orientation is fixed relative to the actual map, so that if the map rotates, the annotation symbols rotate with it (like an arrow indicating the flow of a river, for example).
I don't want them to scale with the map like an overlay, but I do want them to rotate with the map.
I need a solution that primarily works when the user manually rotates the map with their fingers, and also when it rotates due to being in heading-tracking mode.
On Google Maps (at least on Android) this is very easy with a simple MarkerOptions.flat(true).
I am sure it won't be too much more difficult on iOS, I hope.
Thanks in advance!
Here's what I used for something similar.
- (void)rotateAnnotationView:(MKAnnotationView *)annotationView toHeading:(double)heading
{
    // Convert mapHeading to a 360-degree scale.
    CGFloat mapHeading = self.mapView.camera.heading;
    if (mapHeading < 0) {
        mapHeading = fabs(mapHeading);
    } else if (mapHeading > 0) {
        mapHeading = 360 - mapHeading;
    }

    CGFloat offsetHeading = (heading + mapHeading);
    while (offsetHeading > 360.0) {
        offsetHeading -= 360.0;
    }

    CGFloat headingInRadians = offsetHeading * M_PI / 180;
    annotationView.layer.affineTransform = CGAffineTransformMakeRotation(headingInRadians);
}
And then this is called in regionDidChangeAnimated: and the like.
Unfortunately, this solution doesn't rotate the view while the user is rotating the map, but it corrects itself afterwards to make sure it has the proper heading. I wrap the affineTransform in an animation block to make it look nice.
Hopefully this helps, or at least gets you pointed in the right direction.
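A Swift translation of the same math, for anyone landing here today (it assumes mapView is your MKMapView outlet; call it from mapView(_:regionDidChangeAnimated:)):
import MapKit

func rotate(_ annotationView: MKAnnotationView, toHeading heading: CLLocationDirection) {
    // Normalize the camera heading onto a 0-360 scale, then offset it by the
    // annotation's own heading, wrapping back into range.
    let cameraHeading = mapView.camera.heading
    let mapHeading = cameraHeading < 0 ? abs(cameraHeading) : 360 - cameraHeading
    let offsetHeading = (heading + mapHeading).truncatingRemainder(dividingBy: 360)

    let radians = CGFloat(offsetHeading * .pi / 180)
    UIView.animate(withDuration: 0.25) {   // animation block, as the answer suggests
        annotationView.transform = CGAffineTransform(rotationAngle: radians)
    }
}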
