Google Maps custom info window not showing properly on iPhone X - iOS

I am making an Uber-like application. When the user selects pick-up and destination addresses, I draw a route between the two.
I have two separate info windows, one for the source address and one for the destination address.
This works fine on my iPhone 7 Plus and iPhone 6s, but when I draw the route on an iPhone X, the route is drawn successfully while one of the info windows always renders incorrectly.
I have tried every solution I could find. I checked the constraints too; everything looks fine.
// Destination info window
let infoWindow1 = DestinationInfoWindow.instanceFromNib() as! DestinationInfoWindow
dest.iconView = infoWindow1
infoWindow1.DestinationLabel.text = locationStringwithDetail
infoWindow1.lblRemainingDistance.text = String(self.estimatedDistance.rounded(toPlaces: 1))
self.destinationName = locationStringwithDetail
self.destinationCoordinate = destinationCoordinate
dest.zIndex = 0
dest.position = destinationCoordinate
dest.map = self.mapView
// Source info window
let infoWindow = MapMarkerWindow.instanceFromNib() as! MapMarkerWindow
infoWindow.remaingDriverTimelbl.pushTransition(0.4)
if self.estimatedITime != 0 && self.estimatedITime < 30 * 60 {
    var inMinuts: Int = self.estimatedITime / 60
    if inMinuts == 0 {
        inMinuts = 1
    }
    infoWindow.remaingDriverTimelbl.text = "\(inMinuts)"
} else {
    infoWindow.remaingDriverTimelbl.text = "~"
}
self.sourceName = locationName
self.sourceAdress = locationAdress
mySrc.iconView = infoWindow
mySrc.zIndex = 1
infoWindow.sourceTitle.text = locationStringwithDetail
mySrc.position = sourceCoordinate
mySrc.map = self.mapView
There is no error; the issue occurs only on the `iPhone X` device and simulator.
Here is a screenshot from the `iPhone 7+`:
Here is the xib file:
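One workaround worth trying (a hedged sketch, not a confirmed fix): give the nib-loaded view an explicit frame before assigning it as the `iconView`, so iPhone X safe-area layout cannot resize it once it leaves the normal view hierarchy. The `220 x 60` size here is an assumption; substitute your xib's design size.
// Hypothetical workaround: pin the info window to a fixed frame so
// safe-area driven layout on iPhone X cannot stretch or shift it.
let infoWindow1 = DestinationInfoWindow.instanceFromNib() as! DestinationInfoWindow
infoWindow1.translatesAutoresizingMaskIntoConstraints = true
infoWindow1.frame = CGRect(x: 0, y: 0, width: 220, height: 60) // assumed design size
dest.iconView = infoWindow1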

Related

Change color of icon for un-clustered markers in Mapbox iOS

I am trying to implement clustering in Mapbox on iOS. I want to change the color of the un-clustered style layer depending on a specific attribute of the MGLPointFeature. The following is the code for a single feature:
let feature = MGLPointFeature()
feature.coordinate = CLLocationCoordinate2D(latitude: site.latitude, longitude: site.longitude)
feature.attributes = ["id": site.siteId, "siteCode": site.siteCode, "risk": site.riskId]
In the above snippet, I want to use this attribute (`"risk": site.riskId`) to generate different colors for the icon, which is set using the following code:
style.setImage(icon.withRenderingMode(.alwaysTemplate), forName: "icon")
let ports = MGLSymbolStyleLayer(identifier: "ports", source: source)
ports.iconImageName = NSExpression(forConstantValue: "icon")
ports.predicate = NSPredicate(format: "cluster != YES")
ports.iconAllowsOverlap = NSExpression(forConstantValue: true)
style.addLayer(ports)
And the following are the colors for each `riskId`:
let risks = [
    0: Color.cellBackgroundColor,
    1: UIColor.from(hexString: "B9E5D1"),
    2: UIColor.from(hexString: "95E9FF"),
    3: UIColor.from(hexString: "FCE2A6"),
    4: UIColor.from(hexString: "FCE2A6")
]
I have an idea that I can get these results using NSExpression on the feature attributes, but I have no idea how to implement it. Can anyone please help me get this done? Thanks.
So, I was able to solve this problem. I added an attribute "siteRiskColor" to the feature and gave it a value depending on the risk:
let riskId = site.riskId
var color = "B0E5A1"
if riskId == 1 {
    color = "B0E5A1"
} else if riskId == 2 {
    color = "99E9FF"
} else if riskId == 3 {
    color = "FCD2A6"
} else if riskId == 4 {
    color = "FBC3A9"
}
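The answer doesn't show the attribute assignment itself; presumably it looks something like the following, with the key name matching the one the layer reads (`feature` here is the MGLPointFeature from the question):
// Assumed step: store the computed color name as a feature attribute
// so the symbol layer can pick an image by it.
feature.attributes["siteRiskColor"] = color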
I then added a different image for each color mentioned above, named exactly the same as the color string. Then, when building the icon for the un-clustered style layer, I added the following lines to pick the right image:
let site = MGLSymbolStyleLayer(identifier: "site", source: source)
site.iconImageName = NSExpression(forKeyPath: "siteRiskColor")
and it worked!

Very slow scrolling/zooming experience with GMUClusterRenderer (Google Maps Clustering) on iOS

I will try to explain my issue, and what I have done so far.
Introduction:
I am using the iOS Utils Library from Google Maps in order to display around 300 markers on the map.
The algorithm used for the Clustering is the GMUNonHierarchicalDistanceBasedAlgorithm.
Basically, our users can send us the weather they observe through their window, so that we can display the real time weather around the world.
It enables us to improve and/or adjust the weather forecasts.
But my scrolling/zooming experience isn't smooth at all. By the way, I am testing it on an iPhone X...
Let's get to the heart of the matter:
Here is how I configure the ClusterManager:
private func configureCluster(array: [Observation]) {
    let iconGenerator = GMUDefaultClusterIconGenerator()
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = GMUDefaultClusterRenderer(mapView: mapView,
                                             clusterIconGenerator: iconGenerator)
    renderer.delegate = self
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm,
                                       renderer: renderer)
    clusterManager.add(array)
    clusterManager.cluster()
    clusterManager.setDelegate(self, mapDelegate: self)
}
Here is my Observation class; I tried to keep it simple:
class Observation: NSObject, GMUClusterItem {
    static var ICON_SIZE = 30
    let timestamp: Double
    let idObs: String
    let position: CLLocationCoordinate2D
    let idPicto: [Int]
    let token: String
    let comment: String
    let altitude: Double

    init(timestamp: Double, idObs: String, coordinate: CLLocationCoordinate2D, idPicto: [Int], token: String, comment: String, altitude: Double) {
        self.timestamp = timestamp
        self.idObs = idObs
        self.position = coordinate
        self.idPicto = idPicto
        self.token = token
        self.comment = comment
        self.altitude = altitude
    }
}
And finally, the delegate method for the rendering:
func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) {
    if let cluster = marker.userData as? GMUCluster {
        if let listObs = cluster.items as? [Observation] {
            if listObs.count > 1 {
                let sortedObs = listObs.sorted(by: { $0.timestamp > $1.timestamp })
                if let mostRecentObs = sortedObs.first {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: mostRecentObs)
                    }
                }
            } else {
                if let obs = listObs.last {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: obs)
                    }
                }
            }
        }
    }
}
Users can send only one observation, but that observation can be composed of several weather phenomena (like clouds + rain + wind) or just rain if they want.
To differentiate them: if there is only one phenomenon, the marker.iconView property is set directly.
On the other hand, if the observation has multiple phenomena, I create a view containing all the images representing them.
func setIconViewForMarker(marker: GMSMarker, obs: Observation) {
    let isYourObs = Observation.isOwnObservation(id: obs.idObs)
    if isYourObs {
        marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
    } else if obs.idPicto.count > 1 {
        // Observation with more than one phenomenon
        marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
    } else if obs.idPicto.count == 1 {
        // Observation with only one phenomenon
        if let id = obs.idPicto.last {
            marker.iconView = Observation.setImageForPhenomenom(id: id)
        }
    }
}
And here is the last piece of code, to show you how I build this custom view (I think my issue is probably here):
class func viewForPhenomenomArray(ids: [Int], isYourObs: Bool) -> UIView {
    let popupView = UIView()
    popupView.frame = CGRect(x: 0, y: 0, width: (ICON_SIZE * ids.count) + ((ids.count + 1) * 5), height: ICON_SIZE)
    if isYourObs {
        popupView.backgroundColor = UIColor(red: 0.25, green: 0.61, blue: 0.20, alpha: 1)
    } else {
        popupView.backgroundColor = UIColor(red: 0.00, green: 0.31, blue: 0.57, alpha: 1)
    }
    popupView.layer.cornerRadius = 12
    for (index, element) in ids.enumerated() {
        let imageView = UIImageView(image: Observation.getPictoFromID(id: element))
        imageView.frame = CGRect(x: ((index + 1) * 5) + index * ICON_SIZE, y: 0, width: ICON_SIZE, height: ICON_SIZE)
        popupView.addSubview(imageView)
    }
    return popupView
}
I also tried with very small images, to check whether the issue comes from rendering a lot of PNGs on the map, but seriously, it's an iPhone X; it should be able to render a few simple weather icons on a map.
Do you think I am doing something wrong? Or is it a known issue in the Google Maps SDK? (I have read that it is capped at 30 fps.)
Do you think rendering a lot of images (as `marker.iconView`) on a map takes that much GPU, to the point where the experience isn't acceptable at all?
If you have any advice, I'll take it all.
I was facing the same issue. After a lot of debugging, and even checking Google's code, I came to the conclusion that the issue was in GMUDefaultClusterIconGenerator. This class creates an image at runtime for each cluster size being displayed, so when you zoom the map in or out, the cluster sizes change and the class generates a new image for each new number (it does keep images cached, so an exact repeat is reused).
So the solution I found is to use buckets. If that term is new to you, let me explain the bucket concept with a simple example.
Suppose you set the bucket sizes to 10, 20, 50, 100, 200, 500, 1000.
Now, if your cluster is 3, then it will show 3.
If cluster size = 8, show = 8.
If cluster size = 16, show = 10+.
If cluster size = 22, show = 20+.
If cluster size = 48, show = 20+.
If cluster size = 91, show = 50+.
If cluster size = 177, show = 100+.
If cluster size = 502, show = 500+.
If cluster size = 1200004, show = 1000+.
Now, for any cluster size, the marker images that get rendered come only from the set 1, 2, 3, 4, 5, 6, 7, 8, 9, 10+, 20+, 50+, 100+, 200+, 500+, 1000+. Since the images are cached, they get reused, so the time and CPU previously spent creating new images drops sharply (only a handful of images ever need to be created).
You get the idea of buckets now: when a cluster is very small the exact count matters, but once it grows, the bucket label is enough to convey the approximate cluster size; a small helper makes the mapping concrete (see the sketch below).
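A minimal illustration of the mapping, as a hypothetical helper (not part of the library; the bucket array mirrors the example above):
// Hypothetical helper: map a raw cluster count to the label a bucketed
// icon generator would display.
func bucketLabel(for count: Int, buckets: [Int] = [10, 20, 50, 100, 200, 500, 1000]) -> String {
    guard let bucket = buckets.last(where: { count >= $0 }) else { return "\(count)" }
    return "\(bucket)+"
}
// bucketLabel(for: 8)   -> "8"
// bucketLabel(for: 48)  -> "20+"
// bucketLabel(for: 177) -> "100+"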
Now, the question is how to achieve this.
Actually, the GMUDefaultClusterIconGenerator class already has this functionality implemented; you just need to change its initialization to this:
let iconGenerator = GMUDefaultClusterIconGenerator(buckets: [10, 20, 50, 100, 200, 500, 1000])
The GMUDefaultClusterIconGenerator class has other init methods that let you give different background colors or different background images to each bucket, and more.
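For example, a minimal sketch of the color variant (the colors here are arbitrary placeholders; the generator expects one background color per bucket):
// Hedged sketch: one background color per bucket (counts must match).
let iconGenerator = GMUDefaultClusterIconGenerator(
    buckets: [10, 20, 50, 100, 200, 500, 1000],
    backgroundColors: [.green, .cyan, .blue, .orange, .red, .purple, .black])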
Let me know if any further help is required.

Can't zoom Google map on iOS using Elm

I would like to use pinch to zoom in/out on iOS. My code for the map is as follows:
continentalUsMapOptions : GoogleMaps.CreateMapOptions
continentalUsMapOptions =
{ center = Just usaCenterLatLng
, disableDefaultUI = Just True
, mapTypeControlPosition = Nothing
, mapTypeId = Nothing
, maxZoom = Just 6
, minZoom = Just 2
, rotateControlPosition = Nothing
, scrollWheel = Just False
, signInControl = Just False
, streetViewControlPosition = Nothing
, styles = List.map Style.toJson mapStyles
, zoomGestures = Just True
, zoomControlPosition = Nothing
}
Unfortunately, `zoomGestures` is not working at all. The link to the page is: https://sre.com

Adding "SCNNode" to ScreenView to current view of the camera

I've got an image that I'm trying to add at the current camera location of the ARSCNView session. Whatever I do, it seems to put the image behind the current location of the phone camera, and I have to move back to see it. I have the following code:
var currentFrame = SceneView.Session.CurrentFrame;
if (currentFrame == null) return;
var threeVector = new SCNVector3(currentFrame.Camera.Transform.Column3.X + 0.005f,
currentFrame.Camera.Transform.Column3.Y - 0.02f,
currentFrame.Camera.Transform.Column3.Z - 0.05f);
var scaleFactor = imgToAdd.Size.Width / 0.05;
float width = float.Parse((imgToAdd.Size.Width / scaleFactor).ToString());
float height = float.Parse((imgToAdd.Size.Height / scaleFactor).ToString());
var box = new SCNPlane {
Width = width,
Height = height
};
var cubeNode = new SCNNode {
Position = threeVector,
Geometry = box
};
var mat = new SCNMaterial();
mat.Diffuse.Contents = imgToAdd;
mat.LocksAmbientWithDiffuse = true;
cubeNode.Geometry.Materials = new[] { mat };
SceneView.Scene.RootNode.AddChildNode(cubeNode);
I'm just trying to execute a simple version of the ARKit paint apps that you see all over the App Store now. `imgToAdd` is a snapshot of the scribble that the user has already put onto the screen, and this event fires at touches-ended, after an image has been created from the scribbled-on view.
Any ideas on what I need to change to get it to line up with the current view of the camera on the phone?
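For what it's worth, the snippet above adds fixed offsets to the camera's world-space position, so the plane is placed along the world axes rather than along the direction the camera is facing. A common alternative (sketched here in Swift against the native ARKit API, since the code above is Xamarin C#; `sceneView` and `plane` stand in for the question's `SceneView` and `box`) is to compose the camera transform with a translation along its own -Z axis:
// Sketch: place the node 10 cm in front of the camera, oriented with it.
if let frame = sceneView.session.currentFrame {
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.1 // 10 cm in front of the camera (assumed distance)
    let node = SCNNode(geometry: plane)
    node.simdTransform = frame.camera.transform * translation
    sceneView.scene.rootNode.addChildNode(node)
}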

Screen mirror not behaving as expected (Swift)

I'm trying to implement some simple screen mirroring in my Swift application, but I'm getting undesired behavior. When my code executes, the external display gets the phone's view, but the iPhone screen goes black; the rest of the external view is also filled in with black. Here's a screenshot:
Here's my code to setup the external view:
func initializeExternalScreen(external: UIScreen) {
    self.mirroredScreen = external
    // Find the max resolution the external screen supports
    var max = CGSize()
    var maxScreenMode = UIScreenMode()
    for current in self.mirroredScreen.availableModes {
        if current.size.height > max.height || current.size.width > max.width {
            max = current.size
            maxScreenMode = current
        }
    }
    self.mirroredScreen.currentMode = maxScreenMode
    self.mirroredWindow = UIWindow(frame: self.mirroredScreen.bounds)
    self.mirroredWindow.hidden = false
    self.mirroredWindow.layer.contentsGravity = kCAGravityResizeAspect
    self.mirroredWindow.screen = self.mirroredScreen
    self.mirroredScreenView = UIView(frame: self.mirroredScreen.bounds)
    self.mirroredScreenView.addSubview(self.view)
    self.mirroredWindow.addSubview(self.mirroredScreenView)
}
Any ideas?
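One guess, offered as a hedged diagnosis rather than a confirmed answer: `addSubview(self.view)` moves the view controller's view into the external window, since a view can only have one superview, which would leave the phone's own window empty (black). A sketch of an alternative that mirrors a snapshot instead of the live view, written in current Swift (`snapshotView(afterScreenUpdates:)` is a standard UIKit call; note a static snapshot must be re-taken whenever the screen content changes):
// Mirror a snapshot so self.view keeps its place in the phone's window.
if let snapshot = view.snapshotView(afterScreenUpdates: true) {
    snapshot.frame = mirroredScreen.bounds
    mirroredScreenView.addSubview(snapshot)
}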
