Please suggest a solution for displaying annotations (text or images) on a 3D model object on iOS.
For details: so far, I'm able to load and display a 3D model object on iOS by following the guide at http://iosdeveloperzone.com/2016/05/10/getting-started-with-modelio/. But I cannot find a way to add annotations and then display them with the 3D model object. I expect the implementation on iOS will be similar to https://sketchfab.com/models/363e92268ff04a6ba8322332004bdaab (the web version).
Thank you for any suggestions and answers.
Project your 3D annotation locations into screen space, then use an overlaySKScene to draw the annotations. This gets the annotations in the right spot and keeps the text a constant size, independent of the distance from the camera to each annotation's location.
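As a minimal sketch of that approach (assuming scnView is your SCNView and annotationWorldPosition is the SCNVector3 of the point you want to label; neither name comes from the guide above):

import SceneKit
import SpriteKit

// Attach a SpriteKit overlay to the SceneKit view.
let overlay = SKScene(size: scnView.bounds.size)
overlay.scaleMode = .resizeFill
scnView.overlaySKScene = overlay

let label = SKLabelNode(text: "Front wheel")
label.fontSize = 14
overlay.addChild(label)

// Re-project every frame (e.g. in SCNSceneRendererDelegate's
// renderer(_:updateAtTime:)) so the label keeps tracking the model.
func updateAnnotationPosition() {
    let projected = scnView.projectPoint(annotationWorldPosition)
    // projected.z lies in [0, 1] while the point is inside the view frustum.
    label.isHidden = projected.z < 0 || projected.z > 1
    // SceneKit projects into view coordinates with the origin at the top left;
    // SpriteKit's origin is at the bottom left, so flip the y axis.
    label.position = CGPoint(x: CGFloat(projected.x),
                             y: overlay.size.height - CGFloat(projected.y))
}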
You can try creating a node with an SCNText geometry. This lets you show text inside the 3D scene.
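A minimal sketch, assuming modelNode is the node of your loaded model (the text, offset, and scale values are placeholders):

import UIKit
import SceneKit

// Extruded 3D text placed next to the model.
let textGeometry = SCNText(string: "Front wheel", extrusionDepth: 0.5)
textGeometry.font = UIFont.systemFont(ofSize: 4)
textGeometry.firstMaterial?.diffuse.contents = UIColor.yellow

let textNode = SCNNode(geometry: textGeometry)
textNode.scale = SCNVector3(0.05, 0.05, 0.05)   // SCNText is large by default
textNode.position = SCNVector3(0, 1.2, 0)        // offset above the point of interest
modelNode.addChildNode(textNode)

// Optional: keep the text facing the camera.
textNode.constraints = [SCNBillboardConstraint()]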
Related
Currently, RealityKit doesn't have a method that returns the currently visible entities. In SceneKit we do have a method for that particular functionality: nodesInsideFrustum(of:).
Our internal solution is to create a big fake bounding box in front of the camera. We then check for intersections between that "frustum" bounding box and each entity's bounding box. That, of course, is a bit cumbersome and inaccurate. I wonder whether someone has a better solution they're willing to share.
You could combine two ARView methods:
ARView.project(position) to get the 2D point in screen space
ARView.bounds.contains(point) to know if it's visible on screen
But that's not enough; you also have to check whether the object is behind you:
Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) to get the entity's position in camera space
the sign of localPosition.z tells you whether it's in front of or behind the camera
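Put together, a minimal sketch (assuming arView is your ARView and cameraAnchor is an AnchorEntity(.camera) already added to arView.scene; it only tests the entity's origin, not its full bounding box):

import RealityKit
import UIKit

func isEntityVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
    // 1. Reject anything behind the camera: RealityKit cameras look down the
    //    negative z axis, so points in front of the camera have z < 0 in camera space.
    let positionInCameraSpace = entity.position(relativeTo: cameraAnchor)
    guard positionInCameraSpace.z < 0 else { return false }

    // 2. Project the world position to screen space and check it is on screen.
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else {
        return false
    }
    return arView.bounds.contains(screenPoint)
}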
I want to add a 3D marker that shows cars on a map with rotation, like Uber does, but I can't find any information about adding 3D objects to the Google Maps SDK for iOS.
Would appreciate any help.
Apparently no one is seeing what the OP and I are seeing, so here's a video of an Uber car turning 90 degrees. Play it frame by frame and you'll notice that it's not a simple image rotation. Either Uber went through the trouble of rendering ~360 angles of each vehicle, or it really is a 3D model. Doing 360 images of every car seems foolish to me.
From what I can tell, they are not using 3D objects. They are also not animating between 400 images of a car at different angles. They're doing a mix of rotating image assets and animating between ~50-70 images of a car at different angles. The illusion is convincing because it really does look like they used 3D car models!
Look at this GIF of an Uber car turning a corner (Dropbox link):
We can clearly see that the shadow and the car's view angle don't update as often as the car's rotation.
Here I overlaid two images of the car at different angles, but using the same car image:
We can see that the map is rotated ~5 degrees, but the car image is perfectly sharp because it hasn't changed; it was simply rotated.
Uber just released a blog post documenting this.
It looks like the vehicles were modeled in 3D software and then image assets depicting different angles were exported for the app. Depending on where the vehicle is on the map and on its heading, a different asset is used.
First, they are NOT 3D objects, if that's what you're referring to (it's possible to create one, but it would be a waste of time). They are simply images created in Photoshop or Illustrator (mostly) with a 3D perspective. They're also retina optimized, which is why they look very clear.
The reason the car appears rotated is that the UIImageView holding the image is rotated (mostly using CABasicAnimation), using calculations based on the device's position and heading (the same technology running apps use to track your location), data you can retrieve with Core Location.
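As a rough sketch of that rotation step (carImageView, the animation duration, and the use of the location's course are assumptions, not Uber's actual implementation):

import UIKit
import CoreLocation

// Rotate the car image view to a new heading, animating from wherever it is now.
func rotateCarMarker(_ carImageView: UIImageView, to courseDegrees: CLLocationDirection) {
    let targetAngle = CGFloat(courseDegrees) * .pi / 180

    // Read the current on-screen rotation so the animation starts from it.
    let current = (carImageView.layer.presentation() ?? carImageView.layer)
        .value(forKeyPath: "transform.rotation.z") as? CGFloat ?? 0

    let rotation = CABasicAnimation(keyPath: "transform.rotation.z")
    rotation.fromValue = current
    rotation.toValue = targetAngle
    rotation.duration = 0.3
    carImageView.layer.add(rotation, forKey: "carHeading")

    // Keep the final value on the model layer once the animation ends.
    carImageView.layer.setValue(targetAngle, forKeyPath: "transform.rotation.z")
}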
It's a process, but very doable. Good luck!
Thanks. All the answers are valid.
If you want, you can watch the result video (linked below) to see how it works.
You can generate a sprite sheet (around 60 tiles).
How I implemented it and the tools you need:
A 3D source car model.
Blender: animate the camera using a path animation (ellipse).
The camera rotates around the car, from a top view down to a bottom view.
Render the 3D marker using the sprite generated with Blender; for the angles, use the bearing change on location updates (see the frame-selection sketch after this list).
The vehicle needs to be rendered to support most screens, so the base size for each tile was 64 px, and I scaled it according to the screen DPI.
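For example, a minimal sketch of picking the right tile from the bearing (assuming spriteFrames holds the 60 pre-rendered images, with frame 0 facing north and the angle increasing clockwise):

import UIKit
import CoreLocation

// Pick the sprite-sheet tile whose pre-rendered angle is closest to the bearing.
func carImage(for bearing: CLLocationDirection, spriteFrames: [UIImage]) -> UIImage {
    let frameCount = spriteFrames.count                   // e.g. 60 tiles
    let degreesPerFrame = 360.0 / Double(frameCount)      // 6 degrees per tile
    let normalized = (bearing.truncatingRemainder(dividingBy: 360) + 360)
        .truncatingRemainder(dividingBy: 360)
    let index = Int((normalized / degreesPerFrame).rounded()) % frameCount
    return spriteFrames[index]
}

// On each location update, set the returned image as the GMSMarker's icon
// (and optionally animate the marker's position between updates).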
Result implementation:
https://twitter.com/ronfravi/status/1133226618024022016?s=09
I believe a pair of marker images, one being the real marker and the other a darker, blurry shadow, can do the trick more cheaply. Put the shadow marker beneath the marker, offset along the X and Y axes by an amount that places the shadow appropriately, and then move and rotate them together (on the web version you may need separate rotated images); that should [re]create the illusion.
As Felix Lapalme has already explained it as simply as possible, I'm not diving any deeper into explaining it.
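If you try this with the Google Maps SDK, a minimal sketch (the "car" and "car_shadow" image names and the anchor offsets are placeholders):

import GoogleMaps

// Two markers: a blurry shadow underneath, the real car on top, both rotated together.
func addCarWithShadow(at coordinate: CLLocationCoordinate2D,
                      heading: CLLocationDirection,
                      on mapView: GMSMapView) -> (car: GMSMarker, shadow: GMSMarker) {
    let shadow = GMSMarker(position: coordinate)
    shadow.icon = UIImage(named: "car_shadow")
    shadow.groundAnchor = CGPoint(x: 0.45, y: 0.45)   // shift so the shadow peeks out
    shadow.rotation = heading
    shadow.zIndex = 0
    shadow.map = mapView

    let car = GMSMarker(position: coordinate)
    car.icon = UIImage(named: "car")
    car.groundAnchor = CGPoint(x: 0.5, y: 0.5)
    car.rotation = heading
    car.zIndex = 1
    car.map = mapView

    return (car, shadow)
}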
Check out my repo. I used a .dae model and rotated it according to the heading variable:
https://github.com/hilalalhakani/HHMarker
I achieved this in Xamarin by rendering three.js in a web view and then sending the image buffer directly to the marker instead of drawing it to the screen. It only took a couple of days, and for my use case it was needed because I want drivers to be able to change the color and kind of car; but if this is not functionality you need, you're better off just using a sequence of rendered images.
If you want to rotate your image as the marker and show a moving object, you can use a .gif image. It would be helpful for you.
Swift 3
let position = CLLocationCoordinate2D(latitude: 51.5, longitude: -0.127)
//Rotate a marker
let degrees = 90.0
let london = GMSMarker(position: position)
london.title = "London"
//Customize the marker image
london.icon = UIImage(named: "YourGifImageName")
london.groundAnchor = CGPoint(x: 0.5, y: 0.5)
london.rotation = degrees
london.map = mapView
For more info, please check here.
I need to show an indoor floor plan in an MKMapView. So far I have managed to load custom map tiles into the MKMapView, but when I try to show annotations they do not appear in the correct locations. The spacing between latitudes is not equal. I think it's because MKMapView maps the globe onto a flat surface, but in my case I need to show the annotations on a flat surface. Any idea how to do this?
EDIT: Does anyone know how to convert geographic coordinates to Cartesian coordinates? That would solve this problem.
Apple just updated its demo project:
https://developer.apple.com/library/prerelease/ios/samplecode/footprint/Introduction/Intro.html
Take a look; it contains information about how to translate a geolocation (spherical) into a flat location.
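If you only need the spherical-to-flat step, MKMapPoint already gives you a flat (Mercator-projected) x/y that behaves like a Cartesian plane at building scale. A minimal sketch, assuming you pick one corner of the floor plan as the origin:

import MapKit

// Convert a geographic coordinate into flat x/y metres relative to a chosen origin.
// Note that in MKMapPoint space, y grows toward the south.
func flatPosition(of coordinate: CLLocationCoordinate2D,
                  relativeTo origin: CLLocationCoordinate2D) -> CGPoint {
    let originPoint = MKMapPointForCoordinate(origin)
    let point = MKMapPointForCoordinate(coordinate)
    // Metres represented by one map point at this latitude.
    let metersPerPoint = MKMetersPerMapPointAtLatitude(origin.latitude)
    let x = (point.x - originPoint.x) * metersPerPoint
    let y = (point.y - originPoint.y) * metersPerPoint
    return CGPoint(x: x, y: y)
}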
I'm looking for a library/framework/toolkit that will allow me to render a 2D map from real GeoSpatial data and draw objects on the 2D map.
Requirements:
Map tiling (when I zoom into the map, I want a more detailed image)
Pan (ability to use the mouse to move around the map)
Read various Geospatial images (satellite, street, etc)
Ability to draw objects onto the map (based on lat/longs) and have them move. For example, I want to be able to put an image of a bird on the map and have it move and rotate correctly.
Primitive shapes. It would be nice if it had built in ability to draw lines, circles, etc.
Complex drawing. For example, I want to draw a compass and have it show the current heading of the bird.
Mouse input. I want to be able to right-click on the map and have a context menu appear. I want to click and hold a shape I drew on the map and drag it easily.
What I have looked at:
OpenSceneGraph with osgEarth. It's great and fulfills my requirements, but it is really slow, and I had to do a lot of weird things to get things to work (especially with dragging objects on the map).
Cesium: looks promising, but somewhat slow, and I need it to work as a desktop application. I've seen online that some have managed to use Cesium inside Qt's Webkit, but I'm not sure I would want to take that risk.
EDIT:
I really want to stay away from a web-based framework if possible.
http://imgur.com/52DaJtQ
Here is a primitive picture of what I want to achieve. The aircraft icon should move, and the degree circle along with it. I want to be able to drag the green waypoints and have the lines redraw as I move a waypoint. The red sensor footprint should adjust to what the aircraft can see.
Google Maps, Open Street Map, Bing Maps.
I use OpenSceneGraph/osgEarth extensively and am not dissatisfied with its performance.
What kind of weird things did you need to do?
If you want, you can contact me privately to troubleshoot your situation. My website is AlphaPixel.com and there's a contact form there.
The Realtor iPad app has done a very good job of implementing a custom drawing tool on top of MapKit that they use to query an area for homes. I am familiar with MapKit and its associated classes, but I am unaware of how I could do some custom drawing with my finger and have it translate to a geospatial query. How can I do this?
I'm not sure how far along you've made it with this, but your basic algorithm should look like this:
Draw a polygon on top of your map, then translate the coordinates of that polygon into map coordinates. To do that you would probably need to listen for gestures on a view other than the MKMapView instance. With my limited knowledge of MapKit's touch-event handling, you might have to overlay a separate transparent view on the map while the user is drawing, so touch events don't go through to the map view; your finger is normally used to pinch, zoom, and pan, and you won't want that functionality while drawing. In that view, draw the shape tracing the user's finger, then translate the drawn points into map points.
The docs indicate that you can translate screen points to map points using the convertPoint:toCoordinateFromView: method on MKMapView.
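A minimal sketch of that conversion (assuming drawingView is the transparent view you trace in and mapView is the MKMapView underneath it):

import UIKit
import MapKit

// Convert the screen points traced by the finger into map coordinates.
func mapCoordinates(from drawnPoints: [CGPoint],
                    in drawingView: UIView,
                    on mapView: MKMapView) -> [CLLocationCoordinate2D] {
    return drawnPoints.map { mapView.convert($0, toCoordinateFrom: drawingView) }
}

// You could then build an MKPolygon from the result to show the drawn area:
//   var coords = mapCoordinates(from: tracedPoints, in: drawingView, on: mapView)
//   let polygon = MKPolygon(coordinates: &coords, count: coords.count)
//   mapView.addOverlay(polygon)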
Check this link for information on that: Trouble converting MapKit user coordinates to screen coordinates
This post provides a link that might help you with drawing the polygon:
To draw polygon on google map with MapKit framework
After you've drawn your polygon you'll want to "spatially" query your data. You could do that in several ways: locally on the device or through a web service. If your data is local to the device you'll have to do the cartographic math on the device. You'll also need to ensure that your point data (the X/Y values) is in the same projection and coordinate space as your polygon's information. Polygon intersection math is relatively straightforward when your projections and coordinate systems line up.
Here's a link that can help you with the math.
https://math.stackexchange.com/questions/237/how-do-you-determine-if-a-point-sits-inside-a-polygon
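For illustration, a minimal ray-casting sketch (assuming the polygon vertices and the test point are already in the same flat coordinate space):

import CoreGraphics

// Returns true when `point` lies inside `polygon` (ray casting / even-odd rule).
func contains(point: CGPoint, polygon: [CGPoint]) -> Bool {
    guard polygon.count >= 3 else { return false }
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let a = polygon[i]
        let b = polygon[j]
        // Does the horizontal ray from `point` cross the edge a-b?
        if (a.y > point.y) != (b.y > point.y) {
            let xIntersect = (b.x - a.x) * (point.y - a.y) / (b.y - a.y) + a.x
            if point.x < xIntersect { inside.toggle() }
        }
        j = i
    }
    return inside
}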
Alternatively you could set up some web service that takes your polygon data and performs the same cartographic math on a server and returns the results to the device. Either way the same math needs to be performed. You'll take that polygon data and determine which records in your data intersect with that polygon.
This is pretty high-level, I know, but it should be all you need to do.
Another consideration is whether your data is spatially enabled, with SpatiaLite compiled for SQLite on your device or SQL Server Spatial on your server. If so, you should be able to query the data using the polygon directly; you would just have to format the query properly.
Lastly, I would encourage you to look into the ESRI SDK for iOS. ESRI provides drawing and sketching tools out of the box. It's not too difficult to use, but one downside is that you would have to learn a new API:
http://resources.arcgis.com/en/communities/runtime-ios-sdk/