Spherical 360 photo coordinates on SCNSphere - iOS

The best way to explain what I want to do is Google Street View. I have a spherical 360 photo rendered using CTPanoramaView and that works nicely. Now I need what Street View does: a way to detect the tap position on that 360 image and change the image. To do that, I need to somehow "map" coordinates from the 2D image to a location on the 3D sphere. For example, a tap on the chandelier below opens up another photo (outer space :)
So, disregarding the resolution of the photo (or not; pixel resolution can also work, say the photo is 4096 x 2048 and I need to convert the tap on the sphere into that resolution), and taking for example that both x and y go from 0 to 1, I want to detect a tap at x=0.247 and y=0.4 on this photo:
I want to do that by converting the tap on the SCNSphere on which this image is rendered. See this screenshot:
What I currently have is the ability to detect the tap position on the sphere by running a hitTest on the SCNSphere where the photo is rendered and receiving an SCNHitTestResult.
SCNHitTestResult seems like a starting point, but I don't really know how to convert its coordinates to the point I'm looking for.
Thanks in advance.
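For reference, here is a minimal sketch (not from the thread) of the kind of conversion being asked about: take the SCNHitTestResult, derive longitude/latitude from the hit point on the sphere, and normalize them to the 0...1 range. SceneKit can also report the texture coordinate of the hit directly via textureCoordinates(withMappingChannel:). The names and the sign of u are assumptions and may need flipping depending on how CTPanoramaView orients the sphere.

import SceneKit
import UIKit

func normalizedPhotoPoint(for tap: CGPoint, in sceneView: SCNView) -> CGPoint? {
    guard let hit = sceneView.hitTest(tap, options: nil).first else { return nil }

    // Easiest route: let SceneKit report the texture coordinate of the hit directly.
    // return hit.textureCoordinates(withMappingChannel: 0)

    // Manual route: derive longitude/latitude from the hit point on the sphere.
    let p = hit.localCoordinates
    let r = sqrt(Double(p.x * p.x + p.y * p.y + p.z * p.z))
    let longitude = atan2(Double(p.z), Double(p.x))     // -pi ... pi around the vertical axis
    let latitude = asin(Double(p.y) / r)                 // -pi/2 ... pi/2

    let u = 0.5 - longitude / (2 * .pi)                  // 0 ... 1 across the photo
    let v = 0.5 - latitude / .pi                         // 0 ... 1 down the photo
    return CGPoint(x: u, y: v)                           // multiply by 4096 x 2048 for pixels
}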

Related

How to display a text recognition bounding box on screen for an ARFrame captured image? (iOS)

I've read the official RealtimeNumberReader sample; it uses AVCaptureSession and a specific function, layerRectConverted, which only works with AVCaptureSession, to convert coordinates from the bounding box to screen coordinates.
let rect = layer.layerRectConverted(fromMetadataOutputRect: box.applying(self.visionToAVFTransform))
Now I want to recognize text on the ARFrame's capturedImage and then display the bounding box on screen. Is it possible?
I know how to recognize text in a single image from the official tutorial; my problem is how to convert the normalized box coordinates to viewport coordinates.
Please help and thank you very much!!!
Based on @Banane42's answer, I figured out the theory behind ARKit and VNRecognizeTextRequest.
The ARKit scene view's capturedImage is wider than what you can see. Check the picture below: I made a small app that has an imageView to display the whole captured image, and the background image is the scene view area.
The coordinate system of the scene view or image has its origin at the top-left corner, with the x-axis pointing right and the y-axis pointing down. But the boundingBox that the VNRequest returns has its origin at the bottom-left corner, with the x-axis pointing right and the y-axis pointing up.
If you use request.regionOfInterest, this ROI should be in normalized coordinates with respect to the whole image. The returned boundingBox is then in normalized coordinates with respect to the ROI box.
Finally I've got my app working properly. This is all quite fiddly, so be careful!
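To make that concrete, here is a minimal sketch of the conversion described above (names are illustrative, and it assumes the ROI and boundingBox behave exactly as stated): re-express the ROI-relative box in whole-image normalized coordinates, flip the y-axis so the origin moves to the top-left, then scale to pixels.

import CoreGraphics

// boundingBox: Vision result, normalized, origin at bottom-left, relative to the ROI.
// roi: the regionOfInterest, normalized with respect to the whole captured image.
// imageSize: pixel size of the ARFrame's capturedImage.
func imageRect(for boundingBox: CGRect, roi: CGRect, imageSize: CGSize) -> CGRect {
    // Re-express the box in whole-image normalized coordinates (still bottom-left origin).
    let x = roi.origin.x + boundingBox.origin.x * roi.width
    let y = roi.origin.y + boundingBox.origin.y * roi.height
    let w = boundingBox.width * roi.width
    let h = boundingBox.height * roi.height

    // Flip the y-axis so the origin is at the top-left, then scale to pixels.
    return CGRect(x: x * imageSize.width,
                  y: (1 - y - h) * imageSize.height,
                  width: w * imageSize.width,
                  height: h * imageSize.height)
}

Mapping from the full capturedImage to what is actually visible on screen is a separate step; since the captured image is wider than the view (as noted above), ARFrame's displayTransform(for:viewportSize:) can help with that part.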
Try looking at this Git repo. Having messed with it myself, it is not the most performant, but it should give you a start.

ARKit: How to detect only Horizontal floor excluding obstacles

I am developing a horizontal plane detection application using ARKit, and it seems to be working fine. Once the floor is detected, I try to place an SCNPlane, 2 meters high and 2 meters wide, horizontally at the centre point of the detected floor. It also works fine when the floor is empty. But if the floor has some objects on it (obstacles like furniture), then the SCNPlane is placed on top of the object instead of on the floor (under the object). How can I detect only the horizontal floor, excluding the objects? Please guide me, thanks.
When you are searching and have found the floor, ARKit will put out a plane; people normally use some kind of grid image to visualize this, but some don't want to show it. Once that plane has been placed, you place an SCNPlane, which I assume has a physics body, as you say it falls towards the floor / furniture?
You can do this in three ways (see the sketch after this list):
You can stop the worldTrackingConfiguration once the floor has been found.
Once the floor has been found, you can fetch that Y-position and bind every object to fall towards that Y-position.
I guess you could check whether the Y-position of a new detection overlaps with the floor detection; if it does, it's fine, otherwise it's not. (I have not tested this one.)
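A minimal sketch combining the second and third ideas, assuming you treat the first horizontal plane ARKit reports as the floor (that assumption, the names, and the 5 cm tolerance are all illustrative):

import ARKit
import SceneKit

final class FloorOnlyDelegate: NSObject, ARSCNViewDelegate {
    private var floorY: Float?
    private let tolerance: Float = 0.05   // 5 cm

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor, plane.alignment == .horizontal else { return }
        let y = anchor.transform.columns.3.y            // world-space height of this plane

        if floorY == nil { floorY = y }                  // remember the first horizontal plane as the floor
        guard let floor = floorY, abs(y - floor) < tolerance else { return }  // skip furniture tops

        let geometry = SCNPlane(width: 2, height: 2)     // 2 m x 2 m
        let planeNode = SCNNode(geometry: geometry)
        planeNode.eulerAngles.x = -.pi / 2               // rotate to lie flat on the floor
        node.addChildNode(planeNode)
    }
}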

Prevent GMSMarkers from Scaling on iOS?

I am using GMSMarkers on iOS with the Google Maps API. I have a GMSMarker with a PNG image, and no matter my zoom level on the map view, the GMSMarker keeps its size relative to the screen. If I zoom in it is 30 points across the screen, and if I am zoomed all the way out on the map it is still 30 points. I do NOT want this to happen; I want the GMSMarker to stay small no matter what the zoom level is. It would be preferable not to have to loop through all my GMSMarkers every time the camera zoom changes, as I will eventually have 50-100 of them on the map.
Is there any way to keep the GMSMarker small no matter the zoom level on the GMSMapView?
I am using Objective-C, but if someone can only give me help in Swift or even Java I can still make do with that.
You could use a GMSGroundOverlay to add an image to the map which scales along with the map. So instead of a fixed size in pixels like a GMSMarker, it stays a fixed size in metres.
Note that another difference is that a ground overlay also stays oriented with the map, i.e. if the map is rotated or tilted, the top of the image rotates to keep pointing north, instead of staying pointed at the top of the screen like a marker does.
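A minimal sketch of the ground-overlay approach in Swift (the coordinates, image name, and the roughly 30 m box are illustrative): because the overlay is pinned to geographic bounds, it shrinks and grows with the map instead of staying a fixed number of points on screen.

import GoogleMaps
import CoreLocation

func addScalingMarker(to mapView: GMSMapView) {
    // Roughly a 30 m x 30 m box around the point of interest (illustrative coordinates).
    let southWest = CLLocationCoordinate2D(latitude: 51.4999, longitude: -0.1272)
    let northEast = CLLocationCoordinate2D(latitude: 51.5001, longitude: -0.1268)
    let bounds = GMSCoordinateBounds(coordinate: southWest, coordinate: northEast)

    let overlay = GMSGroundOverlay(bounds: bounds, icon: UIImage(named: "marker.png"))
    overlay.bearing = 0          // stays oriented with the map when the map rotates
    overlay.map = mapView
}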

How can I add a 3d object as a marker on Google Maps like Uber does

I want to add a 3D marker for showing cars on the map, with rotation like Uber does, but I can't find any information on adding 3D objects with the Google Maps SDK for iOS.
Would appreciate any help.
Apparently no one is seeing what the OP and I are seeing, so here's a video of an Uber car turning 90 degrees. Play it frame by frame and you'll notice that it's not a simple image rotation. Either Uber went through the trouble of rendering ~360 angles of each vehicle, or it really is a 3D model. Doing 360 images of every car seems foolish to me.
From what I can tell, they are not using 3D objects. They are also not animating between 400 images of a car at different angles. They're doing a mix of rotating image assets and animating between ~50-70 images of a car at different angles. The illusion is perfect because it really does look like they used 3D car models!
Look at this GIF of an Uber car turning a corner (Dropbox link):
We can clearly see that the shadow and the car's viewing angle don't update as often as the car's rotation.
Here I overlaid two screenshots of the car at different angles, both using the same car image:
We can see that the map is rotated ~5 degrees, but the car image is perfectly sharp because it hasn't changed; it was simply rotated.
Uber just released a blog post documenting this.
It looks like the vehicles were modeled in 3D software and then image assets depicting different angles were exported for the app. Depending on where the vehicle is on the map and its heading, a different asset is used.
First, they are NOT 3D objects, if that's what you're referring to (it's possible to create one, though, but it's a waste of time). They are simply images created in Photoshop or Illustrator (mostly) that have a 3D perspective (they are also Retina optimized, which is why they look very clear).
The reason you see the car rotated is that the UIImageView holding the image is rotated (using CABasicAnimation mostly), based on a calculation of the device's heading/position (the same technology running apps use to track your location, etc.), which you can retrieve using Core Location.
It's a process, but very doable. Good luck!
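A minimal sketch of that rotation step, assuming the car image sits in a view (for example a GMSMarker's iconView) and that you already get a heading from Core Location; the names and the 0.3 s duration are illustrative:

import UIKit
import CoreLocation

func rotate(markerIconView: UIView, toHeading heading: CLLocationDirection) {
    let radians = CGFloat(heading * .pi / 180)           // heading in degrees, 0 = north

    // Spin the layer around the z-axis so the car points along its heading.
    let spin = CABasicAnimation(keyPath: "transform.rotation.z")
    spin.toValue = radians
    spin.duration = 0.3
    spin.fillMode = .forwards
    spin.isRemovedOnCompletion = false
    markerIconView.layer.add(spin, forKey: "heading")
}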
Thanks. All answers are valid.
If you want, you can see a video of it running and how it works.
You can generate a sprite sheet of around 60 tiles.
How I implemented it and the tools you need:
A 3D source car model.
Blender: animate the camera using an elliptical path animation.
The camera rotates around the car from a top-to-bottom view.
Render the 3D marker using the sprites generated with Blender; for the angles, use the bearing change on location updates (see the sketch after the result link below).
Your vehicle needs to be rendered to support most screens, so the base size for each tile was 64 px, and I scaled it according to the DPI of the screen.
Resulting implementation:
https://twitter.com/ronfravi/status/1133226618024022016?s=09
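A minimal sketch of the sprite-sheet idea described in the steps above (the tile names, the 60-tile count, and the animation duration are illustrative): pick the pre-rendered angle closest to the current bearing and swap it in as the marker icon while animating the position change.

import GoogleMaps
import CoreLocation
import UIKit

let tileCount = 60   // frames exported from Blender, 6 degrees apart

func updateCarMarker(_ marker: GMSMarker, location: CLLocation) {
    let bearing = location.course >= 0 ? location.course : 0          // degrees, 0 = north; -1 means invalid
    let index = Int((bearing / 360.0) * Double(tileCount)) % tileCount
    marker.icon = UIImage(named: String(format: "car_%02d", index))   // e.g. car_00 ... car_59

    CATransaction.begin()
    CATransaction.setAnimationDuration(0.5)
    marker.position = location.coordinate                             // animate the move to the new position
    CATransaction.end()
}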
I believe a pair of marker images, one being the real marker and the other a darker, blurry shadow, can do the trick in a cheaper way. Putting the shadow marker beneath the marker, shifting it along the X and Y axes by an amount where you feel the shadow sits appropriately, and finally moving them as well as rotating them at the same time (on the web version, you may need separate rotated images) should be able to recreate the illusion.
As @Felix Lapalme has already explained it as clearly as it can be explained, I am not diving any deeper into explaining it.
Check out my repo.
I used a .dae model and turned it according to the heading variable.
https://github.com/hilalalhakani/HHMarker
I achieved this in Xamarin by rendering three.js in a webview and then sending the image buffer directly to the marker instead of drawing it to the screen. It only took a couple of days, and for my use case it was needed because I want drivers to be able to change the color and kind of car, but if this is not the functionality you need, you're better off just using a sequence of rendered images.
If you want to rotate your image as the marker, or want to show a moving object, you can use a .gif image. It may be helpful for you.
Swift 3
let position = CLLocationCoordinate2D(latitude: 51.5, longitude: -0.127)
//Rotate a marker
let degrees = 90.0
let london = GMSMarker(position: position)
london.title = "London"
//Customize the marker image
london.icon = UIImage(named: "YourGifImageName")
london.groundAnchor = CGPoint(x: 0.5, y: 0.5)
london.rotation = degrees
london.map = mapView
For more info, please check here.

iOS Maps - Grid Overlay

I'm looking for a way to overlay the iOS map with a grid. The complete earth needs to be divided into squares. The location of the user doesn't affect the placement of the squares (in other words, the squares are always placed the same, on every iPhone, no matter where the user is).
I looked into MKOverlay, but I've never used it, so it's very new to me. Also, zooming in/out should affect the overlay. It's very important that the squares always cover the same area on the map (for example, a square should be 100m x 100m in the real world; when you zoom out, the square should still cover the same 100m x 100m).
Is there anybody who can point me in the right direction?
Is it possible to draw the grid from an .xml file? For example, an .xml file holding all the squares with their coordinates on the map; when the user loads the map, the 100 squares around the user are loaded.
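A rough sketch (not from the thread) of one way to start with MapKit: build MKPolygon squares snapped to a fixed geographic grid, so their placement never depends on the user, and their ground size stays the same at every zoom level. The 100 m side is only approximate in longitude and degrades toward the poles; the function and parameter names are illustrative.

import MapKit
import Foundation

func gridSquares(near center: CLLocationCoordinate2D, count: Int = 10, sideMeters: Double = 100) -> [MKPolygon] {
    let dLat = sideMeters / 111_320.0                                       // ~degrees of latitude per square
    let dLon = sideMeters / (111_320.0 * cos(center.latitude * .pi / 180))  // ~degrees of longitude per square

    // Snap to the grid rather than to the user's exact position.
    let baseLat = (center.latitude / dLat).rounded(.down) * dLat
    let baseLon = (center.longitude / dLon).rounded(.down) * dLon

    var squares: [MKPolygon] = []
    for row in -count / 2..<count / 2 {
        for col in -count / 2..<count / 2 {
            let lat = baseLat + Double(row) * dLat
            let lon = baseLon + Double(col) * dLon
            let corners = [
                CLLocationCoordinate2D(latitude: lat,        longitude: lon),
                CLLocationCoordinate2D(latitude: lat + dLat, longitude: lon),
                CLLocationCoordinate2D(latitude: lat + dLat, longitude: lon + dLon),
                CLLocationCoordinate2D(latitude: lat,        longitude: lon + dLon)
            ]
            squares.append(MKPolygon(coordinates: corners, count: corners.count))
        }
    }
    return squares
}

Add the result with mapView.addOverlays(...) and return an MKPolygonRenderer from mapView(_:rendererFor:) to draw the squares.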
