I am using Objective-C for the first time, and instead of choosing to do a nice simple app, I'm trying to make a tour application that will have the route mapped on it with the tourist points marked. I eventually want to add audio to the points, but for now I just want the route mapped and working!
Yes, I know I'm silly for picking such a hard task, but any help? I am just stuck on how to get into this. I don't want anything fancy; I really just want to get the map up with the points and route displayed.
You can store the specific points in a plist file (or a plain text file).
Once you have those points, add a map view to your app (just drag one in), then read the points and add an annotation for each one to the map.
You can also draw lines between those points to display the route.
Apple's map view and GPS APIs have a way of telling you when the phone has moved. When you detect that, you can check whether the phone is near one of the audio points and then start playing the sound (there is a function that gives you the distance between two points on the map).
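In case it helps to see the moving parts together, here is a minimal sketch in Swift (the same MapKit calls exist in Objective-C); the file name Points.plist and its keys are my assumptions about how you might store the points:

import MapKit

func loadTourPoints(into mapView: MKMapView) {
    // Assumed format: "Points.plist" is an array of dictionaries with
    // "title", "latitude" and "longitude" keys.
    guard let url = Bundle.main.url(forResource: "Points", withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let points = (try? PropertyListSerialization.propertyList(from: data, format: nil)) as? [[String: Any]]
    else { return }

    var coordinates: [CLLocationCoordinate2D] = []
    for point in points {
        guard let lat = point["latitude"] as? Double,
              let lon = point["longitude"] as? Double else { continue }
        let coordinate = CLLocationCoordinate2D(latitude: lat, longitude: lon)
        coordinates.append(coordinate)

        // One pin per tourist point.
        let annotation = MKPointAnnotation()
        annotation.coordinate = coordinate
        annotation.title = point["title"] as? String
        mapView.addAnnotation(annotation)
    }

    // The route: a polyline overlay connecting the points. You still need
    // an MKMapViewDelegate that returns an MKPolylineRenderer to draw it.
    mapView.addOverlay(MKPolyline(coordinates: coordinates, count: coordinates.count))
}

// The "distance between 2 points" function mentioned above:
let meters = CLLocation(latitude: 51.50, longitude: -0.12)
    .distance(from: CLLocation(latitude: 51.51, longitude: -0.12))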
I wonder how to create a level-measuring app with ARKit, like the iOS 12 Measure app does.
I have been searching for a solution or an idea for the last week. I have watched a number of videos and tutorials, but I didn't get an idea of how to do it.
What I think is that ARSCNView's pointOfView node, which represents the camera, could be used, e.g. by reading its eulerAngles, but I am not able to figure it out.
Any help or suggestion would be appreciated.
For getting a value for how level the phone is, I'd recommend using self.sceneView.pointOfView?.orientation (if you only want one of the components of this SCNVector4, just add .x, .y, or .z on the end).
For instance, if I hold my phone perfectly upright, the value of self.sceneView.pointOfView?.orientation.x will be approximately 0, whereas holding it upside-down will give a value of around 1 or -1.
You can use these values to figure out just how level the phone is on all three axes, and convert them to degrees.
I'd recommend placing the following code in a project, triggering it via a button or a loop that runs it constantly, and watching how the values change. Hopefully this helps. (Note: replace sceneView with whatever your ARKit scene view is named.)
// orientation is an SCNQuaternion (a typealias of SCNVector4); the results
// are optionals because pointOfView may be nil.
print(self.sceneView.pointOfView?.orientation.x)
print(self.sceneView.pointOfView?.orientation.y)
print(self.sceneView.pointOfView?.orientation.z)
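For converting the readings to degrees, the camera node's eulerAngles (which is in radians) is a bit more direct than the quaternion. A small sketch, again assuming your view is named sceneView:

// eulerAngles is an SCNVector3 of radians (pitch, yaw, roll).
if let angles = self.sceneView.pointOfView?.eulerAngles {
    // Convert radians to degrees for a human-readable level reading.
    let pitchDegrees = angles.x * 180 / .pi
    let rollDegrees = angles.z * 180 / .pi
    print("pitch: \(pitchDegrees)°, roll: \(rollDegrees)°")
}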
Is there any proof of concept of how to implement multiple AR markers with A-Frame?
For example, something like this: https://www.youtube.com/watch?v=Y8WEGGbLWlA
The first video in this post from Alexandra Etienne shows the effect I'm aiming for (multiple distinct AR "markers" with distinct content): https://medium.com/arjs/area-learning-with-multi-markers-in-ar-js-1ff03a2f9fbe
I'm a bit unclear on whether, when using multiple markers, they need to be close to each other / exist in the same camera view.
This example from the AR.js repo uses multiple markers, but they're all of different types (i.e., one is a Hiro marker, one is a Kanji marker, etc.): https://github.com/jeromeetienne/AR.js/blob/master/aframe/examples/multiple-independent-markers.html
tl;dr: working glitch here. Learn the area (the image is in the assets), click the accept button, and toggle the marker helpers.
Now, a bit of detail:
1) Loading saved area data
Upon initialisation, when AR.js detects that you want to use the area marker preset, it tries to grab a localStorage reference:
localStorage.getItem("ARjsMultiMarkerFile")
The most important data there is an array of {markerPreset, url.patt} pairs, which will be used to create the area.
Note: By default it's just the hiro marker.
2) Creating an area data file
When you have debugUIEnabled set to true:
<a-scene embedded arjs='sourceType: webcam; debugUIEnabled: true'>
a button labelled "Learn-new-marker-area" shows up.
Clicking it redirects you to a screen where you can save the area file.
There is one catch, though: by default, the learner page that gets loaded is on another domain.
Strictly speaking: ARjs.Context.baseURL = 'https://jeromeetienne.github.io/AR.js/three.js/'
Any data saved there won't be loaded on our website, since localStorage is isolated per origin.
To save and use the marker area, you have to create your own learner.html. It can be identical to the original; just keep in mind you have to keep it on the same domain.
To make the debugUI button redirect the user to your learner HTML file, you need to set
ARjs.AnchorDebugUI.MarkersAreaLearnerURL = "myLearnerUrl.html"
before the <a-marker>s are initialized. Just do it in the <head>.
Once on the learner site, make sure the camera sees all the markers, and approve the learning.
Once approved, you will be redirected back to your website, the area file will be loaded, and the data will be used.
As @mnutsch stated, AR.js does what you want.
You can display two different models on two different markers. If the camera doesn't see one of the markers, the model vanishes (or stays where it was last, depending on your implementation).
The camera doesn't need to see both.
Screenshot:
https://www.dropbox.com/s/i21xt76ijrsv1jh/Screenshot%202018-08-20%2011.25.22.png?dl=0
Project:
https://curious-electric.com/w/experiments/aframe/ar-generic/
Also, unlike Vuforia, there is no 'extended tracking': once the marker is out of sight, you can't track any more.
I studied the multi-marker documentation of ARToolKit for iOS, and I am having some trouble achieving a sort of QR-code arrangement.
I want, for example:
A set of 6 markers positioned differently on a picture, and when and only when ALL of them are present, some sort of video is displayed at their origin (I want to use CORNER markers, like a QR-code system).
How do I do this? From what I've seen, with multi-markers, if 1 out of 6 is present, for example, the object is displayed.
From looking into the ARToolKit code, you can see that a multi-marker is internally handled as one single marker consisting of several patterns:
https://github.com/artoolkit/artoolkit5/blob/master/lib/SRC/ARWrapper/ARMarker.cpp#L344
https://github.com/artoolkit/artoolkit5/blob/master/lib/SRC/ARWrapper/ARMarkerMulti.cpp#L75
That is why ARToolKit will always return true whenever at least one of the markers configured in the multi-marker configuration is visible.
Taking that into account, multi-markers are not the way to go for the target you would like to reach.
What you can do, however, is configure each marker separately and add them as single markers. Then you can query whether all of these single markers are visible.
If so, you can calculate the origin of all these single markers and render your object there, as sketched below.
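Just to illustrate the "all visible, then average" idea, here is a minimal Swift sketch; visible(_:) and poseMatrix(_:) are hypothetical stand-ins for whatever visibility and pose queries your ARWrapper binding exposes:

import simd

// Hypothetical callbacks standing in for your ARWrapper binding:
// visible(id) answers whether a single marker is currently tracked,
// poseMatrix(id) returns that marker's 4x4 transform in camera space.
func originOfMarkers(ids: [Int],
                     visible: (Int) -> Bool,
                     poseMatrix: (Int) -> simd_float4x4) -> simd_float3? {
    // Require ALL markers, like the corner system of a QR code.
    guard ids.allSatisfy(visible) else { return nil }

    // Average the translation column of every pose to get a common origin.
    var sum = simd_float3(repeating: 0)
    for id in ids {
        let t = poseMatrix(id).columns.3
        sum += simd_float3(t.x, t.y, t.z)
    }
    return sum / Float(ids.count)
}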
You can get an idea of how to configure several single markers if you take a look here:
http://augmentmy.world/moving-cars-augmented-reality
Also, take this example on how to put two markers into the same coordinate system (and calculate the distance between them); you can use it as a starting point for calculating the origin between several markers:
https://github.com/artoolkit/artoolkit5/tree/master/AndroidStudioProjects/ARMarkerDistanceProj
I know that these are not iOS examples, but I have only done Android so far. The ARWrapper interface should be the same on Android and iOS, though, so there should not be much difference between the two.
I hope that helps.
I am working on a trails/maps app that has custom trails mapped out in a region and will aid the user in navigating some trails in a forested area.
Currently, I am using MKMapView to get the user's location and loading the trails as overlays from a KML file. The problem I am having is that while testing the app, I noticed that in some situations the blue dot representing the user's position goes off the trail overlays. That is expected, since GPS (especially on phones) is not that great, plus there is some error in the values recorded for the trails in the KML file.
I apologize if all of that is a bit confusing. My question is: is it possible to "snap" the user location (the blue dot that we all love) to a trail/overlay/path placed on the map, with a specific tolerance? For example, if the blue dot appears to be a few pixels off the trail, it would be placed right in the middle of the trail. If it is far off, the user probably walked off the trail, and no snapping would happen to the user's location.
First off, I wouldn't bother. If they are just a few pixels off, they won't care; but if they are further away, then it's important that they know where they are as accurately as possible. They could be lost in the snow and looking for trail markings.
If you do go ahead with it, you'll have to abandon the userLocation dot and build your own. Using a CLLocationManager, you can get told every time the device receives new location information and move your custom annotation dot to where you think the user should be. More trouble than it's worth, IMHO.
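If you decide to try it anyway, the core of it is finding the closest trail point to the user's coordinate and snapping only within a tolerance. A minimal sketch (it snaps to trail vertices for brevity; a real version would project onto each trail segment):

import CoreLocation

// Snap a location to the nearest vertex of a trail, but only if it is
// within `tolerance` meters; return nil when the user is too far off.
func snap(_ location: CLLocationCoordinate2D,
          to trail: [CLLocationCoordinate2D],
          tolerance: CLLocationDistance) -> CLLocationCoordinate2D? {
    let user = CLLocation(latitude: location.latitude, longitude: location.longitude)
    var best: (point: CLLocationCoordinate2D, distance: CLLocationDistance)?
    for vertex in trail {
        let d = user.distance(from: CLLocation(latitude: vertex.latitude,
                                               longitude: vertex.longitude))
        if best == nil || d < best!.distance {
            best = (vertex, d)
        }
    }
    // No snapping when the user has probably walked off the trail.
    guard let hit = best, hit.distance <= tolerance else { return nil }
    return hit.point
}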
I want to build an iPad app that detects a physical alphabet shape placed on the iPad screen and prints the letter to the screen after processing the detection. Is this doable?
I am trying to find a way to implement this, but I could not find any article or online resource to guide me toward it.
Thanks,
I would imagine you could start by looking at the various pens and styluses that are available for iPads, and at how they work. Then you would need to see if you can make an object that activates the touch mechanism over a defined area in the same way, for example a line, and see if you can detect the touch points along that line. Sorting all that out will effectively get you started.
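As a first experiment along those lines, you could log every simultaneous contact point an object produces on a view; this minimal sketch (the class name is my own) just prints them:

import UIKit

// A view that logs all simultaneous touch points, e.g. from a
// conductive shape pressed against the screen.
class TouchPatternView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true // off by default on UIView
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // All touches currently on the screen, not just the new ones.
        let points = event?.allTouches?.map { $0.location(in: self) } ?? []
        print("contact points: \(points)")
        // Matching these points against known patterns (a line, a corner
        // layout, ...) is where recognizing a specific shape would happen.
    }
}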