Folks,
I am trying to build a utility like the one shown in the picture below. Basically, the camera display window covers part of the device's screen, and a list of points connected by a curve or straight line is presented over the camera view as an overlay. I understand this can be drawn using Quartz, but that is less than half of my problem.
The real issue is that the overlay should present different points as the bearing and elevation change.
For example:
If the bearing changes by +5 degrees and the elevation by +2 degrees, then PT1 will sit next to the right edge of the camera view, PT2 will also move to the right, and PT3 will become visible.
Another movement that changes the bearing by +10 degrees would move PT1 out of view, put PT2 at the right edge, PT3 in the middle, and PT4 at the left edge of the camera view.
My questions after the picture:
Is it possible to have a view that is substantially larger than the camera view (as shown below) and use some methods (I need to research these) to move the view when the bearing/elevation changes (see the rough sketch after these questions)? Is it recommended performance-wise?
Is Quartz the way to go here? What else do I need (other than, of course, AVFoundation for the camera and Core Location/Core Motion)? Since my application is iOS 7 only, I can use any new methods/APIs exclusive to iOS 7.
Aside from the raywenderlich.com tutorial on the augmented reality game, are there any tutorials you know of that could help me with this endeavor?
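For what it's worth, here is a minimal sketch (not a definitive implementation) of the "big overlay view" idea from the first question: the overlay is larger than the camera preview and gets shifted as the compass heading and device pitch change. The class name, the pixels-per-degree constants, and the callback wiring are all assumptions you would replace with values calibrated against the camera's field of view.

```swift
import UIKit
import CoreLocation

// Sketch only: shifts a large overlay view as bearing/elevation change.
// Pixels-per-degree values are arbitrary assumptions, not calibrated.
final class OverlayScroller {
    let overlay: UIView                  // the large view drawn with Quartz
    let pixelsPerDegreeX: CGFloat = 12   // horizontal shift per degree of bearing
    let pixelsPerDegreeY: CGFloat = 12   // vertical shift per degree of elevation
    private var referenceHeading: CLLocationDirection = 0
    private var referencePitch: Double = 0   // expected in degrees here
    private let basePosition: CGPoint

    init(overlay: UIView) {
        self.overlay = overlay
        self.basePosition = overlay.center
    }

    func setReference(heading: CLLocationDirection, pitch: Double) {
        referenceHeading = heading
        referencePitch = pitch
    }

    /// Call this from your CLLocationManager / CMMotionManager callbacks.
    func update(heading: CLLocationDirection, pitch: Double) {
        let dBearing = CGFloat(heading - referenceHeading)
        let dElevation = CGFloat(pitch - referencePitch)
        // Turning the bearing to the right shifts the overlay to the left,
        // so PT1 drifts toward the right edge and PT3 scrolls into view.
        overlay.center = CGPoint(x: basePosition.x - dBearing * pixelsPerDegreeX,
                                 y: basePosition.y + dElevation * pixelsPerDegreeY)
    }
}
```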
Have a look at the following; each article or link covers a different key piece required to build your final product. You will eventually be using a combination of geolocation, the compass, and/or the iPhone's gyroscope data.
Reading all the references and implementing them one by one in separate projects will give you a solid start on how to combine everything into your application. But first you need a solid understanding of how to work with what you learn and how to apply it to your own project. A rough sketch of how the compass and motion readings come together follows the reference list.
References:
A cool project from Ray Wenderlich that teaches you how to use GPS location coordinates in your application
Augmented reality location based tutorial
The next two links show you how to read gyroscope data to get the pitch, yaw, and rotation, and to determine the device's current orientation in space.
Apple gyroscope example app
Another Core Motion gyroscope example
This one will teach you how to use the compass:
Ray Wenderlich's augmented reality compass tutorial for iOS
Here's some more augmented reality material on overlaying content on the camera view:
iPhone AR Toolkit
Augmented reality marker tracking tutorial
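To make the combination concrete, here is a rough sketch of what the compass and gyroscope references above add up to: a single object that reads the heading from Core Location and the device attitude from Core Motion. The class name and the update interval are illustrative assumptions, not taken from any of the linked tutorials.

```swift
import CoreLocation
import CoreMotion

// Sketch only: gathers compass heading and device attitude in one place.
final class OrientationSource: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let motionManager = CMMotionManager()

    private(set) var heading: CLLocationDirection = 0      // degrees from north
    private(set) var pitch = 0.0, roll = 0.0, yaw = 0.0    // radians

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
        locationManager.startUpdatingHeading()

        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let attitude = motion?.attitude else { return }
            self?.pitch = attitude.pitch
            self?.roll = attitude.roll
            self?.yaw = attitude.yaw
        }
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateHeading newHeading: CLHeading) {
        // Prefer true heading when it is valid; fall back to magnetic heading.
        heading = newHeading.trueHeading >= 0 ? newHeading.trueHeading
                                              : newHeading.magneticHeading
    }
}
```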
Related
I'm working on creating a marker-based AR game using A-Frame 1.2.0 and AR.js 3.3.3. The display shows 2D images of animals that the user has to "find". The whole game functions well now, but I was running into an issue of photos appearing distorted or warped. I figured out that the issue is that the marker's plane is not being read correctly by mobile devices. The pictures below include a red cube to show the issue better. The top one is from a PC's webcam and correctly shows the box mounted to the marker. The bottom one shows the box not mounted to the marker.
I figure that the issue is either the mobile device's gyroscope or that the screen dimensions are affecting the aspect ratio.
I've tried a few properties on A-Frame's a-entity, such as look-controls='Enabled:false' and look-controls='magicWindowTrackingEnabled: false'. Neither of those made a difference. I haven't found properties within AR.js to use. Just wondering if anyone has come across this issue and found a fix.
image: box aligned correctly with the marker plane
image: box not aligned with the marker plane
AR.js comes in two different, mutually exclusive builds: image + location-based tracking, and marker tracking (link).
Importing the wrong one may (and likely will) cause incorrect behavior like the one you are experiencing.
I'm starting to learn how to use ARKit, and I would like to add a button like the one in the Pokémon GO application where you can switch between AR ON (with a model placed in the real world) and AR OFF (without using the camera, just the 3D model with a fixed background). Is there an easy way to do it?
Another good example of what you're asking about is the AR Quick Look feature in iOS 12 (see WWDC video or this article): when you quick look a USDZ file you get a generic white-background preview where you can spin the object around with touch gestures, and you can seamlessly switch back and forth between that and a real-world AR camera view.
You've asked about ARKit but not said anything about which renderer you're using. Remember, ARKit itself only tells you about the real world and provides live camera imagery, but it's up to you to display that image and whatever 3D overlay content you want — either by using a 3D graphics framework like SceneKit, Unity, or Unreal, or by creating your own renderer with Metal. So the rest of this answer is renderer-agnostic.
There are two main differences between an AR view and a non-AR 3D view of the same content:
An AR view displays the live camera feed in the background; a non-AR view doesn't.
3D graphics frameworks typically involve some notion of a virtual camera that determines your view of the 3D scene — by moving the camera, you change what part of the scene you see and what angle you see it from. In AR, the virtual camera is made to match the movement of the real device.
Hence, to switch between AR and non-AR 3D views of the same content, you just need to manipulate those differences in whatever way your renderer allows (a SceneKit-based sketch follows this list):
Hide the live camera feed. If your renderer lets you directly turn it off, do that. Otherwise you can put some foreground content in front of it, like an opaque skybox and/or a plane for your 3D models to rest on.
Directly control the camera yourself and/or provide touch/gesture controls for the user to manipulate the camera. If your renderer supports multiple cameras in the scene and choosing which one is currently used for rendering, you can keep and switch between the ARKit-managed camera and your own.
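As mentioned above, here is a minimal sketch of one way this could look with SceneKit as the renderer. It assumes you keep the shared model in one SCNScene and toggle between an ARSCNView and a plain SCNView; the class and property names are illustrative, not part of ARKit's API, and a production app might instead hide the feed within a single view.

```swift
import UIKit
import ARKit
import SceneKit

// Sketch only: toggles between an AR view and a plain 3D view of the same scene.
class ARToggleViewController: UIViewController {
    let arView = ARSCNView()      // renders the live camera feed behind the scene
    let plainView = SCNView()     // renders the same scene with no camera feed
    let modelScene = SCNScene()   // shared content: your 3D model lives here

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        plainView.frame = view.bounds
        view.addSubview(plainView)
        view.addSubview(arView)

        // Both views display the same scene graph, so the model stays in sync.
        arView.scene = modelScene
        plainView.scene = modelScene

        // Non-AR mode: fixed background and a user-controllable virtual camera.
        plainView.backgroundColor = .white
        plainView.allowsCameraControl = true   // touch gestures to orbit/zoom
    }

    func setARMode(_ on: Bool) {
        if on {
            // Difference 1: show the live camera feed (ARSCNView handles this).
            // Difference 2: let ARKit drive the virtual camera from device motion.
            arView.isHidden = false
            plainView.isHidden = true
            arView.session.run(ARWorldTrackingConfiguration())
        } else {
            arView.session.pause()
            arView.isHidden = true
            plainView.isHidden = false
        }
    }
}
```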
I just want some hints on how I can create an iOS app which can do the following.
When the user is at point X, they tap a start button so the app starts a timer and tracks their movement. The user will be on a horse and needs to ride in a full circle. When the user comes back to point X, the app should draw the route they took on the horse.
The aim is to ride in a complete circle. I want to make this app to practice and see how close to a circle I ride.
I looked at the GPS locator, but I am not sure whether it will give me accurate enough results, because the circle I ride can be as small as 60 m in radius or less.
I don't know if iOS GPS can be this accurate. I read an article on the motion sensors and how to track rotation and acceleration, but I am not quite sure how to use that to my advantage.
I just need some tips, like which APIs to use, etc.
Using the Standard Positioning Service one can achieve 15 meter horizontal accuracy 95% of the time. This means that 95% of the time, the coordinates you read from your GPS receiver display will be within 15 meters of your true position on the earth.
More information: click here
To integrate a map and draw a path from the current position, Google Maps is a good option on iOS.
For a small range and more accurate results, use an indoor positioning system (IPS).
For more information about indoor positioning systems, click here:
http://developer.estimote.com/
And a GitHub iOS demo: get code
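To tie the pieces together, here is a hedged sketch of the GPS-based approach, assuming MapKit rather than the Google Maps SDK mentioned above: record locations with Core Location, drop inaccurate fixes, and build an MKPolyline to draw once the ride is over. The class name and the 10 m accuracy threshold are assumptions for illustration.

```swift
import CoreLocation
import MapKit

// Sketch only: records a ride and exposes it as an MKPolyline overlay.
final class RouteRecorder: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private(set) var points: [CLLocationCoordinate2D] = []

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBestForNavigation
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func stop() { manager.stopUpdatingLocation() }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        for location in locations {
            // Drop low-quality fixes; with a ~60 m circle, a 15 m error is huge,
            // so only keep readings that claim better horizontal accuracy.
            guard location.horizontalAccuracy > 0,
                  location.horizontalAccuracy < 10 else { continue }
            points.append(location.coordinate)
        }
    }

    /// Build an overlay for an MKMapView once the ride is finished.
    func routeOverlay() -> MKPolyline {
        MKPolyline(coordinates: points, count: points.count)
    }
}
```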
I am currently working with the Xtion Pro Live using the OpenNI library.
The problem is that the Xtion must be placed vertically (along a wall), and in this position the user calibration always fails, so it is impossible to get the skeleton info.
I would like to know how to fix this issue; I suppose there is something I didn't understand about GetSkeletonCap().RequestCalibration() or about the SampleConfig.xml file. After a lot of research, however, I am still stuck.
Try moving the user first, then moving the camera in a 360-degree circumference around the subject, keeping the vertical positioning of the camera the same all the way through. It may find the optimal angle for the depth sensor. We did this twice with the Kinect and it worked.
Also make sure the room is well lit.
I'm trying to develop a mini "Around Me"-like app using the camera, compass, and location. I would like to display places' images on my screen.
For the moment I have my location and my orientation from the compass. I would like to know how I can determine the on-screen position of the place I want to display.
Thanks for your help ;)
Once you have relative distance and bearing, which you can determine from two points in the same coordinate space using algorithms found on this page, figuring out where a known coordinate is with respect to a known viewpoint is basically a perspective projection; the math is outlined in this Wikipedia article. The rotation of the camera is given by the compass, and the tilt by the accelerometer (the position, of course, comes from GPS).
I'm trying to find a better document - there are a couple of extra things to consider, like the camera parameters - but this is a good starting point.
If it's too involved (like if you're not comfortable with rotation matrices) we can break it right down to the simple trig.
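In case the "simple trig" version helps, here is a rough sketch (not the answer's own method) of computing the bearing from the viewer to a place and mapping the difference from the compass heading to a horizontal screen position. The 60-degree field-of-view default and the function names are assumptions for illustration.

```swift
import UIKit
import CoreLocation

/// Initial bearing from one coordinate to another, in degrees clockwise from north.
func bearing(from start: CLLocationCoordinate2D,
             to end: CLLocationCoordinate2D) -> Double {
    let lat1 = start.latitude * .pi / 180, lat2 = end.latitude * .pi / 180
    let dLon = (end.longitude - start.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let degrees = atan2(y, x) * 180 / .pi
    return fmod(degrees + 360, 360)   // normalize to 0..360
}

/// Horizontal screen x for a place, or nil when it falls outside the camera's view.
func screenX(placeBearing: Double, heading: Double,
             screenWidth: CGFloat, horizontalFOV: Double = 60) -> CGFloat? {
    var delta = placeBearing - heading        // how far right of centre, in degrees
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }
    guard abs(delta) <= horizontalFOV / 2 else { return nil }
    return screenWidth / 2 + CGFloat(delta / horizontalFOV) * screenWidth
}
```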
The code in the iPhone ARKit project does this, and quite a bit more. While you may not be able to use their complete library, it is a great reference on the subject of augmented reality.
Check out 3DAR, it lets you add an AR view to a MKMapView app very easily. There's a video tutorial on this process, as well as some sample code, on the 3DAR site, www.3dar.us
You can create a location-based AR app in Junaio. It's an AR browser, free to use and deploy (as long as it's not a custom app and it stays within Junaio).