How can I build a speed camera alert app? - ios

I have a problem when I drive 🚘: I forget to slow down on highways where there are speed cameras 🎥, so I thought about making an iOS app to fix this. I explained my idea in an image, but I can't convert it into code. The steps:
1- When I am 100 meters before a speed camera 🎥, the app alerts me ("There is a speed camera, please slow down.").
2- Please just post the code; I only have basic programming knowledge in Swift.

I'm not sure if I got you right, so please correct me if I'm wrong.
You want an iOS app that tells you if there is a speed camera on the road you're driving, right?
So you have a few possibilities to achieve that:
Have a look at the App Store; there are lots of such apps (e.g. TomTom) (easiest way).
If you want to build your own app, you can make use of the Navigation SDK provided by Mapbox: https://www.mapbox.com/help/ios-navigation-sdk/ (some programming skills needed).
Build your own app from scratch (much work and advanced programming skills needed).
Whether you build your app with Mapbox or on your own, you'll need the GPS locations of speed cameras, such as those provided here: https://www.scdb.info/
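If you want to experiment without an SDK first, a minimal Swift sketch of the core idea could look like the following: CoreLocation tracks the device and a local notification fires when you come within 100 m of a known camera. The SpeedCameraAlerter class and the placeholder camera coordinates are illustrative, not from any framework; a real app would also need to request notification permission and handle background location.

import CoreLocation
import UserNotifications

// Minimal sketch: alert when the device comes within 100 m of a known speed camera.
// In a real app the camera coordinates would come from a database such as scdb.info.
final class SpeedCameraAlerter: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let cameraLocations: [CLLocation]
    private var alertedCameras = Set<Int>()   // avoid repeating the same alert

    init(cameraCoordinates: [CLLocationCoordinate2D]) {
        self.cameraLocations = cameraCoordinates.map {
            CLLocation(latitude: $0.latitude, longitude: $0.longitude)
        }
        super.init()
        locationManager.delegate = self
        locationManager.desiredAccuracy = kCLLocationAccuracyBestForNavigation
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        for (index, camera) in cameraLocations.enumerated() {
            let distance = current.distance(from: camera)   // metres
            if distance <= 100, !alertedCameras.contains(index) {
                alertedCameras.insert(index)
                notifyDriver()
            }
        }
    }

    private func notifyDriver() {
        // Assumes notification permission was requested elsewhere
        // via UNUserNotificationCenter.requestAuthorization.
        let content = UNMutableNotificationContent()
        content.title = "Speed camera ahead"
        content.body = "There is a speed camera, please slow down."
        let request = UNNotificationRequest(identifier: UUID().uuidString,
                                            content: content,
                                            trigger: nil)   // deliver immediately
        UNUserNotificationCenter.current().add(request)
    }
}

That is only the distance check; routing, map display, and keeping the camera database up to date are where the SDKs above save you real work.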

Related

A-Frame: FOSS Options for widely supported, markerless AR?

A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library; for example, this demo from Zapworks using their A-Frame SDK: https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame-compatible solution that uses only free/open-source components for doing this, only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, and even using the readings with gravity filtered out I wasn't able to get a high enough level of accuracy. (As it happened, this particular project did not need it.)

Is there any plan for ARCore to support saving and loading sparse point clouds for localisation purposes?

I'm trying to write an app for detecting "where you are" in a building using ARCore. I'd like to use previously learnt and saved feature points to provide the initial sync position, as well as to help continuously update the position accurately. But this feature does not currently appear to be supported in ARCore.
Currently I'm using tracked images as a way to do an initial sync. It works, but not brilliantly: alignment is often a few degrees off, and you have to approach the image pretty slowly and deliberately. And then once synced there is drift... Yes, loop closing works pretty well when it gets back to somewhere it recognises, but it needs to build up that map every time you start the session.
So, the obvious question: are there any plans for Google to implement "Area Learning" as it existed back in Google Tango? It looks like Cloud Anchors might be some attempt to do this, but clearly that's all hosted on Google, and it is strictly limited in how long that data is stored. Currently that's just not a workable solution. OTOH, Apple's ARKit now seems to provide just what is needed:
https://developer.apple.com/documentation/arkit/saving_and_loading_world_data
Does this mean that Apple / ARKit is the only way to go for the app? Hope not...
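For reference, the ARKit feature linked above boils down to roughly the following on the Apple side (my own sketch, assuming an ARSCNView running a world-tracking session; the function names are just illustrative):

import ARKit

// Sketch of ARKit's world-map persistence, per the linked Apple documentation.
// `sceneView` is assumed to be an ARSCNView with an active world-tracking session.
func saveWorldMap(from sceneView: ARSCNView, to url: URL) {
    sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Can't get world map: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true)
            try data.write(to: url, options: .atomic)
        } catch {
            print("Saving failed: \(error)")
        }
    }
}

func restoreWorldMap(into sceneView: ARSCNView, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map   // relocalize against the saved map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}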
You might want to check out persistent Cloud Anchors, which are still in development.
From the documentation:
Note: We're currently developing persistent Cloud Anchors, which can be resolved for much longer. Before making the feature broadly available, we're looking for more developers to help us explore and test persistent Cloud Anchors in real world apps at scale. See here if you're interested.

Developing an iOS app, questions about the API I should use. (thinking of SpriteKit)

This is a pretty basic question and doesn't really need much depth for an answer. I'm currently interested in developing an app for iPhones, and after learning the Swift 2 language from the Mac site, I was just wondering which API I should use.
I did some moderate research, and so far SpriteKit seems like the way to go (a bit of a hassle if I want to port it over to Android, but not impossible). But I just wanted to make sure it's the right way to go.
The app idea revolves around keeping track of some form of progress and being able to use this data to generate graphs for the user. I'd also like to add some image functionality to the application as well (similar to the Health app and how it measures distance walked).
I know this sounds vague, but would SpriteKit be able to do these things? Or is there another API worth looking into? (I've checked out Metal and SceneKit as well, but I'm leaning towards the more 2D type of app.)
From my point of view, using SpriteKit for this is overkill.
There are some great frameworks for drawing graphs, like Charts:
https://github.com/danielgindi/ios-charts
SpriteKit is more gaming-oriented; if you want a "simple" app you don't need it.
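For the graph part, a rough sketch with the Charts library might look like this. The makeProgressChart function and progressValues are made-up names for your own data, and the exact initializer names can differ between Charts versions:

import UIKit
import Charts   // https://github.com/danielgindi/Charts (formerly ios-charts)

// Rough sketch of plotting tracked progress values with Charts instead of SpriteKit.
func makeProgressChart(progressValues: [Double]) -> LineChartView {
    let chartView = LineChartView(frame: CGRect(x: 0, y: 0, width: 320, height: 240))

    // One ChartDataEntry per sample: x is the sample index, y is the recorded value.
    let entries = progressValues.enumerated().map { index, value in
        ChartDataEntry(x: Double(index), y: value)
    }
    let dataSet = LineChartDataSet(entries: entries, label: "Progress")
    chartView.data = LineChartData(dataSet: dataSet)
    return chartView
}

You would then add the returned view to a normal UIKit view hierarchy, which is also where the "serious" interface parts of the app belong.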
I don't think SpriteKit is the right API for what you're trying to accomplish.
SpriteKit is all about moving and manipulating images on the screen. I am quite familiar with the SpriteKit API,
and I do not see how it would help with generating graphs or anything like that.
It is also not the right API for displaying a proper interface for "serious" apps.

Accessing data from HealthKit with Swift

I'm a complete noob at Swift (and Xcode); as a matter of fact, the only programming language I (somewhat) know is JavaScript.
I'm trying to make a Swift SpriteKit game, and I would like to access the number of calories burned in HealthKit.
The idea is that my game will provide more points the more calories you burn using other apps like Endomondo. My app does not actually track anything, I would just like to access other data left by other apps in the Health App.
Is this even possible? (I'm running the latest version of everything, from Mac OS X to Xcode)
Certainly. I don't think there is anything technically preventing you from making calls to the HealthKit APIs in your game. In fact, you're fairly free to mix and match the use of any public frameworks provided on iOS.
One thing to keep in mind is privacy and disclosure of the use of health data. The user will have to explicitly grant your app permission to see data.
HealthKit is a really rich API with lots of ways to access lots of different kinds of data, and you're really only interested in a small part right now. A quick way to experiment is to create a new Swift SpriteKit game from the project template in Xcode, do your research on HealthKit, and see if you can just log the number of calories burned since some point in time while your app is running. If you can do that, the rest is details (as in, the entire app :-)).
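As a concrete starting point, that "log calories burned since some point in time" experiment could look roughly like this. It assumes the HealthKit capability is enabled and an NSHealthShareUsageDescription is in your Info.plist; fetchCaloriesBurned is a made-up helper you would call from your game code:

import HealthKit

let healthStore = HKHealthStore()

// Sketch: read the total active calories burned since a given start date.
func fetchCaloriesBurned(since startDate: Date,
                         completion: @escaping (Double?) -> Void) {
    guard HKHealthStore.isHealthDataAvailable(),
          let energyType = HKQuantityType.quantityType(forIdentifier: .activeEnergyBurned) else {
        completion(nil)
        return
    }

    // The user must explicitly grant access; note that HealthKit does not reveal
    // whether *read* permission was actually granted, only that the request completed.
    healthStore.requestAuthorization(toShare: nil, read: [energyType]) { success, _ in
        guard success else { completion(nil); return }

        let predicate = HKQuery.predicateForSamples(withStart: startDate,
                                                    end: Date(),
                                                    options: .strictStartDate)
        let query = HKStatisticsQuery(quantityType: energyType,
                                      quantitySamplePredicate: predicate,
                                      options: .cumulativeSum) { _, statistics, _ in
            let kilocalories = statistics?.sumQuantity()?.doubleValue(for: .kilocalorie())
            completion(kilocalories)
        }
        healthStore.execute(query)
    }
}

If that logs sensible numbers, converting calories into game points is just arithmetic on top.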
Here are what I think might be some helpful links, good luck on your project!
https://itunes.apple.com/us/book/swift-programming-language/id881256329?mt=11
https://developer.apple.com/library/ios/documentation/HealthKit/Reference/HealthKit_Framework/index.html
You'll also find some good documentation on SpriteKit (references and guides) on the iOS Developer Library site.

Augmented Reality, Move 3d model respective to device movement

I am working on an augmented reality app. I have augmented a 3D model using OpenGL ES 2.0. Now, my problem is that when I move the device, the 3D model should move according to the device's movement, just like this app does: https://itunes.apple.com/us/app/augment/id506463171?l=en&ls=1&mt=8. I have tried using UIAccelerometer to achieve this, but I have not been able to make it work.
Should I use UIAccelerometer to achieve this, or some other framework?
This requires a complicated algorithm rather than just the accelerometer. You'd be better off using a third-party framework such as Vuforia or Metaio; that would save a lot of time.
Download and check out a few sample apps. That is exactly what you want.
https://developer.vuforia.com/resources/sample-apps
You could use Unity3D to load your 3D model and export an Xcode project, or you could use OpenGL ES.
From your comment, am I to understand that you want to have the model anchored at a real-world location? If so, the easiest way to do it is by giving your model a GPS location and reading the device's GPS location. There is actually a lot of research going into the subject of positional tracking, but for now GPS is your best (and likely only) option without going into advanced positional tracking solutions.
Seeing as I can't add comments due to my account being too new, I'll also add a warning: don't try to position the device using the accelerometer data. You'll get far too much error due to the double integration of acceleration to position (see "Indoor Positioning System based on Gyroscope and Accelerometer").
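For what it's worth, a rough sketch of the GPS-offset idea: treat short distances as a flat plane and convert the latitude/longitude difference between the device and the model into east/north metres, which you can then use to translate the model in your scene. The function name and the approximation are mine, not from any SDK:

import Foundation
import CoreLocation

// Sketch: flat east/north offset (in metres) from the device to a GPS-anchored model.
// Reasonable for short distances; ignores altitude and Earth curvature.
func eastNorthOffset(from device: CLLocationCoordinate2D,
                     to model: CLLocationCoordinate2D) -> (east: Double, north: Double) {
    let metresPerDegreeLatitude = 111_320.0
    let north = (model.latitude - device.latitude) * metresPerDegreeLatitude
    let east = (model.longitude - device.longitude) * metresPerDegreeLatitude
               * cos(device.latitude * .pi / 180)
    return (east, north)
}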
I would definitely use Vuforia for this task.
Regarding your comment:
I am using Vuforia framework to augment 3d model in native iOS. It's okay. But, I want to move 3d model when I move device. It is not provided in any sample code.
Well, it's not provided in any sample code, but that doesn't necessarily mean it's impossible or too difficult.
I would do it like this (I'm working on Android in C++, but it must be very similar on iOS anyway):
locate your renderFrame function
simply do your translation before the actual DrawElements call:
QCARUtils::translatePoseMatrix(xMOV, yMOV, zMOV, &modelViewProjectionScaled.data[0]);
where the movement data would be prepared by a function that reads time and acceleration from the accelerometer...
What I actually find challenging is finding just the right calibration to properly adjust the output from the sensor's API, which is a completely different question, unrelated to AR/Vuforia. Here I guess you've got a huge advantage over Android devs when it comes to the variety of devices...
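On the iOS side, those accelerometer readings could come from CoreMotion. Here is a rough sketch of a reader whose latest values you could feed into xMOV/yMOV/zMOV; the class and property names are made up, and, as another answer warns, turning acceleration into position drifts quickly, so treat this as a starting point rather than a robust solution:

import CoreMotion

// Sketch: keep the latest gravity-free user acceleration around so the render
// loop can use it when translating the pose matrix.
final class MotionTranslationSource {
    private let motionManager = CMMotionManager()
    private(set) var latestAcceleration = CMAcceleration(x: 0, y: 0, z: 0)

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let motion = motion else { return }
            // userAcceleration already has gravity filtered out.
            self?.latestAcceleration = motion.userAcceleration
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}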
