Back Story: I was approached to write an app, but iOS isn't something that I have any experience with.
Short Description: Want an app that shows a coverage map for use in an airplane while spraying.
Long Description: The customer has several airplanes that he uses to spray chemicals on farm fields. They want a system that displays a map of the area, shows a boundary of the field(s) to be sprayed on the current flight, and records the flight path of the airplane. The user interface needs to be extremely clean and simple because the user will be flying an airplane while using it. Dropbox will be used to transfer data between the airplane and the main office. Someone in the office will create a list of fields that need to be sprayed, and the boundary information for those fields is stored in shapefile format. Those shapefiles need to be read by the app and displayed over satellite imagery. The airplane already has a high-accuracy GPS receiver on it that outputs NMEA position data at 10 Hz or faster. The customer also wants to attach a pressure sensor to the spray circuit to monitor whether it is dropping spray or not. That information needs to go to the app as well, to paint the screen where the plane has already been. This will help the operator eliminate overlaps and skips.
As for getting the GPS position data and pressure data into an iPad, I'm guessing that 802.11 wireless is the simplest way, with that data being supplied in a TCP data stream. I can build a device that makes the data available as a TCP server on an 802.11 wireless network.
From there, I need an app on the iPad that connects to that server to get the data stream. That data gets parsed and turned into a map.
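For reference, here's a minimal sketch of what that client side could look like on the iPad, using Swift's Network framework. The host, port, and line splitting are placeholder assumptions, not a finished design:

import Network

// Connect to the (hypothetical) on-board TCP server and read NMEA sentences.
let connection = NWConnection(host: "192.168.4.1", port: 2000, using: .tcp)
var buffer = Data()

func receiveNext() {
    connection.receive(minimumIncompleteLength: 1, maximumLength: 4096) { data, _, isComplete, error in
        if let data = data {
            buffer.append(data)
            // NMEA sentences end with CR/LF, e.g. "$GPGGA,...*47\r\n".
            while let newline = buffer.range(of: Data("\r\n".utf8)) {
                let line = buffer.subdata(in: buffer.startIndex..<newline.lowerBound)
                buffer.removeSubrange(buffer.startIndex..<newline.upperBound)
                if let sentence = String(data: line, encoding: .ascii) {
                    print("NMEA:", sentence)   // hand off to a parser / map layer here
                }
            }
        }
        if error == nil && !isComplete { receiveNext() }   // keep reading the stream
    }
}

connection.start(queue: .main)
receiveNext()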
I have experience with developing apps for Windows in VB.net and two apps for Android. How much difference is there with development concepts in iOS?
I see that iOS uses OpenGL for the graphics, which is ideal for a map. Can I easily access terrain data like what is available in Google Earth?
Like dasdom, I will encourage you not to begin with such a complex project. Perhaps divide the several goals in your requirements into tiny apps to get in tune with the iPhone SDK. You also have to learn Objective-C, which implies that you are already good enough at C programming.
Study these topics: Objective-C, iOS memory management, sockets, MapKit, Quartz and Core Graphics, etc.
Or you can buy this excellent book by Aaron Hillegass:
"iPhone Programming: The Big Nerd Ranch Guide"
That book covers almost all the topics you need to introduce yourself to the iOS programming madness :)
I'm trying to understand how IoT could help the company that I'm working for. This company provides software and hardware to others - take, for example, printers.
We already build and deploy software on the printers which have the ability to report back information (somewhere) so that management can react to certain data points:
Printer ink levels
Printer status (working, jammed, off, on, etc)
Printer state (good, errored, etc)
What would an IoT solution do for this type of situation?
As I understand it, IoT is mainly all about data and how we can analyze the data coming in from a fleet of devices.
Some things I've thought might be beneficial:
Instead of sending just "Printer Ink Low" and "Printer Ink Empty", continuously send the raw printer ink levels somewhere where you can analyze the average rates of printer ink use. You can then use that data to preemptively refill the ink (a rough sketch of that rate estimate follows this list).
Separation of concerns - software around the printer ink or printing process should be separate from the reporting aspect of the printer ink levels.
Similar to #1, sending raw data somewhere else to be processed/learned from is beneficial, as processing the data is moved off the device (which should focus on printing) to somewhere else (the cloud, a local compute server).
This collected data can be used to analyze trends and make business decisions.
This collected data can be used to preemptively act on situations instead of reactively (increasing efficiency).
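To make #1 concrete, here is a small illustrative sketch (Swift, with made-up types) of the kind of analysis that raw level data enables: fit a depletion rate to timestamped readings and project when the cartridge will hit empty, so a refill can be scheduled preemptively.

import Foundation

// Hypothetical reading: ink level (0.0...1.0) plus the time it was reported.
struct InkReading {
    let timestamp: Date
    let level: Double
}

// Estimate the depletion rate with a least-squares line fit and project when
// the cartridge will reach empty. Purely illustrative, not production code.
func predictEmptyDate(from readings: [InkReading]) -> Date? {
    guard readings.count >= 2, let first = readings.first else { return nil }
    let xs = readings.map { $0.timestamp.timeIntervalSince(first.timestamp) }  // seconds
    let ys = readings.map { $0.level }
    let n = Double(readings.count)
    let sumX = xs.reduce(0, +), sumY = ys.reduce(0, +)
    let sumXY = zip(xs, ys).map(*).reduce(0, +)
    let sumXX = xs.map { $0 * $0 }.reduce(0, +)
    let denominator = n * sumXX - sumX * sumX
    guard denominator != 0 else { return nil }
    let slope = (n * sumXY - sumX * sumY) / denominator   // level change per second
    let intercept = (sumY - slope * sumX) / n
    guard slope < 0 else { return nil }                   // not actually depleting
    return first.timestamp.addingTimeInterval(-intercept / slope)   // solve level(t) = 0
}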
Other questions:
As I understand IoT, it usually involves a physical device or sensor that can connect to the Internet (either itself or through another device) and send up data. But can it be more of a software device, i.e., a Docker module that monitors the system and reports data? Would that be considered an IoT solution/device?
Although Stack Overflow is not the right place for such open questions (I would suggest other fora, e.g., ResearchGate), in general the answer to all your questions is yes, you are on the right track.
Especially for your last question, you have to have some kind of hardware (e.g., an extra or existing ink level meter inside the printer). All those hardware devices that interact with the environment and monitor something (temperature, humidity, salt in water) are the sensors. Behind the sensors, you can have any kind of software/cloud/program/other hardware (e.g., Raspberry Pi/Arduino) to implement your specific needs.
What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
===Edit1, Things I've tried===
1) Feature points
I've played around with the exposed raw feature points, and I'm now convinced that in their current state they are a dead end:
they are not really raw feature points; they only expose positions, but none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
the reported feature points often change a lot even when the camera input is barely changing, with many appearing or disappearing.
So overall I think it's unreasonable to try to use them in any meaningful way: I can't get any kind of good point matching within one device, let alone across several.
An alternative would be to implement my own feature point detection and matching, but that would be more replacing ARKit than leveraging it.
2) QR code
As #Rickster suggested, I've also tried identifying an easily recognizable object like a QR code and getting the relative transform from that fixed point (see this question). It's a bit difficult, and it required me to use some OpenCV to estimate camera pose. But more importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc. all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects (and it's what the sketch after these points assumes).
All of this gives you only the basis for creating a shared experience — multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity.
(BTW, the second and third points above are where you get the "2 to 6" figure from #andy's answer — there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8 peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
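Here's a rough sketch of that world-map sharing flow, assuming you already have an ARSession and a connected MCSession (the names arSession and mcSession are placeholders, and error handling is minimal):

import ARKit
import MultipeerConnectivity

// Sender: capture the current world map, archive it, and push it to peers.
func shareWorldMap(arSession: ARSession, mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Can't get world map: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            // ARWorldMap supports NSSecureCoding, so it can be archived to Data...
            let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
            // ...and sent over whatever transport you like; here, Multipeer Connectivity.
            try mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
        } catch {
            print("Failed to share world map: \(error)")
        }
    }
}

// Receiver: unarchive the map and relocalize the local session to it.
func receiveWorldMap(_ data: Data, arSession: ARSession) throws {
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    arSession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}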
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
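If you want to experiment with that alignment mode, it's a one-line configuration change (sceneView below is assumed to be an ARSCNView):

let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading   // y-axis follows gravity, -z points toward true north
sceneView.session.run(configuration)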
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2...6 users.
For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new augmented reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. Now you can create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you can build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)
        return
    }
    // worldMap is only valid inside this completion handler, so configure
    // and (re)run the session here.
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration)
}
AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore it on the next launch of the app, in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
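A sketch of the persistence side under the same assumptions: archive the map to a file in the app's Documents directory, then reload it on the next launch and pass it as initialWorldMap. The file name and error handling are illustrative only.

import ARKit

let mapFileURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("saved.worldmap")

func saveWorldMap(_ map: ARWorldMap) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
    try data.write(to: mapFileURL, options: .atomic)
}

func loadSavedWorldMap() throws -> ARWorldMap? {
    guard FileManager.default.fileExists(atPath: mapFileURL.path) else { return nil }
    let data = try Data(contentsOf: mapFileURL)
    return try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
}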
These aren't bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has been started. So if you can have all players, one after another, put and align their devices at the same physical location and start the session there, there you go. Imagine the inside edges of an L-square ruler fixed to whatever is available. Or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, and (re)init the session.
Medium: Save the player from aligning the phone manually; instead, detect a real-world marker with image analysis, just like #Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate the data on a server, then turn off ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space, a shared reference, and specifically I have in mind being on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and using the inputs from all the controllers, the game would remain theoretically in sync on all devices. Still testing. The obvious potential problem is latency or disruption in the network and the sync falls apart, and it would be difficult to recover except by restarting the game. The approach and framework may work for some types of games fairly well - for example, straightforward arcade-style games, but certainly not for many others - for example, any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.
I am working on a small learning project with Arduino in which there are multiple sensors like motors, a 7-segment display, a temperature sensor, an LCD display, buttons, etc., and all of these need to talk to a single iPhone or iPad.
My first thought is to buy multiple Flora Arduinos, each having one sensor. Then each independent unit can connect via Bluetooth. But I am not sure if that is a good idea or not. How will they all connect to a single device?
The idea is something like 10 Arduino devices sending signals to and receiving signals from one iPhone or iPad.
As Piglet and TomServo said, the motors, 7-segment display, temperature sensor, LCD display, and buttons are not sensors; they can be considered peripherals!
Even though it's a vast topic and you will need a lot of work to accomplish this, it's not hard if you have embedded C or C programming experience.
You should look into how microcontrollers work and how to start programming them; the Arduino site should be a good starting point:
https://www.arduino.cc/en/Guide/HomePage
https://www.quora.com/Which-are-the-Best-books-on-learning-to-program-microcontroller
Once you have set up your environment and written a simple program like toggling an LED, you should start looking at communication protocols.
The devices/peripherals communicate with the master/host microcontroller (Arduino) through a set of protocols such as SPI, UART, I2C, etc. You will also need to decide on the ICs; for example, the HDSP-5503 is a seven-segment display. You can find more at:
http://www.mouser.com/ProductDetail/Broadcom-Avago/HDSP-5503/?qs=sGAEpiMZZMsx4%2fFVpd5sGZOLZP3%252bEaxq
You cannot just buy an Arduino with a ton of sensors and connect it to an iPhone; the iPhone will require a custom app of yours to support your board. Check out this one:
https://itunes.apple.com/us/app/hm10-bluetooth-serial-lite/id1030454675?mt=8
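For the iPhone side, here is a minimal Core Bluetooth sketch of what such a custom app might do. The FFE0/FFE1 UUIDs are the commonly documented defaults for HM-10-style BLE UART modules; verify them for whatever module you actually use.

import Foundation
import CoreBluetooth

// Minimal central that connects to an HM-10-style BLE UART module and
// receives notification data from the board.
class BLEUartClient: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    let serviceUUID = CBUUID(string: "FFE0")   // assumed HM-10 default service
    let charUUID = CBUUID(string: "FFE1")      // assumed HM-10 default characteristic
    var central: CBCentralManager!
    var peripheral: CBPeripheral?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: [serviceUUID])
        }
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        self.peripheral = peripheral          // keep a strong reference
        central.stopScan()
        central.connect(peripheral)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.delegate = self
        peripheral.discoverServices([serviceUUID])
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        guard let service = peripheral.services?.first else { return }
        peripheral.discoverCharacteristics([charUUID], for: service)
    }

    func peripheral(_ peripheral: CBPeripheral,
                    didDiscoverCharacteristicsFor service: CBService, error: Error?) {
        guard let characteristic = service.characteristics?.first else { return }
        peripheral.setNotifyValue(true, for: characteristic)   // stream bytes from the board
    }

    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic, error: Error?) {
        if let data = characteristic.value {
            print("Received \(data.count) bytes from the board")   // parse your protocol here
        }
    }
}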
Hopefully this will help. Good luck!
The Problem:
Many touchscreens used on PCs/laptops with Windows 8 have multitouch capability, but no digitizer to convey stylus pressure data. The current popular solution is to use Bluetooth on the stylus to stream the pressure data and combine that with the touch information. Sadly, after scouring the internet, I've found that pretty much all of these types of styluses are designed to function with iOS or Android devices, for example:
Jot Touch Pro: http://www.adonit.net/jot/touch/
Pencil: http://www.fiftythree.com/pencil
etc...
The Question(s):
Is it possible to sync one of these iOS/Android styluses with a PC and be able to view the data stream from the device? Would this data be useful (not encrypted or proprietary)?
What would be involved in creating some sort of interface/driver so that the stylus could be used in a drawing application? Am I crazy for even attempting something like this without building my own hardware?
I really just want to get an idea of the scope of a project like this before I attempt anything crazy. I would be extremely grateful for any help provided.
For those not in the know, Spaceteam is a very popular and very fun multiplayer game for iOS.
It allows for real time gameplay among multiple devices on an ad-hoc Wifi network - how does it do this?
Are there published libraries describing how to build protocols on top of ad-hoc networking libraries? Is it iOS specific, or would it be possible to build a variety of applications across different platforms?
Quickly, answer before we hit the asteroid!
Specifically which aspect are you interested in? There's nothing particularly special about mobile devices or ad hoc Wi-Fi networks (except in an ad hoc network, not all devices may be able to communicate with each other, so some mesh networking can help but unnecessarily complicates matters for the normal case).
I'll answer the broader question first, because it's more interesting. In my experience, there are a handful of major considerations:
Server/client or peer-to-peer? By this I mean whether there's a "master" deciding the true state of the world and communicating this to all clients. Avara is the only game I know of that is "peer-to-peer" in this sense (peers sent commands to all other peers; this proved bandwidth-heavy for modem users on 6-player games). I am not aware of games using more sophisticated network topologies to communicate game state (e.g. only sending data to one client on each LAN).
What do you do about latency? Avara is the only game I know of which lags everyone locally by the "latency tolerance" in order to get a consistent state of the world, which was terrible if someone was on a modem (turning off compression helped a lot). There are various ways to do "latency compensation" (e.g. in Half-Life/CS), some of which could also work on peer-to-peer games.
Time sync? For client-server games, you at least need to worry about a changing RTT. For peer-to-peer games, I think you also want to agree on timing that minimizes the effective maximum latency.
What if clients disagree about the state of the world? Avara just lets peers decide on their own state of the world (and displays "reality fragmentation detected" if it senses a mismatch, which might happen due to dropped packets or a too-low "latency tolerance").
What if a player leaves? For a P2P game, you might have to agree on a consistent game state (e.g. if the player was disconnected after sending commands to a subset of other peers). For a client-server game, you might have to elect a new master.
And now, after watching the Spaceteam trailer:
I have no idea how it works, since I haven't reverse-engineered the protocol. However, it's pretty simple to make something that works well enough:
Use some sort of P2P discovery to find players (e.g. Bonjour; there should be plenty of docs and samples out there). A rough sketch using Multipeer Connectivity follows this list.
Communicate with peers. I've done this with GameKit circa iOS 3/4 (I'm not sure if it still works over Wi-Fi).
Elect a master. This can be as simple as whoever presses "ready" last attempts to be the master. In some edge cases you might have to handle failure.
Let the master decide everything. Spaceteam is not latency-sensitive; Wi-Fi latency tends to be at most a handful of milliseconds, and nobody's really going to notice if one device is slower by 100 ms (as long as the UI responds fast enough).
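For a rough idea of points 1-3 with today's APIs, here's a hedged Multipeer Connectivity sketch: every device both advertises and browses, auto-accepts invitations, and all peers deterministically agree that the one with the lexicographically smallest display name is the master. The service type string is hypothetical, and data handling (MCSessionDelegate) is omitted.

import UIKit
import MultipeerConnectivity

let serviceType = "my-partygame"   // hypothetical; must be 1-15 lowercase letters/hyphens

class PartyNetworking: NSObject, MCNearbyServiceAdvertiserDelegate, MCNearbyServiceBrowserDelegate {
    let myPeerID = MCPeerID(displayName: UIDevice.current.name)
    lazy var session = MCSession(peer: myPeerID, securityIdentity: nil, encryptionPreference: .required)
    lazy var advertiser = MCNearbyServiceAdvertiser(peer: myPeerID, discoveryInfo: nil, serviceType: serviceType)
    lazy var browser = MCNearbyServiceBrowser(peer: myPeerID, serviceType: serviceType)

    func start() {
        advertiser.delegate = self
        browser.delegate = self
        advertiser.startAdvertisingPeer()   // let others find us...
        browser.startBrowsingForPeers()     // ...and find others ourselves
    }

    // Accept every invitation (fine for a trusted local party game).
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }

    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
                 withDiscoveryInfo info: [String: String]?) {
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}

    // "Election": everyone independently picks the peer with the smallest
    // display name, so all devices agree on the same master without voting.
    var masterPeer: MCPeerID {
        ([myPeerID] + session.connectedPeers).min { $0.displayName < $1.displayName }!
    }
}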
There is a library made by the creator of Spaceteam that does this for Unity games.
https://github.com/hengineer/CaptainsMess
The creator of Spaceteam also wrote an old blog post about Networking in Spaceteam
http://spaceteamadmirals.club/blog/the-spaceteam-networking-post/
There is an iOS only library that will connect nearby devices easily called MultipeerConnectivity https://developer.apple.com/documentation/multipeerconnectivity
If you want something that will work cross-platform I have an example app here: https://github.com/brendaninnis/LocalNetworkingApp, which I explain in great detail here: http://brendaninnis.ca/connect-nearby-devices-part-1.html