Obstacle Avoidance Sensor Data Using the DJI SDK (V4.3.2) - iOS

I am currently working with the Inspire 2 and the M210 RTK. Can anyone help me get the obstacle avoidance sensor data output from the drone using the Mobile SDK? I would like to get the exact distance reading to the object in front of the drone as a constantly updating value. Any code examples? I am relatively new to the DJI SDK, so any help would be greatly appreciated. Thanks in advance!

Before getting into this, keep in mind that the distance you receive is not perfectly accurate.
There are a few ways you can access these sensors with the Mobile SDK.
1/ Traditional interface, using the obstacle avoidance part of DJIFlightAssistant:
Implement the DJIFlightAssistantDelegate protocol and its didUpdateVisionDetectionState method. The DJIVisionDetectionState object has a detectionSectors property, which is an array of DJIObstacleDetectionSector objects, each of which has an obstacleDistanceInMeters value. (A delegate sketch follows after the key listener example below.)
2/ Using the keys: you can start listening to the flight controller key DJIFlightAssistantParamDetectionSectors. The update block will receive an array of DJIObstacleDetectionSector objects, each with an obstacleDistanceInMeters value:
guard let detectionSectorsKey = DJIFlightControllerKey(param: DJIFlightAssistantParamDetectionSectors) else {
    // Failed to create the key
    return
}
guard let keyManager = DJISDKManager.keyManager() else {
    // Failed to access the keyManager. You're most likely not registered yet.
    return
}
keyManager.startListeningForChanges(on: detectionSectorsKey, withListener: self, andUpdate: { (oldValue, newValue) in
    guard let sectors = newValue?.value as? [DJIObstacleDetectionSector] else { return }
    // Do something with each sector's obstacleDistanceInMeters.
})
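For option 1/, a minimal sketch of the delegate route. It assumes the connected product is a DJIAircraft and that the delegate method keeps its Objective-C selector name when imported into Swift; check the exact signature against your SDK headers.

class ObstacleMonitor: NSObject, DJIFlightAssistantDelegate {

    func startMonitoring() {
        // Assumes an aircraft product with vision sensors (Inspire 2 / M210).
        guard let aircraft = DJISDKManager.product() as? DJIAircraft,
              let flightAssistant = aircraft.flightController?.flightAssistant else {
            return
        }
        flightAssistant.delegate = self
    }

    func flightAssistant(_ assistant: DJIFlightAssistant, didUpdateVisionDetectionState state: DJIVisionDetectionState) {
        // Called repeatedly while the vision system is active.
        for sector in state.detectionSectors ?? [] {
            print("Obstacle at \(sector.obstacleDistanceInMeters) m")
        }
    }
}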

Related

Inaccurate face detection using ML Kit Face detection, doesn't work with selfies

I am creating an iOS app that uses the Firebase ML Kit face detection, and I am trying to allow users to take a photo with their camera and check whether there is a face in it. I have followed the documentation and some YouTube videos, but it just doesn't seem to work properly or accurately for me. I did some testing with a photo library, not just pictures that I take, and what I found is that it works well when I use selfies from Google, but when I take my own selfies it never seems to work. I noticed that when I take a selfie on my camera it does a "mirror" kind of thing where it flips the image, but I even took a picture of my friend using the front-facing camera and it still didn't work. So I am not sure whether I implemented this wrong or what is going on. I have attached the relevant code to show how it was implemented. Thanks to anyone who takes the time to help out; I am a novice at iOS development, so hopefully this isn't a waste of your time.
func photoVerification() {
    let options = VisionFaceDetectorOptions()
    let vision = Vision.vision()
    let faceDetector = vision.faceDetector(options: options)
    let image = VisionImage(image: image_one.image!)
    faceDetector.process(image) { (faces, error) in
        guard error == nil, let faces = faces, !faces.isEmpty else {
            // No face detected, flag the image
            print("No face detected!")
            self.markImage(isVerified: false)
            return
        }
        // Face has been detected, offer the verified tag to the user
        print("Face detected!")
        self.markImage(isVerified: true)
    }
}
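One thing worth checking, given the mirror/flip behaviour mentioned above: photos taken with the camera usually carry an imageOrientation other than .up, and if I remember the ML Kit docs correctly, the UIImage path expects an upright image (it ignores rotation metadata). A minimal sketch of a normalization helper (the helper name is mine) that redraws the photo before handing it to VisionImage:

// Hypothetical helper: redraw the image so its imageOrientation becomes .up
func normalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: image.size))
    return UIGraphicsGetImageFromCurrentImageContext() ?? image
}

// Usage in photoVerification():
// let image = VisionImage(image: normalizedImage(image_one.image!))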

AGSFeature.attributes of ArcGIS Runtime 100 not giving all parameters in its dictionary

I am sending a geometry query to show the selected features on the map and to get the selected features.
Both things are working okay, but when I check the attribute dictionary of a feature it contains only 5 key/value pairs, while the same function on Android returns 10 key/value pairs.
I am making the query like this:
let query = AGSQueryParameters()
if let selectionGraphicGeometry = selectionGraphic?.geometry {
    let geometry = AGSGeometryEngine.simplifyGeometry(selectionGraphicGeometry)
    query.geometry = geometry
}
selectableLayer?.selectFeatures(withQuery: query, mode: AGSSelectionMode.add, completion: { (result, error) in
    if let features = result?.featureEnumerator().allObjects {
        for feature in features {
            let keys = feature.attributes.allKeys
        }
    }
})
I don't know what I am doing wrong here.
In the Version 100 Runtime, we take a slightly different approach for efficiency's sake.
Features will by default only include the minimal set of fields required for rendering and editing. When you are making a selection, you are working with those features, so that's why you're seeing the smaller set of fields.
If you need all the fields for your selected features, you should actually perform a query on your AGSServiceFeatureTable and select the features based off that.
Something like this:
let table = selectableLayer.featureTable as? AGSServiceFeatureTable
table?.queryFeatures(with: query, queryFeatureFields: .loadAll) { (result, error) in
    guard error == nil else {
        print("Error selecting features: \(error!.localizedDescription)")
        return
    }
    guard let features = result?.featureEnumerator().allObjects else {
        return
    }
    selectableLayer.select(features)
    for feature in features {
        let keys = feature.attributes.allKeys
        print(keys)
    }
}
What's odd is that you say you're seeing a different number of fields returned on Android than on iOS. Are you sure the Android app is displaying the same layer with the same renderer?
One other point: You might be better off using the Esri Community (GeoNet). The ArcGIS Runtime SDK for iOS Forum can be found here. Do post a question there if you are seeing different numbers of fields on iOS and Android with the same layer and renderer.
Hope this helps!
P.S. There are two related things you might want to know.
AGSArcGISFeature instances are now Loadable. So if you have an individual feature and you want to get all the fields for it from the source service, you can call load(completion:) on it. Or you could pass an [AGSArcGISFeature] array to the AGSLoadObjects() helper function. However, that function will make a separate network request for each feature, so if your array isn't small, that could lead to a bad user experience.
You can set your AGSServiceFeatureTable.featureRequestMode to .manualCache. You then need to call populateFromServiceWithParameters() to load the precise data that you need locally (and as you pan and zoom the map, you will need to manually manage this cache). See here for more details.
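For that second point, a rough sketch of the manual-cache mode. The service URL is hypothetical, and the populateFromService signature is written from memory, so verify it against the 100.x API reference:

// featureRequestMode must be set before the table is loaded, so create the table yourself
// rather than reusing the already-loaded one from the layer.
let table = AGSServiceFeatureTable(url: URL(string: "https://example.com/arcgis/rest/services/MyService/FeatureServer/0")!)
table.featureRequestMode = .manualCache

let params = AGSQueryParameters()
params.whereClause = "1=1" // narrow this down in a real app

// outFields ["*"] asks the service for every field, not just the minimal rendering set.
table.populateFromService(with: params, clearCache: true, outFields: ["*"]) { (result, error) in
    guard error == nil else {
        print("Populate failed: \(error!.localizedDescription)")
        return
    }
    let count = result?.featureEnumerator().allObjects.count ?? 0
    print("Cached \(count) features with full attributes")
}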

Trying to set Estimote iBeacon GPIO pin .high - SWIFT iOS

I've been trying for a few days now to set a pin high (Estimote location beacon) from an app I'm building.
I'm doing something wrong, as I am getting an error when the block fires. The error is: [ESTTelemetryInfo portsData]: unrecognized selector sent to instance...
I've looked everywhere for a snippet but can't find anything. I only want to be able to set the pin high (I don't need to send any data). If I can set the pin high, I figure I could set it low when done using the same methods. This is the code:
let telem = ESTTelemetryInfo.init(shortIdentifier: "xxxxxxxxxxxxxxxx")!
let setPinHigh = ESTTelemetryNotificationGPIO.init(notificationBlock: { (telemInfo) in
    if telemInfo.shortIdentifier! != "xxxxxxxxxxxxxxxx" { return }
    telemInfo.portsData.setPort(.port0, value: .high)
})
setPinHigh.fireNotificationBlock(with: telem)
Any help would be greatly appreciated.
P.S. Sorry if this is incorrectly formatted (long-time reader, first-time poster).
Cheers
Gary
Fixed... well, sort of. For anyone wanting to know, the right way to set a pin high (in output mode) is to connect to the beacon first through the device manager: create an ESTDeviceManager(), set your class as its ESTDeviceManagerDelegate, call startDeviceDiscovery(with: deviceFilter), and then, once connected, in the delegate method:
func estDeviceConnectDidSucceed(_ device: ESTDeviceConnectable) {
    self.settings.gpio.portsData.setPort(.port0, value: .high)
}
BUT -> at the moment there is a bug that portsData has no member 'setPort'. I've filed a bug issue with Estimote on GitHub. Will come back to report once it's fixed.
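For completeness, a rough sketch of the discovery-and-connect wiring described above. The class names, delegate signatures and settings path are from memory of the Estimote iOS SDK, so treat them as assumptions and check the current headers:

class BeaconGPIOController: NSObject, ESTDeviceManagerDelegate, ESTDeviceConnectableDelegate {

    let deviceManager = ESTDeviceManager()

    func startDiscovery() {
        deviceManager.delegate = self
        // Assumed filter class: limits discovery to the beacon with this identifier.
        let filter = ESTDeviceFilterLocationBeacon(identifier: "xxxxxxxxxxxxxxxx")
        deviceManager.startDeviceDiscovery(with: filter)
    }

    // Delegate signature written from memory; verify against the SDK headers.
    func deviceManager(_ manager: ESTDeviceManager, didDiscoverDevices devices: [ESTDevice]) {
        guard let beacon = devices.first as? ESTDeviceLocationBeacon else { return }
        beacon.delegate = self
        beacon.connect()
    }

    func estDeviceConnectDidSucceed(_ device: ESTDeviceConnectable) {
        // Settings (and the GPIO ports) are only available once connected.
        // Note the caveat above: setPort was reported missing from portsData at the time.
        guard let beacon = device as? ESTDeviceLocationBeacon else { return }
        beacon.settings?.gpio.portsData.setPort(.port0, value: .high)
    }
}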

A loop to continuously collect the Wifi strength of nearby access points

Assume I have an iPhone connected to a wifi network with 3+ access points.
I'd like to collect all possible fields around Wi-Fi signal strength, etc. from EACH access point and use that to triangulate, even while in the background.
while true {
...
for access_point in access_points {
...
signal_strength = ...
}
}
I've been reading previous SO answers and other posts, and it seems like it wasn't allowed on iOS without a jailbreak for a while, but is now available again.
Can anyone show a code snippet of how I'd go about doing this? I'm all new to iOS development.
It's been quite a while since I worked with this, so I did a quick check again and now I am fairly certain you misunderstood something you've read. As far as I can tell, Apple did not suddenly revert their previous decision to restrict the public frameworks to scan for access points, i.e. specific MAC addresses and their signal strength.
You can query the specific rssi (signal strength) for a network (i.e. for an ssid), but not for individual MAC addresses. Before iOS 5 you could do that using private APIs, then you could do it with private APIs on a jailbroken device and that's pretty much it.
I don't have my own old code at hand (I used to do this for indoor location tracking before we switched to iBeacons), so I can't provide you with a sample snippet myself. My code is dated and no longer functioning anyway, but you might find something here.
I would be really interested in the sources you mention that claim iOS 10 now allows this again. Apple closed this for privacy considerations (officially at least, and although this might be true in part it also means developers dealing with location-tracking now need to rely fully on Apple's framework for that only), so I highly doubt they went back on it.
Also, note that this is for sure not something trivial, especially if you're new to iOS development. I haven't even tackled the background idea, you can safely forget about that, because no matter what you do, you will not have a scanner that runs continuously in the background. That's against a very core principle of iOS programming.
I've answered how to ping ALL wifi networks in this question:
import SystemConfiguration.CaptiveNetwork

func getInterfaces() -> Bool {
    guard let unwrappedCFArrayInterfaces = CNCopySupportedInterfaces() else {
        print("this must be a simulator, no interfaces found")
        return false
    }
    guard let swiftInterfaces = (unwrappedCFArrayInterfaces as NSArray) as? [String] else {
        print("System error: did not come back as array of Strings")
        return false
    }
    for interface in swiftInterfaces {
        print("Looking up SSID info for \(interface)") // en0
        guard let unwrappedCFDictionaryForInterface = CNCopyCurrentNetworkInfo(interface as CFString) else {
            print("System error: \(interface) has no information")
            return false
        }
        guard let SSIDDict = (unwrappedCFDictionaryForInterface as NSDictionary) as? [String: AnyObject] else {
            print("System error: interface information is not a string-keyed dictionary")
            return false
        }
        for d in SSIDDict.keys {
            print("\(d): \(SSIDDict[d]!)")
        }
    }
    return true
}
You may have seen this feature in jailbroken apps, as it is possible to do this using private libraries, which means that apps that use them can't be sold on the iOS App Store.

Can I get progress while uploading/downloading data from Firebase

I need to show progress while uploading/downloading data from the Firebase database.
Can I get it? I saw it in Firebase Storage, but I didn't see it in the Firebase Database.
Progress based on data:
I recommend looking over this post, as it will help you understand how to use some of the built-in data references to determine your upload progress. While it is not for iOS, it explains the thought process of how you can measure upload progress.
You can always attach a progress observer like so:
let observer = uploadTask.observe(.progress) { snapshot in
    print(snapshot.progress as Any) // Progress object
}
Progress based on count:
As to what FrankvanPuffelen said, there is in fact no tool to give you what you are asking for. However, as Jay states, you can determine the "progress" of your task(s) based on how you are reading/writing.
Say for instance you are writing (uploading) 10 photos. For each photo that is successfully written (uploaded), you can simply increment some sort of progress meter by 1/10.
Example for some local file on the user's device:
// Some image on the device
let localFile = URL(string: "<PATH>")!
// Make a reference to the file that needs to be uploaded
let riversRef = storageRef.child("chicken.jpg")
// Upload the file to the path you specified
let uploadTask = riversRef.putFile(from: localFile, metadata: nil) { metadata, error in
    if let error = error {
        // Whoops, something went wrong :(
    }
    else {
        // Tada! Uploaded file
        someCountVariable += 1
        // If needed / applicable
        // let downloadURL = metadata!.downloadURL()
    }
}
Of course, for one (1) item you will go from 0 to 100 (...real quick, shout out to Drake), but this can just be nested in some loop that iterates through a list of items to upload, incrementing per each item successfully uploaded.
This will be more expensive in terms of requests if you need to upload objects other than multiple images, but for this purpose it will give you a nice visual/numeric way of tracking at least how many items are left to upload.
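A minimal sketch of that count-based loop, assuming the storageRef from the snippet above and a hypothetical localFiles array of file URLs (Storage completion handlers are delivered on the main queue, so the plain counter is fine here):

// Hypothetical list of local files to upload
let localFiles: [URL] = []
var completedCount = 0

for (index, fileURL) in localFiles.enumerated() {
    let itemRef = storageRef.child("uploads/item_\(index).jpg")
    itemRef.putFile(from: fileURL, metadata: nil) { metadata, error in
        guard error == nil else { return }
        completedCount += 1
        // Count-based progress: fraction of items finished so far
        let fraction = Double(completedCount) / Double(localFiles.count)
        print("Uploaded \(completedCount)/\(localFiles.count) (\(Int(fraction * 100))%)")
    }
}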
If you really want to get down to the nitty gritty, you can always try checking the current connection strength to determine a relative data transfer speed, and cross that with the current task you are attempting. This could potentially give you at least some estimate of the progress based on how long the process should take versus how long it actually has taken.
Hope some of this helps / points you in the right direction. Happy coding!
val bytesTransferred = taskSnapshot.bytesTransferred.toFloat()
val totalByteCount = taskSnapshot.totalByteCount.toFloat()
val progress = bytesTransferred / totalByteCount
Log.d("progress", (progress * 100).toString())
Log.d("progressDivide", (bytesTransferred / totalByteCount).toString())
Log.d("progressbytesTransferred", bytesTransferred.toString())
Log.d("progresstotalByteCount", totalByteCount.toString())
