I am new to iOS app development and have been trying to learn how to work with the Apple HealthKit API. So far, as an experiment, I have managed to build a simple app that can store and retrieve data from HealthKit, such as blood type and heart rate (I can furnish the code if anyone needs it; it is already available on the internet). I am able to do this because the HealthKit store exposes these type identifiers to app developers. However, I am a bit lost when I want to create a new type identifier, such as one for storing ECG/EKG in HealthKit. I want to feed ECG/EKG signals into my app and use the HealthKit store to save this information. Am I missing something? I know I am slow, but I have searched a lot over the internet and could not find any specific solutions. Is this not possible? The whole point of opening the API to developers is to create new apps with different features.
I have no specific requirement as far as storing and retrieving ECG data is concerned; I simply want to create a PoC without any constraints, focusing on the functionality.
Would I be wrong if I wanted to create the above by using
struct HKClinicalTypeIdentifier
and then use the clinical record type identifier
static let labResultRecord: HKClinicalTypeIdentifier
Is this the correct direction?
Any direction, motivation, or criticism is most welcome.
In iOS 14 you can read ECG data using the new HKElectrocardiogramQuery API (see the Apple documentation).
Here is the sample code I used to retrieve ECG data:
if #available(iOS 14.0, *) {
    // Fetch all ECG samples, newest first.
    let predicate = HKQuery.predicateForSamples(withStart: Date.distantPast, end: Date.distantFuture, options: .strictEndDate)
    let sortDescriptor = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
    let ecgQuery = HKSampleQuery(sampleType: HKObjectType.electrocardiogramType(),
                                 predicate: predicate,
                                 limit: HKObjectQueryNoLimit,
                                 sortDescriptors: [sortDescriptor]) { (query, samples, error) in
        guard let samples = samples,
              let mostRecentSample = samples.first as? HKElectrocardiogram else {
            return
        }
        print(mostRecentSample)

        // Stream the individual voltage measurements of the most recent ECG.
        var ecgSamples = [(Double, Double)]()
        let voltageQuery = HKElectrocardiogramQuery(mostRecentSample) { (query, result) in
            switch result {
            case .error(let error):
                print("error: ", error)
            case .measurement(let value):
                print("value: ", value)
                let sample = (value.quantity(for: .appleWatchSimilarToLeadI)!.doubleValue(for: HKUnit.volt()),
                              value.timeSinceSampleStart)
                ecgSamples.append(sample)
            case .done:
                print("done")
            }
        }
        self.healthMonitor.healthStore.execute(voltageQuery)
    }
    healthMonitor.healthStore.execute(ecgQuery)
} else {
    // Fallback on earlier versions
}
I found an alternative solution to the above issue. I am writing this so that anyone who has a similar issue can take a similar approach if needed.
Basically, at the time of writing this thread, there is no ECG type identifier available for developers to use. However, a way around it is to create an HKQuantitySample object and pass the ECG values as metadata. The only issue I am facing with this approach is the rate at which live/historical ECG data can be saved into HealthKit.
The sampling frequency of an ECG is, for example, 200 Hz, but I am not able to store the data with sub-second timestamps; the timestamps only resolve to whole seconds. Also, it seems the maximum rate at which data can be stored using this approach is as low as 160 Hz. Maybe this is a limitation of the interface or the HealthKit store; I don't know. Hope this closes the issue.
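For reference, a minimal sketch of that workaround, with heavy caveats: the quantity type, metadata key, and chunking strategy below are all placeholders I made up for illustration, since HealthKit has no public ECG write API; the ECG voltages are simply serialized into a custom metadata string.

```swift
import HealthKit

// Hypothetical sketch: store one chunk of ECG voltages as metadata on a
// quantity sample. The heart-rate type, the dummy value, and the metadata
// key are placeholders, not a supported ECG storage mechanism.
func saveECGChunk(voltages: [Double], start: Date, end: Date, healthStore: HKHealthStore) {
    guard let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate) else { return }
    let bpm = HKUnit.count().unitDivided(by: .minute())
    let quantity = HKQuantity(unit: bpm, doubleValue: 60) // dummy value; the metadata carries the ECG chunk

    // HealthKit metadata values must be strings, numbers, dates, or HKQuantity,
    // so the voltage array is joined into a single string.
    let metadata: [String: Any] = [
        "com.example.ecgVoltages": voltages.map { String($0) }.joined(separator: ",")
    ]

    let sample = HKQuantitySample(type: heartRateType, quantity: quantity,
                                  start: start, end: end, metadata: metadata)
    healthStore.save(sample) { success, error in
        if let error = error { print("Save failed: \(error)") }
    }
}
```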
I have an application which gets data from the Health app, but the data I get from HealthKit is not in visual form. Is there any way I can get the "Export to PDF" functionality from Health in my app?
I have added the sample code I am using to get the data.
for sample in ecgSamples {
    // Handle the sample here.
    print("Sample Data: \(sample)")
    let voltageQuery = HKElectrocardiogramQuery(sample) { (query, result) in
        switch result {
        case .measurement(let measurement):
            if let voltageQuantity = measurement.quantity(for: .appleWatchSimilarToLeadI) {
                // Handle the voltage quantity here.
                print("Voltage Data: \(voltageQuantity)")
            }
        case .done:
            print("Voltage Data Complete")
            // No more voltage measurements. Finish processing the existing measurements.
        case .error(let error):
            print("Voltage error: \(error)") // Handle the error here.
        }
    }
    // Execute the query.
    self.healthStore.execute(voltageQuery)
}
After searching for a long time I came across this project, which shows how to display ECG data in an iOS app.
iOS ECG App
Once you are able to show the data, it's not hard to render that chart to a PDF.
You might face an issue with scrolling the chart, so you can either implement your own scrolling or use the chart's move function.
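For the PDF export step itself, here is a minimal sketch, assuming the chart is rendered in an ordinary UIView called chartView (the view name and output file name are placeholders):

```swift
import UIKit

// Render an on-screen chart view into a single-page PDF file.
// `chartView` and the output file name are illustrative placeholders.
func exportChartToPDF(chartView: UIView) -> URL? {
    let renderer = UIGraphicsPDFRenderer(bounds: chartView.bounds)
    let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("ecg-chart.pdf")
    do {
        try renderer.writePDF(to: outputURL) { context in
            context.beginPage()
            // Draw the view's layer into the PDF context.
            chartView.layer.render(in: context.cgContext)
        }
        return outputURL
    } catch {
        print("PDF export failed: \(error)")
        return nil
    }
}
```

For a long scrolling chart you would typically enlarge the renderer bounds or emit one page per visible segment rather than rendering only what is on screen.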
I am sending a geometry query to select features on the map and get the selected features back.
Both things are working okay, but when I check the attribute dictionary of a feature it contains only 5 key/value pairs, while the same function on Android returns 10 key/value pairs.
I am making the query like this:
let query = AGSQueryParameters()
if let selectionGraphicGeometry = selectionGraphic?.geometry {
    let geometry = AGSGeometryEngine.simplifyGeometry(selectionGraphicGeometry)
    query.geometry = geometry
}
selectableLayer?.selectFeatures(withQuery: query, mode: AGSSelectionMode.add, completion: { (result, error) in
    if let features = result?.featureEnumerator().allObjects {
        for feature in features {
            let keys = feature.attributes.allKeys
        }
    }
})
I don't know where I am going wrong.
In the Version 100 Runtime, we take a slightly different approach for efficiency's sake.
Features will by default only include the minimal set of fields required for rendering and editing. When you are making a selection, you are working with those features, so that's why you're seeing the smaller set of fields.
If you need all the fields for your selected features, you should actually perform a query on your AGSServiceFeatureTable and select the features based off that.
Something like this:
let table = selectableLayer.featureTable as? AGSServiceFeatureTable
table?.queryFeatures(with: query, queryFeatureFields: .loadAll) { (result, error) in
    guard error == nil else {
        print("Error selecting features: \(error!.localizedDescription)")
        return
    }
    guard let features = result?.featureEnumerator().allObjects else {
        return
    }
    selectableLayer.select(features)
    for feature in features {
        let keys = feature.attributes.allKeys
        print(keys)
    }
}
What's odd is that you say you're seeing a different number of fields returned on Android than on iOS. Are you sure the Android app is displaying the same layer with the same renderer?
One other point: You might be better off using the Esri Community (GeoNet). The ArcGIS Runtime SDK for iOS Forum can be found here. Do post a question there if you are seeing different numbers of fields on iOS and Android with the same layer and renderer.
Hope this helps!
P.S. There are two related things you might want to know.
AGSArcGISFeature instances are now Loadable. So if you have an individual feature and you want to get all the fields for it from the source service, you can call load(completion:) on it. Or you could pass an [AGSArcGISFeature] array to the AGSLoadObjects() helper function. However, that function will make a separate network request for each feature, so if your array isn't small, that could lead to a bad user experience.
You can set your AGSServiceFeatureTable.featureRequestMode to .manualCache. You then need to call populateFromService(with:clearCache:outFields:completion:) to load the precise data that you need locally (and as you pan and zoom the map, you will need to manually manage this cache). See here for more details.
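For the first point above, a minimal sketch of loading an individual selected feature to pull down its full field set (assuming `feature` is an AGSArcGISFeature obtained from your selection):

```swift
// Load a single selected feature so its full attribute set becomes available.
// `feature` is assumed to be an AGSArcGISFeature from the selection result.
feature.load { error in
    if let error = error {
        print("Failed to load feature: \(error.localizedDescription)")
        return
    }
    // After loading, the attributes dictionary contains all fields from the service.
    print(feature.attributes.allKeys)
}
```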
Assume I have an iPhone connected to a Wi-Fi network with 3+ access points.
I'd like to collect all possible fields around Wi-Fi signal strength/quality/etc. from EACH access point and use that to triangulate, even while in the background. Roughly, in pseudocode:
while true {
...
for access_point in access_points {
...
signal_strength = ...
}
}
I've been reading previous SO answers and other posts, and it seems like it wasn't allowed on iOS without a jailbreak for a while, but is now available again.
Can anyone show a code snippet of how I'd go about doing this? I'm all new to iOS development.
It's been quite a while since I worked with this, so I did a quick check again and now I am fairly certain you misunderstood something you've read. As far as I can tell, Apple did not suddenly revert their previous decision to restrict the public frameworks to scan for access points, i.e. specific MAC addresses and their signal strength.
You can query the specific RSSI (signal strength) for a network (i.e. for an SSID), but not for individual MAC addresses. Before iOS 5 you could do that using private APIs, then you could do it with private APIs on a jailbroken device, and that's pretty much it.
I don't have my own old code at hand (I used to do this for indoor location tracking before we switched to iBeacons), so I can't provide you with a sample snippet myself. My code is dated and no longer functioning anyway, but you might find something here.
I would be really interested in the sources you mention that claim iOS 10 now allows this again. Apple closed this for privacy considerations (officially at least, and although this might be true in part it also means developers dealing with location-tracking now need to rely fully on Apple's framework for that only), so I highly doubt they went back on it.
Also, note that this is for sure not something trivial, especially if you're new to iOS development. I haven't even tackled the background idea; you can safely forget about that, because no matter what you do, you will not have a scanner that runs continuously in the background. That's against a very core principle of iOS programming.
I've answered how to ping ALL Wi-Fi networks in this question:
import SystemConfiguration.CaptiveNetwork

func getInterfaces() -> Bool {
    guard let unwrappedCFArrayInterfaces = CNCopySupportedInterfaces() else {
        print("this must be a simulator, no interfaces found")
        return false
    }
    guard let swiftInterfaces = (unwrappedCFArrayInterfaces as NSArray) as? [String] else {
        print("System error: did not come back as array of Strings")
        return false
    }
    for interface in swiftInterfaces {
        print("Looking up SSID info for \(interface)") // en0
        guard let unwrappedCFDictionaryForInterface = CNCopyCurrentNetworkInfo(interface as CFString) else {
            print("System error: \(interface) has no information")
            return false
        }
        guard let SSIDDict = (unwrappedCFDictionaryForInterface as NSDictionary) as? [String: AnyObject] else {
            print("System error: interface information is not a string-keyed dictionary")
            return false
        }
        for d in SSIDDict.keys {
            print("\(d): \(SSIDDict[d]!)")
        }
    }
    return true
}
You may have seen this feature in jailbroken apps, as it is possible to do this using private libraries, which means that apps that utilise them can't be sold on the iOS App Store.
I need to show progress while uploading/downloading data from the Firebase database.
Can I get it? I saw it in Firebase Storage, but I didn't see it in the Firebase database.
Progress based on data:
I recommend looking over this post, as it will help you understand how to use some of the built-in data references to determine your upload progress. While it is not for iOS, it explains the thought process of how you can measure upload progress.
You can always use a progress monitor like so:
// Observe progress events on a Firebase Storage upload task.
let observer = uploadTask.observe(.progress) { snapshot in
    print(snapshot.progress) // NSProgress object
}
Progress based on count:
As FrankvanPuffelen said, there is in fact no built-in tool that gives you exactly what you are asking for. However, as Jay states, you can determine the "progress" of your task(s) based on how you are reading/writing.
Say for instance you are writing (uploading) 10 photos. For each photo that is successfully written (uploaded), you can simply increment some sort of progress meter by 1/10.
Example for some local file on the user's device:
// Some image on the device
let localFile = URL(string: "<PATH>")!

// Make a reference to the file that needs to be uploaded
let riversRef = storageRef.child("chicken.jpg")

// Upload the file to the path you specified
let uploadTask = riversRef.putFile(from: localFile, metadata: nil) { metadata, error in
    if let error = error {
        // Whoops, something went wrong :(
        print(error)
    } else {
        // Tada! Uploaded file
        someCountVariable += 1
        // If needed / applicable
        // let downloadURL = metadata!.downloadURL()
    }
}
Of course, for one (1) item you will go from 0 to 100 (real quick, shout-out to Drake), but this can just be nested in a loop that iterates through the list of items to upload, as sketched below. Then increment for each item successfully uploaded.
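A minimal sketch of that loop, assuming `storageRef` from the snippet above and a placeholder list of local file URLs:

```swift
// Upload a list of local image URLs and report count-based progress.
// `storageRef` comes from the earlier snippet; `localFiles` is a placeholder.
let localFiles: [URL] = [] // populate with the files to upload
var uploadedCount = 0

for (index, fileURL) in localFiles.enumerated() {
    let fileRef = storageRef.child("uploads/image_\(index).jpg")
    fileRef.putFile(from: fileURL, metadata: nil) { _, error in
        guard error == nil else { return }
        uploadedCount += 1
        // Fraction of items uploaded so far, e.g. 3/10 = 30%.
        let fraction = Double(uploadedCount) / Double(localFiles.count)
        print("Upload progress: \(Int(fraction * 100))%")
    }
}
```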
This will be more expensive in terms of requests if you need to upload objects other than multiple images, but for the purpose this will give you a nice visual / numeric way of tracking at least how many items are left to be successfully uploaded.
If you really want to get down to the nitty-gritty, you can always try checking the current connection strength to determine a relative data transfer speed, and cross that with the current task you are attempting. This could potentially give you at least some estimate of the progress, based on how long the process should take versus how long it has actually taken.
Hope some of this helps / points you in the right direction. Happy coding!
val bytesTransferred = taskSnapshot.bytesTransferred.toFloat()
val totalByteCount = taskSnapshot.totalByteCount.toFloat()
val progress = bytesTransferred / totalByteCount

Log.d("progress", (progress * 100).toString())
Log.d("progressDivide", (bytesTransferred / totalByteCount).toString())
Log.d("progressbytesTransferred", bytesTransferred.toString())
Log.d("progresstotalByteCount", totalByteCount.toString())
I was hoping that someone can help a coding newbie with what might be considered a stupid question. I'm making a blog-type app for a community organization and it's pretty basic. It'll have tabs where each tab may be weekly updates, a table view with past updates, and a tab with general information.
I set up CloudKit to store strings and pictures, and then created a fetchData method to query CloudKit. In terms of the code (sample below) it works and gets the data/picture. My problem is that it takes almost 5-10 seconds before the text and image update when I run the app. I'm wondering if that's normal and I should just add an activity overlay for 10 seconds, or if there is a way to decrease the time it takes to update.
override func viewDidLoad() {
    super.viewDidLoad()
    fetchUpcoming()
}

func fetchUpcoming() {
    let container = CKContainer.defaultContainer()
    let publicData = container.publicCloudDatabase
    let query = CKQuery(recordType: "Upcoming", predicate: NSPredicate(format: "TRUEPREDICATE", argumentArray: nil))
    publicData.performQuery(query, inZoneWithID: nil) { results, error in
        if error == nil { // There is no error
            println(results)
            for entry in results {
                self.articleTitle.text = entry["Title"] as? String
                self.articleBody.text = entry["Description"] as? String
                let imageAsset: CKAsset = entry["CoverPhoto"] as! CKAsset
                self.articlePicture.image = UIImage(contentsOfFile: imageAsset.fileURL.path!)
                self.articleBody.sizeToFit()
                self.articleBody.textAlignment = NSTextAlignment.Justified
                self.articleTitle.adjustsFontSizeToFitWidth = true
            }
        } else {
            println(error)
        }
    }
}
Another question I had is about string content being stored in CloudKit. If I want to add multiple paragraphs to a blog entry (for example), is there a way to put it in one record, or do I have to separate the blog entry content into separate paragraphs? I may be mistaken, but it seems like CloudKit records don't recognize line breaks. If you can help answer my questions, I'd really appreciate it.
It looks like you might be issuing a query after creating the data, which isn't necessary. When you save data, as soon as your completion block succeeds (with no errors) then you can be sure the data is stored on the server and you can go ahead and render it to the user.
For example, let's say you're using a CKModifyRecordsOperation to save the data and you assign a block of code to the modifyRecordsCompletionBlock property. As soon as that block runs and no errors are passed in, then you can render your data and images to your user. You have the data (strings, images, etc.) locally because you just sent them to the server, so there's no need to go request them again.
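A minimal sketch of that approach, using the record type and field names from your question (the field values and UI update are illustrative only):

```swift
import CloudKit

// Save the record with CKModifyRecordsOperation and render from the local copy
// as soon as the completion block reports success, instead of re-querying.
let record = CKRecord(recordType: "Upcoming")
record["Title"] = "Weekly update" as CKRecordValue
record["Description"] = "Body text..." as CKRecordValue

let operation = CKModifyRecordsOperation(recordsToSave: [record], recordIDsToDelete: nil)
operation.modifyRecordsCompletionBlock = { savedRecords, _, error in
    DispatchQueue.main.async {
        if let error = error {
            print("Save failed: \(error)")
            return
        }
        // The data is now on the server, so update the UI from the local copy,
        // e.g. articleTitle.text = record["Title"] as? String
        print("Saved \(savedRecords?.count ?? 0) record(s)")
    }
}
CKContainer.default().publicCloudDatabase.add(operation)
```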
This provides a quicker experience for the user and reduces the amount of network requests and battery you're using on their device.
If you are just issuing normal queries when your app boots up, then that amount of time does seem long but there can be a lot of factors: your local network, the size of the image you're downloading, etc. so it's hard to say without more information.
Regarding the storage of paragraphs of text, you should consider using a CKAsset. Here is a quote from the CKRecord documentation about string data:
"Use strings to store relatively small amounts of text. Although strings themselves can be any length, you should use an asset to store large amounts of text."
You'll need to make sure you're properly storing and rendering line break characters between the user input and what you send to CloudKit.
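As an illustration of the asset approach, here is a minimal sketch that writes a long, multi-paragraph body to a temporary file and attaches it to a record as a CKAsset (the field name and file name are placeholders):

```swift
import CloudKit

// Store a long body of text as a CKAsset by writing it to a temp file first.
// "BodyAsset" and the file name are hypothetical.
let body = "First paragraph...\n\nSecond paragraph..."
let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent("body.txt")
try? body.write(to: tempURL, atomically: true, encoding: .utf8)

let record = CKRecord(recordType: "Upcoming")
record["BodyAsset"] = CKAsset(fileURL: tempURL)

CKContainer.default().publicCloudDatabase.save(record) { _, error in
    if let error = error { print("Save failed: \(error)") }
}
```

When reading it back, cast the field to CKAsset and load the string from its fileURL; the line breaks survive because the text is stored verbatim in the file.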