How to read sample number from mbientLab sensor MblMwData in Swift? - ios

This requires some setup. Hang in there.
Working with the MbientLab MetaWear MetaMotionRL motion sensor -- specifically the BMI160 gyroscope. Building a mobile app in Flutter with both an Android and an iOS front end. Access to the device is via the MbientLab API, using Java for Android and Swift for iOS. The device streams data over Bluetooth, and each sample includes a sample number. Reading the sample number works in Java, but not in Swift.
The Java API is quite different from the Swift API, and what works for the sample number in Java is not working in Swift.
Here's the Swift streaming callback handler:
mbl_mw_datasignal_subscribe(source, bridge(obj: self)) { (context, dataPointer) in
    let datum: MblMwData = dataPointer!.pointee
    let velocity: MblMwCartesianFloat = datum.valueAs()
    let sampleSlot: Int = datum.extraAs()
}
The velocity value is good -- it contains the three floats for the gyroscope's x, y, and z values. But the value for sampleSlot is HUGE when it should be small: starting at 0 and incrementing for each sample. (There will be gaps when a packet is lost, which is exactly why it's critical to have the sample number/slot rather than depend on a local sample index/count.) I suspect that the value I'm getting for sampleSlot is actually an address.
Here's the definition of MblMwData from mbientlab/metawear (MblMw):
typedef struct {
    int64_t epoch;           ///< Number of milliseconds since epoch
    void* extra;             ///< Extra information attached to this data sample
    void* value;             ///< Pointer to the data value
    MblMwDataTypeId type_id; ///< ID representing the data type the value pointer points to
    uint8_t length;          ///< Size of the value
} MblMwData;
Notice that both value and extra are void*. For value, I cast it by calling valueAs(), and I do the same for extra using extraAs(). But the result for extra is clearly wrong -- a HUGE value.
Here's the handler in Java:
public void apply(Data data, Object... env) {
    AngularVelocity velocity = data.value(AngularVelocity.class);
    long sampleSlot = data.extra(Long.class);
}
In Java, MblMwData is exposed as a class called Data, and data.extra(Long.class) seems to cast/convert the value to a long. A Java long is 8 bytes, and on a 64-bit machine a Swift Int is also 8 bytes; that's why I used Int in Swift.
So, the question is this: how do I write the Swift code to get the sample number?
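For what it's worth, here's the kind of manual dereference I've been experimenting with as a workaround. It assumes extra points at a 32-bit unsigned counter, which I have not confirmed against the C++ SDK, so treat it as a sketch rather than a known fix:
mbl_mw_datasignal_subscribe(source, bridge(obj: self)) { (context, dataPointer) in
    let datum: MblMwData = dataPointer!.pointee
    let velocity: MblMwCartesianFloat = datum.valueAs()

    // ASSUMPTION: `extra` points at a 32-bit unsigned counter. extraAs() seems to be
    // handing back the pointer itself (hence the huge value), so read through the
    // pointer explicitly and see whether a sensible sample number comes out.
    if let extraPointer = datum.extra {
        let sampleSlot = extraPointer.load(as: UInt32.self)
        print("sample slot candidate: \(sampleSlot)")
    }
}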
mbient has a community site, but it's read-only (I can't post) and searching it is terrible (I can't find anything).
I find very little about mbient on SO, but I hope someone will help.
... I added tags for mbient and metawear, but since they didn't seem to be known I don't think that will help much.
... and I'm not allowed to add tags :(
... Generally, I don't post unless it's a hard question. So, good luck. But, hopefully there's a simple answer.
Swift API: https://mbientlab.com/tutorials/SwApple.html
Java API: https://mbientlab.com/documents/metawear/android/latest/

Related

Outputting values from CAMPARY

I'm trying to use the CAMPARY library (CudA Multiple Precision ARithmetic librarY). I've downloaded the code and included it in my project. Since it supports both CPU and GPU, I'm starting with the CPU version to understand how it works and make sure it does what I need. But the intent is to use this with CUDA.
I'm able to instantiate an instance and assign a value, but I can't figure out how to get things back out. Consider:
#include <time.h>
#include <cstdio>   // printf
#include <cstdlib>  // free
#include "c:\\vss\\CAMPARY\\Doubles\\src_cpu\\multi_prec.h"

int main()
{
    const char *value = "123456789012345678901234567";
    multi_prec<2> a(value);
    a.prettyPrint();
    a.prettyPrintBin();
    a.prettyPrintBin_UnevalSum();
    char *cc = a.prettyPrintBF();
    printf("\n%s\n", cc);
    free(cc);
}
Compiles, links, runs (VS 2017). But the output is pretty unhelpful:
Prec = 2
Data[0] = 1.234568e+26
Data[1] = 7.486371e+08
Prec = 2
Data[0] = 0x1.987bf7c563caap+86;
Data[1] = 0x1.64fa5c3800000p+29;
0x1.987bf7c563caap+86 + 0x1.64fa5c3800000p+29;
1.234568e+26 7.486371e+08
Printing each of the doubles like this might be easy to do, but it doesn't tell you much about the value of the 128-bit number being stored. Performing highly accurate computations is of limited value if there's no way to output the results.
In addition to just printing out the value, eventually I also need to convert these numbers to ints (I'm willing to try it all in floats if there's a way to print, but I fear that both accuracy and speed will suffer). Unlike MPIR (which doesn't support CUDA), CAMPARY doesn't have any associated multi-precision int type, just floats. I can probably cobble together what I need (mostly just add/subtract/compare), but only if I can get the integer portion of CAMPARY's values back out, which I don't see a way to do.
CAMPARY doesn't seem to have any docs, so it's conceivable these capabilities are there, and I've simply overlooked them. And I'd rather ask on the CAMPARY discussion forum/mail list, but there doesn't seem to be one. That's why I'm asking here.
To sum up:
Is there any way to output the 128-bit (multi_prec<2>) values from CAMPARY?
Is there any way to extract the integer portion from a CAMPARY multi_prec? Perhaps one of the (many) math functions in the library that I don't understand computes this?
There are really only 2 possible answers to this question:
There's another (better) multi-precision library that works on CUDA that does what you need.
Here's how to modify this library to do what you need.
The only people who could give the first answer are CUDA programmers. Unfortunately, if there were such a library, I feel confident talonmies would have known about it and mentioned it.
As for #2, why would anyone update this library if they weren't a CUDA programmer? There are other, much better multi-precision libraries out there. The ONLY benefit CAMPARY offers is that it supports CUDA. Which means the only people with any real motivation to work with or modify the library are CUDA programmers.
And, as the CUDA programmer with the most vested interest in solving this, I did figure out a solution (albeit an ugly one). I'm posting it here in the hopes that the information will be of value to future CAMPARY programmers. There's not much information out there for this library, so this is a start.
The first thing you need to understand is how CAMPARY stores its data. And, while not complex, it isn't what I expected. Coming from MPIR, I assumed that CAMPARY stored its data pretty much the same way: a fixed size exponent followed by an arbitrary number of bits for the mantissa.
But nope, CAMPARY went a different way. Looking at the code, we see:
private:
    double data[prec];
Now, I assumed that this was just an arbitrary way of reserving the number of bits they needed. But no, they really do use prec doubles. Like so:
multi_prec<8> a("2633716138033644471646729489243748530829179225072491799768019505671233074369063908765111461703117249");
// Looking at a in the VS debugger:
[0] 2.6337161380336443e+99 const double
[1] 1.8496577979210756e+83 const double
[2] 1.2618399223120249e+67 const double
[3] -3.5978270144026257e+48 const double
[4] -1.1764513205926450e+32 const double
[5] -2479038053160511.0 const double
[6] 0.00000000000000000 const double
[7] 0.00000000000000000 const double
So, what they are doing is storing the max amount of precision possible in the first double, then the remainder is used to compute the next double and so on until they encompass the entire value, or run out of precision (dropping the least significant bits). Note that some of these are negative, which means the sum of the preceding values is a bit bigger than the actual value and they are correcting it downward.
With this in mind, we return to the question of how to print it.
In theory, you could just add all these together to get the right answer. But kinda by definition, we already know that C doesn't have a datatype to hold a value this size. But other libraries do (say MPIR). Now, MPIR doesn't work on CUDA, but it doesn't need to. You don't want to have your CUDA code printing out data. That's something you should be doing from the host anyway. So do the computations with the full power of CUDA, cudaMemcpy the results back, then use MPIR to print them out:
#define MPREC 8

void ShowP(const multi_prec<MPREC> value)
{
    // from MPIR at mpir.org
    mpf_t mp, mp2;
    mpf_init2(mp, value.getPrec() * 64); // Make sure we reserve enough room
    mpf_init(mp2);                       // Only needs to hold one double.

    const double *ptr = value.getData();
    mpf_set_d(mp, ptr[0]);
    for (int x = 1; x < value.getPrec(); x++)
    {
        // MPIR doesn't have an mpf_add_d, so we need to load the value into
        // an mpf_t first.
        mpf_set_d(mp2, ptr[x]);
        mpf_add(mp, mp, mp2);
    }

    // Using base 10, write the full precision (0) of mp to stdout.
    mpf_out_str(stdout, 10, 0, mp);
    mpf_clears(mp, mp2, NULL);
}
Used with the number stored in the multi_prec above, this outputs the exact same value. Yay.
It's not a particularly elegant solution. Having to add a second library just to print a value from the first is clearly sub-optimal. And this conversion can't be all that speedy either. But printing is typically done (much) less frequently than computing. If you do an hour's worth of computing and a handful of prints, the performance doesn't much matter. And it beats the heck out of not being able to print at all.
CAMPARY has a lot of shortcomings (undocumented, unsupported, unmaintained). But for people who need multi-precision numbers on CUDA (especially if you need sqrt), it's the best option I've found.

Swift: converting String to Float and back to String again after doing some mathematical operations

I've been googling and trying to understand how Float values work in Swift, but I can't seem to make any sense of it. I would really appreciate any help; I feel I'm just wasting my time.
For example, let's say that I have an API that returns some json data, I parse that data, make some calculations and then present some of the data to the user, something like this:
let balance : String = "773480.67" // value that was received through json api
let commission : String = "100000.00" // value that was received through json api
//framework maps the json properties
let floatBalance : Float = Float(balance)! // at this point value is 773480.688
let floatCommission : Float = Float(commission)! //100000.0
//we do some math with the values
let result : Float = floatBalance + floatCommission // this is somehow 873480.687
//and then show some of the values on a label
print("stringBalance: \(balance)") //stringBalance: 773480.67
print("floatBalance: \(floatBalance)") //floatBalance: 773481.0
print("floatCommission: \(floatCommission)") //floatCommission: 100000.0
print("result: \(result)") //result: 873481.0
print("label: \(String(format:"%.2f", result))") //label: 873480.69
print("just kill me now")
I'm using the EVReflection framework to map the json properties to an object, so the conversion from String to Float is done in the background without me doing much about it, but the values shown above are basically what I'm working with.
My question is, what do I need to do at the end to get the correct string (873480.67) from the resulting float (873480.687) or is my approach wrong from the start?
Thank you
Actually, a Float doesn't have enough precision to represent these values exactly; at a minimum you'd have to use Double.
Here is a very nice answer on that issue:
https://stackoverflow.com/a/3730040/4662531
EDIT:
Sorry, but actually Double should not be used to perform these calculations either (I'm assuming from the naming of your variables that you are working on some banking features). This part of the linked answer gives a great suggestion:
A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. Several languages also have built-in types to deal with money. Among others, Java has the BigDecimal class, and C# has the decimal type.
A colleague of mine who used to work at a banking company also confirmed that all calculations were done without Floats or Doubles, but with Int, as suggested in the link.
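Here is a minimal sketch of both approaches in Swift; the cents(_:) helper is purely illustrative (my own naming) and assumes every input carries exactly two digits after the decimal point:
import Foundation

let balance = "773480.67"
let commission = "100000.00"

// Option 1: Foundation's Decimal keeps base-10 values exact for this kind of arithmetic.
if let b = Decimal(string: balance), let c = Decimal(string: commission) {
    print(b + c) // 873480.67
}

// Option 2: integer cents, as the linked answer suggests.
// ASSUMPTION: the strings always carry exactly two fractional digits.
func cents(_ s: String) -> Int? {
    let parts = s.split(separator: ".")
    guard parts.count == 2, parts[1].count == 2,
          let whole = Int(parts[0]), let frac = Int(parts[1]) else { return nil }
    return whole * 100 + frac
}
if let b = cents(balance), let c = cents(commission) {
    let total = b + c
    print(String(format: "%ld.%02ld", total / 100, total % 100)) // 873480.67
}
Either way, the string parsing happens once at the boundary and the arithmetic itself stays exact.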

Compression of 2-D array on the fly with iOS

I am currently using Swift to store some data on iOS. The values come as a 2-D integer array, defined as an [[Int]]. I need to save these integer arrays to disk. Currently, I am using the following function to do so:
func writeDataToFile(data: [[Int]], filename: String) {
    let fullfile = NSString(string: self.folderpath).stringByAppendingPathComponent(filename + ".txt")
    var fh = NSFileHandle(forWritingAtPath: fullfile)
    if fh == nil {
        NSFileManager.defaultManager().createFileAtPath(fullfile, contents: nil, attributes: nil)
        fh = NSFileHandle(forWritingAtPath: fullfile)
    }
    fh?.writeData("Time: \(filename)\n".dataUsingEncoding(NSUTF16StringEncoding)!)
    fh?.writeData("\(data)".dataUsingEncoding(NSUTF16StringEncoding)!)
    fh?.closeFile()
}
Currently this function works just fine, but it produces files that are relatively large (1.1 MB each -- which, when you are writing them at 1 Hz, gets huge fast). The arrays written have a fixed size, and the values fall in the range 20000 < x < 35000. Is there a way to compress this data on the fly such that I can later read the data into, say, Python or some other language? Would it just be easier to use a library like Zip to compress the files after writing? Is there some way to transform the data (without loss of data/fidelity) into an image (for compression purposes, not viewing purposes)? There is also some metadata I would like to store along with the 2-D array, such as a timestamp.
Since you are currently saving these as string values, the simplest and fastest size reduction would be to save them as binary values (or base64-encoded strings). You could convert each of your int values into 2 bytes (an unsigned 2-byte value can store up to 65535) and save the values that way. That would go from roughly 5 bytes per int value down to 2 bytes per int value -- an immediate saving of 60%.
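A minimal sketch of that packing, assuming every value really does fit in an unsigned 16-bit slot; the function name and the little-endian byte order are my own choices, not anything from a library:
import Foundation

// Pack each Int into 2 little-endian bytes. Safe for the stated 20000..35000 range;
// anything outside 0...65535 would need a wider encoding.
func packedData(from rows: [[Int]]) -> Data {
    var data = Data()
    for row in rows {
        for value in row {
            let v = UInt16(value)
            data.append(UInt8(v & 0xFF)) // low byte
            data.append(UInt8(v >> 8))   // high byte
        }
    }
    return data
}
On the Python side that reads back with struct.unpack or numpy.fromfile(..., dtype='<u2'), provided you also record the row length somewhere (for example in the metadata you mentioned).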
For the Base64 encoding I use something I found on the internet called NSData+Base64. But in looking that up I just read:
In the iOS 7 and Mac OS 10.9 SDKs, Apple introduced new base64 methods on NSData that make it unnecessary to use a 3rd party base 64 decoding library. What's more, they exposed access to private base64 methods that are retrospectively available back as far as IOS 4 and Mac OS 6.
Link.
You could go much further with the compression by realizing that the data from one element to the next will likely not change across the entire range, since heat maps are always gradients. You could then save each array as differences from the previous element and likely get each change down to a single byte (255 values). But that may lose precision if you are viewing something with a very steep heat gradient (or using a low-resolution camera).
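Here's a rough sketch of that delta idea; the helper is my own illustration and simply bails out (returns nil) if any step doesn't fit in a signed byte, which a real encoder would handle with an escape code instead:
// Store the first value in full (e.g. with the 2-byte form above), then one signed
// byte per subsequent element.
func deltaEncode(_ row: [Int]) -> (first: Int, deltas: [Int8])? {
    guard let first = row.first else { return nil }
    var deltas: [Int8] = []
    var previous = first
    for value in row.dropFirst() {
        guard let step = Int8(exactly: value - previous) else { return nil } // gradient too steep
        deltas.append(step)
        previous = value
    }
    return (first, deltas)
}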
If you eventually need to get into compression, I use GTMNSData+zlib and decompress it in a c# webservice. So with a little bit of work it is cross platform.
A proper answer for this would require more information about the problem domain. Most likely, 2D arrays are the wrong data structure for this but it's hard to tell without more info.
What's the data stored in these arrays?
Apple has had a compression library since last year:
https://developer.apple.com/library/ios/documentation/Performance/Reference/Compression/index.html
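A minimal sketch of using it from Swift, assuming iOS 9+ and a Data payload; the buffer sizing here is deliberately crude and worth tightening up for real use:
import Compression
import Foundation

// Compress a Data payload with Apple's Compression framework.
// Note: COMPRESSION_ZLIB emits a raw DEFLATE stream (no zlib header), so on the
// Python side you would decode it with zlib.decompress(data, -15).
func zlibCompress(_ input: Data) -> Data? {
    let capacity = input.count + 64 // crude upper bound; incompressible data can grow slightly
    var output = Data(count: capacity)
    let written = output.withUnsafeMutableBytes { (dst: UnsafeMutableRawBufferPointer) -> Int in
        input.withUnsafeBytes { (src: UnsafeRawBufferPointer) -> Int in
            compression_encode_buffer(
                dst.bindMemory(to: UInt8.self).baseAddress!, capacity,
                src.bindMemory(to: UInt8.self).baseAddress!, input.count,
                nil, COMPRESSION_ZLIB)
        }
    }
    return written > 0 ? output.prefix(written) : nil
}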

Binary file IO in Swift

EDIT: TLDR: In C-family languages you can represent arbitrary data (ints, floats, doubles, structs) as byte streams via casting and pack them into streams or buffers. And you can do the reverse to get the data back out. And of course you can byte-swap for endianness correctness.
Is this possible in idiomatic swift?
Now the original question:
If I were writing in C/C++/ObjC I might cast a struct to unsigned char * and write its bytes to a FILE*, or memcpy them to a buffer. Same for ints, doubles, etc. I know there are endianness concerns to deal with, but this is for an iOS app, and I don't expect endianness to change any time soon for the platform. Swift's type system doesn't seem like it would allow this behavior ( casting arbitrary data to unsigned 8 bit ints and passing the address ), but I don't know.
I'm learning Swift, and would like an idiomatic way to write my data. Note that my data is highly numeric and is ultimately going to be sent over the wire, so it needs to be compact; textual formats like JSON are out.
I could use NSKeyedArchiver, but I want to learn here. Also, I don't want to write off an Android client at some point in the future, so a simple binary encoding seems like the way to go.
Any suggestions?
As noted in Using Swift with Cocoa and Objective-C, you can pass/assign an array of a Swift type to a parameter/variable of pointer type, and vice versa, to get a binary representation. This even works if you define your own struct types, much like in C.
Here's an example -- I use code like this for packaging up 3D vertex data for the GPU (with SceneKit, OpenGL, etc.):
struct Float3 {
    var x, y, z: GLfloat
}
struct Vertex {
    var position, normal: Float3
}

var vertices: [Vertex] // initialization omitted for brevity
let data = NSData(bytes: vertices, length: vertices.count * sizeof(Vertex))
Inspect this data and you'll see a repeating pattern of six 32-bit IEEE 754 floating-point numbers per vertex (just like you'd get from serializing a C struct through pointer access).
For going the other direction, you might sometimes need unsafeBitCast.
If you're using this for data persistence, be sure to check or enforce endianness.
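For going back from bytes to values, here's a minimal sketch against the current Data/UnsafeRawBufferPointer API, reusing the Vertex type from above; it assumes the buffer holds whole Vertex values in exactly the layout they were written with:
// Rebuild [Vertex] from raw bytes.
let blob = Data(referencing: data) // bridge from the NSData created above
let restored: [Vertex] = blob.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
    Array(raw.bindMemory(to: Vertex.self))
}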
The kind of format you're discussing is well explored with MessagePack. There are a few early attempts at doing this in Swift:
https://github.com/briandw/SwiftPack
https://github.com/yageek/SwiftMsgPack
I'd probably start with the yageek version. In particular, look at how packing is done into [Byte] data structures. I'd say this is pretty idiomatic Swift, without losing endian management (which you shouldn't ignore; chips do change, and the numeric types give it to you via bigEndian):
extension Int32 : MsgPackMarshable {
    public func msgpack_marshal() -> Array<Byte> {
        // UInt32(bitPattern:) avoids a trap when self is negative.
        let bigEndian: UInt32 = UInt32(bitPattern: self.bigEndian)
        return [0xce, Byte((bigEndian & 0xFF000000) >> 24), Byte((bigEndian & 0xFF0000) >> 16),
                Byte((bigEndian & 0xFF00) >> 8), Byte(bigEndian & 0x00FF)]
    }
}
This is also fairly similar to how you'd write it in C or C++ if you were managing byte order (which C and C++ should always do, so the fact that they could splat their bytes into memory doesn't make correct implementations trivial). I'd probably drop Byte (which comes from Foundation) and use UInt8 (which is defined in core Swift). But either is fine. And of course it's more idiomatic to say [UInt8] rather than Array<UInt8>.
That said, as Zaph notes, NSKeyedArchiver is idiomatic for Swift. But that doesn't mean MessagePack isn't a good format for this kind of problem, and it's very portable.

Benefits of using NSInteger over int?

I am trying to comprehend how development is affected when developing for both 32-bit and 64-bit architectures. From what I have researched thus far, I understand an int is always 4 bytes regardless of the architecture of the device running the app. But an NSInteger is 4 bytes on a 32-bit device and 8 bytes on a 64-bit device. I get the impression NSInteger is "safer" and recommended but I'm not sure what the reasoning is for that.
My question is, if you know the possible value you're using is never going to be large (maybe you're using it to index into an array of 200 items or store the count of objects in an array), why define it as an NSInteger? That's just going to take up 8 bytes when you won't use it all. Is it better to define it as an int in those cases? If so, in what case would you want to use an NSInteger (as opposed to int or long etc)? Obviously if you needed to utilize larger numbers, you could with the 64-bit architecture. But if you needed it to also work on 32-bit devices, would you not use long long because it's 8 bytes on 32-bit devices as well? I don't see why one would use NSInteger, at least when creating an app that runs on both architectures.
Also I cannot think of a method which takes in or returns a primitive type - int, and instead utilizes NSInteger, and am wondering if there is more to it than just the size of the values. For example, (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section. I'd like to understand why this is the case. Assuming it's possible to have a table with 2,147,483,647 rows, what would occur on a 32-bit device when you add one more - does it wrap around to a -2,147,483,647? And on a 64-bit device it would be 2,147,483,648. (Why return a signed value? I'd think it should be unsigned since you can't have a negative number of rows.)
Ultimately, I'd like to obtain a better understanding of actual use of these number data types, perhaps some code examples would be great!
I personally think that 64-bit is the reason NSInteger and NSUInteger exist; before 10.5, they did not exist. The two are simply defined as long in 64-bit, and as int in 32-bit.
NSInteger/NSUInteger are defined as dynamic typedefs to one of these types, like this:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, use them in place of the more basic C types when you want the 'bit-native' size.
I suggest you thoroughly read this link.
CocoaDev has some more info.
For proper format specifier you should use for each of these types, see the String Programming Guide's section on Platform Dependencies
I remember, from attending an iOS developer conference, that you have to watch your data types with iOS 7. For example, if you use NSInteger on a 64-bit device and save it to iCloud, and then sync to an older device (say a 2nd-generation iPad), your app will not behave the same, because that device treats NSInteger as 4 bytes, not 8, and your calculations will be wrong.
But so far I use NSInteger, because mostly my apps don't use iCloud or don't sync, and it avoids compiler warnings.
Apple uses int for a loop control variable (which is only used to control the loop iterations) because the int datatype is fine there, both in size and in the values it can hold for your loop. No need for a platform-dependent datatype here. For a loop control variable, even a 16-bit int will do most of the time.
Apple uses NSInteger for a function return value or for a function argument because in those cases the datatype size matters: with a function you are communicating/passing data to other programs or other pieces of code.
Apple uses NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
The only thing I would use NSInteger for is passing values to and from an API that specifies it. Other than that it has no advantage over an int or a long. At least with an int or a long you know what format specifiers to use in a printf or similar statement.
To continue from Irfan's answer:
sizeof(NSInteger)
equals the processor's word size. It is much simpler and faster for the processor to operate on whole words.
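For what it's worth, Swift's Int follows the same word-size convention (NSInteger bridges to Int in Swift), so a quick, purely illustrative sanity check of the sizes on whatever target you build for might look like this:
import Foundation

// On a 64-bit device both Int and NSInteger report 8 bytes; on a 32-bit device
// they drop to 4 while the fixed-width types stay put.
print(MemoryLayout<Int>.size)       // platform word size
print(MemoryLayout<NSInteger>.size) // same as Int in Swift
print(MemoryLayout<Int32>.size)     // always 4
print(MemoryLayout<Int64>.size)     // always 8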
