Swift numerics and CGFloat (CGPoint, CGRect, etc.) - ios

I'm finding Swift numerics particularly clumsy when, as so often happens in real life, I have to communicate with Cocoa Touch with regard to CGRect and CGPoint (e.g., because we're talking about something's frame or bounds).
CGFloat vs. Double
Consider the following innocent-looking code from a UIViewController subclass:
let scale = 2.0
let r = self.view.bounds
var r2 = CGRect()
r2.size.width = r.size.width * scale
This code fails to compile, with the usual mysterious error on the last line:
Could not find an overload for '*' that accepts the supplied arguments
This error, as I'm sure you know by now, indicates some kind of impedance mismatch between types. r.size.width arrives as a CGFloat, which will interchange automatically with a Swift Float but cannot interoperate with a Swift Double variable (which, by default, is what scale is).
The example is artificially brief, so there's an artificially simple solution, which is to cast scale to a Float from the get-go. But when many variables drawn from all over the place are involved in the calculation of a proposed CGRect's elements, there's a lot of casting to do.
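For instance, a fuller (contrived) version of the same calculation ends up wrapped in conversions everywhere; inset here is a made-up Int drawn from elsewhere:
// a contrived sketch of the "lots of casting" problem; inset is hypothetical
let scale = 2.0                 // Double
let inset = 10                  // Int
let r = self.view.bounds
var r2 = CGRect()
r2.size.width  = r.size.width  * CGFloat(scale) - CGFloat(inset)
r2.size.height = r.size.height * CGFloat(scale) - CGFloat(inset)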
Verbose Initializer
Another irritation is what happens when the time comes to create a new CGRect. Despite the documentation, there's no initializer with values but without labels. This fails to compile because we've got Doubles:
let d = 2.0
var r3 = CGRect(d, d, d, d)
But even if we cast d to a Float, it still doesn't compile:
Missing argument labels 'x:y:width:height:' in call
So we end up falling back on CGRectMake, which is no improvement on Objective-C. And sometimes even CGRectMake and CGSizeMake fail. Consider this actual code from one of my apps:
let kSEP : Float = 2.0
let intercellSpacing = CGSizeMake(kSEP, kSEP);
In one of my projects, that works. In another, it mysteriously fails — the exact same code! — with this error:
'NSNumber' is not a subtype of 'CGFloat'
It's as if, sometimes, Swift tries to "cross the bridge" by casting a Float to an NSNumber, which of course is the wrong thing to do when what's on the other side of the bridge expects a CGFloat. I have not yet figured out what the difference is between the two projects that causes the error to appear in one but not the other (perhaps someone else has).
NOTE: I may have figured out that problem: it seems to depend on the Build Active Architecture Only build setting, which in turn suggests that it's a 64-bit issue. Which makes sense, since Float would not be a match for CGFloat on a 64-bit device. That means that the impedance mismatch problem is even worse than I thought.
Conclusion
I'm looking for practical words of wisdom on this topic. I'm thinking someone may have devised some CGRect and CGPoint extension that will make life a lot easier. (Or possibly someone has written a boatload of additional arithmetic operator function overloads, such that combining CGFloat with Int or Double "just works" — if that's possible.)

Explicitly typing scale as CGFloat, as you have discovered, is indeed the way to handle the typing issue in Swift. For reference for others:
let scale: CGFloat = 2.0
let r = self.view.bounds
var r2 = CGRect()
r2.size.width = r.width * scale
Not sure how to answer your second question; you may want to post it separately with a different title.
Update:
Swift creator and lead developer Chris Lattner had this to say on this issue on the Apple Developer Forum on July 4th, 2014:
What is happening here is that CGFloat is a typealias for either Float or Double depending on whether you're building for 32 or 64-bits. This is exactly how Objective-C works, but is problematic in Swift because Swift doesn't allow implicit conversions.
We're aware of this problem and consider it to be serious: we are evaluating several different solutions right now and will roll one out in a later beta. As you notice, you can cope with this today by casting to Double. This is inelegant but effective :-)
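In practice, that casting-to-Double workaround looks something like this (a minimal sketch, not from Lattner's post):
let scale = 2.0                    // Double
let r = self.view.bounds
var r2 = CGRect()
// do the arithmetic in Double, then convert back to CGFloat at the end
r2.size.width = CGFloat(Double(r.size.width) * scale)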
Update In Xcode 6 Beta 5:
A CGFloat can be constructed from any Integer type (including the sized integer types) and vice-versa. (17670817)
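For example (my own illustration of that release note, not part of it):
let n: Int = 3
let f = CGFloat(n)     // CGFloat from an Int
let back = Int(f)      // and back to an Int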

I wrote a library that handles operator overloading to allow interaction between Int, CGFloat and Double.
https://github.com/seivan/ScalarArithmetic
As of Beta 5, here's a list of things that you currently can't do with vanilla Swift.
https://github.com/seivan/ScalarArithmetic#sample
I suggest running the test suite with and without ScalarArithmetic just to see what's going on.
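To give a feel for what such a library provides, here is a minimal sketch of the kind of operator overloads involved (my own illustration, not ScalarArithmetic's actual source):
import CoreGraphics

// mixed-type multiplication so CGFloat and Double combine directly
func * (lhs: CGFloat, rhs: Double) -> CGFloat {
    return lhs * CGFloat(rhs)
}
func * (lhs: Double, rhs: CGFloat) -> CGFloat {
    return CGFloat(lhs) * rhs
}

// with these in scope, the original example compiles:
// r2.size.width = r.size.width * scale   // CGFloat * Double
A real library repeats this for +, -, / and for Int as well, which is why a tested package is preferable to rolling your own.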

I created an extension for Double and Int that adds a computed CGFloatValue property to them.
extension Double {
    var CGFloatValue: CGFloat {
        return CGFloat(self)
    }
}

extension Int {
    var CGFloatValue: CGFloat {
        return CGFloat(self)
    }
}
You would access it by using let someCGFloat = someDoubleOrInt.CGFloatValue
Also, as for your CGRect initializer: you get the missing-argument-labels error because you left off the labels. You need CGRect(x: d, y: d, width: d, height: d); you can't leave the labels out unless there is only one argument.
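Putting the two together, a small sketch (using the CGFloatValue property from above):
let d = 2.0
// labels are required; convert the Doubles explicitly or via the property above
var r3 = CGRect(x: CGFloat(d), y: CGFloat(d), width: CGFloat(d), height: CGFloat(d))
var r4 = CGRect(x: d.CGFloatValue, y: d.CGFloatValue, width: d.CGFloatValue, height: d.CGFloatValue)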

Related

Best way to calculate the square root of any number in iOS, Objective-C and Swift

I am asking for ways to calculate the square root of any given number in iOS, Objective-C. I have included my way of doing it using logarithms. The logic was, for example, to find the square root of 5:
X = √5
then
log10(X) = log10(√5)
which means
log10(X) = log10(5)/2
so you get the value of log10(5), divide it by 2, and then take the antilog of that value to find X.
So my answer in Objective-C is like below (as an example, I'm finding the square root of 5):
double getlogvalue = log10(5)/2; // get the value of log10(5) and divide it by two
// then get the antilog value for getlogvalue
double getantilogvalue = pow(10, getlogvalue);
// this gives the square root of the number; to print it with two decimal places:
NSLog(@"square root of the given number is : %.2f", getantilogvalue);
If anyone has any other way/answer to get the square root of any given value, please add it; suggestions for the answer above are also welcome.
This is open to Swift developers too. Please add your answers as well, because this will help anyone who wants to calculate the square root of any given number.
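Since the question is open to Swift as well, here is a direct Swift translation of the same log-based approach (a sketch; variable names are my own):
import Foundation

let number = 5.0
// log10(√x) = log10(x) / 2, so halve the log and take the antilog
let logValue = log10(number) / 2
let root = pow(10, logValue)
print(String(format: "square root of the given number is : %.2f", root))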
The sqrt function (and other mathematical functions as well) is available in the standard libraries on all OS X and iOS platforms. It can be used from (Objective-)C:
#include "math.h"
double sqrtFive = sqrt(5.0);
and from Swift:
import Darwin // or Foundation, Cocoa, UIKit, ...
let sqrtFive = sqrt(5.0)
In Swift 3:
let x = 4.0
let y = x.squareRoot()
This works because the FloatingPoint protocol has a squareRoot method, and both Float and Double conform to it. It should be higher-performance than the sqrt() function from either Darwin or Glibc, since the generated code uses the LLVM square-root built-in, so there is no function-call overhead on systems with a hardware square-root instruction.
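Because squareRoot() comes from the FloatingPoint protocol, it also composes nicely with generics; a minimal sketch (hypothetical helper, not from the answer above):
import CoreGraphics

// works for Float, Double, and CGFloat alike, since all conform to FloatingPoint
func hypotenuse<T: FloatingPoint>(_ a: T, _ b: T) -> T {
    return (a * a + b * b).squareRoot()
}

let h = hypotenuse(3.0, 4.0)            // 5.0
let g: CGFloat = hypotenuse(5.0, 12.0)  // 13.0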

Using NLoptNet in F#

Having found no examples online of NLopt being used in F#, I've been trying to convert the example given on NLoptNet from C# to F#. Having no familiarity with C# and very little with F#, I've been butchering it pretty badly.
Here is what I have so far:
open NLoptNet
open System
let solver = new NLoptSolver(NLoptAlgorithm.LN_COBYLA, uint32(1), 0.001, 100)
solver.SetLowerBounds([|-10.0|])
solver.SetUpperBounds([|100.0|])
let objfunc (variables : float array) =
    Math.Pow(variables.[0] - 3.0, 2.0) + 4.0
solver.SetMinObjective(objfunc)
let initial_val = [|2.|]
let finalscore = ref System.Nullable() // ERROR
let result = solver.Optimize(initial_val, finalscore)
Here is the description of the error:
Successive arguments should be separated by spaces or tupled, and arguments involving function or method applications should be parenthesized
To be more specific, I'm trying to translate the following three lines of C# to F#:
double? finalScore;
var initialValue = new[] { 2.0 };
var result = solver.Optimize(initialValue, out finalScore);
Any ideas?
This error is due to the way that F# handles precedence - adding more brackets or an operator to clarify the order in which things are applied fixes the problem.
Two possible fixes are
ref (System.Nullable())
or
ref <| System.Nullable()
Just for completeness' sake here's a third possible fix:
let finalscore, result = solver.Optimize(initial_val)
This takes advantage of the fact that F# can treat the out parameter as a return value (in a tuple). I'm sure there might be a case where an actual ref cell is necessary, but in recent F#, mutable is usually enough. For some discussion see:
MSDN reference
SO Discussion 1
SO Discussion 2
Fun&Profit reference

iOS warning message : Incompatible pointer types passing 'CGFloat *' (aka 'double *') to parameter of type 'float *'

This is causing my app to act up. The error is happening on this line: modff(floatIndex, &intIndex); What do I need to do to fix this issue?
Edit: it is because of &intIndex.
- (BOOL)isFloatIndexBetween:(CGFloat)floatIndex {
    CGFloat intIndex, restIndex;
    restIndex = modff(floatIndex, &intIndex);
    BOOL isBetween = fabsf(restIndex - 0.5f) < EPSILON;
    return isBetween;
}
As I recall, CGFloat is defined as float on 32-bit devices and double on 64-bit devices. Thus you don't want to use CGFloat in a call to modff(). Instead, declare your variables using a specific type, and use casting.
Something like this (in this case using modff and all float variables):
- (BOOL)isFloatIndexBetween:(CGFloat)floatIndex
{
    float restIndex;
    float first, second;
    first = (float) floatIndex;
    restIndex = modff(first, &second);   // modff takes float arguments, matching the float variables
    BOOL isBetween = fabsf(restIndex - 0.5f) < EPSILON;
    return isBetween;
}
Learning to speak compiler error/warning is an invaluable skill. In this case, it is telling you that modff is expecting a float (that is, a single-precision floating-point number), but you're passing it a CGFloat (which is typedef'd as a double, a double-precision floating-point number). As NobodyNada says, you can either change which function you're using or the type of intIndex.
You are passing CGFloats (typedef'ed to double on your system) to functions that expect floats.
You can either change modff and fabsf to modf and fabs, respectively (slower but more precise), or change intIndex and restIndex to be floats instead of doubles (faster but less precise).
Perhaps the easiest way to avoid these types of warnings and errors when using an architecture-specific type like CGFloat is to put #import <tgmath.h> in your precompiled header or in the imports for this file. That way the type-generic versions of the underlying C functions are used. In this case it makes your warnings go away without any code changes. Then it's just a matter of making sure the precision is what you want.
If you are using a 64-bit architecture (like arm64), then you should use CGFloat because it is defined as double and is therefore an 8-byte floating-point number, whereas float is a 4-byte floating-point number.
So you should choose between them according to the architecture.

Is it better to write 0.0, 0.0f or .0f instead of simple 0 for supposed float or double values

Hello, well, all is in the title. The question applies especially to all those values that can be an NSTimeInterval, a CGFloat, or any other variable that is a float or a double. Thanks.
EDIT: I'm asking about value assignment, not formatting in a string.
EDIT 2: The question is really: is assigning a plain 0 to a float or a double any worse than anything with an f at the end?
The basic difference is:
1.0 or 1. is a double constant
1.0f is a float constant
Without a suffix, a literal with a decimal in it (123.0) will be treated as a double-precision floating-point number. If you assign or pass that to a single-precision variable or parameter, the compiler will (should) issue a warning. Appending f tells the compiler you want the literal to be treated as a single-precision floating-point number.
If you are initializing a variable, it makes no difference; the compiler does the cast for you.
float a = 0;    // cast int 0 to float 0.0
float b = 0.0;  // cast double 0.0 to float 0.0; floating-point constants are double by default
float c = 0.0f; // assigning float to float; .0f is the same as 0.0f
But if you are using these in an expression, it makes a big difference:
6/5 becomes 1
6/5.0 becomes 1.2 (double value)
6/5.0f becomes 1.2 (float value)
If you want to find out whether there is any difference to the target CPU running the code, or in the binary code it executes, you can copy one of the compile command lines from Xcode to a terminal, fix the missing environment variables, and add -S. That gives you assembly output that you can use to compare. If you put all four variants in a small example source file, you can compare the resulting assembly code afterwards, even without being fluent in ARM assembly.
From my ARM assembly experience (okay... 6 years ago and GCC) I would bet one cent on something like XORing a register with itself to flush its content to 0.
Whether you use 0.0, .0, or 0.0f does not make much of a difference. (There are some differences with respect to double and float.) You may even use (float) 0.
But there is a significant difference between 0 and some float notation. Zero will always be some type of integer. And that can force the machine to perform integer operations when you may want float operations instead.
I do not have a good example for zero handy, but I've got one for float/int in general, which nearly drove me crazy the other day.
I am used to 8-bit RGB colors, because of my hobby as a photographer and my recent background as an HTML developer. So I found it difficult to get used to the Cocoa-style 0..1 fractions of red, green and blue. To work around that I wanted to use the values I was used to and divide them by 255.
[UIColor colorWithRed:128/255 green:128/255 blue:128/255 alpha:1.0];
That should produce a nice middle gray. But it did not. Everything I tried came out either black or white.
At first I thought this was caused by some undocumented deficiency of the UI text objects I was using the colour with. It took a while to realize that these constant values force integer operations, which here can only yield 0 or 1.
This expression eventually did what I wanted to achieve:
[UIColor colorWithRed:128.0/255.0 green:128.0/255.0 blue:128.0/255.0 alpha:1.0];
You could achieve the same thing with fewer .0s attached, but it does not hurt to have more than needed. 128.0f/(float)255 would work as well.
Edit to respond to your "Edit2":
float fvar;
fvar = 0;
vs ...
fvar = .0;
In the end it does not make a difference at all; fvar will contain the float value 0.0 either way. For compilers of the 1960s and '70s I would have guessed a minor performance cost with fvar = 0, in that the compiler creates an int 0 first, which then has to be converted to float before the assignment. Modern compilers should optimize much better than older ones; in the end I'd have to look at the machine-code output to see whether it makes a difference.
However, with fvar = .0; you are always on the safe side.

F# Units of Measurement modeling metric prefix (micro, milli, nano)

As per this question: Fractional power of units of measures in F#, there are no fractional powers supported for units of measure in F#.
In my application, it is sometimes beneficial to think of data with a metric prefix, e.g. when dealing with seconds. Sometimes I need a result in milliseconds, sometimes in seconds.
The alternative I'm currently thinking about using is this
[<Measure>] type milli
[<Measure>] type second
let a = 10.0<second>;
let b = 10.0<milli*second>
which gives me:
val a : float<second> = 10.0
val b : float<milli second> = 10.0
Now I want to allow calculations combining the two. So I could do
let milliSecondsPerSecond = 1000.0<(milli*second)/second>
let a = 10.0<second>;
let b = 10.0<milli*second>
(a*milliSecondsPerSecond) + b
which gives me exactly what I wanted
val it : float<milli second> = 10010.0
Now, this is all nice and shiny but quickly gets out of hand when you want to support multiple units and multiple prefixes. So I think it would be necessary to bake this into a more generic solution, but I don't know where to start. I tried
let milliPer<'a> = 1000.0<(milli * 'a) / 'a>
but that won't work because f# complains and tells me "Non-Zero constants cannot have generic units"...
Since unit prefixes seem like a common problem, I imagine someone has solved this before. Is there a more idiomatic way to do unit prefixes in F#?
You write the constant as 1000.0<(milli second)/second> representing 1000 milliseconds per second, but actually (you can do this as an algebraic simplification) "milli" just means that you need to multiply whatever unit by 1000 to get the unit without the "milli" prefix.
So, you can simplify your definition of milliPer (and milliSecondsPerSecond) to just say:
let milli = 1000.0<milli>
Then it is possible to use it with other kinds of measures:
(10.0<second> * milli) + 10.0<milli second>
(10.0<meter> * milli) + 10.0<milli meter>
I think this should not lead to any complications anywhere in the code - it is a perfectly fine pattern when working with units (I've seen people use a unit of percent similarly, but then the conversion factor is 0.01).
