Difference between #define and float? - ios

What would be the difference between say doing this?
#define NUMBER 10
and
float number = 10;
In what circumstances should I use one over the other?

#define NUMBER 10
Will create a textual replacement that is performed by the preprocessor (i.e. at the preprocessing stage, before actual compilation).
float number = 10;
Will create a float in the data-segment of your binary and initialize it to 10. I.e. it will have an address and be mutable.
So writing
float a = NUMBER;
will be the same as writing
float a = 10;
whereas writing
float a = number;
will generate a memory access: the value is read from the variable number at runtime.
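A minimal, self-contained sketch of the difference, assuming a plain C file and using the names from the question:
#include <stdio.h>

#define NUMBER 10      /* replaced textually by the preprocessor */
float number = 10;     /* a real object with an address in the binary's data segment */

int main(void) {
    float a = NUMBER;  /* compiled exactly as: float a = 10; */
    float b = number;  /* reads the variable from memory */

    number = 20;       /* fine: number is mutable */
    /* NUMBER = 20; */ /* would not compile: it expands to 10 = 20; */

    printf("%f %f %p\n", a, b, (void *)&number);  /* &NUMBER would be meaningless */
    return 0;
}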

As Philipp says, the #define form creates a replacement in your code at the preprocessing stage, before compilation. Because the #define isn't a variable like number, your definition is hard-baked into your executable at compile time. This is desirable if the thing you are representing is truly a constant that doesn't need to be calculated or read from somewhere at runtime, and which doesn't change during runtime.
#defines are very useful for making your code more readable. Suppose you were doing physics calculations -- rather than just plonking 9.8f into your code everywhere you need the gravitational acceleration constant, you can define it in just one place, which improves readability:
#define GRAV_CONSTANT 9.8f
...
float finalVelocity = beginVelocity + GRAV_CONSTANT * time;
EDIT
Surprised to come back to this answer and see that I didn't mention why you shouldn't use #define.
Generally, you want to avoid #define and use constants that are actual types, because #defines don't have scope, and types are beneficial to both IDEs and compilers.
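For illustration, a minimal sketch of the typed-constant alternative (the name kNumber is made up here):
static const float kNumber = 10.0f;  // has a real type and ordinary C scope, so both the compiler and the IDE can check and complete it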
See also this question and accepted answer: What is the best way to create constants in Objective-C

"#Define" is actually a preprocessor macro which is run before the program starts and is valid for the entire program
Float is a data type defined inside a program / block and is valid only within the program / block.
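A small sketch of that scope difference, with made-up names:
void compute(void) {
    float number = 10;   /* visible only inside this block */
    #define LIMIT 100    /* the preprocessor ignores braces: visible from here to the end of the file (or until #undef) */
}

float leak(void) {
    /* return number; */ /* error: number is out of scope here */
    return LIMIT;        /* fine: the macro is still visible */
}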

Related

Is it better to write "final type variable" or just "final variable", with respect to memory management?

I know that in many languages var takes more RAM than writing the explicit type, but it's different in the final case.
final i = 10; // without type
final int i = 10; // with type
// which is better? or is there no difference at all?
I searched a little and saw the lint always_specify_types, which relates to Effective Dart, but does this apply to final variables?
As far as the compiler is concerned, those two lines are identical. The compiler infers the type from the right hand side. (You should even be able to hover over the i in the first line in the IDE and it will show you that it's an int.)
So, now it's a matter of style. Do you prefer the Flutter style approach, or the omit local types approach?
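A quick sketch in plain Dart (nothing Flutter-specific assumed), showing that both forms end up with the same type and the same set-once behaviour:
void main() {
  final i = 10;      // type inferred as int
  final int j = 10;  // type written explicitly

  print(i.runtimeType); // int
  print(j.runtimeType); // int

  // i = 20;  // compile-time error either way: a final variable can only be set once
}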

Check which variables can be side-effected by expression evaluation in Clang AST

clang::Expr has the member function HasSideEffects(const ASTContext &Ctx, bool IncludePossibleEffects). In my case I want to be more precise, and to know that e.g. an Expr corresponding to Y++ will only affect Y and not X. Is there a way to do this?
This predicate is rather simple, but its simplicity, and indeed the fact that it works at all, comes from not trying to list the exact side effects.
Clang is only a front end for LLVM and doesn't do sophisticated analysis of the code (except in the Clang Static Analyzer component). The main problem is aliasing, i.e. figuring out which other variables can be affected by an arbitrary modification through a pointer or reference.
Simple example:
int X = 42;
int &Y = X;
Y++;
Does Y++ affect X in this case? - Yes.
Can we understand it? - Yes, if we trace what Y refers to.
Is it possible? - Generally speaking, no. We're limited to the knowledge of the current translation unit. And even with the whole program available, doing it precisely takes far too long. There are a lot of different trade-offs and techniques to make it reasonably fast and precise, but it's definitely not part of the compiler front end.
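If a rough, alias-unaware approximation is still useful to you, a sketch along these lines collects only the variables an expression writes to directly (the collector class is a hypothetical helper, not part of Clang's API); in the example above it would report Y but, as explained, not X:
#include "clang/AST/Decl.h"
#include "clang/AST/Expr.h"
#include "clang/AST/RecursiveASTVisitor.h"
#include "llvm/ADT/SmallPtrSet.h"

// Collects variables written directly by an expression: ++/-- operands and
// left-hand sides of (compound) assignments. No alias analysis at all.
class DirectWriteCollector
    : public clang::RecursiveASTVisitor<DirectWriteCollector> {
public:
  llvm::SmallPtrSet<const clang::VarDecl *, 8> Written;

  bool VisitUnaryOperator(clang::UnaryOperator *UO) {
    if (UO->isIncrementDecrementOp())
      recordIfVar(UO->getSubExpr());
    return true;
  }

  bool VisitBinaryOperator(clang::BinaryOperator *BO) {
    if (BO->isAssignmentOp())  // covers =, +=, -=, ...
      recordIfVar(BO->getLHS());
    return true;
  }

private:
  void recordIfVar(const clang::Expr *E) {
    E = E->IgnoreParenImpCasts();
    if (const auto *DRE = llvm::dyn_cast<clang::DeclRefExpr>(E))
      if (const auto *VD = llvm::dyn_cast<clang::VarDecl>(DRE->getDecl()))
        Written.insert(VD);
  }
};

// Usage: DirectWriteCollector C; C.TraverseStmt(const_cast<clang::Expr *>(E));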

#define negative numbers?

Simple question, but I cannot seem to find the answer. Is it okay to use #define to define a negative number, as in:
#define kGravity -9.8
Xcode is coloring the 9.8 with the color I have set for numbers (purple), but the - is being shown in the same color as the define statement (orange).
Is this legal? Will it compile?
Did you try it? It should work fine.
However, I would encourage you not to use preprocessor macros, but instead to use real constants.
static const float kGravity = -9.8f;
Preprocessor macros for constants are a bit frowned upon, in general. Here's some more info on the subject: #define vs const in Objective-C
It is absolutely legal to define negative constants with #define. What you discovered is most likely a bug in Xcode's code coloring, which will probably be fixed in one of the future revisions.
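One piece of general C preprocessor hygiene worth adding (it is not required for the code to compile, and not specific to negative values or to Xcode): parenthesize the replacement so the macro behaves as a single value when it expands inside a larger expression.
#define kGravity (-9.8f)
The classic illustration of why unparenthesized bodies can surprise you uses a different, made-up macro: with #define kDelta 2 + 3, the expression kDelta * 4 expands to 2 + 3 * 4, which evaluates to 14 rather than 20.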

ifndef, define & direct assignment of constants

I am just wondering about the differences between the methods below for defining constants:
Method1:
Create a header file to define all the constants, using an include guard:
#ifndef c1
#define c1 #"a123456789"
#endif
then assign the constant to the property:
Identity.number = c1;
Method2:
Just simply define the constant
#define c1 #"a123456789"
then assign the constant to the property:
Identity.number = c1;
Method3:
Do not define a constant at all; just assign the value directly:
Identity.number = @"a123456789";
Any pros and cons for the above?
The first method matters when you need to make sure that the constant is only defined once. The third method doesn't let the IDE help you with autocompletion, which can be important when the value of the constant is more complex.
Methods 1 and 2 are much better for bigger projects, because you can easily change the value of the constant in one place.
Method 1 may be especially good for very big projects with many files, but is not really necessary for smaller projects.
In method 3, you have to search through every line of code to find the places where the value is assigned (if you assign it in more than one place). Therefore, I think it is a bad approach.
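For completeness, a sketch of a fourth option, in the spirit of the typed constants recommended in the earlier answers and in the linked question (the name IdentityNumberC1 is invented here):
// Constants.h
extern NSString * const IdentityNumberC1;

// Constants.m
NSString * const IdentityNumberC1 = @"a123456789";

// usage
Identity.number = IdentityNumberC1;
Like methods 1 and 2 it keeps the value in a single place, but the constant is a real typed symbol, so it autocompletes and the compiler can type-check its uses.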

Maintaining units of measure across type conversions

If we define a unit of measure like:
[<Measure>] type s
and then an integer with a measure
let t = 1<s>
and then convert it to a float
let r = float t
we see that r = 1.0 without a measure type. This seems very odd, as all the measure information has been lost.
You can use LanguagePrimitives.FloatWithMeasure to convert back to a float with something like
let inline floatMeasure (arg:int<'t>) : (float<'t>) =
    LanguagePrimitives.FloatWithMeasure (float arg)
which enforces the right types, but this doesn't feel like the right solution as the docs for units of measure (http://msdn.microsoft.com/en-us/library/dd233243.aspx) say
However, for writing interoperability layers, there are also some explicit functions that you can use to convert unitless values to values with units. These are in the Microsoft.FSharp.Core.LanguagePrimitives module. For example, to convert from a unitless float to a float, use FloatWithMeasure, as shown in the following code.
This seems to suggest that the function should be avoided in ordinary F# code.
Is there a more idiomatic way to do this?
Here's a working snippet that does exactly what you need, although it gives a warning
(stdin(9,48): warning FS0042: This construct is deprecated: it is only for use in the F# library):
[<NoDynamicInvocation>]
let inline convert (t: int<'u>) : float<'u> = (# "" t : 'U #)
[<Measure>] type s
let t = 1<s>
let t1 = convert t // t1: float<s>
However, I wouldn't suggest this approach.
First of all, UoM are compile-time only, while the type conversion let r = float t happens at runtime. At the moment of invocation, int -> float has no idea whether its argument is int<s> or int<something_else>, so it is simply unable to infer a proper float<'u> at runtime.
Another thought is that the philosophy behind UoM is broader than the docs describe. It is like telling the compiler, "well, it is an int, but please treat it as int<s>". The goal is to avoid accidental misuse (e.g., adding int<s> to int<hours>).
Sometimes an int -> float conversion makes no sense at all: think of int<ticks>; there is no meaningful float<ticks>.
Further reading; credit to @kvb for pointing to this article.
(Caveat: I've not used units much in anger.)
I think that the only negative for using e.g. FloatWithMeasure is the unit-casting aspect (unitless to unitful). I think this is conceptually orthogonal to the numeric-representation-casting aspect (e.g. int to float). However there is (I think) no library function to do numeric-representation-casting on unitful values. Perhaps this reflects the fact that most unitful values model real-world continuous values, and so discrete representations like int are typically not used for them (e.g. 1<s> feels wrong; surely you mean 1.0<s>).
So I think it's fine to 'cast representations' and then 'readjust units', but I wonder how you got the values with different representations in the first place, as it's often typical for those representations to be fixed for a domain (e.g. use float everywhere).
(In any case, I do like your floatMeasure function, which un-confounds the unit-aspect from the representation-aspect, so that if you do need to only change representation, you have a way to express it directly.)
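For reference, a minimal usage sketch of the asker's floatMeasure helper, which changes only the representation and keeps the unit:
[<Measure>] type s
let t = 1<s>
let r : float<s> = floatMeasure t  // r = 1.0<s>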
