How to change #define value - iOS

I know a #define must be constant, but please give me any good tips.
In my case, I define a constant value with #define (e.g. #define kImageQuality 0.7).
However, I would like to change that constant's value from the Settings bundle before the app opens. That amounts to changing the constant, doesn't it?
Is there any way to implement what I'm aiming for? Should it change to an instance variable instead of a #define?
Any tips would be really appreciated.

#define constants are replaced by the preprocessor before compilation even begins (e.g. every kImageQuality is replaced by 0.7). Loading a new value before the app starts is therefore impossible, because the app is not recompiled every time it launches. You need to use a variable:
float imageQuality = 0.7f;

This is not possible because this:
#define constant 3
...
y = x + constant
is completely equivalent to this:
y = x + 3
#defined constants are replaced by their value in the preprocessing stage, before the code is even compiled. To change the value dynamically, you have to either use a global variable or some other persistence mechanism such as NSUserDefaults.
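Since values edited in a Settings.bundle surface through NSUserDefaults, a minimal sketch of reading one at launch might look like this (the key name image_quality is hypothetical, and the registerDefaults: call supplies a fallback because Settings.bundle defaults are not registered automatically):
// e.g. in application:didFinishLaunchingWithOptions:
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults registerDefaults:@{@"image_quality": @0.7f}]; // fallback until the user has visited Settings once
float imageQuality = [defaults floatForKey:@"image_quality"];
You would then read the value once at startup and use imageQuality everywhere the old kImageQuality macro was used.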

Related

Other ways to call/eval dynamic strings in Lua?

I am working with a third-party device that has some implementation of Lua and communicates via BACnet. The documentation is pretty janky, not providing any sort of help for more advanced programming ideas. It's simply, "This is how you set variables...". So I am trying to just figure it out, and hoping you all can help.
I need to set a long list of variables to certain values. I have a userdata 'ME', with a bunch of variables named MVXX (e.g. - MV21, MV98, MV56, etc).
(This is all kind of background for BACnet.) Variables in BACnet all have 17 'priorities', i.e., every BACnet variable is actually a sort of list of 17 values, with priority 16 being the default. So, typically, if I were to say ME.MV12 = 23, that would set MV12's priority-16 to the desired value of 23.
However, I need to set priority 17. I can do this in the provided Lua implementation, by saying ME.MV12_PV[17] = 23. I can set any of the priorities I want by indexing that PV. (Corollaries - what is PV? What is the underscore? How do I get to these objects? Or are they just interpreted from Lua to some function in C on the backend?)
All this being said, I need to make that variable name dynamic, so that I can set whichever value I need, based on some other code. I have made several attempts.
This tells me the object (MV12_PV[17]) does not exist:
x = 12
ME["MV" .. x .. "_PV[17]"] = 23
But this works fine, setting priority 16 to 23:
x = 12
ME["MV" .. x] = 23
I was attempting some sort of what I think is called an evaluation, or eval. But this just prints out function followed by some random 8-digit number:
x = 12
test = assert(loadstring("MV" .. x .. "_PV[17] = 23"))
print(test)
Any help? Apologies if I am unclear - tbh, I am so far behind the 8-ball I am pretty much grabbing at straws.
Underscores can be part of Lua identifiers (variable and function names). They are just part of the variable name (like letters are) and aren't a special Lua operator like [ and ] are.
In the expression ME.MV12_PV[17] we have ME being an object with a bunch of fields, ME.MV12_PV being an array stored in the "MV12_PV" field of that object and ME.MV12_PV[17] is the 17th slot in that array.
If you want to access fields dynamically, the thing to know is that accessing a field with dot notation in Lua is equivalent to using bracket notation and passing in the field name as a string:
-- The following are all equivalent:
x.foo
x["foo"]
local fieldname = "foo"
x[fieldname]
So in your case you might want to try doing something like this:
local n = 12
ME["MV"..n.."_PV"][17] = 23
As for the loadstring attempt: loadstring only compiles the string into a function, which is why print showed function: followed by an address. You would have to call the returned function for the assignment to run, and even then the chunk would look up a global named MV12_PV rather than a field of ME.
BACnet "Commandable" objects (e.g. Binary Output, Analog Output, and optionally Binary Value, Analog Value, and a handful of others) actually have 16 priorities (1-16). The "17th" you are referring to may be the "Relinquish Default", a value that is used if all 16 priorities are set to NULL or "Relinquished".
Perhaps your system will allow you to write to a BACnet Property called "Relinquish Default".

How to check for condition inside a macro?

I have defined a macro like this,
#define SELECTED_SITE_ID (SITE_MANAGER.selectedSite.siteCollectionIdentifier)
It returns a double value that was stored in user defaults.
In the code, the SELECTED_SITE_ID macro is used in more than 1000 places like this:
int a = SELECTED_SITE_ID;
NSArray *array = [someClassObject objectAtIndex:a-1];
When the app runs for the first time, the SELECTED_SITE_ID macro returns 0.0, which is assigned to int a, so a will be 0.
The array access a-1 then becomes 0-1, and that leads to a crash.
I don't know a quick way to fix this for now, as it's written in approximately 1000 places.
What I'm thinking:
Could I check, inside the macro itself, what value is coming in? If it's 0, I would explicitly return 1.
Any help would be appreciated.
That was easy,
#define SELECTED_SITE_ID ((SITE_MANAGER.selectedSite.siteCollectionIdentifier <= 0.0) ? 1 : SITE_MANAGER.selectedSite.siteCollectionIdentifier)
A better way is:
#define SELECTED_SITE_ID (SITE_MANAGER.selectedSite.siteCollectionIdentifier ?: 1)
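One caveat with the first macro: it evaluates the siteCollectionIdentifier accessor twice (the GNU ?: form above evaluates it only once). If double evaluation or side effects are a concern, a small inline function is a possible alternative; this is only a sketch, reusing the SITE_MANAGER expression from the question:
// Hypothetical helper; assumes SITE_MANAGER expands to the shared manager object.
static inline double SelectedSiteID(void) {
    double identifier = SITE_MANAGER.selectedSite.siteCollectionIdentifier;
    return (identifier <= 0.0) ? 1.0 : identifier; // fall back to 1 on first launch
}
#define SELECTED_SITE_ID (SelectedSiteID())
Because the macro name stays the same, the roughly 1000 call sites would not need to change.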

Why when I use #define for int I need to wrap them in brackets?

This is an example I've found:
#define kNumberOfViews (37)
#define kViewsWide (5)
#define kViewMargin (2.0)
Why can it not be like this?
#define kNumberOfViews 37
#define kViewsWide 5
#define kViewMargin 2.0
And what does the k in front mean? Is there some guide for it?
It is not really required in your example, but the use of parentheses in defines is a useful way to make sure your define states exactly what you mean in its own context, and to protect it from side effects when it is used in code.
E.g.:
#define VAR1 40
#define VAR2 20
#define SAVETYPING1 VAR1-VAR2
#define SAVETYPING2 (VAR1-VAR2)
Then in your code
foo(4*SAVETYPING1); // comes out as foo(140)
Is not the same as
foo(4*SAVETYPING2); // comes out as foo(80)
As for what the k prefix means: it is used for constants. There is plenty of discussion on the origins here:
Objective C - Why do constants start with k
#define SOME_VALUE 1234
It is a preprocessor directive. It means that before your code is compiled, all occurrences of SOME_VALUE will be replaced by 1234. An alternative to this would be
const int kSomeValue = 1234;
For discussion about advantages of one or the other see
#define vs const in Objective-C
As for brackets: in more complex cases they are necessary precisely because the preprocessor does copy-and-paste with #define. Consider this example:
#define BIRTH_YEAR 1990
#define CURRENT_YEAR 2015
#define AGE CURRENT_YEAR - BIRTH_YEAR
...
// later in the code
int ageInMonths = AGE * 12;
Here one might expect that ageInMonths = 25 * 12, but instead it is computed as ageInMonths = 2015 - 1990 * 12 = 2015 - (1990 * 12). That is why the correct definition of AGE should have been
#define AGE (CURRENT_YEAR - BIRTH_YEAR)
As for naming conventions, AFAIK #define constants use all capitals with underscores, while const constants use camel case with a leading k.
The k is just a Hungarian-notation-style convention to indicate that something is a constant value. Personally I find it dumb, but it is a convention that many people follow. It isn't required for the code to work at all.
I am not sure why the examples you saw had parens around them; for simple literal values like these there is no need for parentheses around #define values (they only matter once the define contains an expression).

Is it better to write 0.0, 0.0f or .0f instead of simple 0 for supposed float or double values

Hello, well, it's all in the title. The question applies especially to all those values that can be an NSTimeInterval, CGFloat, or any other variable that is a float or a double. Thanks.
EDIT: I'm asking about value assignment, not formatting in a string.
EDIT 2: The question is really: is assigning a plain 0 to a float or a double worse than anything with an f at the end?
The basic difference is:
1.0 or 1. is a double constant
1.0f is a float constant
Without a suffix, a literal with a decimal in it (123.0) will be treated as a double-precision floating-point number. If you assign or pass that to a single-precision variable or parameter, the compiler will (should) issue a warning. Appending f tells the compiler you want the literal to be treated as a single-precision floating-point number.
If you are initializing a variable, then it makes no difference; the compiler does the cast for you.
float a = 0; // cast int 0 to float 0.0
float b = 0.0; // cast double 0.0 to float 0.0, as floating-point constants are double by default
float c = 0.0f; // assigning float to float; .0f is the same as 0.0f
But if you are using these in an expression, then it makes a big difference:
6/5 becomes 1
6/5.0 becomes 1.2 (double value)
6/5.0f becomes 1.2 (float value)
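If in doubt, a couple of printf calls make the difference visible; a minimal sketch:
#include <stdio.h>
int main(void) {
    printf("%d\n", 6 / 5);    // integer division: prints 1
    printf("%f\n", 6 / 5.0);  // double division: prints 1.200000
    printf("%f\n", 6 / 5.0f); // float division (promoted to double by printf)
    return 0;
}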
If you want to find out whether there is any difference in the binary code the target CPU executes, you can copy one of the compile command lines from Xcode to the command line, fix any missing environment variables, and add -S. That gives you assembly output you can use to compare. If you put all four variants in a small example source file, you can compare the resulting assembly code afterwards, even without being fluent in ARM assembly.
From my ARM assembly experience (okay... 6 years ago and GCC) I would bet 1ct on something like XORing a register with itself to flush its content to 0.
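A minimal sketch of that workflow (the file and variable names are illustrative):
// zero_variants.c -- the four spellings side by side;
// volatile keeps the stores from being optimized away.
void variants(void) {
    volatile float a = 0;    // plain int zero
    volatile float b = 0.0;  // double literal
    volatile float c = .0f;  // float literal, short form
    volatile float d = 0.0f; // float literal
}
Compiling with clang -S -O2 zero_variants.c writes zero_variants.s, and the four resulting store sequences can then be diffed directly.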
Whether you use 0.0, .0, or 0.0f (or even 0.f) does not make much of a difference. (There are some differences with respect to double and float.) You may even use (float) 0.
But there is a significant difference between 0 and the float notations: a plain 0 is an integer, and that can force the machine to perform integer operations when you may want float operations instead.
I do not have a good example for zero handy, but I've got one for float/int in general, which nearly drove me crazy the other day.
I am used to 8-bit RGB colors, because of my hobby as a photographer and my recent background as an HTML developer. So I found it difficult to get used to the Cocoa-style 0..1 fractions of red, green, and blue. To work around that, I wanted to use the values I was used to and divide them by 255.
[UIColor colorWithRed:128/255 green:128/255 blue:128/255 alpha:1.0];
That should have produced a nice middle gray. But it did not; everything I tried came out either black or white.
At first I thought this was caused by some undocumented deficiency of the UI text objects I was using this colour with. It took a while to realize that those constant values forced integer division, which truncates 128/255 to 0.
This expression eventually did what I wanted to achieve:
[UIColor colorWithRed:128.0/255.0 green:128.0/255.0 blue:128.0/255.0 alpha:1.0];
You could achieve the same thing with fewer .0s attached, but it does not hurt to have more of them than needed. 128.0f/(float)255 would work as well.
Edit to respond to your "Edit2":
float fvar;
fvar = 0;
vs ...
fvar = .0;
In the end it does not make a difference at all: fvar will contain the float value 0.0 either way, since the integer 0 converts to float exactly. For compilers of the '60s and '70s I would have guessed that there was a minor performance cost to fvar = 0, in that the compiler creates an int 0 first, which then has to be converted to float before the assignment. Modern compilers should optimize much better than the older ones did; in the end I'd have to look at the machine code output to see whether it makes any difference.
However, with fvar = .0; you are always on the safe side.

What are primitive types default-initialized to in C?

I just had Apple's C/C++ compiler initialize a float to a non-zero value (approx "-0.1").
That was a big surprise - and only happened occasionally (but 100% repeatably, if you ran through the same function calls / args beforehand). It took a long time to track down (using assertions).
I'd thought floats were zero-initialized. Googling suggests that I was thinking of C++ (which of course is much more precise about this stuff - c.f. SO: What are primitive types default-initialized to in C++? ).
But maybe Apple's excuse here is that their compiler was running in C mode ... so: what about C? What should happen, and (more importantly) what's typical?
(OF COURSE I should have initialized it manually - I normally do - but in this one case I failed. I didn't expect it to blow up, though!)
(Google is proving worse than useless for any discussion of this - their current search refuses to show "C" without "C++". It keeps deciding I'm too stupid, and ignores my input even when running in advanced mode.)
Here's the actual source example where it happened. At first I thought there might be a problem with the definitions of MAX and ABS (maybe MAX(ABS,ABS) doesn't always do what you'd expect?) ... but digging with assertions and the debugger, I eventually found it was the missing initialization - that float was getting initialized to a non-zero value VERY occasionally:
float crossedVectorX = ... // generates a float
float crossedVectorY = ... // generates a float
float infitesimal; // no manual init
float smallPositiveFloat = 2.0 / MAX( ABS(crossedVectorX), ABS(crossedVectorY));
// NB: confirmed with debugger + assertions that smallPositiveFloat was always positive
infitesimal += smallPositiveFloat;
NSAssert( infitesimal >= 0.0, @"This is sometimes NOT TRUE" );
Only objects with static storage duration are initialized to 0 if there is no explicit initializer.
#include <stdio.h>
float f; // initialized to 0, file scope variables have static storage
static float g; // initialized to 0
int main(void)
{
float h; // not initialized to 0, automatic storage duration
static float i; // initialized to 0
return 0;
}
Objects with automatic storage duration (like h in the example above) that are not explicitly initialized have an indeterminate value. Reading their value is undefined behavior.
EDIT: for the sake of completeness, since C11 objects with thread storage duration are also initialized to 0 if there is no explicit initializer.
The relevant part of the standard is §6.7.9 paragraph 10:
If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate.
If your variable had thread or static storage duration instead, then the next part of the paragraph would take effect:
If an object that has static or thread storage duration is not initialized explicitly, then:
-- if it has pointer type, it is initialized to a null pointer;
-- if it has arithmetic type, it is initialized to (positive or unsigned) zero;
...
I would also note that you should turn on your compiler's warnings (specifically the warning for uninitialized variables), as that should have identified the problem for you immediately.
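As a sketch of that advice: clang (which Xcode uses) flags the pattern from the question once warnings are enabled. The file and function names here are illustrative:
// uninit.c -- minimal reproduction of the bug
float accumulate(float delta) {
    float total;    // automatic storage duration: value is indeterminate
    total += delta; // read of uninitialized 'total'
    return total;
}
Compiling with clang -Wall -c uninit.c produces a -Wuninitialized diagnostic ("variable 'total' is uninitialized when used here") pointing at the exact line.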
A static variable would be initialized to zero, but I'm guessing you are talking about a local variable (i.e. stack, or automatic) - these are not initialized for you, and get whatever value happens to be in that memory on the stack.
I had to pull out my K&R for this answer:
In the absence of explicit initialization, external and static variables are guaranteed to be initialized to zero; automatic and register variables have undefined (i.e., garbage) initial values.
I don't believe that any of the standards for C define initial values for automatic variables. This would be in accord with the general philosophy of and application domain for C -- programming for grown-ups who may, one day, have reason to want their compiler to not initialise a variable for them, and who know that it is their responsibility to initialise their own variables.
