I am just thinking about the difference between the methods below for defining constants:
Method1:
Create a header file to define all the constants, using include guard:
#ifndef c1
#define c1 @"a123456789"
#endif
then assign the constant to the function:
Identity.number = c1;
Method2:
Just simply define the constant
#define c1 @"a123456789"
then assign the constant to the function:
Identity.number = c1;
Method3:
Do not define a constant, just assign the value to a function:
Identity.number = @"a123456789";
Any pros and cons for the above?
The first method is important when you need to make sure the constant is only defined once. The third method doesn't let the IDE help you with autocompletion, which matters when the value of the constant is more complex.
Methods 1 and 2 are much better for bigger projects, because you can easily change the value of the constant in one place.
Method 1 may be especially good for very big projects with many files, but is not really necessary for smaller projects.
In method 3, you have to search through every line of code to find the value you want to change (if you use it in more places). Therefore, I think it is bad practice.
Related
So...I want to create five different polynomials inside a loop in order to make a Sturm sequence, but I don't seem to be able to dynamically name a set of polynomials with different names.
For example:
In the first iteration it would define p1(x):whatever
Then, in the second iteration it would define p2(x):whatever
Lastly, in the Nth iteration it would define pn(x):whatever
So far, I have managed to simply store them in a list and call them one by one by their position. But surely there is a more professional way to accomplish this?
Sorry for the non-technical language :)
I think a subscripted variable is appropriate here. Something like:
for k:1 thru 5 do
p[k] : make_my_polynomial(k);
Then p[1], ..., p[5] are your polynomials.
When you assign to a subscripted variable e.g. something like foo[bar]: baz, where foo hasn't been defined as a list or array already, Maxima creates what it calls an "undeclared array", which is just a lookup table.
EDIT: You can refer to subscripted variables without assigning them any values. E.g. instead of x^2 - 3*x + 1 you could write u[i]^2 - 3*u[i] + 1 where u[i] is not yet assigned any value. Many (most?) functions treat subscripted variables the same as non-subscripted ones, e.g. diff(..., u[i]) to differentiate w.r.t. u[i].
Is there any m4 syntax that is equivalent to this C preprocessor construct?
#if defined A || defined B
do something
#endif
The short answer is no.
The long answer:
Checking if macros are defined
define(`defined', `ifelse($1()$1, `$1()$1', ``0'', ``1'')')
ifelse(eval(defined(`A') || defined(`B')),
1,
``At least one is defined'',
``Neither are defined'')
There are no sensible ways to check for a defined macro in m4, so you would have to resort to hacks like the above.
How it works
ifelse checks two strings for equality. In the defined macro, I've expanded the macro in $1 twice (once as $1(), once as $1) and compared the result against the literal string $1()$1, so if $1 does not expand, the two sides compare equal and the macro is reported as not defined. The reason for expanding the macro in two different ways is that A could be defined as ``A'' or ``A()'', which would otherwise cause false negatives when using this method to check whether or not it is defined.
I'm then using that defined macro within an eval to throw the || logic on top.
Caveats
If you use the word defined in your document already, you might want to give the macro a different name.
The defined macro will not work on macros defined to expand to unquoted syntactic markers like (, ,, or ).
If the macro to be checked is infinitely recursive, the defined check will also never return. (Essentially, realize that a hack like this is still actually executing the macro.)
Though the last two points are something you'd expect from any ifelse check on a macro, they might not be intuitive coming from a macro purporting to check whether another macro is defined.
A better way
I would much rather suggest that you define the variables with some default value first, and just avoid the problem of checking whether it is defined or not altogether.
This is much easier to do:
# Define this right off the bat:
define(`A', ``0'')
# Maybe later this line will come up...
# Quotes around the A are mandatory
define(`A', ``1'')
# Then soon after that, you can check:
ifelse(A, `0', , ``hey, A is nonzero!'')
I'm learning Ruby and have just encountered implicit receivers for methods e.g. when I call the method normalize without specifying a receiver it is interpreted with an implicit receiver as self.normalize.
My question is when someone is reading my code how can they easily tell that normalize is a method called on the implicit receiver and not a variable such as normalize = "normalize"?
It seems to me that both when it is a method call normalize and when it is a variable normalize they appear identical in the code.
Whenever you use =, as in your example of normalize = "normalize", Ruby sets a variable local to the block; you would need to explicitly add self. to assign to an attribute. The implicit receiver of self only comes into play when you are not assigning via =.
Also, to summarize what is in the comments:
There is a naming convention here too. Use verbs for methods (normalize) and nouns for variables (normalized_value).
A good IDE, like RubyMine, (and even many plain editors) will use syntax highlighting to make a visual distinction, but if it is not obvious immediately or after scanning previous 5 lines, you likely have some bigger problems with your code.
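A minimal sketch (the class and method names here are made up) showing how Ruby resolves a bare name in each situation:

```ruby
class Greeting
  def normalize
    "from method"
  end

  def no_local
    normalize              # no local variable in scope: method call on self
  end

  def with_local
    normalize = "from local"
    normalize              # the local variable now shadows the method
  end

  def explicit_receiver
    normalize = "from local"
    self.normalize         # explicit receiver always resolves to the method
  end
end

g = Greeting.new
g.no_local          # => "from method"
g.with_local        # => "from local"
g.explicit_receiver # => "from method"
```

So the parser decides at the point of assignment: once `normalize = ...` has been seen in a method body, the bare name is a local variable, and only `self.normalize` reaches the method.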
What would be the difference between say doing this?
#define NUMBER 10
and
float number = 10;
In what circumstances should I use one over the other?
#define NUMBER 10
Will create a textual replacement that is carried out by the preprocessor (i.e. before compilation proper).
float number = 10;
Will create a float in the data-segment of your binary and initialize it to 10. I.e. it will have an address and be mutable.
So writing
float a = NUMBER;
will be the same as writing
float a = 10;
whereas writing
float a = number;
will create a memory-access.
As Philipp says, the #define form creates a replacement in your code at the preprocessing stage, before compilation. Because the #define isn't a variable like number, your definition is hard-baked into your executable at compile time. This is desirable if the thing you are representing is truly a constant that doesn't need to be calculated or read from somewhere at runtime, and which doesn't change during runtime.
#defines are very useful for making your code more readable. Suppose you were doing physics calculations: rather than just plonking 9.8f into your code everywhere you need the gravitational acceleration constant, you can define it in just one place, which improves your code's readability:
#define GRAV_CONSTANT 9.8f
...
float finalVelocity = beginVelocity + GRAV_CONSTANT * time;
EDIT
Surprised to come back and find my answer and see I didn't mention why you shouldn't use #define.
Generally, you want to avoid #define and use constants that are actual types, because #defines don't have scope, and types are beneficial to both IDEs and compilers.
See also this question and accepted answer: What is the best way to create constants in Objective-C
"#Define" is actually a preprocessor macro which is run before the program starts and is valid for the entire program
Float is a data type defined inside a program / block and is valid only within the program / block.
What indicator do you use for member declaration in F#? I prefer
member a.MethodName
because this is too many letters, and x is used otherwise.
I do almost always use x as the name of this instance. There is no logic behind that, aside from the fact that it is shorter than other options.
The options that I've seen are:
member x.Foo // Simply use (short) 'x' everywhere
member ls.Foo // Based on type name as Benjol explains
member this.Foo // Probably comfortable for C# developers
member self.Foo // I'm not quite sure where this comes from!
member __.Foo // Double underscore to resemble 'ignore pattern'
// (patterns are not allowed here, but '__' is identifier)
The option based on the type name makes some sense (and is good when you're nesting object expressions inside a type), but I think it could be quite difficult to find reasonable two/three abbreviation for every type name.
I don't have a system. I wonder if I should have one, and I am sure there will be a paradigm with its own book some day soon. I tend to use the first letter(s) of the type name, like Benjol.
This is a degree of freedom in F# we could clearly do without. :)
I tend to use some kind of initials which represent the type so:
type LaserSimulator =
member ls.Fire() =
I largely tend to use self.MethodName, for the single reason that self represents the current instance by convention in the other language I use most: Python. Come to think of it, I used Delphi for some time and they have self as well instead of this.
I have been trying to convert to a x.MethodName style, similar to the two books I am learning from: Real World Functional Programming and Expert F#. So far I am not succeeding, mainly because referring to x rather than self (or this) in the body of the method still confuses me.
I guess what I am saying is that there should be a meaningful convention. Using this or self has already been standardised by other languages. And I personally don't find the three letter economy to be that useful.
Since I work in the .NET world, I tend to use "this" on the assumption that most of the .NET people who encounter F# will understand its meaning. Of course, the other edge of that sword is that they might get the idea that "this" is the required form.
.NET self-documentation concerns aside, I think I would prefer either: "x" in general, or -- like Benjol -- some abbreviation of the class name (e.g. "st" for SuffixTrie, etc.).
The logic I use is this: if I'm not using the instance reference inside the member definition, I use a double underscore ('__'), a la let-binding expressions. If I am referencing the instance inside the definition (which I don't do often), I tend to use 'x'.