I have this code:
def errorMap = validateParams(params)
if (errorMap) {
    flash.errorMap = errorMap
    return
}
My question is this: can I combine the assignment on the first line with the evaluation of the condition on the second line to make a one-liner like the following:
if (flash.errorMap = validateParams(params)) {
    return
}
Is it a bad practice to do this?
Thanks
Vijay Kumar
We are indoctrinated in C-like languages that single-equals "=" should look like a typo in an if statement. Using a syntax where the single-equals is doing what you mean makes it harder to spot the typo cases.
Although you certainly can do this, my own two cents is that it's usually a bad practice. It's terse, but your if statement is now relying on the evaluation of the assignment, which may not be immediately obvious when you come back and revisit this code months later.
In my opinion it's a very good practice. Calling the function and testing its return value should be thought of together, and putting them together in the source code helps with that. If you do this habitually, it becomes essentially impossible to accidentally call the function but leave out the check of whether it succeeded.
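For what it's worth, the same idiom is long established in C, where the convention is to wrap the assignment in an extra pair of parentheses and compare explicitly, to signal to both the compiler and the reader that the assignment is intentional. A minimal sketch, with a hypothetical validate function:

#include <stdio.h>

/* Hypothetical validator: returns an error message, or NULL on success. */
const char *validate(int value) {
    return value < 0 ? "value must be non-negative" : NULL;
}

int main(void) {
    const char *err;
    /* Assign and test in one step; the extra parentheses and the explicit
       != NULL make the intent clear and silence compiler warnings. */
    if ((err = validate(-1)) != NULL) {
        printf("validation failed: %s\n", err);
        return 1;
    }
    return 0;
}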
If this is C++ or C# code, you can combine assignment and evaluation of the condition. Just be absolutely certain to avoid using assignment (=) where you mean comparison (==); you can waste hours tracking that bug down.
Also, be careful about conditions that modify their operands.
For example,
if (x++ > 100) doStuff()
vs
if (x+1 > 100) doStuff()
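To make that concrete, here's a small C sketch: the first condition modifies x as a side effect even though the branch is never taken, while the second leaves it unchanged.

#include <stdio.h>

int main(void) {
    int x = 5;
    if (x++ > 100) { }      /* branch not taken, but x still changes */
    printf("x = %d\n", x);  /* prints 6 */

    int y = 5;
    if (y + 1 > 100) { }    /* branch not taken, y is untouched */
    printf("y = %d\n", y);  /* prints 5 */
    return 0;
}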
While the code in isolation looks precise and elegant, an assignment operator (=) inside an if clause is easily misread as the far more common comparison operator (==), which will then cause you more problems.
I wouldn't use it in practice. It could make a good multiple-choice question though.
There's a very narrow semantic difference between the two, and I find myself wondering why both options exist. Are they in any way different functionally, or is one likely just an alias of the other?
There is no difference at all. They are, in fact, the very same method.
To the compiler,
myQueue.async(execute: { foo() })
is exactly the same as
myQueue.async {
    foo()
}
When the last argument of any function or method is a function, you can pass that argument as a trailing closure instead of passing it inside the argument list. This is done in order to make higher-order functions such as DispatchQueue.async feel more like part of the language, reduce syntactic overhead and ease the creation of domain-specific languages.
There's documentation on trailing closure syntax here.
And by the way, the idiomatic way to write my first example would be:
myQueue.async(execute: foo)
What you're referring to is called trailing closure syntax. It's a syntactic sugar for making closures easier to work with.
There are many other kinds of syntactic sugar features that pertain to closures, which I cover in my answer here.
As always, I highly recommend the Swift Language guide, which does a great job at explaining the basics like this.
I use == and != a lot in my code, and I was wondering which is quicker in Objective-C, so that I can make my app as fast as possible.
Situation
I have a variable which is one of two things, and I want the quickest method to see which one it is.
Thanks in advance
You should not worry about this level of detail for performance reasons, unless you've identified a performance issue.
However, wondering to satisfy an inquiring mind is a different matter! :-) The answer is they are identical.
A comparison is usually compiled to an instruction which sets condition flags (either a dedicated comparison instruction, or something like an arithmetic instruction which sets condition codes), followed by a conditional jump which tests those flags. A test for "equal" costs the same as a test for "not equal"; it's just a different setting of those condition flags.
This also means that statements such as if([some method call]) ... and if(![some method call]) ... have the same cost - the "not" operator produces no extra code.
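As a rough C illustration of that point (the exact instructions depend on the compiler, target, and optimization flags):

#include <stdio.h>

/* On x86, both functions typically compile to the same cmp instruction;
   the only difference is the conditional jump chosen afterwards
   (je for ==, jne for !=), and the two jumps cost the same. */
void check_eq(int a, int b)  { if (a == b) puts("equal"); }
void check_neq(int a, int b) { if (a != b) puts("not equal"); }

int main(void) {
    check_eq(1, 1);
    check_neq(1, 2);
    return 0;
}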
You can test it yourself: check the current milliseconds before and after the operation. I'd guess there's no difference. If you really need to know, run the operation many times in a loop; then you will get the answer.
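If you do want to try that, here's a rough sketch of such a micro-benchmark in C (treat the numbers with suspicion: they are easily distorted by compiler optimizations and measurement noise):

#include <stdio.h>
#include <time.h>

int main(void) {
    /* volatile discourages the compiler from optimizing the loops away */
    volatile int x = 1;
    volatile long hits = 0;

    clock_t start = clock();
    for (long i = 0; i < 100000000L; i++)
        if (x == 1) hits++;
    double eq_secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    start = clock();
    for (long i = 0; i < 100000000L; i++)
        if (x != 0) hits++;
    double neq_secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("==: %.3fs, !=: %.3fs (hits = %ld)\n", eq_secs, neq_secs, hits);
    return 0;
}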
This is silly. You would have to execute millions of iterations of code using the two versions of the if statement in order to even detect a difference in speed. This is a triviality, and not worth worrying about.
As the other poster said, == and != should take exactly the same amount of time for non-floating-point values. For floating point there might be some difference, since for an equality comparison the processor first has to normalize the two floating-point values and then compare them, and normalizing is relatively time-consuming. I don't know whether testing for inequality is slower than testing for equality; it's unlikely, but not impossible.
This is a question I've been mildly irritated about for some time and just never got around to search the answer to.
However I thought I might at least ask the question and perhaps someone can explain.
Basically many languages I've worked in utilize syntactic sugar to write (using syntax from C++):
int main() {
    int a = 2;
    a += 3; // a = a + 3
}
while in Lua += is not defined, so I would have to write a = a + 3, which again is all about syntactic sugar. When using a more "meaningful" variable name such as bleed_damage_over_time or something, it starts getting tedious to write:
bleed_damage_over_time = bleed_damage_over_time + added_bleed_damage_over_time
instead of:
bleed_damage_over_time += added_bleed_damage_over_time
So what I would like to know is not how to solve this (though if you have a nice solution, I would of course be interested in hearing it), but rather why Lua doesn't implement this syntactic sugar.
This is just guesswork on my part, but:
1. It's hard to implement this in a single-pass compiler
Lua's bytecode compiler is implemented as a single-pass recursive descent parser that immediately generates code. It does not parse to a separate AST structure and then in a second pass convert that to bytecode.
This forces some limitations on the grammar and semantics. In particular, anything that requires arbitrary lookahead or forward references is really hard to support in this model. This means assignments are already hard to parse. Given something like:
foo.bar.baz = "value"
When you're parsing foo.bar.baz, you don't realize you're actually parsing an assignment until you hit the = after you've already parsed and generated code for that. Lua's compiler has a good bit of complexity just for handling assignments because of this.
Supporting self-assignment would make that even harder. Something like:
foo.bar.baz += "value"
Needs to get translated to:
foo.bar.baz = foo.bar.baz + "value"
But at the point that the compiler hits the =, it's already forgotten about foo.bar.baz. It's possible, but not easy.
2. It may not play nice with the grammar
Lua doesn't actually have any statement or line separators in the grammar. Whitespace is ignored and there are no mandatory semicolons. You can do:
io.write("one")
io.write("two")
Or:
io.write("one") io.write("two")
And Lua is equally happy with both. Keeping a grammar like that unambiguous is tricky. I'm not sure, but self-assignment operators may make that harder.
3. It doesn't play nice with multiple assignment
Lua supports multiple assignment, like:
a, b, c = someFnThatReturnsThreeValues()
It's not even clear to me what it would mean if you tried to do:
a, b, c += someFnThatReturnsThreeValues()
You could limit self-assignment operators to single assignment, but then you've just added a weird corner case people have to know about.
With all of this, it's not at all clear that self-assignment operators are useful enough to be worth dealing with the above issues.
I think you could just rewrite this question as
Why doesn't <languageX> have <featureY> from <languageZ>?
Typically it's a trade-off that the language designers make based on their vision of what the language is intended for, and their goals.
In Lua's case, the language is intended to be an embedded scripting language, so any changes that make the language more complex or potentially make the compiler/runtime even slightly larger or slower may go against this objective.
If you implement each and every tiny feature, you can end up with a 'kitchen sink' language: Ada, anyone?
And as you say, it's just syntactic sugar.
Another reason why Lua doesn't have self-assignment operators is that table access can be overloaded with metatables to have arbitrary side effects. For self-assignment you would need to choose whether to desugar
foo.bar.baz += 2
into
foo.bar.baz = foo.bar.baz + 2
or into
local tmp = foo.bar
tmp.baz = tmp.baz + 2
The first version runs the __index metamethod for foo twice, while the second one does so only once. Not including self-assignment in the language and forcing you to be explicit helps avoid this ambiguity.
I.e, if I have a record
-record(one, {frag, left}).
Is record_info(fields, one) always going to return [frag, left]?
Is tl(tuple_to_list(#one{frag = "Frag", left = "Left"})) always going to be ["Frag", "Left"]?
Is this an implementation detail?
Thanks a lot!
The short answer is: yes, as of this writing it will work. The better answer is: it may not work that way in the future, and the nature of the question concerns me.
It's safe to use record_info/2, although relying on the order may be risky, and frankly I can't think of a situation where doing so makes sense, which suggests you are solving the problem the wrong way. Can you share more details about what exactly you are trying to accomplish, so we can help you choose a better method? It could be that simple pattern matching is all you need.
As for the example with tuple_to_list/1, I'll quote from "Erlang Programming" by Cesarini and Thompson:
"... whatever you do, never, ever use the tuple representations of records in your programs. If you do, the authors of this book will disown you and deny any involvement in helping you learn Erlang."
There are several good reasons why, including:
Your code will become brittle - if you later change the number of fields or their order, your code will break.
There is no guarantee that the internal representation of records will continue to work this way in future versions of erlang.
Yes, the order is always the same, because records are represented by tuples, for which order is an essential property. See also my other answer about records, with examples: Syntax Error while accessing a field in a record
Yes, in both cases Erlang will retain the 'original' order. And yes, it's an implementation detail: it's not specifically addressed in the function spec or documentation, though it's a pretty safe bet it will stay like that.
I've seen this around, but never heard a clear explanation of why... This is really for any language, not just C# or VB.NET or Perl or whatever.
When comparing two items, sometimes the "check" value is put on the left side instead of the right. Logically to me, you list your variable first and then the value to which you're comparing. But I've seen the reverse, where the "constant" is listed first.
What (if any) gain is there to this method?
So instead of:
if (myValue > 0)
I've seen:
if (0 < myValue)
or
if (Object.GimmeAnotherObject() != null)
is replaced with:
if (null != Object.GimmeAnotherObject())
Any ideas on this?
TIA!
Kevin
Some developers put the constant on the left like so:
if(0 == myValue)
This is because if you mistype the comparison as an assignment, you will get an error from the compiler, since you can't assign a value to 0:
if(0 = myValue) // compiler error
This prevents lots of painful debugging down the road, since typing
if(myValue = 0)
is perfectly legal, but most likely you meant
if(myValue == 0)
The first choice is not what you want. It will subtly change your program and cause all sorts of headaches. Hope that clarifies!
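A compact C demonstration of the difference (the diagnostic wording below is gcc's; other compilers phrase it differently):

int main(void) {
    int myValue = 5;

    if (myValue = 0) { }   /* legal: assigns 0, condition is false;
                              at best the compiler only warns */
    if (0 == myValue) { }  /* the intended comparison */
    /* if (0 = myValue) { } would not compile:
       "lvalue required as left operand of assignment" */
    return 0;
}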
I don't think a simple rule like "put the constant first" or "put the constant last" is a very smart choice. I believe the check should express its semantics. For example, I prefer to use both orderings in a range check:
if ((low <= value) && (value <= high))
{
    DoStuff(value);
}
But I agree on the examples you mentioned - I would put the constant last, and I can see no obvious reason for doing it the other way:
if (object != null)
{
    DoStuff(object);
}
In C++, both of these are valid and compile:
if(x == 1)
and
if(x=1)
but if you write it like this
if(1==x)
and
if(1=x)
then the assignment to 1 is caught and the code won't compile.
It's considered "safer" to put the constant on the left-hand side.
Once you get into the habit of putting the constant on the left, it tends to become your default mode of operation, which is why you see it showing up in equality checks as well.
For .Net, it's irrelevant, as the compiler won't let you make an assignment in a condition like:
if(x=1)
because (as everyone else said) it's bad practice (because it's easy to miss).
Once you don't have to worry about that, it's slightly more readable to put the variable first and the value second, but that's the only difference - both sides need to be evaluated.
It's a coding practice to catch typos such as '!=' mistyped as '='.
If you have a CONSTANT on the left, any accidental assignment will be caught by the compiler, since you cannot assign to a constant.
Many languages (specifically C) allow a lot of flexibility in writing code. While the constant on the left may seem unusual to you, you can also program assignments and conditionals together, as in:
if (var1 = (var2 & var3)) { /* do something */ }
This code puts the result of var2 & var3 into var1, and also does /* do something */ if that result is true.
A related coding practice is to avoid writing code where conditional expressions have assignments within them, even though the programming language allows such things. You don't come across such code a lot, since assignments within conditionals are unusual, so typical code does not contain them.
There is a good C language coding practices article at the IBM DeveloperWorks site that is probably still relevant for people writing in that language.