After updating to Xcode 7.3, there are a bunch of warnings showing in my project:
'++' is deprecated: it will be removed in Swift 3
Any idea how to fix this warning? And is there a reason why ++ and -- will be removed in the future?
Since Swift 2.2, you should use += 1 or -= 1 instead.
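For example, a minimal before/after sketch (x is an illustrative variable):

// Before (deprecated in Swift 2.2, removed in Swift 3):
// x++
// After:
var x = 0
x += 1      // increments in place; the expression itself returns Void
let y = x   // if you need the incremented value, read it in a separate statement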
And looking at the Swift evolution proposal, these are the reasons given for removing these operators:
These operators increase the burden to learn Swift as a first programming language - or any other case where you don't already know these operators from a different language.
Their expressive advantage is minimal - x++ is not much shorter than x += 1.
Swift already deviates from C in that the =, += and other assignment-like operations return Void (for a number of reasons). These operators are inconsistent with that model.
Swift has powerful features that eliminate many of the common reasons you'd use ++i in a C-style for loop in other languages, so these are relatively infrequently used in well-written Swift code. These features include the for-in loop, ranges, enumerate, map, etc. (see the sketch after this list).
Code that actually uses the result value of these operators is often confusing and subtle to a reader/maintainer of code. They encourage "overly tricky" code which may be cute, but difficult to understand.
While Swift has well defined order of evaluation, any code that depended on it (like foo(++a, a++)) would be undesirable even if it was well-defined.
These operators are applicable to relatively few types: integer and floating point scalars, and iterator-like concepts. They do not apply to complex numbers, matrices, etc.
Finally, these fail the metric of "if we didn't already have these, would we add them to Swift 3?"
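To make those features concrete, here is a minimal sketch in Swift 3 syntax (where enumerate became enumerated(); the values are illustrative):

for i in 1...5 {        // ranges + for-in replace for (i = 1; i <= 5; i++)
    print(i)
}

let names = ["a", "b", "c"]
for (index, name) in names.enumerated() {   // replaces a manually maintained index++
    print(index, name)
}

let doubled = [1, 2, 3].map { $0 * 2 }      // replaces accumulate-with-a-counter loops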
Please check out the Swift evolution proposal for more info.
Related
Instead of using:
for (n = 1; n <= 5; n++) {   // C-style for loop, removed from Swift by SE-0007
    print(n)
}
why do we use the following construct in Swift?
for n in 1...5 {
    print(n)
}
// Output: 1 2 3 4 5
"I am certainly open to considering dropping the C-style for loop.
IMO, it is a rarely used feature of Swift that doesn’t carry its
weight. Many of the reasons to remove them align with the rationale
for removing -- and ++."
-- Chris Lattner
There is a proposal about removing the increment and decrement operators: https://github.com/apple/swift-evolution/blob/master/proposals/0004-remove-pre-post-inc-decrement.md
It gives the same rationale quoted in the answer above.
And there is a proposal about removing C-style for loops:
https://github.com/apple/swift-evolution/blob/master/proposals/0007-remove-c-style-for-loops.md
It argues:
Both for-in and stride provide equivalent behavior using
Swift-coherent approaches without being tied to legacy terminology.
There is a distinct expressive disadvantage in using for-loops
compared to for-in in succinctness
for-loop implementations do not lend themselves to use with
collections and other core Swift types.
The for-loop encourages use of unary incrementors and decrementors,
which will be soon removed from the language.
The semicolon-delimited declaration offers a steep learning curve
for users arriving from non-C-like languages
If the for-loop did not exist, I doubt it would be considered for
inclusion in Swift 3.
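To make the quoted alternatives concrete, here is a minimal sketch in Swift 3 syntax (variable names are illustrative) of for-in and stride covering the common C-style patterns:

// for (n = 1; n <= 5; n++) becomes:
for n in 1...5 {
    print(n)            // 1 2 3 4 5
}

// for (i = 0; i < 10; i += 2) becomes:
for i in stride(from: 0, to: 10, by: 2) {
    print(i)            // 0 2 4 6 8
}

// Counting down, for (i = 5; i >= 1; i--) becomes:
for i in stride(from: 5, through: 1, by: -1) {
    print(i)            // 5 4 3 2 1
}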
I am writing a Swift app and am dealing with decimals in a database (stored in MySQL as decimals with 2 digits). Basically it's sales someone made each day, so generally anything from $0 to $1000, not millions, and nothing insane in terms of trailing decimals; everything is always rounded to 2 decimal places.
Referencing this helped me out:
How to properly format currency on iOS
...But I wanted to just do a quick sanity check here and make sure this strategy is OK, i.e., I would use NSDecimal or NSDecimalNumber (is there a preferred Swift equivalent?).
What would you all recommend I do when dealing with currency in Swift? I'd like to use the locale-based currency symbol as well. I have a class called Sales that contains the amount in question; what do you recommend the data type be?
Apologies if I am coming off lazy, I actually have some ideas on what to do but feel a little overwhelmed at the "right" approach, especially in a locale-sensitive way, and wanted to check in here with you all.
Thanks so much!
Update for Swift 3: A Decimal type is now available with built-in support for operators like *, /, +, <, etc. When used in an Any context (passed to Objective-C), it's bridged to NSDecimalNumber.
Old answer:
NSDecimal is not really supported in Swift (it's a weird opaque pointer type), but NSDecimalNumber is — and as in Obj-C, it's the best thing to use for base-ten arithmetic (because it actually does its operations in base ten). NSLocale, NSNumberFormatter and friends all work too and should satisfy your localization needs.
Swift 3 now has a Decimal (value) type which is bridged to NSDecimalNumber.
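A minimal Swift 3 sketch of both pieces (the amount and locale are illustrative; in production you would typically leave the formatter on the user's current locale):

import Foundation

// Construct from a string to avoid binary floating-point rounding:
var total = Decimal(string: "19.99")!
total += 5                                       // built-in operators work directly on Decimal

let formatter = NumberFormatter()
formatter.numberStyle = .currency                // uses the locale's currency symbol
formatter.locale = Locale(identifier: "en_US")   // illustrative; defaults to the current locale

// Decimal bridges to NSDecimalNumber for the formatter:
print(formatter.string(from: NSDecimalNumber(decimal: total)) ?? "")   // "$24.99"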
Each tuple cardinality is represented by its own type in Swift (as in any other strongly typed programming language I'm aware of), so we have:
($T1, $T2)
($T1, $T2, $T3)
...
Since we have several different types, one per cardinality, the set of supported cardinalities has to be finite.
In Scala we have up to Tuple22, in Haskell the current limit should be 64.
What's the limit (if any) in Swift? Also, are the type implementations generated by the compiler, or is there an explicit implementation I couldn't find?
In the current version of Xcode 6 Beta, compilation fails with tuples of arity larger than 1948 (the swift executable exits with code 254; there isn't a specific warning or error). As for the second question: tuple types in Swift are structural and handled directly by the compiler; there is no explicit TupleN declaration in the standard library to find.
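A small sketch of what one-type-per-cardinality means in practice (names are illustrative):

let pair: (Int, String) = (1, "one")
let triple: (Int, String, Bool) = (1, "one", true)

// Each arity is a distinct structural type, so the following would not compile:
// let p: (Int, String) = triple   // error: cannot convert '(Int, String, Bool)' to '(Int, String)'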
How could I handle operations with a number like:
48534588306961133067968196965257961415756656521818848750723547477673457670019632882524164647651492025728980571833579341743988603191694784406703
Nothing that I've tried worked so far... unsigned long, long long, etc...
What you need is a library that provides support for operations on integers of arbitrary length. However, from what I've been able to find out, there are no such libraries written in Objective-C.
You are nevertheless in luck as Objective-C is a superset of C. That makes it possible for you to use C libraries such as those described in the answers to this somewhat dated SO question.
Also, since the Clang compiler supports C++ and allows combining Objective-C and C++ code (Objective-C++), you can probably use something like a C++ bigint library.
Note that none of the built-in types is even close to being big enough to represent numbers with as many digits as your examples. The biggest available integer type is unsigned long long, if you don't need negative numbers, and its size is 8 bytes/64 bits, which gives you a range of 0-18446744073709551615, or 20 digits max.
You could use JKBigInteger instead; it is an Objective-C wrapper around the LibTomMath C library, and it is really easy to use and understand.
In your case:
// "int" is a C keyword, so it cannot be used as a variable name:
JKBigInteger *bigInt = [[JKBigInteger alloc] initWithString:@"48534588306961133067968196965257961415756656521818848750723547477673457670019632882524164647651492025728980571833579341743988603191694784406703"];
You can also try GMP: http://gmplib.org/
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface.
The notation in question is described by the example below:
T for type
P for pointer
F for field
A for argument
L for local
et cetera; there is at least an S missing from the list, but I'm not sure what it designates.
The first 3 prefixes have been with Delphi since the very beginning; the last 2 I've noticed relatively recently. I'd like to know the notation's name (if any) and to read some normative whitepaper (and then maybe adopt it).
Zarko Gajic has a pretty good Delphi-specific list here:
http://delphi.about.com/od/standards/l/bldnc.htm
Personally, I find some conventions like this useful. I still remember my first language, FORTRAN, where the convention for integers was to start them with any letter from I to N, and it was easy to remember because those are the first two letters of INteger.
Section "3.3 Field Naming" of the Object Pascal Style Guide by Charles Calvert gives a brief but good guide as to when to use Hungarian notation, and also what single character identifier names are appropriate. My FORTRAN background (8 character names max) also made me use "N" as the count of items and led to code such as:
DO 10 I = 1, N
DO 20 J = I, N
...
20 CONTINUE
10 CONTINUE
Ouch! The memories hurt.
My personal favorite of all these standards is to obey the standards already established in the code you're in, not to try to impose a different standard 50% of the way through, and to religiously avoid bikeshed discussions.
But if you press me really hard, I'll admit I prefer Charlie Calvert's standards as used by the JVCL devs, the same as the "section 3.3" link by LKessler above.
Hungarian notation.
With modern IDEs (including Delphi's) many people (myself included) feel it is no longer necessary.
EDIT: Technically this is not true Hungarian notation, as sometimes the prefix indicates the scope rather than the type.