While reading a JSON parser in Swift, I found the following code:
description = "desc" <~~ json
I suppose it is similar to the following:
description = json["desc"]
Is that correct? If not, what does this operator mean?
Thanks
You are right, but it would be wrong to assume that this is something the operator is set out to do in Swift itself.
The parser being used appears to be Gloss, and it has written an operator overload specifically to mean description = json["desc"] (and/or some other work under the hood to make the parsing easier). The operator has no meaning per se in Swift; it was invented by the framework to do the parsing.
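To make this concrete, here is a minimal sketch of how a framework can define such an operator. This is a hypothetical simplification, not Gloss's actual implementation:

```swift
// Declare a custom infix operator (not part of the Swift language itself).
infix operator <~~

// Hypothetical simplification: look the key up in the dictionary and
// attempt a cast to the inferred result type.
func <~~ <T>(key: String, json: [String: Any]) -> T? {
    return json[key] as? T
}

let json: [String: Any] = ["desc": "a parsed description"]
// Behaves like `json["desc"] as? String` for this simplified version.
let description: String? = "desc" <~~ json
```

The real Gloss operator also handles nested key paths and type conversions, but the shape is the same: an ordinary generic function bound to a custom operator symbol.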
You can read about operator overloading here
EDIT
I have always incorrectly used the terms operator overloading and defining a custom operator interchangeably. Operator overloading means extending the implementation of existing operators, which is different from defining your own custom operators. Thank you SO MUCH for pointing this out, @Giacomo Alzetta!
When I learned Python, I learned that the big-O of string[:index] is O(index).
However, I've read Apple's developer documentation, and Swift's String is a bit different from other languages'. There is no documentation about the big-O of making a substring from an existing String.
I got curious about the big-O of String(text[..<index]) in Swift, so can anyone tell me what it is?
String is a collection of characters under the hood, and String conforms to BidirectionalCollection, a protocol that declares the subscript(bounds:) function as a required method. This change was introduced as part of SE-0163 and implemented in Swift 4.
Looking at the documentation of String.subscript(bounds:), it states that this method has O(1) complexity.
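It is worth breaking the expression into its steps, since they have different costs. A sketch, based on the standard library's documented behavior:

```swift
let text = "Hello, Swift"

// Computing an index is O(k) in the offset, because String indices
// cannot be found in constant time in variable-width UTF-8 storage.
let index = text.index(text.startIndex, offsetBy: 5)

// The slice itself is O(1): a Substring shares storage with the
// original String rather than copying it.
let slice = text[..<index]

// Converting the Substring back to a String copies the sliced
// characters, so this step is O(n) in the length of the slice.
let copy = String(slice)
```

So while the subscript is O(1), the full expression String(text[..<index]) is O(n) in the length of the slice because of the final copy (plus the cost of computing the index itself).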
I am new to Haskell, and I have been trying to write a JSON parser using Parsec as an exercise. This has mostly been going well, I am able to parse lists and objects with relatively little code which is also readable (great!). However, for JSON I also need to parse primitives like
Integers (possibly signed)
Floats (possibly using scientific notation such as "3.4e-8")
Strings with e.g. escaped quotes
I was hoping to find ready-to-use parsers for things like these as part of Parsec. The closest I have found is the Text.Parsec.Token module (which defines integer and friends), but those parsers require a "language definition" that seems far beyond what I should need to parse something as simple as JSON -- it appears to be designed for programming languages.
So my questions are:
Are the functions in Parsec.Token the right way to go here? If so, how to make a suitable language definition?
Are "primitive" parsers for integers etc defined somewhere else? Maybe in another package?
Am I supposed to write these kinds of low-level parsers myself? I can see myself reusing them frequently... (obscure scientific data formats etc.)
I have noticed that a question on this site says Megaparsec includes these primitives [1], but I suppose those cannot be used with Parsec.
Related questions:
How do I get Parsec to let me call `read` :: Int?
How to parse an Integer with parsec
Are the functions in Parsec.Token the right way to go here?
Yes, they are. If you don't care about the minutiae specified by a language definition (i.e. you don't plan to use the parsers which depend on them, such as identifier or reserved), just use emptyDef as a default:
import Text.Parsec
import qualified Text.Parsec.Token as P
import Text.Parsec.Language (emptyDef)
lexer = P.makeTokenParser emptyDef
integer = P.integer lexer
As you noted, this feels unnecessarily clunky for your use case. It is worth mentioning that megaparsec (cf. Alec's suggestion) provides a corresponding integer parser without the ceremony. (The flip side is that megaparsec doesn't try to bake in support for e.g. reserved words, but that isn't difficult to implement in the cases where you actually need it.)
I'm coming from a Java/Android background where we use null. Now I am doing Swift/iOS, and I am confused as to what Swift's nil means.
Can I use it like NULL in Java? Does it act the exact same way or is there something different about its usage?
You can think of null and nil as the same. Whether the language includes optionals is a separate concern.
Objective-C has nil but no built-in optionals, while Swift does. Similarly, Java has null but no built-in optionals, while several JVM languages such as Kotlin, Scala and Ceylon do, and did so before Swift.
Here's an article that compares null, nil and optionals in Kotlin, Scala and Swift: http://codemonkeyism.com/comparing-optionals-and-null-in-swift-scala-ceylon-and-kotlin/
Incidentally, for Android development you may want to investigate Kotlin and the associated Anko library from Jetbrains.
Swift also provides a nil-coalescing operator (a ?? b) for supplying a default value when an optional is nil, much like Kotlin's ?: operator.
Swift nil explanation
nil means "no value". Non-optional variables cannot be assigned nil even if they're classes, so it's explicitly not a null pointer and not similar to one.
I recommend reading more about optionals in Swift.
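A short illustration of the difference (the variable names are just examples):

```swift
// A non-optional variable can never hold nil; the compiler rejects it.
var name: String = "Ada"
// name = nil          // compile-time error

// An optional makes "no value" explicit in the type.
var nickname: String? = nil

// Unwrapping replaces the null checks you would write in Java.
nickname = "Countess"
if let unwrapped = nickname {
    print(unwrapped)    // runs only when nickname is non-nil
}
```

Unlike Java, where any reference can be null, only values of optional type can ever be nil in Swift, and the compiler forces you to handle that case.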
Disclosure: I don't know a whole lot about Java, so my answer comes from a C++/Objective-C/Swift perspective.
Matt Thompson of NSHipster has a great post about it and how it relates to Objective-C vs C. You can find it here.
The answer basically boils down to this: you should consider it the same.
0 is how you represent "nothing" for primitive values
NULL is the "nothing" literal for C pointers
nil is the "nothing" literal for Objective-C/Swift objects
I was wondering about the difference between using and not using type annotations (var a: Int = 1 vs var a = 1) in Swift, so I read Apple's The Swift Programming Language.
However, it only says:
You can provide a type annotation when you declare a constant or variable, to be clear about the kind of values the constant or variable can store.
and
It is rare that you need to write type annotations in practice. If you provide an initial value for a constant or variable at the point that it is defined, Swift can almost always infer the type to be used for that constant or variable
It doesn't mention the pros and cons.
It's obvious that using type annotations makes code clearer and self-explanatory, whereas omitting them makes the code easier to write.
Nonetheless, I'd like to know if there are any other reasons (for example, from the perspective of performance or the compiler) that I should or should not use type annotations in general.
Type annotations are entirely syntactic, so as long as you give the compiler enough information to infer the correct type, the effect and performance at run time are exactly the same.
Edit: I missed your reference to the compiler - I cannot see annotations having any significant impact on compile times either, as the compiler needs to evaluate your assignment expression and check type compatibility anyway.
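A small sketch of when the annotation does and doesn't matter (both forms compile to identical code when the inferred type matches):

```swift
// Inference and annotation yield the same type and the same compiled code.
let inferred = 1              // inferred as Int
let annotated: Int = 1        // explicitly Int

// An annotation matters when you want a type other than the one
// inference would pick for the literal:
let preciseDefault: Double = 1   // without the annotation this would be Int
```

So the choice is mostly about readability, except in cases like the last one where the annotation changes which type the literal is given.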
We can make a nested list in Erlang by writing something like this:
NL = [[2,3], [1]].
[[2,3],[1]]
But assume we wrote it like this instead:
OL = [[2,3]|1].
[[2,3]|1]
Is OL still a list? Can someone please elaborate more what OL is?
This is called an improper list and should typically not be used. Most library functions expect proper lists (e.g. length([1|2]) throws a badarg exception). Pattern matching with improper lists does work, though.
For some use cases, see Practical use of improper lists in Erlang (perhaps all functional languages)
More information about | and building lists is given in Functional Programming: what is an "improper list"? .