Infinite length array in swift [closed] - ios

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I noticed that Haskell handles infinite-length lists efficiently and with great ease.
So, being a Swift programmer, I am curious: how can we achieve this in Swift?
For example:
var infiniteArray = [1,2,3,.............]

Swift's Array stores concrete, eagerly evaluated elements, so it can't be infinite (memory is finite).
The Swift equivalent is an infinite Sequence. Here's an example that produces an infinite sequence of the natural numbers.
let naturalNumbers = sequence(first: 0, next: { $0 + 1 })
let first5NaturalNumbers = Array(naturalNumbers.prefix(5))
print(first5NaturalNumbers)
It uses the sequence(first:next:) function to produce an UnfoldSequence, which is an infinitely long, lazily evaluated sequence.
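Because the sequence is lazy, it also composes: map and filter stay infinite until a finite prefix is requested. A minimal sketch building on the same idea (the variable names here are illustrative):

```swift
// Infinite sequence of natural numbers, as in the answer above.
let naturals = sequence(first: 0, next: { $0 + 1 })

// .lazy keeps map/filter from trying to evaluate eagerly (and forever).
let evenSquares = naturals.lazy
    .map { $0 * $0 }          // 0, 1, 4, 9, 16, 25, 36, ...
    .filter { $0 % 2 == 0 }   // 0, 4, 16, 36, ...

// Only materialise a finite prefix.
print(Array(evenSquares.prefix(4))) // [0, 4, 16, 36]
```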

Haskell has lazy evaluation because it is a purely functional language (no side effects, etc.). Swift is not a functional language (though it borrows some functional features and syntax) and does not share those semantics.
That being said, you can use Sequences and Generators to create the impression of infinite lists - see http://blog.scottlogic.com/2014/06/26/swift-sequences.html for example. Of course the list is not really infinite; by the way, the Haskell list isn't infinite either, it just stores the function needed to create new entries.
The main difference is that Haskell makes some major performance optimisations possible due to the lack of variables and side effects. In Swift you cannot rely on those. So be careful when translating Haskell code to Swift.
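To make the "impression of infinite lists" concrete, here is a minimal sketch of a hand-rolled infinite sequence (Fibonacci numbers; the type name is made up for illustration):

```swift
// An infinite sequence of Fibonacci numbers. The "infinity" lives in
// the iterator's state, not in storage; next() never returns nil.
struct Fibonacci: Sequence, IteratorProtocol {
    private var a = 0
    private var b = 1
    mutating func next() -> Int? {
        defer { (a, b) = (b, a + b) }
        return a
    }
}

// As with Haskell's lazy lists, you must take a finite prefix to consume it.
print(Array(Fibonacci().prefix(8))) // [0, 1, 1, 2, 3, 5, 8, 13]
```

Iterating without prefix would loop forever (and eventually overflow Int), just as forcing an entire infinite Haskell list diverges.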

If your intention is to pass an unknown number of parameters to a function (logically speaking, you can't say "an infinite number of parameters" because of the machine's finite memory), what you want is a variadic parameter:
A variadic parameter accepts zero or more values of a specified type. You use a variadic parameter to specify that the parameter can be passed a varying number of input values when the function is called. Write variadic parameters by inserting three period characters (...) after the parameter’s type name.
For example, let's say that you want to implement a function that takes an unknown number of Ints and sums them:
func summationOfInfiniteInts(ints: Int...) -> Int {
    return ints.reduce(0, +)
}
let summation = summationOfInfiniteInts(ints: 1, 2, 3, 4) // 10
Note that the ints parameter in the body of summationOfInfiniteInts is represented as [Int] (an array of Int).
Hope this helped.

Related

How to generate the same random sequence from a given seed in Delphi [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 months ago.
I want to generate the same random sequence (of numbers or characters) based on a given "seed" value.
Using the standard Randomize function does not seem to have such an option.
For example in C# you can initialize the Random function with a seed value (Random seed c#).
How can I achieve something similar in Delphi?
You only need to assign a particular value to the RandSeed global variable.
Actually, I'm almost surprised you asked, because you clearly know of the Randomize function, whose documentation states the following:
The random number generator should be initialized by making a call to Randomize, or by assigning a value to RandSeed.
The question did not mention thread safety as a requirement, but the OP specified that in comments.
In general, pseudorandom number generators have internal state, in order to calculate the next number. So they cannot be assumed to be thread-safe, and require an instance per thread.
For encapsulated thread-safe random numbers, one alternative is to use a suitably good hash function, such as xxHash, and pass it a seed and a counter that you increment after each call in the thread.
There is a Delphi implementation of xxHash here:
https://github.com/Xor-el/xxHashPascal
For general use, it's easy to make several versions in 1, 2 or 3 dimensions as needed, and return either floating point in 0..1 or integers in a range.

Please! I need anyone that can decode “Luraph Obfuscator” [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I paid an untrusted developer for a script, and as I suspected, he scammed me. He did send me code, but he obfuscated the script.
https://pastebin.com/Y9rn2Gdr
Every instruction is separated into functions, so the code can't be directly deobfuscated without specific details about its functionality.
This code consists of:
A string that contains the source of the script
Some bytes of the string represent an offset of that character in the ASCII table, while others represent functions and loop paradigms like for and while (note that these are separated into different functions within the interpreter)
An iterator function (the interpreter) that goes through every character in the string and calls other functions to find the correct action to perform based on the character.
The code that is outside the string is an interpreter. For deobfuscating the interpreter I suggest the following:
Take care with variable names: every variable in the interpreter has to be defined before it is used, so you can tell from context what each variable is for
Solve the #{4093, 2039, 2140, 1294} tables by simply calculating the length (just as the # operator does); the result for that table, for instance, is 4
You need a pretty printer that will apply indentation and format to the code, making it more readable
A pseudocode of the reader looks like this (I assume this is also nested within other functions of the interpreter):
-- ReadBytes is the main function that holds the interpreter and other functions
local function ReadBytes(currentCharacter)
    local repeatOffset
    currentCharacter =
        string_gsub(
            string_sub(currentCharacter, 5),
            "..",
            function(digit)
                if string.sub(digit, 2) == 'H' then
                    repeatOffset = tonumber(string_sub(digit, 1, 1))
                    return ""
                else
                    local char = string_char(tonumber(digit, 16))
                    if repeatOffset then
                        local repeatOutput = string_rep(char, repeatOffset)
                        repeatOffset = nil
                        return repeatOutput
                    else
                        return char
                    end
                end
            end
        )
    -- . . . Other nested functions
end
I have trouble understanding the functionality of the encoded string; however, from this question, this seems to be a ROBLOX script. Is that correct?
If that's the case, I recommend debugging the code within the ROBLOX environment to understand its core functionality and rewriting a readable alternative that works just like the original.
You can also deobfuscate the interpreter to understand how it works, then capture the interpreter's actions to see its workflow, and then write a Lua script that works exactly like the original and does not require the interpreter.

Why can't we use a for loop with increment operator in Swift [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
Instead of using:
for (n = 1; n <= 5; n++) {
    print(n)
}
why do we use the following construct in Swift?
for n in 1...5 {
    print(n)
}
// Output: 1 2 3 4 5
"I am certainly open to considering dropping the C-style for loop.
IMO, it is a rarely used feature of Swift that doesn’t carry its
weight. Many of the reasons to remove them align with the rationale
for removing -- and ++. "
-- Chris Lattner
There is a proposal about removing the increment and decrement operators: https://github.com/apple/swift-evolution/blob/master/proposals/0004-remove-pre-post-inc-decrement.md
These operators increase the burden to learn Swift as a first
programming language - or any other case where you don't already know
these operators from a different language.
Their expressive advantage is minimal - x++ is not much shorter than x += 1.
Swift already deviates from C in that the =, += and other
assignment-like operations returns Void (for a number of reasons).
These operators are inconsistent with that model.
Swift has powerful features that eliminate many of the common reasons
you'd use ++i in a C-style for loop in other languages, so these are
relatively infrequently used in well-written Swift code. These
features include the for-in loop, ranges, enumerate, map, etc.
Code that actually uses the result value of these operators is often
confusing and subtle to a reader/maintainer of code. They encourage
"overly tricky" code which may be cute, but difficult to understand.
While Swift has well defined order of evaluation, any code that
depended on it (like foo(++a, a++)) would be undesirable even if it
was well-defined.
These operators are applicable to relatively few types: integer and
floating point scalars, and iterator-like concepts. They do not apply
to complex numbers, matrices, etc.
Finally, these fail the metric of "if we didn't already have these,
would we add them to Swift 3?"
And about removing the C-style for loop:
https://github.com/apple/swift-evolution/blob/master/proposals/0007-remove-c-style-for-loops.md
Both for-in and stride provide equivalent behavior using
Swift-coherent approaches without being tied to legacy terminology.
There is a distinct expressive disadvantage in using for-loops
compared to for-in in succinctness
for-loop implementations do not lend themselves to use with
collections and other core Swift types.
The for-loop encourages use of unary incrementors and decrementors,
which will be soon removed from the language.
The semi-colon delimited declaration offers a steep learning curve
from users arriving from non C-like languages
If the for-loop did not exist, I doubt it would be considered for
inclusion in Swift 3.
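To illustrate the quoted claim that for-in and stride cover the same ground, a short sketch of the usual C-style loop shapes rewritten in Swift:

```swift
// for (n = 1; n <= 5; n++) becomes a range-based for-in:
for n in 1...5 {
    print(n) // 1 2 3 4 5
}

// Non-unit steps and counting down, the other common C-style uses,
// are covered by stride(from:through:by:) / stride(from:to:by:):
let countdown = Array(stride(from: 10, through: 2, by: -2))
print(countdown) // [10, 8, 6, 4, 2]
```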

Why is typecase a bad thing? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Both Agda and Idris effectively prohibit pattern matching on values of type Type. It seems that Agda always matches on the first case, while Idris just throws an error.
So, why is typecase a bad thing? Does it break consistency? I haven't been able to find much information regarding the topic.
It's really odd that people think pattern matching on types is bad. We get a lot of mileage out of pattern matching on data which encode types, whenever we do a universe construction. If you take the approach that Thorsten Altenkirch and I pioneered (and which my comrades and I began to engineer), the types do form a closed universe, so you don't even need to solve the (frankly worth solving) problem of computing with open datatypes to treat types as data.

If we could pattern match on types directly, we wouldn't need a decoding function to map type codes to their meanings, which at worst reduces clutter, and at best reduces the need to prove and coerce by equational laws about the behaviour of the decoding function.

I have every intention of building a no-middleman closed type theory this way. Of course, you need that level 0 types inhabit a level 1 datatype. That happens as a matter of course when you build an inductive-recursive universe hierarchy.
But what about parametricity, I hear you ask?
Firstly, I don't want parametricity when I'm trying to write type-generic code. Don't force parametricity on me.
Secondly, why should types be the only things we're parametric in? Why shouldn't we sometimes be parametric in other stuff, e.g., perfectly ordinary type indices which inhabit datatypes but which we'd prefer not to have at run time? It's a real nuisance that quantities which play a part only in specification are, just because of their type, forced to be present.
The type of a domain has nothing whatsoever to do with whether quantification over it should be parametric.
Let's have (e.g. as proposed by Bernardy and friends) a discipline where both parametric/erasable and non-parametric/matchable quantification are distinct and both available. Then types can be data and we can still say what we mean.
Many people see matching on types as bad because it breaks parametricity for types.
In a language with parametricity for types, when you see a variable
f : forall a . a -> a
you immediately know a lot about the possible values of f. Intuitively: Since f is a function, it can be written:
f x = body
The body needs to be of type a, but a is unknown, so the only available value of type a is x. If the language allows nontermination, f could also loop. But can it make the choice between looping or returning x based on the value of x? No, because a is unknown, f doesn't know which functions to call on x in order to make the decision. So there are really just two options: f x = x and f x = f x. This is a powerful theorem about the behavior of f that we get just by looking at the type of f. Similar reasoning works for all types with universally quantified type variables.
Now if f could match on the type a, many more implementations of f are possible. So we would lose the powerful theorem.
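Since this thread sits alongside Swift questions, the same intuition can be sketched in Swift, whose generics allow runtime type tests and therefore lose this free theorem (f and g are illustrative names):

```swift
// Parametric reading: knowing nothing about A, the only total,
// non-looping implementation is the identity.
func f<A>(_ x: A) -> A {
    return x
}

// Swift lets the body branch on the runtime type, so many more
// implementations exist and the theorem no longer holds.
func g<A>(_ x: A) -> A {
    if let n = x as? Int {
        return (n + 1) as! A // special-cases Int
    }
    return x
}

print(f(41), g(41))     // 41 42
print(f("hi"), g("hi")) // hi hi
```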
In Agda, you cannot pattern match on Set because it isn't an inductive type.

How I can make method take infinity arguments in objective-c? [duplicate]

This question already has answers here:
How to create variable argument methods in Objective-C
(3 answers)
Closed 9 years ago.
I want to make a method that takes infinitely many arguments in Objective-C and adds these arguments together.
I'm assuming by "infinite" you mean "as many as I want, as long as memory allows".
Pass a single argument of type NSArray. Fill it with as many arguments as you wish.
If all arguments are guaranteed to be non-object datatypes - i.e. int, long, char, float, double, structs and arrays of them - you might be better off with an NSData. But identifying individual values in a binary blob will be trickier.
Since you want to add them up, I assume they're numbers. Are they all the same datatype? Then pass an array (an old-style C array) or a pointer, along with the number of elements.
EDIT: now that I think of it, the whole design is fishy. You want a method that takes an arbitrarily large number of arguments and adds them up. But the typing effort required for passing them into a function is comparable to that of summing them up. If you have a varargs function, Sum(a,b,c,d,e) takes less typing than a+b+c+d+e. If you have a container class (NSArray, NSData, etc), you have to loop through the addends; while you're doing that, you might as well sum them up.
That's not possible on a finite machine (that is, on any existing computer).
If you're fine with a variable, yet finite, number of arguments, there are C's ... variadic argument functions.
