Euclidean algorithm pseudocode conversion to Swift? - ios

I have been working on a function for reducing fractions in Swift and came across the Euclidean algorithm for finding the greatest common factor (http://en.wikipedia.org/wiki/Euclidean_algorithm).
I converted the pseudocode into Swift, but I am confused about how this is going to give me the greatest common factor when it returns a, which I thought was supposed to be the numerator of the fraction. Any help on this would be greatly appreciated. Thanks!
Pseudocode:
function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a
Swift:
var a = 2
var b = 4

func gcd(a: Int, b: Int) -> Int {
    var t = 0
    while b != 0 {
        t = b
        let b = a % b
        let a = t
    }
    return a
}

println("\(a)/\(b)")
Console output: 2/4

When you do this
let b = a % b
you are creating a new read-only constant b that has nothing to do with the b from the outer scope (the same goes for let a = t). You need to remove both lets inside the loop and make the parameters modifiable by declaring them with var, like this:
func gcd(var a: Int, var b: Int) -> Int {
    var t = 0
    while b != 0 {
        t = b
        b = a % b
        a = t
    }
    return a
}
You can call your function like this:
let a = 111
let b = 259
println("a=\(a), b=\(b), gcd=\(gcd(a,b))")
This prints a=111, b=259, gcd=37

Taking @dasblinkenlight's answer and getting rid of t by using tuples for parallel assignment yields:
Swift 2.1:
func gcd(var a: Int, var _ b: Int) -> Int {
    while b != 0 {
        (a, b) = (b, a % b)
    }
    return a
}
gcd(1001, 39) // 13
var parameters are deprecated in Swift 2.2 and will be removed in Swift 3. So now it becomes necessary to declare a and b as var within the function:
func gcd(a: Int, _ b: Int) -> Int {
    var (a, b) = (a, b)
    while b != 0 {
        (a, b) = (b, a % b)
    }
    return a
}
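Tying this back to the fraction-reducing use case from the question, a brief usage sketch of the function above (the variable names are just illustrative):
let numerator = 2
let denominator = 4
let divisor = gcd(numerator, denominator)                 // 2
print("\(numerator / divisor)/\(denominator / divisor)")  // prints "1/2"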

Swift 3 version of the answer given by Christopher Larsen:
func gcd(a: Int, b: Int) -> Int {
    if b == 0 { return a }
    return gcd(a: b, b: a % b)
}

You can use a recursive method that just keeps calling itself until the GCD is found.
func gcd(a: Int, b: Int) -> Int {
    if b == 0 {
        return a
    }
    let remainder: Int = a % b
    return gcd(b, b: remainder)
}
and use it like so:
let gcdOfSomeNums = gcd(28851538, b: 1183019)
//gcdOfSomeNums is 17657

Related

How to detect the first run of an IteratorProtocol in Swift?

Trying to detect the first run of an Iterator protocol.
In the example below I'm trying to start printing the Fibonacci series from zero, but it starts from one:
class FibIterator : IteratorProtocol {
    var (a, b) = (0, 1)
    func next() -> Int? {
        (a, b) = (b, a + b)
        return a
    }
}
let fibs = AnySequence{FibIterator()}
print(Array(fibs.prefix(10)))
What modifications can be made to the above code to detect the first run?
To answer your verbatim question: You can add a boolean variable
firstRun to detect the first call of the next() method:
class FibIterator : IteratorProtocol {
    var firstRun = true
    var (a, b) = (0, 1)
    func next() -> Int? {
        if firstRun {
            firstRun = false
            return 0
        }
        (a, b) = (b, a + b)
        return a
    }
}
let fibs = AnySequence { FibIterator() }
print(Array(fibs.prefix(10))) // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
But there are more elegant solutions for this problem.
You can “defer” the update of a and b to be done after returning
the current value:
class FibIterator : IteratorProtocol {
    var (a, b) = (0, 1)
    func next() -> Int? {
        defer { (a, b) = (b, a + b) }
        return a
    }
}
let fibs = AnySequence { FibIterator() }
print(Array(fibs.prefix(10))) // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
or – perhaps simpler – change the initial values (using the fact
that the Fibonacci numbers are defined for negative indices as well):
class FibIterator : IteratorProtocol {
    var (a, b) = (1, 0) // (Fib(-1), Fib(0))
    func next() -> Int? {
        (a, b) = (b, a + b)
        return a
    }
}
let fibs = AnySequence { FibIterator() }
print(Array(fibs.prefix(10))) // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
Note that if you declare conformance to the Sequence protocol
then you don't need the AnySequence wrapper (there is a default
implementation of makeIterator() for types conforming to
IteratorProtocol). Also value types are generally preferred,
so – unless the reference semantics is needed – you can make it a struct:
struct FibSequence : Sequence, IteratorProtocol {
    var (a, b) = (1, 0) // (Fib(-1), Fib(0))
    mutating func next() -> Int? {
        (a, b) = (b, a + b)
        return a
    }
}
let fibs = FibSequence()
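The struct version can then be used directly, without the AnySequence wrapper; a minimal usage sketch (the output matches the versions above):
print(Array(fibs.prefix(10))) // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]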

I want multiple non-random numbers in Swift

I am using Swift and want to have a number of reproducible number patterns throughout my game.
Ideally I would have some sort of shared class that worked like this (this is pseudo-Swift code):
class RandomNumberUtility {
    static var sharedInstance = RandomNumberUtility()

    var random1 : Random()
    var random2 : Random()

    func seedRandom1(seed : Int) {
        random1 = Random(seed)
    }
    func seedRandom2(seed : Int) {
        random2 = Random(seed)
    }
    func getRandom1() -> Int {
        return random1.next(1,10)
    }
    func getRandom2() -> Int {
        return random2.next(1,100)
    }
}
Then, to begin the series, anywhere in my program I could go like this:
RandomNumberUtility.sharedInstance.seedRandom1(7)
RandomNumberUtility.sharedInstance.seedRandom2(12)
And then I would know that (for example) the first 4 times I called
RandomNumberUtility.sharedInstance.getRandom1()
I would always get the same values (for example: 6, 1, 2, 6)
This would continue until at some point I seeded the number again, and then I would either get the exact same series back (if I used the same seed), or a different series (if I used a different seed).
And I want to have multiple series of numbers (random1 & random2) at the same time.
I am not sure how to begin to turn this into an actual Swift class.
Here is a possible implementation. It uses the jrand48 pseudo random number generator,
which produces 32-bit numbers.
This PRNG is not as good as arc4random(), but has the advantage
that all its state is stored in a user-supplied array, so that multiple
instances can run independently.
struct RandomNumberGenerator {

    // 48-bit internal state for jrand48()
    private var state : [UInt16] = [0, 0, 0]

    // Return a pseudo-random number in the range 0 ... upper_bound-1:
    mutating func next(upper_bound: UInt32) -> UInt32 {
        // Implementation avoiding the "modulo bias" problem,
        // taken from: http://stackoverflow.com/a/10989061/1187415,
        // Swift translation here: http://stackoverflow.com/a/26550169/1187415
        let range = UInt32.max - UInt32.max % upper_bound
        var rnd : UInt32
        do {
            rnd = UInt32(truncatingBitPattern: jrand48(&state))
        } while rnd >= range
        return rnd % upper_bound
    }

    mutating func seed(newSeed : Int) {
        state[0] = UInt16(truncatingBitPattern: newSeed)
        state[1] = UInt16(truncatingBitPattern: (newSeed >> 16))
        state[2] = UInt16(truncatingBitPattern: (newSeed >> 32))
    }
}
Example:
var rnd1 = RandomNumberGenerator()
rnd1.seed(7)
var rnd2 = RandomNumberGenerator()
rnd2.seed(12)
println(rnd1.next(10)) // 2
println(rnd1.next(10)) // 8
println(rnd1.next(10)) // 1
println(rnd2.next(10)) // 6
println(rnd2.next(10)) // 0
println(rnd2.next(10)) // 5
If rnd1 is seeded with the same value as above then it
produces the same numbers again:
rnd1.seed(7)
println(rnd1.next(10)) // 2
println(rnd1.next(10)) // 8
println(rnd1.next(10)) // 1
What you need is a singleton that generates pseudo-random numbers, with all the code that needs a random number calling it through that class. The trick is to reset the seed for each run of your code. Here is a simple RandomNumberGenerator class that will do the trick for you (it is optimized for speed, which is a good thing when writing games):
import Foundation

// This random number generator comes from: Klimov, A. and Shamir, A.,
// "A New Class of Invertible Mappings", Cryptographic Hardware and Embedded
// Systems 2002, http://dl.acm.org/citation.cfm?id=752741
//
// Very fast, very simple, and passes Diehard and other good statistical
// tests as strongly as cryptographically-secure random number generators (but
// is not itself cryptographically-secure).
class RandomNumberGenerator {

    static let sharedInstance = RandomNumberGenerator()

    private init(seed: UInt64 = 12347) {
        self.seed = seed
    }

    func nextInt() -> Int {
        return next(32)
    }

    private func isPowerOfTwo(x: Int) -> Bool { return x != 0 && ((x & (x - 1)) == 0) }

    func nextInt(max: Int) -> Int {
        assert(!(max < 0))
        // Fast path if max is a power of 2.
        if isPowerOfTwo(max) {
            return Int((Int64(max) * Int64(next(31))) >> 31)
        }
        while true {
            let rnd = next(31)
            let val = rnd % max
            if rnd - val + (max - 1) >= 0 {
                return val
            }
        }
    }

    func nextBool() -> Bool {
        return next(1) != 0
    }

    func nextDouble() -> Double {
        return Double((Int64(next(26)) << 27) + Int64(next(27))) /
            Double(Int64(1) << 53)
    }

    func nextInt64() -> Int64 {
        let lo = UInt64(next(32))
        let hi = UInt64(next(32))
        // bitPattern avoids a trap when the high bit ends up set.
        return Int64(bitPattern: lo | (hi << 32))
    }

    func nextBytes(inout buffer: [UInt8]) {
        for n in 0..<buffer.count {
            buffer[n] = UInt8(next(8))
        }
    }

    var seed: UInt64 {
        get {
            return _seed
        }
        set(seed) {
            _initialSeed = seed
            _seed = seed
        }
    }

    var initialSeed: UInt64 {
        return _initialSeed!
    }

    private func randomNumber() -> UInt32 {
        _seed = _seed &+ ((_seed &* _seed) | 5)
        return UInt32(_seed >> 32)
    }

    private func next(bits: Int) -> Int {
        assert(bits > 0)
        assert(!(bits > 32))
        return Int(randomNumber() >> UInt32(32 - bits))
    }

    private var _initialSeed: UInt64?
    private var _seed: UInt64 = 0
}
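A brief usage sketch of the singleton (an illustrative example; the concrete values returned depend on the generator's internals, but re-seeding with the same value replays the same series):
// Seed once per run; subsequent calls produce a deterministic series.
RandomNumberGenerator.sharedInstance.seed = 7
let first = RandomNumberGenerator.sharedInstance.nextInt(10)
let second = RandomNumberGenerator.sharedInstance.nextInt(10)

// Re-seeding with the same value replays the series from the start,
// so this call returns the same value as first above.
RandomNumberGenerator.sharedInstance.seed = 7
let replay = RandomNumberGenerator.sharedInstance.nextInt(10)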

Swift 1.2 "Cannot express tuple conversion" error

This method was working fine in the last stable version of Swift, but it won't compile in Swift 1.2:
final func rotateBlocks(orientation: Orientation) {
    if let blockRowColumnTranslation: Array<(columnDiff: Int, rowDiff: Int)> = blockRowColumnPositions[orientation] {
        for (idx, (columnDiff: Int, rowDiff: Int)) in enumerate(blockRowColumnTranslation) {
            blocks[idx].column = column + columnDiff
            blocks[idx].row = row + rowDiff
        }
    }
}
This line:
for (idx, (columnDiff:Int, rowDiff:Int)) in enumerate(blockRowColumnTranslation) {
Throws the following error:
"Cannot express tuple conversion "(index:Int, element:(columnDiff:Int,rowDiff:Int)) to "(Int, (Int, Int))"
Any ideas about what's going on here, and how to fix it?
I would use typealias to simplify; the following compiles without error for me:
var row: Int = 0
var column: Int = 1

struct block {
    var column: Int
    var row: Int
}
var blocks = [block]()

enum Orientation { case Up; case Down }

typealias Diff = (columnDiff: Int, rowDiff: Int)
typealias DiffArray = Array<Diff>
typealias DiffArrayDict = [Orientation: DiffArray]

var blockRowColumnPositions = DiffArrayDict()

func rotateBlocks(orientation: Orientation) {
    if let blockRowColumnTranslation: DiffArray = blockRowColumnPositions[orientation] {
        for (idx, diff) in enumerate(blockRowColumnTranslation) {
            blocks[idx].column = column + diff.columnDiff
            blocks[idx].row = row + diff.rowDiff
        }
    }
}
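For completeness, a brief usage sketch of the code above (the block positions and offsets are made-up illustration values):
blocks = [block(column: 0, row: 0), block(column: 0, row: 0)]
blockRowColumnPositions[.Up] = [(columnDiff: 1, rowDiff: 2), (columnDiff: 3, rowDiff: 4)]

rotateBlocks(.Up)
// blocks[0] is now at column 2 (1 + 1) and row 2 (0 + 2)
// blocks[1] is now at column 4 (1 + 3) and row 4 (0 + 4)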
I ran into the same thing and was able to get this working by adding an element: label for the tuple:
for (idx, element: (columnDiff: Int, rowDiff: Int)) in enumerate(blockRowColumnTranslation) {
    blocks[idx].column = column + element.columnDiff
    blocks[idx].row = row + element.rowDiff
}
Looks like a Swift bug to me. More generally, this is busted:
let pair = (a: 1, b: 2)
// normally those named elements don't matter, this is fine:
let (x,y) = pair
// but add a bit of nesting:
let indexed = (index: 1, pair)
// and, error:
let (i, (x,y)) = indexed
// cannot express tuple conversion '(index: Int, (a: Int, b: Int))' to '(Int, (Int, Int))'
I'd try removing the element names from the array's tuple declaration (i.e. Array<(Int, Int)> instead of Array<(columnDiff: Int, rowDiff: Int)>) and see if that helps.
In other, perhaps related, news, this appears to crash the 1.2 compiler:
let a: Array<(Int,Int)> = [(x: 1,y: 2)]
Thanks, guys! I wound up just rewriting it as a for loop. It's not exciting, but it seems to work okay:
final func rotateBlocks(orientation: Orientation) {
    if let blockRowColumnTranslation: Array<(columnDiff: Int, rowDiff: Int)> = blockRowColumnPositions[orientation] {
        for var idx = 0; idx < blockRowColumnTranslation.count; idx++ {
            let tuple = blockRowColumnTranslation[idx]
            blocks[idx].column = column + tuple.columnDiff
            blocks[idx].row = row + tuple.rowDiff
        }
    }
}
final func rotateBlocks(orientation: Orientation) {
    if let blockRowColumnTranslation = blockRowColumnPositions[orientation] {
        for (idx, diff) in enumerate(blockRowColumnTranslation) {
            blocks[idx].column = column + diff.columnDiff
            blocks[idx].row = row + diff.rowDiff
        }
    }
}

Array of Arithmetic Operators in Swift

Is it possible to have an array of arithmetic operators in Swift? Something like:
var operatorArray = ['+', '-', '*', '/'] // or =[+, -, *, /] ?
I just want to randomly generate numbers and then randomly pick an arithmetic operator from the array above and perform the equation. For example,
var firstNum = Int(arc4random_uniform(120))
var secondNum = Int(arc4random_uniform(120))
var equation = firstNum + operatorArray[Int(arc4random_uniform(3))] + secondNum //
Will the above 'equation' work?
Thank you.
It will - but you need to use the operators a little differently.
Single operator:
// declare a variable that holds a function
let op: (Int,Int)->Int = (+)
// run the function on two arguments
op(10,10)
And with an array, you could use map to apply each one:
// operatorArray is an array of functions that take two ints and return an int
let operatorArray: [(Int,Int)->Int] = [(+), (-), (*), (/)]
// apply each operator to two numbers
let result = map(operatorArray) { op in op(10,10) }
// result is [20, 0, 100, 1]
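To cover the random-pick part of the original question, here is a minimal sketch along the same lines (the names are illustrative; using the array's count as the bound lets every operator be chosen, and the + 1 avoids a possible division by zero):
import Foundation

let ops: [(Int, Int) -> Int] = [(+), (-), (*), (/)]
let firstNum = Int(arc4random_uniform(120)) + 1
let secondNum = Int(arc4random_uniform(120)) + 1

// Pick one operator at random and apply it.
let randomOp = ops[Int(arc4random_uniform(UInt32(ops.count)))]
let result = randomOp(firstNum, secondNum)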
You can use the NSExpression class to do the same.
var operatorArray = ["+", "-", "*", "/"]
var firstNum = Int(arc4random_uniform(120))
var secondNum = Int(arc4random_uniform(120))
var equation = "\(firstNum) \(operatorArray[Int(arc4random_uniform(3))]) \(secondNum)"
var exp = NSExpression(format: equation, argumentArray: [])
println(exp.expressionValueWithObject(nil, context: nil))
This is an old-fashioned object-oriented approach.
protocol Operation {
    func calculate(op1: Int, op2: Int) -> Int
}

class Addition : Operation {
    func calculate(op1: Int, op2: Int) -> Int {
        return op1 + op2
    }
}

class Subtraction : Operation {
    func calculate(op1: Int, op2: Int) -> Int {
        return op1 - op2
    }
}

class Multiplication : Operation {
    func calculate(op1: Int, op2: Int) -> Int {
        return op1 * op2
    }
}

class Division : Operation {
    func calculate(op1: Int, op2: Int) -> Int {
        return op1 / op2
    }
}
var operatorArray : [Operation] = [Addition(), Subtraction(), Multiplication(), Division()]
var firstNum = Int(arc4random_uniform(120))
var secondNum = Int(arc4random_uniform(120))
var equation = operatorArray[Int(arc4random_uniform(3))].calculate(firstNum, op2: secondNum)

Swift zero divided by zero gives NAN

I am doing some calculations in Swift. I understand that in Swift, 0/0 gives NaN (not a number) instead of 0. Is there any way for it to return 0 instead?
for x in 0..<n {
    for y in 0..<n {
        if (B[0,y,x] == NAN) { B[0,y,x] = 0 } // use of undeclared identifier 'NAN'
        println("\((Float)B[0,y,x])")
    }
}
NaN is defined in the FloatingPointType protocol, and its isNaN property is the Swift equivalent of isnan(). That is how to test for it, since comparing a value against NaN with == is always false.
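Applied to the loop from the question, a minimal sketch (assuming B supports the three-index subscript shown there and that its elements are Float):
for x in 0..<n {
    for y in 0..<n {
        if B[0, y, x].isNaN {
            B[0, y, x] = 0
        }
        println(B[0, y, x])
    }
}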
Then, if you want zero, how about using Overflow Operators?
let x = 1
let y = x &/ 0
// y is equal to 0
[UPDATED]
You can define a custom &/ operator for floating-point types like this:
func &/(lhs: Float, rhs: Float) -> Float {
    if rhs == 0 {
        return 0
    }
    return lhs / rhs
}
var a: Float = 1.0
var b: Float = 0
// this is 0
a &/ b
Good answer from @bluedome. In case you want this as an extension (and/or run into errors with the free-function version), here is an alternative:
import CoreGraphics

infix operator &/

extension CGFloat {
    public static func &/ (lhs: CGFloat, rhs: CGFloat) -> CGFloat {
        if rhs == 0 {
            return 0
        }
        return lhs / rhs
    }
}
then you can divide by zero:
let a: CGFloat = 5
let b: CGFloat = 0
print(a &/ b) // outputs 0.0
