I’m trying to make a basic simulation of a 16-bit computer in Swift. The computer will feature:
An ALU
2 registers
That’s all. I have enough knowledge to create these parts visually and understand how they work, but my current approach makes it increasingly difficult to build larger components with more inputs.
That approach has been to wrap each component in a struct. It worked early on, but it is becoming hard to manage multiple inputs while staying true to the principles of computer science.
The primary issue is that the components aren’t updating with the clock signal. I have the output of the component updating when get is called on the output variable, c. This, however, neglects the idea of a clock signal and will likely cause further problems later on.
It’s also difficult to make getters and setters for each variable without getting errors about mutability. Although I have worked through these errors, they are annoying and slow down the development process.
The last big issue is updating the output. The output doesn’t update when the inputs change; it updates only when told to do so. That isn’t how real hardware behaves, and it’s a fundamental error.
This is an example: the ALU I mentioned earlier. It takes two 16-bit inputs and outputs 16 bits. It contains two unary ALUs, which can zero a 16-bit number, negate it, or both. Lastly, it either adds the inputs or ANDs them bitwise depending on the f flag, and inverts the output if the no flag is set.
struct ALU {
    // Operations are done in the order listed. For example, if zx and nx are 1,
    // it first makes input 1 zero and then inverts it.
    var x: [Int] // Input 1
    var y: [Int] // Input 2
    var zx: Int // Make input 1 zero
    var zy: Int // Make input 2 zero
    var nx: Int // Invert input 1
    var ny: Int // Invert input 2
    var f: Int // If 0, do a bitwise AND operation. If 1, add the inputs
    var no: Int // Invert the output
    public var c: [Int] { // Output
        get {
            // The inputs first go through unary ALUs. These can negate the input
            // (and output the value), return 0, or return the inverse of 0. The
            // results then undergo the operation specified by f, either addition
            // or a bitwise AND, and are negated if no is 1.
            let ux = UnaryALU(z: zx, n: nx, x: x).c // Unary ALU. See the comment above.
            let uy = UnaryALU(z: zy, n: ny, x: y).c
            let fd = select16(s: f, d1: Add16(a: ux, b: uy).c, d0: and16(a: ux, b: uy).c).c // Adds the two 16-bit numbers or ANDs them bitwise, depending on f.
            let out = select16(s: no, d1: not16(a: fd).c, d0: fd).c // select16 returns d1 if s is 1 and d0 if s is 0. Here d0 is fd and d1 is its inverse.
            return out
        }
    }
    public init(x: [Int], y: [Int], zx: Int, zy: Int, nx: Int, ny: Int, f: Int, no: Int) {
        self.x = x
        self.y = y
        self.zx = zx
        self.zy = zy
        self.nx = nx
        self.ny = ny
        self.f = f
        self.no = no
    }
}
I use c for the output variable, store values with multiple bits in Int arrays, and store single bits in Int values.
I’m doing this on Swift Playgrounds 3.0 with Swift 5.0 on a 6th generation iPad. I’m storing each component or set of components in a separate file in a module, which is why some variables and all structs are marked public. I would greatly appreciate any help. Thanks in advance.
So, I’ve completely redone my approach and have found a way to bypass the issues I was facing. What I’ve done is make what I call “tracker variables” for each input. When get is called on a variable, it returns the value of the tracker assigned to it. When set is called, it calls an update() function that updates the output of the circuit, and it also updates the value of the tracker. This essentially creates a ‘copy’ of each variable. I did this to prevent any infinite loops.
Trackers are unfortunately necessary here. I’ll demonstrate why:
var variable: Type {
    get {
        return variable // Calls the getter again, resulting in an infinite loop
    }
    set {
        // Do something
    }
}
In order to make a setter, Swift requires a getter to be made as well. In this example, calling variable simply calls get again, resulting in a never-ending cascade of calls to get. Tracker variables are a workaround that use minimal extra code.
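With a tracker, the same property might look like this sketch (update() is assumed to be the output-refreshing method described above):

private var _variable: Type // Tracker: holds the actual value
var variable: Type {
    get {
        return _variable // Reads the tracker, so the getter never calls itself
    }
    set {
        _variable = newValue // newValue is the value being assigned
        update() // Recompute the outputs
    }
}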
Using an update method makes sure the output responds to a change in any input. This also works with a clock signal, due to the architecture of the components themselves. Although it appears to act as the clock, it does not.
For example, in data flip-flops, the clock signal is passed into gates. All a clock signal does is deactivate a component when the signal is off. So, I can implement that within update() while remaining faithful to reality.
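For instance, a data flip-flop under this scheme might look like the following sketch (DFlipFlop and its member names are mine, not part of the project):

struct DFlipFlop {
    private var _d: Bool // Tracker for the data input
    public var d: Bool {
        get { return _d }
        set { _d = newValue; update() }
    }
    private var _clock: Bool // Tracker for the clock input
    public var clock: Bool {
        get { return _clock }
        set { _clock = newValue; update() }
    }
    public var q: Bool // Output
    internal mutating func update() {
        if clock { // The component is deactivated while the clock is off
            q = d
        }
    }
    public init(d: Bool, clock: Bool) {
        self.q = false
        self._d = d
        self._clock = clock
        self.d = d // Run the setters so update() produces a correct initial output
        self.clock = clock
    }
}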
Here’s an example of a half adder. Note that the tracker variables I mentioned are marked by an underscore in front of their name. It has two inputs, x and y, which are 1 bit each. It also has two outputs, high and low, also known as carry and sum. The outputs are also one bit.
struct halfAdder {
    private var _x: Bool // Tracker for x
    public var x: Bool { // Input 1
        get {
            return _x // Return the tracker’s value
        }
        set {
            _x = newValue // Set the tracker to the new value
            update() // Update the output
        }
    }
    private var _y: Bool // Tracker for y
    public var y: Bool { // Input 2
        get {
            return _y
        }
        set {
            _y = newValue
            update()
        }
    }
    public var high: Bool // High output, or ‘carry’
    public var low: Bool // Low output, or ‘sum’
    internal mutating func update() { // Updates the outputs
        high = x && y // AND gate, sets the high output
        low = (x || y) && !(x && y) // XOR gate, sets the low output
    }
    public init(x: Bool, y: Bool) { // Initializer
        self.high = false // These will change when the inputs are set below, ensuring a correct output.
        self.low = false
        self._x = x // Set the trackers first so all stored properties are initialized,
        self._y = y
        self.x = x // then go through the setters so update() runs.
        self.y = y
    }
}
This is a very clean way, save for the trackers, to accomplish this task. It can trivially be expanded to fit any number of bits by using arrays of Bool instead of a single value. It respects the clock signal, updates the output when the inputs change, and is very similar to real computers.
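A quick usage sketch, assuming the struct above:

var adder = halfAdder(x: false, y: false)
adder.x = true // The setter runs update() automatically
adder.y = true
print(adder.high, adder.low) // Prints "true false": carry 1, sum 0, i.e. 1 + 1 = 0b10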
Related
I want to get one decimal place of a double in Dart. I use the toStringAsFixed() method to get it, but it returns a rounded value.
double d1 = 1.151;
double d2 = 1.150;
print('$d1 is ${d1.toStringAsFixed(1)}');
print('$d2 is ${d2.toStringAsFixed(1)}');
Console output:
1.151 is 1.2
1.15 is 1.1
How can I get it without a round-up value? Like 1.1 for 1.151 too. Thanks in advance.
Not rounding seems highly questionable to me¹, but if you really want to truncate the string representation without rounding, then I'd take the string representation, find the decimal point, and create the appropriate substring.
There are a few potential pitfalls:
The value might be so large that its normal string representation is in exponential form. Note that double.toStringAsFixed just returns the exponential form anyway for such large numbers, so maybe do the same thing.
The value might be so small that its normal string representation is in exponential form. double.toStringAsFixed already handles this, so instead of using double.toString, use double.toStringAsFixed with the maximum number of fractional digits.
The value might not have a decimal point at all (e.g. NaN, +infinity, -infinity). Just return those values as they are.
extension on double {
  // Like [toStringAsFixed] but truncates (toward zero) to the specified
  // number of fractional digits instead of rounding.
  String toStringAsTruncated(int fractionDigits) {
    // Require same limits as [toStringAsFixed].
    assert(fractionDigits >= 0);
    assert(fractionDigits <= 20);
    if (fractionDigits == 0) {
      return truncateToDouble().toString();
    }
    // [toString] will represent very small numbers in exponential form.
    // Instead use [toStringAsFixed] with the maximum number of fractional
    // digits.
    var s = toStringAsFixed(20);
    // [toStringAsFixed] will still represent very large numbers in
    // exponential form.
    if (s.contains('e')) {
      // Ignore values in exponential form.
      return s;
    }
    // Ignore unrecognized values (e.g. NaN, +infinity, -infinity).
    var i = s.indexOf('.');
    if (i == -1) {
      return s;
    }
    return s.substring(0, i + fractionDigits + 1);
  }
}
void main() {
  var values = [
    1.151,
    1.15,
    1.1999,
    -1.1999,
    1.0,
    1e21,
    1e-20,
    double.nan,
    double.infinity,
    double.negativeInfinity,
  ];
  for (var v in values) {
    print(v.toStringAsTruncated(1));
  }
}
Another approach one might consider is to multiply by pow(10, fractionDigits), use double.truncateToDouble, divide by the same power of 10, and then use .toStringAsFixed(fractionDigits). That could work for human-scaled values, but it could generate unexpected results for very large values due to precision loss from floating-point arithmetic. (This approach would work if you used package:decimal instead of double, though.)
¹ Not rounding seems especially bad given that using doubles to represent fractional base-10 numbers is inherently imprecise. For example, since the closest IEEE-754 double-precision floating-point number to 0.7 is 0.6999999999999999555910790149937383830547332763671875, do you really want 0.7.toStringAsTruncated(1) to return '0.6' instead of '0.7'?
I translated a dynamic time warping (DTW) MATLAB function to Swift. The code looks as follows:
private func dtw(x1: [Double], x2: [Double]) -> Double {
    let n1 = x1.count
    let n2 = x2.count
    var table = [[Double]](repeating: [Double](repeating: 0, count: n2 + 1), count: 2)
    table[0][0] = 0
    for i in 1...n2 { table[0][i] = Double.infinity }
    for i in 1...n1 {
        table[1][0] = Double.infinity
        for j in 1...n2 {
            let cost = abs(x1[i - 1] - x2[j - 1])
            var min = table[0][j - 1]
            if min > table[0][j] {
                min = table[0][j]
            }
            if min > table[1][j - 1] { min = table[1][j - 1] }
            table[1][j] = cost + min
        }
        let swap = table[0]
        table[0] = table[1]
        table[1] = swap
    }
    return table[0][n2]
}
This function takes an average of 16 ms to complete on an iPhone 11. For my use case, this is very slow, so I want to investigate ways to improve its speed. I recently read these two articles: DTW in Swift (O'Reilly) and Parallel Programming with Swift. The first article contains a good quote:
Our implementation of DTW is naïve, and can be accelerated using parallel computing. To calculate the new row/column in a distance matrix, you don't need to wait until the previous one is finished; you only need it to be filled one cell ahead of your row/column
This would make the for j in 1 ... n2 loop an ideal candidate (I think). Looking at the code, only these two operations need synchronization, because they read and write the same row:
table[1][j - 1]
table[1][j]
The problem I am currently having with introducing parallel computing (from the second article) is that I cannot figure out how to tell Swift to run everything in parallel except the two lines below, as they depend on their predecessor:
if (min > table[1][j - 1]) { min = table[1][j - 1]; }
table[1][j] = cost + min;
I suspect I could solve this with DispatchQueue.concurrentPerform and an NSLock, if I implemented them correctly (I have not). They could also be the wrong tools for the job, which brings me back to my question:
What can I do to improve the speed of my DTW function, where the only constraint is that the previous execution in an array must have completed (parallelization, concurrency, etc.)? A code example would go a long way.
Your first problem is that you're creating an array of arrays. This is not an efficient data structure, and is not a "2-dimensional array" in the way most people mean (i.e. a matrix). It is an array made up of other arrays, all of which can have arbitrary sizes, and this can be very expensive to mutate. As a rule, if you want a matrix, you should back it with a flat array and use multiplication to find its offsets, particularly if you're mutating it. Instead of table[i][j] you would use table[i * width + j].
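A minimal flat-backed matrix sketch (the Matrix name and subscript are mine, just to illustrate the layout):

struct Matrix {
    let width: Int
    private var storage: [Double]
    init(width: Int, height: Int, repeating value: Double = 0) {
        self.width = width
        self.storage = Array(repeating: value, count: width * height)
    }
    subscript(row: Int, column: Int) -> Double {
        get { return storage[row * width + column] }
        set { storage[row * width + column] = newValue }
    }
}

With that, table[i][j] becomes table[i, j], and every element lives in one contiguous buffer.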
But in your case it's even easier, since there are exactly two rows. So you don't need a multi-dimensional array at all. You can just use two variables, and it'll be much more efficient. (In my tests, just making this change is about 30% faster than the original code.)
The major thing that slows you down is contention. You read and write to the same array in the loop. That gets in the way of various reordering and caching optimizations. In particular, it happens here:
if (min > table[1][j - 1]) { min = table[1][j - 1]; }
table[1][j] = cost + min;
If you rewrite that using two row variables rather than an array, it still looks like this:
if (min > row1[j - 1]) { min = row1[j - 1] }
row1[j] = cost + min
This forces the previous write to row1 to be fully completed before the next minimum can be computed, and then requires an array lookup to get the value back. But that's not really necessary: you can just cache the previous value between loop iterations. Doing that means the loop only performs reads on row0 and only performs writes on row1, which keeps memory contention down.
Putting those together, I wrote it this way. I changed the offsets to run from 0 rather than 1; it just made the code a little simpler to understand IMO. In my tests, this is about 3x faster than the original code for two arrays of 10k elements each.
func dtw(x1: [Double], x2: [Double]) -> Double {
    let n1 = x1.count
    let n2 = x2.count
    var row0 = Array(repeating: Double.infinity, count: n2 + 1)
    row0[0] = 0
    var row1 = Array(repeating: 0.0, count: n2 + 1)
    for i in 0 ..< n1 {
        row1[0] = .infinity
        // Keep track of the last value so we never have to read from row1.
        var lastValue = Double.infinity
        for j in 0 ..< n2 {
            let cost = abs(x1[i] - x2[j])
            // Don't be tempted to use the 3-value version of `min` here. It's much slower.
            var minimum = min(row0[j], row0[j + 1])
            minimum = min(minimum, lastValue)
            lastValue = cost + minimum
            row1[j + 1] = lastValue
        }
        swap(&row0, &row1)
    }
    return row0[n2]
}
This code is somewhat hard to make parallel, because the operations are not independent. Each row depends on the other rows. The key to good queue-based parallelism is the ability to split up fairly large chunks of independent work, and then efficiently combine them at the end. The cost of coordination will eat your benefits if the work units are too small. In many cases, vectorization (SIMD) is much more efficient than dispatching to multiple queues.
The cost function is independent, and I explored computing it with Accelerate (the main vectorization framework), but this generally made things slower. The compiler is very good at optimizing simple math in loops, and will do quite a lot of vectorizing for you if you let it. Accelerate is best when you need to do an expensive, consistent, and independent computation on a lot of values. And this loop isn't expensive or independent.
I am writing a math quiz app for my daughter in Xcode/Swift. Specifically, I want to produce a question that contains at least one negative number, to be added to or subtracted from a second randomly generated number. The pair cannot be two positive numbers.
i.e.
What is (-45) subtract 12?
What is 23 Minus (-34)?
I am struggling to get the syntax right to generate the numbers and then decide whether each number will be negative or positive.
The second issue is randomizing whether the problem is addition or subtraction.
It's possible to solve this without repeated number drawing. The idea is to:
Draw a random number, positive or negative
If the number is negative: Draw another number from the same range and return the pair.
If the number is positive: Draw the second number from a range constrained to negative numbers.
Here's the implementation:
extension CountableClosedRange where Bound : SignedInteger {
    /// A property that returns a random element from the range.
    var random: Bound {
        return Bound(arc4random_uniform(UInt32(count.toIntMax())).toIntMax()) + lowerBound
    }
    /// A pair of random elements where one element is always negative.
    var randomPair: (Bound, Bound) {
        let first = random
        if first >= 0 {
            return (first, (self.lowerBound ... -1).random)
        }
        return (first, random)
    }
}
Now you can just write...
let pair = (-10 ... 100).randomPair
... and get a random tuple where one element is guaranteed to be negative.
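The question also asks about randomizing the operation; one possible sketch on top of randomPair (isAddition, questionText, and answer are names I made up):

let (a, b) = (-10 ... 100).randomPair
let isAddition = (0 ... 1).random == 1 // Coin flip using the same extension
let questionText = isAddition
    ? "What is \(a) plus (\(b))?"
    : "What is \(a) minus (\(b))?"
let answer = isAddition ? a + b : a - b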
Here's my attempt. Try running this in a playground, it should hopefully get you the result you want. I hope I've made something clean enough...
//: Playground - noun: a place where people can play
import Cocoa

let range = Range(uncheckedBounds: (-50, 50))

func generateRandomCouple() -> (a: Int, b: Int) {
    // This function will generate a pair of random integers
    // (a, b) such that at least one of a and b is negative.
    var first, second: Int
    repeat {
        first = Int(arc4random_uniform(UInt32(range.upperBound - range.lowerBound))) - range.upperBound
        second = Int(arc4random_uniform(UInt32(range.upperBound - range.lowerBound))) - range.upperBound
    } while first >= 0 && second >= 0
    // Essentially this loops until at least one of the two is negative.
    return (first, second)
}
let couple = generateRandomCouple()
print("What is \(couple.a) + (\(couple.b))")
At this point, at least one of the two variables is negative. I don't think you can do it in the playground, but here you would read her input, and the expected answer would, naturally, be:
print(couple.a + couple.b)
In any case, feel free to ask for clarifications. Good luck !
I am trying to implement RGB histogram computation for images in Swift (I am new to iOS).
However, the computation time for a 1500x1000 image is about 66 seconds, which I consider far too slow.
Are there any ways to speed up image traversal?
P.S. current code is the following:
func calcHistogram(image: UIImage) {
    let bins: Int = 20
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    let binStep: Double = Double(bins-1)/255.0
    var hist = Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Int())))
    for i in 0..<bins {
        for j in 0..<bins {
            for k in 0..<bins {
                hist[i][j][k] = 0
            }
        }
    }
    var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
    var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    for x in 0..<width {
        for y in 0..<height {
            var pixelInfo: Int = ((width * y) + x) * 4
            var r = Double(data[pixelInfo])
            var g = Double(data[pixelInfo+1])
            var b = Double(data[pixelInfo+2])
            let r_bin: Int = Int(floor(r*binStep))
            let g_bin: Int = Int(floor(g*binStep))
            let b_bin: Int = Int(floor(b*binStep))
            hist[r_bin][g_bin][b_bin] += 1
        }
    }
}
As noted in my comment on the question, there are some things you might rethink before you even try to optimize this code.
But even if you do move to a better overall solution like GPU-based histogramming, a library, or both, there are some Swift pitfalls you're falling into here that are good to talk about so you don't run into them elsewhere.
First, this code:
var hist = Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Int())))
for i in 0..<bins {
    for j in 0..<bins {
        for k in 0..<bins {
            hist[i][j][k] = 0
        }
    }
}
... is initializing every member of your 3D array twice, with the same result. Int() produces a value of zero, so you could leave out the triple for loop. (And possibly change Int() to 0 in your innermost repeatedValue: parameter to make it more readable.)
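In other words, the initializer alone already yields an all-zero histogram (same Swift-1-era API as the question):

// Int() is 0, so every bin starts at zero with no extra loop needed.
var hist = Array(count: bins, repeatedValue:
    Array(count: bins, repeatedValue:
        Array(count: bins, repeatedValue: 0)))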
Second, arrays in Swift are copy-on-write, but this optimization can break down in multidimensional arrays: changing an element of a nested array can cause the entire nested array to be rewritten instead of just the one element. Multiply that by the depth of nested arrays and number of element writes you have going on in a double for loop and... it's not pretty.
Unless there's a reason your bins need to be organized this way, I'd recommend finding a different data structure for them. Three separate arrays? One Int array where index i is red, i + 1 is green, and i + 2 is blue? One array of a custom struct you define that has separate r, g, and b members? See what conceptually fits with your tastes or the rest of your app, and profile to make sure it works well.
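For example, a single flat array with a computed index is one option (a sketch in the same Swift version as the question; histIndex is a made-up helper):

let bins = 20
// One counter per (r, g, b) bin triple, stored contiguously.
var hist = [Int](count: bins * bins * bins, repeatedValue: 0)
func histIndex(r: Int, g: Int, b: Int) -> Int {
    return (r * bins + g) * bins + b
}
hist[histIndex(3, 10, 19)] += 1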
Finally, some Swift style points:
pixelInfo, r, g, and b in your second loop don't change. Use let, not var, and the optimizer will thank you.
Declaring and initializing something like let foo: Int = Int(whatever) is redundant. Some people like having all their variables/constants explicitly typed, but it does make your code a tad less readable and harder to refactor.
Int(floor(x)) is redundant: converting a Double to Int already truncates toward zero, which is the same as floor for the non-negative values here.
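Applied to the question's inner loop, those three points would look something like this (same logic, just tidied; the data-structure advice above still stands):

for x in 0..<width {
    for y in 0..<height {
        let pixelInfo = ((width * y) + x) * 4 // let: these never change
        let r = Double(data[pixelInfo])
        let g = Double(data[pixelInfo + 1])
        let b = Double(data[pixelInfo + 2])
        let rBin = Int(r * binStep) // Int() already truncates
        let gBin = Int(g * binStep)
        let bBin = Int(b * binStep)
        hist[rBin][gBin][bBin] += 1
    }
}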
If you have performance issues in your code, first of all use the Time Profiler from Instruments. You can start it from Xcode via Product -> Profile; when the Instruments app opens, choose Time Profiler.
Start recording and perform all the interactions in your app.
Stop recording and analyse where the "tightest" spots in your code are.
Also check the options "Invert Call Tree", "Hide Missing Symbols" and "Hide System Libraries" for a better view of the profiling results.
You can also double-click any listed function to view it in code and see the percentage of time spent in it.
So I'm writing a lowpass accelerometer function to moderate the jitters of the accelerometer. I have a CGFloat array to represent the data, and I want to damp it with this function:
// Damps the jittery motion with a lowpass filter.
func lowPass(vector: [CGFloat]) -> [CGFloat] {
    let blend: CGFloat = 0.2
    // Smooths out the data input.
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    // Sets the last vector to be the current one.
    lastVector = vector
    // Returns the lowpass vector.
    return vector
}
In this case, lastVector is defined as follows at the top of my program:
var lastVector:[CGFloat] = [0.0, 0.0, 0.0]
The three lines of the form vector[a] = ... give me the errors. Any ideas as to why I am getting this error?
That code seems to compile if you pass the array with the inout modifier:
func lowPass(inout vector:[CGFloat]) -> [CGFloat] {
...
}
I'm not sure whether that's a bug or not. Instinctively, if I pass an array to a function I expect to be able to modify it. If I pass it with the inout modifier, I'd expect to be able to make the original variable point to a new array - similar to what the & modifier does in C and C++.
Maybe the reason is that in Swift there are mutable and immutable arrays (and dictionaries). Without inout, the parameter is considered immutable, hence it cannot be modified.
Addendum 1 - It's not a bug
@newacct says that's the intended behavior. After some research I agree with him. But even if it's not a bug, I originally considered it wrong (read to the end for conclusions).
If I have a class like this:
class WithProp {
    var x : Int = 1
    func SetX(newVal : Int) {
        self.x = newVal
    }
}
I can pass an instance of that class to a function, and the function can modify its internal state
var a = WithProp()

func Do1(p : WithProp) {
    p.x = 5 // This works
    p.SetX(10) // This works too
}
without having to pass the instance as inout.
I can use inout instead to make the a variable to point to another instance:
func Do2(inout p : WithProp) {
    p = WithProp()
}
Do2(&a)
With that code, from within Do2 I make the p parameter (i.e. the a variable) point to a newly created instance of WithProp.
The same cannot be done with an array (and I presume a dictionary as well). To change its internal state (modify, add or remove an element) the inout modifier must be used. That was counterintuitive.
But everything gets clarified after reading this excerpt from the Swift book:
Swift’s String, Array, and Dictionary types are implemented as structures. This means that strings, arrays, and dictionaries are copied when they are assigned to a new constant or variable, or when they are passed to a function or method.
So when passed to a func, it's not the original array but a copy of it, hence any change made to it (even if it were possible) wouldn't be done on the original array.
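A quick way to see those copy semantics in action (a small sketch, using the no-label call syntax of that Swift version):

var original = [1, 2, 3]
func tryToChange(arr: [Int]) {
    var copy = arr // arr is already a copy of the caller's array
    copy[0] = 99 // Only the local copy changes
}
tryToChange(original)
// original is still [1, 2, 3]: the function only ever saw a copy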
So, in the end, my original answer above is correct and the observed behavior is not a bug.
Many thanks to @newacct :)
Since Xcode 6 beta 3, modifying the contents of an Array is a mutating operation. You cannot modify a constant (i.e. let) Array; you can only modify a non-constant (i.e. var) Array.
Parameters to a function are constants by default. Therefore, you cannot modify the contents of vector since it is a constant. Like other parameters, there are two ways to be able to change a parameter:
Declare it var, in which case you can assign to it, but it is still passed by value, so any changes to the parameter have no effect on the calling scope.
Declare it inout, in which case the parameter is passed by reference, and any changes to the parameter are just like changes made to the variable in the calling scope.
You can see in the Swift standard library that all the functions that take an Array and mutate it, like sort(), take the Array as inout.
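For illustration, an in-place variant of the question's function might look like this sketch (using the parameter syntax of that Swift version; lowPassInPlace is my name):

// Mutates the caller's array directly instead of returning a copy.
func lowPassInPlace(inout vector: [CGFloat]) {
    let blend: CGFloat = 0.2
    for i in 0..<vector.count {
        vector[i] = vector[i] * blend + lastVector[i] * (1 - blend)
    }
    lastVector = vector
}

var current: [CGFloat] = [1.0, 2.0, 3.0]
lowPassInPlace(&current) // current now holds the filtered values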
P.S. this is just like how arrays work in PHP by the way
Edit: The following worked for Xcode 6 Beta 2. Apparently, the syntax and behavior of arrays changed in Beta 3. You can no longer modify the contents of an array with subscripts if it is immutable (a parameter not declared inout or var):
Not valid with the most recent changes to the language
The only way I could get it to work in the playground was to change how you are declaring the arrays. I suggest trying this (works in a playground):
import Cocoa

let lastVector: CGFloat[] = [0.0, 0.0, 0.0]

func lowPass(vector: CGFloat[]) -> CGFloat[] {
    let blend: CGFloat = 0.2
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    return vector
}

var test = lowPass([1.0, 2.0, 3.0])
Mainly as a follow-up for future reference, @newacct's answer is the correct one. Since the original post showed a function that returns an array, the correct answer to this question is to tag the parameter with var:
func lowPass(var vector: [CGFloat]) -> [CGFloat] {
    let blend: CGFloat = 0.2
    // Smooths out the data input.
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    // Sets the last vector to be the current one.
    lastVector = vector
    // Returns the lowpass vector.
    return vector
}