Int Variable won't let me add data from UI text field - ios

I'm currently working on making a savings app in Xcode 10. I'm building a feature that lets users add the amount of money they have saved for something into the app through a UI text field. I can't find a way to convert the text from the text field to an integer and add it to the total sum of money that has been saved. Also, whenever I tried to add a test variable, I got plenty of errors.
var amountSavedSoFar += amountOfMoneySaved
I've declared both as integers. I'm trying to set amountOfMoneySaved equal to the number in the text field, but it doesn't seem to work. These are the errors I get:
'+=' is not a prefix unary operator
Consecutive statements on a line must be separated by ';'
Type annotation missing in pattern
Unary operator cannot be separated from its operand

You've got a few issues as you mentioned:
amountSavedSoFar is declared in the saveAmount function and will not be persisted if you call that function more than once.
amountSaved.text is not being converted from String to the appropriate type (Int, Double, etc.)
amountSavedSoFar isn't typed or initialized.
Try something like:
var amountSavedSoFar: Int = 0

@IBAction func saveAmount(_ sender: Any) {
    // Convert the text and default to zero if conversion fails.
    // text is a String?, so substitute an empty string before converting.
    amountSavedSoFar += Int(amountSaved.text ?? "") ?? 0
}
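If the user might type decimals or localized separators (plausible for money), Int(_:) will fail and the ?? 0 fallback silently discards the input. A hedged sketch of a NumberFormatter-based conversion, assuming the same amountSaved outlet:

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
// number(from:) returns nil when the text isn't a valid number in the current locale
amountSavedSoFar += formatter.number(from: amountSaved.text ?? "")?.intValue ?? 0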

Related

How to create Compiler warning for my function in Swift

I want to validate the inputs to my function against some conditions and surface violations as a compiler warning/error. How is that possible?
For example:
func getPoints(start: Int, end: Int) {
}
I want to show a compiler warning/error when someone passes a start value higher than the end value.
getPoints(start: 3, end: 10) // No warnings
getPoints(start: 6, end: 2) // Compiler warning like: end value can not be less than start value
This is actually for a framework; I want to ensure that callers can't pass bad inputs.
Such a constraint can't be enforced at compile time. Take Range, for example, which enforces that the lowerBound always compares as less than or equal to the upperBound. That's just an assertion that runs at run time, and crashes if it's not met.
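For instance, this compiles without complaint but traps when executed:

_ = 6..<2 // runtime trap: "Can't form Range with upperBound < lowerBound"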
I would suggest you just change your API design to use a Range<Int> or ClosedRange<Int> (a sketch follows the list below). Taking pairs of Ints to model ranges is a bad idea, for many reasons:
It doesn't communicate the semantics of a range. Two integers could be anything, but a range is something much more specific.
It doesn't have any of the useful methods, like contains(_:), or support for pattern matching via the ~= operator.
It's error-prone: when passing pairs around, you might make a copy/paste error and accidentally use the same parameter twice.
It reads better: getPoints(3...10)
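A minimal sketch of what the range-based signature could look like (the body is illustrative only):

func getPoints(_ range: ClosedRange<Int>) {
    // lowerBound <= upperBound is guaranteed by construction
    for point in range {
        print(point) // placeholder for the real work
    }
}

getPoints(3...10) // a malformed range like 6...2 traps when the range is formed, before the call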
You can't generate a warning at compile time, since the arguments are not evaluated beyond checking for type conformance.
In your example, you have used constants, so it would, in theory, be possible to perform the check you want, but what if you passed a variable, or the result of another function? How much of your code would the compiler need to execute in order to perform the check?
You need to enforce your requirements at run time. For example, you could have your function throw if the parameters were incorrect:
enum MyErrors: Error {
    case rangeError
}

func getPoints(start: Int, end: Int) throws {
    guard start <= end else {
        throw MyErrors.rangeError
    }
    // ...
}
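Callers then handle the failure at run time, for example:

do {
    try getPoints(start: 6, end: 2)
} catch {
    print("end value can not be less than start value")
}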
Or you could have the function simply handle the problem:
func getPoints(start: Int, end: Int) {
    let beginning = min(start, end)
    let ending = max(start, end)
    // ...
}
Also, I recommend Alexander's suggestion of using Range instead of a pair of Ints; it is always a good idea to take advantage of standard library types. But I will leave my answer, as it shows some approaches for handling issues at run time.

Odd Shortened Syntax for Closures in Swift?

I'm trying to fully wrap my head around closures in Swift. I think I understand the basics. However, I ran into some odd syntax and would appreciate it if someone could explain what it means.
I'm going to show the shortened syntax that I came across, but first I'll show how I would write this code without the shorthand. I think I understand everything that's going on in the following code, but I'll narrate it just to make sure :)
//function
func manipulate(numbers: [Int], using algorithm: (Int) -> Int) {
    for number in numbers {
        let result = algorithm(number)
        print("Manipulating \(number) produced \(result)")
    }
}

//trailing closure syntax
manipulate(numbers: [1, 2, 3]) { (number: Int) -> Int in
    return number * number
}
Okay, so we're basically declaring a function called manipulate that takes two parameters. One of these parameters is numbers, an array of Ints. The second parameter, known as using externally and algorithm internally, is a closure which takes an Int as a parameter and returns an Int.
Okay cool. The function then goes through all the numbers in the array numbers, calls the closure algorithm on each number, and prints the result.
Okay, so that's what the function does, now, let's call it with trailing closure syntax.
What I did was call the function, manipulate and pass in the first parameter, numbers, and then, going by the trailing closure syntax, defined the closure that I'm going to be using. It's taking a parameter, which I called number, that is an Int, and it's returning another Int, number * number.
This code makes perfect sense to me.
Here's the variant that tripped me up. The definition of the function was exactly the same, but, the function was called in a way that I don't understand.
//function
func manipulate(numbers: [Int], using algorithm: (Int) -> Int) {
    for number in numbers {
        let result = algorithm(number)
        print("Manipulating \(number) produced \(result)")
    }
}

//trailing closure syntax
manipulate(numbers: [1, 2, 3]) { number in
    number * number
}
First of all, we're not specifying the type of the parameter that the closure takes, and we're also not specifying the type of the return value. Also, what does number even mean in this context when we're calling the function?
I'd appreciate it if someone could explain what's going on here. I'm pretty sure it's some idiomatic syntax in Swift, but I don't quite understand it.
Thanks!
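For context, the shorthand relies on Swift's type inference: because manipulate declares using as (Int) -> Int, the compiler already knows the closure's parameter and return types, so the annotations (and, for a single-expression body, the return keyword) can be dropped. The same call can be written at several levels of explicitness:

manipulate(numbers: [1, 2, 3], using: { (number: Int) -> Int in return number * number })
manipulate(numbers: [1, 2, 3]) { (number: Int) -> Int in return number * number }
manipulate(numbers: [1, 2, 3]) { number in number * number } // types and return inferred
manipulate(numbers: [1, 2, 3]) { $0 * $0 } // shorthand argument name

Here, number is simply the name given to the closure's parameter; it binds to each element that manipulate passes to algorithm.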

Good behavior for subscript

I'm creating an extension for String and I'm trying to decide what proper/expected/good behavior would be for a subscript operator. Currently, I have this:
// Will crash on 0-length strings
subscript(kIndex: Int) -> Character {
    var index = kIndex
    index = index < 0 ? 0 : index
    // `length` is assumed to be another property of this String extension (the character count)
    index = index >= self.length ? self.length - 1 : index
    let i = self.startIndex.advancedBy(index)
    return self.characters[i]
}
This clamps any index outside the string's bounds to the nearest valid index. While this reduces crashes from passing a bad index to the subscript, it doesn't feel like the right thing to do. I'm unable to throw an exception from a subscript, and not checking the index causes a BAD_INSTRUCTION error if it's out of bounds. The only other option I can think of is to return an optional, but that seems awkward. Weighing the options, what I have seems the most reasonable, but I don't think anybody using this would expect a bad index to return a valid result.
So, my question is: what is the "standard" expected behavior of the subscript operator and is returning a valid element from an invalid index acceptable/appropriate? Thanks.
If you're implementing a subscript on String, you might want to first think about why the standard library chooses not to.
When you call self.startIndex.advancedBy(index), you're effectively writing something like this:
var i = self.startIndex
for _ in 0..<index { i = i.successor() }
This occurs because String.CharacterView.Index is not a random-access index type. See docs on advancedBy. String indices aren't random-access because each Character in a string may be any number of bytes in the string's underlying storage — you can't just get character n by jumping n * characterSize into the storage like you can with a C string.
So, if one were to use your subscript operator to iterate through the characters in a string:
for i in 0..<string.characters.count {
    doSomethingWith(string[i])
}
... you'd have a loop that looks like it runs in linear time, because it looks just like an array iteration — each pass through the loop should take the same amount of time, because each one just increments i and uses a constant-time access to get string[i], right? Nope. The advancedBy call in the first pass through the loop calls successor once, the next calls it twice, and so on... if your string has n characters, the last pass through the loop calls successor n times (even though that recomputes a result the previous pass already produced by calling successor n-1 times). In other words, you've just made an O(n²) operation that looks like an O(n) operation, leaving a performance-cost bomb for whoever else uses your code.
This is the price of a fully Unicode-aware string library.
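For comparison, the idiomatic way to visit every character runs in linear time, because the index advances once per character instead of being recomputed from startIndex on each pass:

for character in string.characters {
    doSomethingWith(character)
}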
Anyhow, to answer your actual question — there are two schools of thought for subscripts and domain checking:
Have an optional return type: subscript(index: Index) -> Element?
This makes sense when there's no sensible way for a client to check whether an index is valid without performing the same work as a lookup — e.g. for a dictionary, finding out if there's a value for a given key is the same as finding out what the value for a key is.
Require that the index be valid, and make a fatal error otherwise.
The usual case for this is situations where a client of your API can and should check for validity before accessing the subscript. This is what Swift arrays do, because arrays know their count and you don't need to look into an array to see if an index is valid.
The canonical test for this is precondition: e.g.
subscript(index: Index) -> Element {
    precondition(isValid(index), "index must be valid")
    // ... do lookup ...
}
(Here, isValid is some operation specific to your type for validating an index, e.g. making sure it's >= 0 and < count.)
In just about any use case, it's not idiomatic Swift to return a "real" value in the case of a bad index, nor is it appropriate to return a sentinel value — separating in-band values from sentinels is the reason Swift has Optionals.
Which of these is more appropriate for your use case is... well, since your use case is problematic to begin with, it's sort of a wash. If you precondition that index < count, you still incur an O(n) cost just to check that (because a String has to examine its contents to figure out which byte sequences constitute each Character before it knows how many characters it has). If you make your return type optional and return nil after calling advancedBy or count, you've still incurred that O(n) cost.
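For completeness, a minimal sketch of the optional-return variant, using the same Swift 2-era API as the question (and which, per the above, still pays an O(n) cost for the bounds check):

extension String {
    subscript(index: Int) -> Character? {
        // characters.count is O(n): the string must be walked to count Characters
        guard index >= 0 && index < characters.count else { return nil }
        return characters[startIndex.advancedBy(index)]
    }
}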

cannot be applied to operands of type 'UITextField' and 'Int'

I am trying to populate a label with a text field's input * 365.
I keep getting the message:
Binary operator '*' cannot be applied to operands of type 'UITextField' and 'Int'
var hours = (hoursTextField.text as NSString).doubleValue
var hoursInAYear = hoursTextField * 365
Your first line is calculating the doubleValue of what's entered into the text field, but you're not using that hours variable. Perhaps you want:
var hoursInAYear = hours * 365
The warning you are getting is telling you that you're trying to use the * operator between a variable whose type is UITextField and another variable whose type is Int (this is what your 365 literal is interpreted as).
This warning will come up any time we try to use an operator between two types for which the operator has no overload. It is particularly common when one operand's type is implicitly determined because we're using a literal somewhere. To resolve the issue, we must double-check how our operands are created and make sure they're of types for which the operator has an overload.
If they are not, then we should either change how we create these variables so they have the right type, or find some way of converting them when we use them with the operator.
When we change the mistaken variable from the text field to the double we just calculated, Swift can compile this correctly. Although 365 was interpreted as an Int earlier, a literal can be interpreted as any of several types, including Double.
When we use * between a variable of type Double and a literal number, the literal is treated as a Double, and we use the overload of the * operator that accepts two Doubles (and returns a Double).
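As an aside, the conversion itself can be done without bridging to NSString; text is optional, so a sketch like this handles the nil case explicitly:

let hours = Double(hoursTextField.text ?? "") ?? 0
let hoursInAYear = hours * 365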
You're trying to multiply hoursTextField by 365. Did you mean to write:
var hours = (hoursTextField.text as NSString).doubleValue
var hoursInAYear = hours * 365 // hours, not hoursTextField.
I think it is basically just a typo or copy-paste mistake on your part, since you already calculate the hours variable correctly and don't use it afterwards. Simply change your second line to
var hoursInAYear = hours * 365

string comparison against factors in Stata

Suppose I have a factor variable with labels "a" "b" and "c" and want to see which observations have a label of "b". Stata refuses to parse
gen isb = myfactor == "b"
Sure, there is literally a "type mismatch", since my factor is encoded as an integer and so cannot be compared to the string "b". However, it wouldn't kill Stata to (i) perform the obvious parse or (ii) provide a translator function so I can write the comparison as label(myfactor) == "b". Using decode to (re)create a string variable defeats the purpose of encoding, which is to save space and make computations more efficient, right?
I hadn't really expected the comparison above to work, but I at least figured there would be a one- or two-line approach. Here is what I have found so far. There is a nice macro ("extended") function that maps the other way (from an integer to a label, seen below as local labi: label ...). Here's the solution using it:
// sample data
clear
input str5 mystr int mynum
a 5
b 5
b 6
c 4
end
encode mystr, gen(myfactor)
// first, how many groups are there?
by myfactor, sort: gen ng = _n == 1
replace ng = sum(ng)
scalar ng = ng[_N]
drop ng
// now, which code corresponds to "b"?
forvalues i = 1/`=ng' {
    local labi: label myfactor `i'
    if "b" == "`labi'" {
        scalar bcode = `i'
        break
    }
}
di bcode
The second step is what irks me, but I'm sure there's also a faster, more idiomatic way of performing the first step. Can I grab the length of the label vector, for example?
An example:
clear all
set more off
sysuse auto
gen isdom = 1 if foreign == "Domestic":`:value label foreign'
list foreign isdom in 1/60
This creates a variable called isdom that will equal 1 if foreign's value label is equal to "Domestic". It uses an extended macro function.
From [U] 18.3.8 Macro expressions:
Also, typing
command that makes reference to `:extended macro function'
is equivalent to
local macroname : extended macro function
command that makes reference to `macroname'
This explains one of the two : in the offered syntax. The other can be explained by
... to specify value labels directly in an expression, rather than through the underlying numeric value ... You specify the label in double quotes (""), followed by a colon (:), followed by the name of the value label.
The quote is from Stata tip 14: Using value labels in expressions, by Kenneth Higbee, The Stata Journal (2004). Freely available at http://www.stata-journal.com/sjpdf.html?articlenum=dm0009
Edit
On computing the number of distinct observations, another way is:
by myfactor, sort: gen ng = _n == 1
count if ng
scalar sc_ng = r(N)
display sc_ng
But yours is fine. In fact, it is documented here: http://www.stata.com/support/faqs/data-management/number-of-distinct-observations/, along with more methods and comments.
