I need to highlight word by word - ios

I have an app in which I need to highlight words, like an audiobook that reads Arabic text aloud and highlights it as it goes.
There are methods to highlight a particular substring of a label, but that is not what I want. I want something like the following. Consider this sentence:
"I am an iOS Developer & I work in BEE Technologies"
Now, what I want is to be able to say: highlight character numbers
1-1   -> I
3-4   -> am
6-7   -> an
9-11  -> iOS
13-21 -> Developer
I'm asking because I don't think there is any way to simply keep highlighting word by word until the end of the line.

This is a generic solution; it might not be the best one, but at least it works for me as it should...
I faced almost the same case. Here is what I did to achieve it (consider these the steps to follow):
Creating an array of ranges: I calculated the range of each word and added the ranges to an array, to make it easier to determine which word (range) should be highlighted. For example, when you want to highlight "I", you should highlight the range at index 0, and so on...
Here is an example of how you can generate an array of NSRange:
let myText = "I am an iOS Developer"
let arrayOfWords = myText.components(separatedBy: " ")
var currentLocation = 0
var currentLength = 0
var arrayOfRanges = [NSRange]()

for word in arrayOfWords {
    currentLength = word.characters.count
    arrayOfRanges.append(NSRange(location: currentLocation, length: currentLength))
    currentLocation += currentLength + 1
}

for rng in arrayOfRanges {
    print("location: \(rng.location) length: \(rng.length)")
}
/* output is:
location: 0 length: 1
location: 2 length: 2
location: 5 length: 2
location: 8 length: 3
location: 12 length: 9
*/
Using a Timer: keep checking what the current second is (declare a variable currentSecond, for example, and increment it by 1 every second). Based on the current second, you can determine which word should be highlighted.
For example, say "I" should be highlighted between second 0 and 1 and "am" from second 2 to 3; you can then check whether currentSecond is between 0 and 1 to highlight "I", between 2 and 3 to highlight "am", and so on, as in the sketch below.
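A minimal sketch of this timer idea, with assumed per-word timings and a hypothetical highlightWord(at:) helper (the actual highlighting is the third step):
import UIKit

// Assumed timings: one (start, end) second pair per word,
// aligned with the arrayOfRanges built in the first step.
let wordTimings = [(start: 0, end: 1), (start: 2, end: 3), (start: 4, end: 5),
                   (start: 6, end: 8), (start: 9, end: 12)]
var currentSecond = 0

func highlightWord(at index: Int) {
    // Apply the NSMutableAttributedString highlighting here (third step).
}

let timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    currentSecond += 1
    // Highlight the word whose time window contains the current second.
    for (index, timing) in wordTimings.enumerated()
        where timing.start <= currentSecond && currentSecond <= timing.end {
            highlightWord(at: index)
    }
}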
Using NSMutableAttributedString: use it to do the actual highlighting of the words (the ranges mentioned in the first bullet); a short sketch follows the links below. You can also check these questions/answers to see how to use it:
iOS - Highlight One Word Or Multiple Words In A UITextView.
How can I change style of some words in my UITextView one by one in Swift?
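And a minimal sketch of the highlighting step itself, in modern Swift syntax; the function and its parameters are assumptions for illustration (note that NSRange counts UTF-16 code units, which matches the simple per-word counting above for typical text):
import UIKit

func highlight(rangeAt index: Int, ranges: [NSRange], fullText: String, in label: UILabel) {
    let attributed = NSMutableAttributedString(string: fullText)
    // Give only the current word a background color; the rest keeps its default look.
    attributed.addAttribute(.backgroundColor, value: UIColor.yellow, range: ranges[index])
    label.attributedText = attributed
}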
Hope this helped...

Related

How to find and replace UILabel's value on a specific line?

I know I can find the total number of lines of a UILabel with .numberOfLines, but how can I retrieve and edit the value on, say, line 2?
Example: [screenshot omitted]
Assuming from your screen shot that each line is separated by a newline character you can split the text based on that.
Here is an example in Swift 3:
if let components = label.text?.components(separatedBy: "\n"), components.count > 1 {
    let secondLine = components[1] // index 1 is the second line
    let editedSecondLine = secondLine + " edited"
    label.text = label.text?.replacingOccurrences(of: secondLine, with: editedSecondLine)
}
You should make sure there is a value at whatever index you're interested in; this example checks that there is more than one component before retrieving the value.
You can then replace the second line with your edited one.
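One caveat with replacingOccurrences(of:with:): it replaces every occurrence of that substring anywhere in the text, so if another line contains the same value you may edit more than you intend. A small sketch that rebuilds the text from its components instead (same assumptions as above):
if var components = label.text?.components(separatedBy: "\n"), components.count > 1 {
    components[1] += " edited" // index 1 is the second line
    label.text = components.joined(separator: "\n")
}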
Hope that helps.

Generation random (positive and negative) numbers for a quiz

I am writing a math quiz app for my daughter in Xcode/Swift. Specifically, I want to produce a question that contains at least one negative number, to be added to or subtracted from a second randomly generated number. It cannot be two positive numbers.
i.e.
What is (-45) subtract 12?
What is 23 Minus (-34)?
I am struggling to get the syntax right to generate the numbers and then decide whether a given number will be negative or positive.
The second issue is randomizing whether the problem is addition or subtraction.
It's possible to solve this without repeated number drawing. The idea is to:
Draw a random number, positive or negative
If the number is negative: Draw another number from the same range and return the pair.
If the number is positive: Draw the second number from a range constrained to negative numbers.
Here's the implementation:
extension CountableClosedRange where Bound : SignedInteger {

    /// A property that returns a random element from the range.
    var random: Bound {
        return Bound(arc4random_uniform(UInt32(count.toIntMax())).toIntMax()) + lowerBound
    }

    /// A pair of random elements where one element is always negative.
    var randomPair: (Bound, Bound) {
        let first = random
        if first >= 0 {
            return (first, (self.lowerBound ... -1).random)
        }
        return (first, random)
    }
}
Now you can just write...
let pair = (-10 ... 100).randomPair
... and get a random tuple where one element is guaranteed to be negative.
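For the second part of the question, choosing the operation at random, here is a hedged sketch building on randomPair (the wording of the question string is just an assumption):
import Foundation

let (a, b) = (-10 ... 100).randomPair
let isAddition = arc4random_uniform(2) == 0 // coin flip between the two operations

let question = isAddition ? "What is \(a) plus (\(b))?" : "What is \(a) minus (\(b))?"
let answer = isAddition ? a + b : a - b

print(question) // e.g. "What is 23 minus (-34)?"
print(answer)   // e.g. 57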
Here's my attempt. Try running this in a playground; it should hopefully get you the result you want. I hope I've made something clean enough...
//: Playground - noun: a place where people can play
import Cocoa
let range = Range(uncheckedBounds: (-50, 50))

func generateRandomCouple() -> (a: Int, b: Int) {
    // This function generates a pair of random integers
    // (a, b) such that at least one of a and b is negative.
    var first, second: Int
    repeat {
        first = Int(arc4random_uniform(UInt32(range.upperBound - range.lowerBound))) + range.lowerBound
        second = Int(arc4random_uniform(UInt32(range.upperBound - range.lowerBound))) + range.lowerBound
    } while first >= 0 && second >= 0
    // Essentially this loops until at least one of the two is negative
    // (zero counts as non-negative, so a (0, 0) pair is rejected too).
    return (first, second)
}
let couple = generateRandomCouple()
print("What is \(couple.a) + (\(couple.b))")
// At this point, at least one of the two values is negative.
// You can't read input in a playground, but here you would read
// her answer, and the expected result would, naturally, be:
print(couple.a + couple.b)
In any case, feel free to ask for clarifications. Good luck!

Stubborn emoji won't combine: 👨‍❤️‍💋‍👨

Even after #user3441734 solved most of my problems 🙇, there are a few emoji that I can't seem to render properly when converting from a [String:String] to String.
Here's some Playground-ready code to illustrate the problem:
var u = ""
u = "1f468-1f468-1f467-1f467"    // 👨‍👨‍👧‍👧
//u = "1f918-1f3ff"              // 🤘🏿
//u = "1f468-2764-1f48b-1f468"   // 👨‍❤️‍💋‍👨 (broken)
//u = "1f3c7-1f3fb"              // 🏇‍🏻 (broken)

let unicodeArray = u.characters.split("-")
    .map(String.init)
    .map { String(UnicodeScalar(Int($0, radix: 16) ?? 0)) }

if let last = unicodeArray.last {
    let separator: String
    switch (unicodeArray.first, last) {
    // Failed attempt to get tone applied to jockey
    case let (horse_racing, _) where horse_racing == "\u{1f3c7}":
        separator = "\u{200d}"
    case let (_, tone) where "\u{1f3fb}"..."\u{1f3ff}" ~= tone:
        separator = ""
    case let (_, regionalIndicatorSymbol) where "\u{1f1e6}"..."\u{1f1ff}" ~= regionalIndicatorSymbol:
        separator = ""
    default:
        separator = "\u{200d}"
    }
    print(unicodeArray.joinWithSeparator(separator))
}
Uncomment each assignment to u in turn to see the problem in action. The 3rd and 4th values should render as 👨‍❤️‍💋‍👨 and 🏇🏻, respectively.
Thoughts…
It turns out that a long press on the racehorse emoji fails to show skin tones on iOS, so let's assume that's just an oversight, perhaps related to the near-impossibility of judging the jockey's skin tone at standard emoji sizes. I still can't figure out the problem with u = "1f468-2764-1f48b-1f468".
Apologies if this question comes out at all unclear. Chrome and Safari have different behaviors w.r.t these combo-emoji, so only the linked images are guaranteed to appear to you the way I see them on my end. 😒
These emoji are all either skin-tone renderings or tokens of same-sex affection. Is there some sort of bizarre latent racism & homophobia lurking in the system?! 😱 (Cue the conspiracy theories.)
Note that my attempt to use the zero-width joiner, \u{200d}, didn't help.
So, bug in Apple & Chrome's handling of certain emoji, or is there yet another idiosyncrasy of the standard that I've missed?
There's no conspiracy, the bugs are in your code.
The first character can be produced with:
U+1F468 U+200D U+2764 U+FE0F U+200D U+1F48B U+200D U+1F468
Note the ZERO WIDTH JOINER (U+200D) between each character, and the VARIATION SELECTOR-16 selector (U+FE0F) on the HEAVY BLACK HEART (U+2764) to ensure the emoji presentation style is used.
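A quick playground check (modern Swift; the scalar escapes spell out exactly that sequence):
// U+1F468, ZWJ, U+2764, VS-16, ZWJ, U+1F48B, ZWJ, U+1F468
let kiss = "\u{1F468}\u{200D}\u{2764}\u{FE0F}\u{200D}\u{1F48B}\u{200D}\u{1F468}"
print(kiss)       // 👨‍❤️‍💋‍👨
print(kiss.count) // 1 -- the ZWJ sequence forms a single grapheme cluster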
Refer to this table for a complete list of implemented multi-person groupings.
U+1F3C7 HORSE RACING is not an emoji modifier base, and so it does not support skin tone modifiers.

Percent Similarity of an Array Swift

Say I have two arrays:
var arrayOne = ["Hi", "Hello", "Hey", "Howdy"]
var arrayTwo = ["Hi", "Hello", "Hey", "Not Howdy"]
What could I do to compare how similar the arrays' elements are? As in, a function that would return 75%, because the first three elements are the same but the last elements differ. The arrays I'm using in my project are arrays of strings that will almost entirely match, except for a few elements, and I need to see what the percentage difference is. Any ideas?
let arrayOne = ["Hi", "Hello", "Hey", "Howdy"]
let arrayTwo = ["Hi", "Hello", "Hey", "Not Howdy"]

var matches = 0
for (index, item) in arrayOne.enumerated() {
    if item == arrayTwo[index] {
        matches += 1
    }
}

Double(matches) / Double(arrayOne.count) // 0.75
Both of these algorithms use the idea that if you have two arrays of different lengths, the highest similarity you can have is shorter length / longer length, meaning that the difference in the array lengths counts as non-matching.
You could add the terms of each array to a set and make your percentage the size of the sets' intersection divided by the length of the longest array (see the sketch after these notes).
You could sort both arrays and then loop with an index variable for each array, comparing the values at the two indices: advance the index of the array with the "lower" value, or increment a counter if the values are equal. Your percentage would be the counter divided by the length of the longest array.
One thing to think about though is how you want to measure similarity in weird cases. Suppose you have two arrays: [1, 2, 3, 4, 5] and [1, 1, 1, 1, 1]. I don't know whether you would want to say they are completely similar, since all of the elements in the second array are in the first array, or if they only have a similarity of 20% because once the 1 in the first array is "used", it can't be used again.
Just some thoughts.
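A hedged sketch of the set-based idea above, reading the percentage as unique terms common to both arrays over the length of the longest array:
func setSimilarity(_ lhs: [String], _ rhs: [String]) -> Double {
    let common = Set(lhs).intersection(Set(rhs)) // unique shared terms
    let longest = max(lhs.count, rhs.count)
    return longest == 0 ? 1.0 : Double(common.count) / Double(longest)
}

setSimilarity(["Hi", "Hello", "Hey", "Howdy"],
              ["Hi", "Hello", "Hey", "Not Howdy"]) // 0.75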
Maybe something like this? (Written off the top of my head, so I haven't checked that it actually compiles.)
var arrayOne = ["Hi", "Hello", "Hey", "Howdy"]
var arrayTwo = ["Hi", "Hello", "Hey", "Not Howdy"]

var matches = 0
for i in 0..<arrayOne.count { // assuming the arrays are always the same length
    if arrayOne[i] == arrayTwo[i] {
        matches += 1
    }
}
var percent = Double(matches) / Double(arrayOne.count)
A good way to measure the similarity of two arrays is to iterate over all elements of one array while keeping a cursor on the second array, such that at any time the current element of the iterated array is not greater than the element at the cursor position.
As you may have noticed, this algorithm requires elements to be comparable, so it works if the arrays' element type conforms to the Comparable protocol.
I've written a generic function that performs that calculation; here it is:
func compare<T: Comparable>(var lhs: [T], var rhs: [T]) -> (matches: Int, total: Int) {
    lhs.sort { $0 < $1 } // In-place sort
    rhs.sort { $0 < $1 } // In-place sort

    var matches = 0
    var rightSequence = SequenceOf(rhs).generate()
    var right = rightSequence.next()

    for left in lhs {
        while right != nil && left > right {
            right = rightSequence.next()
        }
        if left == right {
            ++matches
            right = rightSequence.next()
        }
    }

    return (matches: matches, total: max(lhs.count, rhs.count))
}
Let me say that the implementation can probably be optimized, but my goal here is to show the algorithm, not to provide its best implementation.
The first thing to do is obtain a sorted version of each of the two arrays. For simplicity, I have declared both parameters as var, which allows me to edit them while keeping all changes in the local scope; that's why I am using an in-place sort.
A sequence on the 2nd array is created, called rightSequence, and the first element is extracted, copied into the right variable.
Then the first array is iterated over - for each element, the sequence is advanced to the next element until the left element is not greater than the right one.
Once this is done, left and right are compared for equality, in which case the counter of matches is incremented.
The algorithm works for arrays having repetitions, different sizes, etc.

Swift countElements() return incorrect value when count flag emoji

let str1 = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪"
let str2 = "🇩🇪.🇩🇪.🇩🇪.🇩🇪.🇩🇪."
println("\(countElements(str1)), \(countElements(str2))")
Result: 1, 10
But should not str1 have 5 elements?
The bug seems to occur only when I use the flag emoji.
Update for Swift 4 (Xcode 9)
As of Swift 4 (tested with Xcode 9 beta) grapheme clusters break after every second regional indicator symbol, as mandated by the Unicode 9
standard:
let str1 = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪"
print(str1.count) // 5
print(Array(str1)) // ["🇩🇪", "🇩🇪", "🇩🇪", "🇩🇪", "🇩🇪"]
Also String is a collection of its characters (again), so one can
obtain the character count with str1.count.
(Old answer for Swift 3 and older:)
From "3 Grapheme Cluster Boundaries"
in the "Standard Annex #29 UNICODE TEXT SEGMENTATION":
(emphasis added):
A legacy grapheme cluster is defined as a base (such as A or カ)
followed by zero or more continuing characters. One way to think of
this is as a sequence of characters that form a β€œstack”.
The base can be single characters, or be any sequence of Hangul Jamo
characters that form a Hangul Syllable, as defined by D133 in The
Unicode Standard, or be any sequence of Regional_Indicator (RI) characters. The RI characters are used in pairs to denote Emoji
national flag symbols corresponding to ISO country codes. Sequences of
more than two RI characters should be separated by other characters,
such as U+200B ZWSP.
(Thanks to #rintaro for the link).
A Swift Character represents an extended grapheme cluster, so it is (according
to this reference) correct that any sequence of regional indicator symbols
is counted as a single character.
You can separate the "flags" by a ZERO WIDTH NON-JOINER:
let str1 = "🇩🇪\u{200C}🇩🇪"
print(str1.characters.count) // 2
or insert a ZERO WIDTH SPACE:
let str2 = "🇩🇪\u{200B}🇩🇪"
print(str2.characters.count) // 3
This also resolves possible ambiguities, e.g. should "🇫​🇷​🇺​🇸"
be "🇫​🇷🇺​🇸" or "🇫🇷​🇺🇸"?
See also How to know if two emojis will be displayed as one emoji? about a possible method
to count the number of "composed characters" in a Swift string,
which would return 5 for your let str1 = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪".
Here's how I solved that problem, for Swift 3:
let str = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪" // or whatever the string of emojis is
let range = str.startIndex..<str.endIndex
var length = 0
str.enumerateSubstrings(in: range, options: NSString.EnumerationOptions.byComposedCharacterSequences) { (substring, substringRange, enclosingRange, stop) -> () in
    length = length + 1
}
print("Character Count: \(length)")
This fixes all the problems with character count and emojis, and is the simplest method I have found.
