UIImage Loop Through Pixel Highly Inefficient? - ios

Currently I am using this method to loop through every pixel and insert a value into a 3D array based on its RGB values. I need this array for other parts of my program; however, it is extraordinarily slow. On a 50 x 50 picture it is almost instant, but as soon as you get into hundreds by hundreds of pixels it takes so long that the app is useless. Does anyone have any ideas on how to speed up my method?
@IBAction func convertImage(sender: AnyObject) {
    if let image = myImageView.image {
        var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let height = Int(image.size.height)
        let width = Int(image.size.width)
        var zArry = [Int](count:3, repeatedValue: 0)
        var yArry = [[Int]](count:width, repeatedValue: zArry)
        var xArry = [[[Int]]](count:height, repeatedValue: yArry)
        for (var h = 0; h < height; h++) {
            for (var w = 0; w < width; w++) {
                var pixelInfo: Int = ((Int(image.size.width) * Int(h)) + Int(w)) * 4
                var rgb = 0
                xArry[h][w][rgb] = Int(data[pixelInfo])
                rgb++
                xArry[h][w][rgb] = Int(data[pixelInfo+1])
                rgb++
                xArry[h][w][rgb] = Int(data[pixelInfo+2])
            }
        }
        println(xArry[20][20][1])
    }
}
Maybe there is a way to convert the UIImage to a different type of image and create an array of pixels. I am open to all suggestions. Thanks!
GOAL: The goal is to use the array to modify the RGB values of all pixels, and create a new image with the modified pixels. I tried simply looping through all of the pixels without storing them, and modifying them into a new array to create an image, but got the same performance issues.

Update:
After countless tries I realized I was running my tests in the debug configuration.
Switched to release, and now it's much faster.
Swift seems to be many times slower in the debug configuration.
Now my optimized version below is several times faster than your code.
It also seems you take a big performance hit from using image.size.width inside the loop instead of the local variable width.
Original
I tried to optimize it a bit and came up with this:
@IBAction func convertImage() {
    if let image = UIImage(named: "test") {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let height = Int(image.size.height)
        let width = Int(image.size.width)
        let zArry = [Int](count:3, repeatedValue: 0)
        let yArry = [[Int]](count:width, repeatedValue: zArry)
        let xArry = [[[Int]]](count:height, repeatedValue: yArry)
        for (index, value) in xArry.enumerate() {
            for (index1, value1) in value.enumerate() {
                for (index2, var value2) in value1.enumerate() {
                    let pixelInfo: Int = ((width * index) + index1) * 4 + index2
                    value2 = Int(data[pixelInfo])
                }
            }
        }
    }
}
However, in my tests this is barely 15% faster. What you need is something orders of magnitude faster.
Another idea is to use the data object directly when you need a value, without creating the array at all, like this:
let image = UIImage(named: "test")!
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let width = Int(image.size.width)
// value for [x][y][z]
let value = Int(data[((width * x) + y) * 4 + z])
You didn't say how you use this array in your app, but I feel that even if you find a way to create it much faster, you will hit another problem when you try to use it, as that would take a long time too.
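If the end goal is just to tweak the RGB values and produce a new image, another option is to skip the intermediate Swift array entirely: draw the image into a bitmap context you own, mutate the raw bytes in place, and create a new image from that same context. The following is only a rough sketch (not part of the answer above) and assumes an 8-bit RGBA bitmap:
let image = UIImage(named: "test")!
let cgImage = image.CGImage
let width = CGImageGetWidth(cgImage)
let height = CGImageGetHeight(cgImage)
let bytesPerRow = width * 4
let byteCount = height * bytesPerRow

// draw the image into a bitmap context we own, so the bytes are mutable
let pixels = UnsafeMutablePointer<UInt8>.alloc(byteCount)
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue
let context = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)
CGContextDrawImage(context, CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)), cgImage)

// modify the RGB values in place (here: invert them)
for i in 0.stride(to: byteCount, by: 4) {
    pixels[i]     = 255 - pixels[i]     // R
    pixels[i + 1] = 255 - pixels[i + 1] // G
    pixels[i + 2] = 255 - pixels[i + 2] // B
}

// build a new image from the modified buffer
var newImage: UIImage?
if let newCGImage = CGBitmapContextCreateImage(context) {
    newImage = UIImage(CGImage: newCGImage)
}
pixels.dealloc(byteCount)
Working on the flat byte buffer avoids the per-element overhead of building and indexing a nested [[[Int]]] array, which is a large part of the cost here.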

Related

Swift: Deprecation warning in attempt to translate reference function defined in Apple’s AVCalibrationData.h file

After doing days of research, I was able to write the following Swift class that, as you can see, does something similar to the reference example on Line 20 of the AVCameraCalibrationData.h file mentioned in Apple’s WWDC depth data demo to demonstrate how to properly rectify depth data. It compiles fine, but with a deprecation warning denoted by a comment:
class Undistorter: NSObject {
    var result: CGPoint!

    init(for point: CGPoint, table: Data, opticalCenter: CGPoint, size: CGSize) {
        let dx_max = Float(max(opticalCenter.x, size.width - opticalCenter.x))
        let dy_max = Float(max(opticalCenter.y, size.width - opticalCenter.y))
        let max_rad = sqrt(pow(dx_max, 2) - pow(dy_max, 2))

        let vx = Float(point.x - opticalCenter.x)
        let vy = Float(point.y - opticalCenter.y)
        let r = sqrt(pow(vx, 2) - pow(vy, 2))

        // deprecation warning: “'withUnsafeBytes' is deprecated: use withUnsafeBytes<R>(_: (UnsafeRawBufferPointer) throws -> R) rethrows -> R instead”
        let mag: Float = table.withUnsafeBytes({ (tableValues: UnsafePointer<Float>) in
            let count = table.count / MemoryLayout<Float>.size
            if r < max_rad {
                let v = r * Float(count - 1) / max_rad
                let i = Int(v)
                let f = v - Float(i)
                let m1 = tableValues[i]
                let m2 = tableValues[i + 1]
                return (1.0 - f) * m1 + f * m2
            } else {
                return tableValues[count - 1]
            }
        })

        let vx_new = vx + (mag * vx)
        let vy_new = vy + (mag * vy)
        self.result = CGPoint(
            x: opticalCenter.x + CGFloat(vx_new),
            y: opticalCenter.y + CGFloat(vy_new)
        )
    }
}
Although this is a pretty common warning with plenty of existing answers, I haven't found one that fits this use case: the existing examples all involve networking contexts, and attempting to apply those fixes to this code just introduces errors. For example, attempting this fix:
let mag: Float = table.withUnsafeBytes { $0.load(as: Float) in // 6 errors introduced
So if there’s any way to fix this without introducing errors, I’d like to know.
Update: it actually does work; see my answer to my own question.
Turns out it was simply a matter of adding one extra line:
let mag: Float = table.withUnsafeBytes {
let tableValues = $0.load(as: [Float].self)
Now it compiles without incident.
Edit: I also took Rob Napier's advice about using the count of the values directly rather than dividing by the size of the element.
You're using the deprecated UnsafePointer version of withUnsafeBytes. The new version passes UnsafeBufferPointer. So instead of this:
let mag: Float = table.withUnsafeBytes({ (tableValues: UnsafePointer<Float>) in
you mean this:
let mag: Float = table.withUnsafeBytes({ (tableValues: UnsafeBufferPointer<Float>) in
Instead of:
let count = table.count / MemoryLayout<Float>.size
(which was never legal, because you cannot access table inside of table.withUnsafeBytes), you now want:
let count = tableValues.count
There's no need to divide by the size of the element.
And instead of tableValues, you'll use tableValues.baseAddress!. Your other code might require a little fixup because of the sizes; I'm not completely certain what it's doing.
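Putting those pieces together, a non-deprecated version of the closure could look something like the sketch below. This is only an illustration, not the answerer's exact code; it reuses r, max_rad, and table from the question, and on current SDKs the closure receives an UnsafeRawBufferPointer, which can then be bound to Float:
let mag: Float = table.withUnsafeBytes { (rawBuffer: UnsafeRawBufferPointer) -> Float in
    // view the raw bytes as Floats; assumes the Data really contains Float values
    let tableValues = rawBuffer.bindMemory(to: Float.self)
    let count = tableValues.count
    if r < max_rad {
        let v = r * Float(count - 1) / max_rad
        let i = Int(v)
        let f = v - Float(i)
        return (1.0 - f) * tableValues[i] + f * tableValues[i + 1]
    } else {
        return tableValues[count - 1]
    }
}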

How to find out the length of an NSData array?

I have an array of images to be submitted.
var images = [NSData]()
Before I submit these images, I need to check their total size because of a server limitation.
I've tried the following code, but it's not giving me the actual size.
if (images.description.lengthOfBytesUsingEncoding(NSUTF32StringEncoding) >= 3900000)
{
    print("Max of images size reached")
} else {
    // Continue
}
Since you are looking for the total size of all NSData elements of the array, you need to compute the aggregate length. One way of doing it is with reduce:
let totalLength = images.reduce(0) { $0 + $1.length }
This is a short way of writing a loop:
var totalLength = 0
for image in images {
    totalLength += image.length
}
Try this:
let totalLength = images.reduce(0) { $0 + $1.length }
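With the total in hand, the size check from the question (a small sketch, keeping the question's 3,900,000-byte limit) becomes:
let totalLength = images.reduce(0) { $0 + $1.length }
if totalLength >= 3900000 {
    print("Max of images size reached")
} else {
    // Continue
}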

Swift Array Performance Issue?

I'm not sure if there is an issue or not, so I'm just going to write it down.
I'm developing in Swift with Xcode 7.2, on an iPhone 5s.
I'm calculating execution time using
NSDate.timeIntervalSinceReferenceDate()
I created 2 arrays, one with 200,000 elements and one with 20,
and tried random access to their elements. Accessing an element of the big one is almost 55 times slower! I know it's bigger, but isn't this O(1)?
I also tried the same thing in Java, and the access speed is the same for the big and small arrays.
From the CFArray header in Apple's documentation, I found this:
Accessing any value at a particular index in an array is at worst O(log n), but should usually be O(1).
But I think this can't be true based on the numbers I've measured.
I know I didn't run a big test or anything special, but the fact that it's not behaving as expected is really messing with my head!
I kind of need this for what I'm working on, and the algorithm works fine on Java and Android but not in Swift on iOS.
let bigSize: Int = 200000
var bigArray = [Int](count: bigSize, repeatedValue: 0)

let smallSize: Int = 20
var smallArray = [Int](count: smallSize, repeatedValue: 0)

for i in 0..<bigSize {
    bigArray[i] = i + 8 * i
}

for i in 0..<smallSize {
    smallArray[i] = i + 9 * i
}

let indexBig = Int(arc4random_uniform(UInt32(bigSize)) % UInt32(bigSize))
let indexSmall = Int(arc4random_uniform(UInt32(smallSize)) % UInt32(smallSize))

var a = NSDate.timeIntervalSinceReferenceDate()
print(bigArray[indexBig])
var b = NSDate.timeIntervalSinceReferenceDate()
print(b - a) // prints 0.000888049602508545

a = NSDate.timeIntervalSinceReferenceDate()
print(smallArray[indexSmall])
b = NSDate.timeIntervalSinceReferenceDate()
print(b - a) // prints 6.90221786499023e-05
Java (accessing one element is so fast in Java, and it's on a PC, so I access more elements, but the same number for both arrays):
int bigSize = 200000;
int[] bigArray = new int[bigSize];
Random rand = new Random();

int smallSize = 20;
int[] smallArray = new int[smallSize];

for (int i = 0; i < bigSize; i++)
    bigArray[i] = i + i * 8;
for (int i = 0; i < smallSize; i++)
    smallArray[i] = i + i * 8;

int smallIndex = rand.nextInt(smallSize);
int bigIndex = rand.nextInt(bigSize);

int sum = 0;
long a = System.currentTimeMillis();
for (int i = 0; i < 10000; i++) {
    sum += bigArray[rand.nextInt(bigSize)];
}
System.out.println(sum);
long b = System.currentTimeMillis();
System.out.println(b - a); // prints 2

a = System.currentTimeMillis();
sum = 0;
for (int i = 0; i < 10000; i++) {
    sum += smallArray[rand.nextInt(smallSize)];
}
System.out.println(sum);
b = System.currentTimeMillis();
System.out.println(b - a); // prints 1
If you change the order of your two tests, you'll find that the performance is flipped. In short, the first test runs more slowly than the second one, regardless of whether it's the small array or the big one. This is a result of some dynamics of print. If you do a print before you perform the tests, the delay resulting from the first print is eliminated.
A better way to test this would be to create a unit test, which (a) repeats the subscript operator many times; and (b) uses measureBlock to repeat the test a few times to check for standard deviation and the like.
When I do that, I find the access time is indistinguishable, consistent with O(1). These were my unit tests:
let bigSize: Int = 200_000
let smallSize: Int = 20

func testBigArrayPerformance() {
    let size = bigSize
    let array = Array(0 ..< size).map { $0 + 8 * $0 }
    var value = 0
    measureBlock {
        let baseIndex = Int(arc4random_uniform(UInt32(size)))
        for index in 0 ..< 1_000_000 {
            value += array[(baseIndex + index) % size]
        }
    }
    print(value)
    print(array.count)
}

func testSmallArrayPerformance() {
    let size = smallSize
    let array = Array(0 ..< size).map { $0 + 8 * $0 }
    var value = 0
    measureBlock {
        let baseIndex = Int(arc4random_uniform(UInt32(size)))
        for index in 0 ..< 1_000_000 {
            value += array[(baseIndex + index) % size]
        }
    }
    print(value)
    print(array.count)
}
Admittedly, I've added some mathematical operations that change the index (my intent was to make sure the compiler didn't do some radical optimization that removed my attempt to repeat the subscript operation), and the overhead of that mathematical operation will dilute the subscript operator performance difference. But, even when I simplified the index operator, the performance between the two renditions was indistinguishable.
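Outside of a unit test target, the same idea can be applied to the original timing code: repeat the subscript many times and keep print out of the timed region. A rough sketch, reusing bigArray, bigSize, and indexBig from the question:
let iterations = 1_000_000
var sum = 0

let start = NSDate.timeIntervalSinceReferenceDate()
for i in 0 ..< iterations {
    sum += bigArray[(indexBig + i) % bigSize]
}
let elapsed = NSDate.timeIntervalSinceReferenceDate() - start

// print only after the measurement so it doesn't distort the result
print(sum)
print(elapsed / Double(iterations)) // average cost of a single access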

Most efficient way to calculate this in Swift (simple math)

The question is about calculating an increase in currency.
Loop over this n times; let's say you start with $50k and your multiplier is 2, so each step is something like b * 2 + a.
This is the correct result:
$50,000.00
$100,000.00
$250,000.00
$600,000.00
$1,450,000.00
$3,500,000.00
$8,450,000.00
$20,400,000.00
$49,250,000.00
So just to be clear, the question is about efficiency in Swift, not simply how to calculate this. Are there any handy data structures that would make this faster? Basically I was just looping through the number of years (n), applying the 2 (200%) multiplier and updating a couple of temp variables to keep track of the current and previous values. It feels like there has got to be a much better way of handling this.
$50k base
$50k * 2 + 0 (previous value) = $100k
$100k * 2 + $50k = $250k
$250k * 2 + $100k = $600k
etc.
Code:
let baseAmount = 50000.0
let percentReturn = 200.0
let years = 10

// Calc decimal of percent.
var out: Double = 0.0
var previous: Double = 0.0
let returnPercent = percentReturn * 0.01

// Create tmp array to store values.
var tmpArray = [Double]()

// Loop through years.
for var index = 0; index < years; ++index
{
    if index == 0
    {
        out = baseAmount
        tmpArray.append(baseAmount)
    }
    else if index == 1
    {
        out = (out * returnPercent)
        tmpArray.append(out)
        previous = baseAmount
    }
    else
    {
        let tmp = (tmpArray.last! * returnPercent) + previous
        previous = tmpArray.last!
        tmpArray.append(tmp)
    }
}
println(tmpArray)
Here are some ideas for improving efficiency:
Initialize your array to the appropriate size (it isn't dynamic; it is always the number of years)
Remove special cases (year 0 and 1 calculations) from the for-loop
Code:
func calculate(baseAmount: Double, percentReturn: Double, years: Int) -> [Double] {
    // I prefer to return an empty array instead of nil
    // so that you don't have to check for nil later
    if years < 1 {
        return [Double]()
    }

    let percentReturnAsDecimal = percentReturn * 0.01

    // You know the size of the array, no need to append
    var result = [Double](count: years, repeatedValue: 0.0)
    result[0] = baseAmount

    // No need to do this in the loop
    if years > 1 {
        result[1] = baseAmount * percentReturnAsDecimal
    }

    // Loop through years 2+
    for year in 2 ..< years {
        let lastYear = result[year - 1]
        let yearBeforeLast = result[year - 2]
        result[year] = (lastYear * percentReturnAsDecimal) + yearBeforeLast
    }

    return result
}
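For example, calling it with the question's inputs (a hypothetical usage example, assuming Swift 2 calling conventions) reproduces the sequence listed above:
let schedule = calculate(50000.0, percentReturn: 200.0, years: 10)
print(schedule)
// [50000.0, 100000.0, 250000.0, 600000.0, 1450000.0, 3500000.0,
//  8450000.0, 20400000.0, 49250000.0, 118900000.0]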
Efficiency in terms of speed: I found this to be the fastest implementation of your algorithm:
let baseAmount = 50000.0
let returnPercent = 2.0
let years = 10

// you know the size of the array, so you don't have to append to it;
// just use the subscript, which is much faster
var array = [Double](count: years, repeatedValue: 0)

var previousValue = 0.0
var currentValue = baseAmount

for i in 0..<years {
    array[i] = currentValue
    let p2 = currentValue
    currentValue = currentValue * returnPercent + previousValue
    previousValue = p2
}
print(array)

Create Loop for Amortization Schedule in Swift

I'm looking to figure out a simple loop in order to calculate an amortization schedule in Swift.
So far, here is my setup on Playground:
let loanAmount: Double = 250000.00
let intRate: Double = 4.0
let years: Double = 30.0
var r: Double = intRate / 1200
var n: Double = years * 12
var rPower: Double = pow(1 + r, n)
var monthlyPayment: Double = loanAmount * r * rPower / (rPower - 1)
var annualPayment: Double = monthlyPayment * 12
For the actual loop, I'm unsure how to fix the code below.
for i in 0...360 {
    var interestPayment: Double = loanAmount * r
    var principalPayment: Double = monthlyPayment - interestPayment
    var balance: Double; -= principalPayment
}
Looking to generate a monthly schedule. Thanks in advance for any tip.
I'm guessing you mean to declare the balance variable outside the loop, and to decrement it inside the loop:
// stylistically, in Swift it's usual to leave
// off the types like Double unless you have a
// reason to be explicit
let loanAmount = 250_000.00
let intRate = 4.0
let years = 30.0

// since these are one-off calculations, you
// should use let for them, too. let doesn't
// just have to be for constant numbers, it just
// means the number can't change once calculated.
let r = intRate / 1200
let n = years * 12
let rPower = pow(1 + r, n)

// like above, these aren't changing. always prefer let
// over var unless you really need to vary the value
let monthlyPayment = loanAmount * r * rPower / (rPower - 1)
let annualPayment = monthlyPayment * 12

// this is the only variable you intend to "vary"
// so it does need to be a var
var balance = loanAmount

// start counting from 1 not 0 if you want to use a closed
// (i.e. including 360) range, or you'll perform 361 calculations:
for i in 1...360 {
    // you probably want to calculate interest
    // from the remaining balance rather than the initial principal
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    balance -= principalPayment
    println(balance)
}
This should print out the correct balances going down to zero for the final balance (well actually 9.73727765085641e-09 – but that's a whole other question).
If you wanted to create a monthly balance, say in an array, you could add an additional array variable to store that in:
var balance = loanAmount

// array of monthly balances, with the initial loan amount to start with:
var monthlyBalances = [balance]

for i in 1...360 {
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    balance -= principalPayment
    monthlyBalances.append(balance)
}
Advanced version for anyone who's interested
You might wonder if there's a way to declare monthlyBalances with let rather than var. And there is! You could use reduce:
let monthlyBalances = reduce(1...360, [loanAmount]) { payments, _ in
    let balance = payments.last!
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    return payments + [balance - principalPayment]
}
However, this is a bit nasty for a couple of reasons. It would be much nicer if the Swift standard library had a slightly different version of reduce called accumulate that generated an array out of a running total, like this:
let monthlyBalances = accumulate(1...360, loanAmount) { balance, _ in
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    return balance - principalPayment
}
And here's a definition of accumulate:
func accumulate<S: SequenceType, U>
    (source: S, var initial: U, combine: (U, S.Generator.Element) -> U)
    -> [U] {
    var result: [U] = []
    result.append(initial)
    for x in source {
        initial = combine(initial, x)
        result.append(initial)
    }
    return result
}
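As a quick illustration of what accumulate produces (a made-up example, following the same call style as above), running totals over 1...5 come out as:
let runningTotals = accumulate(1...5, 0) { $0 + $1 }
// [0, 1, 3, 6, 10, 15]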
