Objective-C: looking for an algorithm - iOS

In my application I need to determine which plates a user can load on their barbell to reach a desired weight.
For example, the user might specify that they are using a 45 lb bar and have 45, 35, 25, 10, 5, and 2.5 lb plates available. A weight like 115 is an easy case, because the result neatly matches a common plate: (115 - 45) / 2 = 35.
So the objective is to find, from the available selection, the largest-to-smallest plate(s) the user needs on each side to reach the weight.
My starter method looks like this...
-(void)imperialNonOlympic:(float)barbellWeight workingWeight:(float)workingWeight {
    float realWeight = (workingWeight - barbellWeight);
    float perSide = realWeight / 2;
    // ... lots of inefficient mod and division ...
}
My thought process is to first determine the weight per side: (total weight - weight of the barbell) / 2. Then determine the largest-to-smallest plates needed, and how many of each; e.g. a total of 325 gives (325 - 45) / 2 = 140 per side, which is 45 * 3 + 5, i.e. 45, 45, 45, 5.
Messing around with fmodf and a couple of other ideas, it occurred to me that there might be a known algorithm that solves this problem. I was looking into BFS, and admit that it is above my head, but I'm still willing to give it a shot.
I'd appreciate any tips on where to look, whether algorithms or code examples.

Your problem is a variant of the knapsack problem (with exact totals and repeated plates it closely resembles the change-making problem). You will find a lot of solutions for it, and it is classically solved with dynamic programming (DP).
One common approach is greedy: take the largest plate that does not exceed the remaining weight, then the largest that fits into what remains, and so on. It's easy. I am adding some more links (Link 1, Link 2, Link 3) so that it becomes clearer; some of the problems there may be hard to understand, so skip those and focus on the basic knapsack problem. Good luck. :)
Let me know if that helps. :)
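As a rough sketch of that greedy idea (in Python for brevity; the question is Objective-C, but the logic ports directly), using the bar and plate sizes from the question and assuming an unlimited supply of each plate and a per-side weight that is a multiple of 2.5:

def plates_per_side(target, bar=45.0, plates=(45, 35, 25, 10, 5, 2.5)):
    # Weight each side must carry: (total - bar) / 2.
    per_side = (target - bar) / 2
    loadout = []
    # Greedy: repeatedly take the largest plate that still fits.
    for plate in sorted(plates, reverse=True):
        while per_side >= plate:
            loadout.append(plate)
            per_side -= plate
    return loadout

print(plates_per_side(115))   # [35]
print(plates_per_side(325))   # [45, 45, 45, 5]

Because every denomination here is a multiple of 2.5, the greedy choice always lands exactly; for odd plate sets or limited plate counts you would fall back to the DP formulation in the links.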


How to scale % change based features so that they are viewed "similarly" by the model

I have some features that are zero-centered values and are supposed to represent the change between a current value and a previous value. Generally speaking, I believe there should be some symmetry between these values, i.e. there should be roughly as many positive values as negative values, and they should operate on roughly the same scale.
When I try to scale my samples using MaxAbsScaler, I notice that the negative values for this feature get almost completely drowned out by the positive values, and I don't really have any reason to believe the positive values should be that much larger than the negative ones.
What I've noticed is that, fundamentally, the magnitudes of percentage-change values are not symmetric. For example, a value that goes from 50 to 200 is a 300.0% change, while a value that goes from 200 to 50 is a -75.0% change. I understand why that is, but in terms of my feature I don't see why a change from 50 to 200 should be treated as several times more "important" than the same change in the opposite direction.
Given this, I do not believe there is any reason to want my model to treat a change from 200 to 50 as a "lesser" change than a change from 50 to 200. Since I am trying to represent the change of a value over time, I want to express it so that my model can "visualize" the change of a value over time the same way a person would.
Right now I am handling this with the following formula:
def symmetric_change(prev, curr):
    # Use the larger of the two values as the basis, so the magnitude is the
    # same in both directions: 50 -> 200 gives 3.0 (+300%), 200 -> 50 gives -3.0 (-300%).
    if curr > prev:
        return curr / prev - 1
    else:
        return (prev / curr - 1) * -1
This does seem to treat changes in value similarly regardless of direction; from the example above, 50 -> 200 gives +300% and 200 -> 50 gives -300%. Is there a reason why I shouldn't be doing this? Does this accomplish my goal? Has anyone run into similar dilemmas?
This is a discussion question, and it's difficult to know the right answer without knowing the physical meaning of your feature. You are calculating a percentage change, and a percentage change depends on the original value. I am not a big fan of a custom formula whose only purpose is to make percent change symmetric, since it adds a layer of complexity that is, in my opinion, unnecessary.
If you want the change to be symmetric, you can try a direct difference or a factor change. There is nothing to suggest that a difference or a factor change is less correct than a percent change. So, depending on the physical meaning of your feature, each of the following symmetric measures would be a correct way to measure change (a small sketch follows the list):
Difference change -> 50 to 200 yields 150, 200 to 50 yields -150
Factor change with logarithm -> 50 to 200 yields log(4), 200 to 50 yields log(1/4) = -log(4)
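A minimal sketch of those two measures in Python (the function names are mine, purely for illustration):

import math

def difference_change(prev, curr):
    # Symmetric by construction: 50 -> 200 gives 150, 200 -> 50 gives -150.
    return curr - prev

def log_factor_change(prev, curr):
    # log(curr / prev): 50 -> 200 gives log(4), 200 -> 50 gives -log(4).
    return math.log(curr / prev)

The log version has the extra property that a move and its exact reversal sum to zero, which is often what "same change, opposite direction" is meant to capture.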
You're having trouble because you haven't translated the abstract question into your paradigm.
"... my model can "visualize" ... same way a person would."
In this paradigm, you need a metric for "same way". There is no such empirical standard. You've dropped both of the simple standards -- relative error and absolute error -- and you posit some inherently "normal" standard that doesn't exist.
Yes, we run into these dilemmas: choosing a success metric. You've chosen a classic example from "How To Lie With Statistics"; depending on the choice of starting and finishing proportions and the error metric, you can "prove" all sorts of things.
This brings us to your central question:
Does this accomplish my goal?
We don't know. First of all, you haven't given us your actual goal. Rather, you've given us an indefinite description and a single example of two data points. Second, you're asking the wrong entity. Make your changes, run the model on your data set, and examine the properties of the resulting predictions. Do those properties satisfy your desired end result?
For instance, given your posted data points, (200, 50) and (50, 200), how would other examples fit in, such as (1, 4), (1000, 10), etc.? If you're simply training on the proportion of change over the full range of values involved in that transaction, your proposal is just what you need: use the higher value as the basis. Since you didn't post any representative data, we have no idea what sort of distribution you have.

Nested IIf statement to round to nickels! Will it work? (a.k.a. penny rounding)

I have searched the web over and have found very little dealing with this. I wanted to know if there are any deeper issues I am unaware of with getting the results this way. The [total] variable represents the calculated total owing; PayAmt represents what the customer will pay when paying cash only.
PayAmt: FormatCurrency(
IIf(Right(Str(Round([total],2)),1)="1",[total]-1,
IIf(Right(Str([total]),1)="2",[total]-2,
IIf(Right(Str([total]),1)="3",[total]+2,
IIf(Right(Str([total]),1)="4",[total]+1,
IIf(Right(Str([total]),1)="6",[total]-1,
IIf(Right(Str([total]),1)="7",[total]-2,
IIf(Right(Str([total]),1)="8",[total]+2,
IIf(Right(Str([total]),1)="9",[total]+1,[total])))))))))/100
On its face this does give the expected results; I am just not sure whether I should approach the issue this way:
0.98 - 1.02 rounds to 1.00
1.03 - 1.07 rounds to 1.05
Having not seen anything like this, I suspect it can't be this easy. I just don't know why.
Thanks for any help!
Never use string handling for numbers.
Here is an article on serious rounding, including all the necessary code for any value and data type in VBA:
Rounding values up, down, by 4/5, or to significant figures
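To show the arithmetic the article is getting at, here is a minimal sketch of numeric nickel rounding (in Python rather than Access/VBA, purely to illustrate the divide-round-multiply idea):

from decimal import Decimal, ROUND_HALF_UP

def round_to_nickel(total):
    # Round to the nearest 0.05 numerically instead of inspecting the last
    # character of a formatted string. Decimal avoids binary floating-point
    # surprises with currency values.
    amount = Decimal(str(total))
    steps = (amount / Decimal("0.05")).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
    return steps * Decimal("0.05")

print(round_to_nickel(1.02))   # 1.00
print(round_to_nickel(1.03))   # 1.05

ROUND_HALF_UP is chosen here to match the 1.03 -> 1.05 behaviour in the question; note that VBA's built-in Round uses banker's rounding, which is one reason the linked article exists.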

What algorithm can I use to turn a drunkard's walk into a correlated RNG?

I'm a novice programmer (the only reason I say this is that I'm not super familiar with all the terms yet) and I'm trying to make walls generate with respect to the wall before them. I've posted a question about this on here before:
Randomly generated tunnel walls that don't jump around from one to the next
and sort of got the answer. What I was mainly looking for was the for loop that was used (I think). The problem is I didn't know how to implement it properly without getting errors.
My problem ended up being: "I couldn't figure out how to incorporate this into it. I have 41 walls altogether that I'm using, and the walls are named Left1 and Right1. I had something like this:
CGFloat Left1 = 14;
for (int i = 0; i < 41; i++) {
    CGFloat offset = (CGFloat)arc4random_uniform(2 * 100) - 100;
    Left1 += offset;
    Right1 = Left1 + 100;
}
but it was telling me, as a yellow warning, that Local declaration of 'Left1' hides instance variable, and then, as a red error, Assigning to 'UIImageView *__strong' from incompatible type 'float'. I'm not sure how to fix this,"
and I wasn't sure how to fix it. I realize (I think) that arc4random and arc4random_uniform are pretty much the same thing with slight differences, as far as I know, but not the difference I'm looking for.
As I said before, I'm pretty novice, so any example would be really helpful, especially one using the variables I'm trying to use. Thank you.
You want a "hashing" function, and preferably a "cryptographic" one because they tend to be significantly higher quality - at the expense of requiring additional CPU resources. But on modern hardware the extra CPU power usually isn't a problem.
The basic idea is you can give any data to the function, and it will spit out a completely random result, but always the same result if you provide the same input.
Have a read up on them here:
http://en.wikipedia.org/wiki/Hash_function
http://en.wikipedia.org/wiki/Cryptographic_hash_function
There are hundreds of different algorithms in common use; which one is best will depend on what you need.
Personally I recommend SHA-256. A quick search for "sha256 ios" here on Stack Overflow will show you how to compute one with the CommonCrypto library. The gist is that you create an NSString or NSData object containing every offset, then run the whole thing through SHA-256. The result is, for all practical purposes, a random 256-bit number.
If 256 bits is too much, just cut it up. For example, you could take just the first 16 bits of the number, and you will have an equally random 16-bit number.
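A small sketch of that idea using Python's hashlib (not the CommonCrypto call you would use on iOS); the seed string, function name, and the 41-wall loop are illustrative assumptions taken from the question:

import hashlib

def wall_offset(seed: str, index: int, span: int = 200) -> int:
    # Hash the seed plus the wall index: the same inputs always give the
    # same offset, but the sequence of offsets looks random.
    digest = hashlib.sha256(f"{seed}:{index}".encode()).digest()
    value = int.from_bytes(digest[:2], "big")    # first 16 bits, 0..65535
    return value % span - span // 2              # map into [-100, 100)

left = 14.0
for i in range(41):
    left += wall_offset("tunnel-seed", i)
    right = left + 100

Regenerating with the same seed reproduces exactly the same tunnel, which is the property a plain arc4random_uniform() call doesn't give you.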

Binary floating point Subtraction

I was solving a binary subtraction and got stuck at a point: I am unable to understand how to subtract a larger number from a smaller one.
My operands are 0.111000 * 2^-3 and 1.0000 * 2^-3.
I have easily subtracted the fractional part, but when it comes to the MSB I don't know what to do. Where should I borrow from to perform the operation? I know that subtracting 1 from 0 requires a borrow and turns the sign bit negative, but storage is not my concern here; my problem is with the operation itself. Could anyone explain what the result is and how to perform it?
Thanks
Very late, but I had the same question, so I'm putting this here for others with the same problem.
If your problem lies only in the fraction part, you can avoid borrowing altogether by using two's complement. Take the question's case of subtracting the larger value from the smaller one:
  0.111
- 1.000
-------
Step 1: Prepend a sign bit to both numbers and replace the subtraction by adding the two's complement of 1.000 (invert 0 1.000 and add one at the last place, giving 1 1.000):
  0 0.111
+ 1 1.000
---------
  1 1.111
The sign bit of the sum is 1, so the result is negative. To read its magnitude, convert back from two's complement to sign-magnitude by inverting and adding one at the last place:
1 1.111 -> 0.001, so the result is -0.001.
Here is a great example that may help you:
http://sandbox.mc.edu/~bennet/cs110/pm/sub.html
If you're doing this programmatically, you can cheat and check which operand is larger before you perform the subtraction (which is what I do).
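A tiny sketch of that "compare first, then subtract" shortcut, assuming the operands arrive as binary-fraction strings like the ones in the question (Python; the helper names are mine):

def to_fixed(s: str, frac_bits: int) -> int:
    # '1.000' with frac_bits=3 becomes the integer 0b1000 = 8.
    whole, frac = s.split(".")
    return int(whole + frac.ljust(frac_bits, "0"), 2)

def subtract_binary(a: str, b: str) -> str:
    # Scale both fractions to integers, subtract, and re-attach the sign,
    # instead of wrestling with borrows across the binary point.
    frac_bits = max(len(a.split(".")[1]), len(b.split(".")[1]))
    diff = to_fixed(a, frac_bits) - to_fixed(b, frac_bits)
    sign = "-" if diff < 0 else ""
    bits = format(abs(diff), "b").rjust(frac_bits + 1, "0")
    return f"{sign}{bits[:-frac_bits]}.{bits[-frac_bits:]}"

print(subtract_binary("0.111", "1.000"))   # -0.001
print(subtract_binary("1.000", "0.111"))   # 0.001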

Which improvements can be made to the Anytime Weighted A* algorithm?

First, for those of you who don't know: an anytime algorithm is an algorithm that takes as input the amount of time it is allowed to run, and within that time returns the best solution it can find.
Weighted A* is the same as A* with one difference in the f function
(where g is the path cost up to the node, and h is the heuristic estimate of the remaining cost from the node to a goal):
Original: f(node) = g(node) + h(node)
Weighted: f(node) = (1 - w) * g(node) + w * h(node)
My anytime algorithm runs Weighted A* with the weight w decreasing from 1 to 0.5 until it reaches the time limit.
My problem is that most of the time it takes a long time to reach a solution; given something like 10 seconds it usually doesn't find a solution at all, while other algorithms like anytime beam search find one in 0.0001 seconds.
Any ideas what to do?
If I were you I'd throw the unbounded heuristic away. Admissible heuristics are much better, because with an admissible heuristic a weighted A* solution comes with a guarantee: the weight bounds how far the solution you found can be from the length of an optimal solution.
A big problem when implementing A* derivatives is the data structures. When I implemented a bidirectional search, just changing from array lists to a combination of hash-augmented priority queues, with array lists built on demand, cut the runtime cost by three orders of magnitude - literally.
The main problem is that most of the papers only give pseudo-code for the algorithm using set logic; it's up to you to figure out how to represent those sets in your code. Don't be afraid of using multiple ADTs for a single list, i.e. your open list. I'm not 100% sure about Anytime Weighted A*; I've done other derivatives such as Anytime Dynamic A* and Anytime Repairing A*, but not AWA* itself.
Another issue is that when the weight on g is set too low, it can sometimes take far longer to find any solution than it would with a higher weight on g. A common pitfall is forgetting to check your closed list for duplicate states, which leaves you stuck in a loop (an infinite one if the weight on g is reduced to 0). I'd try starting with something reasonably higher than 0 if you're getting quick results with a beam search.
Some pseudo-code would likely help here! Anyhow, these are just my thoughts on the matter; you may have solved it already - if so, good on you :)
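For concreteness, here is a minimal weighted A* sketch in Python using the question's f = (1 - w)*g + w*h form, a binary heap for the open list, and the closed-set duplicate check mentioned above; neighbors(), h(), and the state representation are assumptions for illustration:

import heapq
from itertools import count

def weighted_a_star(start, goal, neighbors, h, w=0.8):
    # neighbors(n) yields (next_state, step_cost) pairs; h(n) is the heuristic.
    tie = count()                  # tie-breaker so the heap never compares states
    open_heap = [(w * h(start), next(tie), 0.0, start, [start])]
    closed = set()                 # duplicate-state check
    while open_heap:
        _, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in closed:
                g2 = g + cost
                f2 = (1 - w) * g2 + w * h(nxt)
                heapq.heappush(open_heap, (f2, next(tie), g2, nxt, path + [nxt]))
    return None                    # no solution found

An anytime driver would call this in a loop with w stepping down from 1.0 toward 0.5, keep the best (cheapest) solution found so far, and stop when the time budget runs out.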
Beam search is not complete, since it prunes unfavorable states, whereas A* search is complete. Depending on the problem you are solving, if incompleteness does not prevent you from finding a solution (usually many correct paths exist from origin to destination), go with beam search; otherwise stay with AWA*. You can also run both in parallel if there are sufficient hardware resources.
