I have a function that I want to run x percent of the time. Example: when I tap a button, I want to run a function x percent of the time (the user enters 0.01, and the function will run 1% of the time the button is tapped). Does anyone know how to do this in Swift?
You can use a random function to determine whether a particular action should be undertaken:
func possiblyDoSomething() {
    if Int.random(in: 1...100) == 1 {
        actuallyDoSomething() // will execute with a 0.01 chance
    }
}
Int.random(in: 1...100) will choose a random number in the range from 1 to 100, so any single number has a 0.01 probability of appearing. It doesn't matter which number you compare against (I chose to check for 1 arbitrarily).
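Since the user supplies an arbitrary probability such as 0.01, comparing a uniform random number in [0, 1) against it generalizes the idea (in Swift you'd use Double.random(in: 0..<1) the same way). A minimal sketch in Python, with a hypothetical function name:

```python
import random

def possibly_do_something(p, action):
    # random.random() is uniform on [0, 1), so the comparison
    # succeeds with probability p on average.
    if random.random() < p:
        action()
```

Edge cases fall out naturally: p = 1.0 always fires, p = 0.0 never does.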
Related
I am having an issue with the results I am getting from performing value iteration: the numbers increase toward infinity, so I assume I have a problem somewhere in my logic.
Initially I have a 10x10 grid, some tiles with a reward of +10, some with a reward of -100, and some with a reward of 0. There are no terminal states. The agent can perform 4 non-deterministic actions: move up, down, left, and right. It has an 80% chance of moving in the chosen direction, and a 20% chance of moving perpendicularly.
My process is to loop over the following:
For every tile, calculate the value of the best action from that tile
For example to calculate the value of going north from a given tile:
self.northVal = 0
self.northVal += (0.1 * grid[x-1][y])  # 10% slip to one side
self.northVal += (0.1 * grid[x+1][y])  # 10% slip to the other side
self.northVal += (0.8 * grid[x][y+1])  # 80% intended move north
For every tile, update its value to be: the initial reward + ( 0.5 * the value of the best move for that tile )
Check to see if the updated grid has changed since the last loop, and if not, stop the loop, as the numbers have converged.
I would appreciate any guidance!
What you're trying to do here is not value iteration: value iteration works with a state-value function, where you store one value per state. This means that in value iteration you don't keep an estimate for each (state, action) pair.
Please refer to the 2nd edition of the Sutton and Barto book (Section 4.4) for an explanation, but here's the algorithm for quick reference. Note the initialization step: you only need a vector storing the value of each state.
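For concreteness, here is a minimal value-iteration sketch in Python on a tiny made-up 1-D "grid" (the states, rewards, and deterministic left/right actions are hypothetical, chosen only to show the backup): each sweep replaces every state's value with the best action's one-step return, and the discount factor below 1 keeps the values from growing without bound.

```python
# Hypothetical 1-D chain of 5 states; reward[s] is received on entering s.
reward = [0.0, 0.0, 0.0, 0.0, 10.0]
gamma = 0.5          # discount factor < 1 keeps the values finite
n = len(reward)

V = [0.0] * n        # one value per STATE (the initialization step)
for sweep in range(100):
    new_V = []
    for s in range(n):
        # Deterministic actions for brevity: step left or step right.
        neighbours = [max(s - 1, 0), min(s + 1, n - 1)]
        # Bellman optimality backup: value of the best available action.
        new_V.append(max(reward[s2] + gamma * V[s2] for s2 in neighbours))
    if max(abs(a - b) for a, b in zip(new_V, V)) < 1e-9:
        V = new_V
        break            # converged
    V = new_V
```

Note the values settle at finite numbers (the state next to the +10 tile converges to 20 = 10 / (1 - 0.5)); if yours climb toward infinity, the update is accumulating reward instead of replacing the old estimate with the backed-up one.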
When a function is run, it saves two values:
os.clock() + int
os.clock()
so example values:
2795.100
2790.100
(5 second difference)
So the moment this function is called, I need to draw a bar; at that moment it would be drawn at 0% width, when os.clock() returns 2792.600 it should be at 50% width, and when os.clock() returns 2795.100 it should be at 100% width.
I am trying to find the math logic to:
based on the difference between these two values, decide how many pixels one % would be, so I can use os.clock() to calculate the bar length
I have been struggling with this, but I have no useful code to show. In the example values there's a 5 second difference between the two values, and I want to draw a bar that starts at 0% width and grows to 100% over those 5 seconds.
That's just basic math.
local interval_start = 2790.100
local interval_end = 2795.100
local current_value = 2792
local bar_max_width = 350
-- how far current value is from start / entire length of interval of allowed values
local fill_percentage = ((current_value - interval_start) / (interval_end - interval_start))
local fill_width = bar_max_width * fill_percentage
print(fill_percentage, fill_width)
Feed one of those values to whatever drawing facility you use.
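The same normalization works for any clock source. Translating the Lua above into a quick Python sketch so the formula can be checked (the function name and the clamp are my additions; clamping keeps the bar in bounds if os.clock() drifts past the interval):

```python
def bar_width(start, finish, now, max_width):
    # Fraction of the interval that has elapsed...
    fraction = (now - start) / (finish - start)
    # ...clamped to [0, 1] so the bar never draws outside its bounds.
    fraction = max(0.0, min(1.0, fraction))
    return max_width * fraction
```

With the question's numbers, bar_width(2790.100, 2795.100, now, 350) goes from 0 at the start of the interval to 350 at the end, passing 175 at the midpoint.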
I am familiar with mathematics and with what the % (modulo) operator gives for certain values. However, I am following along with a Swift code lecture, and the instructor wants to return a value somewhere between 0 and half of the height of the view. He sets up the equation as:
var offSet = arc4random() % UInt32(self.frame.size.height / 2)
I must be missing something. Wouldn't arc4random() give a number between 0 and 1, and then performing % on the height (roughly 700 pixels) always give 0? Yet each time the code is run, it offsets by a random amount somewhere between 0 and half the height of the screen. If I change % to *, the program crashes.
Ideas?
Read the arc4random() man page:
"arc4random() function returns pseudo-random numbers in the range of 0 to (2**32)-1"
So your random number will be something potentially large, and the modulo will bring it into the range 0 to (height/2 − 1).
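A quick Python illustration of how modulo folds a large random value into a fixed range (using 350 as a stand-in for UInt32(self.frame.size.height / 2)):

```python
import random

half_height = 350  # stand-in for UInt32(self.frame.size.height / 2)
for _ in range(1000):
    big = random.randrange(2 ** 32)  # like arc4random(): 0 .. 2**32 - 1
    offset = big % half_height       # remainder is always 0 .. 349
    assert 0 <= offset < half_height
```

That is the whole trick: the remainder after dividing by n can never reach n, so any huge input lands in 0..(n−1).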
I'm playing with an optimized game of life implementation in swift/mac_os_x. First step: randomize a big grid of cells (50% alive).
code:
for i in 0..<768 {
    for j in 0..<768 {
        let r = Int(arc4random_uniform(100))
        let alive = (aliveOdds > r)
        self.setState(alive, cell: Cell(tup: (i, j)), cells: aliveCells)
    }
}
I expect a relatively uniform randomness. What I get has definite patterns:
Zooming in a bit on the lower left:
(I've changed the color to black on every 32nd row and column, to see if the patterns lined up with any power of 2.)
Any clue what is causing the patterns? I've tried:
replacing arc4random with rand().
adding arc4random_stir() before each arc4random_uniform call
shifting the display (to ensure the pattern is in the data, not a display glitch)
Ideas on next steps?
You cannot hit the period of arc4random, or get that many regular non-uniform clusters out of it, on any displayable set (the period is 16*(2**31) − 1).
These are definitely signs of corrupted or uninitialized memory. For example, you are initializing a 768×768 field, but you are showing us a 1024×something field.
Try replacing Int(arc4random_uniform(100)) with just 100 to see.
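As a sanity check on what a healthy uniform source should produce: at the question's grid size, the alive fraction lands extremely close to aliveOdds, with no spatial structure. A Python sketch of the same sampling loop (seed and variable names are mine):

```python
import random

random.seed(12345)   # fixed seed so the check is reproducible
alive_odds = 50      # 50% alive, as in the question
alive = 0
for i in range(768):
    for j in range(768):
        r = random.randrange(100)  # analogue of Int(arc4random_uniform(100))
        if alive_odds > r:
            alive += 1

fraction = alive / (768 * 768)
# With ~590k samples, fraction should sit within a tiny margin of 0.5;
# visible banding at power-of-2 strides points at the buffer, not the RNG.
```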
I'm making a clone of that old game Simon (a.k.a. Genius, in Brazil), the one with the four coloured buttons which the player needs to press following a sequence of colors.
For testing, the interface has 4 coloured buttons.
I created an array for the button outlets, for easy access:
var buttonArray:[UIButton] = [self.greenButton, self.yellowButton, self.redButton, self.blueButton]
Also, created another array to store the sequence of colors
var colors:[Int] = []
When a game starts, it calls a function which picks a random number from 0 to 3 (an index into buttonArray) and adds this number to the colors array.
After adding a new color to the color sequence, the app needs to show the sequence to the user, so he can repeat it.
For that, it calls the playMoves function, which uses a for loop to run through the colors array and change the alpha of each button, simulating a 'blink'.
func playMoves(){
    let delay = 0.5 * Double(NSEC_PER_SEC)
    let time = dispatch_time(DISPATCH_TIME_NOW, Int64(delay))
    for i in self.colors {
        self.buttonArray[i].alpha = 0.2
        dispatch_after(time, dispatch_get_main_queue(), {
            self.buttonArray[i].alpha = 1
        })
    }
}
It changes the alpha of the button to 0.2 and then, after half a second, returns the alpha to 1. I was using dispatch_after, passing the 0.5 seconds, and in the code block it restores the alpha, as you guys can see in the code above.
On the first run it appears to work correctly, but when the colors array has 2 or more items, the loop blinks all the buttons at the same time, even though it has a 0.5 sec delay.
It's probably some dumb mistake I'm making, but I'm clueless on the moment.
I would appreciate very much all the help!
Thanks!
All of these dispatch_after calls are scheduled for nearly the same time, making the buttons appear to blink at the same time. There are a couple of approaches that would solve this:
You could, for example, adjust the when parameter (the dispatch_time_t parameter) for each button so it is offset from the original time (such that delay is, effectively, i * 0.5 * Double(NSEC_PER_SEC)).
You could also use key-frame animation, but I'd suggest you first try fixing the delay in your dispatch_after approach.
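The key change is giving each blink its own deadline rather than computing one `time` before the loop and reusing it. Sketching just the scheduling arithmetic in Python (the sequence below is hypothetical; in Swift each offset would go into its own dispatch_time call):

```python
# Hypothetical replay sequence of button indices.
colors = [2, 0, 3, 1]
step = 0.5  # seconds between blinks

# One deadline per move, offset by its position in the sequence,
# instead of every dispatch_after sharing the same fire time.
dim_times = [i * step for i in range(len(colors))]        # when to dim
restore_times = [t + step for t in dim_times]             # when to restore
```

Because the deadlines are staggered (0.0 s, 0.5 s, 1.0 s, …), the blinks play back one after another instead of all at once.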