using arc4random does not place images as "random" as I would like - ios

I would like the images to appear more randomly than they do with this code:
// placing images on the screen
- (void)PlaceImage {
    RandomImagePosition = arc4random() % 1000;
    Image.center = CGPointMake(570, RandomImagePosition);
    // the higher the number (570), the farther to the right the platforms appear
}
They appear in different positions, but most of the time towards the top of the screen. Only occasionally is an image placed towards the bottom of the screen. I would like there to be more randomness.

Use arc4random_uniform to generate a random integer in a specified range. Never use arc4random() % something; that construction is indeed biased and will produce skewed results.
If you still have issues with "randomness" after that, look carefully at how you are using your random value. Notably, people's perception of "random" is often quite different from mathematical randomness: for instance, people expect "random" coin flips to switch between heads and tails much more frequently than true randomness actually does. So, to make something perceptually random, you may have to fudge the output a bit (e.g. to reduce the chance that a value repeats twice in a row).
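For instance, here is a minimal Python sketch of that kind of fudging; the class name, the upper/10 closeness threshold, and the single re-roll are all arbitrary illustrative choices, not anything standard:

import random

class PerceptualRandom:
    """Uniform values in [0, upper), re-rolled once when a draw lands
    within upper/10 of the previous one. Slightly less random in the
    mathematical sense, but it "feels" more random to a player."""

    def __init__(self, upper):
        self.upper = upper
        self.last = None

    def next(self):
        v = random.randrange(self.upper)
        if self.last is not None and abs(v - self.last) < self.upper // 10:
            v = random.randrange(self.upper)  # one re-roll reduces near-repeats
        self.last = v
        return v

gen = PerceptualRandom(1000)
print([gen.next() for _ in range(10)])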

You are likely experiencing modulo bias and should be using arc4random_uniform(700). From man arc4random:
arc4random_uniform() will return a uniformly distributed random number less than upper_bound. arc4random_uniform() is recommended over constructions like ``arc4random() % upper_bound'' as it avoids "modulo bias" when the upper bound is not a power of two.
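To see the bias concretely, here is a small Python simulation using a hypothetical 3-bit generator, chosen so the skew is obvious (with a 32-bit generator and a small upper bound the bias is real but tiny):

import random
from collections import Counter

# A "3-bit generator" reduced mod 3: outcomes 0,3,6 -> 0; 1,4,7 -> 1;
# 2,5 -> 2. So 2 comes up only 2/8 of the time instead of 1/3.
counts = Counter(random.randrange(8) % 3 for _ in range(300_000))
print(counts)  # roughly {0: 112500, 1: 112500, 2: 75000}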

Related

How random is arc4random (mac os x)? (or what am I doing wrong?)

I'm playing with an optimized Game of Life implementation in Swift on Mac OS X. First step: randomize a big grid of cells (50% alive).
code:
for(var i = 0; i < 768; i++){
    for(var j = 0; j < 768; j++){
        let r = Int(arc4random_uniform(100))
        let alive = (aliveOdds > r)
        self.setState(alive, cell: Cell(tup: (i, j)), cells: aliveCells)
    }
}
I expect a relatively uniform randomness. What I get has definite patterns:
Zooming in a bit on the lower left:
(I've changed the color to black on every 32nd row and column, to see if the patterns lined up with any power of 2.)
Any clue what is causing the patterns? I've tried:
replacing arc4random with rand().
adding arc4random_stir() before each arc4random_uniform call
shifting the display (to ensure the pattern is in the data, not a display glitch)
Ideas on next steps?
You cannot hit the period of arc4random, or get that many regular non-uniform clusters out of it, on any displayable set (16*(2**31) - 1).
These are definitely signs of corrupted/uninitialized memory. For example, you are initializing a 768x768 field, but you are showing us a 1024xsomething field.
Try replacing Int(arc4random_uniform(100)) with the constant 100 to see whether the patterns persist.
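For comparison, here is a quick NumPy sketch of the same 50% randomization; rendering a grid produced this way shows no banding, which supports blaming the surrounding code rather than the generator:

import numpy as np

# 768x768 grid, each cell alive with probability aliveOdds/100,
# mirroring the Swift loop in the question.
alive_odds = 50
grid = np.random.randint(0, 100, size=(768, 768)) < alive_odds
print(grid.mean())  # ~0.5, with no visible spatial structure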

How to count red blood cells/circles in Octave 3.8.2

I have an image with a group of cells and I need to count them. I did a similar exercise using bwlabel; however, this one is a bit more challenging because there are some little cells that I don't want to count. In addition, some cells are on top of each other. I've seen some MATLAB examples online, but they all involved functions that aren't available in Octave. Do you have any ideas how to separate the overlapping cells?
Here's the image:
To make it clearer: Please help me count the number of red blood cells (which have a circular shape) like so:
The image is in grayscale but I think you can distinguish which ones are red blood cells. They have a distinctive biconcave shape... Everything else doesn't matter. But to be more specific here is an image with all the things that I want to ignore/discard/not count highlighted in red.
The main issue is the overlapping of cells.
The following is an ImageJ macro to do this (ImageJ is free software too). I would recommend you use ImageJ (or Fiji) to explore this type of thing. Then, if you really need it, you can write an Octave program to do it.
run ("8-bit");
setAutoThreshold ("Default");
setOption ("BlackBackground", false);
run ("Convert to Mask");
run ("Fill Holes");
run ("Watershed");
run ("Analyze Particles...", "size=100-Infinity exclude clear add");
This approach gives this result:
Its point-and-click equivalent is:
Image > Type > 8-bit
Image > Adjust > Threshold
select "Default" and untick "dark background" on the threshold dialogue. Then click "Apply".
Process > Binary > Fill holes
Process > Binary > Watershed
Analyze > Analyze particles...
Set "100-Infinity" as the range of valid particle sizes in the "Analyze Particles" dialogue
In ImageJ, if you have a binary image, watershed actually performs the distance transform and then the watershed.
Octave has all the functions above except watershed (I plan on implementing it soon).
If you can't use ImageJ for your problem (why not? It can run in headless mode too), then an alternative is to get the area of each object and, if it is too high, assume it is multiple cells. It kind of depends on your problem and whether you can come up with a value for the average cell size (and its error).
Another alternative is to measure the roundness of each object identified. Cells that overlap will be less round, so you can identify them that way.
It depends on how much error you are willing to accept in your program's output.
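If you do end up scripting it, here is a rough sketch of the same threshold, fill-holes, watershed, size-filter pipeline in Python with scikit-image (a stand-in for the missing Octave watershed; 'cells.png', the dark-cells assumption, and the area threshold are all placeholders to tune):

from scipy import ndimage as ndi
from skimage import io, filters, measure, morphology, segmentation

img = io.imread('cells.png', as_gray=True)  # placeholder filename

# Threshold (assuming cells are darker than the background; flip the
# comparison otherwise), then fill holes, as the macro does.
mask = img < filters.threshold_otsu(img)
mask = ndi.binary_fill_holes(mask)

# Distance transform + watershed to split touching cells, like
# ImageJ's binary watershed.
distance = ndi.distance_transform_edt(mask)
markers = measure.label(morphology.local_maxima(distance))
labels = segmentation.watershed(-distance, markers, mask=mask)

# Size filter, like "size=100-Infinity" in Analyze Particles.
count = sum(1 for r in measure.regionprops(labels) if r.area >= 100)
print(count, "cells")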
This only helps with the "noise", but why not continue using bwlabel and try bwareaopen to get rid of the small objects? The cells seem pretty large, so just set some size threshold to remove the small objects: http://www.mathworks.com/matlabcentral/answers/46398-removing-objects-which-have-area-greater-and-lesser-than-some-threshold-areas-and-extracting-only-th
As for overlapping cells, maybe set an upper bound for the size of a single cell. Then, when two cells overlap, the program can classify the object as "greater than one cell" or something like that. It at least acknowledges the shape, even if it can't determine exactly how many cells are there.

spatial set operations in Apache Spark

Has anyone been able to do spatial operations with Apache Spark? For example, the intersection of two sets that contain line segments?
I would like to intersect two sets of lines.
Here is a 1-dimensional example:
The two sets are:
A = {(1,4), (5,9), (10,17),(18,20)}
B = {(2,5), (6,9), (10,15),(16,20)}
The resulting intersection would be:
intersection(A,B) = {(2,4), (5,5), (6,9), (10,15), (16,17), (18,20)}
A few more details:
- sets have ~3 million items
- the lines in a set cover the entire range
Thanks.
One approach to parallelize this would be to create a grid of some size, and group line segments by the grids they belong to.
So for a grid of size n, you could flatMap pairs of coordinates (the endpoints of line segments) to create (gridId, ((x,y), (x,y))) key-value pairs.
The segment (1,3), (5,9) would be mapped to ((1,1), ((1,3), (5,9))) for a grid size of 10 - that line segment only exists in grid "slot" 1,1 (the grid cell from 0-10, 0-10). If you chose a smaller grid size, the line segment would be flatMapped to multiple key-value pairs, one for each grid slot it belongs to.
Having done that, you can groupByKey, and for each group, calculate intersections as normal.
It wouldn't exactly be the most efficient way of doing things, especially if you've got long line segments spanning multiple grid "slots", but it's a simple way of splitting the problem into subproblems that'll fit in memory.
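A minimal PySpark sketch of that grouping step, in one dimension to match the example above (the grid width and the sample segments are made up):

from pyspark import SparkContext

sc = SparkContext(appName="grid-partition-sketch")
GRID = 10  # made-up cell width

# 1-D segments tagged with their source set.
segs = sc.parallelize([('A', (1, 4)), ('A', (5, 9)),
                       ('B', (2, 5)), ('B', (6, 9))])

def to_cells(tagged):
    """Emit one (cell_id, tagged_segment) pair per grid cell touched."""
    tag, (lo, hi) = tagged
    return [(c, (tag, (lo, hi))) for c in range(lo // GRID, hi // GRID + 1)]

# Segments that overlap the same cell land in the same group, where
# they can be intersected locally.
groups = segs.flatMap(to_cells).groupByKey()
print(groups.mapValues(list).collect())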
You could solve this with a full cartesian join of the two RDDs, but this would become incredibly slow at large scale. If your problem is smallish, sure, this is an easy and cheap approach. Just emit the overlap, if any, between every pair in the join.
To do better, I imagine that you can solve this by sorting the sets by start point, and then walking through both at the same time, matching one's current interval versus another and emitting overlaps. Details left to the reader.
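In case it helps, here is a minimal single-machine sketch of that sort-and-sweep; a distributed version would run the same walk per partition:

def intersect(a, b):
    """Two-pointer sweep over interval lists sorted by start point."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo, hi = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if lo <= hi:
            out.append((lo, hi))
        # Advance whichever interval ends first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

A = [(1, 4), (5, 9), (10, 17), (18, 20)]
B = [(2, 5), (6, 9), (10, 15), (16, 20)]
print(intersect(A, B))
# [(2, 4), (5, 5), (6, 9), (10, 15), (16, 17), (18, 20)]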
You can almost solve this by first mapping each tuple (x,y) in A to something like ((x,y),'A'), doing the same for B, and then taking the union and sorting by the x values. Then you can use mapPartitions to walk a stream of labeled segments and implement your algorithm.
This doesn't quite work, though, since you would miss overlaps between values at the ends of partitions. I can't think of a good simple way to take care of that off the top of my head.

How to move image with low values?

The problem is simple: I want to move (and later, be able to rotate) an image. For example, every time I press the right arrow key, I want the image to move 0.12 pixels to the right, and every time I press the left arrow key, I want the image to move 0.12 pixels to the left.
Now, I have multiple solutions for this:
1) Simply add the incremental value, i.e.:
image.x += 0.12;
This is of course assuming that we're going to the right.
2) I multiply the value of a single increment by the number of times I have already moved in this particular direction, plus 1, like this:
var result:Number = 0.12 * (numberOfTimesWentRight+1);
image.x = result;
Both of these approaches work but produce similar, yet subtly different, results. If we add some kind of button component that simply resets the x and y coordinates of the image, you will see that with the first approach the numbers don't add up correctly.
It goes from .12, .24, .359999, .475, etc.
But with the second approach it works well. (It's pretty obvious why, though: it seems += operations on Numbers are not really precise.)
Why not use the second approach then? Well, I want to rotate the image as well. This will work for the first attempt, but after that the image will jump around. Why? The second approach never takes the current position of the image into account. So if the origin point shifts a bit down or up because you rotated your image, and THEN you try to move the image again, it will move to the same position as if you hadn't rotated before.
Alright, to make this short:
How can I reliably move, scale and rotate images by 1/10th of a pixel?
Short answer: I don't know! You're fighting with floating point math!
Luckily, I have a workaround, if you don't mind.
You store the location (x and y) of the image in a separate variable... at a larger scale. Such as 100x. So 123.45 becomes 12345, and you then divide by 100 to set the attribute that Flash uses for display.
Yes, there are limits to number sizes too, but if you're willing to accept some error rate, and the fact that you'll be limited to, I dunno, a million pixels in each direction, you can fit it in a regular int. The only rounding error you will encounter is a single rounding error when you divide by 100 (or whatever factor you used). So instead of the compound rounding error you described (0.12 * 4 = 0.475), you should see things like 0.47999999. Which doesn't matter, because it's, well, so small.
To expand on @Pimgd's answer a bit, you're probably hitting a floating-point error (multiple += operations will exaggerate the error more than a single *= would) - Numbers in Flash have 53-bit precision.
There's also another thing to keep in mind, which is probably playing a bigger role with such small movement values: Flash positions all objects using twips, which are about 1/20th of a pixel, or 0.05, so all values are rounded to this. When you say image.x += 0.12, it's actually the equivalent of image.x += 0.10, which is where the difference becomes apparent; you're losing 0.02 of a pixel with every move.
You should be able to get around it by moving to another scale, as @Pimgd says, or by just storing your position separately - i.e., work from a property _x rather than image.x so you're not losing that precision every time:
this._x += 0.12;
image.x = this._x;
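A quick Python illustration of why the separate accumulator (or the scaled-integer trick) avoids the drift:

x_float = 0.0
x_fixed = 0            # position stored at 100x, as suggested above
for _ in range(1000):
    x_float += 0.12    # rounding error compounds on every step
    x_fixed += 12      # exact integer arithmetic
print(x_float)         # off by a tiny accumulated error, not exactly 120.0
print(x_fixed / 100)   # 120.0, with a single rounding at display time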

How to detect if a frame is odd or even on an interlaced image?

I have a device that is taking TV screenshots at precise times (it doesn't take incomplete frames).
Still, this screenshot is an interlaced image made from two different original frames.
Now, the question is whether/how it is possible to identify which of the lines are newer/older.
I have to mention that I can take several sequential screenshots if needed.
Take two screenshots one after another, yielding a sequence of two images (1,2). Split each screenshot into two fields (odd and even) and treat each field as a separate image. If you assume that the images are interlaced consistently (pretty safe assumption, otherwise they would look horrible), then there are two possibilities: (1e, 1o, 2e, 2o) or (1o, 1e, 2o, 2e). So at the moment it's 50-50.
What you could then do is use optical flow to improve your chances. Say you go with the first option: (1e, 1o, 2e, 2o). Calculate the optical flow f1 between (1e, 2e). Then calculate the flow f2 between (1e, 1o) and f3 between (1o, 2e). If f1 is approximately the same as f2 + f3, then things are moving in the right direction and you've picked the right arrangement. Otherwise, try the other arrangement.
Optical flow is a pretty general approach and can be difficult to compute for the entire image. If you want to do things in a hurry, replace optical flow with video tracking.
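A rough sketch of that consistency check using OpenCV's Farneback flow; the field arrays e1, o1, e2 are hypothetical single-channel 8-bit images you would extract from the screenshots, and the tolerance is a guess:

import cv2
import numpy as np

def mean_flow(a, b):
    """Average dense optical-flow vector (dx, dy) from field a to
    field b; a and b are single-channel 8-bit images."""
    flow = cv2.calcOpticalFlowFarneback(a, b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0)

def order_consistent(e1, o1, e2, tol=0.5):
    """True if flow(1e->2e) ~ flow(1e->1o) + flow(1o->2e), i.e. the
    hypothesized field order moves consistently through time."""
    f1 = mean_flow(e1, e2)
    f2 = mean_flow(e1, o1)
    f3 = mean_flow(o1, e2)
    return np.allclose(f1, f2 + f3, atol=tol)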
EDIT
I've been playing around with some code that can do this cheaply. I've noticed that if 3 fields are consecutive and in the correct order, the absolute error due to smooth, constant motion will be minimized. On the contrary, if they are out of order (or not consecutive), this error will be greater. So one way to do this is to take groups of 3 fields, check the error for each of the two orderings described above, and go with the ordering that yields the lower error.
I've only got a handful of interlaced videos here to test with, but it seems to work. The only downside is that it's not very effective unless there is substantial smooth motion or the number of frames used is low (less than 20-30).
Here's an interlaced frame:
Here's some sample output from my method (same frame):
The top image is the odd-numbered rows. The bottom image is the even-numbered rows. The number in the brackets is the number of times that image was picked as the most recent. The number to the right of that is the error. The odd rows are labeled as the most recent in this case because the error is lower than for the even-numbered rows. You can see that out of 100 frames, it (correctly) judged the odd-numbered rows to be the most recent 80 times.
You have several fields: F1, F2, F3, F4, etc. Weave F1-F2 for the hypothesis that F1 is an even field. Weave F2-F3 for the hypothesis that F2 is an even field. Now measure the amount of combing in each frame. Assuming that there is motion, there will be some combing with the correct interlacing but more combing with the wrong interlacing. You will have to do this at several points in time in order to find some fields where there is motion.
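A minimal NumPy sketch of such a combing measure; F1, F2, F3 in the usage comment stand for hypothetical field arrays, each h/2 x w:

import numpy as np

def weave(top, bottom):
    """Interleave two fields into one full frame."""
    frame = np.empty((top.shape[0] * 2, top.shape[1]), top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

def combing(frame):
    """Mean absolute difference between adjacent rows; wrongly paired
    fields comb more, so they score higher."""
    rows = frame.astype(np.int32)
    return np.mean(np.abs(np.diff(rows, axis=0)))

# Compare combing(weave(F1, F2)) against combing(weave(F2, F3)) at
# several points in time where there is motion, and keep the
# hypothesis that consistently yields the lower score.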
