Synchronized random numbers - iOS

I have 2 devices, and I am looking for a way to sync random number generation between them.
More background: The 2 devices connect, one device sends the other a file containing a data set. The data set is then loaded on both devices. The data is displayed with randomization at various levels. I want the display to be synced between the devices, however still randomized.
A conceptual example: Take a stack of pictures. A copy of the stack is sent to the remote device and stored for future use. The stacks are then shuffled the same way on both devices, so that drawing the first picture on each device results in the same output. This is overly simplified; there are far more random numbers required in my application, so optimizations such as sharing the sort order are not applicable...
Breaking it down: I need a simple way to draw from the same random number pool on two devices. I do not know how many random draws may occur before the devices sync. Once synced, they should predictably draw the same number of random numbers, since they are using the same data sets; however, there is a chance one could draw more than the other before proceeding to the next batch, which would require a re-sync of the random data.
I'm looking to avoid transferring sort orders, position info, etc. for each entity already transferred in the data set at display time (which also raises structural concerns, since the project wasn't initially designed to share that info). Instead, I want both devices to generate the same placement, which requires that the random numbers come out in the same order.
Any thoughts or suggestions would be much appreciated.

You can use an LCG algorithm and set the same seed for the generation. Because an LCG algorithm is deterministic, as long as you seed both devices with the same seed, they will produce exactly the same pseudo-random numbers.
You can find more information on the LCG algorithm here:
Linear congruential generator
An LCG is used, for example, by java.util.Random.

If you give rand() the same seed on each device, i.e. srand( SEED );, the (pseudo-)random numbers that come out are guaranteed to be the same every time, and you can keep pulling numbers out indefinitely without reseeding.

Most random number generators let you set the "seed". If you create two random number generators, implementing the exact same generation algorithm, on two different machines (need not even be of the same type or running the same operating system) and then supply both machines with the same "seed" value, they will both produce the exact same random number sequence.
So your "sync" should really only need to transfer one number (generally itself a randomly-chosen number) from the first machine to the second. Then both machines use that same number as the "seed".
(I'd look up the specifics for the iPhone random number generators, but the Apple documentation site has apparently been affected by the Minnesota government shutdown.)

If you do not always want to specify the seed, you could simply designate one device as the master. When the master generates a random number, it sends a message to the other device containing that random number.

If the numbers were truly random, no seed would reproduce the same sequence on the second machine; this approach only works because the generators are pseudo-random, and therefore deterministic.

Related

How to tell the different SOMs apart?

I have been handed many GC Dev boards and SOMs at work, but there seems to be no external way to tell them apart by looking at part/model numbers. My coworkers are disorganized and have mixed three different orders together, and we cannot tell which is which anymore. I could turn them all on and look in the terminal, but surely there must be an easier way. I don't know why there is no label or discerning factor on the packages or the SOM itself. The datasheet says only this: these numbers don't correlate to any numbers on the actual SOM board. I have a model number that's the same for all of them, "AA1", but at the top is "09JF001TK", which contains the date it was manufactured; I cannot decipher what the other letters/numbers mean. I sent in a support request but have not heard back; I hope you can help. Neither QR code seems to yield any results.

AlphaVantage API Technical Indicators: Do they use only information of the past?

I am writing because I found no public documentation or code that resolves this question. I have been using the AlphaVantage API for a project on stock market prediction with machine learning. Many of the technical indicators in the AlphaVantage library use rolling windows of data points (e.g. moving averages).
However, many financial libraries retroactively update previously computed values for some of these indicators, using windows that contain future information relative to the point in time the indicator refers to. Obviously, that would represent "hidden" information that a predictive system relying only on past or present information, like mine, should not have access to.
Hence, I was wondering whether the same is true of the AlphaVantage library. I personally checked many indicators for the same stock by hand (and repeated the process for many stocks), days apart, and found no inconsistencies in the values for the common dates (the only difference being that the most recent versions of those indicators have new points, reflecting the newer price evolution).
I would be very pleased if anybody could help me resolve this.
Most indicators will use a look-back window of quote values, including the current price, to calculate current indicator values. Many will also include previously calculated indicator values as a basis for current indicator values. A few even recalculate older indicator values based on new price information.
For this last scenario, looking at the AlphaVantage library, I don't see any indicators in there that would recalculate older values based on newer data. If you're seeing indicator values change, it's probably due to revisions or updates to their underlying quote history.
I have a rather large .NET library of indicators, so I’m familiar with which kinds behave that way, due to the mathematics.
Some examples of indicators with retroactive recalculation are ZigZag and Williams Fractal. The reason they do this is because they find local high and low points, which can’t be verified without several confirming bars of data. In other words, you cannot indicate a high point until several lower bars occur thereafter.

How does OpenCL local work size work?

I use OpenCL for image processing. For example, I have a 1000*800 image.
I use a 2D global size of 1000*800, and a local work size of 10*8.
In that case, will the GPU automatically create 100*100 work groups?
And do these 10,000 groups run at the same time, in parallel?
If the hardware doesn't have 10,000 units, will one unit do the same work more than once?
I tested the local size: a very small size (1*1) and a big size (100*80) are both very slow, but a middle value (10*8) is faster. So, last question: why?
Thanks!
Work group sizes can be a tricky concept to grasp.
If you are just getting started and you don't need to share information between work items, ignore local work size and leave it NULL. The runtime will pick one itself.
Hardcoding a local work size of 10*8 is wasteful and won't utilize the hardware well. Some hardware, for example, prefers work group sizes that are multiples of 32.
OpenCL doesn't specify what order the work will be done in, just that it will be done. It might do one work group at a time, or it may do them in groups, or (for small global sizes) all of them together. You don't know and you can't control it.
To your question "why?": the hardware may run work groups in SIMD (single instruction multiple data) and/or in "Wavefronts" (AMD) or "Warps" (NVIDIA). Too small of a work group size won't leverage the hardware well. Too large and your registers may spill to global memory (slow). "Just right" will run fastest, but it is hard to pick this without benchmarking. So for now, leave it NULL and let the runtime pick for you. Later, when you become an OpenCL expert and understand more about how the hardware works, you can try specifying the work group size. However, be aware that the optimal size may be different for different hardware, and there are other rules (like global size must be a multiple of local size).

Bloomberg real-time data with lot sizes

I am trying to download real-time trading data from Bloomberg using the api.
So far I can get bid / ask / last prices successfully, but on some exchanges (like Canada) quote sizes are in lots.
I could of course query the lot sizes with the reference data API and store them for every security in the database, or something like that, but converting the size on every quote tick is a very "expensive" conversion, since ticks arrive every second or more often.
So is there any other way to achieve this?
Why do you need to multiply each value by the lot size? As long as the lot size is constant, each quote is comparable, and any computation can be implemented using the exchange values. Any results can be scaled in a presentation layer if necessary.

How do I combine GPS track data with another time-coded dataset?

I have GPS track data from a logging device, in GPX format (or any other format, easily done with gpsbabel). I also have non-GPS data from another measurement device over the same time period. However, the measurement intervals of both devices are not synced.
Is there any software available that can combine the measurement data with the GPS data, so I can plot the measured values in a spatial context?
This would require matching of measurement times and interpolation of GPS trackpoints, combining the results in a new track file.
I could start scripting all of this, but if there are existing tools that can do this, I'd be happy to know about them. I was thinking that GPSBabel might be able to do this, but I haven't found how.
A simple Excel macro would do the job.
In desktop GIS software you could import the two data types in their appropriate formats (which you haven't specified), whether shapefiles or simply tables. Then a table join can be performed based on attributes: selecting the measurement times as the join fields creates a table where, wherever the measurement-time values are shared between your two data sets, the rows are appended to one another.
