Is there an autonomous real-time clock with monthly drift of less than 10 milliseconds? - iot

Looking for a real-time clock for an IoT project. I need millisecond resolution for my application protocol, and clock drift is critical. So I wonder: is there an autonomous real-time clock (with a battery) that will lose less than 10 ms per month and run for a year?

The drift parameters you're asking for here -- 10 ms / 30 days -- imply <4 ppb accuracy. This will be a very difficult target to hit. A typical quartz timing crystal of the type used by most RTCs will drift by 50 - 100+ ppm (50,000 - 100,000 ppb) just based on temperature fluctuations.
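As a back-of-the-envelope check, here is a small sketch of that budget (assuming a 30-day month; the 20 ppm comparison figure is illustrative):

```swift
// Rough drift budget implied by 10 ms per 30-day month (illustrative numbers).
let driftBudgetSeconds = 0.010                 // 10 ms allowed error per month
let secondsPerMonth = 30.0 * 24 * 3600         // ~2.592e6 s
let fractionalError = driftBudgetSeconds / secondsPerMonth
print("Required stability ≈ \(fractionalError * 1e9) ppb")   // ≈ 3.9 ppb
// For comparison, a 20 ppm crystal may drift ~20e-6 * 2.592e6 s ≈ 52 s per month.
```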
Most of the higher-quality timing options (TCXO, OCXO, etc) will not be usable within your power budget -- a typical OCXO may require as much as 1W of (continuous) power to run its heater. About the only viable option I can think of would be a GPS timing receiver, which can synchronize a low-quality local oscillator to the GPS time, which is highly accurate.
Ultimately, though, I suspect your best option will be to modify your protocol to loosen or remove these timing requirements.

Sync it with a precise clock source, such as GPS.
You can also use a tiny atomic clock: https://physicsworld.com/a/atomic-clock-is-smallest-on-the-market/
or, in Europe, a DCF77 receiver.

Related

Reaching clock regions using BUFIO and BUFG

I need to realize a source-synchronous receiver in a Virtex 6 that receives data and a clock from a high speed ADC.
For the SERDES module I need two clocks, which are basically the incoming clock buffered by BUFIO and BUFR (as recommended). I hope my picture makes the situation clear.
(Image: clock distribution diagram)
My problem is that I have some IOBs that cannot be reached by the BUFIO because they are in a different, non-adjacent clock region.
A friend recommended using the MMCM and connecting the output to a BUFG, which can reach all IOBs.
Is this a good idea? Can't I connect my LVDS clock buffer directly to a BUFG, without going through the MMCM first?
My knowledge of FPGA architecture and clock regions is still very limited, so it would be nice if anybody has good ideas, wise words, or has perhaps worked out a solution to a similar problem in the past.
It is quite common to use an MMCM for external clock inputs, if only to clean up the signal and gain some other nice features (like 90/180/270 degree phase shifts for quad-data-rate sampling).
With the 7-series they introduced the multi-region clock buffer (BUFMR) that might help you here. Xilinx has published a nice answer record on which clock buffer to use when: 7 Series FPGA Design Assistant - Details on using different clocking buffers
I think your friend's suggestion is correct.
Also check this application note for some suggestions: LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication

iOS7: Is it more power efficient to defer updates or to set desired accuracy to low-accuracy

I have an app where I need accurate location updates every K minutes -- even while in the background. Significant location-change updates are not sufficient for my needs, hence I need to use CLLocationManager's startUpdatingLocation method and keep it running forever.
I want to use as little power as possible while still getting my periodic location updates. It seems that the two options for saving power are (temporarily) setting the desiredAccuracy property of the CLLocationManager to the least-accurate setting (kCLLocationAccuracyThreeKilometers), or deferring location updates via the allowDeferredLocationUpdates* method. However, these two techniques are mutually incompatible, since deferred updates require a high accuracy setting (the most accurate).
Does anyone know which approach saves more power, or if there is another way to minimize power usage while still getting periodic updates (even in the background)?
You should be doing both: deferring updates and reducing desiredAccuracy.
Every K minutes, check the current CLLocation value; if its accuracy is acceptable, use it. If not, raise the desiredAccuracy to 30 m (or Best, or whatever maximum is acceptable) for up to 30 seconds. This will turn on the GPS chip for 30 seconds; if you get a location with acceptable accuracy, use it and immediately put the desiredAccuracy back to 3000 m (kCLLocationAccuracyThreeKilometers) until the next K-minute period starts. If you don't get acceptable accuracy during those 30 seconds, too bad: use the best CLLocation you got during that 30-second window, go back to 3 km accuracy, and try again in K minutes.
Be sure to read up on how to configure deferred updates. It's not easy to get them to work, but using them will allow you to wake the CPU only once during the 30-second window when the GPS is on, instead of 30 times, saving a lot of battery there too.
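A minimal sketch of that toggling strategy (names like PeriodicLocator, checkInterval, and gpsWindow are mine; deferred-update configuration is omitted for brevity):

```swift
import CoreLocation

final class PeriodicLocator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let checkInterval: TimeInterval = 10 * 60        // "K minutes"
    private let gpsWindow: TimeInterval = 30                  // high-accuracy burst
    private let acceptableAccuracy: CLLocationAccuracy = 30   // metres

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyThreeKilometers
        manager.startUpdatingLocation()
        Timer.scheduledTimer(withTimeInterval: checkInterval, repeats: true) { [weak self] _ in
            self?.takeSample()
        }
    }

    private func takeSample() {
        // Use the cached fix if it is already good enough.
        if let loc = manager.location, loc.horizontalAccuracy <= acceptableAccuracy {
            use(loc)
            return
        }
        // Otherwise raise accuracy for a short window (wakes the GPS), then drop back down.
        manager.desiredAccuracy = kCLLocationAccuracyNearestTenMeters
        DispatchQueue.main.asyncAfter(deadline: .now() + gpsWindow) { [weak self] in
            guard let self = self else { return }
            if let loc = self.manager.location { self.use(loc) }   // best fix obtained in the window
            self.manager.desiredAccuracy = kCLLocationAccuracyThreeKilometers
        }
    }

    private func use(_ location: CLLocation) {
        // Hand the fix off to the application.
    }
}
```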
Deferred updates require iPhone 5 or later and iOS 6 or later. You can use deferredLocationUpdatesAvailable to determine if a device supports it.
Deferred updates use vastly less power, but they require hardware support, so they aren't always available. They work by caching the location data in hardware and then passing it to your app all at once (the power saving is in not waking the app frequently). They also offer time-based configuration.
Monitoring for significant changes (startMonitoringSignificantLocationChanges) likewise uses less power by not using the GPS (relying on cell towers instead), so again it requires specific hardware support.
Simply setting the desired accuracy to low doesn't necessarily use either of the above features, so you should check the device capability at runtime and use whichever features are available. AFAIK there are no published statistics on which of the hardware-supported options uses less power.
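For the runtime capability check, something along these lines (a hedged sketch; the fallback order is just one reasonable choice):

```swift
import CoreLocation

// Check hardware support before choosing a power-saving strategy.
if CLLocationManager.deferredLocationUpdatesAvailable() {
    // Hardware can batch fixes; configure allowDeferredLocationUpdates(untilTraveled:timeout:).
} else if CLLocationManager.significantLocationChangeMonitoringAvailable() {
    // Fall back to significant-change monitoring (cell towers, no GPS).
} else {
    // Last resort: keep desiredAccuracy low between sampling windows.
}
```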

NSThread, NSOperation or GCD for CoreMotion and accurate timing purposes?

I'm looking to do some high precision core motion reading (>=100Hz if possible) and motion analysis on the iPhone 4+ which will run continuously for the duration of the main part of the app. It's imperative that the motion response and the signals that the analysis code sends out are as free from lag as possible.
My original plan was to launch a dedicated NSThread based on the code in the metronome project as referenced here: Accurate timing in iOS, along with a protocol for motion analysers to link in and use the thread. I'm wondering whether GCD or NSOperation queues might be better?
My impression after copious reading is that they are designed to handle a quantity of discrete, one-off operations rather than a small number of operations performed over and over again at a regular interval, and that using them every millisecond or so might inadvertently create a lot of thread creation/destruction overhead. Does anyone have any experience here?
I'm also wondering about the performance implications of an endless while loop in a thread (such as in the code in the above link). Does anyone know more about how things work under the hood with threads? I know that iPhone4 (and under) are single core processors and use some sort of intelligent multitasking (pre-emptive?) which switches threads based on various timing and I/O demands to create the effect of parallelism...
If you have a thread that has a simple "while" loop running endlessly but only doing any additional work every millisecond or so, does the processor's switching algorithm consider the endless loop a "high demand" on resources thus hogging them from other threads or will it be smart enough to allocate resources more heavily towards other threads in the "downtime" between additional code execution?
Thanks in advance for the help and expertise...
IMO the bottleneck is rather the sensors. The actual update frequency is most often not equal to what you have specified. See update frequency set for deviceMotionUpdateInterval it's the actual frequency? and Actual frequency of device motion updates lower than expected, but scales up with setting
Some time ago I made a couple of measurements using Core Motion and the raw sensor data as well. I needed a high update rate too, because I was doing a Simpson integration and thus wanted to minimise errors. It turned out that the real frequency is always lower and that there is a limit at about 80 Hz. That was an iPhone 4 running iOS 4. But as long as you don't need this for scientific purposes, in most cases 60-70 Hz should fit your needs anyway.
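If it helps, here is a small sketch (the 100 Hz request and the logging are illustrative) that asks for device-motion updates on a dedicated queue and prints the rate actually delivered, so you can see where your device tops out:

```swift
import CoreMotion

let motionManager = CMMotionManager()
let motionQueue = OperationQueue()
motionQueue.maxConcurrentOperationCount = 1              // keep handlers in order

var lastTimestamp: TimeInterval?
motionManager.deviceMotionUpdateInterval = 1.0 / 100.0   // requested, not guaranteed
motionManager.startDeviceMotionUpdates(to: motionQueue) { motion, _ in
    guard let motion = motion else { return }
    if let last = lastTimestamp {
        // motion.timestamp is seconds since boot; the delta shows the real delivery rate.
        print(String(format: "actual rate ≈ %.1f Hz", 1.0 / (motion.timestamp - last)))
    }
    lastTimestamp = motion.timestamp
}
```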

iOS 4+: lag in CMDeviceMotion time intervals

I'm working on a calculation-intensive app that happens to listen to sensor data (acceleration, but also angular velocity). After a couple filters, these vectors are integrated to track displacement.
I have noticed that the timestamps associated with CMDeviceMotion and CMGyroData are late, because my CMMotionManager's handlers aren't fired at 100 Hz as specified by its accelerometerUpdateInterval and gyroUpdateInterval. It starts around 60 Hz and goes up and down. This significantly affects the integrations.
The same code in a stand-alone app does 100Hz like a charm.
So it looks like computation peaks from other modules of the big app make the sensor updates lag. Which surprises me, since the sensor manager is on a thread of its own and I understood from the doc that the sensor events were triggered by the hardware.
My question is: when the timestamp is unreliable as described, can the data still be used? Can it be extrapolated using another clock?
And I'm confused about why big, asynchronous computations on other threads can delay the accelerometer updates.
Thanks,
Antho
Bad timestamps are just as bad as inaccurate data since they have the same effect on the integration.
About 50 Hz is enough to track orientation. I was wondering how you track displacement because it is impossible with current sensors.

How do different visitor metrics relate?

Hypothetically, let's say someone tells you to expect X (100,000 or so) unique visitors per day as a result of a successful marketing campaign.
How does that translate to peak requests/second? Peak simultaneous requests?
Obviously this depends on many factors, such as typical number of pages requested per user session or load time of a typical page, or many other things. These are other variables Y, Z, V, etc.
I'm looking for some sort of function, or even just a ratio, to estimate these metrics, obviously for planning out the production environment's scalability strategy.
This might happen on a production site I'm working on really soon. Any kind of help estimating these is useful.
Edit: (following indication that we have virtually NO prior statistics on the traffic)
We can therefore forget about the bulk of the plan laid out below and directly get into the "run some estimates" part. The problem is that we'll need to fill-in parameters from the model using educated guesses (or plain wild guesses). The following is a simple model for which you can tweak the parameters based on your understanding of the situation.
Model
Assumptions:
a) The distribution of page requests follows the normal distribution curve.
b) Considering a short period during peak traffic, say 30 minutes, the number of requests can be considered to be evenly distributed.
This could be [somewhat] incorrect: for example we could have a double curve if the ad campaign targets multiple geographic regions, say the US and the Asian markets. Also the curve could follow a different distribution. These assumptions are however good ones for the following reasons:
it would err, if at all, on the "pessimistic side", i.e. over-estimating peak traffic values. This "pessimistic" outlook can be reinforced further by using a slightly smaller std deviation value. (We suggest using 2 to 3 hours, which would put 68% and 95% of the traffic over a period of 4 and 8 hours (2 h std dev) and 6 and 12 hours (3 h std dev), respectively.)
it makes for easy calculations ;-)
it is expected to generally match reality.
Parameters:
V = expected number of distinct visitors per 24 hour period
Ppv = average number of page requests associated with a given visitor session. (You may consider using the formula twice, once for "static" responses and once for dynamic responses, i.e. when the application spends time crafting a response for a given user/context.)
sig = std deviation in minutes
R = peak-time number of requests per minute.
Formula:
R = (V * Ppv * 0.0796)/(2 * sig / 10)
That is because, with a normal distribution and as per the z-score table, roughly 3.98% of the samples fall within one tenth of a std dev on either side of the mean (the very peak), so almost 8 percent of all samples fall within one tenth of a std dev of the mean in total; with the assumption of a relatively even distribution during this period, we then just divide by the number of minutes it spans.
Example: V = 75,000, Ppv = 12 and sig = 150 minutes (i.e. 68% of traffic assumed to come over 5 hours, 95% over 10 hours, and 5% over the other 14 hours of the day).
R = 2,388 requests per minute, i.e. about 40 requests per second. Rather heavy, but "doable" (unless the application takes 15 seconds per request...).
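A quick sketch of the formula with the example numbers plugged in (function and parameter names are mine, not part of the model):

```swift
// Peak-time request rate per the model above: R = (V * Ppv * 0.0796) / (2 * sig / 10)
func peakRequestsPerMinute(visitors: Double, pagesPerVisit: Double, sigmaMinutes: Double) -> Double {
    // ~7.96% of a normal distribution lies within ±0.1 sigma of the mean,
    // spread evenly over a window of (2 * sigma / 10) minutes.
    return (visitors * pagesPerVisit * 0.0796) / (2 * sigmaMinutes / 10)
}

let r = peakRequestsPerMinute(visitors: 75_000, pagesPerVisit: 12, sigmaMinutes: 150)
print(r, r / 60)   // ≈ 2388 requests/minute, ≈ 40 requests/second
```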
Edit (Dec 2012):
I'm adding here an "executive summary" of the model proposed above, as provided in comments by full.stack.ex.
In this model, we assume most people visit our system, say, at noon. That's the peak time. Others jump ahead or lag behind, the farther the fewer; Nobody at midnight. We chose a bell curve that reasonably covers all requests within the 24h period: with about 4 sigma on the left and 4 on the right, "nothing" significant remains in the long tail. To simulate the very peak, we cut out a narrow stripe around the noon and count the requests there.
It is noteworthy that this model tends, in practice, to overestimate the peak traffic, and may prove more useful for estimating a "worst case" scenario than for modelling plausible traffic patterns. Tentative suggestions to improve the estimate are:
to extend the sig parameter (to acknowledge that the effective traffic period of relatively high traffic is longer)
to reduce the overall number of visits for the period considered, i.e. reduce the V parameter by, say, 20% (to acknowledge that about that many visits happen outside of any peak time)
to use a different distribution such as say the Poisson or some binomial distribution.
to consider that there are a number of peaks each day, and that the traffic curve is actually the sum of several normal (or other distribution) functions with similar spread, but with a distinct peak. Assuming that such peaks are sufficiently apart, we can use the original formula, only with a V factor divided by as many peaks as considered.
[original response]
It appears that your immediate concern is how the server(s) may handle the extra load... A very worthy concern ;-). Without distracting you from this operational concern, consider that the process of estimating the scale of the upcoming surge also provides an opportunity to prepare yourself to gather more and better intelligence about the site's traffic, during and beyond the ad campaign. Such information will in time prove useful for making better estimates of surges etc., but also for guiding some of the site's design (for commercial efficiency as well as for improving scalability).
A tentative plan
Assume qualitative similarity with existing traffic.
The ad campaign will expose the site to a population (type of users) distinct from its current visitor/user population: different situations select different subjects. For example, the "ad campaign" visitors may be more impatient, focused on a particular feature, or concerned about price, as compared to the "self-selected?" visitors. Nevertheless, for lack of any other supporting model and measurement, and for the sake of estimating load, the general principle could be to assume that the surge users will, on the whole, behave similarly to the self-selected crowd. A common approach is to "run numbers" on this basis and to use educated guesses to slightly bend the coefficients of the model to accommodate a few distinctive qualitative differences.
Gather statistics about existing traffic
Unless you readily have better information for this (e.g. Tealeaf, Google Analytics...), your source for such information may simply be the web server's logs... You can then build some simple tools to parse these logs and extract the following statistics. Note that these tools will be reusable for future analysis (e.g. of the campaign itself); also look for opportunities to log more/different data, without significantly changing the application!
Average, Min, Max, Std Dev. for
number of pages visited per session
duration of a session
percentage of 24 hour traffic for each hour of a work day (exclude week-ends and such, unless of course this is a site which receives much traffic during these periods) These percentages should be calculated over several weeks at least to remove noise.
"Run" some estimates:
For example, start with a peak-use estimate, using the peak hour(s) percentage, the average daily session count, the average number of page hits per session, etc. This estimate should take into account the stochastic nature of traffic. Note that you don't have to worry, in this phase, about the impact of queuing effects; instead, assume that the service time relative to the request period is low enough. Therefore just use a realistic estimate (or rather a value informed by the log analysis, for these very high usage periods) for the way the probability of a request is distributed over short periods (say of 15 minutes).
Finally, based on the numbers you obtained in this fashion, you can get a feel for the kind of sustained load this would represent on the server, and plan to add resources or to refactor part of the application. Also (very important!) if the outlook is for sustained at-capacity load, start running the Pollaczek-Khinchine formula, as suggested by ChrisW, to get a better estimate of the effective load.
For extra credit ;-) Consider running some experiments during the campaign, for example by randomly providing a distinct look or behavior for some of the pages visited, and by measuring the impact this may have (if any) on particular metrics (registrations for more info, orders placed, number of pages visited...). The effort associated with this type of experiment may be significant, but the return can be significant as well, and if nothing else it may keep your "usability expert/consultant" on his/her toes ;-) You'll obviously want to work on defining such experiments with the proper marketing/business authorities, and you may need to calculate ahead of time the minimum percentage of users to whom the alternate site would be shown, to keep the experiment statistically representative. It is indeed important to know that the experiment doesn't need to be applied to 50% of the visitors; one can start small, just not so small that any variations observed may be due to chance...
I'd start by assuming that "per day" means "during the 8-hour business day", because that's a worse case without perhaps being unnecessarily worst-case.
So if you're getting an average of 100,000 in 8 hours, and if the time at which each one arrives is random (independent of the others) then in some seconds you're getting more and in some seconds you're getting less. The details are a branch of knowledge called "queueing theory".
Assuming that the Pollaczek-Khinchine formula is applicable, then because your service time (i.e. CPU time per request) is quite small (i.e. less than a second, probably), therefore you can afford to have quite a high (i.e. greater than 50%) server utilization.
In summary, assuming that the time per request is small, you need a capacity that's higher (but here's the good news: not much higher) than whatever's required to service the average demand.
The bad news is that if your capacity is less than the average demand, then the average queueing delay is infinite (or more realistically, some requests will be abandoned before they're serviced).
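For reference, a sketch of the Pollaczek-Khinchine mean-wait calculation for an M/G/1 queue (the 40 requests/s and 20 ms service-time figures are illustrative, not taken from this answer):

```swift
// Pollaczek-Khinchine mean waiting time: W = lambda * E[S^2] / (2 * (1 - rho)),
// with utilization rho = lambda * E[S].
func meanWaitSeconds(arrivalRate lambda: Double,   // requests per second
                     meanService s: Double,        // seconds of service per request
                     serviceVariance varS: Double) -> Double? {
    let rho = lambda * s
    guard rho < 1 else { return nil }              // otherwise the queue grows without bound
    let secondMoment = varS + s * s                // E[S^2]
    return lambda * secondMoment / (2 * (1 - rho))
}

// 40 req/s, 20 ms per request (rho = 0.8), exponential service times (variance = s^2):
if let w = meanWaitSeconds(arrivalRate: 40, meanService: 0.02, serviceVariance: 0.0004) {
    print("mean queueing delay ≈ \(w * 1000) ms")  // ≈ 80 ms on top of the 20 ms service time
}
```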
The other bad news is that when your service time is small, you're sensitive to temporary fluctuations in the average demand, for example ...
If the demand peaks during the lunch hour (i.e. isn't the same average demand as during other hours), or even if for some reason it peaks during a 5-minute period (during a TV commercial break, for example)
And if you can't afford to have customers queueing for that period (e.g. queueing for the whole lunch hour, or e.g. the whole five-minute commercial break)
... then your capacity needs to be enough to meet those short-term peak demands. OTOH you might decide that you can afford to lose the surplus: that it's not worth engineering for the peak capacity (e.g. hiring extra call centre staff during the lunch hour) and that you can afford some percentage of abandoned calls.
That will depend on the marketing campaign. For instance a TV ad will bring a lot of traffic at once, for a newspaper ad it will be spread out more over the day.
My experience with marketing types has been that they just pull a number from where the sun doesn't shine, typically higher than reality by at least an order of magnitude.
