Sync time across multiple iOS devices to milliseconds level? [closed] - ios

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Is it (really) possible to sync time across multiple (not inter-connected) iOS devices to within a few milliseconds of accuracy? The only possible solution I (and others, according to Stack Overflow) can think of is to sync the devices against a time server (or servers) over NTP.
Multiple sources state:
NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). It uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
Can NTP really achieve accuracy at the few-milliseconds level?
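For reference, here is a minimal sketch of what querying an NTP server looks like on iOS with the Network framework. The server name is an assumption, and the round-trip/offset correction that a real SNTP client performs (RFC 4330) is omitted, so this illustrates the mechanism rather than being a production client:

    import Foundation
    import Network

    // Minimal SNTP query against a public NTP server ("time.apple.com" is an
    // assumption). Sends the standard 48-byte client request and decodes the
    // server's transmit timestamp; no retries, no offset/round-trip correction.
    func queryNTPTime(host: String = "time.apple.com",
                      completion: @escaping (Date?) -> Void) {
        let connection = NWConnection(host: NWEndpoint.Host(host),
                                      port: 123,
                                      using: .udp)
        connection.stateUpdateHandler = { state in
            guard case .ready = state else { return }
            var request = Data(count: 48)
            request[0] = 0x1B                      // LI 0, version 3, mode 3 (client)
            connection.send(content: request, completion: .contentProcessed { _ in
                connection.receiveMessage { data, _, _, _ in
                    defer { connection.cancel() }
                    guard let data = data, data.count >= 48 else {
                        completion(nil)
                        return
                    }
                    let bytes = [UInt8](data)
                    // Transmit timestamp (seconds since 1900) is at bytes 40..43.
                    let secondsSince1900 = bytes[40...43].reduce(UInt32(0)) {
                        ($0 << 8) | UInt32($1)
                    }
                    let ntpEpochOffset: TimeInterval = 2_208_988_800
                    completion(Date(timeIntervalSince1970:
                                        TimeInterval(secondsSince1900) - ntpEpochOffset))
                }
            })
        }
        connection.start(queue: .global())
    }

Even with a clean reading, two devices agree only as well as their network paths allow: clock drift between queries and asymmetric routes are exactly what pushes the error from the sub-millisecond LAN figure toward the tens-of-milliseconds figure quoted above.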

Related

How to set up automated test for real-time communication (RTC) with multiple users on separate devices on iOS? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 days ago.
Assume you have an iOS app that implements the WebRTC protocols and P2P communication (i.e. a signaling server, STUN/TURN, and data channels are implemented). Let's say the app is for editing a collaborative document.
I would like to simulate the following scenario to run some tests:
30 people using 30 different devices make some non-conflicting edits on a shared document. I want to test the latency until a device receives the changes from the other devices (the latency of the P2P connection) and correctness (are all the changes received?).
How can I set this up in an automated test? My goal is to measure the latency and correctness in an automated way.
When I posted this the first time it got closed for being "opinion based". I am asking how to do this. I know the latency is not in my control; I just want to be able to measure it via automated testing. The goal here is to set up an automated test, and my question is how to do that. Thanks!
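One way to automate both measurements, sketched below as an XCTest case: every edit carries the sender's timestamp, and the receiving side fulfills an expectation per edit (correctness) and records receive time minus send time (latency). The EditChannel protocol and the sendEdit/onEdit names are placeholders for whatever your app exposes over its data channel; they are not part of any WebRTC API.

    import XCTest

    // Placeholder abstraction over your app's data-channel layer.
    // sendEdit/onEdit are hypothetical names, not WebRTC APIs.
    protocol EditChannel: AnyObject {
        func sendEdit(_ payload: Data)
        var onEdit: ((Data) -> Void)? { get set }
    }

    struct Edit: Codable {
        let id: UUID
        let sentAt: TimeInterval      // sender's clock, so latency is approximate
    }

    final class CollaborationLatencyTests: XCTestCase {
        /// Sends `editsPerPeer` edits from each peer and checks that the local
        /// channel receives all of them within `timeout`, recording latencies.
        func runEditExchange(local: EditChannel,
                             peers: [EditChannel],
                             editsPerPeer: Int,
                             timeout: TimeInterval = 10) {
            let allReceived = expectation(description: "all edits received")
            allReceived.expectedFulfillmentCount = peers.count * editsPerPeer

            var latencies: [TimeInterval] = []
            local.onEdit = { data in
                // Sketch assumes callbacks arrive on one queue; serialize if not.
                if let edit = try? JSONDecoder().decode(Edit.self, from: data) {
                    latencies.append(Date().timeIntervalSince1970 - edit.sentAt)
                    allReceived.fulfill()
                }
            }

            for peer in peers {
                for _ in 0..<editsPerPeer {
                    let edit = Edit(id: UUID(), sentAt: Date().timeIntervalSince1970)
                    peer.sendEdit(try! JSONEncoder().encode(edit))
                }
            }

            // Correctness: the fulfillment count fails the test if anything is lost.
            wait(for: [allReceived], timeout: timeout)
            // Latency: enforce a budget (the 1-second figure here is arbitrary).
            XCTAssertLessThan(latencies.max() ?? 0, 1.0)
        }
    }

For 30 real devices you would drive one instance of this per device (for example from a device farm or CI runners) and aggregate the recorded latencies. Note that one-way latency measured against the sender's clock is only as accurate as the clock sync between devices, which is exactly the first question on this page, so many setups measure round-trip time and halve it instead.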

Design of HA, consistent and responsive counter [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Let's say Flipkart launches an exclusive Redmi sale at 12 PM; the stock is 10K, but far more people will access it at the same time. There are advantages and disadvantages to keeping the counter on a single machine versus distributing it. If we keep it in an in-memory data store on a single machine, that machine becomes a bottleneck, since many app machines will hit it at the same time, and we have to consider the memory and CPU needed to queue those requests. If it is distributed across nodes and machines access different nodes, we eliminate the bottleneck, but an update on one node has to be made consistent across the other nodes, which also affects response time. What would be a good design choice for this?
Yes, a single-machine counter will really be a performance bottleneck under intensive load, and a single point of failure as well. I would suggest going with a sharded counter implementation.
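To make the sharded-counter idea concrete, here is a minimal single-process sketch (the names and the lock-per-shard approach are illustrative; in a real deployment each shard would be a separate node, Redis key, or database row):

    import Foundation

    // Sharded counter: total stock is split across N shards so that
    // concurrent decrements don't all contend on one counter.
    final class ShardedCounter {
        private var shards: [Int]
        private let locks: [NSLock]

        init(total: Int, shardCount: Int) {
            var split = Array(repeating: total / shardCount, count: shardCount)
            split[0] += total % shardCount          // remainder goes to shard 0
            shards = split
            locks = (0..<shardCount).map { _ in NSLock() }
        }

        /// Try to claim one unit of stock. Starts at a random shard and falls
        /// through to the others, so one drained shard doesn't mean "sold out".
        func tryDecrement() -> Bool {
            let start = Int.random(in: 0..<shards.count)
            for offset in 0..<shards.count {
                let i = (start + offset) % shards.count
                locks[i].lock()
                defer { locks[i].unlock() }
                if shards[i] > 0 {
                    shards[i] -= 1
                    return true
                }
            }
            return false   // every shard is empty: genuinely out of stock
        }
    }

The same pick-a-shard-then-fall-through shape carries over to the distributed case; the trade-off is that reading an exact live total means summing every shard, which is usually acceptable for a flash sale where the main question is "is anything left?".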

Recommendations on writing sensor data on iOS device [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
I need to store plenty of data points (a time series) coming from a sensor-rich device (SensorTag).
Is there any recommended framework to store plenty of fast streaming data?
What type of local storage system do you recommend, sql, file, else?
Details
- Data comes in at 25 Hz (25 samples per second)
- Each row might have 70 bytes worth of data
- It's a continuous capture for 12 hours straight
When I did something similar with a BTLE device, I used Core Data with one new managed object instance per reading. To avoid excessive Core Data work, I didn't save changes after every new instance-- I'd save at intervals, after 100 new unsaved readings were available.
You might need to tune the save interval, depending on details like how much data the new entries actually have, what else is happening in your app at the time, and what device(s) you support. In my case I was updating an OpenGL view to show a 3D visualization of the data in real time.
Whatever you choose, make sure it lets you get the readings out of memory quickly. 25 Hz * 70 bytes * 12 hours is a little over 75 MB. You don't want that all in RAM if you can avoid it.
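A minimal sketch of that batched-save approach, assuming an entity named "Reading" with timestamp and payload attributes (those names are placeholders for whatever your own model defines):

    import CoreData

    // Buffers sensor readings into Core Data and saves every 100 inserts
    // instead of after each one, so the 25 Hz stream doesn't stall on I/O.
    final class ReadingStore {
        private let context: NSManagedObjectContext
        private var unsavedCount = 0
        private let saveEvery = 100   // tune this interval for your workload

        init(container: NSPersistentContainer) {
            // Background context so saves never block the UI or BLE callbacks.
            context = container.newBackgroundContext()
        }

        func append(timestamp: Date, payload: Data) {
            context.perform {
                // "Reading" and its attribute names are assumptions about your model.
                let reading = NSEntityDescription.insertNewObject(
                    forEntityName: "Reading", into: self.context)
                reading.setValue(timestamp, forKey: "timestamp")
                reading.setValue(payload, forKey: "payload")

                self.unsavedCount += 1
                if self.unsavedCount >= self.saveEvery {
                    try? self.context.save()
                    self.unsavedCount = 0
                    self.context.reset()   // drop saved objects so memory stays flat
                }
            }
        }
    }

Saving on a background context and resetting it after each save keeps the full 12-hour capture out of RAM, which matters given the roughly 75 MB total noted above.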

Would doubling speed of CPU allow system to handle twice as many processes? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
If the speed of the CPU is doubled, would the system be able to handle twice as many processes? Assuming you ignore context switches that is.
No. CPU speed is rarely the bottleneck anymore. Also, doubling the clock speed would require changes in both your OS's scheduler and your compiler (both of which make assumptions about the speed of the CPU relative to the data buses).
It would make things better, but it's not a linear improvement.

Is there a way that I can use the 100% of my network bandwidth with only one connection? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers.
Closed 5 years ago.
I have a program that reads about a million rows and groups them; the client computer is not stressed at all, no more than 5% CPU usage, and the network card is used at about 10% or less.
If I run four copies of the program on the same client machine, usage grows at roughly the same rate: with the four programs running I get about 20% CPU usage and about 40% network usage. That makes me think I could improve performance by using threads to read the information from the database. But I don't want to introduce this complexity if a configuration change could achieve the same thing.
Client: Windows 7, CSDK 3.50.TC7
Server: AIX 5.3, IBM Informix Dynamic Server Version 11.50.FC3
There are a few tweaks you can try, most notably setting the fetch buffer size. The environment variable FET_BUF_SIZE can be set to a value such as 32767. This may help you get closer to saturating the client and the network.
Multiple threads sharing a single connection will not help. Multiple threads using multiple connections might help - they'd each be running a separate query, of course.
If the client program is grouping the rows, we have to ask "why?". It is generally best to leave the server (DBMS) to do that. That said, if the server is compute bound and the client PC is wallowing in idle cycles, it may make sense to do the grunt work on the client instead of the server. Just make sure you minimize the data to be relayed over the network.
