I know that from the central side you can find the time between a peripheral's advertisements, but I would like to know whether you can do this on the peripheral side.
For instance, let's say I have an algorithm that uses time as an input. Now let's look at two consecutive advertisements. The first advertisement will conveniently be from 0-20 ms and the second advertisement will be from 21-40 ms. My algorithm's results are:
result 1 = algorithm output for 0-20 ms, i.e., the time interval between the start-advertising call and the actual time the first advertisement is sent.
result 2 = algorithm output for 21-40 ms, i.e., the time interval between the first advertisement and the second advertisement.
I know there is going to be some lag due to algorithm processing time and advertisement packet building time, but I would like to know if it's possible to record the algorithm's output as a string from 0 to 19 (or almost 20) ms until an advertisement is ready to be sent out at 20 ms with that string. Then the second advertisement would pick up the algorithm's output where the first advertisement left off, say at 20 or 21 ms, until it's time for that second advertisement to be sent out at 40 ms. I cannot find this in Apple's documentation. The thing is, I know I will not be able to predict when the advertisement will be sent out. I just want to basically say:
-When advertisement 1 is sent out at t1
{
send out data gathered from 0 - t1;
}
-When advertisement 2 is sent out at t2
{
send out data gathered from t1 - t2;
}
Again, I know the time you end the interval cannot be the exact same time you send out the advertisement; I just want something very close to it. Also, I know I could just do the algorithm on the central side using the time elapsed between advertisements received, but I actually want to use this with the peripheral's sensor readings as well, such as total magnetometer readings between advertisement intervals.
Thanks and sorry for the long explanation.
I am looking for a solution because the sth-channel is full.
I am having trouble calculating the appropriate channel capacity.
The documentation has the following description:
In order to calculate the appropriate capacity, just have in consideration the following parameters:
・The amount of events to be put into the channel by the sources per unit time (let's say 1 minute).
・The amount of events to be gotten from the channel by the sinks per unit time.
・An estimation of the amount of events that could not be processed per unit time, and thus to be reinjected into the channel (see next section).
How can I check the values of these parameters?
You can't just check these parameters. They depend on your application.
What they are saying is that you should have a size which is large enough so the generator doesn't get stuck. This may not be possible in your application.
Say your generator receives one event per second and it takes 2 seconds for a receiver to handle that event. Now let's assume you have 3 receivers. In 1 second, each receiver can process 0.5 events. You have 3 receivers, so together they are capable of processing 0.5 × 3 = 1.5 events per second, which is more than what you get as input. Your capacity can be 1 or 2; using 2 will greatly increase your chances of not getting blocked.
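As a rough illustration of that first example, here is a minimal Go sketch (the timings and names are made up for illustration): one producer pushing an event per second into a buffered channel of capacity 2, drained by three receivers that each take 2 seconds per event.

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	events := make(chan int, 2) // capacity 2, as suggested above

	var wg sync.WaitGroup
	// Three receivers, each needing ~2 s per event (0.5 events/s each,
	// 1.5 events/s combined, which beats the 1 event/s input rate).
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for ev := range events {
				time.Sleep(2 * time.Second) // simulated processing
				fmt.Printf("receiver %d handled event %d\n", id, ev)
			}
		}(i)
	}

	// Generator: one event per second. With capacity 2 the send below
	// should rarely, if ever, block.
	for ev := 0; ev < 10; ev++ {
		events <- ev
		time.Sleep(1 * time.Second)
	}
	close(events)
	wg.Wait()
}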
Let's review another example:
Your generator wants to push 1,000 events per second
Your receivers take 3 seconds to process one event
You would need 1,000 × 3 = 3,000 receivers (3,000 goroutines that can run at full speed in parallel...)
In this example, the total number of receivers is so large that you have to either break up your code to work on multiple computers or optimize your receiver code so it can process the data in an amount of time that makes sense. Say you have 50 processors: your receivers will get 1,000 events per second, and all 50 can run at full speed, so each receiver needs to do its work in:
50 / 1000 = 0.05 seconds
Now let's assume that in most cases your goroutines take 0.02 seconds, but once in a while one will take 1 second. That means your goroutines can get a little behind. In that case your capacity (so the generator doesn't get blocked) should be a little over 1,000. Again, it will depend on how many of the routines get slowed down, etc. In this last example, a run is 0.02 seconds, so with enough goroutines running in parallel, 1,000 events can usually be processed in about 0.02 seconds. If those 1,000 events are spread over the 1-second period, you may not even need the 50 goroutines and could have a smaller capacity. On the other hand, if you have big bursts where you may end up sending many (say 500) events all at once, then more goroutines and a larger capacity are important if you are not to get blocked.
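If it helps, the sizing arithmetic from both examples can be written down as a tiny helper; this is just the math above, not any Flume API (the function name is made up):

package main

import (
	"fmt"
	"math"
)

// receiversNeeded is the minimum number of parallel receivers required to
// keep up with eventsPerSecond when each event takes secondsPerEvent.
func receiversNeeded(eventsPerSecond, secondsPerEvent float64) int {
	return int(math.Ceil(eventsPerSecond * secondsPerEvent))
}

func main() {
	fmt.Println(receiversNeeded(1, 2))    // first example: 2 keep up, 3 give headroom
	fmt.Println(receiversNeeded(1000, 3)) // second example: 3,000 receivers
	fmt.Println(50.0 / 1000.0)            // with 50 processors, each run must fit in 0.05 s
}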
I am trying to calculate the reads/second and writes/second in my Cassandra 2.1 cluster. After searching and reading, I came to know about the JMX bean
org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency
Here I can see oneMinuteRate. I started a brand new cluster and began collecting these metrics from 0.
When I wrote my first record, I saw
Count = 1
OneMinuteRate = 0.01599111...
Does it mean that my write/s is 0.0159911?
Or does it mean that based on 1 minute data, my write latency is 0.01599 where Write Latency refers to the response time for writing a record?
Please help me understand the value.
Thanks.
It means that in the last minute, your writes were occurring at a rate of 0.01599 writes per second. Think about it this way: the rate of writes in the last 60 seconds would be
WritesInLastMinute ÷ 60
So in your case
1 ÷ 60 = 0.0166
Or, more precisely, 0.01599.
If you observed no further writes after that, the value would descend down to zero over the next minute.
OneMinuteRate, FiveMinuteRate, and FifteenMinuteRate are exponential moving averages, meaning they are not simply readings divided by time; instead, as the name implies, they take an exponential series of averages, as below:
result(t) = (1 - w) * result(t - 1) + w * event_this_period
where w is the weighting factor and t is the ticking time. In other words, they simply take, say, 20% of the new reading and 80% of the old readings; it's the same way UNIX systems measure CPU load.
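To make that concrete, here is a simplified Go sketch of such a meter. It is a sketch of the idea only, not the exact Dropwizard/Cassandra implementation; the 5-second tick and the alpha formula are assumptions in that spirit.

package main

import (
	"fmt"
	"math"
)

// ewmaRate is a simplified one-minute-rate meter: each tick it folds the
// count observed during the tick interval into an exponentially weighted
// average, per result(t) = (1 - w)*result(t-1) + w*event_this_period.
type ewmaRate struct {
	rate     float64 // smoothed events per second
	alpha    float64 // weighting factor w
	interval float64 // tick interval in seconds
	primed   bool
}

func newEwmaRate(tickSeconds, periodSeconds float64) *ewmaRate {
	return &ewmaRate{alpha: 1 - math.Exp(-tickSeconds/periodSeconds), interval: tickSeconds}
}

// tick is called once per interval with the number of events counted in it.
func (e *ewmaRate) tick(count float64) {
	instant := count / e.interval
	if !e.primed {
		e.rate, e.primed = instant, true
		return
	}
	e.rate = (1-e.alpha)*e.rate + e.alpha*instant
}

func main() {
	m := newEwmaRate(5, 60) // 5-second ticks, one-minute period
	m.tick(1)               // one write in the first 5 seconds
	for i := 0; i < 11; i++ {
		m.tick(0) // no further writes for the rest of the minute
	}
	fmt.Printf("one-minute rate after a quiet minute ≈ %.4f writes/s\n", m.rate)
}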
However, if this applies to requests that the server receives, below is a chart from one request to a server, with measurements taken by Dropwizard.
As you can see, from one request a curve is drawn over time. This is really useful for determining trends, but I'm not sure it's great for monitoring live traffic, especially critical traffic.
I have a bunch of questions concerning Timing Advance in GSM:
When is it defined?
Is it the phone or the BTS that is in charge of defining its value?
Is it dynamic? Does it depend on certain situations?
Let's say that I figured out a way to get the exact value of the Timing Advance (GSM Layer 1 transmission level) from the phone's modem:
In order to verify my solution, I'm supposed to repeatedly put my phone in a situation where it has to use/change the Timing Advance while I log its value...
How can I do that?
Thanks
In the GSM cellular mobile phone standard, timing advance value corresponds to the length of time a signal takes to reach the base station from a mobile phone. GSM uses TDMA technology in the radio interface to share a single frequency between several users, assigning sequential timeslots to the individual users sharing a frequency. Each user transmits periodically for less than one-eighth of the time within one of the eight timeslots. Since the users are at various distances from the base station and radio waves travel at the finite speed of light, the precise arrival-time within the slot can be used by the base station to determine the distance to the mobile phone. The time at which the phone is allowed to transmit a burst of traffic within a timeslot must be adjusted accordingly to prevent collisions with adjacent users. Timing Advance (TA) is the variable controlling this adjustment.
Technical Specifications 3GPP TS 05.10[1] and TS 45.010[2] describe the TA value adjustment procedures. The TA value is normally between 0 and 63, with each step representing an advance of one bit period (approximately 3.69 microseconds). With radio waves travelling at about 300,000,000 metres per second (that is 300 metres per microsecond), one TA step then represents a change in round-trip distance (twice the propagation range) of about 1,100 metres. This means that the TA value changes for each 550-metre change in the range between a mobile and the base station. This limit of 63 × 550 metres is the maximum 35 kilometres that a device can be from a base station and is the upper bound on cell placement distance.
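As a quick sanity check on those figures, here is the arithmetic in a small Go snippet (the constants are taken from the paragraph above):

package main

import "fmt"

func main() {
	const (
		bitPeriodUs = 3.69 // one TA step, in microseconds
		metresPerUs = 300  // radio propagation, metres per microsecond
		maxTA       = 63
	)
	roundTripPerStep := bitPeriodUs * metresPerUs // ≈ 1,107 m of round trip per step
	oneWayPerStep := roundTripPerStep / 2         // ≈ 553 m of range per step
	fmt.Printf("range change per TA step: %.0f m\n", oneWayPerStep)
	fmt.Printf("maximum range: %.1f km\n", maxTA*oneWayPerStep/1000) // ≈ 34.9 km
}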
A continually adjusted TA value avoids interference to and from other users in adjacent timeslots, thereby minimizing data loss and maintaining Mobile QoS (call quality-of-service).
Timing Advance is significant for privacy and communications security, as its combination with other variables can allow GSM localization to find the device's position and tracking the mobile phone user. TA is also used to adjust transmission power in Space-division multiple access systems.
This limited the original range of a GSM cell site to 35km as mandated by the duration of the standard timeslots defined in the GSM specification. The maximum distance is given by the maximum time that the signal from the mobile/BTS needs to reach the receiver of the mobile/BTS on time to be successfully heard. At the air interface the delay between the transmission of the downlink (BTS) and the uplink (mobile) has an offset of 3 timeslots. Until now the mobile station has used a timing advance to compensate for the propagation delay as the distance to the BTS changes. The timing advance values are coded by 6 bits, which gives the theoretical maximum BTS/mobile separation as 35km.
By implementing the Extended Range feature, the BTS is able to receive the uplink signal in two adjacent timeslots instead of one. When the mobile station reaches its maximum timing advance, i.e. maximum range, the BTS expands its hearing window with an internal timing advance that gives the necessary time for the mobile to be heard by the BTS even from the extended distance. This extra advance is the duration of a single timeslot, 156 bit periods. This gives roughly a 120 km range for a cell[3] and is implemented in sparsely populated areas and to reach islands, for example.
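For a rough check of that 120 km figure, using the ~550 m of range per bit period from the earlier paragraph: 63 × 550 m ≈ 34.7 km from the normal timing advance, plus 156 × 550 m ≈ 85.8 km from the extra timeslot, giving roughly 120 km in total.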
Hope this answers the question :)
It's defined every time the BTS needs to define the phone's transmission power, which happens quite often.
It's the core system (the BTS in GSM) that is totally in charge of defining its value.
It's very dynamic and changes a lot. Globally, the GSM core system is constantly trying to find the exact distance between the BTS and the MS, so it constantly makes a kind of "ping" to calculate it. The result of such operations is generally not that accurate since there are a lot of obstacles between the mobile and the BTS (it's not a direct link in open space).
Such operations happen a lot, so simply use your smartphone.
My aim is to measure MQTT device-to-device message latency (not throughput) and I'm looking for feedback on my code hacks. The setup is simple: just one device serving as two end-points (an old Linux PC with two terminal sessions, one running the subscriber and the other running the publisher sample app) and the default broker at tcp://m2m.eclipse.org:1883. I inserted time-capturing code fragments into the C-language publish/subscribe sample apps in the src/samples folder.
Below are the changes. Please provide feedback.
Changes to the subscribe sample app (MQTTAsync_subscribe.c)
Inserted the lines below at the top of the msgarrvd (message arrived) function
//print arrival time
struct timeval tv;
gettimeofday (&tv, NULL);
printf("Message arrived: %ld.%06ld\n", tv.tv_sec, tv.tv_usec);
Changes to the publish sample app (MQTTAsync_publish.c)
Inserted the lines below at the top of the onSend (callback) function
struct timeval tv;
gettimeofday (&tv, NULL);
printf("Message with token value %d delivery confirmed at %ld.%06ld\n",
response->token, tv.tv_sec, tv.tv_usec);
With these changes (after subtracting the time the message arrived at the subscriber from the time the delivery was confirmed at the publisher), I get a time anywhere between 0.5 and 1 millisecond.
Questions
Does this make sense as a rough benchmark on latency?
Is this the round-trip time?
Is the round-trip time in the right ball-park? Should be less? more?
Is it the one-way time?
Should I design the latency benchmark in a different way? I need a rough measurement (I'm comparing with XMPP).
I'm using the default QoS value (1). Should I change it?
The publisher takes a finite amount of time to connect (and disconnect). Should these be added?
The 200 ms latency is high! Can you please upload your code here?
Does this make sense as a rough benchmark on latency?
-- Yes, it makes sense. But a better approach is to automate the time subtraction at the subscriber, with both ends synchronized to NTP, rather than comparing console printouts by hand (see the sketch after these answers).
Is this the round-trip time? Is it the one-way time?
-- The message got published: you received the ACK at the publisher, and the same message got transferred to the subscribed client.
Is the round-trip time in the right ball-park? Should be less? more?
-- It should be less.
Should I design the latency benchmark in a different way? I need a rough measurement (I'm comparing with XMPP).
I'm using the default QoS value (1). Should I change it?
-- Try with QoS 0 (fire and forget).
The publisher takes a finite amount of time to connect (and disconnect). Should these be added?
-- Yes, it needs to be added, but this time should be very small.
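One way to automate that subtraction (an assumption on my part, not something from the sample apps) is to embed the publisher's send time in the payload and compute the latency where the message arrives. Here is a minimal Go sketch of the two helpers, usable with whatever MQTT client you prefer; the names payloadWithTimestamp and oneWayLatency are made up:

package latencyexample

import (
	"encoding/binary"
	"time"
)

// payloadWithTimestamp prepends the publisher's send time (Unix nanoseconds)
// to the application payload, so the receiving side can compute latency
// without comparing two console printouts by hand.
func payloadWithTimestamp(appData []byte) []byte {
	buf := make([]byte, 8+len(appData))
	binary.BigEndian.PutUint64(buf, uint64(time.Now().UnixNano()))
	copy(buf[8:], appData)
	return buf
}

// oneWayLatency extracts the embedded send time from a received payload and
// returns the elapsed time. Both machines must share a clock (NTP); on a
// single PC, as in the question, they trivially do.
func oneWayLatency(payload []byte) time.Duration {
	sentNanos := int64(binary.BigEndian.Uint64(payload[:8]))
	return time.Since(time.Unix(0, sentNanos))
}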
I am using the Corona SDK to create a multiplayer game. Each device sends 15 messages per second containing the position of its character. This results in choppy movement because some messages take slightly different times to be received, and it only sends 15 per second in a 30 fps game. How could I take the difference in the position or rotation of the character from the previous frame to the current frame to predict the position if no data is received in the next frame? I am completely open to other solutions too! Thanks!
If the send rate is fixed and known to both parties (sender and receiver), then the receiver can assume it. In that case, dead reckoning is a well-established technique that is easy to apply. For example, if sender sends data every 1/15th second, then receiver can assume that, after the first packet has been received, any other packets will be 1/15th second apart. If something is not received after 1/15th second, then receiver makes a guess as to what the data would be at 1/15th second if it had been received. Then when the packet actually comes, it corrects the guess.
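Here is a minimal sketch of that idea in Go (Corona itself is Lua, so treat this as pseudocode for the structure; the type and method names are made up):

package main

import "fmt"

type position struct{ x, y float64 }

// deadReckoner remembers the last reported position and the change between
// the last two reports (one report per 1/15th-second tick).
type deadReckoner struct {
	last     position
	velocity position // estimated change per tick
	hasPrev  bool
}

// onPacket is called when a real update arrives: re-estimate the per-tick
// velocity and adopt (or smoothly correct toward) the reported position.
func (d *deadReckoner) onPacket(p position) {
	if d.hasPrev {
		d.velocity = position{p.x - d.last.x, p.y - d.last.y}
	}
	d.last = p
	d.hasPrev = true
}

// onMissedTick is called when a tick passes with no packet: extrapolate by
// repeating the last observed per-tick velocity.
func (d *deadReckoner) onMissedTick() position {
	d.last = position{d.last.x + d.velocity.x, d.last.y + d.velocity.y}
	return d.last
}

func main() {
	var d deadReckoner
	d.onPacket(position{0, 0})
	d.onPacket(position{2, 1})    // character moved +2, +1 in one tick
	fmt.Println(d.onMissedTick()) // guess for the missing tick: {4 2}
}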
The only thing to guarantee is the order of packets, and delivery. If packets never drop, and always get there in order (i.e., TCP), you're laughing.
If order is guaranteed but some packets get dropped, then the receiver needs to know when a drop occurred. I think this could be handled via a counter that is part of the packet. So if the receiver gets a packet with counter = X, and the next packet has counter = X+2, then the receiver knows a packet was dropped, because order is guaranteed, so the time delta between the two is 2/15ths of a second.
If there is no dropping but order is not guaranteed, the counter will help there too: your receiver is guaranteed that if X has arrived, X+1 will arrive eventually, so even if it gets X+2, it should save X+2 and wait until X+1 arrives.
Finally, if neither delivery nor order is guaranteed (the case of UDP packets on a WAN), then once more the counter is sufficient, but the algorithm changes: if the receiver gets a packet with counter = X and then gets X+2, it could mean that X+1 has been dropped or will arrive shortly, but there is no way to know. So the receiver could wait a small amount (maybe another 1/15th second); if X+1 hasn't arrived by then, declare it dropped (so if it does eventually arrive, it will be ignored).
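And a sketch of that last (UDP) case, again in Go with made-up names: a small buffer keyed by the sequence counter that delivers packets in order, holds early arrivals, and gives up on a missing packet after waiting one extra tick (the one-tick grace period is just the example value from above):

package main

import "fmt"

// reorderBuffer delivers packets in counter order. Early arrivals are held;
// if the next expected packet is still missing after one extra tick, it is
// declared dropped and skipped (a late copy is then ignored).
type reorderBuffer struct {
	next    int            // next expected counter value
	pending map[int][]byte // packets that arrived early
	waited  bool           // whether we already waited one tick for next
}

func newReorderBuffer() *reorderBuffer {
	return &reorderBuffer{pending: map[int][]byte{}}
}

func (r *reorderBuffer) onPacket(counter int, data []byte) {
	if counter < r.next {
		return // late duplicate of something already skipped
	}
	r.pending[counter] = data
}

// onTick runs once per 1/15th-second tick and returns, in order, every
// packet that is ready to be consumed this tick.
func (r *reorderBuffer) onTick() [][]byte {
	var out [][]byte
	for {
		if data, ok := r.pending[r.next]; ok {
			out = append(out, data)
			delete(r.pending, r.next)
			r.next++
			r.waited = false
			continue
		}
		if len(r.pending) > 0 && r.waited {
			r.next++ // give up on the missing packet
			r.waited = false
			continue
		}
		if len(r.pending) > 0 {
			r.waited = true
		}
		return out
	}
}

func main() {
	r := newReorderBuffer()
	r.onPacket(0, []byte("p0"))
	fmt.Println(len(r.onTick())) // 1: p0 delivered
	r.onPacket(2, []byte("p2"))  // p1 never arrives
	fmt.Println(len(r.onTick())) // 0: waiting one extra tick for p1
	fmt.Println(len(r.onTick())) // 1: p1 declared dropped, p2 delivered
}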
The near-constant frequency of sender is critical in the above.