I have a question about the effects of multiple users.
I assume that the scheduler simply divides the frequency resources by the number of users and ignores multiuser interference, e.g. 1 user: 100 RBs / 2 users: 50 RBs each / 3 users: 33 RBs each ...
For example, suppose one user needs to transmit 100,000 bits and, in one transmission time interval (TTI), 100 bits can be transmitted per RB (5,000 bits over 50 RBs / 10,000 bits over 100 RBs).
In this example, I assumed the two cases below.
Case 1) If the user can use all 100 RBs (only 1 user exists), the user can transmit the data in 10 TTIs.
Case 2) If the user can use only 50 RBs (2 users exist), the user can transmit the data in 20 TTIs.
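Just to make the arithmetic explicit, here is a minimal sketch of how I computed the two cases (the only assumptions are 100 bits per RB per TTI and an even split of the 100 RBs):

    # Minimal sketch of the Case 1 / Case 2 arithmetic: 100 bits per RB per TTI
    # is assumed, and the scheduler simply splits the 100 RBs evenly per user.
    import math

    PAYLOAD_BITS = 100_000
    BITS_PER_RB_PER_TTI = 100
    TOTAL_RBS = 100

    for users in (1, 2):
        rbs_per_user = TOTAL_RBS // users
        bits_per_tti = rbs_per_user * BITS_PER_RB_PER_TTI
        ttis = math.ceil(PAYLOAD_BITS / bits_per_tti)
        print(f"{users} user(s): {rbs_per_user} RBs each -> {ttis} TTIs")
    # 1 user(s): 100 RBs each -> 10 TTIs
    # 2 user(s): 50 RBs each -> 20 TTIs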
Comparing the two cases, I can think of three possible conclusions.
A) The error performance will be the same, since the total payload is 100,000 bits in both cases.
B) The block error rate (BLER) of Case 1 is worse than that of Case 2: even though the number of TTIs is half that of Case 2, the number of RBs per transmission is larger, so retransmissions will occur more frequently in Case 1.
C) The block error rate (BLER) of Case 2 is worse than that of Case 1: even though the number of RBs is half that of Case 1, the data must be transmitted over more TTIs, so retransmissions will occur more frequently in Case 2.
Which conclusion can be derived? I have searched many papers and standards but I could not find a clear conclusion.
Thank you for reading this long question.
Have a nice day :)
I have a Python script which sends an SMS to all 11K people at once; they are from all sorts of countries.
I don't want to have money left over in my balance as I won't be doing that again.
The problem is that it's too difficult to estimate the cost, as the people are from 190 different countries.
I know Auto-recharge is enabled for me, but the issue is that the script sends all messages at once, so I do not think auto-recharge will work, as it would need to recharge within milliseconds.
Any solution?
I'd try batching strategies: most numbers can't process more than 10 SMS/second (1 SMS/second for NA numbers), and anything more will just get queued, so 11k messages would take ~18 minutes anyway.
So split your pool into 5 batches of ~2k messages, and see how much the first 3 batches cost, which would inform how much money to load for batches 4 & 5.
NOTE: running out of money mid-batch would need to be adequately handled, too.
Sending costs will vary by [destination] country, but these rates are published, e.g. US - 0.75 cents/msg, India - 1.75 cents/msg, UK - 4 cents/msg, etc.
Then the problem becomes one of parsing out country codes from your target numbers if they're not already split (e.g. +18005551234 vs. +1 8005551234).
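As a rough sketch of that estimation step (the prefix-to-rate table below just reuses the illustrative rates above plus a made-up default; a real run would load the provider's full published price list for all 190 countries):

    # Group E.164-formatted numbers by longest matching dialing prefix and
    # total an estimated cost per batch. Rates here are illustrative, not real.
    from collections import Counter

    RATES_USD = {"+1": 0.0075, "+91": 0.0175, "+44": 0.04}   # US, India, UK (from above)
    DEFAULT_RATE = 0.05                                       # assumed fallback for unknown prefixes

    def estimate_batch_cost(numbers, rates=RATES_USD, default_rate=DEFAULT_RATE):
        per_prefix = Counter()
        for number in numbers:
            prefix = next((p for p in sorted(rates, key=len, reverse=True)
                           if number.startswith(p)), "unknown")
            per_prefix[prefix] += 1
        return sum(count * rates.get(prefix, default_rate)
                   for prefix, count in per_prefix.items())

    batch = ["+18005551234", "+918005551234", "+448005551234"]
    print(f"estimated batch cost: ${estimate_batch_cost(batch):.4f}")   # 0.0075 + 0.0175 + 0.04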
To detect a number of n_errors errors in a code of n_total bits length, we must sacrifice a certain number n_check of the bits for some sort of checksum.
My question is: which method requires sacrificing the fewest bits n_check to detect a given number of errors n_errors in a code of n_total bits?
If there is no general answer to this question, I would greatly appreciate some hints concerning methods for the following conditions:
n_total = 32, n_errors > 1, and
obviously n_check should be as small as possible.
Thank you.
Link to CRC Zoo notes:
http://users.ece.cmu.edu/~koopman/crc/notes.html
If you look at the table for 3 bit crc:
http://users.ece.cmu.edu/~koopman/crc/crc3.html
You can see that the second entry, 0xb => x^3 + x + 1, has HD (Hamming distance) 3 for 4 data bits, for a total size of 7 bits. This can detect all 2-bit error patterns (out of 7 bits), but some 3-bit patterns will fail; an obvious case, where all bits should be zero, is
0 0 0 1 0 1 1 (when it should be 0 0 0 0 0 0 0)
This is a simple example where the number of 1 bits in the polynomial determines the maximum number of bit errors. To verify HD = 3 (2-bit errors detected), all 21 cases of 7 total bits with 2 bits bad were checked.
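To reproduce that check, here is a small brute-force sketch (mine, not from the CRC Zoo pages): it divides every 2-bit error pattern over 7 bits by the generator 0xB in GF(2) and confirms that none leaves a zero remainder, i.e. all 21 patterns are detected.

    # Verify that CRC-3 polynomial 0xB (x^3 + x + 1) detects every 2-bit error
    # in a 7-bit codeword: an error pattern is undetected iff it is divisible
    # by the generator polynomial over GF(2).
    from itertools import combinations

    POLY, POLY_DEG, TOTAL_BITS = 0b1011, 3, 7

    def gf2_remainder(value, nbits, poly, poly_deg):
        # Long division of `value` (nbits wide) by `poly` over GF(2).
        for shift in range(nbits - 1, poly_deg - 1, -1):
            if value & (1 << shift):
                value ^= poly << (shift - poly_deg)
        return value

    patterns = list(combinations(range(TOTAL_BITS), 2))
    undetected = [p for p in patterns
                  if gf2_remainder((1 << p[0]) | (1 << p[1]),
                                   TOTAL_BITS, POLY, POLY_DEG) == 0]
    print(f"{len(patterns)} patterns checked, {len(undetected)} undetected")  # 21, 0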
If you check out 32-bit CRCs, you'll see that 0x04c11db7 (Ethernet 802.3) has HD=6 (5-bit error detection) at 263 data bits => 263+32 = 295 total bits, while 0x1f4acfb13 has HD=6 at 32736 data bits => 32736+32 = 32768 total bits.
Here is a pdf article about a search for good CRCs:
https://users.ece.cmu.edu/~koopman/networks/dsn02/dsn02_koopman.pdf
Finding "good" CRCs for specific Hamming distances requires some knowledge about the process. For example, in the case of 0x1f4acfb13 with HD=6 (5-bad-bit detection), there are 314,728,365,660,920,250,368 possible combinations of 5 bad bits out of 32768 bits. However, 0x1f4acfb13 = 0xc85f * 0x8011 * 0x3 * 0x3 (carry-less, i.e. GF(2), products), and either of the 0x3 (x+1) factors will detect any odd number of error bits, which reduces the search to the 4-bad-bit cases. For the minimal-size message that fails with this polynomial, the first and last bits will be bad, which leaves only 2 of the 4 bits to search and reduces the number of cases to about 536 million.
Rather than calculating a CRC for each bit combination, a table of CRCs can be created for each single 1 bit in an otherwise all-zero message, and then the table entries corresponding to specific bits can be XOR'ed together to speed up the process. For a polynomial where the failure does not involve the first and last bits, a table of CRCs could be generated for all 2-bit errors (assuming such a table fits in memory), then sorted, then checked for duplicate values (with sorted data, this only requires one sequential pass of the table); duplicate values correspond to a 4-bit failure. Other situations will require different approaches, and in some cases it's still time consuming.
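Here is a minimal sketch of that "table of 2-bit CRCs, then look for duplicates" idea, run on the small CRC-3 / 7-bit example above so it finishes instantly: two distinct 2-bit patterns with the same remainder XOR to an undetected 4-bit pattern, and the same structure scales to larger polynomials as long as the table fits in memory.

    # Collision method: bucket all 2-bit error patterns by their remainder
    # modulo the generator; patterns sharing a bucket XOR to undetected
    # 4-bit errors. Demonstrated on the 7-bit / 0xB example from above.
    from collections import defaultdict
    from itertools import combinations

    POLY, POLY_DEG, TOTAL_BITS = 0b1011, 3, 7

    def gf2_remainder(value, nbits, poly, poly_deg):
        for shift in range(nbits - 1, poly_deg - 1, -1):
            if value & (1 << shift):
                value ^= poly << (shift - poly_deg)
        return value

    buckets = defaultdict(list)
    for i, j in combinations(range(TOTAL_BITS), 2):
        pattern = (1 << i) | (1 << j)
        buckets[gf2_remainder(pattern, TOTAL_BITS, POLY, POLY_DEG)].append(pattern)

    undetected_4bit = {a ^ b for group in buckets.values() if len(group) > 1
                       for a, b in combinations(group, 2)}
    print(f"{len(undetected_4bit)} undetected 4-bit error patterns")   # 7 for this small code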
I am looking for a solution because the sth-channel is full.
I am having trouble calculating the appropriate channel capacity.
This document has the following description.
In order to calculate the appropriate capacity, just have in consideration the following parameters:
・The amount of events to be put into the channel by the sources per unit time (let's say 1 minute).
・The amount of events to be gotten from the channel by the sinks per unit time.
・An estimation of the amount of events that could not be processed per unit time, and thus to be reinjected into the channel (see next section).
How can I check the values of these parameters?
You can't just check these parameters. They depend on your application.
What they are saying is that you should have a size which is large enough so the generator doesn't get stuck. This may not be possible in your application.
Say your generator receives one event per second and it takes 2 seconds for a receiver to manage that event. Now let's assume you have 3 receivers. In 1 second, each receiver can process 0.5 events. With 3 receivers, together they are capable of processing 0.5 × 3 = 1.5 events per second, which is more than what you get as input. Your capacity can be 1 or 2; using 2 will greatly increase your chances that you do not get blocked.
Let's review another example:
Your generator wants to push 1,000 events per second
Your receivers take 3 seconds to process one event
You would need 1,000 x 3 = 3,000 receivers (3,000 goroutines that can run at full speed in parallel...)
In this example, the total number of receivers is so large that you have to either break up your code to work on multiple computers or optimize your receiver code so it can process the data in an amount of time that makes sense. Say you have 50 processors; your receivers will get 1,000 events per second, and for all 50 to keep up at full speed, you need one receiver to do its work in:
50 / 1000 = 0.05 seconds
Now let's assume that in most cases your goroutines take 0.02 seconds, but once in a while one will take 1 second. That means your goroutines can get a little behind. In that case your capacity (so the generator doesn't get blocked) should be a little over 1,000. Again, it will depend on how many of the routines get slowed down, etc. In this last example, a run is 0.02 seconds, so 50 goroutines can work through 1,000 events in about 1,000 / 50 × 0.02 = 0.4 seconds. If those 1,000 events are spread over the 1-second period, you may not even need the 50 goroutines and could use a smaller capacity. On the other hand, if you have big bursts where you may end up sending many (say 500) events all at once, then more goroutines and a larger capacity are important so you do not get blocked.
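A tiny back-of-the-envelope helper, just restating the arithmetic from the two examples above in code (arrival rate, per-event service time, and worker count are the only inputs):

    # Restate the sizing arithmetic: how many receivers are needed to keep up,
    # what the per-event time budget is with a given worker count, and how
    # loaded the workers would be.
    def sizing(arrival_rate, service_time, workers):
        workers_needed = arrival_rate * service_time   # receivers required to keep up
        time_budget = workers / arrival_rate           # max seconds per event with this many workers
        utilization = workers_needed / workers
        return utilization, workers_needed, time_budget

    # Example 1: 1 event/s, 2 s per event, 3 receivers (3 / 2 = 1.5 events/s of capacity)
    print(sizing(arrival_rate=1, service_time=2, workers=3))      # ~(0.67, 2, 3.0)
    # Example 2: 1,000 events/s, 3 s per event -> 3,000 receivers needed;
    # with only 50 workers, each event must finish within 50 / 1000 = 0.05 s
    print(sizing(arrival_rate=1000, service_time=3, workers=50))  # (60.0, 3000, 0.05)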
If you are writing a Bosun alert which is based on a percentage error rate for requests handled by your system, how do you write it in such a way that it handles periods of low traffic?
For example:
If I have an alert which looks back over the last 5 minutes and works out the error rate for requests,
$errorRate = $numberErr/$numberReq, and triggers an alarm if the error rate exceeds a predefined threshold, crit = $errorRate > 0.05, this can work quite well as long as every 5-minute period has a sufficiently large number of requests ($numberReq).
If the number of requests in a 5 minute period was 10,000 then 501 errors would be required to trigger an alarm. However if the number of requests in a 5 minute period was 100 then only 5 errors would be required to trigger an alarm.
How can I write an alert which handles periods where the number of requests is so low that a small number of errors equates to a large error rate? I had considered a sliding window of time, rather than a fixed 5-minute period, where the window would increase in size until the number of requests was high enough to give some confidence in the alarm, e.g. increase the time period until the number of requests is 10,000.
I can't find a way to achieve this in bosun, and I don't want to commit to a larger period of time for my alerts because the traffic rate varies so much. A longer period during peak traffic could result in an actual error causing a much larger impact.
I generally pair any percentage and/or historical based alerts with a static threshold.
For example: crit = $numberErr > 100 && $errorRate > 0.05. That way the percentage part doesn't matter unless the number of errors has also crossed some threshold, because otherwise the entire statement won't be true.
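For illustration only, the combined guard behaves like this (a small Python mirror of the expression, with thresholds taken from the example above):

    # Mirror of the combined guard: the rate only matters once the absolute
    # error count has also crossed a floor, so low-traffic windows can't alert.
    def should_alert(num_err, num_req, min_errors=100, max_rate=0.05):
        rate = num_err / num_req if num_req else 0.0
        return num_err > min_errors and rate > max_rate

    assert not should_alert(10, 100)     # 10% error rate, but only 10 errors -> no alert
    assert should_alert(501, 10_000)     # both thresholds crossed -> alert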
We want a connectionless client-server. But, we want to reduce the overhead of creating/closing connections on every single request.
e.g., on client side, if connection was idle for 5 seconds, close it. Then create a new connection when you decided to send a new request.
ZeroC Ice uses this model.
The question is, can I set a life time for ZeroMQ connections?
e.g. if a connection has been idle for 5 seconds, it will be closed automatically. Then on each request, I check whether the connection is still alive; if it isn't, I reconnect to the server.
The chart consists of:
J | K | 1 | J | K | 1 | I | E   (1 bit each)

Frame Status
A one-byte field used as a primitive acknowledgement scheme on whether the frame was recognized and copied by its intended receiver.
A | C | 0 | 0 | A | C | 0 | 0   (1 bit each)
A = 1: address recognized; C = 1: frame copied

Overall frame layout:
SD (8 bits) | AC (8 bits) | FC (8 bits) | DA (48 bits) | SA (48 bits) | PDU from LLC (IEEE 802.2) (up to 18200 x 8 bits) | CRC (32 bits) | ED (8 bits) | FS (8 bits)
0MQ manages TCP connections for you automatically. (I assume your client/server will use TCP.) It provides very little information about connect/disconnect/reconnect status. Nor does it provide any "lifetime" or "timeout" features for sockets.
You'll need to implement the timeout logic you describe in your clients. At a high level: when the client needs to make a request it will first connect a socket, dispatch the request, get the response, then set a timer for 5 seconds. If another request is made in < 5 sec then it reuses the existing connection and resets the timer to 5 sec. If the timer fires then it closes the connection.
Be aware that 0MQ sockets are not thread safe. If your timer fires on a separate thread then it cannot safely close the 0MQ socket. Only the thread that created the socket should close it.
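A minimal single-threaded sketch of that pattern using pyzmq and a REQ socket; the endpoint and 5-second window are placeholders, and the idle check is done lazily before each request (rather than from a timer thread) so the socket is only ever touched by the thread that created it, in line with the note above.

    # Lazily close a ZeroMQ connection that has been idle too long, then
    # reconnect on the next request. Single-threaded on purpose: only the
    # creating thread ever touches the socket.
    import time
    import zmq

    class IdleClosingClient:
        def __init__(self, endpoint="tcp://localhost:5555", idle_seconds=5.0):
            self.endpoint = endpoint
            self.idle_seconds = idle_seconds
            self.ctx = zmq.Context.instance()
            self.sock = None
            self.last_used = 0.0

        def _ensure_socket(self):
            if self.sock is not None and time.monotonic() - self.last_used > self.idle_seconds:
                self.sock.close(linger=0)       # idle too long: drop the connection
                self.sock = None
            if self.sock is None:
                self.sock = self.ctx.socket(zmq.REQ)
                self.sock.connect(self.endpoint)

        def request(self, payload: bytes) -> bytes:
            self._ensure_socket()
            self.sock.send(payload)
            reply = self.sock.recv()
            self.last_used = time.monotonic()
            return reply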