AFHTTPRequestOperation progress is uneven - iOS

I use an AFHTTPRequestOperation to upload between 1 and 6 images to a web server. The weird thing is that when it reports progress in my "setUploadProgressBlock" it reports totalBytesWritten as:
32,768
32,768
32,768
32,768
3,238
2,420
2,420... and it keeps repeating 2,420 until the final chunk, which is the remainder.
I'm using a UIProgressView to report upload progress, and it jumps to 30% or so immediately because of the unequal chunks at the beginning (the 32,768-byte chunks). I have cheated around this by basically ignoring the first four large chunks, but I'm wondering if anyone has an explanation for why it does this, or a more elegant way to handle it. Also, once it reports that all bytes have been written, it sits there "doing nothing" for several seconds, which seems like an unreasonably long delay. I've handled this with a UIActivityIndicator (spinner), but it's annoying that the delay is so long. I should mention that this is tested on 3G, as that will be the target environment.

Can you double check that you're not reading the value for bytesWritten, which reports how many bytes were uploaded in the last batch, as opposed to totalBytesWritten? Alternatively, it may be that several uploads are being performed simultaneously, which could be confusing if you're logging these all in the same callback.
The "doing nothing" for several seconds could be waiting for the response from the server. Do you have any more details about that?

Related

Script timing for getting data from API

I am trying to write a macro that combines /use on a trinket then calls the WoW API and outputs some data.
/use 14
/run local d=string.format("%.2f",GetDodgeChance()); print("Dodge Trinket used, dodge at:",d);
This works fine, except that there seems to be a timing problem between using the trinket and getting the updated dodge chance from the API. When I click once, the trinket activates but it shows the dodge chance without the trinket buff. If I immediately click again, it then shows the correct value, including the buff.
Is there some timing issue, such as the first /use command not firing until the macro ends? How do I ensure the GetDodgeChance() call includes the trinket buff?
Your game client needs to send the trinket use to the server and get a confirmation back that you received the buff that changes your stats; until that round trip completes, GetDodgeChance() still returns the old value, and your /run line executes well before the confirmation arrives.
However, execution of code can be delayed with C_Timer.After
Your example would become:
/use 14
/run C_Timer.After( 0.5, function() local d=string.format("%.2f",GetDodgeChance()); print("Dodge Trinket used, dodge at:",d) end)
with half a second of delay, which should be both long enough for GetDodgeChance() to return the correct (i.e. anticipated) value and short enough to appear "almost instant" to human perception.
You can of course tweak that value and try shorter delays - e.g. 0.1 seconds should also be long enough for the value to be correct. But since this involves communicating with the server, a very short delay might work fine 99% of the time and then fail when there is a random minor lag spike - not even something you notice in gameplay, but enough that the info about the buff and the changed dodge chance reaches your client a few milliseconds too late.

Circumventing negative side effects of default request sizes

I have been using Reactor pretty extensively for a while now.
The biggest caveat I have had coming up multiple times is default request sizes / prefetch.
Take this simple code for example:
Mono.fromCallable(System::currentTimeMillis)
    .repeat()
    .delayElements(Duration.ofSeconds(1))
    .take(5)
    .doOnNext(n -> log.info(n.toString()))
    .blockLast();
To the eye of someone who has worked with other reactive libraries before, this piece of code should log the current timestamp once a second, five times.
What really happens is that the same timestamp is returned five times, because delayElements doesn't send one request upstream for each elapsed duration; it sends 32 requests upstream by default, replenishing the number of requested elements as they are consumed.
This wouldn't be a problem if the environment variable for overriding the default prefetch weren't capped at a minimum of 8.
This means that if I want to write truly reactive code like the above, I have to set the prefetch to one in every transformation. That sucks.
Is there a better way?

What's the reason of using Circular Buffer in iOS Audio Calling APP?

My question is pretty much self explanatory. Sorry if it seems too dumb.
I am writing an iOS VoIP dialer and have checked some open-source code (iOS audio calling apps). Almost all of them use a circular buffer for storing recorded and received PCM audio data. So I am wondering why we need to use a circular buffer in this case. What's the exact reason for using such an audio buffer?
Thanks in advance.
Using a circular buffer lets you process the input and output data asynchronously from its source. The audio render process takes place on a high-priority thread. It asks for audio samples from your app (playback), and offers audio (recording/processing), on a timer in the form of callbacks.
A typical scenario would be for the audio callback to fire every 0.023 seconds to ask for (and/or offer) 1024 samples of audio. This thread is synchronized with system hardware so it is imperative that your callback returns before the 0.023 seconds is up. If you don't, the hardware won't wait for you, it will just skip that cycle and you will have an audible pop or silence, or miss audio you are trying to record.
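(For context, and as an assumption not stated in the answer: 0.023 seconds is what you get at the common 44.1 kHz sample rate, since 1024 samples / 44100 samples per second ≈ 0.023 seconds.)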
A circular buffer's job is to pass data between threads. In an audio application that means moving the samples to and from the audio thread asynchronously. One thread produces samples onto the "head" of the buffer, and the other thread consumes them from the "tail".
Here's an example, retrieving audio samples from the microphone and writing them to disk. Your app has subscribed to a callback that fires every 0.023 seconds, offering 1024 samples to be recorded. The naive approach would be to simply write the audio to disk from within that callback.
void myCallback(float *samples, int sampleCount, SampleSaver *saver) {
    SampleSaverSaveSamples(saver, samples, sampleCount);
}
This will work!! Most of the time...
The problem is that there is no guarantee that writing to disk will finish before 0.023 seconds, so every now and then, your recording has a pop in it because SampleSaver just plain took too long and the hardware just skips the next callback.
The right way to do this is to use a circular buffer. I personally use TPCircularBuffer because it's awesome. The way it works (externally) is that you ask the buffer for a pointer to write data to (the head) on one thread, then on another thread you ask the buffer for a pointer to read from (the tail). Here's how it would be done using TPCircularBuffer (skipping setup and using a simplified callback).
//this is on the high priority thread that can't wait for anything like a slow write to disk
void myCallback(float *samples, int sampleCount, TPCircularBuffer *buffer) {
    int32_t availableBytes = 0;
    float *head = TPCircularBufferHead(buffer, &availableBytes);
    memcpy(head, samples, sampleCount * sizeof(float));           //copies samples to head
    TPCircularBufferProduce(buffer, sampleCount * sizeof(float)); //moves buffer head "forward in the circle"
}
This operation is super quick and puts no extra pressure on that sensitive audio thread. You then create your own timer on a separate thread to write the samples to disk.
//this is on some background thread that can take its sweet time
void myLeisurelySavingCallback(TPCircularBuffer *buffer, SampleSaver *saver) {
    int32_t available;
    float *tail = TPCircularBufferTail(buffer, &available);
    int samplesInBuffer = available / sizeof(float);                  //mono
    SampleSaverSaveSamples(saver, tail, samplesInBuffer);
    TPCircularBufferConsume(buffer, samplesInBuffer * sizeof(float)); // moves tail forward
}
And there you have it, not only do you avoid audio glitches, but if you initialize a big enough buffer, you can set your write to disk callback to only fire every second or two (after the circular buffer has built up a good bit of audio) which is much easier on your system than writing to disk every 0.023 seconds!
The main reason to use the buffer is so the samples can be handled asynchronously. They are a great way to pass messages between threads without locks as well. Here is a good article explaining a neat memory trick for the implementation of a circular buffer.
Good question. There is another good reason for using a circular buffer.
In iOS, if you use callbacks (Audio Units) for recording and playing audio (in fact you need to if you want to create a real-time audio-transferring app), you will get a chunk of data covering a specific amount of time (let's say 20 milliseconds) from the recorder callback. And in iOS you will never get a fixed length of data every time: if you set the callback interval to 20 ms, you will get 370 or 372 bytes of data, and you never know in advance which one (correct me if I am wrong). Then, to transfer the audio in UDP packets you need a codec for encoding and decoding the data (G.729 is generally used for VoIP apps). But G.729 takes data in multiples of 8 bytes. Suppose you encode 368 (8*46) bytes per 20 ms. So what are you going to do with the rest of the data? You need to store it, in sequence, for the next chunk to process.
So that's the reason. There are some other details, but I kept it simple for better understanding. Just comment below if you have any questions.
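As an illustration only (none of this is from the original answer), here is a minimal C sketch of that bookkeeping using a plain holding buffer; a circular buffer plays the same role without having to shift the leftover bytes around, and encode_block() is a hypothetical stand-in for the real G.729 encode call:
#include <string.h>

#define BLOCK 8                        // the encoder only accepts multiples of 8 bytes
static unsigned char pending[4096];    // holds leftover bytes between callbacks (no overflow check, for brevity)
static size_t pendingLen = 0;

void encode_block(const unsigned char *data, size_t len);  // hypothetical stand-in for the real encoder

// Called from the recorder callback with whatever iOS delivered (e.g. 370 or 372 bytes).
void on_recorded(const unsigned char *data, size_t len) {
    memcpy(pending + pendingLen, data, len);             // append the new chunk
    pendingLen += len;
    size_t usable = pendingLen - (pendingLen % BLOCK);   // largest multiple of 8 available right now
    if (usable > 0) {
        encode_block(pending, usable);
        memmove(pending, pending + usable, pendingLen - usable);  // keep the remainder...
        pendingLen -= usable;                                     // ...in sequence for the next chunk
    }
}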

Why does epoll_wait only provide huge 1ms timeout?

The epoll_wait, select and poll functions all provide a timeout. However, with epoll it has a coarse resolution of 1 ms; select and ppoll are the only ones providing a sub-millisecond timeout.
That would mean doing other things at 1ms intervals at best. I could do a lot of other things within 1ms on a modern CPU.
So to do other things more often than 1ms I actually have to provide a timeout of zero (essentially disabling it). And I'd probably add my own usleep somewhere in the main loop to stop it chewing up too much CPU.
So the question is: why is the timeout in milliseconds, when there is clearly a case for a higher-resolution timeout?
Since you are on Linux, instead of providing a zero timeout value and manually calling usleep in the loop body, you could simply use the timerfd API. This essentially lets you create a timer (with a resolution finer than 1 ms) associated with a file descriptor, which you can add to the set of monitored descriptors.
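A minimal sketch of that approach, assuming a hypothetical 250-microsecond period and omitting all error checking (the descriptor names are mine, not from the answer):
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int epfd = epoll_create1(0);
    int tfd  = timerfd_create(CLOCK_MONOTONIC, 0);

    // Arm a periodic 250-microsecond timer (pick whatever sub-millisecond period you need).
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 250000 },
        .it_interval = { .tv_sec = 0, .tv_nsec = 250000 },
    };
    timerfd_settime(tfd, 0, &its, NULL);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = tfd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);

    for (;;) {
        struct epoll_event events[8];
        int n = epoll_wait(epfd, events, 8, -1);  // block indefinitely; the timer fd wakes us on schedule
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == tfd) {
                uint64_t expirations;
                read(tfd, &expirations, sizeof expirations);  // acknowledge the timer
                // do the periodic sub-millisecond work here, then handle other ready fds as usual
            }
        }
    }
}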
The epoll_wait interface just inherited a timeout measured in milliseconds from poll. For poll, it doesn't make sense to wait for less than a millisecond, because of the overhead of adding the calling thread to all the wait sets. But it does make sense for epoll_wait: a call to epoll_wait never requires putting the calling thread onto more than one wait set, the calling overhead is very low, and on rare occasions it could make sense to block for less than a millisecond.
I'd recommend just using a timing thread. Most of what you would want to do can be done in that timing thread, so you won't need to break out of epoll_wait. If you do need to make a thread return from epoll_wait, just write a byte to a pipe that thread is polling and the wait will terminate.
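A rough sketch of that pipe trick, assuming wakefd was created once with pipe() and its read end was registered for EPOLLIN with the epoll instance the polling thread waits on (the names are mine):
#include <unistd.h>

static int wakefd[2];  // created once at startup with pipe(wakefd)

// Any thread can call this to make the thread blocked in epoll_wait return.
void wake_polling_thread(void) {
    char b = 0;
    write(wakefd[1], &b, 1);  // epoll reports EPOLLIN on wakefd[0], so epoll_wait returns
}

// The polling thread calls this when it sees EPOLLIN on wakefd[0].
void drain_wakeups(void) {
    char buf[64];
    read(wakefd[0], buf, sizeof buf);  // discard the wake-up byte(s)
}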
In Linux 5.11, an epoll_pwait2 API was added, which takes a struct timespec as the timeout. This means you can now wait with nanosecond precision.
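For example, a sketch of a sub-millisecond wait with it (epfd is assumed to be an existing epoll descriptor; the call needs Linux 5.11+ and a recent glibc):
#define _GNU_SOURCE
#include <sys/epoll.h>
#include <time.h>

// Wait on epfd for at most 250 microseconds; a NULL sigmask makes it behave like epoll_wait.
int wait_quarter_millisecond(int epfd, struct epoll_event *events, int maxevents) {
    struct timespec timeout = { .tv_sec = 0, .tv_nsec = 250 * 1000 };
    return epoll_pwait2(epfd, events, maxevents, &timeout, NULL);
}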

What does it mean by buffer?

I see the word "BUFFER" everywhere, but I am unable to grasp what it exactly is.
Would anybody please explain what is buffer in layman's language?
When is it used?
How is it used?
Imagine that you're eating candy out of a bowl. You take one piece regularly. To prevent the bowl from running out, someone might refill the bowl before it gets empty, so that when you want to take another piece, there's candy in the bowl.
The bowl acts as a buffer between you and the candy bag.
If you're watching a movie online, the web service will continually download the next 5 minutes or so into a buffer; that way, your computer doesn't have to download the movie exactly as you're watching it (which would cause hanging).
The term "buffer" is a very generic term, and is not specific to IT or CS. It's a place to store something temporarily, in order to mitigate differences between input speed and output speed. While the producer is being faster than the consumer, the producer can continue to store output in the buffer. When the consumer gets around to it, it can read from the buffer. The buffer is there in the middle to bridge the gap.
If you average out the definitions at http://en.wiktionary.org/wiki/buffer, I think you'll get the idea.
For proof that we really did "have to walk 10 miles through the snow every day to go to school", see TOPS-10 Monitor Calls Manual Volume 1, section 11.9, "Using Buffered I/O", at bookmark 11-24. Don't read it if you're subject to nightmares.
A buffer is simply a chunk of memory used to hold data. In the most general sense, it's usually a single blob of memory that's loaded in one operation and then emptied in one or more, as in Perchik's "candy bowl" example above. In a C program, for example, you might have:
#include <unistd.h>

#define BUFSIZE 1024
char buffer[BUFSIZE];
ssize_t len = 0;
// ... later
while ((len = read(STDIN_FILENO, buffer, BUFSIZE)) > 0)
    write(STDOUT_FILENO, buffer, len);
... which is a minimal version of cat(1). Here, the buffer array is used to store the data read by read(2) until it's written; then the buffer is re-used.
There are more complicated buffer schemes used, for example a circular buffer, where some finite number of buffers are used, one after the next; once the buffers are all full, the index "wraps around" so that the first one is re-used.
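A tiny sketch of that wrap-around indexing (the slot count and names here are made up for illustration):
#include <stddef.h>

#define SLOTS 4
#define SLOT_SIZE 1024

static char slots[SLOTS][SLOT_SIZE];  // a finite set of buffers used one after the next
static size_t next = 0;

char *acquire_slot(void) {
    char *slot = slots[next];
    next = (next + 1) % SLOTS;  // after the last slot, the index wraps around to the first
    return slot;
}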
Buffer means 'temporary storage'. Buffers are important in computing because interconnected devices and systems are seldom 'in sync' with one another, so when information is sent from one system to another, it has somewhere to wait until the recipient system is ready.
Really it depends on the context, as there is no single definition, but speaking very generally a buffer is a place to temporarily hold something. The best real-world analogy I can think of is a waiting area. One simple example in computing is when "buffer" refers to a part of RAM used for temporary storage of data.
That a buffer is "a place to store something temporarily, in order to mitigate differences between input speed and output speed" is accurate, consider this as an even more "layman's" way of understanding it.
"To Buffer", the verb, has made its way into every day vocabulary. For example, when an Internet connection is slow and a Netflix video is interrupted, we even hear our parents say stuff like, "Give it time to buffer."
What they are saying is, "Hit pause; allow time for more of the video to download into memory; and then we can watch it without it stopping or skipping."
Given the producer / consumer analogy, Netflix is producing the video. The viewer is consuming it (watching it). A space on your computer where extra downloaded video data is temporarily stored is the buffer.
A video progress bar is probably the best visual example of this:
That video is 5:05. Its total play time is represented by the white portion of the bar (which would be solid white if you had not started watching it yet.)
As represented by the purple, I've actually consumed (watched) 10 seconds of the video.
The grey portion of the bar is the buffer. This is the video data that is currently downloaded into memory, the buffer, and is available to you locally. In other words, even if your Internet connection were to be interrupted, you could still watch the part you have buffered.
The term "buffer" is a very generic term, and is not specific to IT or CS. It's a place to store something temporarily, in order to mitigate differences between input speed and output speed. While the producer is being faster than the consumer, the producer can continue to store output in the buffer. When the consumer speeds up, it can read from the buffer. The buffer is there in the middle to bridge the gap.
A buffer is a data area shared by hardware devices or program processes that operate at different speeds or with different sets of priorities. The buffer allows each device or process to operate without being held up by the other. For a buffer to be effective, its size and the algorithms for moving data into and out of it need to be considered by the buffer designer.
A buffer is a "midpoint holding place", but it exists not so much to accelerate the speed of an activity as to support the coordination of separate activities.
This term is used both in programming and in hardware. In programming, buffering sometimes implies the need to screen data from its final intended place so that it can be edited or otherwise processed before being moved to a regular file or database.
Buffer is temporary placeholder (variables in many programming languages) in memory (ram/disk) on which data can be dumped and then processing can be done.
There are many advantages of buffering: it allows things to happen in parallel, improves I/O performance, and so on.
It also has downsides if not used correctly, such as buffer overflow and buffer underflow.
A C example of character buffers:
#include <stdlib.h>

char *buffer1 = calloc(5, sizeof(char));   // zero-initialized room for 5 bytes
char *buffer2 = calloc(15, sizeof(char));  // zero-initialized room for 15 bytes
