For API performance testing with Gatling, whenever I set up users or QPS, I use the configuration below to manage my max QPS and keep the QPS consistent throughout the test duration.
setUp(
  scn.inject(constantUsersPerSec(2) during (UserRampup seconds))
)
  .throttle(
    reachRps(TotalQPS) in (QPSRampup seconds),
    holdFor(60 minutes)
  )
  .protocols(httpProtocol)
  .maxDuration(Duration minutes)
Now this is working fine, but in one of the scenarios I need to add a pause between two requests. What I observed is that when I use "throttle", it does not take any pause, sleep, or pace into consideration. Is there any way I can use a pause while having "throttle" in place?
No, that's not possible. throttle disables pauses so it can take over throughput generation.
(Cross-posted on Electrical Engineering Stack Exchange)
I'm using FreeRTOS in an application which requires the processor to sleep in low power mode for a long time (as long as 12 hours), then wake up in response to an interrupt request. I'm having a problem with the amount of time taken for FreeRTOS to wake up.
Before going to sleep, I disable the scheduler through a call to vTaskSuspendAll().
On waking, I calculate the amount of time that the processor has been asleep, and update FreeRTOS via a call to vTaskJumpTime(). I then make a call to xTaskResumeAll() to restart the scheduler.
The issue I have is that xTaskResumeAll() makes a call to xTaskIncrementTick() once for each tick that has been missed while the processor was asleep (recorded through vTaskJumpTime()). It takes about 12 seconds to wake after an hour asleep (3,600,000 calls to xTaskIncrementTick()).
Much as I'm tempted to modify the FreeRTOS xTaskIncrementTick() function, so that it can jump a number of ticks in one call, experience says that I would be smart to look for a standard way first.
Has anyone found another way to implement this behavior which doesn't result in the long wake-up delay?
This is implemented on a Microchip/Atmel SAM4L (Cortex-M4). I use the AST subsystem to record the time that the processor is asleep. The sleep is implemented through the BPM subsystem's bpm_sleep() in RETENTION mode.
FreeRTOS's built-in low-power support uses vTaskStepTick() to jump the tick count forward in one go. It can do that because it won't calculate a wake time (the time at which it leaves low-power mode) past the time at which it knows a task must unblock because a timeout expired. If you remain in sleep mode past the time a task should have unblocked because its timeout expired, you have an error anyway. The best place to get FreeRTOS support is its dedicated support forum.
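As a rough sketch of how that fits together: with configUSE_TICKLESS_IDLE set to 2 in FreeRTOSConfig.h, portSUPPRESS_TICKS_AND_SLEEP() can be mapped to a function like the one below. Note that bpm_retention_sleep() and ast_elapsed_ticks() are hypothetical wrappers around the SAM4L BPM/AST calls mentioned in the question, not FreeRTOS or ASF APIs.

#include "FreeRTOS.h"
#include "task.h"

/* Assumed in FreeRTOSConfig.h:
 *   #define configUSE_TICKLESS_IDLE           2
 *   #define portSUPPRESS_TICKS_AND_SLEEP( x ) vApplicationSleep( x )
 */
void vApplicationSleep( TickType_t xExpectedIdleTime )
{
    TickType_t xElapsedTicks;

    /* Interrupts must be masked while deciding whether it is safe to sleep. */
    __disable_irq();

    if( eTaskConfirmSleepModeStatus() == eAbortSleep )
    {
        __enable_irq();
        return;
    }

    /* Hypothetical wrappers: arm the AST alarm, enter RETENTION mode,
       and on wake read back how many tick periods actually elapsed. */
    bpm_retention_sleep( xExpectedIdleTime );
    xElapsedTicks = ast_elapsed_ticks();

    /* Never step past the point at which a task had to unblock. */
    if( xElapsedTicks > xExpectedIdleTime )
    {
        xElapsedTicks = xExpectedIdleTime;
    }

    /* One call moves the tick count forward, replacing the per-tick
       replay of xTaskIncrementTick() that xTaskResumeAll() performs. */
    vTaskStepTick( xElapsedTicks );

    __enable_irq();
}

The key point is the single vTaskStepTick() call on wake-up; there is no 12-second catch-up loop regardless of how long the processor slept.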
I'm trying to build an app which plays a sequence of tones in a loop.
Currently I use OpenAL, and my experience with that framework is positive, as I can also perform a pitch shift.
Here's the scenario:
load a short sound (3 seconds) from a CAF file
play that sound in a loop and also perform a pitch shift.
This works well, provided that the rate isn't too high - I mean more than 10 milliseconds per tone.
Anyhow, my NSTimer (which drives the sound sequence) should be configurable - and as soon as the rate increases (I mean less than 10 ms per tone), the sound is no longer played correctly - some tones are even dropped in an apparently random way.
It seems that real time sound processing becomes an issue.
I'm still a novice in iOS programming, but I believe that Apple sets a limit concerning time consumption and/or semaphores.
Now my questions:
OpenAL is written in C - so far, I haven't understood the whole code and philosophy behind that framework. Is there a possibility to resolve my above-mentioned problem by making some modifications - I mean setting flags/values or overriding certain methods?
If not, do you know another iOS sound framework more appropriate for this kind of real-time sound processing?
Many thanks in advance!
I know that this is a quite unusual and difficult problem - maybe some of you have resolved a similar one? Just to emphasize: the pitch shift must be guaranteed!
It is not immediately clear from the explanation precisely what you're trying to achieve. Some code is expected.
However, your use of NSTimer to sequence audio playback is clearly problematic. It is intended as neither a reliable nor a high-resolution timer.
NSTimer delivers events through a run-loop queue - probably your application's main queue - where they contend with user interface events.
As the main thread is not a real-time thread, it may not even be scheduled to run for some time.
There may be quantisation effects on the delay you requested, meaning your events effectively round to zero clock ticks and get scheduled immediately.
Periodic timers have deleterious effects on battery life. iOS and Mac OS X both take steps to reduce their impact through timer coalescing.
The clock you should be using for sequencing events is the playback sample clock, which is available in the render handler of whatever framework you use. As well as being reliable, this is also efficient, as the render handler will be running periodically anyway, and in a real-time thread.
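To make the idea concrete, here is a minimal sketch of a remoteIO render callback that sequences tones off the sample clock. It assumes a mono Float32 stream format; kSampleRate, kToneIntervalFrames, and the hard-coded frequencies are illustrative placeholders, not part of OpenAL or Core Audio.

#include <AudioUnit/AudioUnit.h>
#include <math.h>

static const Float64 kSampleRate         = 44100.0;
static const UInt32  kToneIntervalFrames = 441;   /* 10 ms per tone */

static OSStatus RenderTones(void                       *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *ioData)
{
    static UInt32  framesIntoTone = 0;
    static Float64 phase          = 0.0;
    static Float64 frequency      = 440.0;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        /* Advance the sequence purely by counting sample frames:
           no timer is involved, so tone boundaries are sample-exact. */
        if (framesIntoTone++ >= kToneIntervalFrames) {
            framesIntoTone = 0;
            frequency = (frequency == 440.0) ? 880.0 : 440.0; /* next tone */
        }
        out[i] = 0.2f * (Float32)sin(phase);
        phase += 2.0 * M_PI * frequency / kSampleRate;
    }
    return noErr;
}

Because the callback runs on Core Audio's real-time thread, tone changes land exactly on the computed frame, however short the tones are.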
I'm working on an iOS7-only app that needs to display a clock complete with ticking sound. I've used a NSTimer of 1s and I use AVAudioPlayer to play the tick sound every second.
Unfortunately, there's something slightly off with the timing. I've measured that the timer is off by between 2 and 22 thousandths of a second, which you wouldn't think would matter a great deal, but the lag creates a nail-biting tension... kind of like a heart flutter :-)
I've looked around a bit, but it sounds like using Audio Queue Services is the only way to go... and I really don't fancy delving into the depths of that particular framework again.
My question: Is there some other way of getting precisely scheduled sound events in iOS 7, and failing that, is there a decent wrapper framework for Audio Queue Services available somewhere? Or, better still, is there a way of scheduling NSTimers more precisely?
Using any of NSTimer, libdispatch, or spawning a thread that sleeps for the tick duration relies on the underlying thread getting scheduled in time. The kernel provides no guarantee of this, so it is not surprising that you observe timing jitter; the latency you observe looks reasonable.
NSTimer running on the main thread is likely to perform worst of these, as you are also contending with other events delivered through it.
I think your options here are either to use audio queue services, a real-time thread to schedule the events with AVAudioPlayer, or render the audio yourself to a remoteIO unit.
I don't think AVPlayer provides any particular guarantees about timing either.
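As a sketch of the real-time-thread option: sleeping to absolute deadlines with mach_wait_until() avoids the cumulative drift that repeated relative delays build up. play_tick() is a placeholder for triggering your prepared AVAudioPlayer; promoting the thread to a time-constraint scheduling policy is left out for brevity.

#include <mach/mach_time.h>
#include <pthread.h>

static void *tick_loop(void *unused)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    /* One second expressed in mach absolute-time units. */
    uint64_t period = (1000000000ull * tb.denom) / tb.numer;
    uint64_t next   = mach_absolute_time() + period;

    for (;;) {
        mach_wait_until(next);  /* sleep to an absolute deadline */
        /* play_tick(); */      /* placeholder: start the tick sound */
        next += period;         /* advance the deadline; no drift */
    }
    return NULL;
}

Even then, the moment AVAudioPlayer actually starts the sound is not guaranteed, which is why rendering the ticks yourself remains the most precise option.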
I have a game with a visible running timer, but can only achieve 2 digits of accuracy (#.##) beyond the decimal. Is this a limit of the framework, or is there a workaround? I am trying to achieve 4-6 digits of accuracy (#.######) on this timer.
A timer on iOS fires at a maximum frequency of 60 Hz, so that's why you only get 2 digits of accuracy.
You could work around this by taking the time at the start of your event, taking the time again at the end of the event, and calculating the difference (see the sketch below). This time won't take things like frame-rate drops, pausing, and moving into the background into account, though.
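A minimal sketch of that difference approach, using mach_absolute_time() as the high-resolution clock; event_started() and elapsed_seconds() are illustrative names, not system APIs.

#include <mach/mach_time.h>

static uint64_t start;

/* Call once when the event begins. */
void event_started(void)
{
    start = mach_absolute_time();
}

/* Call whenever the display updates; the double result easily carries
   the 4-6 fractional digits the question asks for. */
double elapsed_seconds(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    uint64_t delta = mach_absolute_time() - start;
    return (double)delta * tb.numer / tb.denom / 1e9;  /* ns -> s */
}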
This is a limitation of the underlying system. iOS is not a realtime system, and timers get scheduled on the so-called run loop, which dispatches them once they are due. For a timer to be dispatched accurately, the run loop has to run often enough and check the timer on every iteration. However, the run loop also handles other work - the whole event mechanism, messages, and networking all run on the run loop - so your timers aren't checked every few nanoseconds (the run loop also isn't run continuously, but gives some time back to the system as well).
I have an iOS app which synchronizes a certain number of assets at startup. I'm using AFNetworking and have set up an NSOperationQueue to handle all of the downloads. I was wondering how many simultaneous downloads make sense. Is there a limit where network performance will drop if I have too many at the same time? At the moment I'm doing a maximum of 5 downloads at a time.
This depends on several factors:
What is the network speed and latency?
What is the data size of the requests and responses?
How long does processing a request take on the server?
How long does processing a response take on the client?
How many parallel requests can the server fulfill efficiently?
How many users will make requests at the same time?
What is the minimal speed and memory size of the target device?
For small and medium sized applications, the limiting factor is usually the device's network latency, but that might not be the case in your situation. In the end, you'll have to test and figure out the most efficient compromise. 5 is a good number to start with.
You might want to set the number of concurrent downloads based on the available network connection (WLAN, 3G, or even slower...).
The beauty of using NSOperationQueues is that they are closely tied into the underlying OS (iOS or OS X). The queue decides how many operations to run based on many factors, including free memory, load on the system, etc.
You should not try to second guess the system and throttle yourself. Queue as many operations as you have and let the OS deal with it. I have an iPhone app that adds hundreds of operations in the queue when it has to fetch images of varying sizes etc. Works great, UI is not blocked, etc.
EDIT: well, it seems that when doing NSURLConnections and similar network connections, NSOperationQueue is NOT really tied in to network usage. I asked on the Apple internal forums this summer, and in the end was told by Quinn "The Eskimo" (Apple network guru) to use a limit of something like 4. So this post is correct in terms of pure processing power - NSOperationQueue will do the right thing - but when it comes to network operations you need to set a limit.
It mostly depends on your hardware, I would say. The best way to address this is to test it with multiple cases and multiple trials. Try to diversify the hardware you test on as much as possible (and remember: do not use the simulator to test this!).
There actually is a constant the SDK provides that varies depending on various constraints. I would recommend you look into using it.
Regarding this question, I've done some tests on an iPad 2 with iOS 6.0. I created a little app that performs an HTTP GET request to a web server. The web server provides data for 60 seconds (this is to get a meaningful result; I will change it to 10 minutes later in my tests).
For one HTTP GET request it works very well. Then I tried to perform several HTTP requests at the same time and see how many I can run, and how fast I can download over the iPad's WiFi connection.
I made 2 versions: one using NSOperations, and one using NSThreads with synchronous HTTP GET requests. In short, I always get a timeout for my 6th request (the TCP SYN doesn't reach my HTTP server).
Extra info:
NSThread implementation:
Simply make a for loop and create a thread per request. Each thread performs a synchronous HTTP request.
There I observe that my 6th request times out after 20 seconds. If I set the timeout to 80 seconds, I clearly see that my 6th request is launched only after the end of my first HTTP request (after 60 seconds)...
NSOperation implementation:
Create a queue and set maxConcurrentOperationCount to 12. Add 12 HTTP-request operations to the queue. Here as well I notice that the 6th request gets a -1001 error code (meaning: timeout), and I see no TCP SYN for the 6th request.