I'm a beginner with WAPT. I ran a test with 100 fixed users, and I want to know the average response time including page elements. Which one should I check?
1. Avg response time under summary, or
2. Avg under response time, or
3. Avg90 under response time?
Thanks in advance.
I am using the Google Ads REST API to pull Ads data. I am not using a client library.
One question: how do you programmatically check your current API usage when making requests, so you can stop and wait before continuing? Other APIs, such as the Facebook Marketing API, return a header in the response that tells you how many requests you have left, so I can stop and wait. Is there similar information in the Google Ads REST API?
Thank you for reading this.
I've seen nothing in the documentation so far to suggest that there is :(
(There is, separately, a RateExceeded error, which includes a retryAfterSeconds field, if you're going too fast / the API is overloaded.)
Ultimately, I tried this method. So far, I haven't hit the limit with it:
The basic developer token for the Google Ads API allows 15,000 requests per day as of this answer (https://developers.google.com/google-ads/api/docs/access-levels). That's 15,000 / 24 = 625 requests per hour.
Dividing further, that's 625 / 60 ≈ 10.4 requests per minute, so one request every 6 seconds ensures I won't hit the rate limit.
So my solution is:
Measure the time it takes to complete a request call and subsequent processing
If the total time is over 6 seconds, perform the next request immediately. Otherwise, wait until the total time reaches 6 seconds, then perform the next request.
The code below is what I used to do this. Hope it helps.
import time
from math import ceil

waiting_seconds = 6

start_time = time.time()

# ----- perform the API request and subsequent processing here -----

# Measure how long the request took; the full cycle should take at least
# 6 seconds to stay under the API limit.
end_time = time.time()
elapsed = end_time - start_time
if elapsed < waiting_seconds:
    remaining = ceil(waiting_seconds - elapsed)
    time.sleep(remaining)
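If you're making many requests back to back, the same idea can go in a loop. Here is a minimal sketch under the same assumptions; fetch_page is just a placeholder for whatever request and processing you actually do:
import time
from math import ceil

WAITING_SECONDS = 6  # roughly 10 requests per minute keeps us under the quota

def fetch_page(page_number):
    # Placeholder: perform the Google Ads REST request and process the result here.
    pass

for page_number in range(100):
    start_time = time.time()
    fetch_page(page_number)
    elapsed = time.time() - start_time
    if elapsed < WAITING_SECONDS:
        # Pad the iteration out to 6 seconds before starting the next request.
        time.sleep(ceil(WAITING_SECONDS - elapsed))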
I have a situation where I'm trying to count the number of files loaded into the system I am monitoring. I'm sending a "load time" metric to Datadog each time a file is loaded, and I need to send an alert whenever an expected file does not appear. To do this, I was going to count the number of "load time" metrics sent to Datadog in a 24-hour period, then use anomaly detection to see whether it was less than the normal expected number. However, I'm having trouble finding a way to consistently pull out this count for use in the alert.
I can't use the count_nonzero function, as some of my files are empty and have a load time of 0. I do know about .as_count() and count:metric{tags}, but I haven't found a way to include an evaluation interval with either of these. I've tried using .rollup(count, time) to count up the metrics sent, but this call seems to return variable results depending on the rollup interval. For instance, if I compare intervals of 2000 and 4000 seconds, I would expect each 4000-second interval to count roughly the sum of two 2000-second intervals over the same time period. That does not seem to be what happens at all: the counts for the smaller intervals add up to much more than the count for the larger one. Additionally, some rollup intervals display decimal numbers as counts, which makes no sense to me if this function is doing what I thought it was.
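For reference, the kind of query I've been experimenting with looks roughly like this (the metric name and tag are placeholders for my actual ones):
sum:files.load_time{env:prod}.rollup(count, 2000)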
Does anyone have any thoughts on how to accomplish this? I'd really appreciate any new ideas.
The code below registers five metrics (count, oneMinuteRate, fiveMinuteRate, fifteenMinuteRate, meanRate) in Graphite every 30 seconds from my application.
public void collectMetric(String metricName, long metricValue) {
    mr.meter(metricName).mark(metricValue);
}
I would like to show in the Grafana dashboard the number of requests received every minute (i.e. 60 received in the first minute, 120 in the second minute). Since the count in the meter metric above just keeps increasing, and all the *Rate values are events per second, I am not sure how to chart the number of requests received per minute in Grafana. Any advice is highly appreciated.
Suppose I use
mr.counter(metricName).inc(metricValue)
instead. Is there a way to reset the counters every minute?
I had the same problem. The way I found was to solve it in Grafana.
When you're editing the panel's metrics, you can add a function to your query.
You can try the derivative() function or the perSecond() function, but these functions are not completely reliable; it depends on what you're doing with them in your panel.
With these you'll see the rate of input over time rather than the running total.
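For example, something along these lines should give a per-minute rate (the metric path is a placeholder; use whatever path your Graphite reporter writes the meter's count to):
scale(perSecond(my.application.requests.count), 60)
perSecond() turns the ever-increasing count into a per-second rate, and scaling by 60 converts that to requests per minute.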
I am new to Prometheus and Grafana. My primary goal is to get the response time per request.
To me it seemed like a simple thing, but whatever I do, I do not get the results I require.
I need to be able to analyse the service latency in the last minutes/hours/days. The current implementation I found was a simple SUMMARY (without definition of quantiles) which is scraped every 15s.
Is it possible to get the average request latency of the last minute from my Prometheus SUMMARY?
If YES: How? If NO: What should I do?
Currently I am using the following query:
rate(http_response_time_sum{application="myapp",handler="myHandler", status="200"}[1m])
/
rate(http_response_time_count{application="myapp",handler="myHandler", status="200"}[1m])
I am getting two "datasets". The value of the first is "NaN"; I suppose this is the result of a division by zero.
(I am using spring-client).
Your query is correct. The result will be NaN if there have been no queries in the past minute.
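If the NaN series bothers you, two options (sketches, not the only way): use a longer range so that sparse traffic still leaves samples in the window, for example
rate(http_response_time_sum{application="myapp",handler="myHandler",status="200"}[5m])
/
rate(http_response_time_count{application="myapp",handler="myHandler",status="200"}[5m])
or append > 0 to the whole expression, which should drop the NaN points, since the comparison does not hold for NaN.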
I am trying to work with the up metric to determine the number of times the service was down for less than a minute (potentially a network hiccup) during a time range (or per hour). I am sampling at 5-second intervals.
The best I have so far is that up == 0 gives me a series with points only when the service was down, but I am not sure what to do next.
Any help with this type of query would be greatly appreciated.
Thanks.
You might try the following: calculate the average of the up metric. If the service goes down, the average (over a sliding 1-minute window) will decrease over time.
If the job comes up again, and the average is greater than 0, then the service wasn't down for more than one minute.
The following query (it works in the Prometheus web console) delivers one data point each time the service comes back up before it has been down for more than one minute.
avg_over_time(up{job="jobname"} [1m]) > 0
AND
irate(up{job="jobname"} [1m]) > 0
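If your Prometheus version supports subqueries (2.7 or later), you could then count how often that expression fires per hour. This is only a rough sketch and is approximate, because a single recovery can contribute more than one point depending on the subquery resolution versus your 5-second scrape interval:
count_over_time((
    avg_over_time(up{job="jobname"}[1m]) > 0
  and
    irate(up{job="jobname"}[1m]) > 0
)[1h:5s])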