Response Time per Route Traefik - docker

I'm trying to find a way to get response times from Traefik per route.
For instance:
/api/asdf/.* 123.5ms
/blub/narf/.* 70.5ms
/blub/.* 1337.8ms
and so on.
This doesn't seem to be a very unusual requirement, but after googling a lot, I didn't find anything that could do the job.
I even had a look at middlewares, but there is no way to get the response time of a request there: a middleware only hooks into the incoming request, and there is no hook that gets called after the request has completed.
The Traefik log files actually contain the information (at debug log level), and if I could tail them somehow, e.g. with a Python script, I could run a list of regexes on them and collect the response times that way. But tailing docker logs is quite messy imho, and I think there should be a more obvious way I haven't found yet 🤔
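For concreteness, here is a rough sketch of the kind of tailing script I mean, assuming Traefik is configured to write JSON access logs (whose entries carry RequestPath and Duration fields, the latter in nanoseconds) and that the container is named traefik:

```python
import json
import re
import subprocess
from collections import defaultdict

# Route patterns from the example above; first match wins, so the more
# specific /blub/narf/.* must come before /blub/.*.
ROUTES = [re.compile(p) for p in (r"/api/asdf/.*", r"/blub/narf/.*", r"/blub/.*")]
totals = defaultdict(lambda: [0, 0.0])  # pattern -> [request count, total ms]

# Assumes a container named "traefik" with JSON access logs enabled
# (accesslog.format=json); adjust the container name to your setup.
proc = subprocess.Popen(
    ["docker", "logs", "-f", "traefik"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
for line in proc.stdout:
    try:
        entry = json.loads(line)
    except ValueError:
        continue  # skip non-JSON lines such as startup messages
    path, duration_ns = entry.get("RequestPath"), entry.get("Duration")
    if path is None or duration_ns is None:
        continue
    for pattern in ROUTES:
        if pattern.match(path):
            bucket = totals[pattern.pattern]
            bucket[0] += 1
            bucket[1] += duration_ns / 1e6  # nanoseconds -> milliseconds
            print(f"{pattern.pattern} avg {bucket[1] / bucket[0]:.1f}ms")
            break
```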
Can't imagine that I'm the first person trying to track response times per route - but why can't I find anything 🤷‍♂️
Does someone perhaps have an idea in which direction I should search?
Thank you in advance!

If you don't find anything, you can take inspiration from this project: https://github.com/traefik/plugin-rewritebody. It lets the user replace the response body, but you get the idea: it contains an example of intercepting the response, and you can add your own logic to write the response time to a file or wherever you like.

You can try using Prometheus with Traefik. Here is a sample docker-compose which will do the job.
You might want to check out this open-source repo.
We calculated the average response time of APIs by enabling Prometheus in Traefik.
The expression we are using for this is:
expr: sum(traefik_service_request_duration_seconds_sum{instance="company.com:12345", service=~"backend-module-test.*"})
      / sum(traefik_service_request_duration_seconds_count{instance="company.com:12345", service=~"backend-module-test.*"}) * 1000
This expression is evaluated over periods of 1m, 5m, 10m, etc., and the resulting graph is displayed on the dashboard.
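If you want the numbers outside a dashboard, you can also evaluate such an expression against Prometheus's HTTP query API. A minimal sketch in Python, assuming Prometheus is reachable at prometheus:9090 and reusing the expression above (the instance/service label filters are the placeholders from that expression):

```python
import requests

# Placeholder Prometheus address; EXPR is the average-duration expression above.
PROM = "http://prometheus:9090"
EXPR = (
    'sum(traefik_service_request_duration_seconds_sum{service=~"backend-module-test.*"})'
    ' / '
    'sum(traefik_service_request_duration_seconds_count{service=~"backend-module-test.*"})'
    ' * 1000'
)

resp = requests.get(f"{PROM}/api/v1/query", params={"query": EXPR}, timeout=10)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    # result["value"] is [unix_timestamp, "<value as string>"]
    print(f'avg response time: {float(result["value"][1]):.1f}ms')
```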
Other options are covered in the Traefik docs.

Related

Azure-devops rest api - pagination and rate limit

I am trying to pull Azure DevOps entity data (teams, projects, repositories, members, etc.) and process that data locally.
I cannot find any documentation regarding rate limiting and pagination.
Does anyone have any experience with that?
There is some documentation for pagination on the members API:
https://learn.microsoft.com/en-us/rest/api/azure/devops/memberentitlementmanagement/members/get?view=azure-devops-rest-6.0
But that is the only one; I couldn't find any documentation for any of the Git entities,
e.g. repositories:
https://learn.microsoft.com/en-us/rest/api/azure/devops/git/repositories/list?view=azure-devops-rest-6.0
If someone could point me to the right documentation, or shed some light on these subjects, it would be great.
Thanks.
I cannot find any documentation regarding rate limiting and pagination. Does anyone have any experience with that?
There is a document about service limits and rate limits, which describes the limits that all projects and organizations are subject to.
For the Rate limiting:
Azure DevOps Services, like many Software-as-a-Service solutions, uses multi-tenancy to reduce costs and to enhance scalability and performance. This leaves users vulnerable to performance issues and even outages when other users of their shared resources have spikes in their consumption. To combat these problems, Azure DevOps Services limits the resources individuals can consume and the number of requests they can make to certain commands. When these limits are exceeded, subsequent requests may be either delayed or blocked.
You can refer to the Rate limits documentation for details.
For pagination: generally, a REST API will return a paginated response, and ADO REST APIs normally have a limit of 100 or 200 items per page (depending on the API) in each response. The way to retrieve the next page is to read the x-ms-continuationtoken response header and pass its value as the continuationToken parameter of the next request.
But Microsoft does not document this very well - this should have been mentioned in every API call that supports continuation tokens:
Builds - List:
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitions={definitions}&continuationToken={continuationToken}&maxBuildsPerDefinition={maxBuildsPerDefinition}&deletedFilter={deletedFilter}&queryOrder={queryOrder}&branchName={branchName}&buildIds={buildIds}&repositoryId={repositoryId}&repositoryType={repositoryType}&api-version=5.1
If I use the above REST API with $top=50, I get 50 results back as expected, along with a header called "x-ms-continuationtoken"; we can then loop over the results using the continuation token:
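As a sketch of that loop in Python (the organization, project, and personal access token are placeholders; the header and parameter names are the ones described above):

```python
import requests

# Placeholder values: fill in your own organization, project, and PAT.
ORG, PROJECT, PAT = "myorg", "myproject", "..."
url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds"
params = {"api-version": "5.1", "$top": 50}

builds = []
while True:
    # ADO accepts a PAT as the password of HTTP basic auth with an empty username.
    resp = requests.get(url, params=params, auth=("", PAT), timeout=30)
    resp.raise_for_status()
    builds.extend(resp.json()["value"])
    token = resp.headers.get("x-ms-continuationtoken")
    if not token:
        break  # no header means this was the last page
    params["continuationToken"] = token

print(f"fetched {len(builds)} builds")
```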
You could check this similar thread for some more details.
I think most of the APIs have $top/$skip query parameters. You can use these parameters to do pagination. Let's say the default run gives 200 documents in the response. For the next run, skip those 200 by providing $skip=200 in the query parameters of the request to get the next 200 items. You can keep iterating until the count attribute of the response becomes 0 (see the sketch below).
For those APIs where you don't have these parameters, you can use the continuation token as mentioned by Leo Liu-MSFT.
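A sketch of the $top/$skip variant; the endpoint is illustrative (any ADO list API that supports these parameters works the same way), and the stop condition follows the count attribute mentioned above:

```python
import requests

# Placeholder values; the endpoint is just an example of a list API.
ORG, PROJECT, PAT = "myorg", "myproject", "..."
url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/git/repositories"
page_size, skip, items = 200, 0, []

while True:
    params = {"api-version": "6.0", "$top": page_size, "$skip": skip}
    resp = requests.get(url, params=params, auth=("", PAT), timeout=30)
    resp.raise_for_status()
    body = resp.json()
    items.extend(body["value"])
    if body["count"] < page_size:
        break  # a short (or empty) page means we are done
    skip += page_size

print(f"fetched {len(items)} items")
```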
It looks like you can pass $top and continuationToken to list Azure Git Refs.
The documentation is here:
https://learn.microsoft.com/en-us/rest/api/azure/devops/git/refs/list?view=azure-devops-rest-6.0

uri module with response times

I'd like to get the response times from sites in Ansible, something along the lines of this, but in Ansible. I'm using the uri module, but it seems it does not support response times.
I do not want to use the callback plugin with the time profile, because I specify multiple URLs in a single task.
I see Ansible does not return the values I require. Is this something someone has already done?
So, I've created a whole playbook out of this request. The playbook itself:
Checks whether URLs return status code 200
Measures the response time from the host to the server
Sends a Slack message on failure
Sends the logs to an Elasticsearch server
One could set up a cronjob to run the playbook every x seconds.
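The core measurement itself is small. As a sketch, this is roughly what the playbook does per URL (the URLs are examples), and something like it could also be run from a task via the script module if you need finer granularity than the uri module gives you:

```python
import time
import urllib.request

# Example URLs; replace with the sites you want to probe.
URLS = ["https://example.com/", "https://example.org/"]

for url in URLS:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        status = resp.status  # check for 200 here if desired
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{url} status={status} time={elapsed_ms:.1f}ms")
```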

Asana - Rest API - Multipart/form image upload times out

I am working on a little tool to upload issues found during development to Asana. I am able to use GET and POST to create tasks etc., but I am unable to do a proper multipart form upload.
When I run my image upload POST request through an independent Perl-based CGI script, I get 200s back and an image saved on my server.
When I target Asana, I get 504 gateway timeouts. I am thinking there must be something malformed in my request that the Perl script is lenient about, but I am hard pressed to find it.
Is there a web expert or Asana expert out there who might be able to shed some light on what might be missing?
Note that the Wireshark capture has an extra field. The Asana docs indicate a task field; I have tried with and without that field, since it is unclear whether the task ID encoded in the URL satisfies that requirement.
I found the problem!
My boundary= had quotes around the value, which my CGI/Apache setup let through but Asana did not.
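For anyone hitting the same thing: letting the HTTP library generate the multipart body (and its unquoted boundary) sidesteps this problem entirely. A sketch with Python requests against Asana's task attachments endpoint; the token and task ID are placeholders:

```python
import requests

# Placeholders: your personal access token and the target task's ID.
TOKEN, TASK_GID = "0/abc123...", "1234567890"

with open("screenshot.png", "rb") as f:
    resp = requests.post(
        f"https://app.asana.com/api/1.0/tasks/{TASK_GID}/attachments",
        headers={"Authorization": f"Bearer {TOKEN}"},
        # requests builds the multipart body itself, including a valid
        # (unquoted) boundary in the Content-Type header.
        files={"file": ("screenshot.png", f, "image/png")},
        timeout=30,
    )
resp.raise_for_status()
print(resp.json()["data"])
```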

haproxy load balance based on Jenkins api

Using HAProxy, is it possible to load balance based on the output of a GET request to a specific URL? The use case for this is load balancing between a set of Jenkins machines and routing the next automated job to the least busy server.
For example, I can hit this URL: server-1/computer/api/json?pretty&tree=busyExecutors
which gives an output like:
{
  "busyExecutors" : 5
}
In this case we have 5 busy executors.
I'd like HAProxy to hit this URL, assess which server is least busy, and route the next job there. Does this sound possible? Really, the exact output and busyExecutors are irrelevant here; I'm just looking for a way to get some kind of information from the Jenkins servers and load balance based on that info.
I've looked into balance url_param and balance uri, but neither really seems to be what I'm looking for. I've also tested balance leastconn, and it is not what I'm looking for either.
If I'm completely off base here, let me know, and if there is a better way to go about this, I'm all ears.
Thanks
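For what it's worth, the polling side of this is straightforward outside HAProxy. A minimal Python sketch that queries each Jenkins server's busyExecutors counter and picks the least busy one (the server list is hypothetical); its output could then drive the routing decision, e.g. by adjusting server weights through HAProxy's runtime API:

```python
import json
import urllib.request

# Hypothetical Jenkins servers exposing the busyExecutors counter
# via the JSON API shown in the question.
SERVERS = ["http://server-1", "http://server-2", "http://server-3"]

def busy_executors(base_url):
    """Return the number of busy executors reported by one Jenkins server."""
    url = f"{base_url}/computer/api/json?tree=busyExecutors"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)["busyExecutors"]

least_busy = min(SERVERS, key=busy_executors)
print(f"route next job to {least_busy}")
```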

Size of API response is causing R12 timeout error on Heroku

So I have an API running on Heroku where one of the actions returns a large list (300+ items) of kites (via Kite.all in Rails). What's happening is that I get an R12 Request Timeout error. Is there a way to avoid this? I was thinking of a paginated response, but I wonder if there are better techniques out there?
There are multiple things you can do.
Paginate - this would be the most recommended (see the sketch after this list).
Increase the timeout. If it's set to 60s, try setting it to 120s or more.
Identify the bottleneck or the most time-consuming part of the request. It could be an N+1 query problem, the serialization library, etc.
Only send required information in the response. The client of the API might not require all the information about a kite. Send only minimal information, and expose another API to get more detailed information about a particular kite.
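As a sketch of the pagination idea: here is the pattern in Python terms. The question's app is Rails, where this maps onto limit/offset queries, so treat the names here as purely illustrative:

```python
# Pure-Python sketch of offset pagination; in Rails this corresponds to
# Kite.limit(per_page).offset((page - 1) * per_page), or a gem like kaminari.
def paginate(records, page=1, per_page=50):
    start = (page - 1) * per_page
    return {
        "items": records[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": len(records),
        "has_more": start + per_page < len(records),
    }

kites = [{"id": i} for i in range(300)]  # stand-in for Kite.all
print(paginate(kites, page=6)["has_more"])  # -> False (last page)
```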
