I'd like to get response times for sites in Ansible, something along these lines, but in Ansible. I'm using the uri module, but it seems it does not report response times.
I'd rather not use the profiling callback plugin, because I list multiple URLs in a single task.
Since Ansible does not return the values I need: is this something someone has already done?
So, I've created a whole playbook out of this request. The playbook itself includes:
Checks whether URLs return status code 200
Measures the response time from the host to the server
Sends a Slack message on failure
Sends the logs to an Elasticsearch server
One could set up a cron job to run the playbook every x seconds. A trimmed-down sketch of the idea follows below.
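Something like this (a sketch, not the actual playbook: the URLs are placeholders, and curl's %{time_total} timer stands in for the uri module, which does not expose timings):

```yaml
# Minimal sketch: the uri module does not expose timings, so shell out to curl.
- hosts: localhost
  gather_facts: false
  vars:
    urls:                                   # placeholder URLs
      - https://example.com/health
      - https://example.org/health
  tasks:
    - name: Measure status code and total response time per URL
      command: curl -o /dev/null -sS -w '%{http_code} %{time_total}' {{ item }}
      register: checks
      loop: "{{ urls }}"
      changed_when: false

    - name: Show the measured response times
      debug:
        msg: "{{ item.item }} -> status {{ item.stdout.split(' ')[0] }}, {{ item.stdout.split(' ')[1] }}s"
      loop: "{{ checks.results }}"

    - name: Flag unhealthy URLs (hook the Slack/Elasticsearch tasks in here)
      fail:
        msg: "{{ item.item }} returned {{ item.stdout.split(' ')[0] }}"
      when: item.stdout.split(' ')[0] != '200'
      loop: "{{ checks.results }}"
```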
I'm trying to find a way to get response times from Traefik per route.
For instance:
/api/asdf/.* 123.5ms
/blub/narf/.* 70.5ms
/blub/.* 1337.8ms
and so on.
This doesn't seem to be a very unusual requirement, but after a lot of googling I didn't find anything that could do the job.
I even had a look at the middlewares, but there is no way to get the response time of a request: a middleware only hooks into the request, and there is no hook that is called after the request completes.
The Traefik log files actually contain this information (at debug log level), and if I could somehow tail them, e.g. with a Python script, I could run a list of regexes on them and collect the response times that way (a rough sketch of that idea follows below). But tailing docker logs is quite messy IMHO, and I think there should be a more obvious way that I haven't found yet 🤔
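Concretely, such a tailer could look roughly like this (my own illustration, not an existing tool; it assumes Traefik writes JSON access logs to a file and that entries carry RequestPath and a Duration in nanoseconds — verify both against your own log output; more specific patterns must come first, since the first match wins):

```python
# Rough sketch: tail Traefik's JSON access log and aggregate response
# times per route pattern. Log path and field names are assumptions.
import json
import re
import time
from collections import defaultdict

ROUTES = [re.compile(p) for p in (r"/api/asdf/.*", r"/blub/narf/.*", r"/blub/.*")]
stats = defaultdict(lambda: [0, 0.0])  # pattern -> [request count, total ms]

def follow(path):
    """Yield lines as they are appended to the file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.2)
                continue
            yield line

for line in follow("/var/log/traefik/access.log"):  # hypothetical path
    try:
        entry = json.loads(line)
    except ValueError:
        continue  # skip non-JSON lines
    for pattern in ROUTES:
        if pattern.match(entry.get("RequestPath", "")):
            count_ms = stats[pattern.pattern]
            count_ms[0] += 1
            count_ms[1] += entry["Duration"] / 1e6  # ns -> ms
            print(f"{pattern.pattern} avg {count_ms[1] / count_ms[0]:.1f}ms")
            break  # first (most specific) match wins
```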
I can't imagine that I'm the first person trying to track response times per route, so why can't I find anything? 🤷
Does someone perhaps have an idea in which direction I should search?
Thank you in advance!
If you don't find anything, you can take inspiration from this project: https://github.com/traefik/plugin-rewritebody. It enables the user to replace the response body, but you get the idea: there is an example of how to get at the response, and you can add your own logic to write the response time to a file or whatever you like.
You can try using Prometheus with Traefik. Here is a sample docker-compose setup which will do the job.
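A minimal illustration of such a setup (not necessarily the original sample; the image tags, ports, and the prometheus.yml scrape config are assumptions):

```yaml
# docker-compose.yml sketch: Traefik exposing Prometheus metrics,
# with a Prometheus container scraping them.
version: "3"
services:
  traefik:
    image: traefik:v2.10            # illustrative tag
    command:
      - --providers.docker=true
      - --api.insecure=true         # exposes the dashboard on :8080
      - --metrics.prometheus=true   # exposes metrics for Prometheus
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      # hypothetical scrape config pointing at traefik:8080
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
```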
You might want to check out this open-source repo.
We calculated the average response time of APIs by enabling Prometheus metrics in Traefik.
The expression we are using for this is:
expr: sum(traefik_service_request_duration_seconds_sum{instance="company.com:12345",service=~"backend-module-test.*"})
      / sum(traefik_service_request_duration_seconds_count{instance="company.com:12345",service=~"backend-module-test.*"})
      * 1000
This expression is evaluated over windows of 1m, 5m, 10m, etc., and the resulting graph is displayed on the dashboard.
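For reference, a hedged sketch of the two pieces involved: enabling the metrics endpoint in Traefik's static configuration, and pre-computing a windowed average with a Prometheus recording rule (the bucket values, rule name, and 5m window are illustrative, not from the original setup):

```yaml
# traefik.yml (static config): expose Traefik's Prometheus metrics
metrics:
  prometheus:
    buckets: [0.1, 0.3, 1.2, 5.0]   # illustrative latency buckets
```

```yaml
# Prometheus rules file: average latency in ms per service over 5m
groups:
  - name: traefik-latency
    rules:
      - record: service:traefik_request_duration_ms:avg5m
        expr: >
          sum by (service) (rate(traefik_service_request_duration_seconds_sum[5m]))
          / sum by (service) (rate(traefik_service_request_duration_seconds_count[5m]))
          * 1000
```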
Other solutions are covered in the Traefik docs.
We are using a Jenkins pipeline to run JMeter tests against one of our application's APIs. Everything is working OK, but there are cases where the application returns an error. We would like to log the request payload and the timestamp for such failures, so that we can investigate the corresponding failures in the application.
Is there a way I can instruct JMeter to log the request data for cases which result in a failure?
The easiest option is adding a Listener like the Simple Data Writer to your test plan.
The configuration to save the timestamp and payload would look like the following.
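As a sketch, the equivalent save-service properties (assumptions; adjust them to your needs) would be:

```properties
# user.properties sketch: what the Simple Data Writer records per sample
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS
# samplerData stores the request payload
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
# keep the results file small: store response data only for failures
jmeter.save.saveservice.response_data.on_error=true
```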
Once the test finishes, you will be able to inspect the request details (if any) using the View Results Tree listener.
More information: How to Save Response Data in JMeter
Jenkins: how to get "Trigger builds remotely" data from the POST body
Quay.io (a private Docker container registry) sends notifications about build status through a webhook POST, with the data in the body. I tried googling and reading the Jenkins docs, but found only how to read parameters from the URL.
I found a plugin (Generic Webhook Trigger) which can do this partially. It listens on only one endpoint (http://{JENKINS_URL}/generic-webhook-trigger/invoke), and to start different jobs I need to use regexps.
At the same time, I need to set up at least 3 notifications on quay.io and a lot of webhooks from different services. Maybe somebody knows how to set up the following in Jenkins:
Create a route like {JENKINS_URL}/jobName/…
Take the whole POST body and write it into a $POST_DATA variable.
Execute a script with the $POST_DATA parameter.
Any other manipulation I'm able to do myself in the script.
Suppose you have a job called some-job-name.
Question 1:
Check the "Trigger builds remotely". Specify a token, like some-job-name.
Point the webhook to http://{JENKINS_URL}/generic-webhook-trigger/invoke?token=some-job-name.
Now this will be the only job triggered by this request.
Question 2:
Set the JSONPath to just $ and it will evaluate to the entire POST data. Use any variable name you like, such as variable.
Question 3:
Just use the variable from question 2, like $variable. A sketch of the whole setup follows below.
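Put together, the same setup in declarative pipeline syntax would look roughly like this (a sketch assuming the Generic Webhook Trigger plugin is installed; the shell step and script name are placeholders):

```groovy
// Sketch: trigger this job via .../generic-webhook-trigger/invoke?token=some-job-name
// and capture the whole POST body into $POST_DATA.
pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [
                [key: 'POST_DATA', value: '$']  // JSONPath '$' = entire body
            ],
            token: 'some-job-name'
        )
    }
    stages {
        stage('Handle webhook') {
            steps {
                // hand the payload to your own script (placeholder name)
                sh 'echo "$POST_DATA" | ./handle-webhook.sh'
            }
        }
    }
}
```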
Using HAProxy, is it possible to load balance based on the output of a GET request to a specific URL? The use case for this is load balancing between a set of Jenkins machines and routing the next automated job to the least busy server.
For example, I can hit this URL: server-1/computer/api/json?pretty&tree=busyExecutors
which gives an output like:
{
"busyExecutors" : 5
}
in this case we have 5 busy executors.
I'd like HAProxy to hit this URL and assess which server is least busy, then route the next job there. Does this sound possible? Really, the output and busyExecutors are irrelevant here; I'm just looking for a way to get some kind of information from the Jenkins servers and load balance based on that info.
I've looked into balance url_param and balance uri, but neither really seems to be what I'm looking for. I've also tested balance leastconn, and it also is not what I'm looking for.
If I'm completely off base here, let me know, and if there is a better way to go about this, I'm all ears.
Thanks
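One direction worth knowing about (not from this thread, so treat it as an assumption to verify): HAProxy cannot evaluate a JSON body itself, but its agent-check feature lets an external agent report a weight per server. A small agent next to each Jenkins could poll busyExecutors and answer HAProxy's probes with a weight, roughly like this:

```python
# Hypothetical agent-check responder: HAProxy connects periodically and
# receives a weight percentage derived from Jenkins' busyExecutors count.
import json
import socketserver
import urllib.request

# assumed local Jenkins API endpoint
JENKINS_API = "http://localhost:8080/computer/api/json?tree=busyExecutors"

class AgentHandler(socketserver.StreamRequestHandler):
    def handle(self):
        try:
            with urllib.request.urlopen(JENKINS_API, timeout=2) as resp:
                busy = json.load(resp)["busyExecutors"]
        except Exception:
            self.wfile.write(b"down\n")  # Jenkins unreachable: take server out
            return
        # fewer busy executors -> higher weight, clamped to 1..100
        weight = max(1, 100 - busy * 10)
        self.wfile.write(("%d%%\n" % weight).encode())

if __name__ == "__main__":
    socketserver.TCPServer(("0.0.0.0", 9999), AgentHandler).serve_forever()
```

On the HAProxy side, each backend server would then carry something like `weight 100 agent-check agent-port 9999 agent-inter 5s`, so traffic drifts toward the least busy Jenkins.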
We have the following situation:
We invoke a URL which runs an action in a controller. The action is fairly long-running: it builds a big string of XML, generates a PDF, and is supposed to redirect when done.
After 60 seconds or so, the browser gets a 200, but with a content type of "application/x-unknown-content-type", no body, and no response headers (using Tamper to look at the headers).
The controller action actually continues to run to completion, producing the PDF.
This is happening in our prod environment; in staging, the controller action runs to completion, redirecting as expected.
Any suggestions on where to look?
We're running Rails 2.2.2 on Apache/Phusion Passenger.
Thanks,
I am not 100% sure, but probably your Apache times out the request to the Rails application. Could you try setting Apache's Timeout directive higher? Something like:
Timeout 120
I'd consider bumping this task off to a job queue and returning immediately, rather than leaving the user to sit and wait; otherwise you're heading for a world of problems when lots of people try to use this and you run out of available Rails app instances to handle new connections.
One easy way to do this might be to use an Ajax POST to trigger creating the document, drop the work into Delayed::Job, and then run a periodic check every 10 seconds via Ajax, informing the waiting user of the job's status. Once delayed_job has finished processing the task in the background and updated something in the database to indicate it is complete, you can redirect the user via Ajax to the newly created document. A sketch follows below.
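A sketch of that flow in Rails 2-era code (class, column, and route names are all illustrative, not from the original app):

```ruby
# Controller: enqueue the slow work and return at once; the client polls #status.
class DocumentsController < ApplicationController
  def create
    report = Report.create!(:status => "pending")
    Delayed::Job.enqueue GeneratePdfJob.new(report.id)
    render :json => { :report_id => report.id }
  end

  def status
    report = Report.find(params[:id])
    if report.status == "done"
      render :json => { :redirect_to => report_path(report) }
    else
      render :json => { :status => report.status }
    end
  end
end

# delayed_job only needs an object with a #perform method:
class GeneratePdfJob < Struct.new(:report_id)
  def perform
    report = Report.find(report_id)
    # ... build the XML and generate the PDF here (the slow part) ...
    report.update_attributes!(:status => "done")
  end
end
```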