Here is my iRule config:
when HTTP_REQUEST {
    switch -glob [HTTP::query] {
        "*NetTest0*" {
            HTTP::respond 200 content "NetTest0()"
        }
        "*NetTest1*" {
            HTTP::respond 200 content "NetTest1()"
        }
        "*NetTest2*" {
            HTTP::respond 200 content "NetTest2()"
        }
        "*NetTest3*" {
            HTTP::respond 200 content "NetTest3()"
        }
    }
}
Is there any method to get the URI as a variable and replace NetTest0 with NetTest0()?
You can use the HTTP::uri command to get and set the URI. Depending on the complexity of your replacements, you can use string map (preferred for performance reasons) or regsub (avoid if possible, but useful if you need it).
Here are a couple of string map examples that should help you get started, plus a short sketch after the links:
https://community.f5.com/t5/technical-forum/variable-in-a-string-map/td-p/263863
https://community.f5.com/t5/technical-forum/string-map-with-variables-including-quotes/td-p/252969
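A minimal sketch of the idea, assuming you simply want to capture the query string into a variable and wrap the NetTest tokens in parentheses before responding (the token list here is illustrative):
when HTTP_REQUEST {
    # capture the query string into a variable (use [HTTP::uri] if you want the full URI instead)
    set query [HTTP::query]

    # string map replaces each token with its "called" form
    set mapped [string map {NetTest0 NetTest0() NetTest1 NetTest1() NetTest2 NetTest2() NetTest3 NetTest3()} $query]

    HTTP::respond 200 content $mapped
}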
We recently launched a new platform, and engaging on DevCentral is far easier than before; feel free to join us there for future F5-related questions!
I'm currently evaluating whether k6 fits our load testing needs. We have a fairly traditional website architecture that uses Apache webservers with PHP and a MySQL database. Sending simple HTTP requests with k6 looks simple enough, and I think we will be able to test all major functionality with it, as we don't rely on JavaScript that much and most pages are static.
However, I'm unsure how to deal with resources (stylesheets, images, etc.) that are referenced in the HTML that is returned in the requests. We need to load them as well, as this sometimes leads to database requests, which must be part of the load test.
Is there some out-of-the-box functionality in k6 that allows you to load all the resources like a browser would? I'm aware that k6 does NOT render the page and I don't need it to. I only need to request all the resources inside the HTML.
You basically have two options, both with their caveats:
Record your session: you can either export a HAR file directly from the browser, or use a recording extension made for your browser (there are ones for both Firefox and Chrome). Both should be usable without a k6 Cloud account; you just need to set them to download the HAR, and they will automatically (and somewhat silently) download it when you hit stop. Then either use the built-in k6 HAR converter (which is deprecated, but still works) or the newer har-to-k6 converter.
This method is particularly good if you have a lot of pages and/or resources, and it even works if you have a single-page style of application, as it just takes what the browser requested as a HAR and transforms it into a script. And if there is nothing dynamic that needs to be entered (username/password), the final script can be used as is most of the time.
The biggest problem with this approach is that if you add a CSS file, you need to redo the whole exercise. This is even more problematic if your CSS/JS file names change with every deployment, or something like that. Which is what the next method is good for:
Use parseHTML and then find the elements you care about and make a request for them.
import http from "k6/http";
import {parseHTML} from "k6/html";
export default function() {
const res = http.get("https://stackoverflow.com");
const doc = parseHTML(res.body);
doc.find("link").toArray().forEach(function (item) {
console.log(item.attr("href"));
// make http gets for it
// or added them to an array and make one batch request
});
}
This will produce:
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] /opensearch.xml
INFO[0001] https://cdn.sstatic.net/Shared/stacks.css?v=53507c7c6e93
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=d3fa9a72fd53
INFO[0001] https://cdn.sstatic.net/Shared/Product/product.css?v=c9b2e1772562
INFO[0001] /feeds
INFO[0001] https://cdn.sstatic.net/Shared/Channels/channels.css?v=f9809e9ffa90
As you can see, some of the URLs are relative rather than absolute, so you will need to handle that. And in this example only some of them are CSS, so some more filtering is probably needed.
The problem here is that you need to write the code yourself, and if you add a relative link or something else new, you need to handle it. Luckily k6 is scriptable, so you can reuse the code :D.
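For illustration, here is a minimal sketch of that idea (the rel="stylesheet" filter and the naive relative-URL handling are assumptions you would adapt to your own pages):
import http from "k6/http";
import { parseHTML } from "k6/html";

export default function () {
    const res = http.get("https://stackoverflow.com");
    const doc = parseHTML(res.body);

    const requests = doc
        .find("link[rel='stylesheet']")
        .toArray()
        .map(function (item) {
            let href = item.attr("href");
            // naive relative-URL handling; a real resolver should cover more cases
            if (href.indexOf("//") === 0) {
                href = "https:" + href;
            } else if (href.indexOf("/") === 0) {
                href = "https://stackoverflow.com" + href;
            }
            return ["GET", href];
        });

    // fetch all stylesheets in parallel, as a browser would
    http.batch(requests);
}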
I've followed Михаил Стойков's suggestion and written my own function to load resources. You can choose how the resources are loaded (batch or sequential GETs) with options.concurrentResourceLoading.
import http from 'k6/http';
import { check } from 'k6';

// options, resolveUrl() and createHeader() are defined elsewhere in my script:
// options holds the custom concurrentResourceLoading flag, resolveUrl() turns
// relative links into absolute ones and createHeader() builds common request
// headers (one possible implementation is sketched below).

/**
 * @param {http.RefinedResponse<http.ResponseType>} response
 */
export function getResources(response) {
    const resources = [];
    response
        .html()
        .find('*[href]:not(a)')
        .each((index, element) => {
            resources.push(element.attributes().href.value);
        });
    response
        .html()
        .find('*[src]:not(a)')
        .each((index, element) => {
            resources.push(element.attributes().src.value);
        });
    if (options.concurrentResourceLoading) {
        const responses = http.batch(
            resources.map((r) => {
                return ['GET', resolveUrl(r, response.url), null, {
                    headers: createHeader(),
                }];
            })
        );
        responses.forEach((res) => {
            check(res, {
                'resource returns status 200': (r) => r.status === 200,
            });
        });
    } else {
        resources.forEach((r) => {
            const res = http.get(resolveUrl(r, response.url), {
                headers: createHeader(),
            });
            check(res, {
                'resource returns status 200': (r) => r.status === 200,
            });
        });
    }
}
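The helpers are not shown above; here is a minimal sketch of what resolveUrl and createHeader could look like (both are assumptions about my setup, not part of k6 itself):
// naive resolver: handles absolute, protocol-relative, root-relative and path-relative links
export function resolveUrl(link, pageUrl) {
    if (link.indexOf('http') === 0) {
        return link;                                        // already absolute
    }
    if (link.indexOf('//') === 0) {
        return 'https:' + link;                             // protocol-relative
    }
    const origin = pageUrl.split('/').slice(0, 3).join('/'); // e.g. https://example.com
    if (link.indexOf('/') === 0) {
        return origin + link;                               // root-relative
    }
    return pageUrl.substring(0, pageUrl.lastIndexOf('/') + 1) + link; // path-relative
}

// common headers sent with every resource request
export function createHeader() {
    return {
        'User-Agent': 'k6-load-test',
        'Accept': '*/*',
    };
}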
I use F5 and I have an issue.
I want to build an iRule that checks the following scenario:
url == "domain.com" and Content-Length (of the request) > 400
then
alert(response)
Is it possible to create this iRule?
I'm not sure what alert action you want (simulated here with the # take action comment), but the rule is pretty simple:
when HTTP_REQUEST {
    if { ([HTTP::host] eq "domain.com") and ([HTTP::header Content-Length] > 400) } {
        # take action
    }
}
Do you possibly mean you want to use alertd to generate an email or SNMP trap? If so, then as long as alertd is looking for the error you put in the log, that would take the alert action.
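For example, a minimal sketch of such a log line inside the # take action block (the message text is just an illustration of something alertd could be configured to match):
log local0.error "Oversized request to [HTTP::host][HTTP::uri]: [HTTP::header Content-Length] bytes"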
I was trying to use the Planner endpoint on version 1.0 of the Graph. My main goal is to update the status of a task and decide whether it is ‘completed’ or ‘to do’. The first thing I do is get all of my tasks. See the endpoint below:
https://graph.microsoft.com/v1.0/me/planner/tasks
function plannerCompleteTask(id, etag) {
    // strip the backslashes that escape the quotes inside the etag value
    var specialEtag = etag.replace(/\\/g, "");
    var deferred = $q.defer();
    var endpoint = config.baseGraphApiUrl + 'planner/tasks/' + id;
    var data = {
        "percentComplete": "100"
    };
    var configRest = {
        headers: {
            "content-type": "application/json",
            "If-Match": specialEtag
        }
    };
    //"completedDateTime": "2018-02-15T07:56:25.7951905Z",
    $http.patch(endpoint, data, configRest).then(function (result) {
        console.log('log code', result);
        deferred.resolve(result.status);
    });
    return deferred.promise;
}
When I send this request, it returns a 204 status with no content.
If I rerun the query with "percentComplete": 0 in the body, I get an error.
Also, if I try to log the result I get back from the AJAX call, it doesn't give me anything, as if no error information is sent back. I need this because I have to reload the data in my application, but right now my code runs before the changes on the Graph are completed, even though it returns a 204 status.
So I have no way to find out when the call fails, or when it has finished. Has anyone faced this issue before?
Thanks for reading and any help would be much appreciated. Cheers!
I think what you are looking for is the "Prefer" header. If you provide the "Prefer" header with the value "return=representation" in your PATCH request, the result of the PATCH will be the final task data, including the new etag, with a 200 status code, instead of the default behavior of returning a 204 "no content" status code.
Write operations in Planner are asynchronous. So, when possible, you should always update your local data based on the results of write operations with the Prefer header, instead of reading them again.
In your requests, since you are reading the data before the task update is complete, you are essentially updating the same state of the task to be completed and not completed at the same time, which is the reason for the conflict.
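As a minimal sketch, this is what the Prefer header could look like in the configRest from the question (the only changes are the added header, resolving with the returned task instead of just the status, and a hypothetical error callback to surface failures):
var configRest = {
    headers: {
        "content-type": "application/json",
        "If-Match": specialEtag,
        // ask Planner to return the updated task (200 + body) instead of 204 No Content
        "Prefer": "return=representation"
    }
};

$http.patch(endpoint, data, configRest).then(function (result) {
    // result.data is the updated task, including its new etag,
    // so local data can be refreshed without re-reading it from the Graph
    deferred.resolve(result.data);
}, function (error) {
    // surface failures (e.g. etag/If-Match conflicts) to the caller
    deferred.reject(error);
});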
I've implemented a search-as-you-type component in React and Relay. It's roughly the same setup as in "search functionality using relay".
It works as intended, with one exception: new results from the server never appear when I retype a search I've already performed on the client. It looks like Relay always goes to the local cache in this case.
So, for example, say I've searched for 'foo' and didn't find any results. Now, seconds later, another user on the website creates this 'foo', but Relay will never query the server since the cached response to the 'foo' search was an empty result.
Is there a pattern or best practice for this scenario?
The query is as follows. I call this.props.relay.setVariables to perform the search:
initialVariables: {
    search: '',
    hasSearch: false
},

fragments: {
    me: () => Relay.QL`
        fragment on Viewer {
            relationSearch(search: $search) @include(if: $hasSearch) {
                ... on User {
                    username
                }
            }
        }
    `
}
The answer seems to be to use this.props.relay.forceFetch with the search variables instead.
See https://facebook.github.io/relay/docs/api-reference-relay-container.html#forcefetch
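A minimal sketch of what that looks like in the component (the handler name is hypothetical; forceFetch takes the same partial variables as setVariables but always sends the query to the server, even if the data is cached locally):
handleSearchInput(search) {
    // setVariables would be answered from Relay's cache for a repeated term;
    // forceFetch re-runs the query against the server every time
    this.props.relay.forceFetch({
        search: search,
        hasSearch: search.length > 0
    });
}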
Someone correct me if this isn't best practice.
I want to prefix URLs which match my patterns. When I open a new tab in Firefox and enter a matching URL, the page should not be loaded normally; the URL should first be modified, and only then should the page start loading.
Is it possible to modify a URL through a Mozilla Firefox add-on before the page starts loading?
Browsing the HTTPS Everywhere add-on suggests the following steps:
Register an observer for the "http-on-modify-request" observer topic with nsIObserverService
Proceed if the subject of your observer notification is an instance of nsIHttpChannel and subject.URI.spec (the URL) matches your criteria
Create a new nsIStandardURL
Create a new nsIHttpChannel
Replace the old channel with the new. The code for doing this in HTTPS Everywhere is quite dense and probably much more than you need. I'd suggest starting with chrome/content/IOUtils.js.
Note that you should register a single "http-on-modify-request" observer for your entire application, which means you should put it in an XPCOM component (see HTTPS Everywhere for an example).
The following articles do not solve your problem directly, but they do contain a lot of sample code that you might find helpful:
https://developer.mozilla.org/en/Setting_HTTP_request_headers
https://developer.mozilla.org/en/XUL_School/Intercepting_Page_Loads
Thanks to Iwburk, I have been able to do this.
We can do this by overriding the nsIHttpChannel with a new one. Doing this is slightly complicated, but luckily the add-on HTTPS Everywhere implements this to force an HTTPS connection.
HTTPS Everywhere's source code is available online.
Most of the code needed for this is in the files
IOUtil.js
ChannelReplacement.js
We can work with the above files alone, provided we have the basic variables like Cc, Ci set up and the function xpcom_generateQI defined.
var httpRequestObserver = {
    observe: function (subject, topic, data) {
        if (topic == "http-on-modify-request") {
            var httpChannel = subject.QueryInterface(Components.interfaces.nsIHttpChannel);
            var requestURL = httpChannel.URI.spec;
            // isToBeReplaced() and getURL() are my own helpers that decide whether
            // a URL matches my patterns and compute the rewritten URL
            if (isToBeReplaced(requestURL)) {
                var newURL = getURL(requestURL);
                ChannelReplacement.runWhenPending(subject, function () {
                    var cr = new ChannelReplacement(subject, newURL);
                    cr.replace(true, null);
                    cr.open();
                });
            }
        }
    },

    get observerService() {
        return Components.classes["@mozilla.org/observer-service;1"]
            .getService(Components.interfaces.nsIObserverService);
    },

    register: function () {
        this.observerService.addObserver(this, "http-on-modify-request", false);
    },

    unregister: function () {
        this.observerService.removeObserver(this, "http-on-modify-request");
    }
};

httpRequestObserver.register();
The code replaces the request rather than redirecting it.
While I have tested the above code well enough, I am not sure about its inner workings. As far as I can make out, it copies all the attributes of the requested channel and sets them on the replacement channel, after which the output requested by the original request is supplied through the new channel.
P.S. I had seen a SO post in which this approach was suggested.
You could listen for the page load event or maybe the DOMContentLoaded event instead. Or you can make an nsIURIContentListener but that's probably more complicated.
Is it possible to modify a URL through a Mozilla Firefox add-on before the page starts loading?
YES it is possible.
Use page-mod of the Add-on SDK, setting contentScriptWhen: "start".
Then, after completely preventing the document from getting parsed, you can either:
1. fetch a different document from the same domain and inject it into the page, or
2. do a location.replace() call after some document.URL processing.
Here is an example of doing option 1: https://stackoverflow.com/a/36097573/6085033
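And a minimal sketch of option 2 with the Add-on SDK (the include pattern and the prefixing logic are just placeholders for your own rules):
// main.js of an Add-on SDK extension
var pageMod = require("sdk/page-mod");

pageMod.PageMod({
    // hypothetical pattern for the URLs you want to rewrite
    include: "*.example.com",
    // run before the document is parsed
    contentScriptWhen: "start",
    // hypothetical rewrite: jump to the prefixed URL if the prefix is missing
    contentScript: 'if (document.URL.indexOf("/prefix/") === -1) {' +
                   '  location.replace(document.URL.replace(location.host, location.host + "/prefix"));' +
                   '}'
});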