I am using Jasmine with PhantomJS to run test cases.
In my typical test case, I make a service call, wait for response and confirm response.
Some requests can return in a few seconds and some can take up to a minute to return.
When run through PhantomJS, the test case fails for the service call that is supposed to take a minute (it fails because the response has not yet been received).
What's interesting is that the test passes when run through Firefox.
I have tried looking at tcpdump and the headers are the same for requests through both browsers, so this looks like a browser timeout issue.
Has anyone had a similar issue? Any ideas as to where the timeout could be configured? Or do you think the problem is something else?
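(For reference: if it does turn out to be a spec timeout rather than a browser issue, Jasmine's async timeout can be raised. This is only a minimal sketch assuming Jasmine 2.x; callService is a hypothetical stand-in for the real service call, not code from the original suite.)

// Raise Jasmine's default async timeout (5 s) globally, or per spec
// via the optional third argument to it().
jasmine.DEFAULT_TIMEOUT_INTERVAL = 90000; // 90 seconds

it('waits for a slow service call', function (done) {
    callService().then(function (response) {   // hypothetical helper
        expect(response.status).toBe(200);
        done();
    });
}, 90000); // per-spec timeout override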
Ah the pain of PhantomJS.
Apparently it turned out that I was using JavaScript's bind function, which is not supported in PhantomJS.
The unsupported call messed up the state of a global variable (my fault), and that is what made the test fail.
But the root cause was using bind.
Solution: use a shim for bind, like the one from https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Function/bind
if (!Function.prototype.bind) {
    Function.prototype.bind = function (oThis) {
        if (typeof this !== "function") {
            // closest thing possible to the ECMAScript 5 internal IsCallable function
            throw new TypeError("Function.prototype.bind - what is trying to be bound is not callable");
        }

        var aArgs = Array.prototype.slice.call(arguments, 1),
            fToBind = this,
            fNOP = function () {},
            fBound = function () {
                return fToBind.apply(this instanceof fNOP && oThis
                                         ? this
                                         : oThis,
                                     aArgs.concat(Array.prototype.slice.call(arguments)));
            };

        fNOP.prototype = this.prototype;
        fBound.prototype = new fNOP();

        return fBound;
    };
}
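With the shim in place, the usual bind pattern works under PhantomJS. A trivial illustration (not from the original test suite):

var counter = {
    count: 0,
    increment: function () { this.count++; }
};

// Without the shim, older PhantomJS would throw here because
// Function.prototype.bind is undefined.
var bump = counter.increment.bind(counter);
bump();
console.log(counter.count); // 1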
I had exactly the same issue. All you have to do is add a setTimeout call that exits PhantomJS (add it before you request your webpage):

setTimeout(function () { phantom.exit(); }, 20000); // stop after 20 seconds

page.open('your url here', function (status) {
    // operations here
});
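A blunt timer works, but where possible it is cleaner to exit from the page.open callback itself, and to cap slow resources with resourceTimeout (available in PhantomJS 1.9+). A rough sketch, not the original script:

var page = require('webpage').create();

// Abort any single resource that takes longer than 30 s (PhantomJS 1.9+).
page.settings.resourceTimeout = 30000;

page.open('your url here', function (status) {
    // operations here
    phantom.exit(status === 'success' ? 0 : 1);
});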
I want to examine HTTP requests in an extension for Firefox. To begin figuring out how to do what I want, I figured I'd just log everything and see what comes up:
webRequest.onResponseStarted.addListener(
    (stuff) => { console.log(stuff); },
    {urls: [/^.*$/]}
);
The domain is insignificant, and I know the regex works, verified in the console. When running this code I get no logging. When I take out the filter parameter I get every request:
webRequest.onResponseStarted.addListener(
    (stuff) => { console.log(stuff); }
);
Cool, I'm probably doing something wrong, but I can't see what.
Another approach is to manually filter on my own:
var webRequest = Components.utils.import("resource://gre/modules/WebRequest.jsm", {});

var makeRequest = function (type) {
    webRequest[type].addListener(
        (stuff) => {
            console.log(!stuff.url.match(/google.com.*/));
            if (!stuff.url.match(/google.com.*/))
                return;
            console.log(type);
            console.log(stuff);
        }
    );
};

makeRequest("onBeforeRequest");
makeRequest("onBeforeSendHeaders");
makeRequest("onSendHeaders");
makeRequest("onHeadersReceived");
makeRequest("onResponseStarted");
makeRequest("onCompleted");
With the console.log above the if, I can see the regex returning true when I expect it to and the code making it past the if. When I remove that console.log, the code past the if no longer executes.
My question, then, is: how do I get the filtering parameter to work, or, if that is indeed broken, how can I get the code past the if to execute? Obviously this is a fire hose, and to begin searching for a solution I will need to reduce the data.
Thanks
urls must be a string or an array of match patterns. Regular expressions are not supported.
WebRequest.jsm uses resource://gre/modules/MatchPattern.jsm. Someone might confuse this with the util/match-pattern Add-on SDK API, which does support regular expressions.
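Following that answer, the filter should use match-pattern strings rather than a regex. A sketch, assuming the filter accepts an array of pattern strings as described above (the pattern *://*/* covers any http or https URL):

webRequest.onResponseStarted.addListener(
    (stuff) => { console.log(stuff); },
    {urls: ["*://*/*"]} // match pattern, not a RegExp
);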
I am facing a very strange problem. I had a set of tests which I run daily on Jenkins, and without any noticeable changes some asserts (expects) started to fail. The strange thing is that they fail ONLY when I execute the tests from Jenkins on BrowserStack. Locally everything is just fine, locally against BrowserStack everything is fine, and on Sauce Labs everything is fine. I have 3 it() blocks with similar expects:
value1 = $('.someclass');
value2 = ..
value3 = ..
expect(value1.getText()).toContain('tratata');
expect(value2.getText()).toContain('uhuhuhu');
expect(value3.getText()).toContain('ahahaha');
They are all situated in different it() blocks. Now the strange thing:
When I execute the tests, the test with the 1st assert block passes just fine. In the 2nd it block it says that an assert fails (I do some stuff in order to change the values), but manually/locally I see that everything is fine. Also, while the test is executing I can see the values changing (I even took screenshots and checked the visual log on BrowserStack). In the 3rd it block I perform another action and an assert fails again, BUT it compares against the value I expected from step 2, not step 1! So it looks like for some reason I am one step behind... If I comment out the it block, or just the asserts, in the 1st test, the 2nd one passes fine but the 3rd fails. If I comment out the 2nd it block, the 3rd passes fine.
It sounds like in this particular case some magic happens, and only on Jenkins and only on BrowserStack. By the way, the tests had been working for a while without any problems and started failing without any updates.
I thought that for some reason I had problems with the control flow. I WAIT for elements to be present, and in addition I tried the browser.sleep() anti-pattern to investigate it better, but it magically stays one step behind.
I am not looking for a particular solution directly, but any suggestions will be highly appreciated. I am not sure which additional info I should provide; I hope I have described the problem well enough.
#protractor2.1.0
#jasmine2.3.2
browser.ignoreSynchronization = false
it('', function () {
    expect(viewBookingDetailsPage.totalCostSection.depositDue.getText()).toContain('542.00');
    expect(viewBookingDetailsPage.totalCostSection.remainingBalance.getText()).toContain('4,878.00');
    expect(viewBookingDetailsPage.totalCostSection.totalDepositAmount.getText()).toContain('5,420.00');
});

it('', function () {
    $(viewBookingDetailsPage.eventAndItemsSection.addItemsBtn).click();
    helper.waitElementToBeVisisble(viewBookingDetailsPage.addItemsModal.modalOpen);
    viewBookingDetailsPage.addonItemAttribute(0, viewBookingDetailsPage.addItemsModal.events).click();
    helper.waitElementToBeVisisble(viewBookingDetailsPage.addItemsModal.eventSelectionPopup);
    viewBookingDetailsPage.addItemsModal.availableEvents.then(function (events) {
        // Morning event
        events[3].$('i').click();
        viewBookingDetailsPage.addonItemAttribute(0, viewBookingDetailsPage.addItemsModal.selectItem).click();
        viewBookingDetailsPage.addonItemAttribute(1, viewBookingDetailsPage.addItemsModal.selectItem).click();
        viewBookingDetailsPage.addItemsModal.addButton.click();
        helper.waitElementToDisappear(viewBookingDetailsPage.addItemsModal.modalOpen);
        helper.waitElementToBeVisible(viewBookingDetailsPage.addonItemDetail(0, 0, 0, viewBookingDetailsPage.eventAndItemsSection.itemName));
        expect(viewBookingDetailsPage.totalCostSection.depositDue.getText()).toContain('592.00');
        expect(viewBookingDetailsPage.totalCostSection.remainingBalance.getText()).toContain('5,328.00');
        expect(viewBookingDetailsPage.totalCostSection.totalDepositAmount.getText()).toContain('5,920.00');
    });
});

it('', function () {
    // addonItemDetail just returns an element inside multiple internal repeaters
    viewBookingDetailsPage.addonItemDetail(0, 0, 0, viewBookingDetailsPage.eventAndItemsSection.removeItem).click();
    expect(viewBookingDetailsPage.totalCostSection.depositDue.getText()).toContain('580.00');
    expect(viewBookingDetailsPage.totalCostSection.remainingBalance.getText()).toContain('5,220.00');
    expect(viewBookingDetailsPage.totalCostSection.totalDepositAmount.getText()).toContain('5,800.00');
});
I fixed this by waiting directly for the text to change:

browser.wait(function () {
    return viewBookingDetailsPage.totalCostSection.depositDue.getText().then(function (text) {
        return text === '592.00USD';
    });
}, 15000);

I am sure that something else is going wrong here, but it worked for me. If I find some free time I will try to enhance the answer and investigate deeper.
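An equivalent, slightly more readable way to express the same wait (assuming a Protractor version that ships ExpectedConditions, and the same '592.00' text as above) would be something like:

var EC = protractor.ExpectedConditions;

// Wait up to 15 s for the deposit-due cell to show the new total.
browser.wait(
    EC.textToBePresentInElement(viewBookingDetailsPage.totalCostSection.depositDue, '592.00'),
    15000
);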
This post hints at the fact that one of the steps could be blocked waiting to complete: http://makandracards.com/makandra/1709-single-step-and-slow-motion-for-cucumber-scenarios-using-javascript-selenium. Not sure if that helps at all.
I have my export server up and running and am using it to generate all charts on the server. One issue I am having is with data labels. If I use format it works OK, but I need formatter, and it does not appear to be called. This is the relevant portion of my options:
dataLabels: {
    enabled: true,
    color: '#606060',
    align: 'right',
    //format: '{point.y:.0f}',
    formatter: function () {
        var val = this.y;
        return val / 1000000 + 'M';
    }
},
Since the value just gets displayed normally, it appears that this function simply does not get called. When I use these exact options on the client it works fine. Is this not possible?
UPDATE:
I have my own export server (highcharts-convert.js), as I am doing this all on the server with PhantomJS. What I do is fetch data from another service and then build the infile as I would on the client. The problems I have run into are: 1. doing JSON.stringify on the infile strips the formatter functions; 2. if I turn the functions into strings, that does not work either, because I have to call a web service for the Highcharts PhantomJS process, so the config automatically gets converted to JSON, and when the converter does a JSON.parse the functions remain strings. I'm not sure if I need to change highcharts-convert.js to somehow turn them back into functions, but I can't figure out how, and I am not sure how to debug that file since it runs in a separate process (a PhantomJS child process). I have tried just about everything I can think of to fix this but have had no luck so far.
ALSO:
Because the function is a string, Highcharts throws this error:
TypeError: 'undefined' is not a function (evaluating 'axis.labelFormatter.call')
I found a solution, though I'm not sure it's the best one. I override JSON serialization so that functions are turned into strings; that way they don't get removed when the config is sent in the POST call. Then in highcharts-convert.js I turn them back into functions.
Sorry it took so long to get back. It's been a while, but I believe this is it. To stringify I did this:
function toJSONWithFuncs(obj) {
    Object.prototype.toJSON = function () {
        var sobj = {}, i;
        for (i in this)
            if (this.hasOwnProperty(i))
                sobj[i] = typeof this[i] == 'function' ?
                    this[i].toString() : this[i];
        return sobj;
    };
    var str = JSON.stringify(obj);
    delete Object.prototype.toJSON;
    return str;
}
Then in highcharts-convert.js's createChart I do this right before page.open:
input = params.infile;
input = input.replace(/:"function/g, ':function');
input = input.replace(/(}\|")/g, '}');
This simple method for caching dynamic content uses register_shutdown_function() to push the output buffer to a file on disk after exiting the script. However, I'm using PHP-FPM, with which this doesn't work; a 5-second sleep added to the function indeed causes a 5-second delay in executing the script from the browser. A commenter in the PHP docs notes that there's a special function for PHP-FPM users, namely fastcgi_finish_request(). There's not much documentation for this particular function, however.
The point of fastcgi_finish_request() seems to be to flush all data and proceed with other tasks, but what I want to achieve, as would normally work with register_shutdown_function(), is basically to put the contents of the output buffer into a file without the user having to wait for this to finish.
Is there any way to achieve this under PHP-FPM, with fastcgi_finish_request() or another function?
$timeout = 3600; // cache time-out
$file = '/home/example.com/public_html/cache/' . md5($_SERVER['REQUEST_URI']); // unique id for this page

if (file_exists($file) && (filemtime($file) + $timeout) > time()) {
    readfile($file);
    exit();
} else {
    ob_start();
    register_shutdown_function(function () use ($file) {
        // sleep(5);
        $content = ob_get_flush();
        file_put_contents($file, $content);
    });
}
Yes, it's possible to use fastcgi_finish_request for that. You can save this file and see that it works:
<?php
$timeout = 3600; // cache time-out
$file = '/home/galymzhan/www/ps/' . md5($_SERVER['REQUEST_URI']); // unique id for this page

if (file_exists($file) && (filemtime($file) + $timeout) > time()) {
    echo "Got this from cache<br>";
    readfile($file);
    exit();
} else {
    ob_start();
    echo "Content to be cached<br>";
    $content = ob_get_flush();
    fastcgi_finish_request();
    // sleep(5);
    file_put_contents($file, $content);
}
Even if you uncomment the line with sleep(5), you'll see that page still opens instantly because fastcgi_finish_request sends data back to browser and then proceeds with whatever code is written after it.
First of all,
if you cache dynamic content this way, you're doing it wrong. It can be made to work, but the approach itself cripples it.
If you want to cache and handle content efficiently, create one class and wrap all the caching functions in it.
Yes, you can use fastcgi_finish_request() like register_shutdown_function(). The only difference is that fastcgi_finish_request() sends the output (if any) and DOES NOT TERMINATE the script, while a callback registered with register_shutdown_function() is invoked on script termination.
This is an old question, but I don't think any of the answers correctly answer the problem.
As I understand it, the problem is that none of the output gets sent to the client until the ob_get_flush() call that happens during the shutdown function.
To fix that issue, you need to pass a function and a chunk size to ob_start() in order to handle the output in chunks.
Something like this for your else clause:
$content = '';
$write_buffer_func = function ($buffer, $phase) use ($file, &$content) {
    $content .= $buffer;
    if ($phase & PHP_OUTPUT_HANDLER_FINAL) {
        file_put_contents($file, $content);
    }
    return $buffer;
};
ob_start($write_buffer_func, 1024);
I'm building an nsIProtocolHandler implementation in Delphi. (more here)
And it's working already. Data the module builds gets streamed over an nsIInputStream. I've got all the nsIRequest, nsIChannel and nsIHttpChannel methods and properties working.
I've started testing and I run into something strange. I have a page "a.html" with this simple HTML:
<img src="a.png">
Both "xxm://test/a.html" and "xxm://test/a.png" work in Firefox, and give above HTML or the PNG image data.
The problem is that when the HTML page is displayed, the image doesn't get loaded. When I debug, I see:
NewChannel gets called for a.png, (when Firefox is processing an OnDataAvailable notice on a.html),
NotificationCallbacks is set (I only need to keep a reference, right?)
RequestHeader "Accept" is set to "image/png,image/*;q=0.8,*/*;q=0.5"
but then, the channel object is released (most probably due to a zero reference count)
Looking at other requests, I would expect some other properties to get set (such as LoadFlags or OriginalURI) and AsyncOpen to get called, from where I can start getting the request responded to.
Does anybody recognise this? Am I doing something wrong? Perhaps with LoadFlags or the LoadGroup? I'm not sure when to call AddRequest and RemoveRequest on the LoadGroup, and peeking at nsHttpChannel and nsBaseChannel I'm not sure whether it's better to call RemoveRequest early or late (before or after OnStartRequest or OnStopRequest).
Update: Checked on the brand-new Firefox 3.5; still the same.
Update: To try to isolate the issue further, I tried "file://test/a1.html" containing <img src="xxm://test/a.png" /> and still only the above sequence of events happens. If I'm supposed to add this secondary request to a load group to get AsyncOpen called on it, I have no idea where to get a reference to one.
There's more: I find only one instance of the "Accept" string that gets added to the request headers; it queries for nsIHttpChannelInternal right after creating a new channel, but I don't even get this QueryInterface call through... (I posted it here)
Me again.
I am going to quote the same stuff from nsIChannel::asyncOpen():
If asyncOpen returns successfully, the channel is responsible for keeping itself alive until it has called onStopRequest on aListener or called onChannelRedirect.
If you go back to nsViewSourceChannel.cpp, there's one place where loadGroup->AddRequest is called and two places where loadGroup->RemoveRequest is being called.
nsViewSourceChannel::AsyncOpen(nsIStreamListener *aListener, nsISupports *ctxt)
{
    NS_ENSURE_TRUE(mChannel, NS_ERROR_FAILURE);

    mListener = aListener;

    /*
     * We want to add ourselves to the loadgroup before opening
     * mChannel, since we want to make sure we're in the loadgroup
     * when mChannel finishes and fires OnStopRequest()
     */
    nsCOMPtr<nsILoadGroup> loadGroup;
    mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup)
        loadGroup->AddRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this), nsnull);

    nsresult rv = mChannel->AsyncOpen(this, ctxt);

    if (NS_FAILED(rv) && loadGroup)
        loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                 nsnull, rv);

    if (NS_SUCCEEDED(rv)) {
        mOpened = PR_TRUE;
    }

    return rv;
}
and
nsViewSourceChannel::OnStopRequest(nsIRequest *aRequest, nsISupports* aContext,
                                   nsresult aStatus)
{
    NS_ENSURE_TRUE(mListener, NS_ERROR_FAILURE);
    if (mChannel)
    {
        nsCOMPtr<nsILoadGroup> loadGroup;
        mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
        if (loadGroup)
        {
            loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                     nsnull, aStatus);
        }
    }

    return mListener->OnStopRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                    aContext, aStatus);
}
Edit:
As I have no clue about how Mozilla works, I have to guess from reading some code. From the channel's point of view, once the original file is loaded, its job is done. If you want to load secondary items linked in the file, like an image, you have to implement that in the listener. See TestPageLoad.cpp. It implements a crude parser and retrieves child items upon OnDataAvailable:
NS_IMETHODIMP
MyListener::OnDataAvailable(nsIRequest *req, nsISupports *ctxt,
                            nsIInputStream *stream,
                            PRUint32 offset, PRUint32 count)
{
    //printf(">>> OnDataAvailable [count=%u]\n", count);
    nsresult rv = NS_ERROR_FAILURE;
    PRUint32 bytesRead = 0;
    char buf[1024];

    if (ctxt == nsnull) {
        bytesRead = 0;
        rv = stream->ReadSegments(streamParse, &offset, count, &bytesRead);
    } else {
        while (count) {
            PRUint32 amount = PR_MIN(count, sizeof(buf));
            rv = stream->Read(buf, amount, &bytesRead);
            count -= bytesRead;
        }
    }

    if (NS_FAILED(rv)) {
        printf(">>> stream->Read failed with rv=%x\n", rv);
        return rv;
    }
    return NS_OK;
}
The important thing is that it calls streamParse(), which looks at the src attribute of img and script elements and calls auxLoad(), which creates a new channel with a new listener and calls AsyncOpen().
uriList->AppendElement(uri);
rv = NS_NewChannel(getter_AddRefs(chan), uri, nsnull, nsnull, callbacks);
RETURN_IF_FAILED(rv, "NS_NewChannel");
gKeepRunning++;
rv = chan->AsyncOpen(listener, myBool);
RETURN_IF_FAILED(rv, "AsyncOpen");
Since it passes another instance of the MyListener object in there, that listener can also load more child items, ad infinitum, like a Russian doll situation.
I think I found it (myself); take a close look at this page. Why it doesn't highlight that the UUID has changed over versions isn't clear to me, but it would explain why things fail when (or just prior to) calling QueryInterface on nsIHttpChannelInternal.
With the new(er) UUID, I'm getting better results. As I mentioned in an update to the question, I've posted this on bugzilla.mozilla.org, I'm curious if and which response I will get there.