I'm running into what I believe is a server-side issue; however, I was told to try increasing the command timeout like this:
using (var db = new LEAP_Professional_DAL.DAL.LEAPEntitiesDAL())
{
    Int32 timeoutVal = Convert.ToInt32(System.Web.Configuration.WebConfigurationManager.AppSettings["commandTimeValue"]);
    ((IObjectContextAdapter)db).ObjectContext.CommandTimeout = timeoutVal;
    ...
}
I'm just wondering if there is a way to verify that this is working as I expect. The current value is set at 60 seconds.
Is there any way to verify that the CommandTimeout is working?
Set it to 1 second and execute WAITFOR DELAY '00:00:02'. If the timeout is being applied, the command will fail with a timeout exception.
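For example (a minimal sketch against the same LEAPEntitiesDAL context; depending on the EF version the exception may arrive wrapped in an EF exception, and -2 is the SqlClient error code for a timeout):

// requires: using System.Data.Entity.Infrastructure; using System.Data.SqlClient;
using (var db = new LEAP_Professional_DAL.DAL.LEAPEntitiesDAL())
{
    var ctx = ((IObjectContextAdapter)db).ObjectContext;
    ctx.CommandTimeout = 1; // deliberately shorter than the delay below

    try
    {
        // blocks for 2 seconds, longer than the 1-second timeout
        ctx.ExecuteStoreCommand("WAITFOR DELAY '00:00:02'");
        Console.WriteLine("CommandTimeout is NOT being applied.");
    }
    catch (SqlException ex)
    {
        Console.WriteLine(ex.Number == -2
            ? "CommandTimeout fired as expected."
            : "Command failed for another reason: " + ex.Message);
    }
}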
I would like to increase the default timeout for a wait in Frameworkium.
I do not want to do it with Selenium or any other 'workarounds'; I would like to change it in Frameworkium itself, since I can see it defined in the class BaseUITest, line 39:
private static final Duration
DEFAULT_TIMEOUT = Duration.of(10, SECONDS);
Any ideas? I have been through many pages but cannot find a specific Frameworkium setting for this.
wait.until(ExpectedConditions.elementToBeClickable(_element_));
I'd like to have, let's say, 30 seconds here, globally, for any condition.
I've found a solution:
PageFactory.newInstance(Class<T> clazz, Duration timeout)
or
new MyPage().get(Duration timeout)
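Used like this (a sketch; MyPage is a hypothetical page object, and the exact import path for PageFactory may vary between Frameworkium versions):

import java.time.Duration;

// wait up to 30 seconds for the page to load instead of the 10-second default
MyPage page = PageFactory.newInstance(MyPage.class, Duration.ofSeconds(30));

// or, equivalently, on the page object itself:
MyPage samePage = new MyPage().get(Duration.ofSeconds(30));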
I'm not sure I have a specific question other than: have other people seen this? And if so, is there a known workaround or fix? I have failed to find anything on Google on this topic.
Basically, we have an ASP.NET MVC application that we run on localhost and that makes extensive use of the ASP.NET in-memory cache. We cache many repeated requests to the DB.
Yesterday, we upgraded two of our dev machines to Windows 10 Creators Update. After that update, we noticed that page requests on just those machines slowed to a crawl: upwards of 30 seconds per page.
After some debugging and viewing logs, we are seeing that the system makes the same DB query 200-300 times per page request. Previously, the result would be cached the first time, and that query would not run again until the cache expired it.
What we are seeing is that this code:
var myObject = LoadSomethingFromDb();
HttpRuntime.Cache.Insert("test", myObject);
var test = HttpRuntime.Cache.Get("test");
at some point the Get returns null, even though it comes right after the Insert and even though there is no way the cache is close to full; the application has only just started.
Anybody else see this?
Never mind. We got bit by the absolute cache expiration parameter, which I neglected to include in the question's code because I didn't think it was relevant.
We were using an absolute cache expiration of:
DateTime.Now.AddMinutes(60)
Instead, we should have been using:
DateTime.UtcNow.AddMinutes(60)
Not sure why the former was fine in Windows prior to the Creators Update, but the change to UtcNow seems to make the cache work again.
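For reference, the corrected call looks roughly like this (a sketch; the dependency argument is an assumption about how the entry was originally inserted):

// requires: using System.Web; using System.Web.Caching;
// After the Creators Update this overload appears to treat the supplied
// expiration as UTC, so a local DateTime.Now can land in the past and
// expire the entry immediately (for time zones behind UTC).
HttpRuntime.Cache.Insert(
    "test",
    myObject,
    null,                             // no cache dependency
    DateTime.UtcNow.AddMinutes(60),   // absolute expiration, expressed in UTC
    Cache.NoSlidingExpiration);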
It appears that after the Windows Creators Update, the Cache.Insert overloads behave differently:
[Test]
public void CanDemonstrateCacheExpirationInconsistency()
{
    var cache = HttpRuntime.Cache;
    var now = DateTime.Now;
    var key1 = $"Now{now.Ticks}";
    var key2 = key1 + "2";
    var key3 = $"UtcNow{now.Ticks}";
    var key4 = key3 + "2";

    cache.Insert(key1, true, null, DateTime.Now.AddHours(1), Cache.NoSlidingExpiration);
    cache.Insert(key2, true, null, DateTime.Now.AddHours(1), Cache.NoSlidingExpiration, CacheItemPriority.Default, null);
    cache.Insert(key3, true, null, DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
    cache.Insert(key4, true, null, DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration, CacheItemPriority.Default, null);

    Assert.That(cache.Get(key1), Is.Null); // using this overload with DateTime.Now expires the entry immediately
    Assert.That(cache.Get(key2), Is.Not.Null);
    Assert.That(cache.Get(key3), Is.Not.Null);
    Assert.That(cache.Get(key4), Is.Not.Null);
}
I want to execute two queries in Zend Framework 2.
This is the content of my model file:
$email = $getData['login_email'];
$password = $getData['login_password'];

$select = $this->adapter->query("select count(*) as counter from users where email = '$email' and password = '" . md5($password) . "'");
$results = $select->execute();

if ($results->current()['counter'] == 1) {
    // $update_user = $this->adapter->query("UPDATE users SET session_id = '" . $session_id . "' WHERE email = '" . $email . "'");
    try {
        $update_user = $this->adapter->query("select * from users");
    } catch (\Exception $e) {
        \Zend\Debug\Debug::dump($e->__toString());
        exit;
    }
    $update_session = $update_user->execute();
}
For some reason, if I remove either one of the queries, the other one executes fine. I know it is weird, but I believe there is a rational explanation. The result of the try/catch part is:
The query itself is not written wrong; as you can see, I tried a simple select query and got the same result. I honestly have no idea what is wrong here. Please help; I have been looking for an answer on the internet for the last 5-6 days and have found nothing. If you want me to provide any more information, please ask. Thanks.
As this answer suggests, this is an issue with the mysqli driver using unbuffered queries by default.
To fix this, you have to buffer the result of the first query before running the next one. With ZF2, the Result interface has a buffer() method to achieve this:
$results = $select->execute();
$results->buffer();
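Applied to the code in the question, that looks roughly like this (a sketch; it also switches the queries to parameter binding, which avoids the injection risk of the original string-built SQL):

// first query: prepare, execute, then buffer the result so the mysqli
// connection is free to run the next query
$select = $this->adapter->query(
    "select count(*) as counter from users where email = ? and password = ?"
);
$results = $select->execute(array($email, md5($password)));
$results->buffer();

if ($results->current()['counter'] == 1) {
    $update_user = $this->adapter->query(
        "UPDATE users SET session_id = ? WHERE email = ?"
    );
    $update_session = $update_user->execute(array($session_id, $email));
}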
I am using Jasmine with PhantomJS to run test cases.
In my typical test case, I make a service call, wait for the response, and verify it. Some requests return within a few seconds, while others can take up to a minute.
When run through PhantomJS, the test case fails for the service call that takes a minute (it fails because the response has not yet been received). What's interesting is that the test passes when run through Firefox.
I have looked at tcpdump, and the headers are the same for requests through both browsers, so this looks like a browser timeout issue.
Has anyone had a similar issue? Any ideas as to where the timeout could be configured? Or do you think the problem is something else?
Ah, the pain of PhantomJS.
Apparently, it turned out that I was using JavaScript's bind function, which is not supported in PhantomJS.
Its absence messed up the state of a global variable (my fault), and that is what caused the test to fail.
But the root cause was using bind.
Solution: use a shim for bind, like this one from https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Function/bind
if (!Function.prototype.bind) {
    Function.prototype.bind = function (oThis) {
        if (typeof this !== "function") {
            // closest thing possible to the ECMAScript 5 internal IsCallable function
            throw new TypeError("Function.prototype.bind - what is trying to be bound is not callable");
        }

        var aArgs = Array.prototype.slice.call(arguments, 1),
            fToBind = this,
            fNOP = function () {},
            fBound = function () {
                return fToBind.apply(this instanceof fNOP && oThis
                                         ? this
                                         : oThis,
                                     aArgs.concat(Array.prototype.slice.call(arguments)));
            };

        fNOP.prototype = this.prototype;
        fBound.prototype = new fNOP();

        return fBound;
    };
}
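A quick sanity check that the shim behaves as expected (a hypothetical greet example):

var greeter = { name: "world" };

function greet(punctuation) {
    return "Hello, " + this.name + punctuation;
}

// with the shim loaded, bind works even where no native implementation exists
var boundGreet = greet.bind(greeter, "!");
console.log(boundGreet()); // "Hello, world!"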
I had exactly the same issue. All you have to do is add a setTimeout call to exit:
// stop after 20 seconds (add this before you request your webpage)
setTimeout(function () { phantom.exit(); }, 20000);

page.open('your url here', function (status) {
    // operations here
});
I have a Documentum dm_method:
create dm_method object
    set object_name = 'xxxxxxxxxxx',
    set method_verb = 'xxx.yyy.Foo',
    set method_type = 'java',
    set launch_async = false,
    set use_method_server = true,
    set run_as_server = true,
    set timeout_min = 60,
    set timeout_max = 600,
    set timeout_default = 500
It is invoked via a dm_job with a period of 600 seconds.
But my method can run for more than 600 seconds (depending on the size of the input data produced by users).
What happens when timeout_max is exceeded for a dm_method implemented in Java?
Does the DFC job manager send Thread.interrupt()?
Or does DFC wait for the job to finish and only log a warning?
I could not find a detailed description in the Documentum documentation.
See the discussion at https://forums.opentext.com/forums/discussion/153860/how-documentum-method-timeout-performed
Actually, it's possible that the Java method will continue running in the JMS after the timeout. However, the Content Server will already have closed the OutputStream where the method can write its response. So you will most likely see errors in the log, and also in the job object if the method was called by a job. Depending on what the method does, it might actually be able to complete whatever it needs to do. However, you should try to set the default timeout to a value that will give your job enough time to complete cleanly.
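If the work can be chunked, one defensive pattern (not Documentum-specific; process, saveCheckpoint, and idsToProcess are hypothetical) is to track elapsed time inside the method and stop cleanly before timeout_default expires:

// Hypothetical sketch: leave a safety margin before timeout_default
// (500 s in the method definition above) so the method finishes cleanly
// instead of being cut off by the Content Server.
final long marginMillis = 30 * 1000L;
final long deadline = System.currentTimeMillis() + (500L * 1000L) - marginMillis;

for (String id : idsToProcess) {          // hypothetical work queue
    if (System.currentTimeMillis() > deadline) {
        saveCheckpoint(id);               // hypothetical: record where to resume next run
        break;
    }
    process(id);                          // hypothetical unit of work
}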