According to the syntax of the ie.waitforcomplete command, it takes its input in milliseconds, but while I was running this code, it did not wait for 10 seconds as I expected it to.
ie.open google.com nowait true
window ✱internet✱
ie.seturl g1ant.com
ie.waitforcomplete 1000
ie.seturl duckduckgo.com
As the manual says, ie.waitforcomplete "suspends script execution until a webpage is loaded". Its first argument is a timeout, which "specifies time in milliseconds for G1ANT.Robot to wait for the command to be executed". This simply means that when you have:
ie.waitforcomplete 1000
it will wait a maximum of 1000 milliseconds (1 second) for the webpage to load; if the page fails to load within that time, an exception will occur.
And the following line means that it will wait a maximum of 10 seconds:
ie.waitforcomplete 10000
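For instance, reusing the commands from the question, a sketch that gives a slow page up to 10 seconds to finish loading would be:
ie.seturl duckduckgo.com
ie.waitforcomplete 10000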
I have around 1000 targets that are probed using HTTP.
job="http_2xx", env="prod", instance="x.x.x.x"
job="http_2xx", env="test", instance="y.y.y.y"
job="http_2xx", env="dev", instance="z.z.z.z"
For these targets, I want to know:
Rate of failure by env in last 10 minutes.
Increase in rate of failure by env in last 10 minutes.
I'm also curious what the following queries do:
sum(increase(probe_success{job="http_2xx"}[10m]))
rate(probe_success{job="http_2xx", env="prod"}[5m]) * 100
The closest I have come is the following, which finds the operational percentage over the last 10 minutes:
avg(avg_over_time(probe_success{job="http_2xx", env="prod"}[10m]) * 100)
Rate of failure by env in the last 10 minutes: the easiest way to get it is:
sum(rate(probe_success{job="http_2xx"}[10m]) * 100) by (env)
This will return the percentage of successful probes, which you can invert by appending * (-1) + 100.
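Spelled out, that inverted query (failure percentage rather than success percentage) might look like this, keeping the same job selector:
sum(rate(probe_success{job="http_2xx"}[10m]) * 100) by (env) * (-1) + 100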
Calculating the rate over 10m and the increase of that rate over 10m seems redundant; adding an increase function to the above query didn't work for me. You can replace the rate function with increase if you want to.
Your first query was pretty close: it will calculate the increase of successful probes over a 10m period. You can make it show the increase of failed probes by adding == 0 and summing by the "env" label:
sum(increase(probe_success{job="http_2xx"} == 0 [10m])) by (env)
Your second query will return the percentage of successful requests over 5m for the prod environment.
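For comparison, following the questioner's own avg_over_time approach, one way to express the failure percentage per env over the last 10 minutes might be:
(1 - avg by (env) (avg_over_time(probe_success{job="http_2xx"}[10m]))) * 100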
I'm doing a parametric sweep of some Abaqus simulations, so I'm using the waitForCompletion() function to prevent the script from moving on prematurely. However, occasionally the combination of parameters causes the simulation to hang on one or two of the parameters in the sweep for something like half an hour to an hour, whereas most parameter combinations only take ~10 minutes. I don't need all the data points, so I'd rather sacrifice one or two results to power through more simulations in that time. Thus I tried to use waitForCompletion(timeout) as documented here. But it doesn't work: it ends up functioning just like an indefinite waitForCompletion, regardless of how low I set the wait time. I am using Abaqus 2017, and I was wondering if anyone else has gotten this function to work and, if so, how?
While I could use a workaround like adding a custom timeout function and using the kill() function on the job, I would prefer to use the built-in functionality of the Abaqus API, so any help is much appreciated!
It seems that starting from a certain version the optional timeOut argument was removed from this method: compare the "Scripting Reference Manual" entries in the documentation for v6.7 and v6.14.
You have a few options:
From the Abaqus API: checking whether the my_abaqus_script.023 file still exists during the simulation:
import os, time

timeOut = 600
total_time = 60
time.sleep(60)
# wait until the job is completed (the .023 file disappears when the job ends)
while os.path.isfile('my_abaqus_script.023'):
    if total_time > timeOut:
        my_job.kill()   # my_job is the Job object created earlier, e.g. mdb.jobs['my_abaqus_script']
    total_time += 60
    time.sleep(60)
From outside: launch the job using the subprocess module.
Note: don't use the interactive keyword in your command, because it blocks the execution of the script while the simulation process is active.
import subprocess, os, time

my_cmd = 'abaqus job=my_abaqus_script analysis cpus=1'
log_file = open('my_study.log', 'w')
err_file = open('my_study.err', 'w')
proc = subprocess.Popen(
    my_cmd,
    cwd=my_working_dir,   # path to the working directory
    stdout=log_file,      # Popen expects file objects (or descriptors) here, not file names
    stderr=err_file,
    shell=True
)
and check the return code of the child process using poll() (see also returncode):
timeOut = 600
total_time = 60
time.sleep(60)
# wait until the job is completed
while proc.poll() is None:
    if total_time > timeOut:
        proc.terminate()
    total_time += 60
    time.sleep(60)
or wait until the timeOut is reached using wait():
timeOut = 600
try:
    proc.wait(timeOut)
except subprocess.TimeoutExpired:
    print('TimeOut reached!')
Note: I know that the terminate() and wait() methods should work in theory, but I haven't tried this solution myself, so there may be some additional complications (like having to look for all child processes created by Abaqus using psutil.Process(proc.pid)).
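If that turns out to be necessary, a rough, untested sketch of such a clean-up with psutil might look like this (the helper name is made up):
import psutil

def kill_process_tree(pid):
    # Hypothetical helper: terminate the launched process and any children
    # it spawned (e.g. the Abaqus solver processes).
    parent = psutil.Process(pid)
    for child in parent.children(recursive=True):
        child.terminate()
    parent.terminate()

# e.g. call kill_process_tree(proc.pid) instead of proc.terminate() after the timeout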
I'm taking a PCollection of sessions and trying to get the average session duration per channel/connection. My early triggers are firing for each window produced: with 60-minute windows sliding every 1 minute, an early trigger fires 60 times. Looking at the timestamps on the outputs, there's a window every minute for 60 minutes into the future. I'd like the trigger to fire once for the most recent window, so that every 10 seconds I have an average of session durations for the last 60 minutes.
I've used sliding windows before and gotten the results I expected. By mixing sliding and session windows, I'm somehow causing this.
Let me paint you a picture of my pipeline:
First, I'm creating sessions based on active users:
.apply("Add Window Sessions",
Window.<KV<String, String>> into(Sessions.withGapDuration(Duration.standardMinutes(60)))
.withOnTimeBehavior(Window.OnTimeBehavior.FIRE_ALWAYS)
.triggering(
AfterWatermark.pastEndOfWindow()
.withEarlyFirings(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))))
.withAllowedLateness(Duration.ZERO)
.discardingFiredPanes()
)
.apply("Group Sessions", Latest.perKey())
Steps after this create a session object, compute the session duration, etc. This ends with a PCollection<Session>.
From the PCollection<Session> I create a KV of connection, duration.
Then I apply the sliding window and then the mean.
.apply("Apply Rolling Minute Window",
Window. < KV < String, Integer >> into(
SlidingWindows
.of(Duration.standardMinutes(60))
.every(Duration.standardMinutes(1)))
.triggering(
Repeatedly.forever(
AfterWatermark.pastEndOfWindow()
.withEarlyFirings(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10)))
)
)
.withAllowedLateness(Duration.standardMinutes(1))
.discardingFiredPanes()
)
.apply("Get Average", Mean.perKey())
It's at this point where I'm seeing issues. What I'd like to see is a single output per key with the average duration. What I'm actually seeing is 60 outputs for the same key for each minute into the next 60 minutes.
With this log statement in a DoFn, where c is the ProcessContext:
LOG.info(c.pane().getTiming() + " " + c.timestamp());
I get this output 60 times with timestamps 60 minutes into the future:
EARLY 2017-12-17T20:41:59.999Z
EARLY 2017-12-17T20:43:59.999Z
EARLY 2017-12-17T20:56:59.999Z
(cont)
The log was printed at Dec 17, 2017 19:35:19.
The number of outputs is always window size / slide duration. So with 60-minute windows every 5 minutes, I would get 12 outputs.
I think I've made sense of this.
Sliding windows create a new window for every .every() interval. The early-firing trigger applies to each of those windows, so getting multiple firings makes sense.
In order to fit my use case and only output the "current window", I'm checking c.pane().isFirst() == true before outputting results and adjusting the .every() to control the frequency.
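A minimal sketch of that isFirst() check, assuming the averages come out of Mean.perKey() as KV<String, Double> (the transform name is made up):
.apply("Emit Current Window Only", ParDo.of(new DoFn<KV<String, Double>, KV<String, Double>>() {
    @ProcessElement
    public void processElement(ProcessContext c) {
        // Only emit the first pane fired for each window, i.e. the "current" one.
        if (c.pane().isFirst()) {
            c.output(c.element());
        }
    }
}))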
In Fluentd, regarding retry_limit and disable_retry_limit (http://docs.fluentd.org/v0.12/articles/output-plugin-overview):
If the limit is reached, buffered data is discarded and the retry interval is reset to its initial value (retry_wait).
In my setup I have the following configuration for output:
buffer_queue_limit 200
buffer_chunk_limit 1m
flush_interval 3s
buffer_queue_full_action drop_oldest_chunk
max_retry_wait 1h
disable_retry_limit true
So we will keep retrying to flush the buffer, with a max_retry_wait of 1 hour, until the buffer queue is full, at which point the oldest chunk is dropped and we move on to the next one.
With disable_retry_limit set to true, this means we drop the oldest chunk only when the buffer queue is full (buffer_queue_full_action drop_oldest_chunk).
My question is: when the buffer queue drops the oldest chunk, is retry_wait (default 1s, incrementing with each retry) reset to its initial value for the next chunk due to be flushed (giving the same behavior as when retry_limit is reached)?
Tested on a local machine: Fluentd does not reset retry_wait to its initial value when a chunk is dropped.
Let's say I have a Tcl script that should normally execute in less than a minute. How can I make sure that the script NEVER takes more than 'x' minutes/seconds to execute, and that it is stopped if it does?
For example, if the script has taken more than 100 seconds, I should be able to automatically switch control to a cleanup function that ends the script gracefully, so that I keep all the data gathered so far while also ensuring it doesn't take too long or get stuck indefinitely.
I'm not sure if this can be done in Tcl; any help or pointers would be welcome.
You could use interp limit with a child interpreter.
Note that this will throw an uncatchable error; if you want to do some cleanup, you have to remove the limit in the parent interp.
set interp [interp create]
# initialize the interp
interp eval $interp {
    source somestuff.tcl
}
# Add the limit. From now on you have 60 seconds, or an error will be thrown.
interp limit $interp time -seconds [clock seconds] -milliseconds 60000
set errorcode [catch {interp eval $interp {DoExpensiveStuff}} res opts]
# Remove the limit so you can clean up the mess if needed.
interp limit $interp time -seconds {}
if {$errorcode} {
    # Do some cleanup here
}
# Delete the interp, or reuse it?
interp delete $interp
# And what shall be done with the error? Throw it.
return -options $opts $res
Resource limits are the best bet with Tcl, but they are not bullet-proof. Tcl cannot (and will not) abort C procedures, and there are ways to make the Tcl core itself do a lot of hard work that a limit cannot interrupt.
There must be a loop that you're worried might take more than 100 seconds, yes? Save clock seconds (current time) before you enter the loop, and check the time again at the end of each iteration to see if more than 100 seconds have elapsed.
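A minimal sketch of that check, where workItems is just a stand-in for whatever the script iterates over:
set start [clock seconds]
foreach item $workItems {
    # ... process $item (placeholder for the real work) ...
    if {[clock seconds] - $start > 100} {
        break   ;# more than 100 seconds elapsed: stop and run the cleanup code
    }
}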
If for some reason that's not possible, you can try devising something using after—that is, kick off a timer to a callback that sets (or unsets) some global variable that your executing code is aware of—so that on detection, it can attempt to exit.
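And a rough sketch of the after-based variant; note that the callback only fires when the event loop gets a chance to run, hence the update call (the flag and list names are made up):
set ::timedOut 0
after 100000 {set ::timedOut 1}   ;# set the flag after 100 seconds

foreach item $workItems {
    # ... process $item (placeholder for the real work) ...
    update                        ;# service the event loop so the after callback can fire
    if {$::timedOut} {
        break                     ;# timer fired: stop and run the cleanup code
    }
}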