I am getting strange statistics results when running Z3 3.1 with the -st command-line option. If you press Ctrl-C, Z3 reports total_time < time. Otherwise, if you wait until Z3 finishes, total_time > time.
What do "total-time" and "time" measure?
Is the difference described above a (minor) bug?
Thanks!
This is a bug in Z3 for Linux (versions 3.0 and 3.1). The bug does not affect the Windows version. The fix will be available in the next release (Z3 3.2). The timer used to track time is incorrect.
BTW, total-time measures the total execution time, and time only the time consumed by the last check-sat command. So, we must have that total-time >= time.
Remark: this answer has been updated using the feedback provided by Swen Jacobs.
When I try to set a timeout in Lua like this:
function wait(seconds)
  local start = os.time()
  repeat until os.time() > start + seconds
end
it is too inconsistent. Is there a more precise way to set a timeout that will consistently wait for the amount of time requested?
Without external libraries, there are basically only two ways in Lua to get higher accuracy:
use os.clock (see the sketch right after this list)
run a loop and find out how many iterations take how much time
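For the first option, a minimal busy-wait built on os.clock could look like this; note that it keeps one core fully busy while waiting:
-- busy-wait with sub-second resolution
-- os.clock() measures CPU time on most platforms; since the loop keeps the
-- CPU busy, elapsed CPU time roughly tracks wall time here
function wait(seconds)
  local start = os.clock()
  while os.clock() - start < seconds do end
end

wait(0.25)  -- waits roughly a quarter of a second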
As Nicol pointed out, both will be inconsistent and unreliable on a non-real-time OS like Windows.
If your OS for whatever reason decides to clock your CPU down or to do some other crap in the background, you're doomed.
So think about your application and decide whether you should do it on a non-real-time OS.
On a machine where you can compile your own stuff, there's a way to build sleep and msleep functions that don't eat up any CPU time...
Look: http://www.troubleshooters.com/codecorn/lua/lua_lua_calls_c.htm
...for: Make an msleep() Function
I am having trouble using emmeans to evaluate the mean (or weighted mean) of all the predictions. For example, a mixed model:
library(emmeans)
library(lme4)
m1 <- lmer(mpg ~ 1 + wt + (1 | cyl), data = mtcars)
Estimating the fixed effect "wt" is successful:
emmeans(m1,specs="wt")
wt emmean SE df lower.CL upper.CL
3.22 20.2 1.71 1.83 12.1 28.3
However, to calculate the mean of predictions, the following previously worked (~ 12 months ago), but now fails:
emmeans(m1,specs="1")
NOTE: Results may be misleading due to involvement in interactions
Error in `[[<-.data.frame`(`*tmp*`, ".wgt.", value = 1) :
replacement has 1 row, data has 0
The same error occurs for simple linear models. Many thanks for any help.
I thought I was using the current version of emmeans (1.4.8) when I had the troubles described in the question. However, I may actually have been using emmeans 1.4.6 (please see comment by Russ Lenth below). I reverted back to emmeans v1.4.3 and the code worked. I then updated to the current version of emmeans (1.4.8) and the code continued to work. Most likely the cause was my use of emmeans 1.4.6, which had a known bug. Please see this github entry for more information.
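One thing that would have helped me spot this earlier is checking which version of the package is actually loaded, e.g. with packageVersion():
packageVersion("emmeans")   # should report 1.4.8 (or later) after updating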
I have an instance that could be solved very efficiently by an old version of Z3 (version 2.18): it returns SAT in a few seconds.
However, when I try it on the current version of Z3 (version 4.3.1), it does not return any result after 10 minutes.
Here are some details about the experiment. Could anybody give some advice?
there are 4000 Bool variables and 200 Int variables
all the constraints are in propositional logic, with comparisons between integers like a < b
platform: openSUSE Linux 12.3 on a ThinkPad T400s
Z3 v2.18 was downloaded as a Linux binary last year (I cannot find the link now)
Z3 v4.3.1 was downloaded as source code, and I compiled it on my laptop using the default settings
There are about 50,000 lines in the SMT file, so I cannot post it here. I would be happy to send the file by email if anybody is interested.
Thanks.
Z3 is a portfolio of solvers. The default configuration changes from version to version.
Progress is never monotonic. That is, a new version may solve more problems, but may also be slower on, or even fail on, some problems.
Remark: the author has sent his benchmark by email to the Z3 authors.
In the “work-in-progress” branch, I managed to reproduce the Z3 2.19 performance by using
(set-option :smt.auto-config false)
Here are instructions on how to download the “work-in-progress” branch.
To get the same behavior, we also have to replace
(check-sat)
with
(check-sat-using smt)
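Putting both changes together, the modified benchmark has roughly this shape (the middle part stands for the original declarations and assertions):
(set-option :smt.auto-config false)
; ... declarations and assertions from the original file ...
(check-sat-using smt)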
BTW, in the official release, we have to use
(set-option :auto-config false)
instead of
(set-option :smt.auto-config false)
As said in the manual (http://www.erlang.org/erldoc?q=erlang:now):
If you do not need the return value to be unique and monotonically increasing, use os:timestamp/0 instead to avoid some overhead.
So os:timestamp/0 should be faster than erlang:now/0.
But when I tested on my PC with timer:tc/3, the time spent (in microseconds) for 10,000,000 calls was:
erlang:now 951000
os:timestamp 1365000
Why is erlang:now/0 faster than os:timestamp/0?
My OS: Windows 7 x64, erlang version: R16B01.
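For reference, the benchmark was roughly of this shape (module and helper names are just placeholders):
-module(ts_bench).
-export([run/0, loop/2]).

%% call Fun N times
loop(0, _Fun) -> ok;
loop(N, Fun) -> Fun(), loop(N - 1, Fun).

run() ->
    N = 10000000,
    {NowUs, ok} = timer:tc(?MODULE, loop, [N, fun erlang:now/0]),
    {TsUs, ok}  = timer:tc(?MODULE, loop, [N, fun os:timestamp/0]),
    io:format("erlang:now   ~p~nos:timestamp ~p~n", [NowUs, TsUs]).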
------------------edit-----------------
I wrote another test that runs the calls in parallel (100 threads); os:timestamp/0 performed better in parallel. Here are the data:
----- single thread ------
erlang:now 95000
os:timestamp 147000
----- multi thread ------
erlang:now 333000
os:timestamp 91000
So I think the "overhead" refers to the parallel case.
I've always thought that the 'some overhead' comment was darkly amusing. The way erlang:now/0 achieves its trick of providing guaranteed unique, monotonically increasing values is to take out a per-VM global lock. In a serial test you won't notice anything, but when you've got a lot of parallel code running, you may.
The function os:timestamp/0 doesn't take out a lock and may return the same value in two processes.
This was recently discussed on the erlang-questions mailing list ("erlang:now() vs os:timestamp()" on 3rd April 2013), where two interesting results emerged:
erlang:now seems to be faster than os:timestamp in interpreted code (as opposed to compiled code, where os:timestamp is faster).
If you benchmark them, you should measure the time taken using os:timestamp instead of erlang:now, since erlang:now forces the clock to advance.
Apart from the excellent answer by troutwine, the reason why erlang:now() is faster in a serial test is probably that it avoids the kernel: you may be calling it faster than time progresses, and then you are in a situation where you don't hit the kernel as often.
But note that your test is deceiving until you add more than a single core. Then, as troutwine writes, os:timestamp() will outperform erlang:now().
Also note you are on a weak platform, namely Windows. This usually affects performance in non-trivial ways.
I'm a non-programmer trying to assess the time spent in (OpenCV) functions. We have an A/D converter which comes with a counter that is able to count external signals (e.g. from a function generator) with a frequency of 1 MHz, i.e. 1 µs resolution. The current counter value can be queried with a function cbIn32(..., unsigned long *pointertovalue).
So my idea was to query the counter value before and after calling the function of interest and then calculate the difference. However, doubts came up when I calculated the difference without a function call in between, which revealed relatively high fluctuations (values between 80 and 400 µs or so). I wondered whether calculating the average time for calling cbIn32() (approx. 180 µs) and subtracting this from the putative time spent in the function of interest is a valid solution.
So my first two questions:
Is that approach generally feasible or useless?
Where do the fluctuations come from?
Alternatively, we tried using getTickCount(), which seemed to deliver reasonable values. But checking forums revealed that it has a low resolution of about 10 ms, which would be unsatisfactory (100 µs resolution would be appreciated). However, the values we got were in the sub-ms range.
This brings me to the next questions:
How can the time assessed for a function with getTickCount() be in the microseconds range, when the resolution is around 10 ms?
Should I trust the obtained values or not?
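For reference, the measurement was roughly of this shape, using OpenCV's getTickCount()/getTickFrequency() pair (simplified):
#include <iostream>
#include <opencv2/core/core.hpp>

int main() {
    double t = (double)cv::getTickCount();
    // ... call the function of interest here ...
    t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();  // seconds
    std::cout << t * 1000.0 << " ms" << std::endl;
    return 0;
}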
I also tried it with gprof, but it gave me "no time accumulated", although I am sure that the time spent in a function containing OpenCV-related calls is at least a few milliseconds. I even tried rebuilding OpenCV with ENABLE_PROFILING=ON, but got the same result. I read somewhere that you need to build static OpenCV libraries to enable profiling, but I am not sure if this would improve the situation. So the question here is:
What do I have to do so that gprof also "sees" opencv functions?
The next alternative would be the QueryPerformanceCounter() function of the WinAPI. I don't know how to use it, but I would fight my way through if you recommend it. Questions about that approach:
Will it be problematic because of multiple cores?
If yes, is there an "easy" way to handle that problem?
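From what I gather from the documentation, the basic pattern would be something like this (I have not actually tried it yet, so take it as a sketch):
#include <windows.h>
#include <iostream>

int main() {
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);
    // ... call the function of interest here ...
    QueryPerformanceCounter(&stop);
    double us = (stop.QuadPart - start.QuadPart) * 1000000.0 / (double)freq.QuadPart;
    std::cout << us << " us" << std::endl;
    return 0;
}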
I also tried it with verysleepy, but it somehow exits too early (it worked fine with other .exe files).
Newbie-friendly answers would be very, very appreciated. My goal is to find the easiest approach with the highest precision. I'm working on Win7 64-bit, using Eclipse with MinGW.
Thx for your help...