Z3 Command-line Timeout

When using Z3 on the command line with the "-T" switch, is there a way to set the timeout to less than one second?
I know you can set the timeout to be less than that using the API, but for various stupid reasons I've been passing text files containing SMT-LIBv2 scripts to Z3 in a loop (please don't be mad), thinking it would work just as well. I've only just noticed that this approach seems to create a lower bound of one second on timeouts. This slows things down quite a bit if I'm using Z3 to check thousands of short files.
I understand if this is just the way things are, and I accept that what I'm doing isn't sensible when there's already a perfectly good API for Z3.

There are two options:
You can use "soft timeouts". They are less reliable than the -T timeout because soft-timeout expiration is only checked periodically. Nevertheless, the option "smt.soft_timeout=10" would set a timeout of 10 ms (instead of 10 s). You can set these options both from the command line and within the SMT-LIB2 file using (set-option :smt.soft_timeout 10). The tutorial on using tactics/solvers furthermore explains how to use more advanced features (strategies), and you can also control these advanced features through options, such as timeouts, from the textual interface.
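For example, a minimal SMT-LIB2 script that sets the soft timeout in milliseconds before checking (the constraint is just a placeholder):

```smt2
; 10 ms soft timeout, checked periodically during the search
(set-option :smt.soft_timeout 10)
(declare-const x Int)
(assert (> x 0))
(check-sat)
```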
You can load SMT-LIB2 files from the programmatic API. The assertions from the files are stored as a conjunction. You can then call a solver (again from the API) and set the "soft timeout" option on the solver object. There isn't really a reason to use option 2 unless you need to speed up your pipeline or need more than the soft-timeout feature, because that feature is already reasonably well exposed at the SMT-LIB level.
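A rough sketch of option 2 using the Python bindings (requires the z3-solver package; the file name is made up, and the exact timeout option name can vary between Z3 versions):

```python
from z3 import Solver, parse_smt2_file  # requires the z3-solver package

s = Solver()
# Timeout is given in milliseconds here, unlike the command line's -T
# switch, which takes whole seconds.
s.set("timeout", 50)
for a in parse_smt2_file("query.smt2"):  # hypothetical input file
    s.add(a)
print(s.check())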

Why is ram_utilization_factor usage not recommended

I wonder why increasing --ram_utilization_factor is not recommended (from the docs):
This option, which takes an integer argument, specifies what percentage of the system's RAM Bazel should try to use for its subprocesses. This option affects how many processes Bazel will try to run in parallel. The default value is 67. If you run several Bazel builds in parallel, using a lower value for this option may avoid thrashing and thus improve overall throughput. Using a value higher than the default is NOT recommended. Note that Bazel's estimates are very coarse, so the actual RAM usage may be much higher or much lower than specified. Note also that this option does not affect the amount of memory that the Bazel server itself will use.
Since Bazel has no way of knowing how much memory an action/worker uses or will use, --ram_utilization_factor seems to be the only way of setting this up.
That comment is very old and I believe was the result of some trial and error when --ram_utilization_factor was first implemented. The comment was added to make sure that developers would have some memory left over for other applications to run on their machines. As far as I can tell, there is no deeper reason for it.
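For reference, the flag is passed like any other startup-independent build option (the target label and the value 50 here are arbitrary examples):

```sh
# Tell Bazel to budget roughly 50% of system RAM for subprocesses,
# e.g. when several Bazel builds share one machine (default is 67).
bazel build --ram_utilization_factor=50 //some/package:target
```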

What makes Erlang scalable?

I am working on an article describing fundamentals of technologies used by scalable systems. I have worked with Erlang before as a self-learning exercise. I have gone through several articles but have not been able to answer the following questions:
What is in the implementation of Erlang that makes it scalable? What makes it able to run concurrent processes more efficiently than technologies like Java?
What is the relation between functional programming and parallelization? With the declarative syntax of Erlang, do we achieve run-time efficiency?
Does process state not make it heavy? If we have thousands of concurrent users and spawn an equal number of processes as gen_server instances (or any other equivalent pattern), each process maintains its own state. With so many processes, will that not be a drain on RAM?
If a process has to make DB operations and we spawn multiple instances of that process, eventually the DB will become a bottleneck. This happens even if we use traditional models like Apache-PHP. Almost every business application needs DB access. What then do we gain from using Erlang?
How does process restart help? A process crashes when something is wrong in its logic or in its data. OTP allows you to restart a process, but if neither the logic nor the data changes, why would the process not just crash again, and keep crashing forever?
Most articles sing Erlang's praises, citing its use at Facebook and WhatsApp. I salute Erlang for being scalable, but I also want to justify its scalability technically.
Even if I find answers to these queries on an existing link, that will help.
In short:
It's immutable. There are no mutable variables, only terms: tuples, lists, and atoms. Execution can be broken at any point, and the model is effectively transactional.
Processes are even lighter-weight than .NET threads, and they are isolated from one another.
It's made for communication. Millions of connections? Fully asynchronous? Maximum thread safety? A big cross-platform environment built for only one purpose, to scale and communicate? That's Ericsson's language, the first in this sphere.
You can choose imitators like F#, Scala/Akka, or Haskell; they try to copy features from Erlang, but only Erlang was born from, and for, a single purpose: telecom.
Answers to your other questions can be found at erlang.org, and I suggest reading the documentation. Erlang was built for particular aims, so it's not for every task; if you're asking about things like PHP, Erlang will not be your language.
I'm no Erlang developer (yet), but from what I have read, one of the features that makes it very scalable is that Erlang has its own lightweight processes that use message passing to communicate with each other. Because of this, there is no shared state and no locking, which you do get with, for example, a multi-threaded Java application.
Another difference from Java is that the Erlang VM garbage-collects each little process individually, so collections are small and quick, whereas Java does garbage collection per VM and pauses the whole heap at once.
If you run into bottlenecks from database connections, you could start by using a database-pooling app in front of, say, a replicated PostgreSQL cluster, or, if you still have bottlenecks, use a multi-replicated NoSQL setup with Mnesia, Riak, or CouchDB.
I think process restarts can be very useful when you are experiencing rare bugs that only appear randomly and only when specific criteria are met. Bugs that make the application crash again as soon as you restart it should ideally be fixed, or contained with a circuit breaker so the failure does not spread further.
Here is one way process restart helps: by not having to deal with every possible error case. Say you have a program that divides numbers, and someone enters a zero to divide by. Instead of checking for that possible error (and tons more), just code the "happy case" and let the process crash when they enter 3/0. It simply restarts, and they can figure out what they did wrong.
You can extend this to any number of situations (attempting to read from a non-existent file because the user misspelled it, etc.).
The big reason for process restart being valuable is that not every error happens every time, and checking that it worked is verbose.
Error handling is typically verbose, so writing it interspersed with the logic that does the task can make the code harder to understand. Moving that logic out of the task lets you distinguish clearly between "doing things" code and "it broke" code. You just let the thing that had a problem fail, and a supervising party handles it as needed.
Since most errors don't mean that the entire program must stop, only that that particular thing isn't working right, by just restarting the part that broke, you can keep operating in a state of degraded functionality, instead of being down, while you repair the problem.
It should also be noted that the failure recovery is bounded. You have to lay out the limits for how much failure in a certain period of time is too much. If you exceed that limit, the failure propagates to another level of supervision. Each restart includes doing any needed process initialization, which is sometimes enough to fix the problem. For example, in dev, I've accidentally deleted a database file associated with a process. The crashes cascaded up to the level where the file was first created, at which point the problem rectified itself, and everything carried on.
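The divide-by-zero example above can be sketched as a minimal supervised worker (module and function names here are made up for illustration; a one_for_one supervisor would restart it after the crash):

```erlang
-module(divider).
-behaviour(gen_server).
-export([start_link/0, divide/2, init/1, handle_call/3, handle_cast/2]).

start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% The "happy case" only: no check for a zero divisor.
divide(A, B) -> gen_server:call(?MODULE, {divide, A, B}).

init([]) -> {ok, #{}}.

handle_call({divide, A, B}, _From, State) ->
    {reply, A div B, State};   %% crashes with badarith when B =:= 0
handle_call(_Other, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) -> {noreply, State}.
```

When a caller sends a zero divisor, only this process dies; its supervisor restarts it with fresh state while the rest of the system keeps running.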

Checkpoint mechanism for Z3

I am planning to do some work with the Z3 SMT solver from Microsoft Research that will run on a compute server with an execution time limit. I expect that the job will exceed this limit. The recommended policy for this computing center is to use "checkpoints" and to invoke a series of jobs, each of which picks up the checkpoint from the previous job and continues working. In this way, no process runs for more than the execution time limit, so other users have a chance to run their jobs too, but the total amount of compute time used can exceed the timeout for a single job.
Does Z3 have support for reading and writing checkpoints?
By "checkpoint", I mean a file that serializes (some part of) the internal state of the Z3 solver, such that if the Z3 process writes a checkpoint and exits, and then a second Z3 process is started that reads the checkpoint file, after reading it back the state of the new Z3 process is identical to the state of the previous process (so the solver doesn't start again, but continues solving from where it left off).
As an alternative, instead of checkpointing the entire solver, is it possible to read the database of learned clauses (or other inference databases built internally by Z3)? This could make it possible to do a form of checkpointing by augmenting the input file with learned clauses, although it might not be as efficient as a "real" checkpoint of the entire internal state.
No, Z3 does not have built-in facilities that achieve all those goals. Z3 goal and solver objects can be serialized to a string in SMT2 format via the Z3_goal_to_string and Z3_solver_to_string functions; these could be used for checkpointing, but they will not save any learned clauses that weren't in the goal or the solver before the last search was started.
In case the main goal is to re-start an intricate interaction, perhaps a Z3 interaction log could be helpful (see Z3_open_log). These logs can be replayed, but again, learned clauses etc. are not saved.
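A rough sketch of the serialization workaround in the Python bindings (requires the z3-solver package; file names are made up). Note that this restores only the asserted formulas, not learned clauses, so the search itself starts over:

```python
from z3 import Int, Solver, parse_smt2_string  # requires z3-solver

s = Solver()
x = Int("x")
s.add(x > 0)

# "Checkpoint": serialize the solver's assertions as SMT-LIB2 text.
with open("checkpoint.smt2", "w") as f:
    f.write(s.to_smt2())

# Later, in a fresh process: restore the formulas and continue.
s2 = Solver()
with open("checkpoint.smt2") as f:
    s2.add(parse_smt2_string(f.read()))
print(s2.check())
```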

Grails/security: on the motivation for implementing a NonAuthFilter

Following up on this question
Why would one favour implementing a non-authenticating filter on a url, over allowing it under staticRules w/permitAll? Is there any difference, from a security standpoint?
They seem to me as achieving the same.
Thanks
There is a non-zero overhead from doing security checks. In most cases the cost is small, but depending on what gets checked it could be more resource-intensive, and even if each individual request's cost is small, the cumulative cost can be significant, especially when you consider that in many applications the security checks run for every request, including the favicon and all CSS, JS, and image files.
There's also a cost to repopulating the SecurityContext with an Authentication; often this is configured from data stored in the HTTP session, but it could be more involved and even require database access or a remote call.
There's no direct support for this in the plugin and you need to use a workaround like in the link you showed, but I'm planning on adding support for 2.0 final; see this JIRA issue.

Learning Erlang? Speedbump thread: common, small problems

I just want to know all the small problems that got between you and your final solution when you were new to Erlang.
For example, here are the first speedbumps I had:
Use controlling_process(Socket, Pid) if you spawn off multiple processes, so the right packet goes to the right process.
Going to start talking to another node? Remember to run net_adm:ping('car@bsd-server'). in the shell first, or else no communication will get through.
timer:sleep(10) if you want to do nothing. Always useful when debugging.
Learning to browse the standard documentation
Once you learn how the OTP documentation is organised it becomes much easier to find what you're looking for (you tend to need to learn which applications provide which modules or kinds of modules).
Also just browsing the documentation for applications is often quite rewarding - I've discovered lots of really useful code this way - sys, dbg, toolbar, etc.
The difference between shell erlang and module erlang
Shell erlang is a slightly different dialect to module erlang. You can't define module functions (only funs), you need to load record definitions in order to work with records (rr/1) and so on. Learning how to write erlang code in terms of anonymous functions is somewhat tricky, but is essential for working on production systems with a remote shell.
Learning the interaction between the shell and {start,spawn}_link'ed processes - when you run some shell code that crashes (raises an exception), the shell process exits and will broadcast exit signals to anything you linked to. This will in turn shut down that new gen_server you're working on. ("Why does my server process keep disappearing?")
The difference between erlang expressions and guard expressions
Guard expressions (when clauses) are not Erlang expressions. They may look similar, but they're quite different. Guards cannot call arbitrary erlang functions, only guard functions (length/1, the type tests, element/2 and a few others specified in the OTP documentation). Guards succeed or fail and don't have side effects. Erlang expressions on the other hand can do what they like.
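A small illustration of the difference (the function is made up): the first clause below uses only guard-safe BIFs, while calling an ordinary function in a guard would not compile.

```erlang
%% Allowed: is_list/1 and length/1 are guard BIFs.
first([H|_] = L) when is_list(L), length(L) > 0 -> H;
first(_) -> undefined.

%% Not allowed: my_check/1 is an ordinary function, so a clause like
%%   first(L) when my_check(L) -> ...
%% is rejected by the compiler ("illegal guard expression").
```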
Code loading
Working out when and how code upgrades work, the incantation to get a gen_server to upgrade to the latest version of a callback module (code:load_file(Mod), sys:suspend(Pid), sys:change_code(Pid, Mod, undefined, undefined), sys:resume(Pid).).
The code server path (code:get_path/0) - I can't count how many times I ran into undefined function errors that turned out to be me forgetting to add an ebin directory to the code search path.
Building erlang code
Working out a useful combination of emake (make:all/0 and erl -make) and GNU make took quite a long time (about three years so far :).
My current favourite makefiles can be seen at http://github.com/archaelus/esmtp/tree/master
Erlang distribution
Getting node names, DNS, cookies, and all the rest right in order to be able to net_adm:ping/1 the other node. This takes practice.
Remote shell IO intricacies
Remembering to pass group_leader() to io:format calls run on the remote node, so that the output appears in your shell rather than mysteriously disappearing. (I think the SASL report browser rb still has a problem with sending some of its output to the wrong node when used over a remote shell connection.)
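A sketch of the idea (the node name is made up): capture your shell's group leader before spawning, then hand it to io:format on the remote side.

```erlang
GL = erlang:group_leader(),
spawn('other@host', fun() ->
    %% Output is routed back to the shell that captured GL, instead
    %% of going to the remote node's default group leader.
    io:format(GL, "hello from ~p~n", [node()])
end).
```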
Integrating erlc into MSVC 6, so I could use the editor and see the results in the output window.
I created a tool with:
command: the path to erlc
arguments: +debug_info $(FileName)$(FileExt)
initial directory: $(FileDir)
and checked "Use Output Window".
Debugging is hard. All I know to do is stick calls to error_logger:info_msg in my code.
Docs have been spotty: they're correct, but very, very terse.
This is my own fault, but: I started coding before I understood eunit, so a lot of my code is harder to test than it should be.
controlling_process()
Use controlling_process(Socket, Pid) if you spawn off multiple processes, so the right packet goes to the right process.
net_adm:ping()
Going to start talking to another node? Remember to run net_adm:ping('car@bsd-server'). in the shell first, or else no communication will get through.
timer:sleep()
Pause for X ms.
The thing that took me the most time to get my head around was just the idea of structuring my code entirely around function calls and message passing. The rest of it either just fell out from there (spawning, remote nodes) or felt like the usual stuff you've got to learn in any new language (syntax, stdlib).
